I haven’t tried RustRover yet, but based on having used their other IDEs for ages I’d say it comes down to a mix of pricing and UI.
Generally the language-specific IDEs are extremely tailored to the language. IDEA itself still at times feels largely oriented towards the JVM ecosystem, and using other languages can seem a bit clunky in terms of how you configure the toolchain and other language-specific things.
They cost less than IDEA, but if you buy a few of the IDEs you are better off buying the subscription for all products. I generally live in a terminal, but for certain languages/tasks I still prefer a full IDE and the all products subscription definitely has been worth it for me.
For the other JetBrains IDEs, the difference is mostly UI and pricing.
The pricing part is self-evident: WebStorm and PyCharm are much more affordable than IntelliJ IDEA Ultimate.
The UI part is more subtle. IntelliJ mostly feels like it is geared toward Java/Kotlin development, and using it for Python or JavaScript means that you will carry that baggage in your projects. While never a PITA, you always end up feeling that other languages don’t have the same focus as Java. For example, you always seem to have an option for specifying a JDK, even though it makes no sense for your project. The project structure window is very Java-centric. Etc.
By having a dedicated IDE, you can remove those parts that exist to accommodate Java, and sell it cheaper.
I wasn’t there, but I imagine the main explanation here is business and technical historical reasons — IntelliJ IDEA started as a Java IDE, and PyCharm came after IDEA was already successful. Technically, I would imagine it’s much easier to “fork” an IDE for X and turn it into an IDE for Y than to make an IDE for both X and Y: refactors which change cardinality from singular to plural are painful. And, from the business perspective, if you have a wildly successful product, it might be scary to branch that directly into a mostly orthogonal market niche; it’s much easier to experiment with a separate brand.
Today, I imagine having separate products is still great from the business perspective — you can clearly track which languages people buy most, you could price individual products differently, and you could have an “all products” pack as well.
For the user, it can be a rather significant negative. If you hop between languages, and use more esoteric ones, the JetBrains model is fairly inconvenient. That’s the reason I still use VS Code, although I much prefer the IJ platform.
OTOH, many users work overwhelmingly with a single language (or even a single project). In that context, having a GUI that is tailored out-of-the-box for exactly what you do is a benefit. For every language, there are certain aspects that you want to show in the GUI by default, but if you do this for every language, you run out of pixels (and of the user’s attention before that). One great example here comes from the early days of IntelliJ Rust, when I added initial support for CLion. Back then, when you opened a Rust project in CLion, everything worked, but there was also this giant panel in the UI warning you that “CMakeLists.txt not found”. This makes total sense for CLion — it requires CMake to work, and so must warn the user pretty aggressively if it can’t make heads or tails of a C++ project. But of course there’s no CMake in Rust projects!
Unclear. It does make sense for Java since LSP support for that language is abysmal, and IntelliJ works wonders. But rust-analyzer is a first-class LSP server that plugs into virtually anything. Maybe they are catering to their existing CLion user base.
There is a reason why I have an iron rule for managing Ubuntu servers: never run a dist-upgrade. It was a painful lesson to learn: it fails more often than it works. If you need to go to another release, throw away the old server and set up a new, clean one. Bonus points for validating that your procedure to set up the server from a clean Ubuntu image works.
I’ve managed ~200 Ubuntu LTS servers since 2012 and have never had a dist-upgrade fail unless it exposed an underlying hardware failure that had been lying dormant, which appears to have been the problem here.
It really depends on what you have installed and how customized your system is. In essence, it’s a lottery and you seem to have been winning a lot, congratulations. However, there is a myriad of things that can go wrong:
APT index or packages are broken (especially if you use custom PPAs)
Old configuration files confuse new services or binaries
Dependencies become incompatible (e.g. too new libraries, output of binaries changes format or semantics, etc.)
Drivers become incompatible
Etc.
Life is better if everything is dockerized and your Ubuntu is just vanilla plus a docker installation.
It depends on how old your Debian was. I tried to upgrade an old Debian MIPS system that I inherited and discovered that the new packages required a newer version of apt. The newer version of apt needed a newer version of glibc. The machine became unbootable trying to get out of that mess. I believe newer versions of Debian ship a statically linked version of apt to avoid this problem.
Out of curiosity, did this happen during an N+1 upgrade, or was it a situation where the system went directly from several-versions-old to the most recent?
This whole thread feels like an advert for FreeBSD. When you run freebsd-update (to update the base system), it creates a new ZFS boot environment (a snapshot of the root filesystem). No matter how badly it goes wrong, you can always revert to the old one (as long as you can connect to the console). After the update, you delete the old one. If it’s a major update, the new version will default to installing the ABI compat components and so you can update packages later (and do the same kind of snapshot of /usr/local if you want to be able to roll those back). Doesn’t Ubuntu do something like this? I’d hate to do any kind of serious system maintenance without an undo button.
That’s a really clever strategy and no, I’m not aware of any Linux distro that does something like that. Guix and Nix achieve even broader reversibility with a very different strategy: the packages are managed like an immutable hash data structure in a functional language. The packages are the keys, the files are left in place on disk, and changing a package on disk means creating a new set of symlinks (pointers) into them. There’s an eventual GC process for long-unreferenced data.
As an aside, this is so obviously the right way to do package and configuration management that I evaluate them ~yearly for use on Lobsters and personal projects, but the documentation/maturity aren’t there yet and I’ve never seen a working install of a Rails app, as they have a significant impedance mismatch with language-specific package managers/apps distributed as source.
The root snapshot reminded me of transactional-update and ABRoot, which are (as best as I can tell, I haven’t used them) both snapshot-based strategies for applying updates.
I’m curious why this isn’t standard practice even for distributions that you do trust to upgrade versions correctly: spin up a new VPS with the new distribution, copy over (hopefully a small amount of) state, switch DNS to point to the new VPS. What are the arguments against that approach?
The argument in our case is that ansible has been brittle. Some of that could be solved by investing more time in cold-start provisioning, but some of it has been ansible making backwards-incompatible changes more often than we make any changes to our playbook. I never wanted to spend the (it turns out) ~5 hours getting that working.
Also, “switch DNS to the new VPS” has bitten us: the new one was assigned an IP that’s on many spam blacklists. I didn’t think to check and re-roll until getting a clean one (apparently the best practice on DigitalOcean).
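For anyone wanting to automate that check next time, here is a rough sketch (not what Lobsters actually runs; the blacklist zone is just one well-known example, and a hijacking resolver can spoil the result) of the standard DNSBL lookup: reverse the IPv4 octets, query them under the blacklist zone, and treat any answer as “listed”.

    use std::net::ToSocketAddrs;

    // Reverse the IPv4 octets and look them up under the blacklist zone.
    // An A record in the answer means "listed"; NXDOMAIN means "clean".
    fn is_listed(ip: [u8; 4], zone: &str) -> bool {
        let query = format!("{}.{}.{}.{}.{}", ip[3], ip[2], ip[1], ip[0], zone);
        // getaddrinfo wants a port; it is irrelevant here, we only resolve.
        (query.as_str(), 0u16)
            .to_socket_addrs()
            .map(|mut addrs| addrs.next().is_some())
            .unwrap_or(false)
    }

    fn main() {
        let candidate = [192, 0, 2, 1]; // placeholder address (TEST-NET-1)
        if is_listed(candidate, "zen.spamhaus.org") {
            println!("candidate IP is listed; re-roll the droplet");
        } else {
            println!("candidate IP looks clean");
        }
    }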
Yeah, keeping up with ansible changes is tiresome. I found that it’s helpful to use a scratch server to debug the playbooks without any time pressure.
When I was doing email, we ended up making some effort to keep the IP addresses of our mail servers as stable as possible. Rather than updating the DNS we would just move the service IP address(es) from the old server to the new one - these were virtual service addresses, in addition to the per-instance addresses.
Dunno how easy it is to fling IP addresses around like that in a VPS provider: probably hard or impossible! In which case I might treat the scratch server as a practice dry run for an in-place upgrade of the live server.
On Linode, you can swap IP addresses between machines. On DigitalOcean, it looks like Reserved IP addresses fill a similar niche. It looks like that would be for a new IP address, not the current ones, though.
Yeah, I brought this up last night. I was planning to file an issue or something like that when the current crisis had passed so we can do an orderly transition to a new IP and not have as many problems next time.
I’m chewing on this one. I’m a bit irked that the product is basically paying the hosting provider to avoid dealing with problems caused by inadequate enforcement against bad behavior by other customers. The most uncharitable version of this is that the company is extorting me to avoid the expenses of acting responsibly. So my hobbyist sensibilities about the way things Ought To Be are clashing with my professional sensibilities about how cheap an improvement to prod uptime it’d be. Probably I’ll get over things and set it up soonish.
Oh, I totally misunderstood the pricing and thought it was a flat amount per month. Thank you, I’ve added one to the web01 vps and I’ll transition DNS to it soon.
Disadvantage: You temporarily double the resource usage, as both the old and the new server exist at the same time for a short duration. Another issue is DNS propagation which is not instantaneous.
Overall, it is still my preferred method, because if anything goes wrong, I notice it early and can keep things running on the old server while I troubleshoot the new, upgraded one. For example, the outage described above wouldn’t have happened if this procedure had been followed.
Lobsters’ finances (my wallet) are fine with this kind of expense. Running Lobsters would be a cheap hobby at twice the price. Just wanted to post a reminder, as people occasionally express concern about hosting costs, and I prefer not to take donations.
Considering the free nature of Lobsters, a $7/hr VM would quickly increase hosting costs even during cutovers if things don’t go as planned.
The approach of running two VMs during cutover doubles the cost only during the fraction of time that a whole-system upgrade is being made. In extreme cases that might be, what, 8 hours a year? That would add 0.1% to the cost of the VM. I don’t see that it would “quickly increase VM costs”!
Consumer ISPs often have their own idea of TTLs, independent of what the authoritative server tells them.
Perhaps the correct approach is to use some sort of reverse proxy but if that requires a VM of its own it would definitely add a lot more cost.
What sort of costs do we expect to incur in consequence?
That depends very much on the server/plan. But yes, in the cloud, it usually costs peanuts. But if you have a beefy on-premise server, you can’t just double your hardware on a whim.
Would setting DNS TTL to 60 (seconds) sufficiently far in advance mitigate that disadvantage?
I don’t have enough experience with DNS management to know for sure. All I know is that this stuff keeps being cached at every step of the way and it makes life difficult sometimes. Unfortunately, it isn’t just a switch you can flip and it’s done.
There is a reason why I have an iron rule for managing Ubuntu servers: never run a dist-upgrade.
That’s crazy. I’ve used “apt-get dist-upgrade” as the sole means of updating my Debian machines for 19 or 20 years now. Granted they’re desktops, and 90% of the time I’m going from testing to testing, but still, other than an Nvidia driver fubar once, it always works great for me.
If you need to go to another release, throw away the old server and set up a new, clean one.
The advantage of routine rebuild-from-scratch is that you get to practise parts of your disaster recovery process. When something surprising like a driver fubar happens, it does not happen on a production machine and it does not make a planned outage take much longer than expected.
I have also had no problems with Debian upgrades long-term. I’ve had two long-running Debian servers: one on physical hardware, installed in 2001 and dist-upgraded until I retired the machine in 2010, and one on a VPS, installed in 2013 and still running. Been very impressed with how it all just works. In the early years (2001-2006 especially) an upgrade would often break X11 and require me to futz with XF86Config, but that was eventually sorted out with better auto-detection. The underlying OS never had issues though.
I’m about 50/50 on do-release-upgrade working on my home servers. Failures are usually due to some weird configuration I did (hello, years of changing network configuration systems) or a third-party package. I’m not mad about this; trying to support absolutely every possible Ubuntu system is impossible. But it is an issue for me.
You know, I was going to say that it’s OK, as it’s a handful of unsafe blocks around FFI, it’s comparatively easy to just eyeball them, and, unless you go full CHERI, unsafety has to live somewhere.
And I think that’s a bug and technically UB! CString::new allocates, and you can’t allocate in pre-exec.
So kinda no maybe?
But also maybe yes, or at least much better than alternatives? This bug really jumps out at me; it’s trivial to notice for someone unfamiliar with the code. pre_exec is tricky, and unsafe makes it really stick out, much better than if it were hidden in the guts of the standard library or some native extension.
(also, obligatory PSA: the biggest problem with sudo is not memory unsafety per se, but just the vast scope for this security sensitive utility. For personal use, you can replace sudo with doas, and it’ll probably make a bigger dent in insecurity. If you have to be API and feature-compatible with sudo though, then, yes, what Ferrous folks are doing makes most sense to me)
I often see doas recommended as simpler than sudo. When I compare the documentation I see an almost identical feature set. The only difference in security surface area seems to be the configuration. Is there more that sudo does that doas does not do?
That’s also the funny thing — I don’t actually know what sudo does. I know that OpenDoas clocks in at under 5k lines of code, while sudo is a lot more (see the nearby comment by andyc). So, that’s a non-constructive proof that it does something extra!
You’re allowed to allocate in pre-exec. The restrictions the documentation mentions mostly stem from other threads potentially holding locks or having been mid-modification of the env variables.
If you guarantee that there are no threads running at the time of fork() (and meet other preconditions like reentrancy), you can in fact malloc without any big issues.
I think in case of Rust it’s actually quite murky technically:
First, Rust doesn’t have assert_no_thread!() functionality, so, if you assume at pre_exec time that the program is single-threaded, the calling function should be marked as unsafe with the safety precondition of “must be single threaded”. Which mostly boils down to just “don’t allocate” in practice.
Second, allocation in Rust calls #[global_allocator], which is arbitrary user code (a sketch of that follows below). As a general pattern, when you call arbitrary code from within unsafe blocks, there usually is some Rube Goldberg contraption which ends with a shotgun aimed at your feet. That is, if the user tags their own static as a global allocator, they can call various APIs on that static directly, and that should be enough to maneuver the thing into “technically UB” even without threads. In particular, you could smuggle something like https://github.com/rust-lang/rust/issues/39575#issuecomment-437658766 that way.
But yeah, this is quite subtle; there was some debate about whether before_exec needs to be unsafe at all, and I personally am not clear on what the safety contract of before_exec actually is, in terms of Rust APIs.
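To make the #[global_allocator] point concrete, here is a tiny hypothetical example: any program can install one, and from then on every allocation, including one made inside a pre_exec closure, runs this user-written code.

    use std::alloc::{GlobalAlloc, Layout, System};
    use std::sync::atomic::{AtomicUsize, Ordering};

    // A perfectly legal user-defined allocator. The point is only that this
    // is arbitrary user code: it could take locks or touch state that fork()
    // leaves inconsistent in the child.
    struct CountingAlloc;

    static ALLOCATIONS: AtomicUsize = AtomicUsize::new(0);

    unsafe impl GlobalAlloc for CountingAlloc {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            ALLOCATIONS.fetch_add(1, Ordering::Relaxed);
            unsafe { System.alloc(layout) }
        }
        unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
            unsafe { System.dealloc(ptr, layout) }
        }
    }

    #[global_allocator]
    static GLOBAL: CountingAlloc = CountingAlloc;

    fn main() {
        let v = vec![1, 2, 3]; // this Vec allocation goes through CountingAlloc
        println!("{:?}, {} allocations so far", v, ALLOCATIONS.load(Ordering::Relaxed));
    }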
To be fair, pre_exec is an unsafe function for precisely this reason and it should stay that way. The API contract is largely unspecified because fork() does some amazing things to program state. I think the solution here would be to swap in a CStr over a CString, to avoid the allocation (see the sketch below).
edit: One way we could avoid it is by introducing a new unsafe trait, Interruptable. It must be explicitly declared on structs. The trait would simply declare the struct and its functions reentrancy safe. A function is automatically interruptible if all non-local data it uses is also interruptible. Then PreExec could simply require the function to also be Interruptable.
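A minimal sketch of that suggestion (not the sudo-rs code; it assumes the libc crate for the raw open call): do the allocation in the parent, before spawning, so the closure only reads the already-built C string and never calls the allocator between fork() and exec().

    use std::ffi::CString;
    use std::os::unix::process::CommandExt;
    use std::process::Command;

    fn main() -> std::io::Result<()> {
        // Allocate in the parent, before fork(); the closure below only reads it.
        let path = CString::new("/dev/null").expect("no interior NUL");

        let mut cmd = Command::new("true");
        unsafe {
            cmd.pre_exec(move || {
                // Only async-signal-safe work here: no malloc, no locks, no std::env.
                // SAFETY: `path` was allocated in the parent and is just read here.
                let fd = unsafe { libc::open(path.as_ptr(), libc::O_RDONLY) };
                if fd < 0 {
                    return Err(std::io::Error::last_os_error());
                }
                Ok(())
            });
        }
        cmd.status()?;
        Ok(())
    }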
To be fair, pre_exec is an unsafe function for precisely this reason and it should stay that way. The API contract is largely unspecified because fork() does some amazing things to program state.
It looks as if it’s intended as an abstraction over different process-creation mechanisms, and fork and vfork do different amazing things to process state.
Fork creates a copy of the address space and file descriptor table, but that copy has only the current thread in it. If other threads are holding locks, you cannot acquire those locks without deadlocking. You need to register pre-fork hooks to drop them (typically, the pre-fork hook should acquire the locks in the prepare stage and then drop them in the child; it should also try to guarantee consistent state in both). It is unsafe to call malloc from a multithreaded program between fork and execve because it may deadlock on a lock held by a thread that was not copied.
Vfork creates a copy of the file descriptor table but does not create a copy of the address space. You can modify the file-descriptor table until you execve and then you effectively longjmp back to the vfork call (I believe this is actually how Cygwin implements vfork). Because the code in the vfork context is just normal code, it is safe to call malloc there, but anything you don’t free will leak.
This is the main reason that I prefer using vfork: I can use RAII in my setup code and, as long as I reach the end of the scope before calling execve, everything is fine.
This is often a very constrained environment where normal operations like malloc, accessing environment variables through std::env or acquiring a mutex are not guaranteed to work
Which doesn’t read like memory allocations are forbidden in that closure to me.
Ninja edit: To me, it reads like if e.g: your program is single-threaded then you’re fine.
Yes, that rule is mostly about multithreaded environments. And violations usually don’t result in memory corruption but in deadlocks.
The reason is that after forking, you only have the current thread, all the other ones are “frozen” in time. So if you forked while some other thread held a lock inside malloc and then call malloc yourself, you can deadlock.
And also, glibc in particular has special code to make this work, so you can indeed malloc after fork there safely.
So IMO this is POSIX UB and therefore Rust UB by fiat, but very likely not a security issue in practice. It should be fixed, but it’s not super alarming to me.
You definitely can replace sudo with doas… unless you run RHEL or a clone. I have two Rocky machines, and my Ansible setup does not coexist with the RHEL clones the way it does with FreeBSD and my myriad of other operating systems.
I have effectively replaced sudo with doas in all situations except for those platforms.
unsafe does not necessarily mean the code lacks memory safety, but that the code needs more scrutiny. I am curious why there aren’t SAFETY comment blocks on each instance of unsafe to explain the invariants that make it safe or necessary.
Most of them look pretty clearly justified, although it’d still be helpful to have an argument why they’re correct. The unsafe blocks I looked at are all used to interact with unsafe APIs, which is sort of a given for this sort of program, but they’re short and at first glance don’t seem to do anything that would be hard to reason about.
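For what it’s worth, the convention usually looks something like this hypothetical FFI helper (not taken from sudo-rs): a # Safety section on the unsafe fn, and a SAFETY: note on each unsafe block stating the invariant it relies on.

    use std::ffi::{CStr, CString};
    use std::os::raw::c_char;

    /// Hypothetical FFI helper, just to illustrate the commenting convention.
    ///
    /// # Safety
    /// `ptr` must be null or point to a valid, NUL-terminated string that
    /// outlives this call.
    unsafe fn name_from_c(ptr: *const c_char) -> Option<String> {
        if ptr.is_null() {
            return None;
        }
        // SAFETY: non-null was checked above; validity and lifetime of the
        // string are the caller's obligation, documented in `# Safety`.
        let s = unsafe { CStr::from_ptr(ptr) };
        s.to_str().ok().map(|s| s.to_owned())
    }

    fn main() {
        let raw = CString::new("root").unwrap();
        // SAFETY: `raw` is a valid NUL-terminated string that outlives the call.
        let name = unsafe { name_from_c(raw.as_ptr()) };
        assert_eq!(name.as_deref(), Some("root"));
    }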
This would mean that the whole titular claim, that this is the “first stable release of a memory safe sudo implementation”, is probably wrong. C code can also “have memory safety”, so the first stable release of a memory safe sudo implementation is probably (at least some version of) sudo itself.
It might be my own failing, but I’m having trouble understanding your comment. That the “memory safe version of sudo itself has no bugs leading to memory errors” seems like a tautology.
If you mean we don’t know if there’s any such version, that’s technically true, and why I said “probably”. It seems pretty likely, though, and it’s certainly at least possible, bar any proof otherwise.
The point is that it doesn’t seem likely to me that sudo, as a reasonably large program in a language with few safety features, has no more such bugs. Sudo has had memory safety bugs in the past, and if I had to guess, I’d expect more to emerge at some point. You may be able to prove that there’s no memory-safe version simply by waiting for that to happen.
Of course, if there is such a version, finding it and proving it to be safe may be considerably more challenging. Which is important in itself: its safety wouldn’t have much value to us unless we knew about it.
You may be able to prove that there’s no memory-safe version simply by waiting for that to happen.
That would show there is (was) a memory-error bug in some version(s), not all prior versions. And it’s all a bit hand-wavy; that we know that there were memory safety bugs in the past only shows that such bugs were found and fixed. From reading elsewhere sudo sounds like it’s more complex than I would’ve thought it should be, but it’s still not so big that it’s impossible that it is (or has been at some point) free of such bugs.
If you prefer, I can rephrase my point (making it slightly weaker): the linked article is claiming the “first stable release of a memory safe sudo implementation” has been made, referring to this Rust implementation, but there is no certainty that all prior implementations had memory-error bugs, so if we take “memory safe” to mean “has no memory-error bugs” then the claim is unproven and could be wrong. (It seems we can’t take “memory safe” to mean “written completely in a memory safe language”, since it apparently uses “unsafe”).
(As to how likely the claim is to be wrong, we may disagree, but there’s probably not much in the way of objective argument that we can make about it).
Even if you implement a program in a memory safe language you do not know for sure that it is memory safe.
You have a very strong reason to believe so, but there is always the possibility of a mistake in the type system design or the language implementation. In the years since its release, Rust has been studied with formal methods that mathematically prove correctness properties of Rust programs, as well as specifically proving the correctness of various parts of the Rust standard library that make use of unsafe blocks. This helps give very strong confidence about memory safety, but again there is the possibility of gaps.
What I find irritating - and Apple Mail does this too - is that the name of the sender is so small. In my head conversations are tied to people, so I am more likely to look for the person first. I may be an outlier, but this strikes me as strange.
I may be an outlier, but this strikes me as strange.
Nah I get what you are saying. I don’t know if it’s because of the era when I started using GUI mail clients, but these days I feel like I struggle to quickly see the information I want to see in many graphical mail clients.
In Thunderbird I’d love to know of a way to move the expandable thread widget thingy to the far left; instead it seems to be attached to the Subject, which I don’t want on the far left. I can add an icon that indicates the email is part of a thread, but the clickable UI element is still by the subject.
The unfortunate thing with these sorts of malware detectors is that they operate with an expected false-positive rate. The industry generally thinks that’s fine, because it’s better to hit more malware than to miss any.
That only works if the malware detector authors are receptive to feedback, though.
CrowdStrike Falcon on macOS and SentinelOne on Windows have cost me so much wasted time as an employee of companies that use them. Falcon routinely kills make, autoconf, etc. SentinelOne does the same thing when using msys2 or cygwin on Windows.
At least SentinelOne tells me; Falcon tries its best to leave zero useful information on the host. When processes start randomly being terminated it takes a bit of effort to find out what the hell is actually happening. Often I realize it was Falcon only after some poor desktop security tech gets assigned a ticket and they reach out to me with a lot of confusion around some crazy complex command lines being sent through the various exec calls.
Because of the high frequency with which I encounter the issue, if something randomly fails in a way I don’t expect I immediately suspect Falcon.
My understanding is that signed binaries don’t help - if a binary is rarely run (because it’s just been released, or it’s just not a mainstream tool) there’s a good chance it’ll be detected as malicious no matter what.
I believe it depends on the kind of signature. Last time I checked, companies can buy more expensive certificates which are privileged insofar that the binaries don’t need to be run on a lot of machines to be considered safe.
I’ve worked at a company where the IT was so terrible that their actions bordered on sabotage. They caused more damage and outages than actual hackers would. The anti-virus would delete our toolchains and fill the disk with ominous files until there was no space left. Luckily, they left our Linux machines alone, so we put everything we cared about on Linux (without GUI) and hoped that their lack of know-how would prevent them from messing with those machines. It worked.
It would be nice to talk about the hardware implications of this setup. My assumption is any class-compliant multi-I/O interface should work fine in FreeBSD, but I have never taken the time to look into it myself. I have done a lot of audio work in the Linux world in the past and it was always a hassle to get the right JACK + ALSA configuration so I could do multi-track recording.
I can’t speak to FreeBSD, but on Linux a lot of work is put into various generic drivers (USB is what I’m mainly thinking of these days) to implement workarounds for the never-ending list of buggy devices that don’t actually comply with various specs. I learned the hard way when the audio interface I bought was so new that it wouldn’t work at all until some things were fixed in the kernel driver.
I don’t do audio work for a living but lately I’ve had mostly good experiences with PipeWire. My setup is fairly esoteric in terms of odd sinks/sources spread across multiple machines, but overall it has worked well.
Admittedly I’ve not done any recording in the traditional sense, just using it as a giant goofy mixer for having all my computers use a single microphone and headset.
I had one of these. I got an 8 MiB version for Christmas and remember being a bit disappointed because I’d hoped to get a second one a few years later when they became cheap and pair it with SLI, but that was a bit of a waste with the 8 MiB ones. In the end, by the time they were cheap enough to get another one, so were other graphics cards that outperformed an SLI VooDoo2. My next card was an ATi All-in-Wonder, which had a Rage128 chipset and ran things in 1024x768 about as well as the VooDoo2 and also did TV input and hardware-accelerated MPEG-2 encoding (I was at university then and it let me use my huge 19” CRT monitor [bought dirt cheap at a computer surplus place] as a TV).
This was also the last model where 3dfx didn’t make cards themselves; they just sold the chips to card manufacturers. There was a company (Obsidian?) that made a single card with two VooDoo2 chips on it in SLI mode. They had two-page adverts on the inside cover of computer magazines and I wanted one so much. They looked incredibly impressive but cost about as much as a complete computer.
Most games that used it used the proprietary Glide API, which was vaguely OpenGL-like. There were DOS and Windows drivers and most of the games had custom 3dfx-specific code. Some folks (not sure if this was 3dfx, Id, or a third party) wrote a ‘mini-GL’ driver that exposed a subset of OpenGL on top of Glide. Quake 1 was a DOS game but the codebase was fairly modular. If I remember correctly, it was originally written on a NeXT workstation and an OpenGL renderer was added on some UNIX graphics workstation. This was merged with the WinQuake code (which ran, unsurprisingly, on Windows instead of DOS) to allow you to run Quake on a Windows NT workstation with an OpenGL accelerator. The mini-GL driver allowed the same code to run on Windows NT or Windows 95 with a 3dfx card. It implemented just the subset of OpenGL that GLQuake needed. You could drop it in your system32 folder and make it the default OpenGL implementation (replacing the API-complete software renderer), which had some very fun side effects: any OpenGL window would become full screen, including things like the tiny preview of the OpenGL screen savers that shipped with NT4.
The most interesting thing about this era, from an historical perspective, is how quickly 3dfx lost the crown. They were taken completely by surprise by the nVidia GeForce. The nVidia Riva TNT was fairly comparable to the 3dfx offerings of the same time but the GeForce added transform and lighting to the accelerated pipeline and completely blew the 3dfx cards away. The long lead times in hardware from design to shipping meant that 3dfx went from market leader to has-been in under a year.
Let’s step back and look at that first Voodoo Graphics chipset, called SST1 and built by TSMC on a 500nm process
Lines like that remind me how amazing it is to live in the future.
Yep, GLQuake also ran under Win NT 4, but the normal one didn’t. That’s the first time I remember thinking deeper about these driver architectures and issues.
I had a Diamond Voodoo 3D and then another Voodoo II, but I don’t remember the brand. Nice throwback when I found the unused SLI cable many years later in a box. The only time we tried SLI was at LAN parties where we had enough cards…
Also I’m kinda sure I went from the Voodoo II directly to a GeForce II, but I’d have to do a reality check with release dates. The Voodoo II was certainly cool at the time, although the pass-through wasn’t perfect; I might be misremembering some issues when running 2D stuff at 1600x1200 on my 49kg 22” iiyama screen…
If I remember correctly, WinQuake and GLQuake used the same networking protocol, but DOS Quake used a different one. This led to some arguments among folks at LAN parties because DOS Quake had better frame rates than WinQuake for the non-3dfx owners, but GLQuake was much better for the others. Eventually, CPUs got fast enough that WinQuake was better because it could run at higher resolutions than DOS Quake.
It has been a few minutes, but if I remember correctly 3dfx released MiniGL to implement just enough OpenGL to allow the OpenGL version of Quake to run. I believe originally quake only supported software rendering and vquake supported Rendition accelerators.
Yep, Quake was created on NeXT; the DOS version was compiled with djgpp and on Win95 could use a VxD to get IP networking going from DOS.
I had that same ATI AIW card; it was pretty mind-blowing to me at the time that a computer could suddenly do so much and be the nexus of many activities. I can’t think of any novelty in PCs since that can compare. Outside of that, the smartphone with 3G or better connectivity was the next tectonic shift.
It was great for watching TV, but the utility of the MPEG encoding was problematic because of limitations of the AVI file format. I can’t quite remember the details, I think sound and video tracks were each encoded with time stamps in ticks of different sizes, which meant that rounding errors would accumulate and, after about an hour, they’d be about half a second off for things recorded from TV. If you recorded a complete film then you’d end up with the second half being painful to watch.
The core premise of the article is completely mistaken. The database key was never intended to be a secret. At-rest encryption is not something that Signal Desktop is currently trying to provide or has ever claimed to provide. Full-disk encryption can be enabled at the OS level on most desktop platforms.
Yikes. Disk encryption covers the “dude swiped my laptop” attack vector but not the malicious npm package (or whatever) attack vector. Isn’t this terrifyingly short-sighted of Signal?
What would you propose as a fix for the problem? Whatever you can come up with: as long as the key is stored somewhere, it’s available for malware to get it. Is it in the OS keychain? Inject code into the Signal binary and query for it. Is it on disk? Read it from there. Is it encrypted on disk? Take the key from the Signal binary and decrypt it. Is it in memory of the Signal app? Take it from there.
Whatever you come up with can possibly be classified as “defense in depth”, but there’s nothing (short of having to manually authenticate whenever the app does any network request / access to stored data) that can be done to protect secrets in light of malware attacks.
I don’t know about Windows and Linux, but on macOS keychain material is encrypted by the SEP, and access to the data requires user authentication and, if set correctly, requires it on a case-by-case basis.
By “requires” I mean it is not possible for any software at any privilege level to bypass it.
I understand that there doesn’t exist perfect security in the face of arbitrary malware but we have OS key stores for good reason.
If I told someone it would be extremely trivial to write a malicious npm package that stole all of their Signal messages, most people would be very surprised and some would perhaps be less likely to use Signal Desktop for very sensitive conversations. (There is no analogous attack on iOS, right?)
While I agree that Electron isn’t to blame, I will say in my experience Electron apps for networked applications rarely seem to use a proper secure storage system.
For accessibility purposes I use a hacked-together terminal Slack client for most of my Slack usage. Originally I followed the advice of most 3rd-party Slack clients on how to get a better token to use with 3rd-party clients, but realized: why bother, when I can just write something that constantly scrapes the various IndexedDB and Session/Local Storage databases?
I have a script that finds my Slack workspaces’ tokens, validates them, then shoves them into a secret store (org.freedesktop.secrets) and sends my Slack client a signal to reload the secrets from the secret store. I do run the client for audio calls frequently enough that my local creds stay refreshed.
I’ve lost track of how many networked electron apps that I’ve encountered that I’ve been able to abuse the local storage being unencrypted to gain api credentials for scripting purposes.
This seems to be a side-effect of how many of these apps are fairly simple wrappers around their web versions and they don’t do the due-diligence on securing that data as they are used to browsers being in charge of protecting their data.
While I agree that Electron isn’t to blame, I will say in my experience Electron apps for networked applications rarely seem to use a proper secure storage system.
Yeah, I’d agree here. But I feel that a lot of Electron apps half-ass pretty much anything that isn’t absolutely core to the app.
This seems to be a side-effect of how many of these apps are fairly simple wrappers around their web versions and they don’t do the due-diligence on securing that data as they are used to browsers being in charge of protecting their data.
Yeah, many seem like low-effort “we made an App!” releases that are just a multi-hundred-meg wrapper around a web page, but without doing any of the work an actual browser does (even Chrome) to protect user data and privacy.
After months of weekends consisting of house work or work work, I’m finally taking a mental break and going to attempt to do something somewhat social. My mental health tends to spiral downward easily with isolation from friends and social circles, so I’m going to focus on breathing, living, and safely socializing.
My wife and I adopted a dog, who we named Ahsoka Tano. I’ll be taking care of her along with our existing three-and-a-half-year-old dog, Vader. I gotta work Sunday to make up for hours missed yesterday, since we took the day going through the adoption process.
I don’t celebrate American Thanksgiving, but I had the day off anyways. So I decided to check an item off my bucket list - write a driver. A writeup of what it took is in the README, and a binary build is in the releases. (The licensing is unclear, considering it’s based on the DDK sample driver, but everyone based their drivers on the DDK samples; I’m just wondering how I make clear my changes are some kind of free license even if the base is just the royalty-free non-exclusive whatever sample drivers are under.)
Fascinating read. Audio was the thing that made me switch from Linux to FreeBSD around 2003. A bit before then, audio was provided by OSS, which was upstream in the kernel and maintained by a company that sold drivers that plugged into the framework. This didn’t make me super happy because those drivers were really expensive. My sound card cost about £20 and the driver cost £15. My machine had an on-board thing as well, so I ended up using that when I was running Linux.
A bit later, a new version of OSS came out, OSS 4, which was not released as open source. The Linux developers had a tantrum and decided to deprecate OSS and replace it with something completely new: ALSA. If your apps were rewritten to use ALSA they got new features, but if they used OSS (as everything did back then) they didn’t. There was only one feature that really mattered from a user perspective: audio mixing. I wanted two applications to be able to both open the sound device and go ‘beep’. I think ALSA on Linux exposed hardware channels for mixing if your card supported it (my on-board one didn’t); OSS didn’t support it at all. I might be misremembering and ALSA supported software mixing, OSS only hardware mixing. Either way, only one OSS application could use the sound device at a time and very few things had been updated to use ALSA.
GNOME and KDE both worked around this by providing userspace sound mixing. These weren’t great for latency (sound was written to a pipe, then at some point later the userspace sound daemon was scheduled and then did the mixing and wrote the output) but they were fine for going ‘bing’. There was just one problem: I wanted to use Evolution (GNOME) for mail and Psi (KDE) for chat. Only one of the KDE and GNOME sound daemons could play sound at a time and they were incompatible. Oh, and XMMS didn’t support ALSA, and so if I played music then neither of them could do audio notifications.
Meanwhile, the FreeBSD team just forked the last BSD licensed OSS release and added support for OSS 4 and in-kernel low-latency sound mixing. On FreeBSD 4.x, device nodes were static so you had to configure the number of channels that it exposed but then you got /dev/dsp.0, /dev/dsp.1, and so on. I could configure XMMS and each of the GNOME and KDE sound daemons to use one of these, leaving the default /dev/dsp (a symlink to /dev/dsp.0, as I recall) for whatever ran in the foreground and wanted audio (typically BZFlag). When FreeBSD 5.0 rolled out, this manual configuration went away and you just opened /dev/dsp and got a new vchan. Nothing needed porting to use ALSA, GNOME’s sound daemon, KDE’s sound daemon, PulseAudio, or anything else: the OSS APIs just worked.
It was several years before audio became reliable on Linux again and it was really only after everything was, once again, rewritten for PulseAudio. Now it’s being rewritten for PipeWire. PipeWire does have some advantages, but there’s no reason that it can’t be used as a back end for the virtual_oss thing mentioned in this article, so software written with OSS could automatically support it, rather than requiring the constant churn of the Linux ecosystem. Software written against OSS 3 20 years ago will still work unmodified on FreeBSD and will have worked every year since it was written.
There was technically no need for a rewrite from ALSA to PulseAudio, either, because PulseAudio had an ALSA compat module.
But most applications got a PulseAudio plug-in anyway because the best that could be said about the compat module is that it made your computer continue to go beep – otherwise, it made everything worse.
I am slightly more hopeful for PipeWire, partly because (hopefully) some lessons have been drawn from PA’s disastrous roll-out, partly for reasons that I don’t quite know how to formulate without sounding like an ad-hominem attack (tl;dr some of the folks behind PipeWire really do know a thing or two about multimedia and let’s leave it at that). But bridging sound stacks is rarely a simple affair, and depending on how the two stacks are designed, some problems are simply not tractable.
One could also say that a lot of groundwork was done by PulseAudio, revealing bugs etc., so the landscape that PipeWire enters in 2021 is not the same that PulseAudio entered in 2008. For starters there’s no aRts, ESD, etc. anymore; these are long dead and gone, and the only thing that matters these days is the PulseAudio API and the JACK API.
I may be misremembering the timeline but as far as I remember it, aRts, ESD & friends were long dead, gone and buried by 2008, as alsa had been supporting proper (eh…) software mixing for several years by then. aRts itself stopped being developed around 2004 or so. It was definitely no longer present in KDE 4, which was launched in 2008, and while it still shipped with KDE 3, it didn’t really see much use outside KDE applications anyway. I don’t recall how things were in Gnome land, I think ESD was dropped around 2009, but pretty much everything had been ported to canberra long before then.
I, for one, don’t recall seeing either of them or using either of them after 2003, 2004 or so, but I did have some generic Intel on-board sound card, which was probably one of the first ones to get proper software mixing support on alsa, so perhaps my experience wasn’t representative.
I don’t know how many bugs PulseAudio revealed, but the words “PulseAudio” and “bugs” are enough to make me stop considering going back to Linux for at least six months :-D. The way bug reports, and contributors in general, technical and non-technical alike, were treated is one of the reasons why PulseAudio’s reception was not very warm, to say the least, and IMHO it’s one of the projects that kickstarted a very hostile and irresponsible attitude that prevails in many Linux-related open-source projects to this day.
I might be misremembering and ALSA supported software mixing, OSS only hardware mixing.
That’s more like it on Linux. ALSA did software mixing, enabled by default, in a 2005 release. So it was a pain before then (you could enable it at least as early as 2004, but it didn’t start being easy until 1.0.9 in 2005)… but long before godawful PulseAudio was even minimally usable.
BSD did the right thing though, no doubt about that. Linux never learns its lesson. Now Wayland lololol.
GNOME and KDE both worked around this by providing userspace sound mixing. These weren’t great for latency (sound was written to a pipe, then at some point later the userspace sound daemon was scheduled and then did the mixing and wrote the output) but they were fine for going ‘bing’.
Things got pretty hilarious when you inevitably mixed an OSS app (or maybe an ALSA app, by that time? It’s been a while for me, too…) and one that used, say, aRTs (KDE’s sound daemon).
What would happen is that the non-aRTs app would grab the sound device and cling to it very, very tightly. The sound daemon couldn’t play anything for a while, but it kept queuing sounds. Like, say, Gaim alerts (anyone remember Gaim? I think it was still gAIM at that point, this was long before it was renamed to Pidgin).
Then you’d close the non-aRTs app, and the sound daemon would get access to the sound card again, and BAM! it would dump like five minutes of gAIM alerts and application error sounds onto it, and your computer would go bing, bing, bing, bang, bing until the queue was finally empty.
I’d forgotten about that. I remember this happening when people logged out of computers: they’d quit BZFlag (yes, that’s basically what people used computers for in 2002) and log out, aRTs would get access to the sound device and write as many of the notification beeps as it could to the DSP device before it responded to the signal to quit.
ICQ-inspired systems back then really liked notification beeps. Psi would make a noise both when you sent and when you received a message (we referred to IM as bing-bong because it would go ‘bing’ when you sent a message and ‘bong’ when you received one). If nothing was draining the queue, it could really fill up!
Then you’d close the non-aRTs app, and the sound daemon would get access to the sound card again, and BAM! it would dump like five minutes of gAIM alerts and application error sounds onto it, and your computer would go bing, bing, bing, bang, bing until the queue was finally empty.
This is exactly what happens with PulseAudio to me today, provided the applications trying to play the sounds come from different users.
Back in 2006ish though, alsa apps would mix sound, but OSS ones would queue, waiting to grab the device. I actually liked this a lot because I’d use an oss play command line program and just type up the names of files I want to play. It was an ad-hoc playlist in the shell!
This is just an example of what the BSDs get right in general. For example, there is no world in which FreeBSD would remove ifconfig and replace it with an all-new command just because the existing code doesn’t have support for a couple of cool features - it gets patched or rewritten instead.
I’m not sure I’d say “get right” in a global sense, but definitely it’s a matter of differing priorities. Having a stable user experience really isn’t a goal for most Linux distros, so if avoiding user facing churn is a priority, BSDs are a good place to be.
I don’t know; the older I get the more heavily I value minimizing churn and creating a system that can be intuitively “modeled” by the brain just from exposure, i.e. no surprises. If there are architectural reasons why something doesn’t work (e.g. the git command line), I can get behind fixing it. But stuff that just works?
Same as with systemd, there were dozens of us for whom everything worked before. I mean, I mostly liked PulseAudio because it brought a few cool features, but I don’t remember sound simply stopping working before. Sure, it was complicated to set up, but if you didn’t change anything, it simply worked.
I don’t see this as blaming. Just stating the fact that if it works for some people, it’s not broken.
Well, can’t blame him personally, but the distros who pushed that PulseAudio trash? Absolutely yes they can be blamed. ALSA was fixed long before PA was, and like the parent post says, they could have just fixed OSS too and been done with that before ALSA!
But nah better to force everyone to constantly churn toward the next shiny thing.
ALSA was fixed long before PA was, and like the parent post says, they could have just fixed OSS too and been done with that before ALSA!
Huh? I just setup ALSA recently and you very much had to specifically configure dmix, if that’s what you’re referring to. Here’s the official docs on software mixing. It doesn’t do anything as sophisticated as PulseAudio does by default. Not to mention that on a given restart ALSA devices frequently change their device IDs. I have a little script on a Void Linux box that I used to run as a media PC which creates the asoundrc file based on outputs from lspci. I don’t have any such issue with PulseAudio at all.
dmix has been enabled by default since 2005 in alsa upstream. If it wasn’t on your system, perhaps your distro changed things or something. The only alsa config I’ve ever had to do is change the default device from the hdmi to analog speakers.
And yeah, it isn’t sophisticated. But I don’t care, it actually works, which is more than I can say about PulseAudio, which even to this day, has random lag and updates break the multi-user setup (which very much did not just work). I didn’t want PA but Firefox kinda forced my hand and I hate it. I should have just ditched Firefox.
Everyone tells me pipewire is better though, but I wish I could just go back to the default alsa setup again.
Shrug, I guess in my experience PulseAudio has “just worked” for me since 2006 or so. I admit that the initial rollout was chaotic, but ever since it’s been fine. I’ve never had random lag and my multi-user setup has never had any problems. It’s been roughly 15 years, so almost half my life, since PulseAudio has given me issues, so at this point I largely consider it stable, boring software. I still find ALSA frustrating to configure to this day, and I’ve used ALSA for even longer. Going forward I don’t think I’ll ever try to use raw ALSA ever again.
I cannot upvote this comment more. The migration to ALSA was a mess, and the introductions of Gstreamer*, Pulse*, or *sound_daemon fractured the system more. Things in BSD land stayed much simpler.
I was also ‘forced’ out of the Linux ecosystem because of the mess in the sound subsystem.
After spending some years in FreeBSD land I got hardware that was not supported by FreeBSD at that moment, so I tried Ubuntu … what a tragedy it was. When I was using FreeBSD my system would run for months and got rebooted only to install security updates or to upgrade. Everything just worked. Including sound. In Ubuntu land I needed to do a HARD RESET every 2-3 days because sound would go dead and I could not find a way to reload/restart anything that caused that ‘glitch’.
From time to time I try to run my DAW (Bitwig Studio) on Linux. A nice thing about using DAWs on Mac OS X is that they just find the audio and MIDI sources and you don’t have to do a lot of setup. There’s a MIDI router application you can use if you want to do something complex.
Using the DAW from Linux, if it connects via ALSA or PulseAudio, mostly just works, although it won’t find my audio interface from PulseAudio. But the recommended configuration is with JACK, and despite reading the manual a couple times and trying various recommended distributions, I just can’t seem to wrap my head around it.
I should try running Bitwig on FreeBSD via the Linux compatibility layer. It’s just a Java application after all.
Try updating to PipeWire if your distribution supports it already. Then you get system-wide JACK compatibility with no extra configuration/effort and it doesn’t matter much which interface the app uses. Then you can route anything the way you like (audio and MIDI) with even fewer restrictions than macOS.
Some of us hang out in forums where people literally start posting minutes after a Python release that they don’t understand why NumPy isn’t installing on the new version.
Waiting at least a little bit for the ecosystem to catch up is sound advice.
I don’t understand why you say that when the article was very clearly a meta-discussion of how to approach Python version upgrades. It is not asking users to hold off indefinitely, but instead is reacting to the availability and how that plays out with updates throughout the ecosystem.
A “product manager” for Python could take a lot away from how clearly the pain points were laid out. As a platform, it’s advantageous for Python to tackle a lot of the issues pointed out, but it’s hard because of the number of stakeholders for things like packages. Getting a Docker image out more quickly seems like low-hanging fruit, but delaying a few days could perhaps be intentional.
For what it is worth, the Docker container, like many very popular containers on the official Docker registry, is in fact owned and maintained by the Docker community themselves. I am unsure if it is really their duty to do that.
Many of the listed things in the article are indeed painful things to deal with, but some of them I’m not sure if the PSF is really the right entity to have had them fixed on launch day.
edit: clarified that it is the Docker community that maintains it, not Docker the corporate entity.
Also, as the author suggested it could be, it’s fixed already:
Digest: sha256:05ff1b50a28aaf96f696a1e6cdc2ed2c53e1d03e3a87af402cab23905c8a2df0
Status: Downloaded newer image for python:3.10
Python 3.10.0 (default, Oct 5 2021, 23:39:58) [GCC 10.2.1 20210110] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
They had to hit publish pretty quickly to release that complaint while it was still true.
Some of the concerns seem reasonable, for example the tooling catching up with the new pattern-matching syntax (match, case). If you use the popular Black code formatter, for example, it doesn’t yet handle pattern matching (and it looks like it’s going to be a bit of a job to update that).
TOML is really the best of both worlds IMO. Easy to read/write, hard to screw up.
I’d also say that there’s really no human serialization language that handles repetition well. YAML has its anchors and stuff but that’s footgun city. HCL has for_each which is good but also has a steep learning curve. Writing real code and dumping to something else is my preferred method if HCL isn’t an option.
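As a concrete, made-up example of that approach (assuming serde and serde_json): describe the repetitive part in a real language and dump it to a machine-friendly format at the end.

    use serde::Serialize;

    #[derive(Serialize)]
    struct Service {
        name: String,
        port: u16,
    }

    fn main() {
        // The repetition lives in ordinary code instead of YAML anchors.
        let services: Vec<Service> = (0u16..3)
            .map(|i| Service { name: format!("worker-{i}"), port: 8000 + i })
            .collect();
        println!("{}", serde_json::to_string_pretty(&services).unwrap());
    }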
All JSON is valid UCL as well. UCL actually supports all of the things I want from a human-friendly configuration language, such as well-defined semantics for loading multiple files with overrides (including deletion), so if you want to avoid complex parsing in your main application then you can have a stand-alone unprivileged process that parses UCL and generates JSON.
Anything where I might want YAML, I’ve found either JSON or UCL a better choice. JSON is much simpler to parse and has good (small!) high-performance parsers, UCL is more human-friendly. YAML is somewhere between the two.
One small nit: the YAML spec disallows tabs, while JSON allows them. In practice, I don’t know of any YAML parser implementations that will actually complain, though.
I agree with several of the issues pointed out on that page and its sibling pages, but some of the headache experienced is a direct result of libyaml (and thus PyYAML). It still doesn’t properly support YAML 1.2, which defines booleans as true/false. That is still a nasty thing when you want to use the literal word true or false as a string, but at least it avoids n, no, y, yes, and the various case differences; 1.2 only supports true and false.
libyaml also doesn’t generate errors on duplicate keys, which is incredibly frustrating as well.
The criticisms of implicit typing, tagging, flows, YAML -> language datatype conversion, etc. are all spot on. They are prone to errors and, in the latter case, make it really easy to introduce security issues.
That is still a nasty thing when you want to use the literal word true or false as a string, but at least it avoids n, no, y, yes, and the various case differences; 1.2 only supports true and false.
Are you sure? https://yaml.org/spec/1.2.2/#other-schemas seems to be saying that it’s fine to extend the rules for interpreting untagged nodes in arbitrary ways. That wouldn’t be part of the “core schema”, but then nothing in the spec says parsers have to implement the core schema.
The problem with the bare string interpretation, as I see it, is not so much the fact that it exists. It’s that it’s not part of the spec proper, but a set of recommended additional tags. What do you think 2001:40:0:0:0:0:0:1 is? Not the number 5603385600000001? The YAML spec, to the extent it has an opinion, actually agrees, but many YAML parsers will interpret it as 5603385600000001 by default, because they implement the optional sexagesimal integer type (the colon-separated fields read as base-60 digits: 2001*60^7 + 40*60^6 + 1 = 5603385600000001).
YAML 1.2 doesn’t recommend https://yaml.org/type/ any more, but it doesn’t disallow it, either. The best part of all this is that there are no strict rules about which types parsers should implement. If you use a bare string anywhere in a YAML document, even one that already has a well-understood meaning, the spec doesn’t guarantee that it will keep its meaning tomorrow.
I think this was just mistagged as satire and have corrected it. Though it’ll be pretty embarrassing if it also went over my head, what with taking my handle from x86 assembly.
Assembly programming didn't stay in the 1950s; it evolved along with high-level languages, incorporating structured, functional, and object-oriented programming elements. It plays well with modern APIs and DOMs.
At work, our team tends to be the one that inherits a lot of messed-up stuff and has to fix it. This week my partner in crime and I are replacing the deployment method for some services. Every time a new release is made, the current method requires touching 3-4 different git repos to get the new release deployed to production; to top it off, it is incredibly slow and fragile.
Many HPE servers have a dedicated network port for the iLO card but can also optionally share one of the regular network ports if needed. When in shared mode, you can indeed configure a VLAN tag for the management traffic, which can be different from the VLAN tag used by the host operating system normally.
Unfortunately, in the same way that chris explained that any compromised host might be able to switch the device's IPMI mode from dedicated to shared, using a VLAN for segregation can have a similar problem. If the compromised host adds a sub-interface with the tagged VLAN to its networking stack, it can gain network access to the entire IPMI VLAN.
In addition, there are other annoyances with using a shared interface. Because the OS has control of the NIC, it can reset the PHY. If the PHY is interrupted while, for example, you're connected over Serial over LAN or a virtual KVM, you lose access. If you're lucky, that's temporary. If you're really unlucky, the OS can continually reset the PHY, making IPMI access unusable. A malicious actor could abuse this to lock someone out of remote management.
That can’t happen when you use a dedicated interface for IPMI (other than explicit IPMI commands sent over /dev/ipmi0). Generally switching a BMC from dedicated mode to shared mode requires a BIOS/UEFI configuration change and a server reset.
(Speaking from experience with shared mode and the OS resetting the NIC. The malicious actor is merely a scenario I just dreamt up.)
Indeed, although I suspect in many cases these IPMI modules are already accessible from the compromised host over SMBus/SMIC or direct serial interfaces anyway - possibly even with more privileged access than over the network. That’s how iLOs and DRACs can have their network and user/group settings configured from the operating system.
The increased risk mostly isn’t to the compromised host’s own IPMI; as you note, that’s more or less under the control of the attacker once they compromise the host (although network access might allow password extraction attacks and so on). The big risk is to all of the other IPMIs on the IPMI VLAN, which would let an attacker compromise their hosts in turn. Even if an attacker doesn’t compromise the hosts, network access to an IPMI often allows all sorts of things you won’t like, such as discovering your IPMI management passwords and accounts (which are probably common across your fleet).
The L2 feature you are looking for is called a protected port. This should be available on any managed switch, but I’ll link to the cisco documentation:
In a previous life at a large hosting provider, we used this feature on switch ports that were connected to servers for the purposes of using our managed backup services.
Hm, obvious question – what’s better about a standalone Rust IDE?
Is this a pricing issue, a UI issue, or features? Or all?
It says you can use the Rust features within IntelliJ, so I guess it’s UI or pricing.
There is a reason why I have an iron rule for managing Ubuntu servers: never do a dist-upgrade. It was a painful lesson to learn; it fails more often than it works. If you need to go to another release, throw away the old server and set up a new, clean one. Bonus points for validating that your procedure to set up the server from a clean Ubuntu image works.
I have managed ~200 Ubuntu LTS servers since 2012 and have never had a dist-upgrade fail unless it exposed an underlying hardware failure that had been lying dormant, which appears to have been the problem here.
It really depends on what you have installed and how customized your system is. In essence, it's a lottery and you seem to have been winning a lot, congratulations. However, there is a myriad of things that can go wrong:
Life is better if everything is dockerized and your Ubuntu is just vanilla plus a docker installation.
I’ve never had a Debian version upgrade fail on me yet. I guess Ruby version upgrades are always going to be painful though?
It depends on how old your Debian was. I tried to upgrade an old Debian MIPS system that I inherited and discovered that the new packages required a newer version of apt. The newer version of apt needed a newer version of glibc. The machine became unbootable trying to get out of that mess. I believe newer versions of Debian ship a statically linked version of apt to avoid this problem.
Out of curiosity, did this happen during an N+1 upgrade, or was it a situation where the system went directly from several-versions-old to the most recent?
It might have been multiple versions, it was 10 years ago and I don’t remember the exact sequence of things.
This whole thread feels like an advert for FreeBSD. When you run freebsd-update (to update the base system), it creates a new ZFS boot environment (snapshot of the root filesystem). No matter how badly it goes wrong, you can always revert to the old one (as long as you can connect to the console). After the update, you delete the old one. If it's a major update, the new version will default to installing the ABI compat components, so you can update packages later (and do the same kind of snapshot of /usr/local if you want to be able to roll those back). Doesn't Ubuntu do something like this? I'd hate to do any kind of serious system maintenance without an undo button.
That's a really clever strategy and no, I'm not aware of any Linux distro that does something like that. Guix and Nix get to an even broader reversibility with a very different strategy: packages are managed like an immutable hash data structure in a functional language. The packages are the keys, and their files are left in place on disk; changing a package creates a new set of symlinks (pointers) into them. There's an eventual GC process for long-unreferenced data.
As an aside, this is so obviously the right way to do package and configuration management that I evaluate them ~yearly for use on Lobsters and personal projects, but the documentation/maturity aren't there yet and I've never seen a working install of a Rails app, as they have significant impedance with language-specific package managers/apps distributed as source.
This is a standard feature of all versions of openSUSE, and it is also a feature of Spiral Linux, Garuda Linux, and siduction.
Honestly, I have never had such problems running Debian, so maybe it's just an ad for not running Ubuntu on servers?
The root snapshot reminded me of transactional-update and ABRoot, which are (as best as I can tell, I haven’t used them) both snapshot-based strategies for applying updates.
I’m curious why this isn’t standard practice even for distributions that you do trust to upgrade versions correctly: spin up a new VPS with the new distribution, copy over (hopefully a small amount of) state, switch DNS to point to the new VPS. What are the arguments against that approach?
The argument in our case is that Ansible has been brittle. Some of that is a matter of investing more time in cold-start provisioning, but some of that has been Ansible making backwards-incompatible changes more often than we make any changes to our playbook. I never wanted to spend the (it turns out) ~5 hours getting that working.
Also, "switch DNS to the new VPS" has bitten us: the new one was assigned an IP that's on many spam blacklists, and I didn't think to check it and reroll until getting a clean one (this is apparently the best practice for DigitalOcean).
Yeah keeping up with ansible changes is tiresome. I found that it’s helpful to use a scratch server to debug the playbooks without any time pressure.
When I was doing email, we ended up making some effort to keep the IP addresses of our mail servers as stable as possible. Rather than updating the DNS we would just move the service IP address(es) from the old server to the new one - these were virtual service addresses, in addition to the per-instance addresses.
Dunno how easy it is to fling IP addresses around like that in a VPS provider: probably hard or impossible! In which case I might treat the scratch server as a practice dry run for an in-place upgrade of the live server.
On Linode, you can swap IP addresses between machines. On DigitalOcean, it looks like Reserved IP addresses fill a similar niche. It looks like that would be for a new IP address, not the current ones, though.
Yeah, I brought this up last night. I was planning to file an issue or something like that when the current crisis had passed so we can do an orderly transition to a new IP and not have as many problems next time.
I’m chewing on this one. I’m a bit irked that the product is basically paying the hosting provider to avoid dealing with problems caused by inadequate enforcement against bad behavior by other customers. The most uncharitable version of this is that the company is extorting me to avoid the expenses of acting responsibly. So my hobbyist sensibilities about the way things Ought To Be are clashing with my professional sensibilities about how cheap an improvement to prod uptime it’d be. Probably I’ll get over things and set it up soonish.
Reserved IPs are free when assigned to a droplet. They only bill you when you’re not using it.
Oh, I totally misunderstood the pricing and thought it was a flat amount per month. Thank you, I’ve added one to the web01 vps and I’ll transition DNS to it soon.
Interesting, thanks. So is it a case of live state that is difficult to copy to a new VM?
And thanks for the info on the IP address! I never realised that blacklisted IPs could be recycled to poor, innocent VM subscribers.
And I’ll add my vote of thanks for keeping Lobsters running!
Disadvantage: You temporarily double the resource usage, as both the old and the new server exist at the same time for a short duration. Another issue is DNS propagation which is not instantaneous.
Overall, it is still my preferred method because if anything goes wrong, I notice it early and can keep things running on the old server while I troubleshoot the new, upgraded one. For example, the outage described above wouldn’t have happened if this procedure was followed.
What sort of costs do we expect to incur in consequence? For example, Linode’s most expensive plan (dedicated 512 GB) is $7 per hour.
Would setting DNS TTL to 60 (seconds) sufficiently far in advance mitigate that disadvantage?
Considering the free nature of Lobsters, a $7/hr VM would quickly increase hosting costs, even during cutovers, if things don't go as planned.
Consumer ISPs often have their own idea of TTLs independent what the authoritative server tells them.
Lobsters' finances (my wallet) are fine with this kind of expense. Running Lobsters would be a cheap hobby at twice the price. Just wanted to post a reminder, as people occasionally express concern about hosting costs, and I prefer not to take donations.
The approach of running two VMs during cut-over doubles the cost only during the fraction of time that a whole-system upgrade is being made. In extreme cases that might be, what, 8 hours a year? That would add about 0.1% to the cost of the VM. I don't see that it would "quickly increase VM costs"!
Perhaps the correct approach is to use some sort of reverse proxy but if that requires a VM of its own it would definitely add a lot more cost.
That depends very much on the server/plan. But yes, in the cloud, it usually costs peanuts. But if you have a beefy on-premise server, you can’t just double your hardware on a whim.
I don’t have enough experience with DNS management to know for sure. All I know is that this stuff keeps being cached at every step of the way and it makes life difficult sometimes. Unfortunately, it isn’t just a switch you can flip and it’s done.
That’s crazy. I’ve used “apt-get dist-upgrade” as the sole means of updating my Debian machines for 19 or 20 years now. Granted they’re desktops, and 90% of the time I’m going from testing to testing, but still, other than an Nvidia driver fubar once, it always works great for me.
Hard to argue with that advice, though.
The advantage of routine rebuild-from-scratch is that you get to practise parts of your disaster recovery process. When something surprising like a driver fubar happens, it does not happen on a production machine and it does not make a planned outage take much longer than expected.
I have also had no problems with Debian upgrades long-term. I’ve had two long-running Debian servers: one on physical hardware, installed in 2001 and dist-upgraded until I retired the machine in 2010, and one on a VPS, installed in 2013 and still running. Been very impressed on how it all just works. In the early years (2001-2006 especially) an upgrade would often break X11 and require me to futz with XF86Config, but that was eventually sorted out with better auto-detection. The underlying OS never had issues though.
I just upgraded from Ubuntu 14.04 to 23.04 a month or so ago; I was pretty surprised how smoothly it went.
I'm about 50/50 for do-release-upgrade working on my home servers. Failures are usually due to some weird configuration I did (hello, years of changing network configuration systems) or a third-party package. I'm not mad about this; trying to support absolutely every possible Ubuntu system is impossible. But it is an issue for me.
I don't run any servers any more, which gives me considerable pleasure.
However my main laptop install of Ubuntu has been upgraded since 13.10 and still works fine. I got to the next LTS and through every LTS since.
2 machines now have 23.04 on as well.
I see a bunch of instances of unsafe in the codebase. Does it still count as memory safe?
You know, I was going to say that it's OK: it's a handful of unsafe blocks around FFI, it's comparatively easy to just eyeball them, and, unless you go full CHERI, unsafety has to live somewhere.
But also, one of those blocks is not FFI:
https://github.com/memorysafety/sudo-rs/blob/9a7f38fbddc59f40a5e0a57555131b36e09811e4/src/exec/mod.rs#L92-L97
And I think that's a bug and technically UB! CString::new allocates, and you can't allocate in pre-exec.
So kinda no, maybe?
But also maybe yes, or at least much better than the alternatives? This bug really jumps out at me; it's trivial to notice for someone unfamiliar with the code. pre_exec is tricky, and unsafe makes it really stick out, much better than if it were hidden in the guts of the standard library or some native extension.
(Also, obligatory PSA: the biggest problem with sudo is not memory unsafety per se, but the vast scope of this security-sensitive utility. For personal use, you can replace sudo with doas, and it'll probably make a bigger dent in insecurity. If you have to be API- and feature-compatible with sudo, though, then yes, what the Ferrous folks are doing makes the most sense to me.)
I often see doas recommended as simpler than sudo. When I compare the documentation I see an almost identical feature set. The only difference in security surface seems to be the configuration. Is there more that sudo does that doas does not do?
Generally, when I think of sudo features that make replacing it somewhat difficult, it's usually plugins and things like storing rules in LDAP.
That's also the funny thing: I don't actually know what sudo does. I know that OpenDoas clocks in at under 5k lines of code, while sudo is a lot more (see the nearby comment by andyc). So that's a non-constructive proof that it does something extra!
You’re allowed to allocate in pre-exec. The restrictions the documentation mentions mostly stem from other threads potentially holding locks or having been mid-modification of the env variables.
If you guarantee that there are no other threads running at the time of fork() (and meet the other pre-conditions, like reentrancy), you can in fact malloc without any big issues.
I think in the case of Rust it's actually quite murky technically:
First, Rust doesn't have assert_no_thread!() functionality, so, if you assume at pre_exec time that the program is single-threaded, the calling function should be marked as unsafe with the safety precondition of "must be single-threaded". Which mostly boils down to just "don't allocate" in practice.
Second, allocation in Rust calls #[global_allocator], which is arbitrary user code. As a general pattern, when you call arbitrary code from within unsafe blocks, there usually is some Rube Goldberg contraption which ends with a shotgun aimed at your feet. That is, if the user tags their own static as a global allocator, they can call various APIs on that static directly, and that should be enough to maneuver the thing into "technically UB" even without threads. In particular, you could smuggle something like https://github.com/rust-lang/rust/issues/39575#issuecomment-437658766 that way.
But yeah, this is quite subtle; there was some debate over whether before_exec needs to be unsafe at all, and I personally am not clear on what the safety contract of before_exec actually is, in terms of Rust APIs.
To be fair, pre_exec is an unsafe function for precisely this reason and it should stay that way. The API contract is largely unspecified because fork() does some amazing things to program state. I think the solution here would be to swap in a CStr over a CString, to avoid the allocation.
edit: One way we could avoid it is by introducing a new unsafe trait, Interruptable. It would have to be explicitly declared on structs. The trait would simply declare the struct and its functions reentrancy-safe. A function is automatically interruptible if all non-local data it uses is also interruptible. Then pre_exec could simply require the function to also be Interruptable.
It looks as if it's intended as an abstraction over different process-creation things, and fork and vfork do different amazing things to process state.
Fork creates a copy of the address space and file descriptor table, but that copy has only the current thread in it. If other threads are holding locks, you cannot acquire those locks without deadlocking. You need to register pre-fork hooks to drop them (typically, the pre-fork hook should acquire the locks in the prepare stage and then drop them in the child; it should also try to guarantee consistent state in both). It is unsafe to call malloc from a multithreaded program between fork and execve because it may deadlock with a not-copied thread.
Vfork creates a copy of the file descriptor table but does not create a copy of the address space. You can modify the file-descriptor table until you execve, and then you effectively longjmp back to the vfork call (I believe this is actually how Cygwin implements vfork). Because the code in the vfork context is just normal code, it is safe to call malloc there, but anything you don't free will leak.
This is the main reason that I prefer using vfork: I can use RAII in my setup code and, as long as I reach the end of the scope before calling execve, everything is fine.
Looking at the documentation, it only mentions the following:
Which doesn’t read like memory allocations are forbidden in that closure to me.
Ninja edit: To me, it reads like if, e.g., your program is single-threaded then you're fine.
Yes, that rule is mostly about multithreaded environments. And violations usually don't result in memory corruption but in deadlocks. The reason is that after forking, you only have the current thread; all the other ones are "frozen" in time. So if you forked while some other thread held a lock inside malloc, and then call malloc yourself, you can deadlock.
And also, glibc in particular has special code to make this work, so you can indeed malloc after fork there safely.
So IMO this is POSIX UB and therefore Rust UB by fiat, but very likely not a security issue in practice. It should be fixed, but it's not super alarming to me.
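The deadlock half of that is easy to demonstrate outside Rust as well. Here is a hedged little Python sketch of the same POSIX-level hazard (Unix-only; the lock merely stands in for the allocator's internal lock):

```python
# Sketch: after fork(), only the forking thread exists in the child, so a lock
# held by any other thread at fork time can never be released there.
import os
import threading
import time

lock = threading.Lock()       # stands in for malloc's internal lock

def hold_lock():
    with lock:
        time.sleep(60)        # simulates another thread busy inside the allocator

threading.Thread(target=hold_lock, daemon=True).start()
time.sleep(0.1)               # let the thread grab the lock first

pid = os.fork()
if pid == 0:
    # Child: the lock's owner was not copied, so this can never succeed.
    got_it = lock.acquire(timeout=2)
    print("child acquired the lock:", got_it)   # False -> a real malloc would hang
    os._exit(0)
else:
    os.waitpid(pid, 0)
```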
You definitely can replace sudo with doas… unless you run a RHEL or a clone. I have two Rocky machines and my Ansible does not coexist with the RHEL clones as compared to FreeBSD and my myriad of other operating systems.
I have effectively replaced sudo with doas in all situations except for those platforms.
unsafe does not necessarily mean the code is not memory safe, but that that code needs more scrutiny. I am curious why there aren't SAFETY comment blocks on each instance of unsafe to explain the invariants that make it safe or necessary.
Most of them look pretty clearly justified, although it'd still be helpful to have an argument for why they're correct. The unsafe blocks I looked at are all used to interact with unsafe APIs, which is sort of a given for this sort of program, but they're short and at first glance don't seem to do anything that would be hard to reason about.
This would mean that the whole titular claim, that this is the “first stable release of a memory safe sudo implementation”, is probably wrong. C code can also “have memory safety”, so the first stable release of a memory safe sudo implementation is probably (at least some version of) sudo itself.
Yes, but the memory safe version of sudo itself has no bugs leading to memory errors. I don’t know if that version has been written yet.
It might be my own failing, but I’m having trouble understanding your comment. That the “memory safe version of sudo itself has no bugs leading to memory errors” seems like a tautology.
If you mean we don’t know if there’s any such version, that’s technically true, and why I said “probably”. It seems pretty likely, though, and it’s certainly at least possible, bar any proof otherwise.
The point is that it doesn’t seem likely to me that sudo, as a reasonably large program in a language with few safety features, has no more such bugs. Sudo has had memory safety bugs in the past, and if I had to guess, I’d expect more to emerge at some point. You may be able to prove that there’s no memory-safe version simply by waiting for that to happen.
Of course, if there is such a version, finding it and proving it to be safe may be considerably more challenging. Which is important in itself: its safety wouldn’t have much value to us unless we knew about it.
That would show there is (was) a memory-error bug in some version(s), not all prior versions. And it’s all a bit hand-wavy; that we know that there were memory safety bugs in the past only shows that such bugs were found and fixed. From reading elsewhere sudo sounds like it’s more complex than I would’ve thought it should be, but it’s still not so big that it’s impossible that it is (or has been at some point) free of such bugs.
If you prefer, I can rephrase my point (making it slightly weaker): the linked article is claiming the “first stable release of a memory safe sudo implementation” has been made, referring to this Rust implementation, but there is no certainty that all prior implementations had memory-error bugs, so if we take “memory safe” to mean “has no memory-error bugs” then the claim is unproven and could be wrong. (It seems we can’t take “memory safe” to mean “written completely in a memory safe language”, since it apparently uses “unsafe”).
(As to how likely the claim is to be wrong, we may disagree, but there’s probably not much in the way of objective argument that we can make about it).
Even if you implement a program in a memory safe language you do not know for sure that it is memory safe.
You have a very strong reason to believe so, but there is always the possibility of a mistake in the type system design or the language implementation. In the years since its release, the Rust language has been studied with formal methods that mathematically prove memory safety for Rust programs, as well as specifically proving the correctness of various parts of the Rust standard library that make use of unsafe blocks. This helps give very strong confidence about memory safety, but again, there is the possibility of gaps.
I don’t disagree. (Or at least, I’m not arguing any of those points).
No.
What’s this “Supernova” the article keeps mentioning? Are they making some separate edition or is it just a marketing name for this version?
I believe that is the name they are calling the release since it includes a lot of UI changes.
What I find irritating - and apple mail does this too - is that the name of the sender is so small. In my head conversations are tied to people so I am more likely looking for the person first. I may be an outlier, but this strikes me as strange.
Nah I get what you are saying. I don’t know if it’s because of the era when I started using GUI mail clients, but these days I feel like I struggle to quickly see the information I want to see in many graphical mail clients.
In Thunderbird, I'd love to know of a way to move the expandable thread widget thingy to the far left; instead it seems to be attached to the Subject, which I don't want on the far left. I can add an icon that indicates the email is part of a thread, but the clickable UI element is still by the subject.
The unfortunate thing with these sorts of malware detectors is that they operate with an expected false-positive rate. The industry generally thinks that's fine, because it's better to hit more malware than to miss any.
That only works if the malware detector authors are receptive to feedback, though.
Basically every new release of Rust on Windows is detected as malware by one vendor or another.
I guess it probably doesn’t help that the binaries aren’t yet signed.
CrowdStrike Falcon on macOS and SentinelOne on Windows have cost me so much wasted time as an employee of companies that use them. Falcon routinely kills make, autoconf, etc. SentinelOne does the same thing when using msys2 or cygwin on Windows.
At least SentinelOne tells me; Falcon tries its best to leave zero useful information on the host. When processes start randomly being terminated, it takes a bit of effort to find out what the hell is actually happening. Often I realized it was Falcon only after some poor desktop security tech got assigned a ticket and reached out to me with a lot of confusion about some crazy complex command lines being sent through the various exec calls.
Because of the high frequency in which I encounter the issue, if something randomly fails in a way I don’t expect I immediately suspect Falcon.
My understanding is that signed binaries don’t help - if a binary is rarely run (because it’s just been released, or it’s just not a mainstream tool) there’s a good chance it’ll be detected as malicious no matter what.
I believe it depends on the kind of signature. Last time I checked, companies can buy more expensive certificates which are privileged insofar that the binaries don’t need to be run on a lot of machines to be considered safe.
And still miss a lot of malware. ;)
I wonder what would be the best way to find out what the real rate of false positives is.
I’ve worked at a company where the IT was so terrible that their actions bordered on sabotage. They caused more damage and outages than actual hackers would. The anti-virus would delete our toolchains and fill the disk with ominous files until there was no space left. Luckily, they left our Linux machines alone, so we put everything we cared about on Linux (without GUI) and hoped that their lack of know-how would prevent them from messing with those machines. It worked.
It would be nice to talk about the hardware implications of this setup. My assumption is that any class-compliant multi-I/O interface should work fine in FreeBSD, but I have never taken the time to look into it myself. I have done a lot of audio work in the Linux world in the past and it was always a hassle to get the right JACK + ALSA configuration so I could do multi-track recording.
I can't speak to FreeBSD, but on Linux a lot of work is put into the various generic (USB is what I'm mainly thinking of these days) drivers to implement workarounds for the never-ending list of buggy devices that don't actually comply with various specs. I learned the hard way when the audio interface I bought was so new that it wouldn't work at all until some things were fixed in the kernel driver.
I don’t do audio work for a living but lately I’ve had mostly good experiences with PipeWire. My setup is fairly esoteric in terms of odd sinks/sources spread across multiple machines, but overall it has worked well.
Admittedly I’ve not done any recording in the traditional sense, just using it as a giant goofy mixer for having all my computers use a single microphone and headset.
I had one of these. I got an 8 MiB version for Christmas and remember being a bit disappointed because I’d hoped to get a second one a few years later when they became cheap and pair it with SLI, but that was a bit of a waste with the 8 MiB ones. In the end, by the time they were cheap enough to get another one, so were other graphics cards that outperformed an SLI VooDoo2. My next card was an ATi All-in-Wonder, which had a Rage128 chipset and ran things in 1024x768 about as well as the VooDoo2 and also did TV input and hardware-accelerated MPEG-2 encoding (I was at university then and it let me use my huge 19” CRT monitor [bought dirt cheap at a computer surplus place] as a TV).
This was also the last model where 3dfx didn't make cards themselves; they just sold the chips to card manufacturers. There was a company (Obsidian?) that made a single card with two VooDoo2 chips on it in SLI mode. They had two-page adverts in the inside cover of computer magazines and I wanted one so much. They looked incredibly impressive but cost about as much as a complete computer.
Most games that used it used the proprietary GLide APIs, which were vaguely OpenGL-like. There were DOS and Windows drivers and most of the games had custom 3dfx-specific code. Some folks (not sure if this was 3dfx, Id, or a third party) wrote a 'mini-GL' driver that wrapped GLide in OpenGL calls. Quake 1 was a DOS game but the codebase was fairly modular. If I remember correctly, it was originally written on a NeXT workstation and an OpenGL renderer was added on some UNIX graphics workstation. This was merged with the WinQuake code (which ran, unsurprisingly, on Windows instead of DOS) to allow you to run Quake on a Windows NT workstation with an OpenGL accelerator. The mini-GL driver allowed the same code to run on Windows NT or Windows 95 with a 3dfx card. It implemented just the subset of OpenGL that GLQuake needed. You could drop it in your system32 folder and make it the default OpenGL implementation (replacing the API-complete software renderer), which had some very fun side effects: any OpenGL window would become full screen, including things like the tiny preview of the OpenGL screen savers that shipped with NT4.
The most interesting thing about this era, from an historical perspective, is how quickly 3dfx lost the crown. They were taken completely by surprise by the nVidia GeForce. The nVidia Riva TNT was fairly comparable to the 3dfx offerings of the same time but the GeForce added transform and lighting to the accelerated pipeline and completely blew the 3dfx cards away. The long lead times in hardware from design to shipping meant that 3dfx went from market leader to has-been in under a year.
Lines like that remind me how amazing it is to live in the future.
Yep, GLQuake also ran under Win NT 4, but the normal one didn’t. That’s the first time I remember thinking deeper about these driver architectures and issues.
I had a Diamond Voodoo 3D and then another Voodoo II, but I don’t remember the brand. Nice throwback when I found the unused SLI cable many years later in a box. The only time we tried SLI was at LAN parties where we had enough cards… Also I’m kinda sure I went from the Voodoo II directly to a GeForce II, but I’d have to do a reality check with release dates. The Voodoo II was certainly cool at the time, although the looping through wasn’t perfect, but I might be misremembering some issues when running 2d stuff on 1600x1200 on my 49kg 22” iiyama screen…
If I remember correctly, WinQuake and GLQuake used the same networking protocol, but DOS Quake used a different one. This led to some arguments among folks at LAN parties because DOS Quake had better frame rates than WinQuake for the non-3dfx owners, but GLQuake was much better for the others. Eventually, CPUs got fast enough that WinQuake was better because it could run at higher resolutions than DOS Quake.
It has been a few minutes, but if I remember correctly 3dfx released MiniGL to implement just enough OpenGL to allow the OpenGL version of Quake to run. I believe originally quake only supported software rendering and vquake supported Rendition accelerators.
Yep Quake was created on NeXT, the DOS version was compiled with djgpp and on Win95 could use a vxd to get IP networking going from DOS.
A semi related project I came across is this thing: https://github.com/kjliew/qemu-3dfx
I had that same ATI AIW card; it was pretty mind-blowing to me at the time that a computer could suddenly do so much and be the nexus of so many activities. I can't think of any novelty in PCs since that can compare. Outside of that, the smartphone with 3G or better connectivity was the next tectonic shift.
It was great for watching TV, but the utility of the MPEG encoding was problematic because of limitations of the AVI file format. I can't quite remember the details, but I think the sound and video tracks were each encoded with time stamps in ticks of different sizes, which meant that rounding errors would accumulate and, after about an hour, they'd be about half a second off for things recorded from TV. If you recorded a complete film then you'd end up with the second half being painful to watch.
I’ve had good luck using mailctl for wrangling credentials for oauth based imap/smtp with whatever365 and gmail.
Signal Desktop similarly stores its auth token in plaintext: https://www.bleepingcomputer.com/news/security/signal-desktop-leaves-message-decryption-key-in-plain-sight/
The response from Signal was:
Yikes. Disk encryption covers the “dude swiped my laptop” attack vector but not the malicious npm package (or whatever) attack vector. Isn’t this terrifyingly short-sighted of Signal?
What would you propose as a fix for the problem? Whatever you come up with: as long as the key is stored somewhere, it's available for malware to get it. Is it in the OS keychain? Inject code into the Signal binary and query for it. Is it on disk? Read it from there. Is it encrypted on disk? Take the key from the Signal binary and decrypt it. Is it in the memory of the Signal app? Take it from there.
Whatever you come up with can possibly be classified as "defense in depth", but there's nothing (short of having to manually authenticate whenever the app does any network request / access to stored data) that can be done to protect secrets in light of malware attacks.
I don't know about Windows and Linux, but on macOS keychain material is encrypted by the SEP, and access to the data requires user authentication and, if set correctly, requires it on a case-by-case basis.
By requires I mean it is not possible for any software at any privilege level to bypass it.
I understand that there doesn’t exist perfect security in the face of arbitrary malware but we have OS key stores for good reason.
If I told someone it would be extremely trivial to write a malicious npm package that stole all of their Signal messages, most people would be very surprised and some would perhaps be less likely to use Signal Desktop for very sensitive conversations. (There is no analogous attack on iOS, right?)
Welcome to the dumpster fire that is Electron
This isn’t related to Electron at all.
yeah, especially given electron provides what looks to be a fairly trivial api for secure storage.
There are lots of things that make Electron apps a bad experience, but this is not one of them.
While I agree that Electron isn’t to blame, I will say in my experience Electron apps for networked applications rarely seem to use a proper secure storage system.
For accessibility purposes I use a hacked together terminal Slack client for most of my Slack usage. Originally I followed the advice of most 3rd party Slack clients on how to get a better token to use with 3rd party clients, but realized why bother when I can just write something that constantly scrapes various IndexedDB, Session/Local Storage databases.
I have a script that finds my Slack workspaces' tokens, validates them, then shoves them into a secret store (org.freedesktop.secrets) and sends my Slack client a signal to reload the secrets from the secret store. I do run the client for audio calls frequently enough that my local creds stay refreshed.
I've lost track of how many networked Electron apps I've encountered where I've been able to abuse unencrypted local storage to gain API credentials for scripting purposes.
This seems to be a side effect of how many of these apps are fairly simple wrappers around their web versions; they don't do the due diligence of securing that data because they are used to browsers being in charge of protecting it.
Yeah, I’d agree here. But I feel that a lot of electron apps half ass pretty much anything that isn’t absolutely core to the app.
Yeah, many seem like low effort “we made an App!” that is just a multi-hundred meg wrapper around a web page, but without doing any of the work an actual browser does (even chrome) to protect user data and privacy
After months of weekends consisting of housework or work work, I'm finally taking a mental break and going to attempt to do something somewhat social. My mental health tends to spiral downward easily with isolation from friends and social circles, so I'm going to focus on breathing, living, and safely socializing.
Break a leg!
My wife and I adopted a dog, who we named Ahsoka Tano. I’ll be taking care of her along with our existing three-and-a-half year old dog, Vader. I gotta work Sunday to make up for hours missed yesterday since we took the day yesterday going through the adoption process.
OMG that dog is cute
Attempting to stay sane while dealing with the purchase process of a house in a stressful market.
Doing a lot of random “mark box with an X so auditors are happy” work. Meh
I don't celebrate American Thanksgiving, but I had the day off anyway. So I decided to check an item off my bucket list - write a driver. A writeup of what it took is in the README, and a binary build is in the releases. (The licensing is unclear, considering it's based on the DDK sample driver, but everyone based their drivers on the DDK samples; I'm just wondering how I make clear that my changes are under some kind of free license, even if the base is just under the royalty-free, non-exclusive whatever terms that sample drivers are under.)
I love this!
I saw El Reg picked this up this morning, congratulations!
Fascinating read. Audio was the thing that made me switch from Linux to FreeBSD around 2003. A bit before then, audio was provided by OSS, which was upstream in the kernel and maintained by a company that sold drivers that plugged into the framework. This didn’t make me super happy because those drivers were really expensive. My sound card cost about £20 and the driver cost £15. My machine had an on-board thing as well, so I ended up using that when I was running Linux.
A bit later, a new version of OSS came out, OSS 4, which was not released as open source. The Linux developers had a tantrum and decided to deprecate OSS and replace it with something completely new: ALSA. If your apps were rewritten to use ALSA they got new features, but if they used OSS (as everything did back then) they didn't. There was only one feature that really mattered from a user perspective: audio mixing. I wanted two applications to both be able to open the sound device and go 'beep'. I think ALSA on Linux exposed hardware channels for mixing if your card supported it (my on-board one didn't), while OSS didn't support it at all. I might be misremembering and ALSA supported software mixing, OSS only hardware mixing. Either way, only one OSS application could use the sound device at a time and very few things had been updated to use ALSA.
GNOME and KDE both worked around this by providing userspace sound mixing. These weren't great for latency (sound was written to a pipe, then at some point later the userspace sound daemon was scheduled and then did the mixing and wrote the output) but they were fine for going 'bing'. There was just one problem: I wanted to use Evolution (GNOME) for mail and Psi (KDE) for chat. Only one of the KDE and GNOME sound daemons could play sound at a time and they were incompatible. Oh, and XMMS didn't support ALSA, so if I played music then neither of them could do audio notifications.
Meanwhile, the FreeBSD team just forked the last BSD licensed OSS release and added support for OSS 4 and in-kernel low-latency sound mixing. On FreeBSD 4.x, device nodes were static so you had to configure the number of channels that it exposed but then you got /dev/dsp.0, /dev/dsp.1, and so on. I could configure XMMS and each of the GNOME and KDE sound daemons to use one of these, leaving the default /dev/dsp (a symlink to /dev/dsp.0, as I recall) for whatever ran in the foreground and wanted audio (typically BZFlag). When FreeBSD 5.0 rolled out, this manual configuration went away and you just opened /dev/dsp and got a new vchan. Nothing needed porting to use ALSA, GNOME’s sound daemon, KDE’s sound daemon, PulseAudio, or anything else: the OSS APIs just worked.
It was several years before audio became reliable on Linux again and it was really only after everything was, once again, rewritten for PulseAudio. Now it’s being rewritten for PipeWire. PipeWire does have some advantages, but there’s no reason that it can’t be used as a back end for the virtual_oss thing mentioned in this article, so software written with OSS could automatically support it, rather than requiring the constant churn of the Linux ecosystem. Software written against OSS 3 20 years ago will still work unmodified on FreeBSD and will have worked every year since it was written.
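As a rough illustration of how small the OSS programming model described a couple of paragraphs up really is (and why nothing ever needed porting), here's a sketch using Python's stdlib ossaudiodev module; the device path and parameters are assumptions, and the module itself was removed in Python 3.13:

```python
# Sketch: the whole OSS playback "API" is open /dev/dsp, set the format, write
# samples. On FreeBSD each open() transparently gets its own vchan, so several
# programs can do this at the same time.
import math
import struct
import ossaudiodev   # stdlib up to Python 3.12; removed in 3.13

dsp = ossaudiodev.open('/dev/dsp', 'w')
dsp.setparameters(ossaudiodev.AFMT_S16_LE, 1, 44100)   # 16-bit, mono, 44.1 kHz

# One second of a 440 Hz beep.
tone = b''.join(
    struct.pack('<h', int(0.3 * 32767 * math.sin(2 * math.pi * 440 * n / 44100)))
    for n in range(44100)
)
dsp.write(tone)
dsp.close()
```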
Luckily there’s no need for such a rewrite because pipewire has a PulseAudio API.
There was technically no need for a rewrite from ALSA to PulseAudio, either, because PulseAudio had an ALSA compat module.
But most applications got a PulseAudio plug-in anyway because the best that could be said about the compat module is that it made your computer continue to go beep – otherwise, it made everything worse.
I am slightly more hopeful for PipeWire, partly because (hopefully) some lessons have been drawn from PA’s disastrous roll-out, partly for reasons that I don’t quite know how to formulate without sounding like an ad-hominem attack (tl;dr some of the folks behind PipeWire really do know a thing or two about multimedia and let’s leave it at that). But bridging sound stacks is rarely a simple affair, and depending on how the two stacks are designed, some problems are simply not tractable.
One could also say that a lot of groundwork was done by PulseAudio, revealing bugs etc., so the landscape that PipeWire enters in 2021 is not the same one that PulseAudio entered in 2008. For starters, there's no aRts, ESD, etc. anymore; these are long dead and gone, and the only things that matter these days are the PulseAudio API and the JACK API.
I may be misremembering the timeline but as far as I remember it, aRts, ESD & friends were long dead, gone and buried by 2008, as alsa had been supporting proper (eh…) software mixing for several years by then. aRts itself stopped being developed around 2004 or so. It was definitely no longer present in KDE 4, which was launched in 2008, and while it still shipped with KDE 3, it didn’t really see much use outside KDE applications anyway. I don’t recall how things were in Gnome land, I think ESD was dropped around 2009, but pretty much everything had been ported to canberra long before then.
I, for one, don’t recall seeing either of them or using either of them after 2003, 2004 or so, but I did have some generic Intel on-board sound card, which was probably one of the first ones to get proper software mixing support on alsa, so perhaps my experience wasn’t representative.
I don’t know how many bugs PulseAudio revealed but the words “PulseAudio” and “bugs” are enough to make me stop consider going back to Linux for at least six months :-D. The way bug reports, and contributors in general, technical and non-technical alike were treated, is one of the reasons why PulseAudio’s reception was not very warm to say the least, and IMHO it’s one of the projects that kickstarted a very hostile and irresponsible attitude that prevails in many Linux-related open-source projects to this day.
That’s more like it on Linux. ALSA did software mixing, enabled by default, in a 2005 release. So it was a pain before then (you could enable it at least as early as 2004, but it didn’t start being easy until 1.0.9 in 2005)… but long before godawful PulseAudio was even minimally usable.
BSD did the right thing though, no doubt about that. Linux never learns its lesson. Now Wayland lololol.
Things got pretty hilarious when you inevitably mixed an OSS app (or maybe an ALSA app, by that time? It’s been a while for me, too…) and one that used, say, aRTs (KDE’s sound daemon).
What would happen is that the non-aRTs app would grab the sound device and clung to it very, very tight. The sound daemon couldn’t play anything for a while, but it kept queuing sounds. Like, say, Gaim alerts (anyone remember Gaim? I think it was still gAIM at that point, this was long before it was renamed to Pidgin).
Then you’d close the non-aRTs app, and the sound daemon would get access to the sound card again, and BAM! it would dump like five minutes of gAIM alerts and application error sounds onto it, and your computer would go bing, bing, bing, bang, bing until the queue was finally empty.
I’d forgotten about that. I remember this happening when people logged out of computers: they’d quit BZFlag (yes, that’s basically what people used computers for in 2002) and log out, aRTs would get access to the sound device and write as many of the notification beeps as it could to the DSP device before it responded to the signal to quit.
ICQ-inspired systems back then really liked notification beeps. Psi would make a noise both when you sent and when you received a message (we referred to IM as bing-bong because it would go ‘bing’ when you sent a message and ‘bong’ when you received one). If nothing was draining the queue, it could really fill up!
This is exactly what happens with PulseAudio to me today, provided the applications trying to play the sounds come from different users.
Back in 2006ish though, ALSA apps would mix sound, but OSS ones would queue, waiting to grab the device. I actually liked this a lot because I'd use an OSS play command-line program and just type up the names of files I want to play. It was an ad-hoc playlist in the shell!
This is just an example of what the BSDs get right in general. For example, there is no world in which FreeBSD would remove ifconfig and replace it with an all-new command just because the existing code doesn't have support for a couple of cool features - it gets patched or rewritten instead.
I'm not sure I'd say "get right" in a global sense, but definitely it's a matter of differing priorities. Having a stable user experience really isn't a goal for most Linux distros, so if avoiding user-facing churn is a priority, the BSDs are a good place to be.
I don't know; the older I get the more heavily I value minimizing churn and creating a system that can be intuitively "modeled" by the brain just from exposure, i.e. no surprises. If there are architectural reasons why something doesn't work (e.g. the git command line), I can get behind fixing it. But stuff that just works?
I guess we can't blame Lennart for breaking audio on Linux if it was already broken…
You must be new around here - we never let reality get in the way of blaming Lennart :-/
Same as with systemd, there were dozens of us for whom everything worked before. I mean, I mostly liked PulseAudio because it brought a few cool features, but I don't remember sound simply stopping working before. Sure, it was complicated to set up, but if you didn't change anything, it simply worked.
I don’t see this as blaming. Just stating the fact that if it works for some people, it’s not broken.
Well, can’t blame him personally, but the distros who pushed that PulseAudio trash? Absolutely yes they can be blamed. ALSA was fixed long before PA was, and like the parent post says, they could have just fixed OSS too and been done with that before ALSA!
But nah better to force everyone to constantly churn toward the next shiny thing.
Huh? I just set up ALSA recently and you very much had to specifically configure dmix, if that's what you're referring to. Here's the official docs on software mixing. It doesn't do anything as sophisticated as what PulseAudio does by default. Not to mention that on a given restart ALSA devices frequently change their device IDs. I have a little script on a Void Linux box that I used to run as a media PC which creates the asoundrc file based on outputs from lspci. I don't have any such issue with PulseAudio at all.
dmix has been enabled by default since 2005 in ALSA upstream. If it wasn't on your system, perhaps your distro changed things or something. The only ALSA config I've ever had to do is change the default device from the HDMI output to the analog speakers.
And yeah, it isn’t sophisticated. But I don’t care, it actually works, which is more than I can say about PulseAudio, which even to this day, has random lag and updates break the multi-user setup (which very much did not just work). I didn’t want PA but Firefox kinda forced my hand and I hate it. I should have just ditched Firefox.
Everyone tells me that PipeWire is better though, but I wish I could just go back to the default ALSA setup again.
Shrug, I guess in my experience PulseAudio has “just worked” for me since 2006 or so. I admit that the initial rollout was chaotic, but ever since it’s been fine. I’ve never had random lag and my multi-user setup has never had any problems. It’s been roughly 15 years, so almost half my life, since PulseAudio has given me issues, so at this point I largely consider it stable, boring software. I still find ALSA frustrating to configure to this day, and I’ve used ALSA for even longer. Going forward I don’t think I’ll ever try to use raw ALSA ever again.
I’m pretty sure calvin is tongue in cheek referencing that Lennart created PulseAudio as well as systemd.
I cannot upvote this comment more. The migration to ALSA was a mess, and the introduction of Gstreamer*, Pulse*, or *sound_daemon fractured the system even more. Things in BSD land stayed much simpler.
I was also 'forced' out of the Linux ecosystem because of the mess in the sound subsystem.
After spending some years in FreeBSD land I got hardware that was not supported by FreeBSD at that moment, so I tried Ubuntu … what a tragedy it was. When I was using FreeBSD my system ran for months and I rebooted only to install security updates or to upgrade. Everything just worked. Including sound. In Ubuntu land I needed to do a HARD RESET every 2-3 days because the sound would go dead and I could not find a way to reload/restart anything that caused that 'glitch'.
Details here:
https://vermaden.wordpress.com/2018/09/07/my-freebsd-story/
From time to time I try to run my DAW (Bitwig Studio) on Linux. A nice thing about using DAWs on Mac OS X is that they just find the audio and MIDI sources and you don't have to do a lot of setup. There's a MIDI router application you can use if you want to do something complex.
Using the DAW from Linux, if it connects via ALSA or PulseAudio, mostly just works, although it won’t find my audio interface from PulseAudio. But the recommended configuration is with JACK, and despite reading the manual a couple times and trying various recommended distributions, I just can’t seem to wrap my head around it.
I should try running Bitwig on FreeBSD via the Linux compatibility layer. It’s just a Java application after all.
Try updating to Pipewire if your distribution supports it already. Then you get systemwide Jack compatibility with no extra configuration/effort and it doesn’t matter much which interface the app uses. Then you can route anything the way you like (audio and MIDI) with even fewer restrictions than MacOS.
I’ll give that a try, thanks!
I’m the first to urge caution in upgrades, but without highlighting actual breaking changes this seems like fud.
Some of us hang out in forums where people literally start posting minutes after a Python release that they don’t understand why NumPy isn’t installing on the new version.
Waiting at least a little bit for the ecosystem to catch up is sound advice.
I don’t understand why you say that when the article was very clearly a meta-discussion of how to approach Python version upgrades. It is not asking users to hold off indefinitely; it is reacting to a new release’s availability and how that plays out with updates throughout the ecosystem.
A “product manager” for Python could take a lot away from how clearly the pain points were laid out. As a platform, it’s advantageous for Python to tackle a lot of the issues pointed out, but it’s hard because of the number of stakeholders for things like packages. Getting a Docker image out more quickly seems like low-hanging fruit, but delaying a few days could perhaps be intentional.
For what it is worth, the Docker image, like many very popular images on the official Docker registry, is in fact owned and maintained by the Docker community itself. I am unsure if it is really their duty to do that.
Many of the things listed in the article are indeed painful to deal with, but for some of them I’m not sure the PSF is really the right entity to have had them fixed on launch day.
edit: clarified that it is the Docker community that maintains it, not Docker the corporate entity.
Also, as the author suggested it could be, it’s fixed already:
They had to hit publish pretty quickly to release that complaint while it was still true.
Some of the concerns seem reasonable, for example the tooling catching up with the new pattern matching syntax blocks (match, case). If you use the popular Black code formatter, for example, it doesn’t yet handle pattern matching (and it looks like it’s going to be a bit of a job to update that).
It doesn’t sound like they’ve fixed the complaints articulated here.
That’s enough to make me prefer other options when they’re available to me.
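For reference, the match/case blocks mentioned a few comments up are the structural pattern matching syntax added in Python 3.10, which tools like Black had to learn to parse; a minimal example of the shape involved:

    # Python 3.10+ structural pattern matching -- the new syntax that formatters
    # and other tooling had to add support for.
    def describe(event: dict) -> str:
        match event:
            case {"type": "click", "x": int(x), "y": int(y)}:
                return f"click at ({x}, {y})"
            case {"type": "key", "key": str(key)}:
                return f"key press: {key}"
            case _:
                return "unknown event"

    print(describe({"type": "click", "x": 10, "y": 20}))  # click at (10, 20)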
No one likes breaking changes either. Damned if you do, damned if you don’t.
I agree. And to strain the reference: So dammit, I will use something else.
All JSON is valid YAML. Just use JSON.
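A quick illustration of that claim (modulo edge cases such as duplicate keys), using PyYAML, which parses JSON’s flow syntax directly:

    import json
    import yaml  # PyYAML

    doc = '{"name": "demo", "ports": [80, 443], "tls": true}'
    assert yaml.safe_load(doc) == json.loads(doc)  # same parsed result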
They don’t really target similar uses, though? I don’t love YAML, but JSON is not really meant for human authoring, especially when the document is large or repetitive.
TOML is really the best of both worlds IMO. Easy to read/write, hard to screw up.
I’d also say that there’s really no human-oriented serialization language that handles repetition well. YAML has its anchors and such, but that’s footgun city. HCL has for_each, which is good but also has a steep learning curve. Writing real code and dumping to something else is my preferred method if HCL isn’t an option.
I don’t mind YAML anchors so much, but if I really want this kind of feature I’m reaching for Dhall for sure
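For anyone who hasn’t run into them, the anchors being discussed look roughly like this; the sketch uses PyYAML and the common '<<' merge key, which is itself an optional YAML 1.1 feature rather than part of the core spec:

    import yaml  # PyYAML

    # Build the document as an explicit string so the indentation is unambiguous.
    doc = (
        "defaults: &defaults\n"
        "  retries: 3\n"
        "  timeout: 30\n"
        "prod:\n"
        "  <<: *defaults   # merge in the anchored mapping, then override below\n"
        "  timeout: 5\n"
    )
    print(yaml.safe_load(doc)["prod"])  # {'retries': 3, 'timeout': 5}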
All JSON is valid UCL as well. UCL actually supports all of the things I want from a human-friendly configuration language, such as well-defined semantics for loading multiple files with overrides (including deletion), so if you want to avoid complex parsing in your main application you can have a stand-alone unprivileged process that parses UCL and generates JSON.
Anything where I might want YAML, I’ve found either JSON or UCL a better choice. JSON is much simpler to parse and has good (small!) high-performance parsers, UCL is more human-friendly. YAML is somewhere between the two.
One small nit: the YAML spec disallows tabs for indentation, while JSON allows them as whitespace. In practice, I don’t know of any YAML parser implementations that will actually complain, though.
I haven’t used a YAML parser that allows tabs in YAML block syntax, but for tabs appearing inside the inline/JSON-style flow syntax they may be more lax.
This is the only good thing about YAML
I agree with several of the issues pointed out on that page and its sibling pages, but some of the headache experienced is a direct result of libyaml (and thus PyYAML). It still doesn’t properly support YAML 1.2, which defines booleans as true/false. That is still a nasty thing when you want to use the literal word true or false, but at least it avoids n, no, y, yes, and the various case differences. 1.2 only supports true and false.
libyaml also doesn’t generate errors on duplicate keys, which is incredibly frustrating as well.
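Both problems are easy to reproduce with PyYAML, which still follows the YAML 1.1-era resolution rules described above:

    import yaml  # PyYAML

    # 1.1-style boolean resolution: bare yes/no (and on/off) become booleans
    print(yaml.safe_load("norway: no"))      # {'norway': False}
    print(yaml.safe_load("answer: yes"))     # {'answer': True}

    # duplicate keys are accepted silently; the last one wins, no error raised
    print(yaml.safe_load("key: 1\nkey: 2"))  # {'key': 2}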
The criticisms of implicit typing, tagging, flows, the YAML-to-language datatype mapping, etc. are all spot on. They are prone to errors and, in the latter case, make it really easy to introduce security issues.
Are you sure? https://yaml.org/spec/1.2.2/#other-schemas seems to be saying that it’s fine to extend the rules for interpreting untagged nodes in arbitrary ways. That wouldn’t be part of the “core schema”, but then nothing in the spec says parsers have to implement the core schema.
Whoops. You are correct. The three described schemas are just recommendations. The lack of a required schema is frustrating.
The described recommended schemas define booleans as true and false, which is a change from what versions before 1.2 had.
I’ll say that I personally get frustrated quickly with yaml.
Any language that supports bare strings will have issues when you also have alphanumeric keywords.
An issue? Sure, but more a tradeoff than anything else.
The problem with the bare string interpretation, as I see it, is not so much the fact that it exists. It’s that it’s not part of the spec proper, but a set of recommended additional tags. What do you think 2001:40:0:0:0:0:0:1 is? Not the number 5603385600000001? The YAML spec, to the extent it has an opinion, actually agrees, but many YAML parsers will interpret it as 5603385600000001 by default, because they implement the optional int type, which accepts base-60 (sexagesimal) notation.
YAML 1.2 doesn’t recommend https://yaml.org/type/ any more, but it doesn’t disallow it, either. The best part of all this is that there are no strict rules about which types parsers should implement. If you use a bare string anywhere in a YAML document, even one that already has a well-understood meaning, the spec doesn’t guarantee that it will keep its meaning tomorrow.
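The IPv6-looking example above reproduces readily with PyYAML, whose default resolver still implements that optional base-60 integer form:

    import yaml  # PyYAML

    # colon-separated digit groups match the optional sexagesimal (base-60) int form
    print(yaml.safe_load("addr: 2001:40:0:0:0:0:0:1"))
    # {'addr': 5603385600000001}  -- not the string you probably meant

    # quoting forces the plain-string reading
    print(yaml.safe_load('addr: "2001:40:0:0:0:0:0:1"'))
    # {'addr': '2001:40:0:0:0:0:0:1'}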
Gonna open my mouth and remove all doubt… what’s the joke?
I think this was just mistagged as satire and have corrected it. Though it’ll be pretty embarrassing if it also went over my head, what with taking my handle from x86 assembly.
See the sibling. I’m all but certain it was intended as satire.
I would be very surprised if this weren’t a joke.
This could be tongue-in-cheek, but the bulk of the article looks serious.
Judging by the rest of the site it is not a joke.
At work, our team tends to be the one that inherits a lot of messed-up stuff and has to fix it. This week my partner in crime and I are replacing the deployment method for some services. Every time a new release is made, the current method requires touching 3-4 different git repos to get the new release deployed to production; to top it off, it is incredibly slow and fragile.
Is there no mode that would share the physical network port but tag all IPMI traffic with a VLAN you configure?
Many HPE servers have a dedicated network port for the iLO card but can also optionally share one of the regular network ports if needed. When in shared mode, you can indeed configure a VLAN tag for the management traffic, which can be different from the VLAN tag the host operating system normally uses.
Unfortunately, in the same way that chris explained that a compromised host might be able to switch the device’s IPMI mode from dedicated to shared, using a VLAN for segregation has a similar problem. If the compromised host adds a sub-interface with the tagged VLAN to its networking stack, it can gain network access to the entire IPMI VLAN.
In addition, there are other annoyances with using a shared interface. Because the OS has control of the NIC, it can reset the PHY. If the PHY is interrupted while, for example, you’re connected over Serial over LAN or a virtual KVM, you lose access. If you’re lucky, that’s temporary. If you’re really unlucky, the OS can continually reset the PHY, making IPMI access unusable. A malicious actor could abuse this to lock someone out of remote management.
That can’t happen when you use a dedicated interface for IPMI (other than explicit IPMI commands sent over /dev/ipmi0). Generally switching a BMC from dedicated mode to shared mode requires a BIOS/UEFI configuration change and a server reset.
(Speaking from experience with shared mode and the OS resetting the NIC. The malicious actor is merely a scenario I just dreamt up.)
Indeed, although I suspect in many cases these IPMI modules are already accessible from the compromised host over SMBus/SMIC or direct serial interfaces anyway - possibly even with more privileged access than over the network. That’s how iLOs and DRACs can have their network and user/group settings configured from the operating system.
The increased risk mostly isn’t to the compromised host’s own IPMI; as you note, that’s more or less under the control of the attacker once they compromise the host (although network access might allow password extraction attacks and so on). The big risk is to all of the other IPMIs on the IPMI VLAN, which would let an attacker compromise their hosts in turn. Even if an attacker doesn’t compromise the hosts, network access to an IPMI often allows all sorts of things you won’t like, such as discovering your IPMI management passwords and accounts (which are probably common across your fleet).
(I’m the author of the linked to article.)
The L2 feature you are looking for is called a protected port. This should be available on any managed switch, but I’ll link to the Cisco documentation:
https://www.cisco.com/en/US/docs/switches/lan/catalyst3850/software/release/3.2_0_se/multibook/configuration_guide/b_consolidated_config_guide_3850_chapter_011101.html
In a previous life at a large hosting company, we used this feature on switch ports that were connected to servers for the purposes of our managed backup services.