I’m interested in the pretty impressive performance delta – I wouldn’t have thought that Zen could outperform Broadwell quite so handily!
Me too! I’ll be completely honest: I have no idea what factors contributed here. Maybe things like no NUMA? a bit more cache? Something with Spectre / Meltdown? No idea – not my forte – but I am sure delighted by it.
EPYC is way more NUMA than Intel equivalents. EPYC has four dies on one package, and each die is a NUMA domain.
But Meltdown mitigations are indeed usually only turned on for Intel! :)
Rust meanwhile notes that you can’t safely write a performant data structure in Rust, so they urge you not to do that.
The interesting thing to me is the linked FAQ (https://www.rust-lang.org/en-US/faq.html#can-i-implement-linked-lists-in-rust) literally doesn’t say that.
It says:
I wonder if this was an oversight or misunderstanding?
As a follow-up, in the conclusion you say:
I think that in practice they may not be making real life shipped code a lot more secure - also because not that much actual Rust code is shipping.
While just one of the undoubtedly many examples which could be brought up, I hadn’t realized the Quantum CSS engine from Firefox was so short! More seriously, the achievements in Firefox are remarkable and inspiring, and represent a large amount of code shipping to real users, used every day.
One thing I like very much about the borrow checker is it took memory access problems and turned them into a generic resource safety problem. Using the simple primitives available I’m able to easily encode usage requirements, limitations, and state changes through borrows and Drops and have them checked at compile time. This is really powerful, and I very much appreciate it.
For whatever it is worth, I’m a rubbish C dev – not to be trusted to write a single line – who has found Rust to be a comfortable and pleasant experience in only a few weeks of free-time practice.
Hi - I worded this incorrectly. What I meant to say was that the FAQ says performance will disappoint unless you go into unsafe mode. “For example, a doubly-linked list requires that there be two mutable references to each node, but this violates Rust’s mutable reference aliasing rules. You can solve this using Weak, but the performance will be poorer than you likely want. With unsafe code you can bypass the mutable reference aliasing rule restriction, but must manually verify that your code introduces no memory safety violations.”. I’ve updated the wording a bit. Apologies for the confusion.
It is improved, but they don’t urge you not to do it. Still, unsafe Rust is safer than C.
It definitely is, in the context of the bigger picture. The default in C for achieving Rust’s level of safety is separation logic with tools like VCC: hard to learn, slow to develop (2 LOC/day at one point), and the solvers are likely slower than compiler checks. Rust brought that level of safety to most apps using a simplified model and checker. Both the checker and the resulting code usually perform well. The quote indicates it can’t handle some optimized, low-level data structures. Those atypical cases will need verification with an external tool or method.
In light of that, Rust gets safer code faster than C in most verification cases but maybe equivalent labor in some. Rust still seems objectively better than C on being safe, efficient, and productively so.
there are languages in which linked lists are primitives or maybe even invisible. But if you are going to specify a language for writing programs that include linked lists, you should not have to use pragmas. This is a characteristic computer science trope: “we have a rigorous specification of this which cannot ever be violated unless you push the magic button”. It’s a way of acting as if you have solved a problem that you have simply hidden.
Glad you like it :) All the .nix credit goes to nmattia, but let me know in the issue tracker if you run into trouble. Afaik it only works with the unstable channel (and we didn’t pin the nixpkgs version yet).
Nice upgrades to 2TB SSDs.
Personally I find it funny how OpenGrok is so bloated that it still has to run on the spindles, chugging along next to the backups — even a 512GB SSD won’t cut it when you’re dealing with Enterprise-level software written in Java. :-)
I wonder if they’ve examined Hound – https://github.com/etsy/hound – I’ve found it to be much more performant when compared to OpenGrok, while still providing excellent results.
Is the only difference between Guix and Nix the language? I know Nix is more mature and has a bigger community with more packages, but I don’t see any user-facing changes between the two.
I guess there are many differences? One important one is the license. The FSF prefers Guix and GuixSD over Nix and NixOS.
Guix is or was based on the Nix daemon and essentially just a fork, substituting the Nix language for scheme, plus the requirements for packaging. This was several years ago now, it may have diverged further.
I wonder if the Firefox build team has considered exploring Nix for allowing the builders to be internet-free, but without bundling dependencies in the repo.
Does Nix work on Windows? The Firefox build team must produce Windows binaries; in fact, those are the most important builds in terms of users.
Way to go Domen! I completely agree, the Nix ecosystem needs tools like Cachix to support Nix in production and at small companies. I’m delighted to see this released, and look forward to giving it a try this weekend!
It’s in the readme. The linked document is meant as an addendum. I’ll think about it.
Update: Added a preface.
It’s interesting that the company behind it is CZ.NIC, the owner/operator of the .cz domain name!
The problem with English, of course, is the difficulty in properly parsing and converting it in to a syntax tree that everyone can agree with:
Interesting! Did you consider using expect to implement this? I’ve seen some pretty wild implementations using expect!
expect is a great program for driving interactive programs however you choose, check this out.
Here is test.expect:
spawn bash
set timeout 1

send "echo input1 | rev\n"
expect {
  "1tupni" {
    puts "Got 1tupni!"
  }
  timeout {
    puts "didn't get 1tupni soon enough..."
    exit 1
  }
}

send "echo input2 | rev\n"
expect {
  "2tupni" {
    puts "Got 2tupni!"
  }
  timeout {
    puts "didn't get 2tupni soon enough..."
    exit 1
  }
}

# Note I used `input3` here but look for `input4`
send "echo input3 | rev\n"
expect {
  "4tupni" {
    puts "Got 4tupni!"
  }
  timeout {
    puts "didn't get 4tupni soon enough..."
    exit 1
  }
}

exit 0
And running it:
Morbo> expect ./test.expect
spawn bash
echo input1 | rev
[grahamc@Morbo:~/projects/student-programs]$ echo input1 | rev
1tupni
Got 1tupni!
echo input2 | rev
[grahamc@Morbo:~/projects/student-programs]$ echo input2 | rev
2tupni
Got 2tupni!
echo input3 | rev
[grahamc@Morbo:~/projects/student-programs]$ echo input3 | rev
3tupni
[grahamc@Morbo:~/projects/student-programs]$ didn't get 4tupni soon enough...
Morbo> echo $?
1
Something I hope to be covered is text reflowing, where you can resize your terminal and have the text flow to the new size. I’ve found it difficult to find a minimal terminal like Terminator which also supports this feature.
Something I can’t ever shake the feeling of, is that iTerm2 for macOS is the best terminal emulator, and consistently innovates and pushes the boundaries on what a terminal emulator can do … but without feeling bloated.
FYI, terminator isn’t really “minimal.” In interface, sure, but it uses the heavy/featureful vte, notably used in gnome-terminal.
[Comment removed by author]
I haven’t received any reports of users running NixOS, but typically folks would only reach out to me if they were having a problem. You can certainly boot up a live rescue and run an install over the serial console. Depending on the distribution this either ‘just works’ or requires that it be told the console is on the serial port.
NixOS ISOs from their website do not enable the serial console by default, but building a custom ISO which does is easy enough. I did so a few days ago on Debian, using nix to create a NixOS installer for my APU2:
git clone --branch 18.03 --depth=1 https://github.com/NixOS/nixpkgs.git nixpkgs
cat > serial-iso.nix <<EOF
{config, pkgs, ...}:
{
  imports = [
    <nixpkgs/nixos/modules/installer/cd-dvd/installation-cd-minimal.nix>
    <nixpkgs/nixos/modules/installer/cd-dvd/channel.nix>
  ];
  boot.kernelParams = [ "console=ttyS0,115200n8" ];
}
EOF
nix-build -A config.system.build.isoImage -I nixos-config=serial-iso.nix nixpkgs/nixos/default.nix
I actually tried a few months ago, but gave up because I thought I had figured out it was impossible. Although, seeing the link that @alynpost just posted, I might give it another go when I have some free time.
Some years ago, a friend taught me a simple trick which I used twice to install OpenBSD at providers where neither OpenBSD nor custom ISOs were directly supported: we would build or download a statically linked build of Qemu, boot the VPS into its rescue image, and start Qemu with the actual hard disk of the VPS as the disk and an ISO to boot from. That’s not too hard and works for pretty much everything where you’ve got a rescue system with internet access. I guess it should work for NixOS too, and maybe nix could even be used for the qemu build ;)
If you want to give it a go and get stuck write support@prgmr.com and we’ll help you debug.
If you want to impress me, set up a system at your company that will reimage a box within 48 hours of someone logging in as root and/or doing something privileged with sudo (or its local equivalent). If you can do that and make it stick, it will keep randos from leaving experiments on boxes which persist for months (or years…) and make things unnecessarily interesting for others down the road.
Man, yes. At a previous company I set up the whole company on immutable deployments. Part of this was you could still log in and change stuff, but doing so marked the box as “tainted” and it would be terminated and replaced after 24hrs. This let you log in, fix a breakage and go back to bed … but made sure that “port it back to the config management tool” was the #1 task for the morning.
A second policy was no machine existed for more than 90 days.
These two policies instilled in us a hard-lined attitude of “if it isn’t managed, it isn’t real” and was resoundingly successful in pushing us to solid deployment mechanisms which worked and survived instances being replaced regularly.
I can’t recommend this approach enough. Thank you Rachel, for writing about this.
A second policy was no machine existed for more than 90 days.
I’m curious how you managed the stateful machines (assuming you had some). I’m a DBA, and, well, I often find myself pointing out to our sysads that stateful stuff is just harder to manage (and maintain uptime) than stateless stuff. Did you just exercise the failover mechanism automatically? How did that work downstream?
Great catch! Our MySQL database cluster was excluded from the rule because of the inherent challenges of making that work; however, our caching and ElasticSearch clusters were not. Caching because it is a cache, ElasticSearch because its replication and failure handling are batteries-included. Note this was with a modest amount of data; if our data grew to $lots we would likely stop giving ES the same treatment.
We worked hard to architect our systems in such a way that data was not on random machines, but in very specific places.
Ah, good, okay. That makes more sense.
Currently we’re in a private cloud, so nothing’s batteries-included. Plus we’re using a virtual storage system in a way that would make traditional replica/failover structures too expensive. The result is our production DB VMs go for a very long time between reboots, let alone rebuilds.
I agree, though, that isolation is a great way to limit that impact. Combine that with some decent data-purpose division (e.g. move the sessions out of the DB into a redis store that can be rebuilt, move the reporting data to a separate DB so we can flip between replicas during reboots, etc), and you can really cut down on the SPOFs.
I use NixOS as my main OS at home and I think it’s the best thing I’ve done for having a stable system. I was mucking around with some boot deps and messed something up and all I had to do to get back to a working system was choose one option up at the grub menu and I booted into my system as it was before I had made the change.
However there’s a few things that I wish were better:
You pretty much need to put your nixos config into version control. While you can revert to a previous version of your system, doing so doesn’t actually restore the previous version of your config; you need to revert that manually before making any changes.
While versions of things are tracked explicitly and you can have multiple versions installed, nixpkgs generally doesn’t have multiple versions available to install/depend on (with the obvious exceptions of big things like py2/3). This means if you need a newer version of something and want to contribute back, you have to update everything else that depends on your package’s (ie derivation’s) dependencies. That’s a pain. It also means that you can’t install old versions of things alongside new versions of things.
There’s also a lot to learn if you need something that’s not packaged already, because there’s no way to run binaries not built explicitly for nixos. There’s not even any way to run Flatpaks, snaps, or any of the others, though looking at nixpkgs there are people working on trying to make those work.
All that said, it’s still a better experience than any other distro I’ve used in the past. And I’ve never even tried to contribute packages to any previous distro, so I’m not sure if it’s easier this way, but it’s a hell of a lot less intimidating for sure.
Also, I’m by no means an expert, take what I’ve said with a grain of salt, I’m sure there’s bound to be at least one thing I’ve said above that’s wrong just due to my inexperience.
And again, that’s mostly about NixOS, and not just Nix. I’m actually in the process of moving all the things I’ve installed via homebrew on my work laptop over to Nix after homebrew broke my system (twice) yet again when they mucked with the python 2/3 naming. I’m tired of dealing with it and have yet to have a serious issue with Nix on OSX. So, I can wholeheartedly suggest to everyone here to start playing around with Nix on an existing Linux or OSX system.
It also means that you can’t install old versions of things alongside new versions of things.
Nothing prevents you from using different revisions of nixpkgs in different places, which would allow you to achieve this.
There’s also a lot to learn if you need something that’s not packaged already because there’s no way to run binaries not built explicitly for nixos.
This is not true, Nix has a buildFHSUserEnv function that creates a linux chroot where you can pretend you’re running a regular linux distro. @puffnfresh has a good post on using this here.
Nothing prevents you from using different revisions of nixpkgs in different places, which would allow you to achieve this.
Huh, I can’t believe I never thought of that. I’ve even installed things from a local “fork” of nixpkgs and it never occurred to me that’s exactly what I was doing.
This is not true, Nix has a buildFHSUserEnv function that creates a linux chroot where you can pretend you’re running a regular linux distro. @puffnfresh has a good post on using this here.
That’s true. I guess I was inexact in what I wrote. You might not need to “package” something (as in contributing it to nixpkgs), but you still need to know enough about how things are “packaged” (as in writing any kind of derivation) so you can write a .nix file that wraps it in something that allows it to work. I really wish there was a pretend-to-not-be-nix ./rando-bin command that would handle 99% of binaries for when I just need to get something done. (Although I realize that’s asking a heck of a lot.)
Edit: Huh. I think that’s what you just linked. I should have read that all the way through before replying. Man, I’ve been looking for something like that for ages. I should complain about things on the internet more often.
At work we use a few different versions of nixpkgs. We want an old version of Docker, for example. So we import the exact commit of nixpkgs we want.
pretend-to-not-be-nix ./rando-bin
Is exactly what you get when using buildFHSUserEnv.
I really wish there was a pretend-to-not-be-nix ./rando-bin command that would handle 99% of binaries for when I just need to get something done.
I think a lot of people find steam-run provides this in most cases!
This is great, but I’m not sure what problem they are addressing. My main problem with VPN services isn’t that I’d have to trust their software, because I’m not the only one running it. I have to trust their networks, their operators, their everything.
This might be an unpopular opinion, but I think I’m better off with HTTPS Everywhere (and Tor, when I want to be really anonymous).
and Tor, when I want to be really anonymous
Of course that isn’t even a very good option unless you have extraordinary opsec hygiene.
I’d say it’s relatively easy, depending on who you want to be anonymous to.
But for a more general audience, I recommend checking the Tor documentation about the protection they provide. They also have great illustrations of how and where to expect privacy from whom. Also, use the Tor Browser Bundle. Other browsers will betray you :)
I think a lot of their customers just don’t want to receive rude letters in the mail from their ISPs. I can attest that this service prevents such letters. …Assuming you remember to turn the VPN on, or use a VM/dedicated machine that always/only has it on.
Mmmmh, an anonymous domain registration, an unknown “CTS” security research firm publishing only one whitepaper for all vulnerabilities. Whitepaper published on a secondary website “safefirmware.com”, that is otherwise broken.
No exploit has been published, there is no peer review, no responsible disclosure to verify the findings.
This smells like FUD. The SP is probably broken and vulnerable, yes. But this crap seems only aimed at selling security services.
My phrasing was a bit misleading, but the whole “exploit being published, peer review, responsible disclosure” was what I was getting at to verify the findings. These publications have to be transparent, reproducible and verified by third parties to be taken seriously.
No exploit has been published, there is no peer review, no responsible disclosure to verify the findings.
This is bullshit. Here’s peer review.
I’m astounded at just how strong the backlash against this is, and the backlash reeks of damage control propaganda.
AMD PSP is a hardware backdoor. Intel ME is a hardware backdoor. These things shouldn’t exist in the first place, and I wouldn’t put it past AMD and Intel to spend $$ sending armies of trolls trying to cover up the severity of what they’ve done.
Of course AMD PSP shouldn’t exist in the first place.
But the backlash against this is simply due to “it” being a ridiculous hit-job. I don’t care about damage to AMD.
This is bullshit. Here’s peer review.
Nice, they did not link it on their website. My first guess will always be that there is none unless shown otherwise.
Seems to be the consensus about this site on Reddit, HN, etc. Someone’s either trying to make a name for themselves or Intel paid someone who paid someone who paid someone who is good at marketing.
[Comment removed by author]
Can anyone show me a laptop that doesn’t lose to a macbook in any of these categories?
Personally I like the Dell XPS 13 and 15. The 4K screens are really amazing to see in person. You can configure with an i7 processor, optional fingerprint reader, fast SSDs up to 1TB, up to 32GB RAM, touch/non-touch display options, up to 97Wh battery in the ~4.5lb model or 56Wh in the 4lb if you want to go lighter (benchmarks). For ports, it has an SD card slot, 2 USB-A 3.0 with PowerShare, 1 HDMI, and a Thunderbolt 3 (including power in/out).
I feel they compete in several of the categories and are worth checking out in person somewhere (Frys, etc) if you’re in the market. Just earlier today someone posted a link to this guy’s experience spending a year away from MacOS and he winds up with an XPS 15, which he mostly likes.
I went from a 2011 macbook pro 15” to a thinkpad 460p running kubuntu; it’s not as polished as the macbook but it beats it on performance & price for me. Form factor, I should’ve got a 15” again, but that’s my choice. Fit & finish on the macbook is better, but then I can easily remove my battery and get to all the internals of the laptop, so I prefer the thinkpad.
I can try, though I am not sure what “fit and finish” means or how to measure it.
Ignoring that, I would offer up both the Dell XPS 13 or Lenovo X1 Carbon.
There are reasons to pick one over the other, but for me it was the X1 Carbon for having matte screen.
Fit and finish covers build quality and aesthetics. According to this page it’s an automotive term.
The new Huawei Matebook X?
How about the ASUS ZenBook Pro? I don’t have experience with it, but superficially it’s got very similar form factor and design to a MacBook. Aluminum uni-body and all. And being not-Apple, you obviously get better performance for the price.
Thinkpad P71. Well, except for the form factor (I’d rather get stronger arms than have to compromise on other factors), it beats the Macbook Pro on all fronts.
I’ve run Linux on a Macbook because my employer wouldn’t give me anything else. Reason was: effort of IT team vs my effort of running Linux.
But pretty sure my effort was extensive compared to what their effort would have been :)
[Comment removed by author]
Yeah, but then you’re stuck with the clunky old macOS rather than a nice modern UI like StumpWM, dwm or i3.
16:10 screen, wide-gamut display, correct ppi (X1C is too low, and the high-res Dells too high).
The last ThinkPad (of which I have many) to have a 16:10 screen was T410, which is now 8 years old.
Personally, there’s no other modern laptop I’d rather use, regardless of operating system. To me nothing is more important than a good and proper screen.
If anybody comes up with a laptop that has a 4:3 screen, I’ll reconsider.
Doesn’t the pixelbook have a nice tall aspect ratio? Ignoring linux compatibility and the fact that it’s a chromebook, I feel like you’d like the hardware.
It does, but tragically it’s ruined by a glossy finish on the screen. I bought one for the aspect ratio and brightness but almost threw it out the window several times in frustration before giving it away.
I don’t think many people buy new Apple hardware with the intention of immediately wiping it and installing Linux.
My MBP, for example, is running OSX because I need it (or Windows) to use Capture One photo software. When I upgrade to a new machine I’m going to put Linux on the old one and use it for everything else. I did the same thing with my iMac years ago.
I personally still think the build quality of Apple laptops are better than the alternatives. The trackpad in my old MBP, for example, still feels better than the trackpads I’ve used on newer machines from other brands. The performance and specs are less important to me as long as it’s “fast enough” and the build is solid.
All that said, I’m not buying any more Apple products because their software quality has completely gone down the toilet the last few years.
In this case I didn’t really have a choice. I had tried asking for a PC before I started this job, but they tried to get me in really fast and provisioned a Mac without even asking me. My boss made up some bullshit about how you have to have them for developers’ laptops, as the PCs the company bought didn’t have the specs (16GB of ram and such). I’m really glad I got Linux booting on it and don’t have to use it in VMWare (which does limit your max ram to 12GB and doesn’t give you access to the logical HT cores).
But yeah, if it was my personal laptop, I wouldn’t even bother buying a mac to begin with. My recent HP had everything supported on it with the latest Ubuntu, or on Gentoo with a stock kernel tree, right out of the box.
I got given a macbook, so I had no choice what laptop to use; I installed linux on it and it works well enough.
Bad idea, it should error or give NaN.
It’s not mathematically sound.
a/b = c should be equivalent to a = c*b
this fails with 1/0 = 0 because 1 is not equal to 0*0.
Edit: I was wrong, it is mathematically sound. You can define x/0 = f(x) any function of x at all. All the field axioms still hold because they all have preconditions that ensure you never look at the result of division by zero.
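Spelled out, the reason this works is that the field axiom for multiplicative inverses only constrains division where the divisor is nonzero:

```latex
% Multiplicative-inverse axiom: note the a \neq 0 precondition.
\forall a \neq 0,\ \exists\, a^{-1} \text{ such that } a \cdot a^{-1} = 1
% Division is defined from it:
a / b := a \cdot b^{-1} \quad (b \neq 0)
% Extending this with a / 0 := f(a) therefore contradicts no axiom,
% because no axiom ever inspects the value of a / 0.
```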
There is a subtlety because some people say (X) and others say (Y)
(X) a/b = c should be equivalent to a = c*b when the LHS is well defined
(Y) a/b = c should be equivalent to a = c*b when b is nonzero
If you have (X) definition in mind it becomes unsound, if you are more formal and use definition (Y) then it stays sound.
It seems like a very bad idea to make division well defined but the expected algebra rules not apply to it. This is the whole reason we leave it undefined or make it an error. There isn’t any value you can give it that makes algebra work with it.
It will not help programmers to have their programs continue on unaware of a mistake, working with corrupt values.
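To make the contrast concrete, here's how a language can surface the partiality instead of defining it away. In Rust (my example, not Pony), plain integer division by zero traps, and the checked form returns an Option so the caller must decide; Pony's behavior corresponds to defaulting that Option to 0:

```rust
fn main() {
    // Plain `1 / 0` on integers panics at runtime (and is rejected at
    // compile time for constants), so the mistake can't silently propagate.
    // checked_div makes the partial case explicit in the type.
    assert_eq!(10i32.checked_div(2), Some(5));
    assert_eq!(1i32.checked_div(0), None);

    // Pony's `1/0 == 0` choice corresponds to unwrap_or(0):
    let pony_style = 1i32.checked_div(0).unwrap_or(0);
    assert_eq!(pony_style, 0);
    println!("ok");
}
```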
I really appreciate your follow-up about you being wrong. It is rare to see, and I commend you for it. Thank you.
This is explicitly addressed in the post. Do you have any objections to the definition given in the post?
I cover that exact objection in the post.
That was my initial reaction too. But I don’t think Pony’s intended use case is numerical analysis; it’s for highly parallel low-latency systems, where there are other (bigger?) concerns to address. They wanted to have no runtime exceptions, so this is part of that design tradeoff. Anyway, nothing prevents the programmer from checking for zero denominators and handling them as needed. If you squint a little, it’s perhaps not that different from the various conventions on truthy/falsey values that exist in most languages, and we’ve managed to accommodate to those.
Those truthy/falsey values are a frequent source of errors.
I may be biased in my dislike of this “feature”, because I cannot recall a time when 1/0 = 0 would have been useful in my work, but I have no difficulty whatsoever thinking of cases where truthy/falsey caused problems.
1/0 is integer math. NaN is available for floating point math not integer math.
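Right, and a quick demonstration of that split (in Rust, my own example): floats follow IEEE 754, while integers have no NaN to fall back on:

```rust
fn main() {
    // Floating point: division by zero is well defined by IEEE 754.
    assert_eq!(1.0f64 / 0.0, f64::INFINITY);
    assert!((0.0f64 / 0.0).is_nan()); // NaN != NaN, so test with is_nan()

    // Integers have no NaN or infinity, so a language must either
    // error/trap, make the operation partial, or pick a value (Pony picks 0).
    assert_eq!(1i32.checked_div(0), None);
    println!("ok");
}
```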
I wonder if someone making a linear math library for Pony already faced this. There are many operations that might divide by zero, and you will want to let the user know if they divided by zero.
It’s easy for a Pony user to create their own integer division operation that will be partial. Additionally, a “partial division for integers” operator has been in the works for a while and will land soon. It’s part of a set of operators that will also error if you have integer overflow or underflow. Those will be +?, /?, *?, and -?.

https://playground.ponylang.org/?gist=834f46a58244e981473c0677643c52ff