Topics are usually at the intersection of code and ops. I go over the list every few months and aggressively prune blog posts that I released prematurely and dislike, or that didn’t have enough content to be worth displaying.
Thanks - it’s this Hugo theme (with some minor modifications).
This is really interesting for getting an idea of how people are taking advantage of BSD! I now have a much clearer idea of why people are moving to it (and am a bit tempted myself). That feeling of having to go through ports, and simply not having 1st-class support for some software, seems… rough for desktop usage though.
Define “1st class support”.
https://people.canonical.com/~ubuntu-security/cve/universe.html
I mean “someone talks to me about an application and I’m interested in trying it out on my system”?
I feel like the link to the CVE database is a bit of an unwarranted snipe here. I’m not talking too much about security updates, just “someone released some software and didn’t bother to confirm BSD support so now I’m going to need to figure out which ways this software will not work”.
To be honest I don’t really think that having all userland software come in via OS-maintained package managers is a great idea in the first place (do I really need OS maintainers looking after anki?). I’m fine downloading binaries off the net. Just nicer if they have out of the box support for stuff. I’m not blaming the BSDs for this (it’s more the software writer’s fault), just that it’s my impression that this becomes a bit of an issue if you try out a lot of less used software.
As an engineer that uses and works on a minority share operating system, I don’t really think it’s reasonable to expect chiefly volunteer projects to ship binaries for my platform in a way that fits well with the OS itself. It would be great if they were willing to test on our platform, even just occasionally, but I understand why they don’t.
Given this, it seems more likely to expect a good experience from binaries provided by somebody with a vested interest in quality on the OS in question – which is why we end up with a distribution model.
Yep, this makes a lot of sense.
I’m getting more and more partial to software relying on its host language’s package manager recently. It’s pretty nice for a Python binary to basically always work as long as you have pip running properly on your system, plus you get all the advantages of virtual environments and the like, letting you set things up more easily. The biggest drawback is the trust issues in those ecosystems.
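That workflow can be sketched in a few lines, assuming python3 with the venv module is available (httpie is just an illustrative package name):

```shell
# Create an isolated virtual environment and install a tool into it;
# the tool and its dependencies never touch the OS-managed Python.
python3 -m venv ./toolenv
./toolenv/bin/pip install httpie    # fetched from PyPI over the network
./toolenv/bin/http --version        # run the tool out of the venv
```

Removal is just rm -rf ./toolenv, which sidesteps the file-tracking problem at least within the venv’s scope.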
Considering a lot of communities (not just OSes) are becoming more and more involved in distribution questions, we might be getting closer to things working out of the box for the non-tricky cases.
software relying on their host language’s package manager
In general I’m not a fan. They all have problems. Many (most?) of them lack a notion of disconnected operation when they cannot reach their central Internet-connected registry. There is often no complete tracking of all files installed, which makes it difficult to completely remove a package later. Some of the language runtimes make it difficult to use packages installed in non-default directory trees, which is one way you might have hoped to work around the difficulty of subsequent removal. These systems also generally conflate the build machine with the target machine (i.e., the host on which the software will run) which tends to mean you’re not just installing a binary package but needing to build the software in-situ every time you install it.
In practice, I do end up using these tools because there is often no alternative – but they do not bring me joy.
Operating system package managers (dpkg/apt, rpm/yum, pkg_add/pkgin, IPS, etc.) also have their problems. In contrast, though, these package managers tend to at least have some tools to manage the set of files that were installed for a particular package and to remove (or even just verify) them later. They also generally offer some first-class way to install a set of packages from archive files obtained via means other than direct access to a central repository.
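For instance, with Debian’s tooling (one concrete case; the package name is illustrative), the recorded file manifest makes listing, verification, and removal first-class operations:

```shell
dpkg -L curl              # list every file the package installed
dpkg --verify curl        # check installed files against recorded checksums
sudo apt-get remove curl  # clean removal driven by that same manifest
```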
For development I use the “central Internet-connected registry”; for production I use DEB/RPM packages in a repository:
There are probably more benefits that escape me at the moment :)
That feeling of having to go through ports and simply not having 1st-class support for some software seems… rough for desktop usage though
What kind of desktop software do you install from these non-OS sources?
I remember screwing around with Flathub on the command line in Fedora 27, but right now on Fedora 28, if you enable Flatpak in the Gnome Software Center thingy, it’s actually pretty seamless - type “Signal” in the application browser, and a Flatpak install link shows up.
With this sort of UX improvement, I’m optimistic. I feel like Fedora is just going to get easier and easier to use.
Possibly unpopular opinions (and a large block of text) incoming:
C++, Go, Swift, D and Rust all fail to adequately replace C for me. When given the choice, I would likely choose to just stick with C (for the moment; I’ll talk about what I’m considering at the end).
C++ has so much historical baggage and so many issues that it’s an immediate turn-off. More than that, it’s explicitly understood that you should carve out a subset of C++ to use as your language, because trying to use the whole thing is a mess of sadness (given how massive it is at this point). I appreciate the goal of zero-cost abstraction and having pleasant defaults in the standard library, but there are just too many problems for me to take it as a serious choice. Plus, I still have to deal with much of the unfortunate UB from C (not all of it, and honestly, I don’t mind UB in some cases; but a lot of the cases of UB in C that make no reasonable sense carry over into C++). It should be noted that I do still consider C++ occasionally in a C-with-templates style, but it’s definitely not my preference.
Despite how often people place it here, I do not believe Go belongs in this group of candidates. The garbage collection alone makes it unfit for systems programming. I see Go as a very reasonable choice to replace Java (but I don’t use Java whenever I have the choice, so I might not be the best person to ask). There are many other parts of the language that rub me the wrong way, but mostly, I just think it’s not a good systems language (but is instead a great intro language for higher-level stuff).
Swift is really easy to rule out: It’s not cross-platform. Even if it were, it has all sorts of terrible issues (have they fixed the massive compile times yet?) that make it a no-go.
D, as far as I can tell, manages to be C++ without the warts in a really lovely way.
Having said that, it seems like we’re talking about good replacements for C, not C++, and D just doesn’t cut it for me.
GC by-default (being able to turn that off is good, but I’ll still have to do it every time), keeping name mangling by-default, etc.
-betterC helps with some of this, but at that point, there’s just not enough reason for me to switch (especially with all the weirdness of there being two de facto standard libraries from different organizations, one of which I think is still closed-source? sounds like I might need to take another look at D; though, again, its emulation of C++ still suggests to me that it won’t quite cut it).
Rust is the only language in this list that I think is actually a reasonable contender. Sadly, it still suffers from a lot of these issues: names are still mangled by default, the generated binaries are huge (I’m still a little bugged that a C hello-world, dynamically linked against glibc, is 6KB), etc.
But more than all of the things I’ve listed, the problem I have with these languages is that they all have a pretty big agenda (to borrow a term from Jon Blow). They all (aside from C++, which has wandered in the desert for many years) have pretty specific goals in their design: they try to carve out their ideal of what programming should be, instead of providing tools that allow people to build what they need.
So, as for languages that I think might (someday, not soon really) actually replace C (for me):
Zig strikes a balance between C (plus compile-time execution to replace the preprocessor) and LLVM IR internals, which allows for incredibly fine-grained control (hello, arbitrarily-sized fixed-width integers, how are you doing?). It also manages to have the best C interop story I have ever seen in a language so far (you can import from C header files no problem, and Zig libraries can have their code called from C, also no problem; astounding).
Myr is still really new (so is Zig, really), and has a lot left to figure out (e.g., its C interop story is not quite so nice yet). However, it manages to be incredibly simple and terse for a braced language. My guess is that, in the long run, Myr will actually replace shell languages for me, but not C.
Jai looks incredibly cool and embraces a lot of what I’ve mentioned above in that it is not a big agenda language, but provides a lot of really useful tools so that you can use what you make of it. However, it’s been in development for four years and there is no publicly available implementation (and I am worried that it may end up being closed-source when it is released, if ever). I’m hoping for the best here, but am expecting dark storms ahead.
Okay, sorry for the massive post; let me just wrap up a few things. I do not mean to imply with this post that any of the languages above are inherently bad or wrong, only that they do not meet my expectations and needs in replacing C. For a brief sampling of languages that I love which suffer from all the problems I mentioned above and more, see here:
They are all great and have brilliant ideas; they’re just not good for replacing C. :)
Now then, I’ll leave you all to it. :)
(especially with all the weirdness of there being two de facto standard libraries from different organizations, one of which I think is still closed-source?)
That was resolved a few years ago. D just has one stdlib, it’s fairly comprehensive, and keeps getting better with each release.
Despite how often people place it here, I do not believe Go belongs in this group of candidates. The garbage collection alone makes it unfit for systems programming. I see Go as a very reasonable choice to replace Java (but I don’t use Java whenever I have the choice, so I might not be the best person to ask). There are many other parts of the language that rub me the wrong way, but mostly, I just think it’s not a good systems language (but is instead a great intro language for higher-level stuff).
Agreed with this. Go is my go-to when I would otherwise need to introduce dependencies into a Python script (and thus fuck with pip --user or virtualenv or blah blah blah) for high-level glue code between various systems (e.g. AWS APIs, etc.)
I think there’s a reason Go is dominating the operations/devops tooling world - benefits of static compilation, high level, easy to write.
Look at the number of hacks Docker needs (reexec and the like) to work properly in Go; those would be trivial to do in C.
Note that Zig is x86-only at the moment. Check “Support Table” on Zig’s README.
For that matter, Rust is x86-only too, if you want Tier-1 support.
I’m a big D fan, but I agree that it’s the wrong thing to replace C. betterC doesn’t really help in this respect, because it doesn’t address the root reason why D is the wrong replacement for C (which is that the language itself is big, not that the runtime or standard library are). Personally, I think Zig is the future, but Rust has a better shot at ‘making it’, and the most likely outcome is that C stays exactly where it currently is (which I’m okay with). I haven’t looked at Myr (yet), and afaik isn’t Jai targeted at game development? It might be used for systems programming, but I think it might not necessarily do well there.
It also manages to have the best C interop story I have ever seen in a language so far (you can import from C header files no problem, and zig libraries can have their code called from C also no problem; astounding)
I think Nim does this, and D for sure does, with d++ (I think this may also help with the C++ emulation? Also, I’m not sure why you’re knocking it for its lack of quality C++ emulation when it’s afaik the only language that does even a mediocre job at C++).
I agree!
As for knocking D for emulating C++, I did not mean to suggest that doing so is a count against D as a language, but rather a count against it as a replacement for C. I already ruled out C++; if a given language is pretty close to C++, it’s probably also going to be ruled out.
It’s been a long time since I looked at nim, but generating C code leaves a really poor taste in my mouth, especially because of some decisions made in the language design early on (again, I haven’t looked in a while, perhaps some of those are better now).
As for Jai, yes it’s definitely targeted at game development; however, the more I look at it, the more it looks like it might be reasonable for a lot of other programming tasks. Again though, at the moment, it’s just vaporware, so I offer no guarantee on that point. :)
However, it’s been in development for four years and there is no publicly available implementation
I’m amazed that Jonathan Blow is behind the project; he’s a legend for programming original video games.
I’m in my favorite job of my career so far, doing my best ever work. I had a very reasonable interview loop without a single trivia question. It’s the first time I got referred in by a friend and ex-coworker, through a network of his friends.
My new rule of thumb for myself is, when possible, work with my friends or their friends. Referrals referrals. Find people you like to work with and stick together. The only way to know if somebody can solve real problems for a real company (with warts and all) is by actually being in the trenches with them for months and years.
Working only with friends and friends of friends seems wrong to me. The words that come to mind are exclusive and cliquish. To get diverse perspectives and live up to the ideal of equal-opportunity employment, we need to be comfortable working with strangers.
Traveling between companies as a group of friends doesn’t mean that you’re only working with said friends, since there will almost always be other co-workers involved, but that you’re working with more known quantities. As an employee-side strategy, I don’t think it’s hugely problematic, especially given the amount of information asymmetry that’s often in play in hiring. I’ve also had luck with referrals (I’ve found out about 3 out of 4 of my development jobs via some sort of reference or connection, including my current one).
On the employer side, I could see only working with referrals being somewhat problematic, but I doubt most employers do that.
I mean, my friend’s friend who I am now working for is roughly 3000 miles away from where I met my friend. I was definitely exposed to diverse perspectives by moving out here, and I necessarily have to interact and learn from all of my new coworkers (who are all diverse strangers to me). What I gained was a pre-selection stamp; somebody vouched that I’m not an idiot.
It’s also bi-directional preselection. I know that my friend wouldn’t send me off to work for a real dumpster fire of a company - I know that I’ll be working with good people on good projects, and it would benefit my own personal growth.
Friend networks can span multiple cities, countries, companies, and cultures. It doesn’t have to imply inbreeding.
The words that come to mind are exclusive and cliquish.
This sounds fine in theory until you hire a terrible stranger. Then you’re back to square one of “how do you find out if an interview candidate is good?” And that’s a hard question.
I’m trying to convince my workplace to get rid of whiteboarding interviews, does anyone know if there are resources for ideas of alternatives? Anyone have a creative non-whiteboarding interview they’d like to share?
The best approach I’ve found is to just ask them to explain some tech that’s listed on their resume. You’ll really quickly be able to tell whether it’s something they understand or not.
My team does basic networking related stuff and my first question for anyone that lists experience with network protocols is to ask them to explain the difference between TCP and UDP. A surprising number of people really flounder on that despite listing 5+ years of implementing network protocols.
This is what I’ve done too. For every developer I’ve ever interviewed, we kept the conversation to 30min-1hr and very conversational. A few questions about, say, Angular if it was listed on their resume, but no questions without any context. It would usually be like: “So what projects are you working on right now? Oh, interesting, how are you solving state management?” etc. Then I could relate that to a project we currently had at work, so they could get a sense of what the work would be like. The rapid-fire technical questions, I’ve found, are quite off-putting to candidates (and off-putting to me when I’ve been asked them like that).
As a side note, any company that interviews me in this conversational style (a conversation like a real human being) automatically gets pushed to the top of my list.
Seconded. Soft interviewing can go a long way. “You put Ada and Assembler on your CV? Oh, you just read about Ada once and you can’t remember which architecture you wrote your assembly for?”
I often flunk questions like that on things I know. This is because a question like that comes without context. If such a problem comes up when I’m building something, I have the context and then I remember.
I don’t think any networking specialist would not know the difference between TCP and UDP, though. That sounds like a pretty clear case of someone embellishing their CV.
So if you can’t whiteboard and you can’t talk about your experience, what options are left? Crystal ball?
I like work examples and open-ended coding challenges: here’s a problem, work on it when you like, how you like, come back in a week and let’s discuss the solution. We’ve crafted the problem to match our domain of work.
In an interview I also look out for signs of hostility on the part of the interviewer, suggesting that may not be a good place for me to work.
A sample of actual work expected of the prospective employee is fair. There are pros and cons to whether it should be given ahead of time or only shown there, but I lean towards giving it out in advance of the interview and having the candidate talk it through.
Note that this can be a hard sell, as it requires humility on the part of the individual and the institution. If your organization supports an e-commerce platform, you probably don’t get to quiz people on quicksort’s worst-case algorithmic complexity.
I certainly don’t have code just sitting around I could call a sample of actual work. The software I write for myself isn’t written in the way I’d write software for someone else. I write software for myself in Haskell using twenty type system extensions or in Python using a single generator comprehension. It’s for fun. The code I’ve written for work is the intellectual and physical property of my previous employers, and I couldn’t present a sample even if I had access to it, which I don’t.
Yup, the code I write for myself is either 1) something quick and ugly just to solve a problem 2) me learning a new language or API. The latter is usually a bunch of basic exercises. Neither really show my skills in a meaningful way. Maybe I shouldn’t just throw things on GitHub for the hell of it.
Oh, I think you misinterpreted me. I want the employer to give the candidate some sample work to do ahead of time, and then discuss it in person.
As you said, unfortunately, the portfolio approach is more difficult for many people.
I write software for myself in Haskell using twenty type system extensions or in Python using a single generator comprehension. It’s for fun.
Perhaps in the future we will see people taking on side projects specifically in order to get the attention of prospective employers.
I recently went through a week of interviewing as the conclusion of the Triplebyte process, and I ended up enjoying 3 of the 4 interviews. There were going to be 5, but there was a scheduling issue on the company’s part. The one I didn’t enjoy involved white board coding. I’ll tell you about the other three.
To put all of this into perspective, I’m a junior engineer with no experience outside of internships, which I imagine puts me into the “relatively easy to interview” bucket, but maybe that’s just my perception.
The first one actually involved no coding whatsoever, which surprised me going in. Of the three technical interviews, two were systems design questions. When structured well, I enjoy these types of questions: start with the high-level description of what’s to be accomplished, come up with the initial design as if there were no load or tricky features to worry about, then add stresses to the problem. Higher volume. New features. New requirements. Dive into the parts that you understand well; talk about how you’d find the right answer for areas you don’t understand as deeply. The other question was a coding design question, centered around the data structures and algorithms you’d use to implement a complex, non-distributed application.
The other two companies each had a design question as well, but each also included two coding questions. One company had a laptop prepared for me to use to code up a solution to the problem, and the other had me bring my own computer to solve the questions. In each case, the problem was solvable in an hour, including tests, but getting it to the point of being fully production ready wasn’t feasible, so there was room to stretch.
By the time I got to the fourth company and actually had to write code with a marker on a whiteboard I was shocked at how uncomfortable it felt in comparison. One of my interviews was pretty hostile, which didn’t help at all, but still, there are many, far better alternatives.
I’m a little surprised that they asked you systems design questions, since I’ve been generally advised not to do that to people with little experience. But it sounds like you enjoyed those?
There are extensive resources to help with the evangelism side of things.
Obligatory: https://linux.die.net/man/8/ss
So literally the only thing I ever use netstat for is showing listening network ports with: netstat -luntp. I’ve tried to get the same output with ss, but this is the closest I’ve come: ss -lpf inet and ss -lpf inet6. It seems that “inet” and “inet6” are mutually exclusive. Short of shell trickery, is there any way to get both in one command?
Do you mean the explicit notation of tcp6 and udp6?
shanssian:~ $ sudo netstat -tulpn | awk '{print $1}' | grep '.*6'
tcp6
udp6
udp6
udp6
udp6
shanssian:~ $ sudo ss -tulpn | awk '{print $1}' | grep '.*6'
shanssian:~$
It looks like for ss you kinda have to guess based on the format of Local Address:Port
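One single-pass approximation (a sketch, not a built-in ss feature): drop the family filter entirely and tag each row yourself, exploiting the fact that IPv6 local addresses are bracketed or contain ::.

```shell
# Column 5 of `ss -tuln` output is Local Address:Port; classify each
# listening socket as inet or inet6 from the address format alone.
ss -tuln | awk 'NR > 1 { print (($5 ~ /^\[|::/) ? "inet6" : "inet"), $0 }'
```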
I nearly posted this as an ‘ask’: Slack is not good for $WORK’s use case because it does not have an on-premise option. What on-premise alternatives are people using/would you recommend?
I’ve used Mattermost before, which AFAIK has an on-prem version - just as a user, not setup or admin so I can’t speak to that end.
Same, actually. It does look very interesting, I’d be highly interested in whether anyone has any experience with it?
We’ve used Mattermost for a few years now; it’s pretty easy to set up and maintain: you basically just replace the Go binary every 30 days with the new version. We recently moved to the version integrated with GitLab, and now GitLab handles it for us. Even easier, since GitLab is just a system package you upgrade.
A lot of people have said Mattermost, might be a good drop-in replacement. According to the orange site they’re considering dropping a “welcome from Hipchat” introductory offer, which is probably a smart move.
IIRC Mattermost is open core. I’ve heard good things about Zulip. Personally, I like Matrix, which federates and bridges.
There’s a UX pattern in some UNIX tools (not sure where or how it originated); for example, in youtube-dl you can choose to save a video with macros/aliases for the title, e.g.:
--output "%(uploader)s%(title)s.%(ext)s"
Could be cool here to provide things e.g. passing a %year-%author-%title format string (like some other commenters in this thread mentioned - I have no actual opinions on PDF naming).
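A sketch of how such a tool might expand a user-supplied template, with hypothetical %year/%author/%title placeholders and made-up metadata, in plain bash:

```shell
# Hypothetical template expansion: substitute each placeholder with the
# per-file metadata (values here are invented for illustration).
format='%year - %author - %title.pdf'
year=2006 author='Lamport' title='Fast Paxos'
name=$format
name=${name//%year/$year}
name=${name//%author/$author}
name=${name//%title/$title}
echo "$name"    # → 2006 - Lamport - Fast Paxos.pdf
```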
I enjoyed this article (https://peter.bourgon.org/go-for-industrial-programming/) so I would like to revisit some Go code I’ve written at work and make it better. I’m lazy with testing, tracing, metrics, all of the above.
I’ve speculated in the past about a theoretical configuration package which mandated the use of flags for config, but also allowed env vars and config files in various formats on an opt-in basis. I have strong opinions about what that package should look like, but haven’t yet spent the time to implement it. Maybe I can put this out there as a package request?
You ever seen: https://github.com/spf13/viper ?
BTW, great article. I’m in a spot where I’m writing new tooling in Go for a team semi-familiar with Go. We’re all learning as we go along. The testing section is great - I need to implement some better testing. I feel like most of the time my code is not testable, but as Mitchell Hashimoto says in the talk you link, that’s probably more of a code smell than an actual untestable situation.
I love these kinds of things. Thanks for asking!
Right around Y2K, I worked as the “Technical Coordinator” at a “Regional Development Authority” in Cornwallis Park, Nova Scotia, Canada. I got to do what so many Silicon Valley folks have falsely claimed to do, and it was glorious: I got to make the world a better place, using technology.
An RDA in Canada is an organisation that receives money from different levels of government and has a mandate to promote development in the region. This takes the shape of providing free education, encouraging businesses to move there, assisting existing businesses with hiring additional staff, and related activities. Because of some other programs, my particular RDA (the WVDA) had an additional mandate to improve the lives of residents and businesses using technology.
I got to do what so many Silicon Valley bros have claimed to do, but never have: I made the world a better place using technology. Looking back, it was probably the best opportunity of my career, and I’ve had many.
The number of projects that I worked on in that short time period is absolutely staggering, as I think about it now. I was the only person in my department for most of my time there, and for the rest I was the more experienced and skilled of the two of us.
I can honestly say that in my approximately three years there, I made the world a better place. Fingerprints of my work are still visible in that place, and I miss doing work like that. I haven’t done work like that, and gotten paid for it, since.
It wasn’t all roses: my bosses were absolutely rotten with corruption. One got fired for giving large contracts to his friends to do nothing. Another was getting invoices for services and products that were never delivered from businesses owned by their friends and family. Another was doing that, and also using her expense account to expense trips and conferences she never went to. Eventually, years after I left, the entire agency got shut down and each government decided to run their own smaller operation themselves because of these things. But it doesn’t mean that we didn’t do great things at the time.
I should really write more details about my experiences, because it really was something amazing, and I’m quite proud of it. Most of the web applications are still usable via the internet archive.
I got to do what so many Silicon Valley folks have falsely claimed to do, and it was glorious: I got to make the world a better place, using technology.
I got to do what so many Silicon Valley bros have claimed to do, but never have: I made the world a better place using technology.
Cornwallis Park is a rural community in Annapolis County, Nova Scotia, Canada. As of 2016, the population is 479
Easy there, Genghis Khan.
I think he’s (sarcastically) saying it’s a pretty small chunk of the world. Like I could give a homeless person a sandwich, making the world a better place, but I wouldn’t expect a medal for it. Not my criticism.
I’m always looking for a cross-compiling system for building macOS executables from Linux, either as a single static executable, or as a self-contained relocatable bundle of (interpreter + libraries + user code entrypoint), because getting legal Mac build workers is such a pain.
The best toolkit I’ve found, by far, is golang, where you just GOOS=darwin go build .... There are a variety of more-or-less hacky solutions in the JavaScript ecosystem, and a few projects for Python, but for Ruby this area is sorely lacking.
I mention this because while XAR looks like an awesome way to distribute software bundles, I still need to figure out a way to do nice cross-compiles if I’m going to use it to realistically target both macOS and Linux.
Tell me about it. I’ve tried cross compiling Rust from Linux to OSX and it was just a saga of hurt from start to finish.
For Go, did you need to jump through the hoops of downloading an out-of-date Xcode image, extracting the appropriate files and compiling a cross-linker? Or is that mysteriously handled for you by the Go distribution itself?
You literally just run GOOS=<your target os> GOARCH=<your target architecture> go build. No setup needed. Here’s the vars go build inspects.
It’s frustrating trying to do something similar in compiled languages, and interpreted languages with native modules are even worse.
Go basically DIYs the whole toolchain and directly produces binaries. That has pros and cons, but means it can cross-compile without needing any third-party stuff like the Xcode images. For example it does its own linking, so it doesn’t need the Xcode / LLVM linker to be installed for cross-compilation to Mac.
No reason you can’t put a whole virtualenv, python interpreter and all, into your XAR. XAR can pack anything.
You still need a tool to prepare that virtualenv so that you can pack it, and that’s the sort of tool I struggle to find - cross-compiling a venv, or equivalent in other languages.
Yes, exactly. I am less interested in different formats and more in a tool to create them. The ease of doing that with Go is the target.
The ease of doing that with Go is the target.
By this you mean, you’re looking for a solution for Python packaging that makes it as easy as Go to distribute universally?
I used this once before to take some code I wrote for Linux (simple cli with some libraries - click, fabric, etc.) and release it for Windows: http://www.py2exe.org/index.cgi/Tutorial
The Windows users on my team used the .exe file and it actually worked. It was a while back but I remember that it was straightforward.
I think remote flexibility is a solution. I would hate my open office less if I wasn’t forced to be in it and could just come by a few times a week.
The fact that the option to work remotely is not a standard thing in 2018 is just shocking, in my opinion. The vast majority of people working in tech end up commuting just to sit in front of a computer in a different place. The whole model of having people clock in and out is based around factory work. It makes absolutely no sense for creative activities like programming. You don’t have steady output as a coder, and you’re not going to be productive for 8 contiguous hours a day.
I also think that remote work is one of the most practical ways to combat traffic congestion in big cities. If everybody who works with a computer got off the road, we’d have drastically less traffic in cities.
It doesn’t even have to be all or nothing, as you point out a mix of coming to the office a few times a week and working from home when you don’t need to be there would be a huge improvement. There’s also benefit for the companies as they would need a lot less office space.
Agreed on all counts. I just think it’ll be hard to find a workplace that truly commits to abandoning work hours and trusting that the engineers will deliver better. I would even sign a piece of paper that says “if my work output decreases, I’ll stop”.
I’m trying to write a “sudoers evaluator” using Linux namespaces. Something like:
sudo -v -h <hostname>
but without the “sudo: /usr/libexec/sudo/sudoers.so must be owned by uid 0” failure that an unprivileged copy of sudo runs into — hence the Linux namespaces.
The goal is that, given a sudoers file like:
Host_Alias FOO = bar[1-52-3]
user FOO = (baz) quux
You can run ./sudoers-eval ./fake-sudoers-file --host bar23 to check that host bar23 is not picked up by the expansion of bar[1-52-3].
Hope I get something working by the end of the week. I started the project in C, switched to Go (where I was using the reexec package: https://github.com/moby/moby/tree/master/pkg/reexec) until I decided that if my goal is to understand the primitives behind Docker, importing Docker helper libs would hinder my education, so I switched back to C.
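Sudoers host patterns use fnmatch(3)-style globbing, and Python’s `fnmatch` module is close enough to sketch why `bar23` shouldn’t match (Python’s matcher isn’t byte-for-byte identical to sudo’s, so treat this as an approximation):

```python
from fnmatch import fnmatch

# Host_Alias FOO = bar[1-52-3]
# The bracket expression is a single-character class containing the
# ranges 1-5 and 2-3, so it matches exactly one digit from 1 to 5.
pattern = "bar[1-52-3]"

print(fnmatch("bar2", pattern))   # one character from the class -> True
print(fnmatch("bar23", pattern))  # two characters after "bar" -> False
```

The whole class consumes a single character, which is why `bar23` falls outside the alias even though both `2` and `3` appear in the brackets.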
I’ve tried several Linux distributions in the last 10 years (Ubuntu, Debian, Fedora, Arch, NixOS) and honestly, NixOS is way above the others as a developer-friendly OS.
I love being able to drop into a shell that has the package or library I want and test things. In comparison, Arch feels like a totally standard Linux distribution.
I’ve been using NixOS everywhere around me for about 3.5 years and totally agree. I now work on Atlassian Marketplace, which is deployed using Docker images built from Nix.
The nix model definitely seems like a great way to build docker images. Reproducible, minimal, flexible, it seems like a perfect fit.
That sounds strange to me; if you’re already set up to use nix, why bother with docker? Maybe I’m overlooking some things?
Because Atlassian has an internal PaaS which requires Docker. I use NixOS for everything but deploy our systems to that.
I’m not using NixOS. And it wouldn’t be for running locally, it would be for deploying on something like Kubernetes. But nix is a flexible, useful tool even when NixOS isn’t involved.
How can it be both good as my workstation, and good as a minimal container runtime?
Not for the sake of being argumentative (I have yet to try Nix), just confused because those two seem like opposites.
Flexible is the key adjective that makes it work for both. Nix allows you to install a package tree into a target directory, using binary packages. Analogous to debootstrap / kickstart. But it also lets you ad hoc add / update / remove packages in that directory like apt / yum does on a running system. It can also do all this according to a package spec a la bundler / npm / maven.
And it can do all this live on a workstation too! So that’s why it works for both.
It also does a great job of keeping things clean by installing packages into versioned directories, and symlinking the active package into the base system. Similar to what homebrew does on MacOS. That makes cleaning old versions a breeze, and allows multiple versions to be installed, which nix lets you switch between easily.
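The versioned-directory-plus-symlink idea is simple enough to sketch directly (hypothetical paths and package names, nothing like Nix’s actual store layout):

```python
import os
import tempfile

root = tempfile.mkdtemp()  # stand-in for a package store

def install(pkg: str, version: str) -> str:
    # Each version gets its own directory, so versions never collide
    # and old ones can be garbage-collected independently.
    d = os.path.join(root, f"{pkg}-{version}")
    os.makedirs(d, exist_ok=True)
    return d

def activate(pkg: str, version: str) -> None:
    # "Switching" versions is just repointing one symlink, which is
    # what makes rollbacks and side-by-side installs cheap.
    link = os.path.join(root, pkg)
    if os.path.islink(link):
        os.remove(link)
    os.symlink(install(pkg, version), link)

activate("hello", "1.0")
activate("hello", "2.0")   # upgrade: old directory stays, link moves
print(os.readlink(os.path.join(root, "hello")))
```

Nix adds content-addressed hashes, per-profile generations, and atomic switches on top, but the core trick is this symlink flip.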
That being said, I run more conventional distros to keep familiar with the server installs used by customers. I find the utility of that expertise greater than any utility NixOS provides. But since nix is also a standalone tool, it works on less interesting distros and can be used for homedir installs or building docker images.
That being said, I run more conventional distros to keep familiar with the server installs used by customers.
Interesting. I find that I do this as well. I.e. I use a minimal vimrc, bash (not zsh/fish), etc., to not confuse my muscle memory of my day job (which is bog-standard Linux/Debian sysadmin).
Here’s a good blog post about using Nix to build Docker images: http://lethalman.blogspot.com/2016/04/cheap-docker-images-with-nix_15.html
So all the images are built from the NixOS base image? It’s the first big company I’ve heard of that’s using nixos+docker!
My first suggestion would be to install VirtualBox, put your distro of choice inside that, and then run it full screen most of the time.
You could also try the Windows 10 Linux stuff. I haven’t tried it (because I don’t use windows) but those who do say it is pretty great.
Cygwin would be a very, very remote third. I used it once in $corporate_office_job and the best thing I could say about it was that it was better than nothing.
Worst case, if your manager can’t or won’t provide you with the tools you need to do your job, you need to move on. Never stay long in a job you don’t love, that’s how you lose your soul.
My first suggestion would be to install VirtualBox, put your distro of choice inside that, and then run it full screen most of the time.
This is my life right now. It is decidedly second class; all the corporate-mandated bloatware is still there wasting utterly ludicrous amounts of memory and CPU time, but at least I don’t have to look at it and I can use a decent window manager and terminal environment. I certainly consider this preferable to the Bash-on-Windows features (and far, far better than Cygwin).
That was my choice for years, too. Most of the time, I SSH’d into that machine using PuTTY or another terminal emulator. They’re not great, but you can get the job done. SSH’ing in also circumvents any input lag. Plus side: suspend and resume worked flawlessly with Windows.
Anyways, I switched jobs since then. Doing Linux only now. Macs serve as ridiculously expensive SSH terminals now. No more Windows.
Have you noticed significant input lag when running linux fullscreen? Whenever I’ve tried it it’s been too laggy to be usable, but I was running on an AMD FX 8350.
My 2 cents - a few years ago I had a powerful laptop (one of the WS Lenovos, maybe 16-32GB RAM, i7, etc.) and I tried VMWare (paid edition - company paid for it), Microsoft Hyper-V, and VirtualBox. I could never get rid of, or stop being bothered by, input lag.
Can’t install VirtualBox. This is a workspace, think something like remote desktop. See above for a link.
You could also try the Windows 10 Linux stuff. I haven’t tried it (because I don’t use windows) but those who do say it is pretty great.
WSL is good, but not great. I’ve been using it semi-seriously on my home machine for the past year and a half and it’s better than it was, but I’m consistently disappointed in all the terminal emulators. The only setup I’m happy with is running urxvt on Xming. You will be disappointed in file system speed, but that seems to be a Windows thing regardless of WSL.
Have you tried ConEmu? https://conemu.github.io/
I suspect one of the fundamental problems at play here is the fact that many of these tools want to be able to embed things like CMD.EXE or PowerShell and don’t have the native characteristics we associate with UNIX terminals.
Possibly, but I just found ConEmu to be hideously ugly. Personal preference thing, really. Other than its grotesque UI, it seems like a capable terminal emulator.
Ah. Yeah. I’m not so concerned about that :) When you’re trying to make a home in the malarial swamps, first you ensure that you have shelter, then you worry about whether the drapes match the tablecloth :)
If you remove the snipes at Docker, the article reads like “Here’s a single tool that can replace a ton of your in-house scripts”, which is typically a win. The fact that Docker is not mysterious is a good thing.
For example, you talk about using Ansible - if you tried, you could write an article about “Ansible Considered Harmful”. You know, just use Fabric and write your own Python scripts. It’s just running a bunch of scripts on hosts via SSH, that’s easy shit - Ansible is not mysterious at all. Why use Ansible?
Because it’s usually a waste of energy to write something from scratch when there’s a popular open-source version that mostly works fine and is well-tested and documented - especially if that something is not your core business.
I’m using JWTs for something at $WORK. To implement a crude but simple token revocation, I coded the services to accept SIGHUPs to change/reload their JWT signing keys in sync. It’ll invalidate every token that exists in the wild.
Edit: the clients are a Linux command-line application, so it’s not a typical web workflow.
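The revoke-everything-by-rotating-the-key idea can be sketched in a few lines. This uses a bare HMAC instead of a real JWT library, and the handler wiring is an assumption about the setup described above, not the actual service code:

```python
import hashlib
import hmac
import os
import signal

# Current signing key; rotating it invalidates every token signed
# before the rotation.
_key = os.urandom(32)

def sign(claims: bytes) -> bytes:
    mac = hmac.new(_key, claims, hashlib.sha256).hexdigest().encode()
    return claims + b"." + mac

def verify(token: bytes) -> bool:
    claims, _, mac = token.rpartition(b".")
    expected = hmac.new(_key, claims, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(expected, mac)

def rotate(signum=None, frame=None):
    # In the real setup each service would reload the same new key
    # (e.g. from disk) so they all rotate in sync.
    global _key
    _key = os.urandom(32)

signal.signal(signal.SIGHUP, rotate)  # kill -HUP <pid> revokes all tokens

token = sign(b"user=alice")
print(verify(token))   # True: valid under the current key
rotate()               # simulate receiving the SIGHUP
print(verify(token))   # False: every outstanding token is now rejected
```

The crudeness is the point: there’s no per-token revocation list, so a single signal is the whole mechanism.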
I always leave at 5; I try to avoid being “in the middle of solving something” by working in increments small enough that I can stop and continue at any time. During the day there will always be people, meetings, even walking to get water. In my opinion it’s better to avoid “the zone” so I don’t feel any pain at being interrupted.
This is interesting to me. I hear a lot about “flow” and “zone” lately and I find that I would rather have many small periods of decent productivity (interspersed by unavoidable context switching) than get upset at inevitably having my flow broken.
I suggest finding a way to get back into the zone. Its productivity benefits are incredible. It feels great, too, with who knows what other positive effects on your mind. Maybe find a way, in your own practices or by talking to management, to reduce interruptions or make them happen outside of zone time.
Alternatively, consider finding a better job where you can be in the zone. That could be worth asking their employees about, too, when checking out the company before applying.
Who are these people? How do you get yourself on this list? Do you ask a friend to nominate you?
Are these people famous for some reason? How did this list come about in the first place?
I’ve been reading the pull requests and looking at the git blames, but I couldn’t really discern any scheme for getting highlighted on the list. I guess it’s just a kind of “personal highlights” list, which makes sense, since “nice” is a rather vague term.
The reason the maintainer doesn’t want personal submissions is mentioned here:
It makes sense at first, but I agree that, generally speaking, it’s a bad rule. Many of the configurations listed aren’t that spectacular, and some of the ones mentioned in rejected pull requests certainly seemed more interesting or better maintained.
I’ve opened an issue to start a conversation about it: https://github.com/caisah/emacs.dz/issues/34