For me as a programmer, what I need is just a stable operating system that can always provide the latest software toolchains to meet my requirements, and I don’t want to spend much time tweaking it.
Why is this? As a programmer I find I rarely need bleeding edge. I’m sure not going to install bleeding edge in production. It can be fun for hacking around, but I don’t see why, as a programmer, it’s needed.
For what it’s worth, Arch does distinguish between ‘stable’ and ‘bleeding edge’ in its releases, although the rolling release does mean that stable is generally much newer than you might find in, say, Debian.
I wouldn’t use it in production, though I have seen it done.
As someone who uses Arch on all my developer machines, I think Arch is a horrible developer OS, and I only use it because I know it better than other distros.
It was good 5-10 years ago (or I was just less sensitive back then), but now pacman -Syu is almost guaranteed to break or change something for the worse, so I never update, which means I can never install any new software because everything is dynamically linked against the newest library versions. And since the Arch way is to be bleeding edge all the time, asking things like “is there an easy way to roll back an update because it broke a bunch of stuff and brought no improvements” gets you laughed out the door.
I’m actually finding myself using windows more now, because I can easily update individual pieces of software without risking anything else breaking.
@Nix people: does NixOS solve this? I believe it does but I haven’t had a good look at it yet.
Yes, Nix solves the “rollback” problem, and it does it for your entire OS not just packages installed (config files and all).
With Nix you can also have different versions of tools installed at the same time without the standard python3.6/python2.7 binary-name thing most places do: just drop into a new nix-shell and install the one you want, and in that shell that’s what you have. There is so much more. I use FreeBSD now because I just like it more in total, but I really miss Nix.
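A minimal sketch of what that looks like in practice (the python27/python36 attribute names are assumptions and depend on your nixpkgs channel):
$ nix-shell -p python27
[nix-shell]$ python --version
... Python 2.7.x, available only inside this shell
[nix-shell]$ exit
$ nix-shell -p python36
[nix-shell]$ python3 --version
... Python 3.6.x, without touching the system-wide install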
EDIT: Note, FreeBSD solves the rollback problem as well, just differently. In FreeBSD if you’re using ZFS, just create a boot environment before the upgrade and if the upgrade fails, rollback to the pre-upgrade boot environment.
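On a reasonably recent FreeBSD the ZFS workflow is roughly the following (bectl ships with the base system; older setups use beadm with the same subcommands):
$ sudo bectl create pre-upgrade
$ sudo freebsd-update fetch install
... if the upgrade misbehaves:
$ sudo bectl activate pre-upgrade
$ sudo reboot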
Being a biased Arch Developer, I rarely have Arch break when updating. Sometimes I have to recompile our own C++ stack due to soname bumps but for the rest it’s stable for me.
For Arch there is indeed no rollback mechanism, although we do provide an archive repository with old versions of packages. Another option would be BTRFS/ZFS snapshots. I believe the general Arch opinion is that, instead of rolling back, fixing the actual issue at hand is more important.
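A rough illustration of the snapshot option (not an official Arch mechanism; it assumes / is a btrfs subvolume and that a /.snapshots directory exists):
$ sudo btrfs subvolume snapshot / /.snapshots/pre-update
$ sudo pacman -Syu
... if the update breaks something, boot into or restore the snapshot instead of fixing forward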
I believe the general Arch opinion is that, instead of rolling back, fixing the actual issue at hand is more important.
I can see some people might value that perspective. For me, I like the ability to plan when I will solve a problem. For example I upgraded to the latest CURRENT in FreeBSD the other day and it broke. But I was about to start my work day so I just rolled back and I’ll figure out when I have time to address it. As all things, depends on one’s personality what they prefer to do.
For me, I like the ability to plan when I will solve a problem.
But on stable distros you don’t even have that choice. Ubuntu 16.04 (and 18.04 as well, I believe) ships an ncurses version that only supports up to 3 mouse buttons, for ABI stability or something. So now if I want scroll-wheel-up to work, I have to rebuild everything myself and maintain some makeshift local software repository.
And that’s not an isolated case, from a quick glance at my $dayjob workstation, I’ve had to build locally the following:
cquery, gdb, ncurses, kakoune, ninja, git, clang and other various utilities. Just because the packaged versions are ancient and missing useful features.
On the other hand, I’ve never had to do any of this on my Arch box because the packaged software is much closer to upstream. And if an update breaks things, I can also roll back from that update until I have time to fix things.
I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.
And if an update breaks things, I can also roll back from that update until I have time to fix things.
Several people here said that Arch doesn’t really support rollback which is what I was responding to. If it supports rollback, great. That means you can choose when to solve a problem.
I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.
Ok, but that’s a problem inherent to stable distros, and it gets worse the more stable they are.
Several people here said that Arch doesn’t really support rollback
It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.
It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.
Your description makes it sound like pacman doesn’t support roll backs, but you can get that behaviour if you have to and are clever enough. Those seem like very different things to me.
Also, what you said about stable distros doesn’t seem to match my experience with FreeBSD. FreeBSD is ‘stable’, yet ports packages tend to be fairly up to date (or at least I rarely run into outdated ones, except for a few).
I’m almost certain any kind of “rollback” functionality in pacman is going to be less powerful than what’s in Nix, but it is very simple to rollback packages. An example transcript:
$ sudo pacman -Syu
... some time passes, after a reboot perhaps, and PostgreSQL doesn't start
... oops, I didn't notice that PostgreSQL got a major version bump, I don't want to deal with that right now.
$ ls /var/cache/pacman/pkg | rg postgres
... ah, postgresql-x.(y-1) is sitting right there
$ sudo pacman -U /var/cache/pacman/pkg/postgresql-x.(y-1)-x86_64.pkg.tar.xz
$ sudo systemctl start postgresql
... it's alive!
My guess is that this is “just downgrading packages” whereas “rollback” probably implies something more powerful. e.g., “rollback my system to exactly how it was before I ran the last pacman -Syu.” AFAIK, pacman does not support that, and it would be pretty tedious to actually do it if one wanted to, but it seems scriptable in limited circumstances. I’ve never wanted/needed to do that though.
(Take my claims with a grain of salt. I am a mere pacman user, not an expert.)
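For what it’s worth, a sketch of what such a script might look like, assuming the standard /var/log/pacman.log format and that the old packages are still sitting in /var/cache/pacman/pkg (untested, for illustration only):
#!/bin/sh
# Find where the most recent full system upgrade starts in the log.
start=$(grep -n 'starting full system upgrade' /var/log/pacman.log | tail -1 | cut -d: -f1)
# Pull out "name-oldversion" for every package upgraded after that point.
tail -n +"$start" /var/log/pacman.log |
  awk '/\[ALPM\] upgraded/ { gsub(/[()]/, ""); print $4 "-" $5 }' |
  while read -r pkg; do
    # Reinstall the cached copy of the old version, if it is still around.
    sudo pacman -U --noconfirm /var/cache/pacman/pkg/"$pkg"-*.pkg.tar.* ||
      echo "no cached package for $pkg"
  done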
now pacman -Syu is almost guaranteed to break or change something for the worse
I have the opposite experience. I’ve been an Arch user since 2006, and updates were a bit more tricky back then; they broke stuff from time to time. Now nothing ever breaks (I run Arch on three different desktop machines and two servers, plus a bunch of VMs).
I like the idea of NixOS and I have used Nix for specific software, but I have never made the jump because, well, Arch works. Also with Linux, package management has never been the worst problem, hardware support is, and the Arch guys have become pretty good at it.
I wonder if the difference in experience is some behaviour you’ve picked up that others haven’t. For example, I’ve found that friends’ children end up breaking things in ways that I never would, just because I know enough about computers to never even try it.
I think it’s a matter of performing -Syu updates often (every few days or even daily) instead of once per month. Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.
I’ve been an Arch user for 6 years, and there were maybe 3 times during those 6 years when something broke badly (I was unable to boot). Once it was my fault; the second and third were related to an nvidia driver and Xorg incompatibility.
Rare updates indeed sometimes break things but when done often, it’s pretty much update and that’s it.
It’s sometimes also a matter of bad timing. Now, every time before doing a pacman -Syu, I check /r/archlinux and the forums to see if someone is complaining. If so, I tend to wait a day or two for the devs to push out updates to the broken packages.
I have quite a contrary experience: I have pacman run automatically in the background every 60 minutes, and all the breakage I suffer is from human-induced configuration errors (such as a misconfigured boot loader or fstab).
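One way to approximate that kind of unattended setup, sketched as a root cron entry (the mechanism here is a guess; a systemd timer works just as well, and unattended --noconfirm updates obviously carry some risk):
$ sudo crontab -e
... then add a line like:
0 * * * * /usr/bin/pacman -Syu --noconfirm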
Would be nice, yeah, though I never really understood or got into Nix. It’s a bit complicated and daunting to get started with, and I found the documentation to be lacking.
How often were you updating? Arch tends to work best when it’s updated often. I update daily and can’t remember the last time I had something break. If you’re using Windows, and coming back to Arch very occasionally and trying to do a huge update you may run into conflicts, but that’s just because Arch is meant to be kept rolling along.
I find Arch to be a fantastic developer system. It lets me have access to all the tools I need, and allows me to keep up with the latest technology. It also has the bonus of helping me understand what my system is doing, since I have configured everything.
As for rollbacks, I use ZFS boot environments. I create one prior to every significant change such as a kernel upgrade, and that way if something did happen go wrong, and it wasn’t convenient to fix the problem right away, I know that I can always move back into the last environment and everything will be working.
I wrote a boot environment manager zedenv. It functions similarly to beadm. You can install it from the AUR as zedenv or zedenv-git.
It integrates with a bootloader if there’s a “plugin” for it, to create boot entries and keep multiple kernels around at the same time. Right now there’s a plugin for systemd-boot, and one for GRUB is in the works; it just needs some testing.
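The beadm-style workflow then looks roughly like this (subcommand names as I understand zedenv; check its documentation before relying on them):
$ sudo zedenv create pre-kernel-upgrade
$ sudo pacman -Syu
... if the new kernel or bootloader entry misbehaves:
$ sudo zedenv activate pre-kernel-upgrade
$ sudo reboot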
(Preface: I didn’t know much, and still don’t, about the *Solaris ecosystem.)
So it seems like the evolution of *Solaris took an approach closer to Linux? Where there’s a core chunk of the OS (kernel and core build toolchain?) that is maintained as its own project. Then there are distributions built on top of illumos (or Unleashed) that make it ready to use for end users?
For some reason, I had assumed it was closer to the *BSD model where illumos is largely equivalent to something like FreeBSD.
If I wanted to play with a desktop-ready distribution, what’s my best bet? SmartOS appears very server-oriented - unsurprising given *Solaris was really making more inroads there in recent years. OpenIndiana?
If Linux (kernel only) and BSD (whole OS) are the extremes of the scale, illumos is somewhere in the middle. It is a lot more than just a kernel, but it lacks some things to even build itself. It relies on the distros to provide those bits.
Historically, since Solaris was maintained by one corporation with lots of release engineering resources and many teams working on subsets of the OS as a whole, it made sense to divide it up into different pieces. The most notable one being the “OS/Net consolidation” which is what morphed into what is now illumos.
Unleashed is still split across more than one repo, but in a way it is closer to the BSD way of doing things rather than the Linux way.
Hope this helps clear things up!
If I wanted to play with a desktop-ready distribution, what’s my best bet? SmartOS appears very server-oriented - unsurprising given *Solaris was really making more inroads there in recent years. OpenIndiana?
OI would be the easiest one to start with on a desktop. People have gotten Xorg running on OmniOS (and even SmartOS), but it’s extra work vs. just having it.
illumos itself doesn’t have an actual release. You’re expected to use one of its distributions as far as I can tell, which should arguably be called “derivatives” instead. OpenIndiana seems to be the main desktop version.
I don’t know. I know there are some people who run SmartOS on their desktop, but I get the feeling it’s not targeting that use case, or at least there isn’t a lot of work going into supporting it.
“Hooray! We have forked an already small community into yet another smaller community because…”
Well, the “because” doesn’t really matter, even though they make extremely valid points! In an already incredibly fragmented community (how many derivatives of OpenSolaris does this make?) this makes the problem bigger…
I don’t follow illumos very closely, but are there reasons that community won’t assist in pushing towards solving the concerns that sparked unleashed? Surely illumos is also an operating system that “developers want to use,” no?
If the illumos community were healthy I would agree with you and I wouldn’t have bothered to create this fork. Sadly, I think the illumos community has problems and the people that truly have a lot of say where the project goes either don’t see them or like the status quo.
Two years ago when I started Unleashed, I had a dilemma: should I fork illumos or ditch it for one of the BSDs. When I realized that there were other people that were just as unhappy with the (lack of) direction illumos had, making a fork sounded like a good option. That’s how we got here.
Now where do we go from here is an open question. It is completely possible that Unleashed will fizzle, at which point I can say that no real harm was done. The illumos community will remain as small as it was two days ago, with major contributors like Delphix bailing on illumos in favor of Linux. If Unleashed takes off and in the process kills off illumos, the overall ecosystem will be better off. There might be a person or two grumpy that they can’t run their emacs binary from 1994, but in my opinion that is a small price to pay.
Surely illumos is also an operating system that “developers want to use,” no?
That is the reason I considered and ultimately went with a fork instead of bailing on it. The technology in Solaris/OpenSolaris/illumos/Unleashed is great, and I didn’t want to give it up. I wanted to give up the hugely inefficient and ultimately counter-productive contribution process.
Thanks for taking the time to respond. I know my post probably came off as aggressive, and if I’m honest, it was half intended to be. I think forks are very disruptive, and wish, of course, to minimize these sorts of things when at all possible.
When I realized that there were other people that were just as unhappy with the (lack of) direction illumos had, making a fork sounded like a good option.
This makes total and reasonable sense. I didn’t mean to imply that you hadn’t thought this through! And appreciate that you used it as a sort of last resort.
That is the reason I considered and ultimately went with a fork instead of bailing on it. The technology in Solaris/OpenSolaris/illumos/Unleashed is great, and I didn’t want to give it up. I wanted to give up the hugely inefficient and ultimately counter-productive contribution process.
Thanks for doing what you’re doing, and I wish Unleashed success (and maybe either domination or an eventual merge of the communities again)!
No problem. I really had no choice - someone on the internet was “wrong” ;)
I know my post probably came off as aggressive, and if I’m honest, it was half intended to be.
The phrasing certainly made me go “urgh, not one of those…” but it sounds like we both agree that forks are disruptive, but you think that it’s a negative thing while I think it is a positive thing. A reasonable difference of opinion.
Thanks for doing what you’re doing, and I wish Unleashed success (and maybe either domination or an eventual merge of the communities again)!
The phrasing certainly made me go “urgh, not one of those…”
There’s really nothing I can offer as a legitimate excuse for that. I’m sorry.
but you think that it’s a negative thing while I think it is a positive thing. A reasonable difference of opinion.
The additional context you’ve provided makes me feel that it probably is the right, and positive choice in this case. I’m not vehemently against forks if there’s a legitimately good reason [and just to be clear, moving on from supporting legacy stuff is the important divergence I’m seeing, as it frees up resources to move faster]. I am against forks that don’t offer some radical divergence in philosophy, though. These are often rooted from deep bikeshedding on topics that don’t matter in the grand scheme of things.
Two examples of justified forks in my opinion: @rain1 recently forked filezilla because it was incorporating “unwanted extra nonfree software.” Devuan is a fork of Debian that replaces systemd – a topic that is far beyond bikeshedding at this point, as it’s had (and will continue to have) a drastic effect on the portability of software to other ecosystems.
In my mind, there are two types of forks we’re talking about. One of them is a “fork” on github, where I clone the repo, make some changes, contribute it back to the original author (or maybe not!), and live a happy life. These types of forks are almost always ok. It’s the “You do you, man. You do you.” response.
The other “fork” is far more challenging, and far more likely to cause a rift in spacetime. Those are the large, and by all accounts, successful projects that as a result divide a community, and make it difficult for users and would be contributors to find the right thing to use. These projects fork very publicly, and are rather uncomfortable, to be honest.
In many cases, these forks occurred because egos were hurt (I wanted it yellow) – a social issue – not a technical issue. In other cases, there’s a large philosophical difference that impacts the general direction of the technology. This may be licensing, whether or not to support obscure platforms, a radical new idea or focus… etc. In all cases, even if there are legitimately great outcomes (OpenBSD comes to mind), there’s a period of confusion and frustration from users who are now forced to choose where to put their effort. They are forced into taking sides, and that’s unfair.
These are marketing concerns. Market share issues, to be precise.
They are valid for open source projects that are basically marketing tools, but they are pointless for free software that maximizes hackers’ freedom to hack.
Feeling the need to justify a fork is the first step towards asking permission.
The PATENTS file in projects like Fuchsia’s kernel sources just pushes for that.
Sorry, my friend. Most people don’t share your principles on what a ‘hack’ or a ‘hacker’ is. More often than not, the people using and writing software care more about getting the job done quickly and without frustration, and a fork makes that harder. It doesn’t matter how you classify it.
people using and writing software care more about getting the job done quickly and without frustration
And this is fine!
But, my friend, you need to understand the tools you use!
If you pick up free software that is distributed “WITHOUT ANY WARRANTY” just because it’s free of charge, and you completely miss the culture of the people who develop it, you won’t get your job done. The same applies if you pick open source software controlled by Google (or whoever) and you fork it to successfully challenge their market share.
In both cases, you’ll face surprises, unexpected costs and frustration.
Understanding the environment you operate in is strategic to “getting the job done”.
Most people don’t share your principles on what a ‘hack’ or a ‘hacker’ is.
Interesting! Do you have world-wide statistics to prove such a claim?
Not that it matters: “principles” stand to “artifacts” like “postulates” stand to “theorems”. How many people accept the postulates/principles is irrelevant.
I know that some people don’t share my principles. And I’m fine with it.
Do you know that some people don’t share your principles?
Are you fine with it?
But, my friend, you need to understand the tools you use!
If you pick up free software that is distributed “WITHOUT ANY WARRANTY” just because it’s free of charge, and you completely miss the culture of the people who develop it, you won’t get your job done. The same applies if you pick open source software controlled by Google (or whoever) and you fork it to successfully challenge their market share.
In both cases, you’ll face surprises, unexpected costs and frustration.
I read this several times and can’t figure out what you’re saying.
Why do I need to understand the culture of a tool I use? As long as it fulfills my technical needs and I know what I’m prohibited from doing by law, I can use it to get my job done.
There are ways around many of these concerns. I have a support contract, or trust in a distribution (say, Canonical for Ubuntu or Red Hat), which provides vulnerability disclosures and updates for me to apply. I have a development process that includes QA and automated CI infrastructure, so that breaking changes are caught before production… etc.
But, to the meta point:
But, my friend, you need to understand the tools you use!
Demonstrably this is not at all true. It’s easy to do a survey of 100 people – 10 people even, and ask them if they understand their tools. How are their tools implemented? How does the relational database they store and query data into/from store data on disk? How does the map type work in their favorite language? How does the VM work? How does the ORM work? How does the templating language they use work? How does the image processing library they use work to resize images, or rotate images, or whatever work? How does TensorFlow do all it does?
What you’ll find is that a large portion of engineers have no idea how things work. And they don’t need to know. Their job is to build CRUD apps for people who couldn’t care less if something takes a little bit longer. The developers themselves, in many cases, couldn’t care less about BTREE vs. HASH indexes, and don’t really know the difference. For the amount of data they manipulate, doing full table scans 3 times an hour (because they literally have 3 queries an hour) is completely sane, reasonable, and still puts a smile on the face of the administrative assistant who no longer has to go to a typewriter to type out a bunch of labels. Or who no longer has to print 10,000 college applications to give to admissions reviewers… or any number of other tasks where even the worst technology choices, recommended by underskilled developers, can make a ginormous and positive difference to the process.
Sure, but the simplest one is to understand the tools you use.
And actually, trusting Debian (or OpenBSD or whatever) or signing a support contract with Canonical (or Red Hat or Microsoft or whatever) requires the cultural understanding of such people that I was talking about.
Demonstrably this is not at all true. […]
…even the worst technology choices, recommended by underskilled developers, can make a ginormous and positive difference to the process.
Practically, you are saying: “everyone can become rich without working: just win the lottery!”. Well, this is not false. Stick to boring low-hanging fruit all your life and you will never face the issues that a professional developer has to consider every day.
What you describe is not to “get the job done”.
People die because of people who work this way.
In Italy we like to say: “even a broken clock can be right twice a day”.
Yes, incompetent developers can occasionally improve someone’s life, but most of the time they just mess things up beyond repair.
Practically, you are saying: “everyone can become rich without working: just win the lottery!”. Well, this is not false. Stick to boring low-hanging fruit all your life and you will never face the issues that a professional developer has to consider every day.
What you describe is not to “get the job done”.
People die because of people who work this way.
I believe this comment really lacks perspective. What you are saying is that the Shamar style of development is the only correct style of development, and anyone not doing it that way is not only doing it wrong but putting people’s lives at risk.
The industry I work in produces a lot of software and consumes a lot of software, however no company in this industry would consider itself a tech company. We have people whose job title is “Software Engineer”. But, for the most part, they make pretty bad technical decisions and are fairly unskilled relative to the engineers at most tech companies. But, they aren’t “trying to get rich without working” or “win the lottery”. They are very hard working. The industry just has a different set of values where the software is incidental to the actual problem the company is solving. A lot of the things you brought up in an earlier post about why one needs to understand the culture of the software they consume doesn’t actually apply in the industry I’m in. Security updates and backdoors are almost never going to be a concern because these systems are not open to the outside. The data they consume is entirely generated and processed inside the walls of the company. In the industry I’m in, we’re actually saving lives too! I mean that literally.
I hate to use this word, but your comment is elitist. Anyone not solving problems how you say is not a professional and just causing damage “beyond repair”. Your comment lacks humility and perspective yet is extremely assertive. It might be worth stepping back and questioning if what you assert so strongly is an ideal, a belief, or reality. Or perhaps it’s a challenge with the language and you don’t realize how assertive your comments sound relative to how assertive you meant them to be. But insisting people not following your development principles are killing people is a pretty strong statement, in any case.
But insisting people not following your development principles are killing people is a pretty strong statement, in any case.
I was not talking about software development in particular.
Incompetent engineers build bridges that fall down.
Incompetent physicians do not cure deadly diseases properly. And so on.
They can get some work done, but it’s luck, like winning the lottery.
On the contrary!
I mean that to make a choice you need competence.
I’m saying that only a competent professional who knows the tools she uses can really “get the job done”.
An incompetent one can be lucky sometimes, but you cannot trust her products, and thus the job is not done.
Or perhaps it’s a challenge with the language
Actually, I’m rather surprised by the opposition such a simple and obvious concept is facing. All the other craftsmen I know (the real ones, not the software ones) agree that it takes years to “own” their tools.
Probably we have diverged too much from the original topic, and we are facing a deep cultural mismatch.
In Europe (which, let me say, is not living up to its own values these days) we are used to being very diverse and inclusive (note: it took centuries of wars, rapes, debates, commerce, poetry, science, curiosity and many other contaminations to get here).
But we do not meld the meaning of words just to include more people.
We clearly see and state the differences, and happily talk about them.
And this is not elitism, it’s efficient communication.
When we say “job” or “done” we convey a precise message.
And if a bridge falls down and kills someone, we call the engineers who built it liars, because the job was not done. At times they even stop being called engineers at all.
You don’t give an inch, do you? I’ve explicitly said that I work in an industry that does not do software development like you have expressed it should be done, and your response is to keep on insisting on it. On top of that, you did this annoying thing where this discussion has clearly been about software development, but when I pushed back you moved the goalposts and started talking about bridges and medicine. It’s extremely challenging and frustrating to communicate with you; I need to work on not doing that. Thanks for the discussion, it was insightful for me.
Looks like someone got a degree in being right on the Internet! There’s no point in engaging with you, and if there was a feature to block users, I would make use of it.
If you lack arguments to support your assumptions, I can suggest simply stating such assumptions clearly. For example:
Users and companies are entitled to get work and value from software developers for free, because they are in a rush to get their job done.
FS and OSS forks hurt this right.
I would deeply disagree with such a premise.
But I wouldn’t argue against the conclusions.
I just spent 30 minutes carefully crafting a response to your absurd notion that everyone must be highly skilled or people will die. But, it’s just not worth it. You’ll find a way to twist it into something it’s not, and yell loudly about how I’m wrong without considering that you may be shortsighted in your assumptions.
Question that I have that isn’t clear from the post: do you intend to maintain enough compat with Illumos that you would be able to get improvements that were done in something like SmartOS? Are you planning on continuing to pull changes from Illumos? Planning to try contributing changes back? Or is this a hard fork where you don’t imagine there would be cross-pollination?
Source-level compat, yes, until it stops making sense. Binary compat, no.
I’ll continue to git-pull from illumos-gate until it starts to be too cumbersome due to divergence. Once that happens, I’ll probably still take commits from illumos-gate, but I’ll be more selective. In addition to illumos-gate, we cherry-pick changes from the illumos downstreams (omnios, illumos-joyent, etc.). This is open source; if those repos have good changes, I’d be stupid not to take them just because they were authored “outside”.
I have no plan to get changes back into illumos, however the code is open so others can do it. As an example, Toomas Soome took one of the cleanups in Unleashed and got it into illumos-gate (87bdc12930bfa66277c45510e399f8a01e06c376). He also has a work-in-progress to get our cpio-based boot_archives into illumos, but I don’t know the status of that.
I love how varied university education can be. We definitely had very different experiences. I’ll just ramble some hand wavy thoughts, don’t take them too seriously.
School in the US doesn’t have to be expensive, depending on circumstances. I went to UMass Amherst for a year (living on campus) before transferring to Worcester State (living at home). UMass was too expensive for my taste, and it was still quite a bit cheaper (in state) than private schools in the area you probably haven’t heard of (WPI, Clark, Assumption, Becker, Mount Holyoke, Amherst). State schools like Worcester State are much much cheaper by comparison, especially if you can finagle a way to live at home. I worked my way through school and had pretty small student loans at the end, about an order of magnitude less than others in my circle of friends. Order of magnitude is not an exaggeration. Living at home was a convenient option that I was fortunate to have, but everyone in my circle had that opportunity as well. But they prioritized other things. :-)
The most important thing I ever did was learn how to learn. This made most courses stupidly easy, in the sense that I still learned some things but I spent very little time on it. I don’t consider myself particularly bright, but figuring out how to do active reading in high school felt like a cheat code.
I very rarely tried anything new because I knew what I liked, and that’s OK. The only real friends that I stay in contact with from college are my wife, my boss and my mentor (where my mentor was from graduate school). (Sorry @jfredett, fellow woo stater, but we haven’t seen each other in a loooong time. :-))
I enjoyed my time in school overall. In part because I liked taking classes and expanding my exposure to topics I wanted to know more about in a structured way. But really, I liked school because I had so much god damn time. I always tried to arrange my schedule on a Tues/Thurs or Mon/Wed/Fri rhythm. Sure, I ended up with 12 hour (or more) days, but then I had alternating days off. Homework and exam prep always happened in spurts, so I’d almost always have huge huge chunks of time where I could just do whatever I wanted to. I look back on those days with longing. These days (with a full time job), I need to be much more efficient and ruthless with how I spend my time.
I almost never went to office hours, and disliked working in groups. I think I was too stubborn for office hours, because I would just do my best to work it out on my own. I disliked working in groups because the incentive structure was always completely off. Occasionally I worked with other motivated students, and those were great.
I enjoyed study groups in large part because I got to teach others. This reinforced the material even more for me. Win-win. But I didn’t do this often.
At every school I went to, I always found my little corners of quiet. I loved those so much, especially when nobody else was around. They really helped me a lot. In my three years at Worcester State, almost nobody went to the second floor of the library during the day. It was bliss.
Find a way to be engaged with what you’re spending your time on. You’ll be happier and so will the people around you. (Easier said than done, but it has always come naturally to me. More than that, I’ve found that I tend to be surrounded by people that do the same. I don’t know how that happened, but it’s happened several times now, so I’m pretty sure it isn’t coincidence.)
Eh, I run into you on the internet often enough. :) We should get a beer sometime though.
I can definitely echo a lot of these statements, but especially these two:
The most important thing I ever did was learn how to learn. This made most courses stupidly easy, in the sense that I still learned some things but I spent very little time on it. I don’t consider myself particularly bright, but figuring out how to do active reading in high school felt like a cheat code.
This is critical, but ‘learning how to learn’ has never been my favorite phrasing. The latter, “active reading” is closer, but I think I like “active learning” the most – the idea of not just learning something, but learning how you learn it, and critically examining and engineering better ways to learn. I was not a particularly great high school student, but college taught me pretty quickly how to treat learning as an engineering problem, and that was super valuable.
I almost never went to office hours, and disliked working in groups. I think I was too stubborn for office hours, because I would just do my best to work it out on my own. I disliked working in groups because the incentive structure was always completely off. Occasionally I worked with other motivated students, and those were great.
I was similar. I much preferred leading discussion in a peer group – in that way I ended up being forced to teach the material / defend my position (even when I wasn’t particularly confident I was right!). @burntsushi was a pretty common foil for my ramblings in the Math lab.
Having a good set of friends to talk math at was super valuable; I suspect that is the core thing to try to acquire in any situation that involves a lot of learning. I’ve developed a similar method of working now, my team consists of a lot of really smart people who know their fields well and have enough overlap that we can all bounce ideas off each other while still having areas we can feel expert in. It’s the best of both worlds, I think.
I suppose this is number three, but quiet parts of libraries are the best. The bottom floor of WPI’s library is where I got most of my work done.
Wow, I had no idea there were woo state alums here. I, too, went to Worcester State. I can’t compare it to any other undergraduate experience, but I had a great time and I came out with almost no debt (my 4 years there cost less than 1 semester for friends who went to more prestigious schools in the area). While the cost savings wasn’t an active decision at the time, it turned out to be a great one. I have found, for software engineering at least, the amount of negative impact going to a small school has on one’s career is minimal to zero. And since those loans are in my name, I’m much better off financially than those going to more expensive schools and having to pay for it (although I know many who had parents willing to foot the bill). Google might not have been waiting outside during my graduation, but I see little difference in my career relative to most friends who went to much more prestigious schools for undergrad.
I was there 2001 - 2006 (I think, I’m terrible with dates). My claim to fame is I was the first person to graduate with the Bioinformatics concentration. I don’t know if that still existed when you were there. I can’t wait to retire though, I miss college and want to go back full time!
Hah, nice. I also graduated with a bioinformatics concentration (and a Math degree). The biology and chemistry courses were awesome. Judging by the number of other people I was aware of that were pursuing a CS w/ bioinformatics concentration, I wouldn’t be surprised if I was the second one. :P
I was actually just reflecting on how much I enjoyed the bio and chem as well! I feel it really grounded me in the real world. In CS we just have control over so much since we’re defining so much of it but bio and chem are messy and fun and so many unknowns.
The distinction between ‘art project’ and ‘hack’ is mostly whether or not you come out of a culture that knows what ‘hack’ means. (I’m not sure I can count on that anymore, even for OS dev people.)
However, an artwork is not, in itself, a hack, just like a perfectly executed engineering task is not, by itself, a hack.
In other words humans can hack art just like we can hack engineering.
Hacks, however, are identified by the curiosity they express: if there is nothing that challenges common wisdom, it’s not a hack (but you know… any definition can be hacked… this one too!)
Another interesting aspect of your comment is the cultural perspective: is “hack” the best term to convey the meaning we are talking about?
Honestly I don’t know.
I cannot think of a better term for this specific meaning in the languages I know.
But this is probably a cultural bias.
Hackers have been around for centuries (if not millennia), so it’s suspect that mankind waited for MIT to pick a word from English.
We should probably ask a language hacker to look for translations in other languages, and find a term that clearly identifies the meaning without too much cultural bias.
In this particular case, ‘hack’ is a little over-loaded with meaning for my tastes.
XANA is a hack in several senses of the word: it’s not only pointless but jokey, ad-hoc, and running counter to best practices. iX, on the other hand, is much more conservative in its style – the only thing new about iX is the concept behind it.
The reason I used the term ‘art project’ is that, rather than the hacks of the demoscene or of the obfuscated C contest, I identify these projects more closely with design fiction & other kinds of conceptual art: I said “what if your whole OS was ZigZag” and then answered my own question in the form of usable code, in the same way R. Mutt[1] asked “what’s the outer limit of what constitutes art – is it just whatever’s in the context of the museum” and then answered by submitting a urinal.
The term I think best applies is one from conlanging. In the constructed language space, an ‘a priori philosophical language’ is a language invented to express a particular model of language directly and purely. For instance, lojban’s ancestor loglan began as an a priori constructed language based on the idea that Horn clauses are enough (and also as a kind of test of Sapir-Whorf); toki pona is an expression of the idea of an extremely limited vocabulary & an extreme vagueness of denotation (because every word has such a wide range of possible meanings, a lot more effort goes into interpretation than into sentence construction); ithkuil is designed to be extremely dense, removing redundancy and using complex conjugation tables and swaths of imported phonemes to make it possible to translate paragraphs of English text into a handful of syllables. Generally speaking, these languages are both simpler than natural languages & harder to use: conceptual purity handicaps certain kinds of expression.
In the same way, I thought about “what if the only thing in your OS was this particular concept”, and then tried to make something that bordered on usability without going outside of that plan. (Semi-mainstream OSes that have gone down this conceptual route exist & are criticized from one side for sticking to the purity of their idea too closely while from the other side criticized for breaking it – like plan9, with the ‘everything is a file’ idea – but once you give any ground to usability or compatibility the project becomes a lot harder.)
I threw out any normal feature that would make things harder without meshing conceptually with my main idea. For instance, both these systems use flat memory (all in ring 0) and neither support executing non-kernel binaries. Dynamic memory management is eschewed in favor of a preallocated chunk. This severely limits compatibility with existing systems, eliminates the possibility of low-level extensions, and prevents it from being anything more than a single-user system (since security features are nonexistent).
[1] R. Mutt has been considered an alias of Marcel Duchamp for a long time, but recent research indicates that it was most likely actually an associate living in New York City, whose name I have forgotten.
Why are the words ‘hacker’ and ‘hack’ so important to you? I mean that question genuinely. Your comments over the last few days read, to me, like you feel a strong need to own the word ‘hacker’ and apply it to things regardless of the intent. In other words: is it important to call something a ‘hack’ vs an ‘art-project’? If so, why?
I think you are reading my comments according to your own culture.
Ownership is not something I care much about.
Indeed, one of my criticisms of ESR’s work relates to his appropriation of the Jargon File.
is it important to call something a ‘hack’ vs an ‘art-project’?
Is it important to call something “engineering” vs “applied physics”?
Is it important to call something “art” vs “craft”?
I feel a need for a precise language. Don’t you?
If so, why?
Because the language we use forges the way we think.
By distorting the jargon, you affect what people can think easily and what they cannot.
The same applies to the difference between “free software” and “open source”, and to my realization that the concept of FOSS/FLOSS is just a deception of this kind.
We need proper terms to convey orthogonal concepts.
Hack, Hacking, Hacker are just words. But they convey a time-worn meaning all over the world.
I’m just using such words properly. And trying to notice when people do not.
I think you are reading my comments according to your own culture.
It’s pretty hard not to? For evidence, I cite the comments you’ve made over the last few days insisting on a particular definition of ‘hacker’ as well as properties of hacking regardless of the intent of the ‘hacker’. By “own” I mean you seem to have a very specific meaning that you feel it is important other people subscribe to.
Is it important to call something “engineering” vs “applied physics”?
Is it important to call something “art” vs “craft”?
Unless there are legal reasons, I don’t think it matters that much. If someone wants me to call them an engineer and I respect them, I’ll do it even if it doesn’t necessarily align with my view of being an engineer.
Because the language we use forges the way we think.
You’ve decided calling something a ‘hack’ is preferable to ‘art project’, so you seem to be pushing for us to use particular language, which suggests you want us to think in a particular way. Maybe calling it an art project is actually just as good, or maybe even better! Or why can’t we call it both things rather than deciding one or the other? Your original comment here is so exclusionary. “If X then it’s a Y”. You could have said “I think these are born out of curiosity, so I’d also consider them hacks”. That would not have prompted me to comment.
I believe your insistence on a word meaning a very particular thing and pushing it is making you close-minded and rigid.
Can we change words? Why not!
Do you have any proposal?
This question is missing the point, I believe. I don’t want to use a particular word. I just don’t want you insisting on which word I should use. You’re certainly free to do whatever you want (and I won’t comment on it anymore), I’m just not sure if you realize that in regards to this you can come off a bit arrogant and exclusionary. Maybe I’m the only one who interprets your comments that way, though.
However, all the definitions in the Jargon File except “someone who makes furniture with an axe” connote the same thing, just focusing on particular observable aspects of a hacker.
As one who fits all eight numbered definitions (but not the additional commentary about the inherent elitism), I see how all the definitions are partial and, overall, miss the point. They are, in other words, as pertinent as the descriptions of an elephant by blind men.
But, you know, dictionaries are human artifacts: they can be wrong and they can be fixed.
Meanwhile I’m just using the term properly.
you can come off a bit arrogant and exclusionary.
Well you might have a point on this.
Maybe ESR saw several hackers who looked arrogant and exclusionary as they talked authoritatively about their realm of competence, mistook his own misunderstanding for those hackers actually being arrogant, and then rationalized such behaviour as elitism.
The fact is: definitions define. From Latin de-finire, roughly “marking a boundary”.
Any definition limits the scope of the meaning of a word.
But having a clear understanding of hacking does not mean to be exclusionary.
On the contrary! As a curious person I want everybody to leverage my curiosity and become hackers themselves, so that I can leverage their curiosity to learn more and so on… recursively.
Well… I agree that the word “hacker” is too overloaded!
But I don’t agree! I think this is why we keep on talking past each other. I am fine with the word ‘hacker’ being fuzzy and unclear and I’m not a fan of that you are rigidly defining it.
I grew up with a fairly fuzzy and unclear definition of hacker so I never learned it should bother me. In general, when I have a deep conversation with someone I either define my important terminology during the discussion or have them define theirs. I don’t really care what the definition is just as long as we both agree.
I think even though it’s fuzzy and unclear the essence is usually close enough among all the definitions to get the gist of it.
I don’t think there is a lot of value derived from rigidly defining the word ‘hacker’ and getting the, fairly significant, buy-in from the world needed to agree on it.
Unless one can enforce it by law, I don’t really know of any success story where the number of definitions of a word has been reduced. Communication just seems to naturally expand.
As for SHA256 vs. SHA512, from a performance point of view, SHA512 seems to perform ~1.5x faster than SHA256 on 64-bit platforms. Not that that matters much in a case like this, where we’re calculating it for a very small file, and very infrequently. Just thought I’d put it out there. So, yeah, SHA256 works too if you want to go with that :)
Oh idk. I haven’t looked at the numbers in a while. I recall some systems, especially cost- or performance-sensitive ones, stuck with SHA-1 over SHA-256 years ago when I was doing comparisons. It was fine if basic collisions weren’t an issue in the use case.
Anecdotal, but I just timed running sha-512 and sha-256 10 times each on a largeish (512MB) file. Made sure to run them a couple of times before starting the timer to make sure the file was in cache. Results for sha-512 were:
27.66s user 2.86s system 99% cpu 30.562 total
And 256:
42.18s user 2.72s system 99% cpu 44.943 total
So it looks like sha-512 pretty clearly wins. (CPU is an i3-5005u).
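For anyone who wants to reproduce that comparison, the commands are roughly the following (plain coreutils; the file name is made up):
$ dd if=/dev/urandom of=test.bin bs=1M count=512
$ sha512sum test.bin > /dev/null
... run each sum once or twice first so the file is in the page cache, then:
$ time sha512sum test.bin > /dev/null
$ time sha256sum test.bin > /dev/null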
Cool stuff. Modern chips handle good algorithms pretty well. What I might look up later is where the dirt-cheap chips are on offload performance and if they’ve upgraded algorithms yet. That will be important for IoT applications as hackers focus on them more.
I said there are hardware accelerators for SHA-1 and SHA-2. Both are in use in new deployments, with one sometimes used for weak-CPU devices or legacy support. Others added more points to the discussion with current performance stats, something I couldn’t comment on.
Now, which of my two claims do you think is wrong?
As noted, SHA-1 has been on its way out for a while and shouldn’t be suggested.
I don’t know if your claim about weak-CPU devices or legacy support is true, and while you mentioned IoT in a response elsewhere, it clearly doesn’t apply in the context of filezilla, an FTP app people will be running on desktops/laptops. Even if one is using a new ARM laptop that is somewhat underpowered…
As the comment you responded to points out, one installs new software quite infrequently, so the suggestion based on performance seems odd, especially since the comment you responded to already points out that SHA-512 is generally faster to compute than SHA-256. In any case, suggesting SHA-1 for performance reasons seems insecure.
Whenever I read about the legendary programmers and inventors, these people sought out programming by themselves.
So I don’t have much hope that pushing these things onto kids would give a good return. Those with top-tier potential will already find their own way.
You don’t need to know how computers work to operate in a modern society just as you don’t need to know how a car works to drive. It’s good to know but not necessary.
Given the availability of computers, the extra ‘discovery’ of potential would, I think, be small.
So the whole ‘teach xyz to program’ seems like a mostly cost-ineffective boondoggle to me.
We can safely drop mathematics, physics and literature from school curriculums then. Most of the students aren’t ever going to be good at it, and talent will find the way.
Learning programming by yourself does not necessarily make you top talent, though we’d all love to entertain that idea. It’s certainly no worse for a self-motivated learner who is actually aided by the school system. Besides, the “self learners” of the old days didn’t come from the Amazon jungle to a running PDP rack and start hacking. They still had the fundamentals of logic, maths and reasoning taught in school.
If school failed me back then the same way it’s failing kids today, it’s by teaching students idiotic facts beyond the basics of reading, writing and math. Introduce kids to as many matters as you can in such a way as to cultivate curiosity, and you’ll have won.
The whole ‘teach xyz to abc’ is ultimately pointless if what you’re seeking is innovation. It’s super good if you’re raising cattle-citizens, though.
This seems orthogonal to what @varjag and @LibertarianLlama are saying. Llama seems to be arguing that being great at programming is innate and we shouldn’t bother teaching kids programming, because they’ll never be great, and if they were going to be great they wouldn’t need to be taught. And varjag is pointing out that education, historically, has not been about making the greats. Whether or not the quality of education is any good seems quite different from the question of whether we should educate.
I’m not responding to @varjag, although there is a relationship between what we both say. What I’m saying is, teaching programming for the sake of teaching programming is indeed pointless, but then again, so is pretty much everything beyond basic math and reading skills (then again, there’s the case of the enormous number of functional illiterates, so I’m not sure even that is technically necessary).
You’re correct, of course, both matters are vastly different. I think they’re ultimately connected, especially if you’re after cost-effectiveness, which I don’t agree should be the target of education, but that’s also another matter.
My bad, I did not correctly express my idea. I mean every part of my education which required the absorption of data for the sole purpose of regurgitation at a later date. I’ve lived that through many different subject matters. If you’re just pumping facts into brains so that you get graded on the quality of your repetition, it’s not really productive, and the students end up losing much of what they “learned”.
Wonder what he thinks of fossil and mercurial.
The fossil author deliberately disallowed rewriting history; he makes a good case for it. At one time, Mercurial history was immutable too, but I believe this has changed.
I guess my point is that there are DVCSes out there that satisfy his criteria; they’re just not git.
History can never be truly immutable so long as the data is stored on mutable media like a hard disk. Refusing to package tools that do it just makes people who need the feature find or build third-party tools.
About Mercurial, I believe it has always allowed rewriting history, but not by default — you have to change your configuration files to opt in to all the “dangerous/advanced” features.
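For example, the bundled history-editing extensions stay disabled until you turn them on in your config (a sketch; see hg help extensions for details):
$ cat >> ~/.hgrc <<'EOF'
[extensions]
rebase =
histedit =
EOF
$ hg histedit
... now available for rewriting local, unpublished history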
Haha. I would love it if I had the time to play. Perhaps next year. Thanks for the ping, though. I’ve forwarded this on to a few of my coworkers who play CTFs.
I’d love to if I hadn’t lost my memory, including my memory of hacking, to that injury. I never relearned it since I was all-in with high-assurance security at that point, which made stuff immune to almost everything hackers did. If I still remembered, I’d have totally been down for a Lobsters hacking crew. I’d bring a dozen types of covert channels with me, too. One of my favorite ways to leak small things was putting them in plain text in TCP/IP headers and/or in the throttling of what is otherwise boring traffic vetted by NIDS and the human eye. Or maybe in HTTPS traffic, where they said, “Damn, if only I could see inside it to assess it” while the data was on the outside, encoded but unencrypted. Just loved doing the sneakiest stuff with the most esoteric methods I could find, with much dark irony.
I will be relearning coding and probably C at some point in future to implement some important ideas. I planned on pinging you to assess the methods and tooling if I build them. From there, might use it in some kind of secure coding or code smashing challenge.
I’m having a hard time unpacking this post, and am really starting to get suspicious of who you are, nickpsecurity. Maybe I’ve missed some background posts of yours that explains more, and provides better context, but this comment (like many others) comes off…almost Markovian (as in chain).
“If I hadn’t lost my memory…” — of all the people on Lobsters, you seem to have the best recall. You regularly cite papers on a wide range of formal methods topics, old operating systems, security, and even in this post discuss techniques for “hacking” which, just sentences before, you say you “can’t remember how to do.”
You regularly write essays as comments…some of which are almost tangential to the main point being made. These essays are cranked out at a somewhat alarming pace. But I’ve never seen an “authored by” submitted by you pointing outside of Lobsters.
You then claim that you need to relearn coding, and “probably C” to implement important ideas. I’ve seen comments recently where you ask about Go and Rust, but would expect, given the number of submissions on those topics specifically, you’d have wide ranging opinions on them, and would be able to compare and contrast both with Modula, Ada, and even Oberon (languages that I either remember you discussing, or come from an era/industry that you often cite techniques from).
I really, really hate to have doubts about you here, but I am starting to believe that we’ve all been had (don’t get me wrong, we’ve all learned things from your contributions!). As far as I’ve seen, you’ve been incredibly vague about your background (and privacy is your right!). But that also makes it all the easier to believe that there is something fishy with your story…
I’m not hiding much past what’s private or activates distracting biases. I’ve been clear when asked on Schneier’s blog, HN, maybe here that I don’t work in the security industry: I’m an independent researcher who did occasional gigs if people wanted me to. I mostly engineered prototypes to test my ideas. Did plenty of programming and hacking when younger for the common reasons and pleasures of it. I stayed in jobs that let me interact with lots of people. Goal was social research and outreach on big problems of the time like a police state forming post-9/11 which I used to write about online under aliases even more than tech. I suspected tech couldn’t solve the problems created by laws and media. Had to understand how many people thought, testing different messages. Plus, jobs allowing lots of networking mean you meet business folks, fun folks, you name it. A few other motivations, too.
Simultaneously, I was amassing as much knowledge as I could about security, programming, and such, trying to solve the hardest problems in those fields. I gave up hacking since its methods were mostly repetitive and boring compared to designing methods to make hacking “impossible.” Originally a mix of public benefit and ego, I’d try to build on work by folks like Paul Karger to beat the world’s brightest people at their game, one root cause at a time, until a toolbox of methods and proven designs would solve the whole problem. I have a natural, savant-like talent for absorbing and integrating tons of information but a weakness for focusing on one thing over time to mature an implementation. One is exciting, one is draining after a while. So, I just shared what I learned with builders as I figured it out with lots of meta-research. My studies of the work of master researchers and engineers aimed both at individual problems in security/programming (e.g. secure kernels or high productivity) and at ways to integrate them, like a unified field theory of sorts. Wise friends kept telling me to just build one or more of these to completion (“focus Nick!”). Probably right, but I’d have never learned all I have if I did. What you see me post is what I learned during all the time I wasn’t doing security consulting, building FOSS, or something else people pushed.
Unfortunately, right before I started to go for production stuff beyond prototypes, I took a brain injury in an accident years back that cost me most of my memory, muscle memory, hand-eye coordination, reflexes, etc. Gave me severe PTSD, too. I can’t remember most of my life. It was my second great tragedy after a triple HD failure in a month or two that cost me my data. All I have past my online writings are mental fragments of what I learned and did. Sometimes I don’t know where they came from. One of the local hackers said I was the Jason Bourne of INFOSEC: didn’t know shit about my identity or methods but what’s left in there just fires in some contexts for some ass-kicking stuff. I also randomly retain new stuff that builds on it. As long as it’s tied to strong memories, I’ll remember it for some period of time. The stuff I write up helps, too, which mostly went on Schneier’s blog and other spaces since some talented engineers from high-security were there delivering great peer review. Made a habit out of what worked. I put some on HN and Lobsters (including authored by’s). They’re just text files on my computer right now that are copies of what I told people or posted. I send them to people on request.
Now, a lot of people just get depressed, stop participating in life as a whole, and/or occasionally kill themselves. I had a house to keep and a shitty job that went from a research curiosity to a necessity since I didn’t remember admining, coding, etc. I tried to learn C# in a few weeks for a job once, like I could’ve before. It just gave me massive headaches. It was clear I’d have to learn a piece at a time like I guess is normal for most folks. I wasn’t ready to accept it, plus I had a job to re-learn already. So, I had to re-learn the skills of my existing job (thank goodness for docs!), some people stuff, and so on to survive while others were trying to take my job. Fearing discrimination for disability, I didn’t even tell my coworkers about the accident. I just let them assume I was mentally off due to the stress many of us were feeling as the Recession led to layoffs in and around our households. I still don’t tell people until after I’m clearly a high-performer in the new context. Pointless since there’s no cure they could give but plenty of downsides to sharing it.
I transitioned out of that to other situations. Kind of floated around keeping the steady job for its research value. Drank a lot since I can’t choose what memories I keep and what I have goes away fast. A lot of motivation to learn stuff if I can’t keep it, eh? What you see is the stuff I repeated the most for years on end teaching people the fundamentals of INFOSEC and such. It sticks, mostly. Now, I could’ve just relearned some tech piece by piece in a focused area, got a job in that, built up gradually, transitioned positions, etc… basically what non-savants do is what I’d have to do. Friends kept encouraging that. Still had things to learn talking to people, especially where politics were going in lots of places. Still had R&D to do on trying to find the right set of assurance techniques for the right components that could let people crank out high-security solutions quickly and market-competitive. All the damage in the media indicated that. The Snowden leaks confirmed most of my ideas would’ve worked, while most of the security community’s recommendations, which didn’t address root causes, were being regularly compromised just as the people who taught me predicted. So, I stayed on that out of perceived necessity that not enough people were doing it.
The old job and situation are more a burden now than useful. Sticking with it to do the research cost me a ton. I don’t think there’s much more to learn there. So, I plan to move on. One social project failed in an unexpected way late last year that was pretty depressing in its implications. I might take it up again since a lot of people might benefit. I’m also considering how I might pivot into a research position where I have time and energy to turn prior work into something useful. That might be Brute-Force Assurance, a secure (thing here), a better version of something like LISP/Smalltalk addressing the reasons for low uptake, and so on. Each project idea has totally different prerequisites that would strain my damaged brain to learn or relearn. Given prior work and where tech is at, I’m leaning most toward a combo of BFA with a C variant done more like live coding, maybe embedded in something like Racket. One could rapidly iterate on code that extracts to C with about every method and tool available thrown at it for safety/security checks.
So, it’s a mix of indecision and my work/life leaving me feeling exhausted all the time. Writing up stuff on HN, Lobsters, etc about what’s still clear in my memory is easy and rejuvenating in comparison. I also see people use it on occasion with some set to maybe make waves. People also send me emails or private messages in gratitude. So, probably not doing what I need to be doing but folks were benefiting from me sharing pieces of my research results. So, there it is all laid out for you. A person outside security industry going Ramanujan on INFOSEC and programming looking for its UFT of getting shit done fast, correct, and secure (“have it all!”) while having day job(s) about meeting, understanding, and influencing people for protecting or improving democracy. Plus, just the life experiences of all that. It was fun while it lasted. Occasionally so now but more rare.
Sure. I’m strange and seemingly contradictory enough that I expect confusion or skepticism. It makes sense for people to wonder. I’m glad you asked since I needed to do a thorough writeup on it to link to, versus scattered comments on many sites.
I have to admit similar misgivings (unsurprisingly, I came here via @apg and know @apg IRL). For someone so prolific and opinionated you have very little presence beyond commenting on the internet. To me, that feels suspicious, but who knows. I’m actually kind of hoping you’re some epic AI model and we’re the test subjects.
have you considered “I do not choose to compete” instead of “If only I hadn’t had that memory loss”?
I did say the way my mind works makes it really hard to focus on long-term projects to completion. Also, I probably should’ve been doing some official submissions in ACM/IEEE but polishing and conferencing was a lot of work distracting from the fun/important research. If I’m reading you right, it’s accurate to say I wasn’t trying to compete in academia, market, or social club that is the security industry on top of memory loss. I was operating at a severe handicap. So, I’d (a) do those tedious, boring, distracting, sometimes-political things with that handicap or (b) keep doing what I was doing, enjoying, and provably good at despite my troubles. I kept going with (b).
That was the decision until recently when I started looking at doing some real, public projects. Still in the planning/indecision phase on that.
“But, lies have a way of growing, and there is some line down the road where forgive-and-forget becomes GTFO.”
I did most of my bullshitting when I was a young hacker trying to get started. Quite the opposite of your claim: the snobby, elitist, ego-centered groups I had to start with told you to GTFO by default unless you said what they said, did what they expected, and so on. I found hacker culture to be full of bullshit beliefs and practices with no evidence backing them. That’s true to this day. Just getting into the few opportunities I had required me to talk big… being a loud wolf facing other wolves… plus deliver on a lot of it just to not be filtered. I’d have likely never entered INFOSEC or verification otherwise. Other times have been personal failures that required humiliating retractions and apologies when I got busted. I actually care about avoiding unnecessary harm or aggravation to decent people. I’m sure more failures will come out over time and cost me, but there will be a clear difference between the old and newer me. Since I recognize my failure there, I’m focusing on security BSing for the rest of this comment since it’s most relevant here.
The now, especially over the past five years or so, has been me sharing hard-won knowledge with people, with citations. Most of the BS is stuff security professionals say without evidence that I counter with evidence. Many of their recommendations got trashed by hackers, with quite a few of mine working or working better. Especially on memory safety, small TCB’s, covert channels, and obfuscation. I got much early karma on HN in particular mainly countering BS in fads, topics/people w/ special treatment, echo chambers, and so on. My stuff stayed greyed out but I had references. They usually got upvoted back by the evening. To this day, I get emails thanking me for doing what they said they couldn’t since any dissenting opinion on specific topics or individuals would get slammed. My mostly-civil, evidence-based style survived. Some BS actually declined a bit since we countered it so often. Just recently had to counter a staged comparison here which is at 12 votes worth of gratitude, high for HN dissenters. The people I counter include high-profile folks in the security industry who are totally full of shit on certain topics. Some won’t relent no matter how concrete the evidence is since it’s a game or something to them. Although I get ego out of being right, I mainly do this since I think safe, secure systems are a necessary, public good. I want to know what really works, get that out there, and see it widely deployed.
If anything, I think my being a bullshitting hacker/programmer early on was a mix of justified and maybe overdoing it vs a flaw I should’ve avoided. I was facing locals and an industry that’s more like a fraternity than a meritocracy, itself constantly reinforcing bullshit and GTFO’ing dissenters. With my learning abilities and obsession, I got real knowledge and skills pretty quickly, switching to my current style of just teaching what I learned in a variety of fields with tons of brainstorming and private research. Irritated by constant BS, I’ve swung way in the other direction by constantly countering BS in IT/INFOSEC/politics while being much more open about my personal situation in ways that can cost me. I also turned down quite a few job offers for likely five to six digits, telling them I was a researcher “outside of industry” who had “forgotten or atrophied many hands-on skills.” I straight-up tell them I’d be afraid to fuck up their systems by forgetting little, important details that only experience (and working memory) gives you. Mainly admining or networking stuff for that. I could probably re-learn safe/secure C coding or something enough to not screw up commercial projects if I stayed focused on it. Esp FOSS practice.
So, what do you think? Did I have justification for at least some of my early bullshit, much like playing the part for job interviews w/ HR drones? Or should I have been honest enough that I never learned or showed up here? There might be middle ground, but that cost seems likely given past circumstances. I think my early deceptions or occasional fuckups are outweighed by the knowledge/wisdom I obtained and shared. It definitely helped quite a few people, whereas talking big to gain entry did no damage that I can tell. I wasn’t giving bad advice or anything: just a mix of storytelling with letting their own perceptions seem true. Almost all of them are way in my past. So, really curious what you think: how justified is someone who enters a group of bullshitters with arbitrary filtering criteria in out-bullshitting and out-performing them to gain useful knowledge and skills? That part specifically.
As a self-piloted, ambulatory tower of nano machines inhabiting the surface of a wet rock hurtling through outer space, I have zero time for BS in any context. Sorry.
I do have time for former BSers who quit doing it because they realized that none of these other mechanical wonders around them are actually any better or worse at being what they are. We’re all on this rock together.
p.s. the inside of the rock is molten. w t actual f? :D
Actually, come to think of it, I will sit around and B.S. for hours, in person with close friends, for fun. Basically just playing language games that have no rules. It probably helps that all the players love each other. That kind of BS is fine.
I somehow missed this comment before or was dealing with too much stuff to respond. You and I may have some of that in common since I do it for fun. I don’t count that as BS people want to avoid so much as just entertainment since I always end with a signal it’s bullshit. People know it’s fake unless tricking them is part of our game, esp if I owe them a “Damnit!” or two. Even then, it’s still something we’re doing voluntarily for fun.
My day-to-day style is a satirist like popular artists doing controversial comedy or references. I just string ideas together to make people laugh, wonder, or shock them. Same skill that lets me mix and match tech ideas. If shocking stuff bothers them, tone it way down so they’re as comfortable as they let others be. Otherwise, I’m testing their boundaries with stuff making them react somewhere between hysterical laughter and “Wow. Damn…” People tell me I should Twitter the stuff or something. Prolly right again but haven’t done it. Friends and coworkers were plenty fun to entertain without any extra burdens.
One thing about sites like this is staying civil and informational actually makes me hide that part of my style a lot since it might piss a lot of people off or risk deleting my account. I mostly can’t even joke here since it just doesn’t come across right. People interpret via impression those informational or political posts gave vs my in-person, satirical style that heavily leans on non-tech references, verbal delivery, and/or body language. Small numbers of people face-to-face instead of a random crowd, too, most of the time. I seem to fit into that medium better. And trying to be low-noise and low-provocation on this site in particular since I think it has more value that way.
Just figured I’d mention that since we were talking about this stuff. I work in a pretty toxic environment. In it, I’m probably the champion of burning jerks with improv and comebacks. Even most naysayers pay attention with their eyes and some smirks saying they look forward to next quip. I’m a mix of informative, critical, random entertainment, and careful boundary pushing just to learn about people. There’s more to it than that. Accurate enough for our purposes I think.
Lmao. Alright. We should get along fine then given I use this site for brainstorming, informing, and countering as I described. :)
And yeah it trips me out that life is sitting on a molten, gushing thing being supplied energy by piles of hydrogen bombs going off in a space set to maybe expand into our atmosphere at some point. That is if a stray star doesn’t send us whirling out of orbit. Standing in the way of all of this is the ingenuity of what appear to be ants on a space rock whose combined brainpower got a few off of it and then back on a few times. They have plans for their pet rock. Meanwhile, they scurry around on it making all kinds of different visual, IR, and RF patterns for space tourists to watch for a space buck a show.
As with most of Gary Bernhardt’s writing, I loved this piece. I read it several times over, as I find his writing often deeply interesting. To me, this is a great case study in judging, by Americanized principles, speech between two non-Americans (a Pole and a Finn) communicating in a second language.
There are several facets at play here as I see it:
There’s a generational difference between older hackers and newer ones. For older hackers, the code is all that matters, niceties be damned. Newer hackers care about politeness and being treated well. Some of this is a product of money coming in since the 90s, and people who never would’ve been hackers in the past are hackers now.
Linux is Linus’ own project. He’s not going to change. He’s not going to go away. If you don’t like the way he behaves, fork it. Run your own Linux fork the way you want, and you’ll see whether or not the niceties matter. Con Kolivas did this for years.
There are definitely cultural issues at play. While Linus has a lot of exposure to American culture, he’s Finnish. Finnish people are not like Americans. I find the American obsession with not upsetting people often infuriatingly two-faced, and I’m British. I have various friends in other countries who find the much more minor but still present British obsession with not upsetting people two-faced, and they’re right.
Go to Poland, fuck up and people will tell you. Go to Germany, do something wrong and people will correct you. Go to Finland, do something stupid getting in the way of a person’s job and probably they’ll swear at you in Finnish. I’m not saying this is right or wrong; it’s just that the rest of the world works differently to you, and while you can scream at the sea about perceived injustices, the sea will not change its tides for you.
Yes Linus is being a jerk, but it’s not like this is an unknown quantity. Linus doesn’t owe you kindness. You don’t owe Linus respect either. If his behaviour is that important to you, don’t use Linux.
I think this is a false comparison of some sort. Americans worrying doesn’t say anything useful about Finns.
In my experience of dealing with Finns, they don’t sugar coat things. When something is needed to be said, the Finns I’ve interacted with are extremely direct and to the point, compared to some other cultures. Would you say that’s fair?
I emphatically disagree that Linus is representative of the social culture around me in Finland.
I didn’t say that he’s representative of Finnish culture. He’s a product of it. He wasn’t raised American. He didn’t grow up immersed in American culture and values. It would be unrealistic to expect him to hold or conform to American values.
Nonviolent, clear communication is not the same thing as avoiding difficult subjects. It’s the opposite!
Definitely! Out of interest, what are your thoughts on this in terms of applicability to his communication style? I’m fairly certain there’s a general asshole element to his style, but I wonder how much (if any) is influenced by this.
He didn’t grow up immersed in American culture and values. It would be unrealistic to expect him to hold or conform to American values.
As an Italian, I can say that after WWII, the US did a great job of spreading its culture in Europe.
Initially to counter “Bolshevik” influence, later as a carrier for its products.
They have been largely successful.
Indeed, I love Joplin just like I love Vivaldi, Mozart and Beethoven! :-)
But we have thousands of years of variegated history, so we are not going to completely conform anyway. After all, we are proud of our deep differences, as they enrich us.
At the risk of getting into semantics, Finland was much more neutral post WWII than other European nations due to realpolitik.
Also, there is something to say for Italian insults, by far some of the finest and most perverse, blasphemous poetry I’ve ever had the pleasure of experiencing. It’s the sort of level of filth that takes thousands of years to age well :)
speech between two non-Americans (a Pole and a Finn) communicating in a second language.
How is that relevant? On my current team, we have developers from Argentina, Bosnia, Brazil, China, India, Korea, and Poland, as well as several Americans (myself included). Yet as far as I can recall from the year that I’ve been on this team so far, all of our written communication has been civil. And even in spoken communication, as far as I can recall, nobody uses profanity to berate one another. To be fair, this is in a US-based corporate environment. Still, I don’t believe English being a second language is a reason to not be civil in written communication.
You’re comparing Linux, a Finnish-invented, international, volunteer-based non-corporate project to a US-based corporate environment, and judging Linus’ communications against your perception of a US-based corporate environment. You’re doing the same thing as the author, projecting your own values onto something that doesn’t share those values.
Additionally, by quoting the words I said and following that up with a reference to a US-based corporate environment, you’ve judged the words of a non-American who wasn’t speaking to you by your own US-based corporate standards.
I hope that helps you understand my point more clearly. My point isn’t that Linus does or doesn’t act an asshole (he does), but that expecting non-Americans to adhere to American values, standards or norms is unrealistic at best, and cultural colonialism at worst.
For older hackers, the code is all that matters, niceties be damned. [..]
Some of this is a product of money coming in since the 90s, and people who never would’ve been hackers in the past are hackers now.
No, people who would’ve never been hackers in the past, are not hackers now either.
And hackers have always cared about more than code. Hacking has always been a political act.
Linus is not a jerk; his behaviour is pretty deliberate. He does not want to conform.
He is not much different from Dijkstra, Stallman or Assange.
Today, cool kids who do not understand what hacking is insult hackers while calling themselves hackers.
Guess what? Hackers do care about your polite corporate image as much as they do care about dress code.
There are definitely cultural issues at play.
Not an issue. It’s a feature! Hackers around the world are different.
And we are proud of the differences, because they help us to break mainstream groupthink.
This is a really interesting idea! I’m seeing this kind of idea more and more these days and I haven’t been able to work out what it means. I guess you don’t mean something as specific as “Hacking has always been in favour of a particular political ideology” nor something as general as “Hacking has always had an effect on reality”. So could you say something more precise about what you mean by that?
This is a good question that is worthy of a deep answer. I’ll rush a fast one here, but I might write something more in the near future.
All hacks are political, but some are more evidently so. An example is Stallman’s GNU GPL. Actually the whole GNU project is very political. Almost as political as BSDs. Another evidently political hack was done by Cambridge Analytica with Facebook’s user data.
The core value of hacker activity is curiosity: hackers want to learn. We value freedom and sharing as a means to get more knowledge for humanity.
As such, hacking is always political: its goal is always to affect (theoretically, to improve) the community in one way or another.
Challenging laws or authorities is something that follows naturally from such a value, but it’s not done to get power or profit, just to learn (and show) something new. This shows how misleading the distinction between hat colours is: if you are a hacker you won’t have a problem violating stupid laws to learn and/or share some knowledge, be it a secret military cable, how to break a DRM system or how to modify a game console: it’s not the economic benefit you are looking for, but the knowledge. The very fact that some knowledge is restricted, forbidden or simply unexplored is a strong incentive for a hacker to try to gain it, using her knowledge and creativity.
But even the most apparently innocent hack is political!
See Rust, Go, Haskell or Oberon: each with its own vision of how and by whom programming should be done, and of what one should expect from software.
See HTTP browsers: very political tools that let strangers from a different state run code (soon assembly-like) on your pc (ironically with your consent!).
See Windows, Debian GNU/Linux or OpenBSD: each a powerful operating system with its own values and strong political vision (yes, even OpenBSD).
See ESR’s appropriation of the Jargon File (not much curiosity here actually, just a pursuit of power)!
Curiosity is not the only value of a hacker, but it is one all hackers share.
Now, this is also a value each hacker expresses in a different way: I want everyone to become a hacker, because I think this would benefit the whole of humanity. Others don’t want to talk about the political responsibility of hacking because they align with the regime they live in (be it Silicon Valley, Raqqa, Moscow or whatever), and politically aware hackers might subvert it.
But even if you don’t want to acknowledge such responsibility, if you hack, you are politically active, for better or worse.
That’s also the main difference between free software and open source software, for example: free software fully acknowledges such ethical (and thus political) responsibility; open source negates it.
So if I understand you correctly you are saying something much closer to “Hacking has always attempted to change the world” than “Hacking has always been in support of a political party”.
Politics is to political parties, what economy is to bankers.
If you read “Hacking has always been a political act” as something related to political parties, you should really delve deeper in the history of politics from ancient Athens onwards.
“Hacking has always attempted to change the world”
No.
This is a neutral statement that could be the perfect motto/tagline for a startup or a war.
Hacking and politics are not neutral. They are both strongly oriented.
Politics is oriented to benefit the polis.
Indeed, lobbying for particular interests is not politics at all.
Hacking is not neutral either.
Hacking is rooted in the international scientific research that was born (at least) in the Middle Ages.
Hackers solve human problems. For all humans. Through our Curiosity.
IMO, you’re defining “Hacking is political” to the point of uselessness. Basically, nothing is apolitical in your world. Walking down the street is a political statement on the freedom to walk. Maybe that’s useful in a warzone but in the country I live in it’s a basic right to the point of being part of the environment. I don’t see this really being a meaningful or valuable way to talk about things. I think, instead, it’s probably more useful for people to say “I want to be political and the way I will accomplish this is through hacking”.
Hacking, instead, is political in its very essence. Just like Science. And Math.
Maybe it’s the nature of knowledge: an evolutive advantage for the humanity as a whole.
Or maybe it is just an intuitive optimization that serves hackers’ curiosity: the more I share my discoveries, the more brains can build upon them, the more interesting things I can learn from others, the more problem solved, the more time for more challenging problems…
For sure, everyone can negate or refuse the political responsibility that comes from hacking, but such behaviour is political anyway, even if short-sighted.
I just don’t see it. I think you’re claiming real estate on terminology in order to own a perspective. In my opinion, intent is usually the dominating factor, for example murder vs manslaughter (hey, I’m watching crime drama right now). Or a hate crime vs just beating someone up.
You say:
As such, hacking is always political: its goal is always to affect (theoretically, to improve) the community in one way or another.
But I know plenty of people who do what would generally be described as hacking with no such intent. It may be a consequence that the community is affected but often times it’s pretty unlikely and definitely not what they were trying to do.
Now, I agree that Hacking and Engineering overlap.
But they differ more than murder and manslaughter do.
Because hackers use engineering.
And despite the fact that people abuse all technical terms, we still need proper terms and definitions.
So despite the fact that everyone apparently wants to leverage terms like “hacking” and “freedom” in their own marketing, we still need to distinguish hackers from engineers and free software from open source.
And honestly I think it’s easy to take them apart, in both cases.
Politics is the human activity that creates, manages and preserves the polis.
Polis was the word ancient Greeks used for the “city”, but by extension we use it for any “community”.
In our global, interconnected world, the polis is the whole mankind.
So Politics is the set of activities people do to participate in our collective life.
One of my professors used to define it as “the art of living together”.
Another one, roughly as “the science of managing power for/over a community”.
Anyway, the value of a political act depends on how it makes the community stronger or weaker. Thus politics is rarely neutral. And so is hacking.
Thanks a lot. That does make things clearer. However, I am still a bit confused by the definition “Politics is the human activity that creates, manages and preserves the polis.” I admit that I don’t yet understand how saying that “intent is usually the dominating factor” is itself a political act, but at least I now have a framework in which to think about it more.
I think the author did a pretty good job of editing the message in such a way that it was more clear, more direct, and equally forceful, while ensuring that all of that force was directed in a way relevant to the topic at hand.
(Linus has strong & interesting ideas about standardization & particular features. I would love to read an essay about them. The response to a tangentially-related PR is not a convenient place to put those positions: they distract from the topic of the PR, and also make it difficult to find those positions for people who are more interested in them than in the topic of the PR.)
The resulting message contains all of the on-topic information, without extraneous crap. It uses strong language and emphasis, but limits it to Linus’s complaints about the actually-submitted code – in other words, the material that should be emphasized. It removes repetition.
There is nothing subtle about the resulting message. Unlike the original message, it’s very hard to misread as an unrelated tangent about standardization practices that doesn’t address the reasons for rejecting the PR at all.
The core policy being implemented here is not “be nice in order to avoid hurting feelings”, but “remove irrelevant rants in order to focus anger effectively”. This is something I can get behind.
I find the American obsession with not upsetting people often infuriatingly two-faced, and I’m British.
[…]
Go to Poland, fuck up and people will tell you. Go to Germany, do something wrong and people will correct you. Go to Finland, do something stupid getting in the way of a person’s job and probably they’ll swear at you in Finnish.
Just wanted to point out that America is a huge country and its population is not homogenous. For example, you could have replaced Poland, Germany, and Finland with “Boston” and still have been correct (though, they’d just swear at you in English 🙂).
I think because most American tech comes out of San Francisco/Silicon Valley, it skews what is presented as “Americanized principles” to the international tech community.
Just wanted to point out that America is a huge country and its population is not homogenous.
Down here in the South, they have an interesting mix of trying to look/sound more civil or being blunt in a way that lets someone know they don’t like them or think they’re stupid. Varies by group, town, and context. There’s plenty of trash talking depending on that. Linus’s style would fit in pretty well with some of them.
Where YAML gets most of its bad reputation from is actually not YAML itself but the fact that some projects (to name a few: Ansible, Salt, Helm, …) shoehorn a programming language into YAML by adding a template language on top. And then they pretend it’s declarative because it’s YAML. YAML + templating is as declarative as any language that has branches and loops, except that YAML wasn’t designed to be a programming language and is rather poor at it.
In the early days, Ant (Java build tool) made this mistake. And it keeps getting made. For simple configuration, YAML might be fine (though I don’t enjoy using it), but there comes a point where a programming language needs to be there. Support both: YAML (or TOML, or even JSON) and then a programming language (statically typed, please, don’t make the mistake that Gradle made in using Groovy – discovery is awful).
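To make the “support both” suggestion concrete, here is a minimal sketch (in TypeScript; the ServiceConfig type and every name in it are made up for illustration, not any particular tool’s schema) of keeping the programmable part in a statically typed language while still producing plain data:

    // Hypothetical sketch: the "programmable" part of a config written in a statically
    // typed language instead of a template language layered on top of YAML.
    interface ServiceConfig {
      name: string;
      replicas: number;
      logLevel: "debug" | "warn";
    }

    const environments = ["staging", "production"] as const;

    // Real loops and conditionals, checked by the compiler; the result is still plain
    // data that can be serialized to YAML/JSON for tools that expect it.
    export const services: ServiceConfig[] = environments.map((env): ServiceConfig => ({
      name: `api-${env}`,
      replicas: env === "production" ? 3 : 1,
      logLevel: env === "production" ? "warn" : "debug",
    }));

The branching and looping still happen, but the compiler sees them instead of having them smuggled in through string templates.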
There is also UCL (Universal Config Language?) which is like nginx config + JSON/YAML emitters + macros + imports. It does some things that bother me so I stick to TOML, but it seems to be gaining some traction in the FreeBSD world. There is one thing I like about it, which is that there is a CLI for getting/setting elements in a UCL file.
I disagree with the negative posts. Writing about something you’ve just learned is absolutely a wonderful way to cement the knowledge, record it as you understand it for posterity if only for yourself, and help you pull others up right behind you. It’s not your responsibility to keep your ideas to yourself until some magic day when you reach enlightenment and only then confer blessed knowledge on the huddled masses; a lot of this stuff (the specific tech, for the most part) moves too damn fast for that anyway. Maybe we need better mechanisms for surfacing the best information, sure, but discouraging people (yes, even noobs) from sharing what they’ve learned only ensures we’ll have fewer people practiced in how to do it effectively in the future.
That said, I do 1000% agree that people writing in public should be as up front as possible about where they are coming from and where they are at. I definitely get annoyed with low quality information that also carries an authoritative tone.
There’s a world between documenting how you learned a thing, and writing a tutorial for that same thing. If you’re learning a thing, probably don’t write a tutorial. I agree with you, writing about a freshly learned lesson helps in making the learning more permanent, though.
In the case of projects, I’d rather see people committing documentation changes back to the project, at least here the creator of the project can review it.
It’s a free internet and nobody can stop someone from doing this, but, IMO, the problem with technology is not that there are too few poorly written tutorials out there. Maybe it’s worth finding other ways of being constructive.
Writing it down can help the mind remember or think on things. If errors are likely, then maybe they just don’t publish it. They get the benefits of writing it down without the potential downsides.
Java is a language, while Node is a runtime. Node should be compared against the JVM because each platform can be targeted by different languages. For example, I can target both Node and the JVM with Clojure. In that scenario the problems regarding locking threads don’t exist because Clojure is designed to be thread safe and it provides tools, such as atoms, for working with shared mutable state.
My experience targeting both the JVM and Node, is that the JVM provides a much simpler mental model for the developer. The JVM allows you to write predominantly synchronous code, and the threads are used to schedule execution intelligently ensuring that no single chunk of code hogs the CPU for too long. With Node you end up doing scheduling by hand, and it’s your responsibility to make sure that your code isn’t blocking the CPU.
Here’s a concrete example from a script I ended up writing on Node:
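(The snippet itself did not survive this copy. As a rough sketch of the kind of hand-rolled scheduling being described, not the original script, it might look something like the following, where records, processRecord and BATCH_SIZE are made-up names:)

    // Process a large array in small batches, handing control back to the event loop
    // between batches so other callbacks are not starved while the import runs.
    const BATCH_SIZE = 1000;

    function processAll(
      records: string[],
      processRecord: (r: string) => void,
      done: () => void,
    ): void {
      let i = 0;
      function runBatch(): void {
        const end = Math.min(i + BATCH_SIZE, records.length);
        for (; i < end; i++) {
          processRecord(records[i]);
        }
        if (i < records.length) {
          setImmediate(runBatch); // yield to the event loop, then continue
        } else {
          done();
        }
      }
      runBatch();
    }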
You could use promises or async to make the Node example a bit cleaner, but at the end of the day you’re still doing a lot more manual work and the code is more complex than it would’ve been with threads.
I don’t really see how that’s the case. The problem I’m describing is that Node has a single execution thread, and you can’t block it. This means that the burden of breaking up your code into small chunks and coordinating them is squarely on the developer.
As I said, you could make the code a bit more concise, but the underlying problem is still there. For example, I used promises here, but that’s just putting on a bandaid in my opinion.
Threads are just a better default from the developer perspective, and it’s also worth noting that you can opt into doing async on the JVM just fine if you really wanted to. It’s not a limitation of the platform in any way.
Threads are just a better default from the developer perspective
There is the caveat that threads (at least in the JVM) dramatically increase the complexity of the memory model and are generally agreed to make it harder to write correct code. Single-threaded event-loop style programs don’t remove the chance of race conditions and deadlocks, but they remove a whole class of issues. Personally, I like something like the Erlang model, which is fairly safe and scales across hardware threads. My second personal preference is for a single-threaded event loop (although I generally use it in OCaml, which makes expressing the context switches much more pleasant than in JavaScript/Node).
The part about it being harder to write correct code only applies to imperative languages, though. This is why I’m saying that it’s important to separate the platform from the language. I like the Erlang model as well; however, the shared-nothing approach does make some algorithms trickier.
Personally, I found the Clojure model of providing thread-safe primitives for managing shared mutable state to work quite well in practice. For more complex situations, the CSP model, such as core.async or Go channels, is handy as well in my experience.
In response to questions we want to be clear: Microsoft is not working with U.S. Immigration and Customs Enforcement or U.S. Customs and Border Protection on any projects related to separating children from their families at the border, and contrary to some speculation, we are not aware of Azure or Azure services being used for this purpose. As a company, Microsoft is dismayed by the forcible separation of children from their families at the border.
Maybe I’m missing something, but it seems they are going in the exact same direction…
It’s a very confusing article; my best guess is that they are working with ICE, but not on “projects related to separating children from their families at the border”.
And just because Microsoft isn’t directly helping, they are still helping. That nuance is discussed in OP’s article: any support to a morally corrupt institution is unacceptable, even if it is indirect support.
But that perspective is very un-nuanced. Is everything ICE does wrong? It’s a large organization. What if the software from the company that @danielcompton denied service to is actually just trying to track down violent offenders that made it across the border? Or drug trafficking?
To go even further, by your statement, Americans should stop paying their taxes. Are you advocating that?
ICE is a special case, and deserves to be disbanded. It’s a fairly new agency, and its primary mission is to be a Gestapo. So yes, very explicitly, everything ICE does is wrong.
On what grounds and with what argument can you support that statement? I mean, there is probably an issue with how it’s run, but the whole concept of ICE doesn’t sound that wrong to me.
The thing that is so striking about all three items is not merely the horror they symbolize. It is how easy it was to get all of these people to play their fascistic roles. The Trump administration’s family separation rule has not even been official policy for two months, and yet look at where we are already. The Border Patrol agent is totally unperturbed by the wrenching scenes playing out around him. The officers have sprung to action with a useful lie to ward off desperate parents. Nielsen, whom the New Yorker described in March as “more of an opportunist than an ideologue” and who has been looking to get back into Donald Trump’s good graces, is playing her part—the white supremacist bureaucrat more concerned with office politics than basic morality—with seeming relish. They were all ready.
I’m going to just delegate all arguments to that link, basically, with a comment that if it’s not exceedingly obvious, then I probably can’t say anything that would persuade you. Also, this is all extremely off-topic for this forum, but, whatevs.
There’s always a nuance, sure. Every police force ever subverted for political purposes was still continuing to fight petty crime, prevent murders and help old ladies cross the street. This always presented the regimes a great way to divert criticism, paint critics as crime sympathisers and provide moral leeway to people working there and with them.
America though, with all its lip service to small government and self reliance was the last place I expected that to see happening. Little did I know!
Is everything ICE does wrong? It’s a large organization.
Just like people, organizations should be praised for their best behaviors and held responsible for their worst behaviors.
Also, some organizations wield an incredible amount of power over people and can easily hide wrongdoing and therefore should be held responsible to the strictest standard.
It’s worth pointing out that ICE didn’t exist 20 years ago. Neither, for that matter, did the DHS (I was 22 when that monster was born). “Violent offenders” who “cross the border” will be tracked down by the same people who track down citizen “violent offenders”, i.e. the cops (what does “violent offender” even mean? How do we know who these people are? How will we know if they’re sneaking in?). Drug trafficking isn’t part of ICE’s institutional prerogative in any large, real sense, so it’s not for them to worry about. Plenty of Americans, for decades, have advocated tax resistance precisely as a means to combat things like this. We can debate its utility, but it is absolutely a tactic that has seen use since, as far as I know, at least the Vietnam War. Not sure how much nuance is necessary when discussing things like this. That doesn’t mean it’s open season to start dropping outrageous nonsense, but institutions which support/facilitate this in any way should be grounds for, at the very least, boycotts.
Why is it worth pointing out it didn’t exist 20 years ago? Smart phones didn’t either. Everything starts at some time.
To separate out arguments, this particular subthread is in response to MSFT helping ICE, but the comment I responded to was referring to the original post, which only refers to “border security”. My comment was really about the broader aspect but I phrased it poorly. In particular, I think the comment I replied to which states that you should not support anything like this indirectly basically means you can’t do anything.
It’s worth pointing out when it was founded for a lot of reasons: what were the conditions that led to its creation? Were they good? Reasonable? Who created it? What was the mission originally? The date is important because all of these questions become easily accessible to anyone with a web browser and an internet connection, unlike, say, the formation of the FBI or the origins of Jim Crow, which, while definitely researchable on the net, are more the domain of historical research. Smartphones and ethnic cleansing, however, are not in the same category.
If you believe the circumstances around the formation of ICE are worth considering, I don’t think pointing out the age of the institution is a great way to make that point. It sounds more like you’re saying “new things are inherently bad” rather than “20 years ago was a time with a lot of politically questionable activity” (or something along those lines).
dude, read it however you want, but pointing out that ICE is less than 20 years old, when securing a border is a foundational issue, seems like a perfect way to intimate that this is an agency uninterested in actual security and was formed expressly to fulfill a hyper partisan, actually racist agenda. Like, did we not have border security or immigration services or customs enforcement prior to 2002/3? Why then? What was it? Also, given that it was formed so recently, it can be unformed, it can be dismantled that much easier.
I don’t understand your strong reaction here. I was pointing out that if your goal was to communicate something, just saying it’s around 20 years old didn’t seem to communicate what you wanted to me. Feel free to use that feedback or not use it.
No, it requires you to acknowledge that using any currency is unacceptable.
Of course not using any currency is also unacceptable. When faced with two unacceptable options, one has to choose one.
Using the excuse “If I follow my ethics I can never do anything” is just a lazy way to never think about ethics. In reality everything has to be carefully considered and weighed on a case by case basis.
Of course not using any currency is also unacceptable.
Why? Currency is just a tool.
Using the excuse “If I follow my ethics I can never do anything” is just a lazy way to never think about ethics.
I completely agree.
Indeed I think that we can always be ethical, but we should look beyond the current “public enemy”, be it Cambridge Analytica or ICE. These are just symptoms. We need to cure the disease.
I don’t really understand this. Sure, it’s cool to optimize something so well, but I don’t see the point of going to so much effort to reduce memory allocations. The time taken to run this, what it seems like you would actually care about, is all over the place and doesn’t get reduced that much. Why do we care about the number of allocations and GC cycles? If you care that much about not “stressing the GC”, whatever that means, then better to switch to a non-GC language than jump through hoops to get a GC language to not do its thing.
On the contrary, I found this article a refreshing change from the usual Medium fare. Specifically, this article is actually technical, has few (any?) memes, and shows each step of optimization alongside data. More content like this, please!
More to your point, I imagine there was some sort of constraint necessitating it. The fact that the allocation size dropped so drastically fell out of using a pooled allocator.
This data is then used to power our real-time calculations. Currently this import process has to take place outside of business hours because of the impact it has on memory usage.
So: They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it (“outside of business hours”). Using 7.5GB may be fine for processing a single input batch on their server, but it’s likely they want to process several data sets in parallel, or do other work.
Sure, they could blast the data through a DFA in C and probably do it with no runtime allocation at all (their final code is already approaching a hand-written lexer), but completely changing languages/platforms over issues like this has a lot of other implications. It’s worth knowing if it’s manageable on their current platform.
They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it
That’s what they claim, but it sounds really weird to me. I’ve worked with plenty of large data imports in GCed languages, and have never had to worry about overhead, allocation, GC details, etc. I’m not saying they don’t have these problems, but it would be even more interesting to hear why these things are a problem for them.
Also of note - their program never actually used 7.5GB of memory. That’s the total allocations over the course of the program, virtually all of which was surely GC’ed almost immediately. Check out the table at the end of the article - peak working set, the highest amount of memory actually used, never budged from 16kb until the last iteration, where it dropped to 12kb. Extra allocations and GC collections are what dropped. Going by the execution time listing, the volume of allocations and collections doesn’t seem to have much noticeable effect on anything. I’d very much like to know exactly what business goals they accomplished by all of that effort to reduce allocations and collections.
You’re right – it’s total allocations along the way rather than the allocation high water mark. It seems unlikely they’d go out of their way to do processing in off hours without running into some sort of problem first (so I’m inclined to take that assertion at face value), though I’m not seeing a clear reason in the post.
Still, I’ve seen several cases where bulk data processing like this has become vastly more efficient (from hours to minutes) by using a trie and interning common repeated substrings, re-using the same stack/statically allocated buffers, or otherwise eliminating a ton of redundant work. If anything, their timings seem suspicious to me (I’d expect the cumulative time to drop significantly), but I’m not familiar enough with the C# ecosystem to try to reproduce their results.
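(Not the article’s C# code, which I haven’t tried to reproduce; just to illustrate the buffer/record re-use idea in a different language, with made-up names:)

    // Reuse one scratch record per row instead of allocating a fresh object per row,
    // so a bulk import produces far less garbage for the collector to chase.
    interface Row { id: number; value: number; }

    function sumValues(lines: string[]): number {
      const scratch: Row = { id: 0, value: 0 }; // allocated once, overwritten per line
      let total = 0;
      for (const line of lines) {
        const comma = line.indexOf(",");
        // the slices below still allocate small strings; the point here is only the
        // reused record, not a fully allocation-free parser
        scratch.id = Number(line.slice(0, comma));
        scratch.value = Number(line.slice(comma + 1));
        total += scratch.value;
      }
      return total;
    }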
From what I understood, the 7.5GB of memory is total allocations, not the amount of memory held resident, that was around 15 megs. I’m not sure why the memory usage requires running outside business hours.
EDIT: Whoops, I see you responded to a similar comment that showed up below when I was reading this.
The article doesn’t explain why they care, but many garbage collectors make it hard to hit a latency target consistently (i.e. while the GC is running its longest critical section). Also, garbage collection is (usually better optimized for short-lived allocations than malloc, but still) somewhat expensive, and re-using memory makes caches happier.
Of course, there’s a limit to how much optimization one needs for a CSV-like file in the hundreds of MBs…
As shown in the table, they don’t use anywhere close to 8GB of memory at a time. This seems like a case that .NET is already very good at, even at a baseline level.
Kinda neat how Microsoft went full circle…
According to wikipedia, Xenix was first released in 1980 with the last release in 1989.
Why is this? As a programmer I find I rarely need bleeding edge. I’m sure not going to install bleeding edge to production. It can be fun for hacking around but I don’t see why as a programmer it’s needed.
It can be useful to have the latest compiler set and tooling for your project. I often find new potential issues with a newer GCC.
For what it’s worth, Arch does distinguish between ‘stable’ and ‘bleeding edge’ in its releases, although the rolling release does mean that stable is generally much newer than you might find in, say, Debian.
I wouldn’t use it in production, though I have seen it done.
I don’t want bleeding edge in general, but “your issue has been fixed in the latest version” gets old quickly.
As someone who uses arch on all my developer machines, arch is a horrible developer OS, and I only use it because I know it better than other distros.
It was good 5-10 years ago (or I was just less sensitive back then), but now pacman Syu is almost guaranteed to break or change something for the worse, so I never update, which means I can never install any new software because everything is dynamically linked against the newest library versions. And since the arch way is to be bleeding edge all the time, asking things like “is there an easy way to roll back an update because it broke a bunch of stuff and brought no improvements” gets you laughed out the door.
I’m actually finding myself using windows more now, because I can easily update individual pieces of software without risking anything else breaking.
@Nix people: does NixOS solve this? I believe it does but I haven’t had a good look at it yet.
Yes, Nix solves the “rollback” problem, and it does it for your entire OS not just packages installed (config files and all).
With Nix you can also have different versions of tools installed at the same time without the standard python3.6 python2.7 binary name thing most place do: just drop into an new nix-shell and install the one you want and in that shell that’s what you have. There is so much more. I use FreeBSD now because I just like it more in total, but I really miss Nix.
EDIT: Note, FreeBSD solves the rollback problem as well, just differently. In FreeBSD if you’re using ZFS, just create a boot environment before the upgrade and if the upgrade fails, rollback to the pre-upgrade boot environment.
Being a biased Arch Developer, I rarely have Arch break when updating. Sometimes I have to recompile our own C++ stack due to soname bumps but for the rest it’s stable for me.
For Arch there is indeed no rollback mechanism, although we do provide an archive repository with old versions of packages. Another option would be BTRFS/ZFS snapshots. I believe the general Arch opinion is instead of rolling back fixing the actual issue at hand is more important.
I can see some people might value that perspective. For me, I like the ability to plan when I will solve a problem. For example I upgraded to the latest CURRENT in FreeBSD the other day and it broke. But I was about to start my work day so I just rolled back and I’ll figure out when I have time to address it. As all things, depends on one’s personality what they prefer to do.
But on stable distros you don’t even have that choice. Ubuntu 16.04, (and 18.04 as well I believe) ships an ncurses version that only supports up to 3 mouse buttons for ABI stability or something. So now if I want to use the scroll wheel up, I have to rebuild everything myself and maintain some makeshift local software repository.
And that’s not an isolated case; from a quick glance at my $dayjob workstation, I’ve had to build the following locally: cquery, gdb, ncurses, kakoune, ninja, git, clang and various other utilities. Just because the packaged versions are ancient and missing useful features.
On the other hand, I’ve never had to do any of this on my arch box because the packaged software is much closer to upstream. And if an update breaks things, I can also roll back from that update until I have time to fix things.
I don’t use Ubuntu and I try to avoid Linux, in general. I’m certainly not saying one should use Ubuntu.
Several people here said that Arch doesn’t really support rollback which is what I was responding to. If it supports rollback, great. That means you can choose when to solve a problem.
Ok, but that’s a problem inherent to stable distros, and it gets worse the more stable they are.
It does, pacman keeps local copies of previous versions for each package installed. If things break, you can look at the log and just let pacman install the local package.
Your description makes it sound like pacman doesn’t support roll backs, but you can get that behaviour if you have to and are clever enough. Those seem like very different things to me.
Also, what you said about stable distros doesn’t seem to match my experience in FreeBSD. FreeBSD is ‘stable’, however ports packages tend to be fairly up to date (or at least I rarely run into outdated ones, except for a few).
I’m almost certain any kind of “rollback” functionality in pacman is going to be less powerful than what’s in Nix, but it is very simple to roll back packages. An example transcript:
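(The transcript itself didn’t make it into this copy; a minimal reconstruction, using a made-up package name and version, looks roughly like this:)

    # pacman keeps previously installed versions in its package cache
    ls /var/cache/pacman/pkg/ | grep somepackage
    # reinstall the older version directly from the cache
    sudo pacman -U /var/cache/pacman/pkg/somepackage-1.2.3-1-x86_64.pkg.tar.zst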
This is all super standard, and it’s something you learn pretty quickly, and it’s documented in the wiki: https://wiki.archlinux.org/index.php/Downgrading_packages
My guess is that this is “just downgrading packages” whereas “rollback” probably implies something more powerful, e.g., “roll back my system to exactly how it was before I ran the last pacman -Syu.” AFAIK, pacman does not support that, and it would be pretty tedious to actually do it if one wanted to, but it seems scriptable in limited circumstances. I’ve never wanted/needed to do that though.
(Take my claims with a grain of salt. I am a mere pacman user, not an expert.)
EDIT: Hah. That wiki page describes exactly how to do rollbacks based on date. Doesn’t seem too bad to me at all, but I didn’t know about it: https://wiki.archlinux.org/index.php/Arch_Linux_Archive#How_to_restore_all_packages_to_a_specific_date
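If I’m reading that page right, the date-based restore boils down to pointing pacman at an archive snapshot and syncing with downgrades allowed (the date below is only an example):
# in /etc/pacman.d/mirrorlist, comment out the normal mirrors and add:
#   Server=https://archive.archlinux.org/repos/2018/08/01/$repo/os/$arch
# then sync, allowing packages to move backwards:
sudo pacman -Syyuu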
I have the opposite experience. Arch user since 2006, and updates were a bit more tricky back then, they broke stuff from time to time. Now nothing ever breaks (I run Arch on three different desktop machines and two servers, plus a bunch of VMs).
I like the idea of NixOS and I have used Nix for specific software, but I have never made the jump because, well, Arch works. Also with Linux, package management has never been the worst problem, hardware support is, and the Arch guys have become pretty good at it.
I wonder if the difference in experience is some behaviour you’ve picked up that others haven’t. For example, I’ve found that friends’ children end up breaking things in ways that I never would, just because I know enough about computers to never even try it.
I think it’s a matter of performing -Syu updates often (every few days or even daily) instead of once a month. Rare updates do sometimes break things, but when done often it’s pretty much update-and-done.
I’ve been an Arch user for 6 years, and there were maybe 3 times during those 6 years when something broke badly (I was unable to boot). Once it was my fault; the second and third were related to an nvidia driver and Xorg incompatibility.
It’s sometimes also a matter of bad timing. Now every time before doing a
pacman -Syu
I check /r/archlinux and the forums to see if anyone is complaining. If so, I tend to wait a day or two for the devs to push out updates to the broken packages.
That’s entirely possible.
I have quite the contrary experience: I have pacman run automatically in the background every 60 minutes, and all the breakage I suffer comes from human-induced configuration errors (such as a misconfigured boot loader or fstab).
Things like Nix even allow rolling back from almost all user configuration errors.
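(On NixOS that’s roughly a one-liner; this is a sketch of the usual commands rather than anything exotic:)
# revert the whole system to the previous generation
sudo nixos-rebuild switch --rollback
# or revert just the user profile's last change
nix-env --rollback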
Would be nice, yeah, though I never really understood or got into Nix. It’s a bit complicated and daunting to get started with, and I found the documentation to be lacking.
How often were you updating? Arch tends to work best when it’s updated often. I update daily and can’t remember the last time I had something break. If you’re using Windows, and coming back to Arch very occasionally and trying to do a huge update you may run into conflicts, but that’s just because Arch is meant to be kept rolling along.
I find Arch to be a fantastic developer system. It lets me have access to all the tools I need, and allows me to keep up with the latest technology. It also has the bonus of helping me understand what my system is doing, since I have configured everything.
As for rollbacks, I use ZFS boot environments. I create one prior to every significant change, such as a kernel upgrade; that way, if something does go wrong and it isn’t convenient to fix the problem right away, I know I can always boot back into the last environment and everything will be working.
How do you configure ZFS boot environments with Arch? Or do you just mean snapshots?
I wrote a boot environment manager, zedenv. It functions similarly to beadm. You can install it from the AUR as zedenv or zedenv-git.
It integrates with a bootloader if it has a “plugin” to create boot entries and keep multiple kernels at the same time. Right now there’s a plugin for systemd-boot, and one is in the works for grub; it just needs some testing.
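Day-to-day usage follows the beadm-style commands, so a typical run looks roughly like this (the environment name is just an example):
# create a new boot environment from the running one before a risky upgrade
zedenv create before-kernel-upgrade
# list environments, then switch back if the upgrade goes badly
zedenv list
zedenv activate before-kernel-upgrade
# reboot into the previously working environment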
Looks really useful. Might contribute a plugin for rEFInd at some point :-)
Awesome! If you do, let me know if you need any help getting started, or if you have any feedback.
It can be used as is with any bootloader; it just means you’ll have to write the boot config by hand.
(Preface: I didn’t know much, and still don’t, about the *Solaris ecosystem.)
So it seems like the evolution of *Solaris took an approach closer to Linux? Where there’s a core chunk of the OS (kernel and core build toolchain?) that is maintained as its own project. Then there are distributions built on top of illumos (or unleashed) that make it ready to use for end users?
For some reason, I had assumed it was closer to the *BSD model where illumos is largely equivalent to something like FreeBSD.
If I wanted to play with a desktop-ready distribution, what’s my best bet? SmartOS appears very server oriented - unsurprising, given *Solaris was really making more inroads there in recent years. OpenIndiana?
If Linux (kernel only) and BSD (whole OS) are the extremes of the scale, illumos is somewhere in the middle. It is a lot more than just a kernel, but it lacks some things to even build itself. It relies on the distros to provide those bits.
Historically, since Solaris was maintained by one corporation with lots of release engineering resources and many teams working on subsets of the OS as a whole, it made sense to divide it up into different pieces. The most notable one being the “OS/Net consolidation” which is what morphed into what is now illumos.
Unleashed is still split across more than one repo, but in a way it is closer to the BSD way of doing things rather than the Linux way.
Hope this helps clear things up!
OI would be the easiest one to start with on a desktop. People have gotten Xorg running on OmniOS (and even SmartOS), but it’s extra work vs. just having it.
Solaris is like BSD in that it includes the kernel + user space. In Linux, Linux is just the kernel and the distros define user space.
So… is there no desktop version of illumos I can download? Why does their “get illumos” page point me at a bunch of distributions?
Genuine questions - I’m just not sure where to start if I want to play with illumos.
illumos itself doesn’t have an actual release. You’re expected to use one of its distributions as far as I can tell, which should arguably be called “derivatives” instead. OpenIndiana seems to be the main desktop version.
I don’t know. I know there are some people who run SmartOS on their desktop, but I get the feeling it’s not targeting that use case, or at least there isn’t a lot of work going into supporting it.
“Hooray! We have forked an already small community into yet another smaller community because…”
Well, the “because” doesn’t really matter, even though they make extremely valid points! In an already incredibly fragmented community (how many derivatives of OpenSolaris does this make?) this makes the problem bigger…
I don’t follow illumos very closely, but are there reasons that community won’t assist in pushing towards solving the concerns that sparked unleashed? Surely illumos is also an operating system that “developers want to use,” no?
As always, we’re happy to work with people who want to push changes to illumos-gate!
xkcd 1095 seems relevant. :^)
Yeah, maybe. :)
If the illumos community were healthy I would agree with you and I wouldn’t have bothered to create this fork. Sadly, I think the illumos community has problems and the people that truly have a lot of say where the project goes either don’t see them or like the status quo.
Two years ago when I started Unleashed, I had a dilemma: should I fork illumos or ditch it for one of the BSDs. When I realized that there were other people that were just as unhappy with the (lack of) direction illumos had, making a fork sounded like a good option. That’s how we got here.
Now where do we go from here is an open question. It is completely possible that Unleashed will fizzle, at which point I can say that no real harm was done. The illumos community will remain as small as it was two days ago, with major contributors like Delphix bailing on illumos in favor of Linux. If Unleashed takes off and in the process kills off illumos, the overall ecosystem will be better off. There might be a person or two grumpy that they can’t run their emacs binary from 1994, but in my opinion that is a small price to pay.
That is the reason I considered and ultimately went with a fork instead of bailing on it. The technology in Solaris/OpenSolaris/illumos/Unleashed is great, and I didn’t want to give it up. I wanted to give up the hugely inefficient and ultimately counter-productive contribution process.
Happy hacking!
Thanks for taking the time to respond. I know my post probably came off as aggressive, and if I’m honest, it was half intended to be. I think forks are very disruptive, and wish, of course, to minimize these sorts of things when at all possible.
This makes total and reasonable sense. I didn’t mean to imply that you hadn’t thought this through! And appreciate that you used it as a sort of last resort.
Thanks for doing what you’re doing, and I wish Unleashed success (and maybe either domination or an eventual merge of the communities again)!
No problem. I really had no choice - someone on the internet was “wrong” ;)
The phrasing certainly made me go “urgh, not one of those…” but it sounds like we both agree that forks are disruptive, but you think that it’s a negative thing while I think it is a positive thing. A reasonable difference of opinion.
Thanks, that’s the idea :)
There’s really nothing I can offer as a legitimate excuse for that. I’m sorry.
The additional context you’ve provided makes me feel that it probably is the right, and positive, choice in this case. I’m not vehemently against forks if there’s a legitimately good reason [and just to be clear, moving on from supporting legacy stuff is the important divergence I’m seeing, as it frees up resources to move faster]. I am against forks that don’t offer some radical divergence in philosophy, though. These are often rooted in deep bikeshedding on topics that don’t matter in the grand scheme of things.
Two examples of justified forks in my opinion: @rain1 recently forked filezilla because it was incorporating “unwanted extra nonfree software.” Devuan is a fork of Debian that replaces systemd – a topic that is far beyond bikeshedding at this point, as it’s had (and will continue to have) a drastic effect on the portability of software to other ecosystems.
No worries. Hopefully my initial response didn’t come across as too harsh either. If it did, my apologies.
Agreed. Although sometimes it is hard to tell if there is a justification for the fork.
I wonder when we started to need a justification.
Why?
You do you, man. You do you.
In my mind, there are two types of forks we’re talking about. One of them is a “fork” on github, where I clone the repo, make some changes, contribute it back to the original author (or maybe not!), and live a happy life. These types of forks are almost always ok. It’s the “You do you, man. You do you.” response.
The other “fork” is far more challenging, and far more likely to cause a rift in spacetime. Those are the large, and by all accounts, successful projects that as a result divide a community, and make it difficult for users and would be contributors to find the right thing to use. These projects fork very publicly, and are rather uncomfortable, to be honest.
In many cases, these forks occurred because egos were hurt (I wanted it yellow) – a social issue – not a technical issue. In other cases, there’s a large philosophical difference that impacts the general direction of the technology. This may be licensing, whether or not to support obscure platforms, a radical new idea or focus… etc. In all cases, even if there are legitimately great outcomes (OpenBSD comes to mind), there’s a period of confusion and frustration from users who are now forced to choose where to put their effort. They are forced into taking sides, and that’s unfair.
These are marketing concerns. Market share issues, to be precise.
They are valid for open source projects that are basically marketing tools, but they are pointless for free software that maximizes hackers’ freedom to hack.
Feeling the need to justify a fork is the first step towards asking permission.
The PATENTS file in projects like Fuchsia’s kernel sources just pushes for that.
Sorry, my friend. Most people don’t share your principles on what a ‘hack’ or a ‘hacker’ is. More often than not, the people using and writing software care more about getting the job done quickly and without frustration, and a fork makes that harder. It doesn’t matter how you classify it.
And this is fine!
But, my friend, you need to understand the tools you use!
If you pick up free software that is distributed “WITHOUT ANY WARRANTY” just because it’s free of charge, and you completely miss the culture of the people who develop it, you won’t get your job done. The same goes if you pick open source software controlled by Google (or whoever) and you fork it expecting to successfully challenge their market share.
In both cases, you’ll face surprises, unexpected costs and frustration.
Understanding the environment you operate in is strategic to “getting the job done”.
Interesting! Do you have world-wide statistics to prove such a claim?
Not that it matters: “principles” stand to “artifacts” like “postulates” stand to “theorems”. How many people accept the postulates/principles is irrelevant.
I know that some people don’t share my principles. And I’m fine with it.
Do you know that some people don’t share your principles?
Are you fine with it?
I read this several times and can’t figure out what you’re saying.
Why do I need to understand the culture of a tool I use? As long as it fulfills my technical needs and I know what I’m prohibited from doing by law, I can use it to get my job done.
Some examples of the issues you might face:
and so on…
You could ignore the culture of the tools you get for free, and get lucky.
But in my job, I would call that short-sighted and unprofessional.
Software is not like a hammer: even if you get it free of charge, there are strings attached.
There are ways around many of these concerns. I have a support contract, or trust in a distribution (say, Canonical for Ubuntu or Red Hat), which provides vuln disclosures and updates for me to apply. I have a development process that includes QA and automated CI infrastructure so that breaking changes are caught before production… etc.
But, to the meta point:
Demonstrably this is not at all true. It’s easy to do a survey of 100 people – 10 people even, and ask them if they understand their tools. How are their tools implemented? How does the relational database they store and query data into/from store data on disk? How does the map type work in their favorite language? How does the VM work? How does the ORM work? How does the templating language they use work? How does the image processing library they use work to resize images, or rotate images, or whatever work? How does TensorFlow do all it does?
What you’ll find is that a large portion of engineers have no idea how things work. And they don’t need to know. Their job is to build CRUD apps for people who couldn’t care less if something takes a little bit longer. The developer themselves, in many cases, couldn’t care less about BTREE indexes vs. HASH indexes, and doesn’t really know the difference. For the amount of data they manipulate, doing full table scans 3 times an hour (because they literally have 3 queries an hour) is completely sane, reasonable, and still puts a smile on the face of the administrative assistant who no longer has to go to a typewriter to type out a bunch of labels. Or who no longer has to print 10,000 college applications to give to admissions reviewers… or any number of other tasks where even the worst technology choices, recommended by underskilled developers, can make a ginormous (and positive) difference to the process.
Sure, but the simplest one is to understand the tools you use.
And actually, trusting Debian (or OpenBSD or whatever), or signing a support contract with Canonical (or Red Hat or Microsoft or whatever), requires exactly the kind of cultural understanding of those people that I was talking about.
Practically, you are saying: “everyone can become rich without working: just win the lottery!”. Well, this is not false. Stick to boring low-hanging fruit all your life and you will never face the issues that a professional developer has to consider every day.
What you describe is not to “get the job done”.
People die because of people who work this way.
In Italy we like to say: “even a broken clock is right twice a day”.
Yes, incompetent developers can occasionally improve someone’s life, but most of the time they just mess things up beyond repair.
I believe this comment really lacks perspective. What you are saying is the Shamar-style of development is the only correct style of development and anyone not doing it that way is not only doing it wrong but putting people’s lives at risk.
The industry I work in produces a lot of software and consumes a lot of software, however no company in this industry would consider itself a tech company. We have people whose job title is “Software Engineer”. But, for the most part, they make pretty bad technical decisions and are fairly unskilled relative to the engineers at most tech companies. But, they aren’t “trying to get rich without working” or “win the lottery”. They are very hard working. The industry just has a different set of values where the software is incidental to the actual problem the company is solving. A lot of the things you brought up in an earlier post about why one needs to understand the culture of the software they consume doesn’t actually apply in the industry I’m in. Security updates and backdoors are almost never going to be a concern because these systems are not open to the outside. The data they consume is entirely generated and processed inside the walls of the company. In the industry I’m in, we’re actually saving lives too! I mean that literally.
I hate to use this word, but your comment is elitist. Anyone not solving problems how you say is not a professional and just causing damage “beyond repair”. Your comment lacks humility and perspective yet is extremely assertive. It might be worth stepping back and questioning if what you assert so strongly is an ideal, a belief, or reality. Or perhaps it’s a challenge with the language and you don’t realize how assertive your comments sound relative to how assertive you meant them to be. But insisting people not following your development principles are killing people is a pretty strong statement, in any case.
I was not talking about software development in particular.
Incompetent engineers build bridges that fall down.
Incompetent physicians do not treat mortal diseases properly. And so on.
They can get some work done, but it’s luck, like winning the lottery.
As for software, I do not mean that a competent software developer cannot adopt a cheap, half-working solution instead of an expensive “right” one (whatever that means in the context).
On the contrary!
I mean that to make a choice you need competence.
I’m saying that only a competent professional who knows the tools she uses can really “get the job done”.
An incompetent one can get lucky sometimes, but you cannot trust her products, and thus the job is not done.
Actually, I’m rather surprised by the opposition such a simple and obvious concept is facing. All the other craftsmen I know (the real ones, not the software ones) agree that it takes years to “own” their tools.
Probably we have diverged too much from the original topic, and we are facing a deep cultural mismatch.
In Europe (which, let me say, is not living up to its own values these days) we are used to being very diverse and inclusive (note: it took centuries of wars, rapes, debates, commerce, poetry, science, curiosity and many other contaminations to get here).
But we do not meld the meaning of words just to include more people.
We clearly see and state the differences, and happily talk about them.
And this is not elitism, it’s efficient communication.
When we say “job” or “done” we convey a precise message.
And if a bridge falls down and kills someone, we call the engineers who built it liars, because the job was not done. At times they even stop being called engineers at all.
You don’t give an inch, do you? I’ve explicitly said that I work in an industry that does not do software development like you have expressed it should be done, and your response is to keep on insisting on it. On top of that, you did this annoying thing where this discussion has clearly been about software development, but when I pushed back you moved the goalposts and started talking about bridges and medicine. It’s extremely challenging and frustrating to communicate with you; I need to work on not doing that. Thanks for the discussion, it was insightful for me.
Looks like someone got a degree in being right on the Internet! There’s no point in engaging with you, and if there was a feature to block users, I would make use of it.
I’m sorry about this.
If you lack arguments to support your assumptions, I suggest simply stating those assumptions clearly. For example:
I would deeply disagree with such a premise.
But I wouldn’t argue against the conclusions.
Did you just tell me to go fuck myself?
Ok, this must really be a language problem.
I cannot find a translation of what I wrote that can be interpreted that way!
Anyway: No, I’m not telling you to fuck yourself.
I just spent 30 minutes carefully crafting a response to your absurd notion that everyone must be highly skilled or people will die. But, it’s just not worth it. You’ll find a way to twist it into something it’s not, and yell loudly about how I’m wrong without considering that you may be shortsighted in your assumptions.
I’m sorry for the time you wasted.
I do not think that “everyone must be highly skilled or people will die”.
I think that everyone should be professional in his own job.
Which, at the bare minimum, means understanding the tools you use.
I wouldn’t even engage if I didn’t assume this to be possible: there would be nothing to learn.
Question that I have that isn’t clear from the post: do you intend to maintain enough compat with Illumos that you would be able to get improvements that were done in something like SmartOS? Are you planning on continuing to pull changes from Illumos? Planning to try contributing changes back? Or is this a hard fork where you don’t imagine there would be cross pollination?
Good questions!
Hopefully I covered everything.
I love how varied university education can be. We definitely had very different experiences. I’ll just ramble some hand wavy thoughts, don’t take them too seriously.
Eh, I run into you on the internet often enough. :) We should get a beer sometime though.
I can definitely echo a lot of these statements, but especially these two:
This is critical, but ‘learning how to learn’ has never been my favorite phrasing. The latter, “active reading” is closer, but I think I like “active learning” the most – the idea of not just learning something, but learning how you learn it, and critically examining and engineering better ways to learn. I was not a particularly great high school student, but college taught me pretty quickly how to treat learning as an engineering problem, and that was super valuable.
I was similar. I much preferred leading discussion in a peer group – in that way I ended up being forced to teach the material / defend my position (even when I wasn’t particularly confident I was right!). @burntsushi was a pretty common foil for my ramblings in the Math lab.
Having a good set of friends to talk math at was super valuable; I suspect that is the core thing to try to acquire in any situation that involves a lot of learning. I’ve developed a similar method of working now, my team consists of a lot of really smart people who know their fields well and have enough overlap that we can all bounce ideas off each other while still having areas we can feel expert in. It’s the best of both worlds, I think.
I suppose this is number three, but quiet parts of libraries are the best. The bottom floor of WPI’s library is where I got most of my work done.
Tangential comment:
Wow, I had no idea there were Woo State alums here. I, too, went to Worcester State. I can’t compare it to any other undergraduate experience, but I had a great time and I came out with almost no debt (my 4 years there cost less than 1 semester for friends who went to more prestigious schools in the area). While the cost savings wasn’t an active decision at the time, it turned out to be a great one. I have found, for software engineering at least, the amount of negative impact going to a small school has on one’s career is minimal to zero. And since those loans are in my name, I’m much better off financially than those going to more expensive schools and having to pay for it (although I know many who had parents willing to foot the bill). Google might not have been waiting outside during my graduation, but I see little difference in my career relative to most friends who went to much more prestigious schools for undergrad.
Neat! What years were you there? I was there 2007-2010.
I was there 2001 - 2006 (I think, I’m terrible with dates). My claim to fame is I was the first person to graduate with the Bioinformatics concentration. I don’t know if that still existed when you were there. I can’t wait to retire though, I miss college and want to go back full time!
Hah, nice. I also graduated with a bioinformatics concentration (and a Math degree). The biology and chemistry courses were awesome. Judging by the number of other people I was aware of that were pursuing a CS w/ bioinformatics concentration, I wouldn’t be surprised if I was the second one. :P
Wow! Small world!
I was actually just reflecting on how much I enjoyed the bio and chem as well! I feel it really grounded me in the real world. In CS we have control over so much, since we’re defining so much of it, but bio and chem are messy and fun, with so many unknowns.
Good point! But I wouldn’t call them art-projects: if born out of curiosity they are “hacks”.
The distinction between ‘art project’ and ‘hack’ is mostly whether or not you come out of a culture that knows what ‘hack’ means. (I’m not sure I can count on that anymore, even for OS dev people.)
This is a very interesting perspective.
One of the greatest hackers of all time (at least in Western history) grew up as an artist: Leonardo da Vinci.
Both art and hacking are creative acts.
And some clever artworks can surely be qualified as hacks too.
However, an artwork is not, in itself, a hack, just like a perfectly executed engineering task is not, by itself, a hack.
In other words, humans can hack art just like we can hack engineering.
Hacks, however, are identified by the curiosity they express: if there is nothing that challenges common wisdom, it’s not a hack (but you know… any definition can be hacked… this one too!)
Another interesting aspect of your comment is the cultural perspective: is “hack” the best term to convey the meaning we are talking about?
Honestly I don’t know.
I cannot think of a better term for this specific meaning in the languages I know.
But this is probably a cultural bias.
Hackers have been around for centuries (if not millennia), so it’s suspect that mankind waited for MIT to pick a word from English.
We should probably look for a language hacker to look for translations in other languages, and find a term that can clearly identify the meaning, without too much cultural bias.
In this particular case, ‘hack’ is a little over-loaded with meaning for my tastes.
XANA is a hack in several senses of the word: it’s not only pointless but jokey, ad-hoc, and running counter to best practices. iX, on the other hand, is much more conservative in its style – the only thing new about iX is the concept behind it.
The reason I used the term ‘art project’ is that, rather than the hacks of the demoscene or of the obfuscated C contest, I identify these projects more closely with design fiction & other kinds of conceptual art: I said “what if your whole OS was ZigZag” and then answered my own question in the form of usable code, in the same way R. Mutt[1] asked “what’s the outer limit of what constitutes art – is it just whatever’s in the context of the museum” and then answered by submitting a urinal.
The term I think best applies is one from conlanging. In the constructed language space, an ‘a priori philosophical language’ is a language invented to express a particular model of language directly and purely. For instance, lojban’s ancestor loglan began as an a priori constructed language based on the idea that Horn clauses are enough (and also as a kind of test of Sapir-Whorf); toki pona is an expression of the idea of an extremely limited vocabulary & an extreme vagueness of denotation (because every word has such a wide range of possible meanings, a lot more effort goes into interpretation than into sentence construction); ithkuil is designed to be extremely dense, removing redundancy and using complex conjugation tables and swaths of imported phonemes to make it possible to translate paragraphs of English text into a handful of syllables. Generally speaking, these languages are both simpler than natural languages & harder to use: conceptual purity handicaps certain kinds of expression.
In the same way, I thought about “what if the only thing in your OS was this particular concept”, and then tried to make something that bordered on usability without going outside of that plan. (Semi-mainstream OSes that have gone down this conceptual route exist & are criticized from one side for sticking to the purity of their idea too closely while from the other side criticized for breaking it – like plan9, with the ‘everything is a file’ idea – but once you give any ground to usability or compatibility the project becomes a lot harder.)
I threw out any normal feature that would make things harder without meshing conceptually with my main idea. For instance, both these systems use flat memory (all in ring 0) and neither support executing non-kernel binaries. Dynamic memory management is eschewed in favor of a preallocated chunk. This severely limits compatibility with existing systems, eliminates the possibility of low-level extensions, and prevents it from being anything more than a single-user system (since security features are nonexistent).
[1] R. Mutt has been considered an alias of Marcel Duchamp for a long time, but recent research indicates that it was most likely actually an associate living in New York City, whose name I have forgotten.
Why are the words ‘hacker’ and ‘hack’ so important to you? I mean that question genuinely. Your comments over the last few days read, to me, like you feel a strong need to own the word ‘hacker’ and apply it to things regardless of the intent. In other words: is it important to call something a ‘hack’ vs an ‘art-project’? If so, why?
I think you are reading my comments according to your own culture.
Ownership is not something I care much about.
Indeed, one of my criticisms of ESR’s work is related to his appropriation of the Jargon File.
Is it important to call something “engineering” vs “applied physics”?
Is it important to call something “art” vs “craft”?
I feel a need for a precise language. Don’t you?
Because the language we use forges the way we think.
By distorting the jargon, you affect what people can think easily and what they cannot.
The same applies to the difference between “free software” and “open source”, and my realization that the concept of FOSS/FLOSS is just a deception of this kind.
We need proper terms to convey orthogonal concepts.
Hack, Hacking, Hacker are just words. But they convey a time-worn meaning all over the world.
I’m just using such words properly. And trying to notice when people do not.
Can we change words? Why not!
Do you have any proposal?
It’s pretty hard not to? For evidence, I cite the comments you’ve made over the last few days insisting on a particular definition of ‘hacker’, as well as properties of hacking regardless of the intent of the ‘hacker’. By “own” I mean you seem to have a very specific meaning that you feel it is important other people subscribe to.
Unless there are legal reasons, I don’t think it matters that much. If someone wants me to call them an engineer and I respect them, I’ll do it even if it doesn’t necessarily align with my view of being an engineer.
You’ve decided calling something a ‘hack’ is preferable to ‘art project’, so you seem to be pushing for us using a particular language, which suggests you seem to want us to think in a particular way. Maybe calling it an art project is actually just as good, or maybe even better! Or why can’t we call it both things rather than deciding one or the other? Your original comment here is so excluding. “If X then it’s a Y”. You could have said “I think these are born out of curiosity, so I’d also consider them hacks”. That would not have prompted me to comment.
I believe your insistence on a word meaning a very particular thing and pushing it is making you close-minded and rigid.
This question is missing the point, I believe. I don’t want to use a particular word. I just don’t want you insisting on which word I should use. You’re certainly free to do whatever you want (and I won’t comment on it anymore), I’m just not sure if you realize that in regards to this you can come off a bit arrogant and exclusionary. Maybe I’m the only one who interprets your comments that way, though.
Actually, maybe I’m missing your point.
I do not insist on the word “hack” any more than I insist on using the word “sky” for the sky.
Tbh I insist a lot less than with “sky”, since:
Well… I agree that the word “hacker” is too overloaded!
However, all the definitions in the Jargon File except “someone who makes furniture with an axe” connote the same thing, just focusing on particular observable aspects of a hacker.
As one who fits all eight numbered definitions (but not the additional commentary about the inherent elitism), I see how all the definitions are partial and overall miss the point. They are, in other words, as pertinent as the descriptions of an elephant by blind men.
But, you know, dictionaries are human artifacts: they can be wrong and they can be fixed.
Meanwhile I’m just using the term properly.
Well you might have a point on this.
Maybe ESR saw several hackers who looked arrogant and exclusionary as they talked authoritatively about their realm of competence, mistook that appearance for those hackers actually being arrogant, and then rationalized such behaviour as elitism.
The fact is: definitions define. From Latin de-finire, roughly “marking a boundary”.
Any definition limits the scope of the meaning of a word.
But having a clear understanding of hacking does not mean to be exclusionary.
On the contrary! As a curious person I want everybody to leverage my curiosity and become hackers themselves, so that I can leverage their curiosity to learn more and so on… recursively.
But I don’t agree! I think this is why we keep on talking past each other. I am fine with the word ‘hacker’ being fuzzy and unclear and I’m not a fan of that you are rigidly defining it.
Oh… got it! :-D
Can I ask why? What’s the advantage of partial and unclear definitions?
Note that we are using this word world-wide.
This is what you get with these sorts of “fuzzy and unclear” definitions!
(and no, that’s not pizza)
I think for a few reasons:
Nice. If you distribute pre-compiled binaries, please gpg-sign them and perhaps provide sha512 checksums of them as well.
Thank you. I was planning on GPG signing and using SHA256. Is that OK?
I also hope to make the build reproducible on linux, using debian’s reproducible build tools.
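Concretely, I expect the release step to look something like this (file names are placeholders):
# detached, ASCII-armoured GPG signature for the release archive
gpg --armor --detach-sign myproject-1.0.tar.gz
# checksum file published alongside it
sha256sum myproject-1.0.tar.gz > myproject-1.0.tar.gz.sha256
# which lets users verify both:
gpg --verify myproject-1.0.tar.gz.asc myproject-1.0.tar.gz
sha256sum -c myproject-1.0.tar.gz.sha256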
Reproducible builds would be awesome.
As for SHA256 vs. SHA512, from a performance point of view, SHA512 seems to perform ~1.5x faster than SHA256 on 64-bit platforms. Not that that matters much in a case like this, where we’re calculating it for a very small file, and very infrequently. Just thought I’d put it out there. So, yeah, SHA256 works too if you want to go with that :)
Also remember that defaulting to SHA-1 or SHA-256 means hardware acceleration might be possible for some users.
SHA-1 has been on the way out for a while, and browsers refuse SHA-1 certificates these days. It might be a good idea to just skip SHA-1 entirely and rely on the SHA-2 family.
True. I was just noting that there are accelerators for it in many chips.
Isn’t SHA-512 faster on most modern hardware? ZFS uses SHA-512 cut down to SHA-256 for this reason, AFAIK.
A benchmark: https://crypto.stackexchange.com/questions/26336/sha512-faster-than-sha256
Oh idk. I haven’t looked at the numbers in a while. I recall some systems, especially cost- or performance-sensitive ones, stuck with SHA-1 over SHA-256 years ago when I was doing comparisons. It was fine if basic collisions weren’t an issue in the use case.
Anecdotal, but I just timed running SHA-512 and SHA-256 10 times each, on a largeish (512MB) file. Made sure to run them a couple of times before starting the timer to make sure it was in cache. Results for SHA-512 were:
And 256:
So it looks like sha-512 pretty clearly wins. (CPU is an i3-5005u).
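(In case anyone wants to reproduce it, the timing was nothing fancier than roughly this, with “bigfile” standing in for the 512MB test file:)
# warm the page cache first
cat bigfile > /dev/null
# then time 10 runs of each digest
time for i in $(seq 10); do sha512sum bigfile > /dev/null; done
time for i in $(seq 10); do sha256sum bigfile > /dev/null; done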
Cool stuff. Modern chips handle good algorithms pretty well. What I might look up later is where the dirt-cheap chips are on offload performance and if they’ve upgraded algorithms yet. That will be important for IoT applications as hackers focus on them more.
You should probably be sure to have your facts straight before giving security advice.
I said there are hardware accelerators for SHA-1 and SHA-2. Both are in use in new deployments, with one sometimes used for weak-CPU devices or legacy support. Others added more points to the discussion with current performance stats, something I couldn’t comment on.
Now, which of my two claims do you think is wrong?
Ideally, OP would also get a code signing certificate from Microsoft to decrease the number of warnings Windows spouts about the executable.
Whenever I read about legendary programmers and inventors, these people sought out programming by themselves.
So I don’t have much hope that pushing these things onto kids would give such a good return. The top-tier potentials will already find their own way.
You don’t need to know how computers work to operate in a modern society just as you don’t need to know how a car works to drive. It’s good to know but not necessary.
Given the availability of computers, the extra ‘discovery’ of potential would, I think, be small.
So the whole ‘teach xyz to program’ seems like a mostly cost-ineffective boondoggle to me.
We can safely drop mathematics, physics and literature from school curriculums then. Most of the students aren’t ever going to be good at it, and talent will find the way.
Learning programming by yourself does not necessarily make you top talent, though we’d all love to entertain that idea. It’s certainly not worse for a self-motivated learner who is actually aided by the school system. Besides, the “self learners” of the old days didn’t come out of the Amazon jungle to a running PDP rack and start hacking. They still had the fundamentals of logic, maths and reasoning taught in school.
If school failed me back then the same way it’s failing kids today, it’s by teaching students idiotic facts beyond the basics of reading, writing and math. Introduce kids to as many matters as you can in such a way to cultivate curiosity and you’ll have won.
The whole ‘teach xyz to abc’ is ultimately pointless if what you’re seeking is innovation. It’s super good if you’re raising cattle-citizens though.
This seems orthogonal to what @varjag and @LibertarianLlama are saying. Llama seems to be arguing that being great at programming is innate and we shouldn’t bother teaching kids programming because they’ll never be great, because if they were great they wouldn’t need to be taught. And varjag is pointing out that education, historically, has not been about making the greats. Whether or not the quality of education is any good seems quite different from the question of whether we should educate.
I’m not responding to @varjag, although there is a relationship between what we both say. What I’m saying is, teaching programming for the sake of teaching programming is indeed pointless, but then again, so is pretty much everything beyond basic math and reading skills (then again, there’s the case of the enormous amount of functional illiterates so I’m not sure even that is technically necessary).
You’re correct, of course, both matters are vastly different. I think they’re ultimately connected, especially if you’re after cost-effectiveness, which I don’t agree should be the target of education, but that’s also another matter.
“Idiotic facts” is a very curious term.
Are you referring to history? Facts about how society is structured and how laws are made?
The last time I checked our local education directives, “innovation” was just one facet of the well-rounded citizens they were aiming to educate.
My bad, I did not correctly express my idea. I mean every part of my education which required the absorption of data for the sole purpose of regurgitation at a later date. I’ve lived that through many different subject matters. If you’re just pumping facts into brains so that you get graded on the quality of your repetition, it’s not really productive, and the students end up losing much of what they “learned”.
This I can definitely agree is not a good way to learn.
This sounds like the old tired line “RAID is not backup”. Which is true, nobody disagrees, and so it is pointless to keep repeating.
FWIW, I continually have discussions with people that believe having a distributed database means they don’t need backups.
It’s less about someone actively disagreeing and more about people naively thinking it’s the same concept.
Wonder what he thinks of Fossil and Mercurial. The Fossil author deliberately disallowed rewriting history; he makes a good case for it. At one time, Mercurial history was immutable too, but I believe this has changed.
I guess my point is that there are DVCS out there that satisfy his criteria; they’re just not git.
History can never be truly immutable so long as the data is stored on mutable media like a hard disk. Refusing to package tools that do it just makes people who need the feature find or build third-party tools.
Do you see people doing that? I mostly see people just accepting the limitations and dealing with it.
The site seems down now :(
About Mercurial, I believe it has always allowed rewriting history, but not by default — you have to change your configuration files to opt-in to all the “dangerous/advanced” features.
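For example, if I remember right, history editing stays off until you explicitly enable the relevant extensions, something like this in ~/.hgrc (which extensions you enable is up to you):
# append the opt-in extensions to your Mercurial config
cat >> ~/.hgrc <<'EOF'
[extensions]
rebase =
histedit =
strip =
EOF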
Team lobste.rs, @lattera, @nickpsecurity?
Haha. I would love it if I had the time to play. Perhaps next year. Thanks for the ping, though. I’ve forwarded this on to a few of my coworkers who play CTFs.
I’d love to if I hadn’t lost my memory, including of hacking, to that injury. I never relearned it since I was all-in with high-assurance security at that point which made stuff immune to almost everything hackers did. If I still remembered, I’d have totally been down for a Lobsters hacking crew. I’d bring a dozen types of covert channels with me, too. One of my favorite ways to leak small things was putting it in plain text into TCP/IP headers and/or throttling of what otherwise is boring traffic vetted by NIDS and human eye. Or maybe in HTTPS traffic where they said, “Damn, if only I could see inside it to assess it” while the data was outside encoded but unencrypted. Just loved doing the sneakiest stuff with the most esoteric methods I could find with much dark irony.
I will be relearning coding and probably C at some point in future to implement some important ideas. I planned on pinging you to assess the methods and tooling if I build them. From there, might use it in some kind of secure coding or code smashing challenge.
I’m having a hard time unpacking this post, and am really starting to get suspicious of who you are, nickpsecurity. Maybe I’ve missed some background posts of yours that explains more, and provides better context, but this comment (like many others) comes off…almost Markovian (as in chain).
“If I hadn’t lost my memory…” — of all the people on Lobsters, you seem to have the best recall. You regularly cite papers on a wide range of formal methods topics, old operating systems, security, and even in this post discuss techniques for “hacking” which, just sentences before “you can’t remember how to do.”
You regularly write essays as comments…some of which are almost tangential to the main point being made. These essays are cranked out at a somewhat alarming pace. But I’ve never seen an “authored by” submitted by you pointing outside of Lobsters.
You then claim that you need to relearn coding, and “probably C” to implement important ideas. I’ve seen comments recently where you ask about Go and Rust, but would expect, given the number of submissions on those topics specifically, you’d have wide ranging opinions on them, and would be able to compare and contrast both with Modula, Ada, and even Oberon (languages that I either remember you discussing, or come from an era/industry that you often cite techniques from).
I really, really hate to have doubt about you here, but I am starting to believe that we’ve all been had (don’t get me wrong, we’ve all learned things from your contributions!). As far as I’ve seen, you’ve been incredibly vague with your background (and privacy is your right!). But, that also makes it all the more easy to believe that there is something fishy with your story…
I’m not hiding much past what’s private or activates distracting biases. I’ve been clear when asked on Schneier’s blog, HN, maybe here that I don’t work in the security industry: I’m an independent researcher who did occasional gigs if people wanted me to. I mostly engineered prototypes to test my ideas. Did plenty of programming and hacking when younger for the common reasons and pleasures of it. I stayed in jobs that let me interact with lots of people. Goal was social research and outreach on big problems of the time like a police state forming post-9/11 which I used to write about online under aliases even more than tech. I suspected tech couldn’t solve the problems created by laws and media. Had to understand how many people thought, testing different messages. Plus, jobs allowing lots of networking mean you meet business folks, fun folks, you name it. A few other motivations, too.
Simultaneously, I was amassing as much knowledge as I could about security, programming, and such trying to solve the hardest problems in those fields. I gave up hacking since its methods were mostly repetitive and boring compared to designing methods to make hacking “impossible.” Originally a mix of public benefit and ego, I’d try to build on work by folks like Paul Karger to beat the worlds’ brightest people at their game one root cause at a time until a toolbox of methods and proven designs would solve the whole problem. I have a natural, savant-like talent for absorbing and integrating tons of information but a weakness for focusing on doing one thing over time to mature implementation. One is exciting, one is draining after a while. So, I just shared what I learned with builders as I figured it out with lots of meta-research. My studies of work of master researchers and engineers aimed to solve both individual solutions in security/programming (eg secure kernels or high-productivity) on top of looking for ways to integrate them like a unified, field theory of sorts. Wise friends kept telling me to just build one or more of these to completion (“focus Nick!”). Probably right but I’d have never learned all I have if I did. What you see me post is what I learned during all the time I wasn’t doing security consulting, building FOSS, or something else people pushed.
Unfortunately, right before I started to go for production stuff beyond prototypes, I took a brain injury in an accident years back that cost me most of my memory, muscle memory, hand-eye coordination, reflexes, etc. Gave me severe PTSD, too. I can’t remember most of my life. It was my second, great tragedy after a triple HD failure in a month or two that cost me my data. All I have past my online writings are mental fragments of what I learned and did. Sometimes I don’t know where they came from. One of the local hackers said I was the Jason Bourne of INFOSEC: didn’t know shit about my identity or methods but what’s left in there just fires in some contexts for some ass-kicking stuff. I also randomly retain new stuff that builds on it. Long as it’s tied to strong memories, I’ll remember it for some period of time. The stuff I write-up helps, too, which mostly went on Schneier’s blog and other spaces since some talented engineers from high-security were there delivering great peer review. Made a habit out of what worked. I put some on HN and Lobsters (including authored by’s). They’re just text files on my computer right now that are copies of what I told people or posted. I send them to people on request.
Now, a lot of people just get depressed, stop participating in life as a whole, and/or occasionally kill themselves. I had a house to keep in a shitty job that went from a research curiosity to a necessity since I didn’t remember admining, coding, etc. I tried to learn C# in a few weeks for a job once like I could’ve before. Just gave me massive headaches. It was clear I’d have to learn a piece at a time like I guess is normal for most folks. I wasn’t ready to accept it plus had a job to re-learn already. So, I had to re-learn the skills of my existing job (thank goodness for docs!), some people stuff, and so on to survive while others were trying to take my job. Fearing discrimination for disability, I didn’t even tell my coworkers about the accident. I just let them assume I was mentally off due to stress many of us were feeling as Recession led to layoffs in and around our households. I still don’t tell people until after I’m clearly a high-performer in the new context. Pointless since there’s no cure they could give but plenty of downsides to sharing it.
I transitioned out of that to other situations. Kind of floated around keeping the steady job for its research value. Drank a lot since I can’t choose what memories I keep and what I have goes away fast. A lot of motivation to learn stuff if I can’t keep it, eh? What you see are stuff I repeated the most for years on end teaching people fundamentals of INFOSEC and stuff. It sticks mostly. Now, I could’ve just piece by piece relearned some tech in a focused area, got a job in that, built up gradually, transitioned positions, etc… basically what non-savants do is what I’d have to do. Friends kept encouraging that. Still had things to learn talking to people especially where politics were going in lots of places. Still had R&D to do on trying to find the right set of assurance techniques for right components that could let people crank out high-security solutions quickly and market competitive. All the damage in media indicated that. Snowden leaks confirmed most of my ideas would’ve worked while most of security community’s recommendations not addressing root causes were being regularly compromised as those taught me predicted. So, I stayed on that out of perceived necessity that not enough people were doing it.
The old job and situation are more a burden now than useful. Sticking with it to do the research cost me a ton. I don’t think there’s much more to learn there. So, I plan to move on. One, social project failed in unexpected way late last year that was pretty depressing in its implications. I might take it up again since a lot of people might benefit. I’m also considering how I might pivot into a research position where I have time and energy to turn prior work into something useful. That might be Brute-Force Assurance, a secure (thing here), a better version of something like LISP/Smalltalk addressing reasons for low uptake, and so on. Each project idea has totally different prerequisites that would strain my damaged brain to learn or relearn. Given prior work and where tech is at, I’m leaning most toward a combo of BFA with a C variant done more like live coding, maybe embedded in something like Racket. One could rapidly iterate on code that extracted to C with about every method and tool available thrown at it for safety/security checks.
So, it’s a mix of indecision and my work/life leaving me feeling exhausted all the time. Writing up stuff on HN, Lobsters, etc about what’s still clear in my memory is easy and rejuvenating in comparison. I also see people use it on occasion with some set to maybe make waves. People also send me emails or private messages in gratitude. So, probably not doing what I need to be doing but folks were benefiting from me sharing pieces of my research results. So, there it is all laid out for you. A person outside security industry going Ramanujan on INFOSEC and programming looking for its UFT of getting shit done fast, correct, and secure (“have it all!”) while having day job(s) about meeting, understanding, and influencing people for protecting or improving democracy. Plus, just the life experiences of all that. It was fun while it lasted. Occasionally so now but more rare.
Thank you for sharing your story! It provides a lot of useful context for understanding your perspective in your comments.
—
Putting my troll hat on for a second, what you’ve written would also make a great cover story if you were a human/AI hybrid. Just saying. :)
Sure. I’m strange and seemingly contradictory enough that I expect confusion or skepticism. It makes sense for people to wonder. I’m glad you asked, since I needed to do a thorough writeup on it to link to vs. scattered comments on many sites.
I have to admit similar misgivings (unsurprisingly, I came here via @apg and know @apg IRL). For someone so prolific and opinionated you have very little presence beyond commenting on the internet. To me, that feels suspicious, but who knows. I’m actually kind of hoping you’re some epic AI model and we’re the test subjects.
Occam’s Razor applies. ‘A very bright human bullshitter’ is more likely than somebody’s research project.
@nickpsecurity, have you considered “I do not choose to compete” instead of “If only I hadn’t had that memory loss”?
I, for one, will forgive and forget what I’ve seen so far. (TBH, I’m hardly paying attention anyway.)
But, lies have a way of growing, and there is some line down the road where forgive-and-forget becomes GTFO.
I did say the way my mind works makes it really hard to focus on long-term projects to completion. Also, I probably should’ve been doing some official submissions in ACM/IEEE but polishing and conferencing was a lot of work distracting from the fun/important research. If I’m reading you right, it’s accurate to say I wasn’t trying to compete in academia, market, or social club that is the security industry on top of memory loss. I was operating at a severe handicap. So, I’d (a) do those tedious, boring, distracting, sometimes-political things with that handicap or (b) keep doing what I was doing, enjoying, and provably good at despite my troubles. I kept going with (b).
That was the decision until recently when I started looking at doing some real, public projects. Still in the planning/indecision phase on that.
“But, lies have a way of growing, and there is some line down the road where forgive-and-forget becomes GTFO.”
I did most of my bullshitting when I was a young hacker trying to get started. Quite the opposite of your claim, the snobby, elitist, ego-centered groups I had to start with told you to GTFO by default unless you said what they said, did what they expected, and so on. I found hacker culture to be full of bullshit beliefs and practices with no evidence backing them. That’s true to this day. Just getting into the few opportunities I had required me to talk big… being a loud wolf facing other wolves… plus delivering on a lot of it just to not be filtered. I’d have likely never entered INFOSEC or verification otherwise. Other times have been personal failures that required humiliating retractions and apologies when I got busted. I actually care about avoiding unnecessary harm or aggravation to decent people. I’m sure more failures will come out over time with them costing me but there will be a clear difference between old and newer me. Since I recognize my failure there, I’m focusing on security BSing for the rest of this comment since it’s most relevant here.
The now, especially over the past five years or so, has been me sharing hard-won knowledge with people, with citations. Most of the BS is stuff security professionals say without evidence that I counter with evidence. Many of their recommendations got trashed by hackers, with quite a few of mine working or working better. Especially on memory safety, small TCBs, covert channels, and obfuscation. I got much of my early karma on HN in particular mainly countering BS in fads, topics/people w/ special treatment, echo chambers, and so on. My stuff stayed greyed out but I had references. They usually got upvoted back by the evening. To this day, I get emails thanking me for doing what they said they couldn’t, since any dissenting opinion on specific topics or individuals would get slammed. My mostly-civil, evidence-based style survived. Some BS actually declined a bit since we countered it so often. Just recently had to counter a staged comparison here which is at 12 votes worth of gratitude, high for HN dissenters. The people I counter include high-profile folks in the security industry who are totally full of shit on certain topics. Some won’t relent no matter how concrete the evidence is, since it’s a game or something to them. Although I get ego out of being right, I mainly do this since I think safe, secure systems are a necessary, public good. I want to know what really works, get that out there, and see it widely deployed.
If anything, I think my being a bullshitting hacker/programmer early on was a mix of justified and maybe overdoing it, vs a flaw I should’ve avoided. I was facing locals and an industry that’s more like a fraternity than a meritocracy, itself constantly reinforcing bullshit and GTFO’ing dissenters. With my learning abilities and obsession, I got real knowledge and skills pretty quickly, switching to my current style of just teaching what I learned in a variety of fields with tons of brainstorming and private research. Irritated by constant BS, I’ve swung way in the other direction by constantly countering BS in IT/INFOSEC/politics while being much more open about my personal situation in ways that can cost me. I also turned down quite a few job offers, likely worth five to six digits, telling them I was a researcher “outside of industry” who had “forgotten or atrophied many hands-on skills.” I straight-up tell them I’d be afraid to fuck up their systems by forgetting little, important details that only experience (and working memory) gives you. Mainly admining or networking stuff for that. I could probably re-learn safe/secure C coding or something enough to not screw up commercial projects if I stayed focused on it. Esp FOSS practice.
So, what do you think? Did I have justification for at least some of my early bullshit, quite like playing the part for job interviews w/ HR drones? Or should I have been honest enough that I never learned or showed up here? There might be middle ground, but that cost seems likely given past circumstances. I think my early deceptions or occasional fuckups are outweighed by the knowledge/wisdom I obtained and shared. It definitely helped quite a few people, whereas talking big to gain entry did no damage that I can tell. I wasn’t giving bad advice or anything: just a mix of storytelling with letting their own perceptions seem true. Almost all of them are way in my past. So, I’m really curious what you think: how justified is someone entering a group of bullshitters with arbitrary filtering criteria in out-bullshitting and out-performing them to gain useful knowledge and skills? That part specifically.
As a self-piloted, ambulatory tower of nano machines inhabiting the surface of a wet rock hurtling through outer space, I have zero time for BS in any context. Sorry.
I do have time for former BSers who quit doing it because they realized that none of these other mechanical wonders around them are actually any better or worse at being what they are. We’re all on this rock together.
p.s. the inside of the rock is molten. w t actual f? :D
Actually, come to think of it, I will sit around and B.S. for hours, in person with close friends, for fun. Basically just playing language games that have no rules. It probably helps that all the players love each other. That kind of BS is fine.
I somehow missed this comment before or was dealing with too much stuff to respond. You and I may have some of that in common since I do it for fun. I don’t count that as BS people want to avoid so much as just entertainment, since I always end with a signal that it’s bullshit. People know it’s fake unless tricking them is part of our game, esp if I owe them a “Damnit!” or two. Even then, it’s still something we’re doing voluntarily for fun.
My day-to-day style is a satirist, like popular artists doing controversial comedy or references. I just string ideas together to make people laugh, wonder, or shock them. Same skill that lets me mix and match tech ideas. If shocking stuff bothers them, I tone it way down so they’re as comfortable as they let others be. Otherwise, I’m testing their boundaries with stuff making them react somewhere between hysterical laughter and “Wow. Damn…” People tell me I should Twitter the stuff or something. Prolly right again but haven’t done it. Friends and coworkers were plenty fun to entertain without any extra burdens.
One thing about sites like this is that staying civil and informational actually makes me hide that part of my style a lot, since it might piss a lot of people off or risk deleting my account. I mostly can’t even joke here since it just doesn’t come across right. People interpret me via the impression those informational or political posts gave, vs my in-person, satirical style that heavily leans on non-tech references, verbal delivery, and/or body language. Small numbers of people face-to-face instead of a random crowd, too, most of the time. I seem to fit into that medium better. And I’m trying to be low-noise and low-provocation on this site in particular since I think it has more value that way.
Just figured I’d mention that since we were talking about this stuff. I work in a pretty toxic environment. In it, I’m probably the champion of burning jerks with improv and comebacks. Even most naysayers pay attention with their eyes and some smirks saying they look forward to next quip. I’m a mix of informative, critical, random entertainment, and careful boundary pushing just to learn about people. There’s more to it than that. Accurate enough for our purposes I think.
Lmao. Alright. We should get along fine then given I use this site for brainstorming, informing, and countering as I described. :)
And yeah, it trips me out that life is sitting on a molten, gushing thing being supplied energy by a pile of hydrogen bombs going off in space that’s set to maybe expand into our atmosphere at some point. That is, if a stray star doesn’t send us whirling out of orbit. Standing in the way of all of this is the ingenuity of what appear to be ants on a space rock whose combined brainpower got a few off of it and then back on a few times. They have plans for their pet rock. Meanwhile, they scurry around on it making all kinds of different visual, IR, and RF patterns for space tourists to watch for a space buck a show.
As with most of Gary Bernhardt’s writing, I loved this piece. I read it several times over, as I find his writing often deeply interesting. To me, this is a great case study in judgement through attempting to apply Americanized principles to speech between two non-Americans (a Pole and a Finn) communicating in a second language.
There are several facets at play here as I see it:
There’s a generational difference between older hackers and newer ones. For older hackers, the code is all that matters, niceties be damned. Newer hackers care about politeness and being treated well. Some of this is a product of money coming in since the 90s, and people who never would’ve been hackers in the past are hackers now.
Linux is Linus’ own project. He’s not going to change. He’s not going to go away. If you don’t like the way he behaves, fork it. Run your own Linux fork the way you want, and you’ll see whether or not the niceties matter. Con Kolivas did this for years.
There are definitely cultural issues at play. While Linus has a lot of exposure to American culture, he’s Finnish. Finnish people are not like Americans. I find the American obsession with not upsetting people often infuriatingly two-faced, and I’m British. I have various friends in other countries who find the much more minor but still present British obsession with not upsetting people two-faced, and they’re right.
Go to Poland, fuck up, and people will tell you. Go to Germany, do something wrong, and people will correct you. Go to Finland, do something stupid that gets in the way of a person’s job, and they’ll probably swear at you in Finnish. I’m not saying this is right or wrong; it’s just that the rest of the world works differently to you, and while you can scream at the sea about perceived injustices, the sea will not change its tides for you.
Yes Linus is being a jerk, but it’s not like this is an unknown quantity. Linus doesn’t owe you kindness. You don’t owe Linus respect either. If his behaviour is that important to you, don’t use Linux.
In my experience of dealing with Finns, they don’t sugar coat things. When something needs to be said, the Finns I’ve interacted with are extremely direct and to the point, compared to some other cultures. Would you say that’s fair?
I didn’t say that he’s representative of Finnish culture. He’s a product of it. He wasn’t raised American. He didn’t grow up immersed in American culture and values. It would be unrealistic to expect him to hold or conform to American values.
Definitely! Out of interest, what are your thoughts on this in terms of applicability to his communication style? I’m fairly certain there’s a general asshole element to his style, but I wonder how much (if any) is influenced by this.
As an Italian, I can say that after WWII, the US did a great job of spreading its culture in Europe.
Initially to counter the “Bolshevik” influence, later as a carrier for their products.
They have been largely successful.
Indeed, I love Joplin just like I love Vivaldi, Mozart and Beethoven! :-)
But we have thousands of years of variegated history, so we are not going to completely conform anyway. After all, we are proud of our deep differences, as they enrich us.
At the risk of getting into semantics, Finland was much more neutral post WWII than other European nations due to realpolitik.
Also, there is something to say for Italian insults, by far some of the finest and most perverse, blasphemous poetry I’ve ever had the pleasure of experiencing. It’s the sort of level of filth that takes thousands of years to age well :)
Actually, the invettiva (invective) is a literary genre of its own, dating back to the ancient Greeks.
In Italian, there are several passages of Dante’s Divina Commedia that belong to the genre and are spectacular examples of the art you describe.
But since we are talking about jerks, I will quote Marziale from memory: 2000 years later we still memorize his lines at school.
Nothing Linus can say will ever compete! ;-)
Google translates this as
Which I assume is horribly wrong. Is it possible to translate for us non-worldly folks who only know English? :-)
The translation from Latin is roughly
It’s one of Martial’s Epigrams.
Not even one of the worst!
It’s worth noting how nothing else remains of Menneia. And the same can be said of several people targeted by his insults.
Hah, that’s great. Thank you!
How is that relevant? On my current team, we have developers from Argentina, Bosnia, Brazil, China, India, Korea, and Poland, as well as several Americans (myself included). Yet as far as I can recall from the year that I’ve been on this team so far, all of our written communication has been civil. And even in spoken communication, as far as I can recall, nobody uses profanity to berate one another. To be fair, this is in a US-based corporate environment. Still, I don’t believe English being a second language is a reason to not be civil in written communication.
You’re comparing Linux, a Finnish-invented, international, volunteer-based non-corporate project to a US-based corporate environment, and judging Linus’ communications against your perception of a US-based corporate environment. You’re doing the same thing as the author, projecting your own values onto something that doesn’t share those values.
Additionally, by quoting the words I’ve said and following that up with a reference to a US-based corporate environment, you’ve judged the words of a non-American who wasn’t speaking to you by your own US-based corporate standards.
I hope that helps you understand my point more clearly. My point isn’t that Linus does or doesn’t act like an asshole (he does), but that expecting non-Americans to adhere to American values, standards or norms is unrealistic at best, and cultural colonialism at worst.
No, people who would’ve never been hackers in the past, are not hackers now either.
And hackers have always cared about more than code. Hacking has always been a political act.
Linus is not a jerk; his behaviour is pretty deliberate. He does not want to conform.
He is not much different from Dijkstra, Stallman or Assange.
Today, cool kids who do not understand what hacking is insult hackers while calling themselves hackers.
Guess what? Hackers care about your polite corporate image about as much as they care about dress codes.
Not an issue. It’s a feature! Hackers around the world are different.
And we are proud of the differences, because they help us to break mainstream groupthink.
This is a really interesting idea! I’m seeing this kind of idea more and more these days and I haven’t been able to work out what it means. I guess you don’t mean something as specific as “Hacking has always been in favour of a particular political ideology” nor something as general as “Hacking has always had an effect on reality”. So could you say something more precise about what you mean by that?
This is a good question that is worthy of a deep answer. I’ll rush a fast one here, but I might write something more in the near future.
All hacks are political, but some are more evidently so. An example is Stallman’s GNU GPL. Actually the whole GNU project is very political. Almost as political as BSDs. Another evidently political hack was done by Cambridge Analytica with Facebook’s user data.
The core value of hacker activity is curiosity: hackers want to learn. We value freedom and sharing as a means to get more knowledge for humanity.
As such, hacking is always political: its goal is always to affect (theoretically, to improve) the community in one way or another.
Challenging laws or authorities is something that follows naturally from such a value, but it’s not done to get power or profit, just to learn (and show) something new. This shows how misleading it is to distinguish hackers by hat colour: if you are a hacker you won’t have a problem violating stupid laws to learn and/or share some knowledge, be it a secret military cable, how to break a DRM system, or how to modify a game console. It’s not the economic benefit you are looking for, but the knowledge. The very simple fact that some knowledge is restricted, forbidden or simply unexplored is a strong incentive for a hacker to try to gain it, using her knowledge and creativity.
But even the most apparently innocent hack is political!
See Rust, Go, Haskell or Oberon: each with its own vision of how and by whom programming should be done, and of what one should expect from software.
See web browsers: very political tools that let strangers from a different state run code (soon assembly-like) on your PC (ironically with your consent!).
See Windows, Debian GNU/Linux or OpenBSD: each a powerful operating system with its own values and strong political vision (yes, even OpenBSD).
See ESR’s appropriation of the Jargon File (not much curiosity there actually, just a pursuit of power)!
Curiosity is not the only value of a hacker, but it is one all hackers share.
Now, this is also a value each hacker expresses in a different way: I want everyone to become a hacker, because I think this would benefit the whole of humanity. Others don’t want to talk about the political responsibility of hacking because they align with the regime they live in (be it Silicon Valley, Raqqa, Moscow or whatever), and politically aware hackers might subvert it.
But even if you don’t want to acknowledge such responsibility, if you hack, you are politically active, for better or worse.
That’s also the main difference between free software and open source software, for example: free software fully acknowledges such ethical (and thus political) responsibility; open source negates it.
So if I understand you correctly you are saying something much closer to “Hacking has always attempted to change the world” than “Hacking has always been in support of a political party”.
Politics is to political parties what the economy is to bankers.
If you read “Hacking has always been a political act” as something related to political parties, you should really delve deeper in the history of politics from ancient Athens onwards.
No.
This is a neutral statement that could be the perfect motto/tagline for a startup or a war.
Hacking and politics are not neutral. They are both strongly oriented.
Politics is oriented to benefit the polis.
Indeed, lobbying for particular interests is not politics at all.
Hacking is not neutral either.
Hacking is rooted in the international scientific research tradition that was born (at least) in the Middle Ages.
Hackers solve human problems. For all humans. Through our Curiosity.
IMO, you’re defining “Hacking is political” to the point of uselessness. Basically, nothing is apolitical in your world. Walking down the street is a political statement on the freedom to walk. Maybe that’s useful in a warzone but in the country I live in it’s a basic right to the point of being part of the environment. I don’t see this really being a meaningful or valuable way to talk about things. I think, instead, it’s probably more useful for people to say “I want to be political and the way I will accomplish this is through hacking”.
Read more carefully.
Every human action can serve the polis, but several human actions are not political.
Hacking, instead, is political in its very essence. Just like Science. And Math.
Maybe it’s the nature of knowledge: an evolutionary advantage for humanity as a whole.
Or maybe it is just an intuitive optimization that serves hackers’ curiosity: the more I share my discoveries, the more brains can build upon them, the more interesting things I can learn from others, the more problems get solved, the more time there is for more challenging problems…
For sure, everyone can deny or refuse the political responsibility that comes from hacking, but such behaviour is political anyway, even if short-sighted.
I just don’t see it. I think you’re claiming real estate on terminology in order to own a perspective. In my opinion, intent is usually the dominating factor, for example murder vs manslaughter (hey, I’m watching crime drama right now). Or a hate crime vs just beating someone up.
You say:
But I know plenty of people who do what would generally be described as hacking with no such intent. It may be a consequence that the community is affected, but oftentimes it’s pretty unlikely and definitely not what they were trying to do.
Saying that “intent is usually the dominating factor” is a political act. :-)
It’s like talking about FLOSS or FOSS, as if free software and open source were the same thing. It’s not just false; it does not work.
Indeed it creates a whole series of misunderstandings and contradictions that are easily dismissed if you simply recognise the difference between the two worlds.
Now, I agree that Hacking and Engineering overlap.
But they differ more than murder and manslaughter do.
Because hackers use engineering.
And despite the fact that people abuse all technical terms, we still need proper terms and definitions.
So despite the fact that everyone apparently wants to leverage terms like “hacking” and “freedom” in their own marketing, we still need to distinguish hackers from engineers and free software from open source.
And honestly I think it’s easy to tell them apart, in both cases.
Could you help me understand your usage of the word “politics” better, then? I don’t think it’s one that I am familiar with.
Good question! You caught me completely off-guard!
Which is crazy, given my faculty at University was called “Political Science”!
I use the term “Politics” according to the original meaning.
Politics is the human activity that creates, manages and preserves the polis.
Polis was the word ancient Greeks used for the “city”, but by extension we use it for any “community”. In our global, interconnected world, the polis is the whole mankind.
So Politics is the set of activities people do to participate in our collective life.
One of my professors used to define it as “the art of living together”.
Another one, roughly as “the science of managing power for/over a community”.
Anyway, the value of a political act depends on how it makes the community stronger or weaker. Thus politics is rarely neutral. And so is hacking.
Thanks a lot. That does make things clearer. However, I am still a bit confused: under the definition “Politics is the human activity that creates, manages and preserves the polis”, I admit I still don’t understand why saying that “intent is usually the dominating factor” is a political act, but at least I now have a framework in which to think about it more.
That’s a very good explanation. I might add:
Linus has none of these luxuries. He cannot err on the side of being too subtle.
This blog post is just another instance of an American that believes that the rest of the world has to revolve around his cultural norms.
I think the author did a pretty good job of editing the message in such a way that it was more clear, more direct, and equally forceful, while ensuring that all of that force was directed in a way relevant to the topic at hand.
(Linus has strong & interesting ideas about standardization & particular features. I would love to read an essay about them. The response to a tangentially-related PR is not a convenient place to put those positions: they distract from the topic of the PR, and also make it difficult to find those positions for people who are more interested in them than in the topic of the PR.)
The resulting message contains all of the on-topic information, without extraneous crap. It uses strong language and emphasis, but limits it to Linus’s complaints about the actually-submitted code – in other words, the material that should be emphasized. It removes repetition.
There is nothing subtle about the resulting message. Unlike the original message, it’s very hard to misread as an unrelated tangent about standardization practices that doesn’t address the reasons for rejecting the PR at all.
The core policy being implemented here is not “be nice in order to avoid hurting feelings”, but “remove irrelevant rants in order to focus anger effectively”. This is something I can get behind.
Just wanted to point out that America is a huge country and its population is not homogenous. For example, you could have replaced Poland, Germany, and Finland with “Boston” and still have been correct (though, they’d just swear at you in English 🙂).
I think because most American tech comes out of San Francisco/Silicon Valley that it skews what is presented as “Americanized principles” to the international tech community.
Down here in the South, they have an interesting mix of trying to look/sound more civil or being blunt in a way that lets someone know they don’t like them or think they’re stupid. Varies by group, town, and context. There’s plenty of trash talking depending on that. Linus’s style would fit in pretty well with some of them.
Rather, don’t develop the kernel. One can use Linux without ever having heard the name Torvalds (the majority, I guess).
Not a bug. It cuts down on reader confusion, especially as replies come in.
Does the backend actually store all of the comment revisions? Is it possible to see the diff on comments?
Nope.
Thanks for the clarification!
Where YAML gets most of its bad reputation is actually not from YAML itself but from some projects (to name a few: Ansible, Salt, Helm, …) shoehorning a programming language into YAML by adding a template language on top. And then pretending that it’s declarative because YAML. YAML + templating is as declarative as any language that has branches and loops, except that YAML wasn’t designed to be a programming language and is rather poor at it.
In the early days, Ant (Java build tool) made this mistake. And it keeps getting made. For simple configuration, YAML might be fine (though I don’t enjoy using it), but there comes a point where a programming language needs to be there. Support both: YAML (or TOML, or even JSON) and then a programming language (statically typed, please, don’t make the mistake that Gradle made in using Groovy – discovery is awful).
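For what it’s worth, here’s a minimal sketch of that last suggestion (TypeScript, with hypothetical service and environment names): keep the branches and loops in a statically typed language and emit plain data, rather than templating loops into YAML.

```typescript
// Instead of looping inside a YAML template, build the config as typed data
// and serialize it: branches and loops live in a real language, and the
// output stays plain, declarative data.
interface ServiceConfig {
  name: string;
  replicas: number;
  env: Record<string, string>;
}

const environments = ["staging", "production"] as const;

const services: ServiceConfig[] = environments.map((env) => ({
  name: `web-${env}`,
  replicas: env === "production" ? 3 : 1,
  env: { LOG_LEVEL: env === "production" ? "warn" : "debug" },
}));

// Emit JSON, which YAML 1.2 parsers accept as-is.
console.log(JSON.stringify({ services }, null, 2));
```

Since YAML 1.2 treats JSON as a subset, most tools that consume YAML will read that output directly.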
I’m very intrigued by Dhall though I’ve not actually used it. But it is, from the github repo,
it sounds neat
There is also UCL (Universal Config Language?) which is like nginx config + json/yaml emitters + macros + imports. It does some things that bother me, so I stick to TOML, but it seems like it is gaining some traction in the FreeBSD world. One thing I do like about it is that there is a CLI for getting/setting elements in a UCL file.
Yes! This is one of the reasons I’m somewhat scared of people who like Ansible.
Yep! People haven’t learned from mistakes. Here’s a list of XML based programming languages.
I disagree with the negative posts. Writing about something you’ve just learned is absolutely a wonderful way to cement the knowledge, record it as you understand it for posterity (if only for yourself), and help you pull others up right behind you. It’s not your responsibility to keep your ideas to yourself until some magic day when you reach enlightenment and only then convey blessed knowledge on the huddled masses; a lot of this stuff (the specific tech, for the most part) moves too damn fast for that anyway. Maybe we need better mechanisms for surfacing the best information, sure, but discouraging people (yes, even noobs) from sharing what they’ve learned only ensures we’ll have fewer people practiced in how to do it effectively in the future.
That said, I do 1000% agree that people writing in public should be as up front as possible about where they are coming from and where they are at. I definitely get annoyed with low quality information that also carries an authoritative tone.
There’s a world between documenting how you learned a thing and writing a tutorial for that same thing. If you’re learning a thing, probably don’t write a tutorial. I agree with you, though: writing about a freshly learned lesson helps make the learning more permanent.
In the case of projects, I’d rather see people committing documentation changes back to the project; at least there the creator of the project can review it.
It’s a free internet and nobody can stop someone from doing this, but, IMO, the problem with technology is not that there are too few poorly written tutorials out there. Maybe it’s worth finding other ways of being constructive.
Writing it down can help the mind remember or think on things. If errors are likely, then maybe they just don’t publish it. They get the benefits of writing it down without the potential downsides.
Java is a language, while Node is a runtime. Node should be compared against the JVM because each platform can be targeted by different languages. For example, I can target both Node and the JVM with Clojure. In that scenario the problems regarding locking threads don’t exist because Clojure is designed to be thread safe and it provides tools, such as atoms, for working with shared mutable state.
My experience targeting both the JVM and Node, is that the JVM provides a much simpler mental model for the developer. The JVM allows you to write predominantly synchronous code, and the threads are used to schedule execution intelligently ensuring that no single chunk of code hogs the CPU for too long. With Node you end up doing scheduling by hand, and it’s your responsibility to make sure that your code isn’t blocking the CPU.
Here’s a concrete example from a script I ended up writing on Node:
here’s what the JVM equivalent would look like:
You could use promises or async to make the Node example a bit cleaner, but at the end of the day you’re still doing a lot more manual work and the code is more complex than it would’ve been with threads.
Couldn’t this be better described as a limitation of the implementation of Clojure on Node and not actually Node?
I don’t really see how that’s the case. The problem I’m describing is that Node has a single execution thread, and you can’t block it. This means that the burden of breaking up your code into small chunks and coordinating them is squarely on the developer.
As I said, you could make the code a bit more concise, but the underlying problem is still there. For example, I used promises here, but that’s just putting on a bandaid in my opinion.
Threads are just a better default from the developer perspective, and it’s also worth noting that you can opt into doing async on the JVM just fine if you really want to. It’s not a limitation of the platform in any way.
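To make the chunking burden described above concrete, here’s a rough sketch (hypothetical names, not the original script) of what manually yielding to the event loop tends to look like on Node:

```typescript
// Hypothetical CPU-bound work over a large in-memory dataset. A plain loop
// over millions of rows would block Node's single execution thread, starving
// timers, I/O callbacks, and HTTP handlers until it finishes, so the work has
// to be chopped into chunks by hand.
interface Row { id: number; value: string }

function processRow(row: Row): void {
  // placeholder for some CPU-heavy transformation
}

async function processAll(rows: Row[], chunkSize = 1000): Promise<void> {
  for (let i = 0; i < rows.length; i += chunkSize) {
    // One bounded chunk of synchronous work...
    for (const row of rows.slice(i, i + chunkSize)) {
      processRow(row);
    }
    // ...then explicitly yield so other callbacks get a turn on the event loop.
    await new Promise<void>((resolve) => setImmediate(resolve));
  }
}
```

On the JVM the same loop could just run synchronously on a worker thread and the scheduler would interleave it with other work; the slicing and explicit yielding here is exactly the manual coordination being described.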
There is the caveat that threads (at least in the JVM) dramatically increase the complexity of the memory model and are generally agreed to make it harder to write correct code. Single-threaded event-loop style programs don’t remove the chance of race conditions and deadlocks, but they do remove a whole class of issues. Personally, I like something like the Erlang model, which is fairly safe and scales across hardware threads. My second preference is for a single-threaded event loop (although I generally use it in OCaml, which makes expressing the context switches much more pleasant than in JavaScript/Node).
The part about it being harder to write correct code only applies to imperative languages, though. This is why I’m saying that it’s important to separate the platform from the language. I like the Erlang model as well; however, the shared-nothing approach does make some algorithms trickier.
Personally, I found the Clojure model of providing thread-safe primitives for managing shared mutable state to work quite well in practice. For more complex situations, the CSP model of core.async or Go channels is handy as well in my experience.
Good on you. It’s worth mentioning here that Microsoft is going in the other direction. https://www.mercurynews.com/2018/06/19/microsoft-defends-ties-with-ice-amid-separation-outcry/amp/
Maybe I’m missing something, but it seems they are going in the exact same direction…
It’s a very confusing article; my best guess is that they are working with ICE, but not on “projects related to separating children from their families at the border”.
And even if Microsoft isn’t directly helping, they are still helping. That nuance is discussed in OP’s article - any support to a morally corrupt institution is unacceptable, even if it is indirect support.
But that perspective is very un-nuanced. Is everything ICE does wrong? It’s a large organization. What if the software from the company that @danielcompton denied service to is actually just used to track down violent offenders who made it across the border? Or drug trafficking?
To go even further, by your statement, Americans should stop paying their taxes. Are you advocating that?
ICE is a special case, and deserves to be disbanded. It’s a fairly new agency, and its primary mission is to be a Gestapo. So yes, very explicitly, everything ICE does is wrong.
On what grounds and with what argument can you prove your statement? I mean, there is probably an issue with how it’s run, but the whole concept of ICE doesn’t sound that wrong to me.
From https://splinternews.com/tear-it-all-down-1826939873 :
I’m going to just delegate all arguments to that link, basically, with the comment that if it’s not exceedingly obvious, then I probably can’t say anything that would persuade you. Also, this is all extremely off-topic for this forum, but, whatevs.
There’s always a nuance, sure. Every police force ever subverted for political purposes was still continuing to fight petty crime, prevent murders and help old ladies cross the street. This always presented the regimes a great way to divert criticism, paint critics as crime sympathisers and provide moral leeway to people working there and with them.
America though, with all its lip service to small government and self reliance was the last place I expected that to see happening. Little did I know!
Just like people, organizations should be praised for their best behaviors and held responsible for their worst behaviors. Also, some organizations wield an incredible amount of power over people and can easily hide wrongdoing and therefore should be held responsible to the strictest standard.
It’s worth pointing out that ICE didn’t exist 20 years ago. Neither, for that matter, did the DHS (I was 22 when that monster was born). “Violent offenders” who “cross the border” will be tracked down by the same people who track down citizen “violent offenders”, i.e. the cops (what does “violent offender” even mean? How do we know who these people are? How will we know if they’re sneaking in?). Drug trafficking isn’t part of ICE’s institutional prerogative in any large, real sense, so it’s not for them to worry about. Plenty of Americans, for decades, have advocated tax resistance precisely as a means to combat things like this. We can debate its utility, but it is absolutely a tactic that has seen use since, as far as I know, at least the Vietnam war. Not sure how much nuance is necessary when discussing things like this. That doesn’t mean it’s open season to start dropping outrageous nonsense, but institutions which support/facilitate this in any way should be grounds for, at the very least, boycotts.
Why is it worth pointing out it didn’t exist 20 years ago? Smart phones didn’t either. Everything starts at some time.
To separate out arguments, this particular subthread is in response to MSFT helping ICE, but the comment I responded to was referring to the original post, which only refers to “border security”. My comment was really about the broader aspect but I phrased it poorly. In particular, I think the comment I replied to which states that you should not support anything like this indirectly basically means you can’t do anything.
It’s worth pointing out when it was founded for a lot of reasons: what were the conditions that led to its creation? Were they good? Reasonable? Who created it? What was its mission originally? The date is important because all of these questions become easily accessible to anyone with a web browser and an internet connection, unlike, say, the formation of the FBI or the origins of Jim Crow, which, while definitely researchable on the net, are more the domain of historical research. Smart phones and ethnic cleansing, however, are not in the same category.
If you believe the circumstances around the formation of ICE are worth considering, I don’t think pointing out the age of the institution is a great way to make that point. It sounds more like you’re saying “new things are inherently bad” rather than “20 years ago was a time with a lot of politically questionable activity” (or something along those lines).
Dude, read it however you want, but pointing out that ICE is less than 20 years old, when securing a border is a foundational issue, seems like a perfect way to intimate that this is an agency uninterested in actual security and formed expressly to fulfill a hyper-partisan, actually racist agenda. Like, did we not have border security or immigration services or customs enforcement prior to 2002/3? Why then? What was it? Also, given that it was formed so recently, it can be unformed; it can be dismantled that much more easily.
I don’t understand your strong reaction here. I was pointing out that if your goal was to communicate something, just saying it’s around 20 years old didn’t seem to communicate what you wanted to me. Feel free to use that feedback or not use it.
In addition, I bet ICE is using Microsoft Windows and probably Office too.
That’s a great point, and no I don’t advocate for all Americans to stop paying taxes.
A very interesting position. It just requires you to stop using any currency. ;-)
No, it requires you to acknowledge that using any currency is unacceptable.
Of course not using any currency is also unacceptable. When faced with two unacceptable options, one has to choose one. Using the excuse “If I follow my ethics I can never do anything” is just a lazy way to never think about ethics. In reality everything has to be carefully considered and weighed on a case by case basis.
Why? Currency is just a tool.
I completely agree.
Indeed I think that we can always be ethical, but we should look beyond the current “public enemy”, be it Cambridge Analytica or ICE. These are just symptoms. We need to cure the disease.
This seems more appropriate for barnacles.
I don’t really understand this. Sure, it’s cool to optimize something so well, but I don’t see the point of going to so much effort to reduce memory allocations. The time taken to run this, what it seems like you would actually care about, is all over the place and doesn’t get reduced that much. Why do we care about the number of allocations and GC cycles? If you care that much about not “stressing the GC”, whatever that means, then better to switch to a non-GC language than jump through hoops to get a GC language to not do its thing.
On the contrary, I found this article a refreshing change from the usual Medium fare. Specifically, this article is actually technical, has few (any?) memes, and shows each step of optimization alongside data. More content like this, please!
More to your point, I imagine there was some sort of constraint necessitating it. The fact that the allocation size dropped so drastically fell out of using a pooled allocator.
Right at the beginning of the article, it says:
So: They’re doing bulk imports of data, and the extra allocation produces so much overhead that they need to schedule around it (“outside of business hours”). Using 7.5GB may be fine for processing a single input batch on their server, but it’s likely they want to process several data sets in parallel, or do other work.
Sure, they could blast the data through a DFA in C and probably do it with no runtime allocation at all (their final code is already approaching a hand-written lexer), but completely changing languages/platforms over issues like this has a lot of other implications. It’s worth knowing if it’s manageable on their current platform.
That’s what they claim, but it sounds really weird to me. I’ve worked with plenty of large data imports in GCed languages, and have never had to worry about overhead, allocation, GC details, etc. I’m not saying they don’t have these problems, but it would be even more interesting to hear why these things are a problem for them.
Also of note - their program never actually used 7.5GB of memory. That’s the total allocations over the course of the program, virtually all of which was surely GC’ed almost immediately. Check out the table at the end of the article - peak working set, the highest amount of memory actually used, never budged from 16kb until the last iteration, where it dropped to 12kb. Extra allocations and GC collections are what dropped. Going by the execution time listing, the volume of allocations and collections doesn’t seem to have much noticeable effect on anything. I’d very much like to know exactly what business goals they accomplished by all of that effort to reduce allocations and collections.
You’re right – it’s total allocations along the way rather than the allocation high water mark. It seems unlikely they’d go out of their way to do processing in off hours without running into some sort of problem first (so I’m inclined to take that assertion at face value), though I’m not seeing a clear reason in the post.
Still, I’ve seen several cases where bulk data processing like this has become vastly more efficient (from hours to minutes) by using a trie and interning common repeated substrings, re-using the same stack/statically allocated buffers, or otherwise eliminating a ton of redundant work. If anything, their timings seem suspicious to me (I’d expect the cumulative time to drop significantly), but I’m not familiar enough with the C# ecosystem to try to reproduce their results.
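As a hedged illustration of the interning idea (in TypeScript rather than the article’s C#, with made-up column names): keep one canonical copy of each distinct value so the rows you retain share a handful of string objects instead of holding a fresh one per row.

```typescript
// Naive string interner: hands back a canonical instance per distinct value,
// so retained rows for low-cardinality columns (country codes, statuses, ...)
// all point at the same few strings instead of one string per row.
class Interner {
  private pool = new Map<string, string>();

  intern(value: string): string {
    const existing = this.pool.get(value);
    if (existing !== undefined) return existing;
    this.pool.set(value, value);
    return value;
  }
}

const countries = new Interner();

// Hypothetical CSV-ish row parser: the transient split() slices still allocate,
// but everything kept around afterwards references the interned copies.
function parseLine(line: string): { country: string; amount: number } {
  const [country, amount] = line.split(",");
  return { country: countries.intern(country), amount: Number(amount) };
}
```

Tries and reusable scratch buffers attack the same redundancy from other angles; which one pays off depends on how repetitive the data actually is.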
From what I understood, the 7.5GB of memory is total allocations, not the amount of memory held resident, that was around 15 megs. I’m not sure why the memory usage requires running outside business hours.
EDIT: Whoops, I see you responded to a similar comment that showed up below when I was reading this.
The article doesn’t explain why they care, but many garbage collectors make it hard to hit a latency target consistently (i.e. while the GC is running its longest critical section). Also, garbage collection is (usually better optimized for short-lived allocations than malloc, but still) somewhat expensive, and re-using memory makes caches happier.
Of course, there’s a limit to how much optimization one needs for a CSV-like file in the hundreds of MBs…
Maybe their machines don’t have 8gb of free memory lying around.
As shown in the table, they don’t use anywhere close to 8gb of memory at a time. This seems like a case that .NET is already very good at handling at a baseline level.