What are the odds that VS Code doesn’t respect telemetry settings in nightly builds? I mean, the entire point of nightly builds is for testing, of which telemetry is potentially of particular value. If I wanted to minimise my exposure to telemetry, I wouldn’t be running nightly test builds in the first place, precisely because I’d expect it might not respect the normal telemetry settings (among several other reasons).
EDIT: I vaguely recall a similar post re: someone being outraged at Mozilla collecting telemetry from their Nightly builds which are published precisely for testing.
If the setting exists, it should be respected. I am willing to test software and submit bug reports when I observe breakage, but I am not willing to send back an unfiltered data stream.
Just as important as Firefox being a distinct product is that it uses its own rendering engine. There are only three major browser rendering engines: Trident/WhateverTheyCallTheEdgeEngine (Microsoft), WebKit/Blink (Chrome, Chromium, Safari, Opera, most one-off browsers like Web in GNOME, WebPositive on Haiku, etc), and Gecko/Servo (Firefox).
It’s not enough to have a million different browsers if they’re all using the same rendering engine. We need variety in implementations on that front too…
Very good point; I spoke of this issue with a colleague a couple of weeks ago, and I asked: “what happens if Microsoft decides to get out of the web browser market and Mozilla collapses?” Then, for all intents and purposes, Google would literally control the web. Now, I don’t know how likely it is that Microsoft would stop its Edge effort or that Mozilla would go under—probably not very likely in the short term. Still, it seems to me that the complexity of the modern web (HTML 5, CSS, JavaScript with a state-of-the-art JIT compiler, DRM, etc.) makes it unlikely that we’ll see another web engine—much less a free web engine—that can compete with the current trio, and practically impossible to see enough new web engines to actually create competition.
Agreed, the only way web standards can realistically be called standards is if there’s a variety of implementations, and a variety of vendors with enough influence to shape those standards.
EdgeHTML. No really, that’s the name of the rendering engine. Microsoft continues to amaze with their horrendous naming :)
Older versions of IE could be called “FML”, which is exactly how you feel when tracking those IE only bugs.
I am surprised there are no major websites down today. This seems pretty trivial to abuse, and has a good chance of working with other certificate authorities.
Symantec should just retire from the certificate business at this point. With a little coordination between the major browser vendors (Apple, Google, Microsoft, and Mozilla), this would be trivially easy.
It is the right thing to do.
I’m interested to know why Let’s Encrypt gets the exception? Its validation process isn’t any better than all the other major CAs as far as I’m aware? DV validation is essentially the most minimal validation that any CA does…
Just because it is well known, has CT, has the EFF involved, and is free.
I could totally see an additional DV CA operated from the EU that follows the same idea as LE with the same kind of backing. Some redundancy, distribution, and a different jurisdiction seem like a good idea.
I use a custom bash script. What is the advantage of Stow? The advantage of my solution is that it does not require installing Stow. I use this on machines where I have no root/sudo access.
I used a custom bash script for a long time, but eventually switched to Stow and a custom Makefile. For me, the advantage is mostly organizational. It’s nice to install a subset of configs, have different configs depending on operating system or environment, an easy ability to remove the configs, etc. The flexibility of the --no-folding option is also nice depending on whether or not everything in a directory is managed via Git.
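For anyone who hasn’t used it, Stow’s whole job is managing symlinks; here’s a minimal sketch of what `stow vim` effectively does (the paths and the “vim” package here are made up for illustration):

```shell
# Sketch of what `stow vim` effectively does (paths and the "vim"
# package are hypothetical). Stow symlinks a package's contents into
# the target directory, which defaults to the parent of the stow dir.
mkdir -p /tmp/demo/dotfiles/vim
echo 'set nocompatible' > /tmp/demo/dotfiles/vim/.vimrc

# Running `cd /tmp/demo/dotfiles && stow vim` would create this link:
ln -s dotfiles/vim/.vimrc /tmp/demo/.vimrc

readlink /tmp/demo/.vimrc   # prints: dotfiles/vim/.vimrc
```

Stow’s value over doing this by hand is that it tracks, updates, and can cleanly remove (`stow -D`) whole trees of these links.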
There are a lot of dotfile management frameworks out there, but Stow is a very simple tool that’s ubiquitously available as a package, but also trivial to bootstrap into a local directory if you don’t have admin rights on a machine.
$ mkdir -p ~/.local/src
$ cd ~/.local/src/
$ curl https://ftp.gnu.org/gnu/stow/stow-latest.tar.gz | tar xvf -
$ cd stow-2.2.2/
$ ./configure --prefix=/home/elasticdog/.local
$ make
$ make install
As it’s just a Perl script with few dependencies, you can also just install it into your dotfiles. You can then clone your dotfiles and literally stow stow (i.e. cd dotfiles && stow/bin/stow stow). Now you have stow symlinked into ~/bin and you’re good to go :)
I also just use a script. It seems so trivial to me, a new piece of software shouldn’t be necessary.
One feature of Stow that I find useful is the ability to undo the symlinks. I use that feature to install two versions of a program in /usr/local/stow and easily switch between the two.
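That switch is cheap because it boils down to re-pointing symlinks; here’s a sketch of the idea using plain `ln` (this is what stow automates for you; the program name and paths are made up):

```shell
# What `stow -D prog-1.0 && stow prog-2.0` boils down to: re-pointing
# the symlinks in the target dir. Program name and paths are made up.
mkdir -p /tmp/sw/stow/prog-1.0/bin /tmp/sw/stow/prog-2.0/bin /tmp/sw/bin
echo v1 > /tmp/sw/stow/prog-1.0/bin/prog
echo v2 > /tmp/sw/stow/prog-2.0/bin/prog

# "Install" version 1.0 (stow would create this link for you):
ln -s ../stow/prog-1.0/bin/prog /tmp/sw/bin/prog

# Switch to 2.0: replace the old link with one pointing at the new tree.
ln -sf ../stow/prog-2.0/bin/prog /tmp/sw/bin/prog
cat /tmp/sw/bin/prog   # prints: v2
```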
For the Windows admins (hello? Bueller?):
For all of the above except Message Analyzer you really want to configure symbols. The Debugging Tools effectively need them to be useful in most scenarios, while the WPT tools and many Sysinternals tools are much more useful with them (e.g. viewing a call stack for a thread in Process Explorer).
Nice article. In a similar vein and possibly helpful to others, I made this a while ago while dealing with bash quirks: bash-script-template. The idea was to have a sane starting point for creating more complex scripts that needed to be somewhat durable, with sane scripting defaults.
It provides several functions I’d often find myself effectively writing whenever I started a new script, hence the creation of a template I can simply source, or edit if I want it to be entirely self-contained. Might be useful to others, and constructive criticism is always welcome!
This is way too much for what I normally write, but a lot of it looks useful! I’ll definitely be picking and choosing pieces from this for my scripts. Thanks!
If you’re writing anything more than a couple of lines, take ten minutes to set up the “shellcheck” linter. It’ll tell you what could go wrong, line by line, and how to fix it.
I’ve found it elevates the reliability of my shell scripts to be comparable to my ruby.
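Seconded. To give one concrete example of what it catches, shellcheck’s most common warning (SC2086) is about unquoted expansions; a small sketch (the file name is made up):

```shell
# The classic bug shellcheck catches (SC2086): an unquoted variable
# expansion is split on whitespace. File name here is made up.
f="my file.txt"
touch "$f"

# rm $f          # shellcheck warns: this tries to remove "my" and "file.txt"
rm -- "$f"       # quoted: removes exactly the one intended file

[ ! -e "my file.txt" ] && echo "removed"   # prints: removed
```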
It’s such an incredible shame that the video is littered with poorly drawn ejaculated penises (yes, really) and using such clever usernames as “cuntballs” (again, yes, really). The presenter’s voice gives me the definite impression of an adult male and not a 12 year old.
It’s a shame as the actual content is both interesting as a trip down memory lane and a testament to the truly impressive effort Microsoft puts in to backwards compatibility. The fact it’s possible to run 16-bit Windows 2.0 apps (1987!) on 32-bit Windows 10 is truly remarkable. But it’s hard to have a mature discussion or casually show it to anyone with the pervasive immaturity throughout.
I personally found it funny. It’s easy to criticise other people’s work, but he is certainly creative.
For those of you marking this comment as ‘incorrect’ or as a ‘troll’, I would remind you that an unpopular opinion does not warrant a down vote in this community.
I do enjoy the “increased error rates” terminology. It’s the IT equivalent of political doublespeak. It’s not an outage, we’re just experiencing increased error rates of around 100%! The AWS status page might as well just be “ALL IS FINE MOVE ALONG” in 72 pt Comic Sans MS.
Sure, but what if you don’t need something as complex as s3? If I just want to serve static files I can probably manage that just fine - probably better than AWS can manage something as complex as s3.
Internally I’m sure S3 is super complex, but none of that is exposed to me.
If you were using the AWS-recommended setup for a static site on s3 (that is, putting a CDN in front of it) then you likely didn’t notice the outage at all (for a few hours you couldn’t post new stuff but existing content is served out of your CDN).
Had a static site set up exactly this way on us-east-1 and it went down.
Was able to get a backup working on Firebase in about 30 minutes.
[Comment removed by author]
There were still other availability zones up for S3; most devs are just not interested enough in HA to use them.
Regions. All availability zones in east-1 were down.
For S3, it’s difficult and expensive (you pay again for each region you duplicate into, and the automatic duplication features aren’t up to symmetric multi-region setups) to go multi-region. For most other things, it’s either impossible or seriously kills the benefits of a cloud platform. (You don’t have to deal with renting remote server space or setting up a VPN; all the other technical issues with setting up a widely-distributed system remain your problem.) And finally, when Amazon goes down, your particular outage is probably not the most severe one your customers are facing.
So yeah, you’re not wrong, it would certainly be possible with good distributed design to have stayed up through this outage. It’s a hell of a lot more difficult and expensive than just flipping a switch, though.
It’s worth noting also that you can get better uptime even within a single region by not making that one region us-east-1. As far as I can tell it’s the oldest, most crowded, and least stable one out there.
They could just do good clustering on a reliable OS like VMS did in 80’s. Those clusters often had uptime that exceeded lifetime of current clouds.
Compared to all the piles of tech I see in cloud-style deployments that are constantly changing, of varying quality, and with varying docs?
Yes, “just” install this OS on a few boxes, follow the manual on setting up clustering, set up the networking side, and you’re good to go. Even consultants who can do it for you are available for reasonable fees. Old timers call such tech a “known quantity” where most surprises have been ironed out.
OpenVMS won’t even have x86-64 support until 2019. Not sure it’s a super great candidate for a modern business operating system unless you’re in it out of necessity. (http://vmssoftware.com/pdfs/VSI_Roadmap_20161205.pdf)
Remember I said “like VMS did in 80’s.” I’m not saying you have to use that specific product. I meant it more open-ended. You’d have to be OK with Itanium servers for VMS itself. There’s other clustering solutions out there that are similar. There’s potential for FOSS to clone some of them. And so on.
So I was around in the 80s. The clustering options were super proprietary, fragile, and rarely survived an upgrade. Although we had high reliability systems, we had exactly zero distributed, globally accessible, low latency, highly available, geographically disparate systems. And even fewer storage systems of that type. In fact, I don’t remember anything that could survive the loss of 5U of a rack, much less a whole rack, much less a cage, much less a datacenter. In fact, I could tell you stories about various hilarious attempts to make SCSI reliable that would make your hair stand on end.
The reason why you don’t see a proliferation of that type of system today, and in fact reach to find any example, is that it turned out to be an evolutionary dead end. The pattern of using many commodity components and tolerating failure turned out to be far more successful than using a smaller number of highly engineered components that armor themselves against failure. It turns out that in the presence of rapidly mutating state, and arbitrary threats, there are no tradeoff-free solutions.
Appreciate the perspective. Many others told me something different, with their stuff being quite resilient, and VMS admins saying it the most. As far as the 5U point goes, one bank lost a whole site of VMS servers in the WTC with failover happening and a claimed loss of no transactions. Who knows what the story is for remote filesystems.
Is there a modern OS that can provide the strongly reliable clustering you describe at a relatively comparable cost to S3? Honest question.
That’s why standard practice in highly-available clusters is redundant networking links over different providers. Many used leased lines, too.
Uh…yes? Think universities, engineering companies (real ones, not YCStartupprgonnabegoneintwomonths.ly), research labs…
Not to be too much of a tinfoil or anything, but if the relevant 3-letter agencies knew about this before disclosure, it must have been an absolute gold-mine. Tinfoil aside, I’d like to see a list of affected sites, even if limited to a Top 100, so that a judgement call can be made by users as to if they want to update their credentials.
How confident are you that you don’t use any services that are backed by CloudFlare, were afflicted, and that sensitive data relevant to your usage wasn’t exposed? I’m not.
Here’s a preliminary list: https://github.com/pirate/sites-using-cloudflare/blob/master/README.md
This list needs a lot more upvotes.
The most frustrating part about this entire debacle is that the breadth and depth of (necessary) modern caching is beyond the control or even knowledge of the average Joe on the Internet. What I mean by that is that as someone who (for example) is a customer of DigitalOcean, my credentials and activities there have been compromised by a service I wasn’t even aware was a factor in their operation.
I’ve reset my password there, but what other data is already out there from my sessions since the fall?
And now we face the arduous task of resetting however many passwords we have on however many services. Some of which won’t even be affected.
To be clear, I’m not whining with malice at Cloudflare; shit happens. I’m just reeling at the scale and reach of this.
Does it really still count as tinfoil to think that 3-letter agencies are involved when there is clear evidence this is something they’re highly involved/interested in? I don’t think you’re being tinfoil as much as just making a good point.
Most TLAs are interested in specific targets. They may collect all the traffic for later analysis, but still after targeted individuals. A random collection of dating site messages and Fitbit credentials doesn’t help them. Probably too fragmentary to be useful.
How would you use this data? If it were easy to find your arch nemesis in the dump, you could heap shit on them. But if all you’ve got is some uber trip that a random dude in Kansas City took, I guess you could grief him, but to what end?
They may collect all the traffic for later analysis, but still after targeted individuals.
Hmm, the idea that they only go after specific targets doesn’t sound right. The XKeyScore stuff showed them running queries against aggregated data, such as keywords entered into search engines.
That’s a reasonable point. Also trying to find targets. Again though, not sure how much a leak like this helps them do that.
I recognise this could be considered a duplicate of this submission (sorry calvin!) but have posted it regardless as I think the blog post is both interesting and adds more context. It’s not seemingly linked to from the repository itself.
It’s so disappointing seeing these bugs, as at least for me it has an enormous impact on my confidence in open-source encryption software to safeguard my data. Bruce Schneier remarked way back in 1999 that “In the cryptography world, we consider open source necessary for good security.” He’s right, but at the same time, this bug, like many before it, is amateur stuff. I mean, look at this code excerpt:
// paranoid default setup mode
//write (fd[1], "y\n", 2);
//write (fd[1], "y\n", 2);
write (fd[1], "p\n", 2);
write (fd[1], password, strlen (password));
write (fd[1], "\n", 1);
Coupled with the earlier execve call to an interactive command-line utility there is just so much wrong with this. I could be wrong, but I just don’t see these sort of fundamental, basic mistakes being made in major proprietary cryptography software? I’m not sure what the conclusion here is, but it seems there’s a real lack of serious code review, adherence to well defined coding standards, periodic code audits, etc… to avoid this stuff from ever making it into a stable release. OpenBSD is the only open-source security project I can think of that gets this stuff right. They’ll still make mistakes, but I at least have confidence they’re working hard to avoid them.
Likely controversially, I’d argue there’s a broader issue of applying the Unix philosophy of individual utilities that perform a few well-defined functions, and often shell scripts binding them together, to security software. It’s inherently brittle, with each additional independent binary or script making the overall system more fragile. A monolithic approach to ensure the system has a minimum of external dependencies, relies exclusively on stable, ideally low-level APIs that can be reasonably trusted to be correctly implemented, makes sense. It’s just too damn hard to manage a web of disparate utilities, scripts and dependencies with a guarantee they will work together exactly as intended for tasks which are security critical.
I just don’t see these sort of fundamental, basic mistakes being made in major proprietary cryptography software
You are very naive to think this, people are people, no matter whether you can see the implementation, or who paid for it.
It’s not about the people, I realise people are fallible. It’s about processes to combat that inherent fallibility. My point is I wonder if much of the open-source community w.r.t. security software is behind in implementing processes, some of which I referenced, to reduce or hopefully eliminate these sorts of mistakes. Irrespective of your opinion on proprietary software, it’s indisputable that companies like Microsoft have implemented processes designed to improve the security of their software and they’re often quite rigidly enforced. See e.g. the Security Development Lifecycle.
Are there parallels in prominent open-source security software? Open question, because I legitimately don’t know, but if there are, they often don’t seem to work. It’s clearly possible to get it right though, with OpenBSD the gold standard for high quality, high security, open-source software.
Put bluntly, assuming I’m not protecting against state-level actors, I’d have more confidence in something like BitLocker encrypting my data properly than most open-source security software. And that’s sad, because I’d much rather use open-source cryptography for the numerous obvious reasons. But taking security seriously means that bugs which result in my data being encrypted with the passphrase “p” comprehensively destroy my confidence. It’s all well and good that the software is open-source, but if nobody is reviewing that the implementation is correct, then one of the most convincing points in favour of using open-source cryptography is pretty much eliminated.
Comparing the bugs you can see with the bugs you can’t will only ever have one outcome. You say that processes don’t seem to work for OSS, but what makes you think that Microsoft’s process is working out any better for them?
I’d trust dm-crypt/cryptsetup-luks (or whatever the current mainstream option is) over BitLocker any day. The popular OSS crypto does get in-depth review (at least going by the maintainers I’ve talked to). This “cryptkeeper” is a tool I’d never heard of that sounds like it’s virtually unmaintained. I would say it’s worth checking that security software in particular, even if OSS, comes from a reputable source, and I do think that Ubuntu and Debian in particular need to get a lot more serious about auditing the software they put their name on, especially security software.
I’m less confident that there’s greener grass.
I just don’t see these sort of fundamental, basic mistakes being made in major proprietary cryptography software?
I don’t know for sure, but my guess depends on what type of software you mean. If you mean expensive enterprise stuff that went through security audits, maybe some confidence. I mean there is really shoddy enterprise software too, but I’d be willing to venture some cautious optimism that it’s possible to purchase a not-totally-useless encryption solution.
But in the space cryptkeeper is operating in, i.e. desktop software targeted at regular end users, I have less confidence. The quality of proprietary software in this space is… bad. There is some extremely bad code out there hiding inside proprietary backup tools, anti-virus suites, etc. The fact that most proprietary disk-encryption software is part of these same suites with a half-dozen utility tools bundled together, most of which are just bad all around, makes me even more skeptical. I would really not bet any money that something like Symantec Endpoint Encryption or Sophos SafeGuard isn’t riddled with holes.
As someone who has done security audits on some of this “expensive enterprise stuff”, I think you should have about as much faith in it as you do in the “desktop software”. There is a massively disturbing lack of good cryptography in them, and even when problems are found by auditors they tend to be written off as “acceptable risk”. I’ve been told that crypto was unbreakable and that crypto findings didn’t matter because the attacker would need access to the machine, despite me showing things like privilege escalation about 10 minutes earlier in the same debrief. In fact, the more expensive the product, the less likely they seem to be to adopt non-FIPS-140 crypto, and thus are just fine with 3DES. At least that is my experience.
3DES is fine isn’t it? A bit slower than AES and rarely hardware-accelerated, but it’s not like it’s known-broken or anything.
Depends on how you define fine I suppose. 3DES has a couple of key type weakening attacks, padding issues, relatively low key sizes (I’ve seen 56-bit 3DES more than you could imagine), all the issues that come with CBC, and most importantly it isn’t Authenticated Encryption (AE/AEAD). All of these essentially make 3DES the weakest common denominator, and that’s not counting the speed issues like you mentioned, which are pretty critical. When dealing with things like FIPS mode devices it’s pretty important to future proof yourself, when NIST says that you won’t be in compliance for supporting 3DES in 2030 that means that if you don’t add or remove approved ciphers and 2030 rolls by you could very easily lose your certification, which is absurdly expensive and can crush an entire business.
So it’s definitely a genuine issue, but it’s in my view nowhere near as big a deal as the author or the comments on many other sites covering it are making it out to be. Definitely a long way off from an all-caps “SEVERE”; I reserve those for unauthenticated RCE and wormable exploits, or similar mass-carnage flaws (e.g. Heartbleed).
The main issue he points out is that you can press Shift + F10 and get a Command Prompt during the install/upgrade process, which has been true dating back to at least 2006 with Windows Vista. So not exactly new, and it’s not a “CRAZY bug”, it’s by design, as it’s frequently incredibly useful. When you’re running in a minimal Windows environment (WinPE), if something goes wrong in the early setup, it’s going to be one of the few and possibly only options available to you to figure out what’s going wrong. It’s effectively the Windows equivalent of swapping to a different tty during a Linux or BSD setup. More importantly, it can be disabled, and this isn’t hard to do for enterprise deployment scenarios using tools like SCCM (System Center Configuration Manager). In fact, from memory the checkbox that enables it in SCCM is marked “Test mode only”. But if you’re in an unmanaged environment, you probably don’t want to have to hack your setup media to be able to access basic troubleshooting tools.
As for the BitLocker point, this is again not a bug but documented behaviour. The issue is that BitLocker doesn’t just encrypt your system and ask for a password at boot, but verifies the execution chain and boot environment up to the point where it prompts for that password. Upgrading the OS means that many of those binaries that are part of that trust chain are going to change. Disabling BitLocker protectors keeps everything encrypted but with a null key, so the binaries can be updated without requiring a BitLocker recovery, then the protectors can be re-enabled once the upgrade is complete. It definitely should be improved, as it feels like a bit of a kludge, but it’s a tricky problem to solve elegantly, particularly with respect to unattended upgrades. It doesn’t cause me to lose sleep, as the only attack scenario this is useful in again requires physical access to a system during an OS upgrade, and while I’d like this process handled better, it’s pretty low on my threat assessment.
In short, both of these issues could probably do with more awareness, but they’re really only pertinent I think to enterprise scenarios, and the people who know what they’re doing are aware of both of these issues, and mitigate them if it’s worth the effort.
OpenBSD is also unfree per RMS. There is a URL in a ports tree makefile, and if you visit it, you will be presented with the option to install unfree software.
Not a fun read, and as the author acknowledges, he’s rightly ashamed of his part in the development of the site. That said, I applaud him for taking the time to write about it. It takes courage to publicly document your professional failings, and much more so when they venture into the deeply unethical. Hopefully, other developers, young and old, will read tales like this and apply a greater deal of introspection to such questionable tasks than they may otherwise have done.
So glad I started my Sunday morning by reading this. I often feel I work in an industry that’s particularly brutal w.r.t. its treatment of mistakes or even those simply lacking in experience. It’s nice to see a post highlighting an incredibly reasonable and productive response, instead of the abuse and mockery I often see.
Semi-related to this: In the limited amount I’ve been exposed to containers at my work, the suggested practice is to run everything in the container as root because you’re just going to run one thing in your container anyways, so giving it access to everything is just fine.
Does anyone with more experience have an opinion on this?
Erk. There have been several vulnerabilities which have meant that if you have root inside a container, you can break out of it and get root on the host. All of them have been patched, but I wouldn’t mind betting there will be more in time. (My guess: the next one will involve user namespaces in some way.)
In general I recommend following the principle of least privilege.
My prediction was wrong :) Here’s the latest one I’ve become aware of, and it doesn’t even require root: https://lobste.rs/s/kg6yf1/dirty_cow_cve_2016_5195_docker_container
root usually has access to more parts of the kernel, no? Increasing attack surface.
There are also a number of scenarios in which a user outside a jail and root inside a jail can collude to become root outside.
If your applications do not need to write to any files, then they do not need write access within the container, even if root does (to say, update files, etc). Dropping root means that remote execution may degrade but might not deny service.
If your application does not need to open new network connections, then it should not have the ability to do so, even if root has the ability. iptables -m owner can prevent a local vulnerability from spreading.
If your application does not need to execute other programs, then you can drop that ability with ulimit (capping the process count). Keeping root means you can’t. This eliminates entire classes of problems.
All of my programs run with very low privileges because while it is bad for my customers that service is denied, it is worse if someone can spin up a bunch of ec2 boxes with my credit card to run worthless cpu miners putting me in the poorhouse.
The security ramifications aside of doing so (which are in my view serious), it’s just plain bad practice. Bugs in code can become far more serious due to having unrestricted access within the container (e.g. file system manipulation bugs), while similarly, other issues may be masked by having superuser access (e.g. insufficient permissions). If you’re designing your software to work without root privileges, which you almost always should, then why would you run it as root in the container just because you can? It should work fine without such privileges, and there are potential security and stability benefits in doing so.
Some may argue that because you’re running in a container that such bugs can be recovered from faster by just redeploying the container instance, and they’d be right, but that doesn’t make it a good mindset. That’s just using containers as a way to mask deeper problems with the quality of software.
I’m glad you pointed out that there are classes of attacks made more difficult by e.g. using an app user without write access to the FS.
RE the rest of the post - I’m not sure “It’s bad practice / poor quality” is a super helpful answer when the question boils down to “Why is it bad practice”.
Juggling users and permissions on servers is a substantial time sink that doesn’t immediately improve my users’ lives, and it’s not unreasonable to ask ‘why should I do it when things work fine anyway’.
Thank you for the responses @0, @tedu, @geocar, @danielrheath, and @ralish. What you said matches my suspicion and it’s nice to hear some confirmation.
If you believe your container infrastructure is more secure than linux user/root then that makes sense. Personally I don’t consider either of them reliable enough to constitute a security boundary; I would do least-privilege at the machine level (and within my application runtime) but assume that control of a running user process == root on that machine and design my security model around that. At that point using user accounts becomes something to do only if it’s super cheap.
Incredibly disappointing. Irrespective of the consequences they suffer, and I think Mozilla’s proposal of a year-long ban on new certificates is reasonably punitive without being disproportionate, I’ll be moving all the networks I manage off of StartCom. Which is a shame, as I did like their service, but integrity and security come first, and they’ve failed.
On a related note, across my half-dozen StartCom accounts, I have not received any notification of a change of business ownership at any point. That may not be legally required, but I’d certainly suggest it’s ethically important that customers are apprised of such a change, particularly given the business operates in the security industry.
I’ll be moving all the networks I manage off of StartCom
I’m surprised anyone is still, or even was, using their service. I’m talking specifically here about the “free” certificates. If you had some other kind of business relationship with them, that may have been different of course. They sucked long before WoSign came into the picture, as they charged you for revoking certificates (in case of a breach) and were lagging behind in SHA256 adoption. Taking this into consideration, they were more expensive than the competition. I’d call that pretty irresponsible, both of StartCom and of anyone using their “free” service. So, the signs were already there for a long time…
You aren’t alone in that transition. As a university student, I was using StartCom because it was fast, easy, and free to use. By chance, I got curious about switching over to Let’s Encrypt a month or so ago, before this whole fiasco involving WoSign/StartCom emerged, so I have been lucky to avoid my website getting any negative certificate reputation.
I would imagine that if Mozilla does take significant action, there will emerge a unique website to alert webmasters about the need to replace their certificates.
A problem with stow is that, very often, I only want the files to be symlinked, with the directories created rather than symlinked. Otherwise, too many applications have a habit of writing to temporary and log files within the config directories, and these files appear inside the dotfile repository, which I do not want.
For example, my configuration file for foo is .foo/config. Unfortunately, foo will also write a file .foo/history. If I put foo/.foo/config in my dotfile repo and stow it, ~/.foo is made into a symlink to the directory foo/.foo. So the file ~/.foo/history actually ends up at foo/.foo/history, inside the repository.
Stow unfortunately does not support making directories (it creates a directory for real only when the directory is shared with another package). I currently get by with some scripting on top of stow, but it would have been nice if this could have been implemented.
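For illustration, that sort of scripting on top of stow can be sketched as a short Python function that mirrors a package’s tree with real directories and symlinks only the files (the foo layout and the throwaway “repo”/“home” paths here are hypothetical, just for the demo):

```python
import os
import tempfile

def stow_files_only(package_dir, target_dir):
    """Mirror package_dir under target_dir: real directories, symlinked files."""
    for root, dirs, files in os.walk(package_dir):
        rel = os.path.relpath(root, package_dir)
        dest = target_dir if rel == "." else os.path.join(target_dir, rel)
        os.makedirs(dest, exist_ok=True)      # directories are created, never linked
        for name in files:
            link = os.path.join(dest, name)
            if not os.path.lexists(link):     # don't clobber existing entries
                os.symlink(os.path.abspath(os.path.join(root, name)), link)

# Demo with a throwaway "repo" and "home": repo contains .foo/config
repo = tempfile.mkdtemp()
home = tempfile.mkdtemp()
os.makedirs(os.path.join(repo, ".foo"))
open(os.path.join(repo, ".foo", "config"), "w").close()

stow_files_only(repo, home)
print(os.path.islink(os.path.join(home, ".foo")))            # False: real directory
print(os.path.islink(os.path.join(home, ".foo", "config")))  # True: symlinked file
```

With this, ~/.foo is a real directory, so a file like .foo/history written by the application lands in the home directory rather than inside the dotfile repo.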
Unless I’m fundamentally misunderstanding something, shouldn’t stow’s “--no-folding” argument do what you want?
It seems it does. My version of stow (1.3.3) doesn’t seem to have it, hence I missed it. Thank you for pointing it out.
I also use .gitignore to deal with this, but I would prefer to have the directory structure copied and the files symlinked. Still, it works reasonably well for personal use.
If you do this with your .emacs.d directory, the .gitignore can get extensive.
Kinda late to the UNIX-bashing bandwagon :)
Also, Windows owes more of its legacy to VMS.
It does, but users don’t use any of the VMSy goodness in Windows: To them it’s just another shitty UNIX clone, with everything being a file or a program (which is also a file). I think that’s the point.
Programmers rarely even use the VMSy goodness, especially if they also want their stuff to work on Mac. They treat Windows as a kind of retarded UNIX cousin (which is a shame because the API is better; IOCP et al)
Sysadmins often struggle with Windows because of all the things underneath that aren’t files.
Message/Object operating systems are interesting, but for the most part (OS/2, BeOS, QNX) they degraded into this “everything is a file” nonsense…
Until they got rid of the shared filesystem: iOS finally required messaging for applications to communicate on their own, and while it’s been rocky, it’s starting to paint a picture for the next generation, who will finally make an operating system without files.
If we’re talking user experience, it’s more a CP/M clone than anything. Generations later, Windows still smells of COMMAND.COM.
yes, the bowels are VMS, the visible stuff going out is CP/M
Bowels is a good metaphor. There’s good stuff in Windows, but you’ve got to put on a shoulder length glove and grab a vat of crisco before you can find any of it.
I think you’re being a little bit harsh. End-users definitely don’t grok the VMSy goodness; I agree. And maybe the majority of developers don’t, either (though I doubt the majority of Linux devs grok journald v. syslog, really understand how to use /proc, grok Linux namespaces, etc.). But I’ve worked with enough Windows shops to promise you that a reasonable number of Windows developers do get the difference.
That said, I have a half-finished book from a couple years ago, tentatively called Windows Is Not Linux, which dove into a lot of the “okay, I know you want to do $x because that’s how you did it on Linux, and doing $x on Windows stinks, so you think Windows stinks, but let me walk you through $y and explain to you why it’s at least as good as the Linux way even though it’s different,” specifically because I got fed up with devs saying Windows was awful when they didn’t get how to use it. Things in that bucket included not remoting in to do sysadmin work (use WMI/WinRM), not doing raw text munging unless you actually have to (COM from VBScript/PowerShell are your friends), adapting to the UAC model v. the sudo model, etc. The Windows way can actually be very nice, but untraining habits is indeed hard.
I don’t disagree with any of that (except maybe that I’m being harsh), but if you parse what I’m saying as “Windows is awful” then it’s because my indelicate tone has been read into instead of my words.
The point of the article is that those differences are superficial, and mean so very little to the mental model of use and implementation as to make no difference: IOCP is just threads and epoll, and epoll is just IOCP and fifos. Yes, IOCP is better, but I desperately want to see something new in how I use an operating system.
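The “epoll is just IOCP and fifos” equivalence being gestured at is easy to see in code. As a minimal sketch, Python’s selectors module wraps epoll/kqueue behind one interface, and however the kernel dresses up event notification, the program still just waits, then reads and writes (the socket pair here stands in for any two endpoints):

```python
import selectors
import socket

a, b = socket.socketpair()
sel = selectors.DefaultSelector()       # epoll on Linux, kqueue on BSD/macOS
sel.register(b, selectors.EVENT_READ)   # "tell me when b is readable"

a.send(b"ping")                         # make b readable
events = sel.select(timeout=5)          # wait for readiness
for key, _mask in events:
    data = key.fileobj.recv(4)          # then it's an ordinary read
    print(data)                         # b'ping'

sel.close(); a.close(); b.close()
```

Whether the kernel hands you readiness (epoll) or completions (IOCP), the shape of the program barely changes, which is the sameness being complained about.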
I’ve been doing things roughly the same way for nearly four decades, despite the fact that I’ve done Microsoft/IBM for a decade, Linux since Slackware 1.1 (Unix since tapes of SCO), Common Lisp (of all things) for a decade, and OSX for nearly that long. They’re all the same, and that point is painfully clear to anyone who has actually used these things at a high level: I edit files, I copy files, I run programs. Huzzah.
But it’s also obvious to me, as someone who has gone into the bowels of these systems: I wrote winback, which was for a long time the only tool for doing online Windows backups of standalone Exchange servers and domain controllers; I’m the author of (perhaps) the fastest Linux webserver; I wrote ml, a Linux emulator for OSX; I worked on ECL, principally adding CL exceptions to streams and the Slime implementation. And so on.
So: I understand what you mean when you say Windows is not Linux, but I also understand what the author means when he says they’re the same.
That actually makes a ton of sense. Can I ask what would qualify as meaningfully different for you? Oberon, maybe? Or a version of Windows where WinRT was front-and-center from the kernel level upwards?
I didn’t use the term “meaningfully different”, so I might be interpreting your question too broadly.
When I used VMS, I never “made a backup” before I changed a file. That’s really quite powerful.
The Canon Cat had “pages” you would scroll through. Like other forth environments, if you named any of your blocks/documents it was so you could search [leap] for them, not because you had hierarchy.
I also think containers are very interesting. The encapsulation of the application seems to massively change the way we use them. Like the iOS example, they don’t seem to need “files” since the files live inside the container/app. This poses some risk for data portability. There are other problems.
I never used Oberon or WinRT enough to feel as comfortable commenting about them as I do about some of these other examples.
If it’s any motivation I would love to read this book.
Do you know of any books or posts I could read in the meantime? I’m very open to the idea that Windows is nice if you know which tools and mental models to use, but kind of by definition I’m not sure what to Google to find them :)
I’ve just been hesitant because I worked in management for two years after I started the book (meaning my information atrophied), and now I don’t work with Windows very much. So, unfortunately, I don’t immediately have a great suggestion for you. Yeah, you could read Windows Internals 6, which is what I did when I was working on the book, but that’s 2000+ pages, and most of it honestly isn’t relevant for a normal developer.
That said, if you’ve got specific questions, I’d love to hear them. Maybe there’s a tl;dr blog post hiding in them, where I could salvage some of my work without completing the entire book.
I, for one, would pay for your “Windows is not Linux” book. I’ve been developing for Windows for about 15 years, but I’m sure there are still things I could learn from such a book.
Most users don’t know anything about UNIX and can’t use it. On the UI side, pre-NT Windows was a Mac knockoff running atop MSDOS, which was itself based on a DOS they got from a third party. Microsoft even developed software for Apple in that era. Microsoft’s own users had previously learned the MSDOS menus and some commands. Then they had a nifty Apple-like UI running on MSDOS. Then Microsoft worked with IBM to make a new OS/2, with its own philosophy. Then Microsoft hired the core of the OpenVMS team, made a new kernel, and built a new GUI with wizard-based configuration of services, vs the command line, text, and pipes of UNIX.
So, historically, internally, layperson-facing, and administration, Windows is a totally different thing than UNIX. Hence, the difficulty moving Windows users to UNIX when it’s a terminal OS with X Windows vs some Windows-style stuff like Gnome or KDE.
You’re also overstating “everything is a file” by conflating OS’s that merely store programs in files with those, like UNIX or Plan 9, that use the file metaphor for about everything. It’s a false equivalence: from what I remember, you don’t get your running processes in Windows by reading the filesystem, since Windows doesn’t use that metaphor or API. It’s object-based, with API calls specific to different categories. Different philosophy.
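The contrast is easy to demonstrate. On a UNIX with procfs, the process table is literally readable through the filesystem, whereas Windows hands you process objects through dedicated calls (EnumProcesses and friends) rather than a file tree. A small sketch of the procfs side:

```python
import os

def list_pids_via_proc():
    # On procfs systems, each running process appears as a numbered
    # directory under /proc -- the "everything is a file" metaphor in action.
    if not os.path.isdir("/proc"):
        return None  # no procfs (e.g. Windows, macOS): a platform API call is needed instead
    return sorted(int(name) for name in os.listdir("/proc") if name.isdigit())

pids = list_pids_via_proc()
print(pids[:5] if pids else "no procfs here; use a platform API instead")
```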
Bitsavers has some internal emails from DEC at the time of David Cutler’s departure.
I have linked to a few of them.
David Cutler’s team at DECwest was working on Mica (an operating system) for PRISM (a RISC CPU architecture). PRISM was canceled in June of 1988. Cutler resigned in August of 1988 and 8 other DECwest alumni followed him at Microsoft.
I have my paper copy of The Unix Hater’s Handbook always close at hand (although I’m missing the barf bag, sad to say).
I always wanted to ask the author of The Unix Hater’s Handbook if he’s using Mac OS X
8~)
It was edited by Simson Garfinkel, who co-wrote Building Cocoa Applications: a step-by-step guide. Which was sort of a “port” of Nextstep Programming Step One: object-oriented applications
Or, in other words, “yes” :)
Add me to the list curious about what they ended up using. The hoaxers behind UNIX admitted they’ve been coding in Pascal on Macs. Maybe it’s what the rest were using if not Common LISP on Macs.
Beat me to it. The author is full of it when saying Windows is built on UNIX. Microsoft stealing, cloning, and improving OpenVMS into Windows NT is described here. This makes the Linux zealots’ parodies about a VMS desktop funnier, given that a VMS descendant destroyed Linux in the desktop market. So we have the VMS and UNIX family trees running in parallel, with the UNIX tree having more branches.
The author doesn’t say Windows is built on Unix.
“we are forced to choose from: Windows, Apple, Other (which I shall refer to as “Linux” despite it technically being more specific). All of these are built around the same foundational concepts, those of Unix.”
Says it’s built on the foundational concepts of UNIX. It’s built on a combo of DOS, OS/2, OpenVMS, and Microsoft concepts they called the NT kernel. The only thing UNIX-like was the networking stack they got from Spider Systems. They’ve since rewritten their networking stack from what I heard.
I don’t see any reason to disagree with that.
I don’t think that’s a helpful definition of “unix-like”.
It’s got files. Everything is a file. Windows might even be a better UNIX than Linux (since UNC)
Cutler might not have liked UNIX very much, but Windows NT ended up UNIX anyway because none of that VMS-goodness (Versions, types, streams, clusters) ended up in the hands of Users.
Windows is object-based. It does have files, which are just another kind of object. The files idea comes from MULTICS, which UNIX also copied in some ways; even the name was a play on it: UNICS. (I think Titan invented the access permissions.) The internal model, with its subsystems, was more like a microkernel design running OS emulators as processes. They did their own thing for most of the rest with the Win32 API and the registry. Again, not quite how a UNIX programming guide teaches you to do things. They got clustering later, too, with Microsoft and Oracle using the distributed-lock approach from OpenVMS.
Windows and UNIX are very different in their approach to architecture. They’re different in how a developer is expected to build individual apps and compose them. Windows wasn’t even developed on UNIX: they used OS/2 workstations for that. There’s no reason to say Windows is grounded in the UNIX philosophy. It’s a lie.
“Windows NT ended up UNIX anyway because none of that VMS-goodness (Versions, types, streams, clusters) ended up in the hands of Users.”
I don’t know what you’re saying here. Neither VMS nor Windows teams intended to do anything for UNIX users. They took their own path except for networking for obvious reasons. UNIX users actively resisted Microsoft tech, too. Especially BSD and Linux users that often hated them. They’d reflexively do the opposite of Microsoft except when making knockoffs of their key products like Office to get desktop users.
Consider what methods of that “object” a program like Microsoft Word must be calling besides “ReadFile” and “WriteFile”.
That the kernel supports more methods is completely pointless. Users don’t interact with it. Programmers avoid it. Sysadmins don’t understand it and get it wrong.
That is clear, and yet you’re insisting I’m wrong.
Except, that’s completely wrong.
I just started Word and dumped a summary of its open handles by object type:
Each of these types is a distinct kernel object with its own characteristics and semantics. And yes, you do create and interact with them from user-space. Some of those will be abstracted by lower-level APIs, but many are directly created and managed by the application. You’ll note the number of open “files” is a very small minority of the total number of open handles.
Simple examples of non-file object types commonly manipulated from user-land include Mutants (CreateMutex) and Semaphores (CreateSemaphore). Perhaps the most prominent example is manipulating the Windows Registry; this entails opening “Key” objects, which per above are entirely distinct from regular files. See the MSDN Registry Functions reference.
None of these objects can exist on a disk; they cannot persist beyond shutdown, and do not have any representation beyond their instantaneous in-memory instance. When someone wants an “EtwRegistration” they’re creating it again and again.
Did you even read the article? Or are you trolling?
Key objects do typically exist on disk. Albeit the underlying datastore for the Registry is a series of files, you never directly manipulate those files. In the same sense you may ask for C:\whatever.txt, you may ask for HKLM:\whatever. We need to somehow isolate the different persisted data streams, and that isolation mechanism is a file. That doesn’t mean you have to directly manipulate those files if the operating system provides higher-level abstractions. What exactly are you after?
From the article:
The Windows Registry, which is a core part of the operating system, is completely counter to this. It’s a bunch of large binary files, precisely because Microsoft recognised storing all that configuration data in plain text files would be completely impractical. So you don’t open a text file and write to it, you open a Registry key, and store data in it using one of many predefined data types (REG_DWORD, etc.).
It sounds like you’re not interested in a constructive and respectful dialogue. If you are, you should work on your approach.
Just go read the article.
It’s about whether basing our entire interactions with a computer on a specific reduction of verbs (read and write) is really exploring what the operating system can do for us.
That is a very interesting subject to me.
Some idiot ran with the idea that the article says Windows is basically “built on Unix”, then back-pedalled to it being about whether Windows is based on the same “foundational” concepts, then chose to narrowly and uniquely interpret “foundational” in a very different way than the article does.
Yes, windows has domains and registries and lots of directory services, but they all have the exact same “file” semantics.
But now you’re responding to this strange interpretation of “foundational” because you didn’t read the article either. Or you’re a troll. I’m not sure which yet.
Read the article. It’s not well written but it’s a very interesting idea.
Why do you bring this up in response to whether Windows is basically the same as Unix? Unix has lots of different kernel “types” all backed by “handles”. Some operations and semantics are shared by handles of different types, but some are distinct.
I don’t understand why you think this is important at all.
Do you often jump into the middle of a conversation with “Except, that’s completely wrong?”
Or are you only an asshole on the Internet?
I’m not in the habit of calling people “asshole” anywhere, Internet or otherwise. You’d honestly be more persuasive if you just made your points without the nasty attacks. I’ll leave it at that.
Them being what? Is the BSD socket API really the ultimate networking abstraction?
The TCP/IP protocols shipped as part of a UNIX, and AT&T gave UNIX away practically for free, so they spread together, with the early applications being built on UNIX. Anyone reusing the protocols or the code inherits some of what the UNIX folks were doing. The BSD stacks were also the most mature networking stacks for that reason, which is why re-using them was popular among proprietary vendors, on top of the permissive licensing.
Edit: Tried to Google you a source talking about this. I found one that mentions it.
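For what it’s worth, the API in question has barely changed shape since 4.2BSD: socket, bind, listen, accept, and then the familiar read/write-style verbs. A loopback echo sketch in Python, whose socket module mirrors the C API almost name for name:

```python
import socket
import threading

def echo_once(server):
    conn, _ = server.accept()            # accept()
    conn.sendall(conn.recv(64))          # recv()/send(): the read/write verbs
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # socket()
server.bind(("127.0.0.1", 0))                               # bind(); port 0 = any free port
server.listen(1)                                            # listen()
t = threading.Thread(target=echo_once, args=(server,))
t.start()

client = socket.create_connection(server.getsockname())     # connect()
client.sendall(b"hello")
reply = client.recv(64)
print(reply)                                                # b'hello'

t.join(); client.close(); server.close()
```

Whether that longevity reflects the abstraction being the right one or merely the entrenched one is exactly the question being asked above.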