A useful shorthand I find for this kind of thing is to tag shell commands with comments.
Now the command is searchable with Ctrl-R > frob...
When I’m in a trolling mood, I show that to people while omitting the space after the # comment and tell them shells support Twitter-style hashtags. :D
Been there, done that ;) My .bash_history
I’m mostly enjoying fish’s completion today. I like the fact that it takes the directory into account. It has its annoyances, but it has never driven me mad to the point that I would start looking for other mechanisms, so far.
Regarding your public .bash_history, what do you do to prevent sensitive information from accidentally ending up there?
Crazy you keep yours public. I have mine in a separate private repo from my dotfiles. Too easy to leak a secret.
It seems like it is not a real history file. For example, it is alphabetically sorted. I assume that they use it as a place to store common commands, searchable by ^R.
Yeah, otherwise it would murder my productivity if I had to stop after typing an alias- and shortcut-laden sausage of a line to add a comment. I spam that enter key like crazy; the commenting would never work.
Regarding the comment in the article about how to do a context switch outside of an interrupt - this should work?
i.e., the relevant regs are stored in a struct, and passed via a0,a1
https://raw.githubusercontent.com/mit-pdos/xv6-riscv/riscv/kernel/swtch.S
Call swtch to jump from t0 -> t1, and when done call it again to jump back t1 -> t0, and return to where you swtched from.
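For illustration, a rough C-side sketch of how that fits together (the field order must match what the linked swtch.S saves and restores; the names follow xv6-riscv, and the t0/t1 contexts are just placeholders):

```c
#include <stdint.h>

/* Callee-saved state, laid out in the exact order swtch.S stores it. */
struct context {
    uint64_t ra;                    /* return address: where swtch "returns" to */
    uint64_t sp;
    uint64_t s0, s1, s2, s3, s4, s5,
             s6, s7, s8, s9, s10, s11;
};

/* Implemented in assembly (the linked swtch.S): saves the current registers
 * into *old (passed in a0) and restores them from *new (passed in a1), so
 * execution continues wherever new->ra points. */
void swtch(struct context *old, struct context *new);

struct context t0_ctx, t1_ctx;      /* illustrative: one per thread of control */

void yield_to_t1(void)
{
    /* Jump t0 -> t1. When t1 later calls swtch(&t1_ctx, &t0_ctx),
     * control resumes here, just after this call. */
    swtch(&t0_ctx, &t1_ctx);
}
```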
This has actually caused outages at stock exchanges, usually when some hapless fool accidentally mistakes the SCRAM fire suppression button for the unlock door button.
The noise of the gas nozzles is loud enough and of long enough duration to seriously upset the hard disks in the servers.
It happened to the Nordic Nasdaq in April 2018, and also to the Australian ASX in June 2018.
https://www.bleepingcomputer.com/news/technology/loud-sound-from-fire-alarm-system-shuts-down-nasdaqs-scandinavian-data-center/
https://www.afr.com/companies/financial-services/asx-back-online-after-fire-alarm-bungle-20180605-h10ytt
What a wonderful nostalgia-trip! A 386DX (DOS 6!) was my family’s first computer, and I wasted HOURS on it with Chopper Commando, Bartender Battle, and lord knows how many others. Thanks so much for taking the time to do the port and share it with the rest of us.
Thanks law, I appreciate that.
I’m glad someone else out there remembers this era as fondly as I do! The machines and games were primitive, but they were fun.
I hope the web port ran OK for you. I’d like to make it work in more browsers, but wasm isn’t quite there yet. Cheers.
I know what this game looks like but it just seems unfathomable to have an article about the game and not have a screenshot.
Yeah, first thing I thought as well. Looks amazing otherwise, huge pile of effort.
Fixed - I knew I left something out. I have added a little video. Thanks for the feedback.
Can you elaborate a bit more on what the impetus was for writing this tool?
Just curious as to why you needed to sort json out of core.
I just pipe a lot of JSON data around for an analytics project I am working on. coreutils does sorting really well with multiple cores and temporary files, but the standard sort tool doesn’t play well with JSON.
My main use case right now is just sorting a huge dump of records by timestamp.
Ah yep, that makes sense, I have used coreutils sort for similar reasons to partition large datasets by key out-of-core.
A lot of the unix tools kick ass at data processing once you get things into the right (i.e. textual, whitespace-separated, newline-delimited) format.
Just watch out for collation issues (use LC_ALL=C), and other weird locale stuff, if you are depending on a particular order.
I just recently published the first part of my ongoing MIPS work; you may have seen it here:
mipsdis
This week I am working on polishing up the second part for release, which is to do with emulating a MIPS 4Kc processor.
I recently managed to get the prototype of my C-based MIPS emulator to boot Linux to the prompt, after a lot of blood, sweat, and tears, so it is time to step back and refine it.
The work this week involves neatening up parts of the prototype for release, and extending mipsgen to generate the core of the emulator for both JavaScript and C.
I am currently writing a very basic transpiler that will hopefully do the job.
Back when I was playing with this, I toyed with the idea of writing the MIPS opcode descriptions in a DSL I could compile to many languages. My goal was to port it to Lua and run it inside computer games that use Lua as a scripting engine.
In the end I wrote luamips, though I never finished it completely.
Luamips looks interesting. Which games were you targeting?
With my MIPS stuff, I am traveling down a similar path. My goal is to be able to host ‘real’ C code in weird environments like the browser.
My current plan is for the opcode-description DSL to be a subset of C, or as much of it as I can be bothered to parse and transpile.
My target languages for the emulator are currently C, JavaScript, and Ruby. I aim to keep the amount of language-specific code down to around 500 lines, and autogen the rest.
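To make that concrete, here is a purely hypothetical sketch of the kind of opcode description such a DSL entry might hold, and the C a generator could emit from it (this is not mipsgen’s actual format, just an illustration of the approach):

```c
#include <stdint.h>

/* Hypothetical DSL entry, written in a small subset of C:
 *
 *   op addu(rd, rs, rt) {
 *       gpr[rd] = gpr[rs] + gpr[rt];
 *   }
 *
 * One plausible C function a generator could emit from that entry; a
 * JavaScript or Ruby backend would emit the same body in its own syntax. */
static inline void op_addu(uint32_t gpr[32], uint32_t rd, uint32_t rs, uint32_t rt)
{
    if (rd != 0)                    /* $zero stays hardwired to zero */
        gpr[rd] = gpr[rs] + gpr[rt];
}
```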
Funny, my original goal was to run openssh inside the browser.
At the time it was Garry’s Mod + Wiremod, which is a sandbox electronics game.
Author here, thanks for your feedback
The metaprogramming does look like a little bit of overkill, but it has already come in handy for my other MIPS projects that are in the pipe.
I’ve used it to generate code for an assembler and an emulator core, and going forward it will be very useful for creating both C and JavaScript versions of those projects.
Here are the tables I used a long time ago to generate a disassembler for a small MIPS emulator:
https://github.com/andrewchambers/cmips
https://github.com/andrewchambers/cmips/blob/master/disgen/mips.json
Yeah, your cmips project, along with QEMU, came in handy when I was trying to figure out some of the weirder bits of getting Linux to boot on my emulator. Cheers for that!
I may send some small issues or PRs your way when I post mine up.
Thanks, mine most definitely is buggy. Once it booted I just called it a day, even though I didn’t always know what I was doing at the time. Glad it helped.
In these cases, it is often a niche thing, where the author has had few, if any, comments giving feedback.
I have found that contacting the author directly has led to a rich and rewarding discussion.
I have been trying to get the Linux kernel to boot on the 4Kc MIPS emulator I am writing.
It has been a fun but frustrating process of digging through kernel source, QEMU source, horizontal folkdancing with buildroot and stepping through code and muttering wtf more than a few times! Figuring out interrupts is ‘fun’. Plenty of little landmines as well.
The good news is that I have managed to hack enough code into the kernel tree in arch/mips/mipsemu to get the kernel to boot to a prompt on QEMU with a setup that matches my emulator too.
Now that I can boot and debug the kernel easily, the job for this week is adding support for the remaining privileged opcodes to my emulator until it boots to a prompt too.
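As a reference point, here is a minimal, hypothetical C sketch of handling two of the simpler privileged CP0 opcodes (MFC0/MTC0). The bit fields follow the MIPS32 encoding, but the cpu struct is made up and the real emulator’s state will look different:

```c
#include <stdint.h>

/* Hypothetical CPU state; the real emulator's structures will differ. */
struct cpu {
    uint32_t gpr[32];          /* general-purpose registers */
    uint32_t cp0[32][8];       /* CP0 registers, indexed as [rd][sel] */
};

/* Execute one COP0 instruction word (primary opcode 0x10). */
static void exec_cop0(struct cpu *c, uint32_t insn)
{
    uint32_t rs  = (insn >> 21) & 0x1f;   /* sub-opcode: MF, MT, or CO */
    uint32_t rt  = (insn >> 16) & 0x1f;   /* general register number */
    uint32_t rd  = (insn >> 11) & 0x1f;   /* CP0 register number */
    uint32_t sel =  insn        & 0x7;    /* CP0 register select */

    switch (rs) {
    case 0x00:                            /* MFC0: CP0 -> GPR */
        if (rt != 0)                      /* $zero is never written */
            c->gpr[rt] = c->cp0[rd][sel];
        break;
    case 0x04:                            /* MTC0: GPR -> CP0 */
        c->cp0[rd][sel] = c->gpr[rt];
        break;
    default:
        /* ERET, WAIT and the TLB ops sit under the CO bit (bit 25),
         * selected by the funct field in bits 5..0. */
        break;
    }
}
```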
I can’t imagine any scenario in which a regulatory response by NIST or any other government agency would have led to this incident being handled better. In my experience, when government agencies are involved in such incidents there is much more secrecy and less transparency than what we saw from Google and Cloudflare here.
Don’t get me wrong: I think there are areas in Infosec where government response would be welcomed by me (think: IoT). I don’t think this is one of them.
I can certainly understand your viewpoint, and it is most likely true given the current state of the world.
But what we have here is Google, a large private company, essentially performing some of the investigative and reporting role that an incident such as this requires, but not being able to do much in the way of regulation or punishment. It certainly has no power to do root cause analysis at Cloudflare, change any laws or hand out fines.
My hope is certainly that the industry can sort itself out, but if it doesn’t, and the scope of these leaks gets worse, then what is the answer?
Their recommendations would’ve prevented it. Prior regulations and recommendations included using tools that generate code [hopefully] free of security vulnerabilities, and memory-safe languages. The code submitted for review under DO-178C certification typically has to pass static analysis + testing of every path. A whole ecosystem of tools and safe libraries emerged to support such things, with the idea that re-certification after screw-ups would be more costly than investing in solid development. The same thing happened under TCSEC, the original security certification, where high-assurance systems had to have all states precisely modeled, with evidence that none broke a precise security policy. They also usually used things like Pascal or Ada for better safety.
Long story short: existing and prior regulations pushing the minimal best practices for secure coding could’ve prevented it if it was just a pointer or parsing error. The safety regulations continue to do their job in aerospace. The security regulations were canned… despite effectiveness during pentests by NSA… due to bad management at NSA (the MISSI initiative) and bribery of politicians by software companies (esp. IBM and Microsoft) wanting DOD to use their stuff instead of secure stuff. Policy changes killed assurance requirements & demand for them. A bullshit standard called Common Criteria followed, with many insecure systems certified and adopted per DOD’s COTS policy. A subset of it (EAL6/7) worked, but it wasn’t required & evaluators allowed more hand-waving. The common mantra that we don’t know how to prevent many security problems with regulation comes from people who have never studied regulation of safety/security in IT to begin with & probably couldn’t name a single evaluated system under TCSEC unless they read one of my comments naming them.
Gov regs that require disclosure I’m OK with. Every company wants to downplay breaches. Big hoary notices that must be snail-mailed to every single person whose data may have been compromised? That’s a fitting punishment. And yes, that punishes CF’s customers somewhat unjustly, but that makes all of them very angry at CF. So maybe CF will try a little harder not to screw up again.
Quoting the article: “This incident started at Cloudflare, probably by just one individual, when they made a one line programming error in their use of a HTML parser library.”
I disagree wholeheartedly. This is an organizational problem. Now, I’ll give the author the benefit of the doubt and assume he actually agrees with what I’m about to argue, based on the rest of the article, but I think that stating it like this upfront is actively harmful.
It should not have been possible for one person to get this kind of bug into production. It wasn’t one person’s decision to not do fuzz testing. It wasn’t one person’s decision to not test with known bad inputs. It wasn’t one person’s decision to allow a custom parser into a production system. It wasn’t one person’s decision to allow C code in a system of this type.
And if any of those things were one person’s decision, that’s an even bigger problem.
I’ve been wondering this myself. The Cloudflare incident report referred to fuzz testing they conducted after being notified of Cloudbleed, but didn’t get into details about precisely what testing had been performed prior. Have they published any details about this?
I think under a lot of ordinary circumstances it’d be unfair to demand a detailed test plan from an arbitrary piece of software. But: Cloudflare’s reverse proxy nginx extensions are some of the most sensitive C modules on the planet. In the same way that we know a good bit about how aggressively Google tests the codecs behind Youtube and the image parsers in Chromium, I feel like it would be reasonable to want to know what Cloudflare did with this code.
It’s not like nobody knew that writing this kind of code in C would be dangerous.
I agree. I think the scope of the problem is sufficient to demand transparency about why the problem happened, and not just how. I imagine that will be problematic, considering the legal liability issues, which is precisely the problem with letting the private sector handle this itself.
Author here: I definitely do not blame that individual, and agree wholeheartedly that it is an organisational problem.
I agree organizational priorities were the main problem. I will add that wise use of techniques to reduce defects might have knocked it out. One person can do that. It happens regularly in safety-critical industries and the few in embedded that care about quality. A specific example is what I used to do, where I’d autogenerate the code for parsers from a grammar, then analyze and test that code. This got common with DO-178B since they had to buy static analysis & automated testing tools anyway to meet certification requirements with high confidence. The combo of generation followed by static analysis was common enough that Eschersoft made it the standard practice for their product: a spec language w/ formal analysis & code generation in C/Ada, plus a static analyzer for C to spot any errors in that (or other code).
There are also many open-source tools that can catch errors in C with either low or no false positives. I’d have a repo running any C code through all of them as part of the build/testing process. Automatically. So, I blame Cloudflare mostly, but one developer could have prevented this without much time. We have two memes here where a company and an individual don’t care enough about security/quality to use cost-effective means to achieve it. The baseline for IT.
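For a sense of the class of error such tools are good at flagging, here is a simplified, hypothetical C sketch of the kind of end-of-buffer check the Cloudflare postmortem described (an equality test that a pointer can step past). A fuzzer running under AddressSanitizer, or a bounds-checking static analyzer, will typically flag the resulting out-of-bounds read:

```c
#include <stddef.h>

/* Simplified illustration only: a scanner whose end-of-buffer test uses
 * equality. If the pointer ever advances past 'pe' in a single step (the
 * p += 2 case below), the check never fires and the loop keeps reading
 * past the end of 'buf'. Testing p < pe instead closes the hole. */
int count_tokens(const char *buf, size_t len)
{
    const char *p = buf;
    const char *pe = buf + len;
    int tokens = 0;

    while (p != pe) {          /* BUG: should be p < pe */
        if (*p == '<')
            p += 2;            /* consume a two-byte token; may hop over pe */
        else
            p += 1;
        tokens++;
    }
    return tokens;
}
```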
Peter Miller’s Recursive Make Considered Harmful taught me how to truly use make, and a lot about build systems.
https://web.archive.org/web/20070205051133/http://members.canb.auug.org.au/~millerp/rmch/recu-make-cons-harm.html