I made something like this for generating tutorials from documentation (though no editing, and more focused on shell commands) https://github.com/schneems/rundoc.
Very cool, I like the screenshot feature!
Seems like we have a few common primitives, such as “commands” and “file operations”. We did an empirical study on executability of software tutorials, where we looked at how often these types of things came up and the impact on execution. Paper should be interesting to check out:
Just got back to my replies. Awesome paper, great work. I’ll have to check out your additional command types; I like the uncomment idea.
One thing I’ve never done, mostly for lack of time, is set this up somewhere to run fully automated. I generate some of the Heroku docs via rundoc. I’ve always wanted them triggered by new Rails releases, but haven’t had the time to wire it up. Right now I just run the files manually.
This looks like exactly what I was looking for. The biggest feature it has over docable (the OP), for me, is that it can be run standalone, e.g. as part of a CI/CD pipeline, and doesn’t require some process running somewhere.
In some sense this even beats Emacs Org mode’s literate programming, as getting the org files run and published outside of Emacs is usually a giant pita.
OP: compatibility between the two syntaxes might be nice.
Yes, the standalone version for what you’re talking about is here (but not as polished)—this is what we use for CI, etc: https://github.com/ottomatica/docable
This also works on external documentation by using selectors; this is how we tested the 600+ tutorials.
Executing a code block as a script would mainly require inserting plumbing to call the right interpreter/compiler. For example, you can see an example with node.js code snippets here: https://docable.cloud/chrisparnin/examples/basics/script.md. Adding more languages would be straightforward, but we were mainly waiting for interest in this feature before going too crazy.
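To illustrate the idea (this is only a conceptual sketch, not docable’s actual implementation), the plumbing is essentially a dispatch on the snippet’s language to the right interpreter or compiler:

```bash
# Conceptual sketch only -- not docable's actual code.
# Dispatch a snippet file to an interpreter/compiler based on its extension.
run_snippet() {
  case "$1" in
    *.js) node "$1" ;;
    *.py) python3 "$1" ;;
    *.go) go run "$1" ;;
    *)    bash "$1" ;;
  esac
}

# "hello.js" is just a placeholder file name.
run_snippet hello.js
```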
Our use cases have been mainly for building interactive runbooks we use for simple deployments and operation tasks, and supporting workshop material for our university courses, so we’ve been largely working with tools and shells.
Finally, since file blocks work with any code snippet, you could technically create whatever content you want and then run the command needed to execute it, though in some cases running a snippet directly might be a better experience. For example:
```go | {type:'file', path:'sendmessage.go'}
// Run `go get golang.org/x/sys/windows`
package main

import (
    "unsafe"

    "golang.org/x/sys/windows"
)

var (
    user32DLL          = windows.NewLazyDLL("user32.dll")
    SendMessageTimeout = user32DLL.NewProc("SendMessageTimeoutW")
    SendMessage        = user32DLL.NewProc("SendMessageW")
)

func main() {
    // Broadcast WM_SETTINGCHANGE (26) to all top-level windows (HWND_BROADCAST, 0xffff)
    // with the "Environment" string so running programs reload environment variables.
    text, _ := windows.UTF16PtrFromString("Environment")
    SendMessage.Call(0xffff, 26, 0, uintptr(unsafe.Pointer(text)))
}
```
```bash | {type: 'command'}
go run sendmessage.go
```
I totally disagree:
So no, never delete anything you spent a substantial amount of time on. Delete only experiments and weekend tryouts.
We ended up needing an old Plan 9 filesystem server for virtualization (the same thing Docker for Mac uses). Glad someone archived and ported this!
https://bitbucket.org/plan9-from-bell-labs/u9fs/src/default/
Looks like it will go nicely with orderly (which I released a few days ago) :D. This is very similar to what I am doing on one of my own projects, though I use nix packages instead of Docker; the end result is the same.
Here is orderly: https://github.com/andrewchambers/orderly, which I think would work nicely as a mini init system for a group of services on slim.
My nix VM stuff is called ‘boot2nix’, but I haven’t open sourced it yet; it builds a VM image that is around 50 megs or so and boots very fast, like slim :). Actually, slim will also work well with nix, because nix can build Docker images too. The more cool lightweight tools the better; thanks for making it :).
In short, one use case is building immutable infrastructure (borrowing the answer from LinuxKit):
LinuxKit runs as an initramfs and its system containers are baked in at build-time, essentially making LinuxKit immutable. Moreover, LinuxKit has a read-only root filesystem: system configuration and sensitive files cannot be modified after boot. The only files on LinuxKit that are allowed to be modified pertain to namespaced container data and stateful partitions.
Unlike LinuxKit, though, you’re not limited to read-only filesystems; this is just a simple utility for creating bootable images given a filesystem.
Another use case is that, unlike containers, you can access hardware more readily (USB, etc.), and thus provide development/computing environments for hardware work. Other things like IP addresses and multiple NICs are easier to add and work with.
Stripping a distribution down to its most basic core is a really fun way to learn about Linux systems! Interesting education opportunities.
The final use case is that while Docker offers a nice ecosystem for building and sharing curated filesystems, it is not always desirable to also adopt the isolation/union-fs/container/chroot paradigm for development; sometimes people just don’t want to pip/apt-get install everything to run a piece of code. In short, you could use this to build persistent containers, like LXC.
We certainly plan on experimenting with different host images + filesystem combinations, so this will be on our radar.
In the short term, it seems that exporting the instance from VirtualBox (VMDK) and uploading it to a provider like DigitalOcean would work: https://www.digitalocean.com/docs/images/custom-images/overview/#image-requirements
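Roughly (the VM and file names below are placeholders), converting the VirtualBox disk to VMDK is a one-liner, and the resulting file can then be uploaded as a custom image:

```bash
# Convert the VirtualBox disk to VMDK (names are placeholders).
VBoxManage clonemedium disk my-vm.vdi my-vm.vmdk --format VMDK
# Then upload my-vm.vmdk as a DigitalOcean custom image (control panel or API).
```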
A helpful and thorough exploration of some useful SSH options, including ProxyJump, which I was not aware of. It is hosted on the blog of a company that has products built in this space, but without a big CTA in the article and with enough info, it’s kinda useful. As always, please flag aggressively if you find it to be content marketing.
… is something of a hot take!
Yes, had a good snort when I read that. I think the author must have meant manual management of keys or something.
SSH certs are terrific but they are not suitable for every single environment.
We had to use this at the university to access our vSphere box.
One thing I found useful: if you don’t want to spam your local ~/.ssh/config with this sort of stuff, you can pass a separate config file to ssh with the -F option:
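For example (the host names, users, and file path here are made up), you can keep a project-local config and point ssh at it:

```bash
# Hypothetical project-local SSH config; hosts and users are placeholders.
cat > ./project_ssh_config <<'EOF'
Host jumphost
    HostName jump.example.com
    User alice

Host internal-vm
    HostName 10.0.0.5
    User alice
    ProxyJump jumphost
EOF

# -F tells ssh to read this file instead of ~/.ssh/config
ssh -F ./project_ssh_config internal-vm
```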