Wonder if it actually checks the crypto or just a string match on the issuer. Could you maybe try that?
I can’t see any chance they’re that stupid; they know what they’re doing when they implement something like this. That doesn’t eliminate the possibility of more subtle vulnerabilities, though.
I don’t have, and don’t intend to procure, any i.MX8M devices, so I don’t possess a copy of its boot ROM. Anyone want to dump it?
Stupider things have happened.
Antivirus companies have actually made this mistake. Worth a shot to check.
How did you find the bug? Why didn’t your automated unit tests catch it?
Not the author, but how would you test against that? I assume that if he knew the bug was possible, he would just create a constraint preventing experience without an associated account, or a check constraint preventing anyone from creating a user with the cpu/guest names. You have to know that there is a vector for the bug before testing for it - do you usually write tests which would find data inconsistency issues?
On the one hand, reserving certain usernames so that nobody can use them to create an account is a good practice for systems like this, and thinking through the set of names to reserve should be part of the development process. On the other hand, very few people actually do that.
It wasn’t a bug before Rogan found it.
What makes code buggy? Its inconsistency with the spec. Without a spec, code is neither correct nor incorrect. In many cases the spec exists only in one or more heads, and when nobody has considered some case, there is no spec. This implies it was not a bug: the code just worked as coded. It was only when Rogan considered the emergent behavior that he worked out that detail of the spec. Now the code was inconsistent with the spec and thus buggy. Before that moment the code was fine, so of course nobody wrote a test about it.
Ok, maybe Rogan had a spec. I just assumed that he did not.
I like the article and the series. However, I find your website theme hard to read, contrast- and colour-wise. Here is a plain text mirror of the article without formatting: https://gopher.floodgap.com/gopher/gw?gopher://txtn.ws:70/0/lobsters/20190531/christine.website/20190530T1038%5fTempleOS%5f%5f2%5f%5f%5fgod%5f%5fthe%5fRandom%5fNumber%5fGenerator.txt
I have been meaning to make a native gopher server for christine.website for a while. How would you suggest I go about doing that?
Also I’m sorry about the theme issues. Would a lighter theme option of some kind help?
Native gopherhole would be cool. I do like your content. Theming is not something I would burden you with, since it’s my issue. I can just turn off CSS in Firefox and have the plain text or use reader mode.
The simplest gopher hole could just be a bunch of text files in /var/gopher with Pygopherd installed. More fancy stuff could generate a gophermap with some nice text and links.
If you’re still generating from markdown, one way to do it is to stick the markdown files in the gopher hosted directory, generate a gophermap every time you add a new post, and host with something like pygopherd.
A simple shell script like the following can produce a gophermap index for your blog directory, assuming that the title of the blog post is the first line of each markdown file & you index by edit time rather than creation time:
ls -ltr --time-style=full-iso blog/*.markdown | while read x ; do set -- $x; echo "1$6 $(head -n 1 $9 | sed 's/^# *//;s/\t.*$//')\t$9"; done
This is how I render markdown: https://github.com/Xe/site/blob/master/internal/blog/blog.go
Right. Some folks prefer viewing raw markdown over gopher if the formatting matters semantically (which might be the case here, since you’ve got a lot of code blocks), & so that’s what my script above is for: creating a TOC for your blog posts, to be served as markdown.
Alternatively, the formatting could be stripped or selectively stripped, & you can emit plaintext or a gophermap. This wouldn’t necessarily change the TOC substantially (emit ‘0’ as the beginning of each line instead of ‘1’ to render as gophermap). One way of producing text is to render the markdown as html and then render the html through w3m or lynx, but you’re liable to have non-functional headers and footers (since links will disappear).
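For the selective-stripping route, a crude sed filter can also get surprisingly far without the HTML round-trip. Here’s a sketch (it only handles heading markers, inline links, and emphasis/code markers, and it will happily mangle fenced code blocks):

```shell
# crude markdown stripper (sketch): drop heading markers,
# rewrite [text](url) links as "text <url>", strip emphasis/code markers
strip_md() {
  sed -E 's/^#+ *//; s/\[([^]]*)\]\(([^)]*)\)/\1 <\2>/g; s/[*_`]//g'
}
```

Used as e.g. `strip_md < post.markdown > post.txt`.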
You can also just host html in a gopherhole (gophermap type code ‘h’), but that sort of defeats the point: we can strip formatting from your website just by visiting it in lynx, after all, & not all gopher clients have html support because ‘h’ is not a completely standard code.
The script sed -E 's/^/i/;s/\[([^]]*)\]\(([^)]*)\)/\nh\1\tURL:\2\ni/g' will take your markdown and emit a gophermap where everything except links is displayed as markdown but links are navigable (assuming the client supports both the ‘h’ typecode and the ‘URL:’ extension, which lynx does).
There are some good tutorials floating around on how to create gophermaps. They’re a lot easier than generating HTML, particularly for indexes.
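For anyone who hasn’t seen one: a gophermap is just lines of `<type><display>`, `<selector>`, `<host>`, `<port>` separated by real tabs. A minimal one might look like this (a sketch; host, port, and the `fake`/`(NULL)` convention for info lines vary a bit between servers):

```
iWelcome to my phlog	fake	(NULL)	0
0First post	/blog/first-post.txt	example.com	70
1Archive	/archive	example.com	70
```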
Recommend historical tag.
Added it. I did look for a ‘legacy’ tag, but that doesn’t exist.
But did he test whether Windows would crash with his ntfs-3g modified (0x8000) files?
If you can, never do an in-place upgrade. Way too many things can go wrong or end up in an unexpected state, and a failed upgrade leaves your system broken.
Install a new machine and migrate to there.
If you have a deployment framework, even better. Roll out new cattle and, if needed, migrate. Let your tooling (Ansible, Salt, Puppet, etc.) do the work for you.
And never, ever use a non-LTS version for production systems…
Thanks for your feedback.
I’ve shared the process that I usually use on my local machine when I upgrade Ubuntu to a new version.
It’s not intended for production systems, only for development environments.
In my opinion, the beauty of gopher is that everyone is equal and limited. Then you have to differentiate yourself by the actual content instead of fancy markup and themes.
I run a gopher news sync, http://txtn.ws or gopher://txtn.ws (if your browser supports it), and I notice that most of the content is the same. Different sites use the same press bureau. Without the theme or markup it shows that they’re nothing more than shills designed to make advertising money.
Actual long-form content where research was done is rare, but it’s there, and especially outlets like Ars (tech) or the Guardian (sometimes) have such content.
So gopher’s limitations (just text and nothing more) force you to write quality content.
For anyone wondering, I’ve written a few articles on this USB HSM device: https://raymii.org/s/tags/smartcard-hsm.html - the previous version that is.
Love them; I added them to the nixers newsletter last week. Can you recheck the formatting of the code blocks on your website? Newlines are not shown properly.
You have actual vax hardware? How cool! Do you know the power usage?
Would a Gopher client work faster? Since it’s a simpler protocol without encryption?
Small VAXen are fairly low on power - I don’t have the figures but a VAXstation 4000/90 only has a 174W PSU.
I do not. Follow the link; it’s a talk someone presented at BSDCan.
This looks really interesting. I wish this were open source.
I’ve wanted to make some digital guitar pedals for a while, and I feel like this could be a good way to do something like that.
They will need to release some sources if they build on Linux (which they do), since it’s GPLv2.
I personally really like sslh: it’s a protocol multiplexer that accepts all connections and acts as a reverse proxy by picking an upstream server based on the protocol that each client seems to expect. For example, if a client connects and immediately sends traffic that looks like an HTTP request, sslh will let an upstream HTTP server deal with the client. If the client does nothing, after a configurable timeout sslh will proxy the connection over to an SSH server.
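For anyone curious, the config for a setup like that can be tiny. A sketch (hosts and ports are examples; check the sslh docs for the exact syntax your version expects):

```
listen:
(
    { host: "0.0.0.0"; port: "443"; }
);

protocols:
(
    { name: "ssh";  host: "localhost"; port: "22"; },
    { name: "http"; host: "localhost"; port: "80"; }
);
```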
I use sslh at work in order to expose Prometheus metrics and an SSH server on a single port of a Docker container. Just like that, *snaps fingers*, I’ve gained SSH access to all of my containers that are also scraped by Prom.
Ah, that’s a pretty cool tool. I hadn’t seen anything quite so generalized before.
Why do you use a container for more than one process?
I set up supervisord as the entry point in a base image, and then have derived images add their config files into the directory from which supervisord picks up program configs. I’ve tried conjuring up a similar setup using OpenRC and whatever SysV init comes with BusyBox, but it never did turn out as smooth as my supervisord setup, so I’m rolling with that.
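Roughly, the base image’s config looks like this (a sketch; the paths and program names here are made up, not my actual setup):

```ini
; /etc/supervisord.conf in the base image
[supervisord]
nodaemon=true

; derived images drop program configs into this directory
[include]
files = /etc/supervisor/conf.d/*.conf

; e.g. a derived image adds /etc/supervisor/conf.d/sshd.conf containing:
; [program:sshd]
; command=/usr/sbin/sshd -D
```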
Edit: you asked “why?” and I answered the “how?” - well done! As to why: one of my projects at work ships as a Docker container that runs on machines that I have no access to. For ease of debugging, tailing logs, and generally poking around, nothing beats having good old shell access. Running sshd inside my container alongside the main process nicely sidesteps the need to provision shell access to the Docker host, which is organizationally unpalatable when done on a systematic basis. Simply hiding behind Prom’s metrics port is way easier.
The how is just as interesting, as I have tried it with systemd (which blatantly refuses to run in Docker), so I now use supervisord as well, for mostly the same purpose. However, I thought that a container was meant for one process, thus perceiving myself to be doing containers “wrong”: lightweight VMs for trusted software, where actual KVM would be way too much.
What kind of project is it?
You can also use something like https://github.com/Yelp/dumb-init which works very well too, and is probably more lightweight than supervisord.
The nice part of supervisord is that it also works everywhere else, so you only have to learn one tool.
I use dumb-init in containers that only have a single process in them, while supervisord allows me to ship multiple independent but related processes in a single container.
I’ve come to accept that anything goes inside a Docker container that would have previously gone into a complete Linux system. After all, Docker is just convenience machinery atop Linux namespaces, which is exactly in line with how I’m using Docker: to isolate & virtualize a complete system.
The project I’m currently working on is a log ingestion daemon for a very temperamental legacy application that emits Valuable Business Data™️ in a variety of mostly textual formats. The daemon acts as a bridge between this legacy application and a streaming data pipeline by tailing files and transforming them into streams of events. Most of the difficulty with this daemon has to do with really weird stuff that the legacy application does, so convenient introspection via a sidecar sshd has proven invaluable.
“One process per container” is unjustified dogma left over from the early days of Docker. I like the notion of one service per container, where any given application is composed of multiple smaller services. When a service depends on two or more processes that are tightly coupled together and it would never make sense to handle them separately, they should go in the same container.
And when the service is in fact only one process, there’s still the issue that it probably doesn’t behave anything like init, which can be a problem, especially if it does any forking.
Nginx supports this natively: https://raymii.org/s/tutorials/nginx_1.15.2_ssl_preread_protocol_multiplex_https_and_ssh_on_the_same_port.html
No need for sslh or other multiplexers.
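The gist of that article, roughly (a sketch; ports and upstream addresses are examples):

```nginx
stream {
    upstream ssh   { server 127.0.0.1:22; }
    upstream https { server 127.0.0.1:444; }

    # $ssl_preread_protocol is empty when the client sends no TLS
    # ClientHello (as an SSH client does), so default to the SSH backend
    map $ssl_preread_protocol $upstream {
        default   ssh;
        "TLSv1.2" https;
        "TLSv1.3" https;
    }

    server {
        listen 443;
        proxy_pass $upstream;
        ssl_preread on;
    }
}
```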
This seems like a lot of effort. screen can do this simply by chaining the keys, e.g. an extra ‘a’ for commands to nested screen sessions, i.e. ‘^a-a-c’ to create a tab in the nested screen. It doesn’t require changing any settings locally or remotely.
The issue here is what to do with keys that do not need to be prefixed. If you have non-prefixed keybindings, then the outer tmux will always grab them unless told not to. That’s what this article shares: a way to tell tmux to behave.
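The usual shape of that trick, sketched from memory (F12 is an arbitrary key choice): a root-table binding that disables the outer tmux’s prefix and key table entirely, and an ‘off’-table binding that restores them:

```
bind -T root F12 \
    set prefix None \; \
    set key-table off \; \
    refresh-client -S
bind -T off F12 \
    set -u prefix \; \
    set -u key-table \; \
    refresh-client -S
```

While the outer tmux is “off”, every key, prefixed or not, passes through to the inner session.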
Yeah, this is the tricky bit as I understood it too.
Tmux of course can do that, too.
That is exactly what I do in tmux.
Personal experience: I’ve had raymii.org since 2006 (and have actively used it on this domain since 2012) and have used a multitude of languages and systems to run it. From Joomla and Mambo to WordPress, to my own PHP creation, to the current state: a static site generated with my own Python code. Here you can see some screenshots from 2010 until 2018: https://raymii.org/s/blog/Site_updates_new_layout_for_overview_pages.html and this is the most recent update: https://raymii.org/s/blog/Site_updated_new_2019_layout.html - one thing I have tried to do in all those years is to never break the URLs to content. I now get 5000 to 10000 unique visitors a day, so it seems people do like my writing. And I don’t participate on FB, Instagram or other modern social networks. Just here and some Reddit.
Having a personal site allows me to do things like just add Gopher support. Or to have a cluster of 10 webservers do the hosting (and use a deployment framework like Ansible to manage that fleet). Heck I recently rolled out an openBSD server just for fun.
Most people I talk with about this subject say they don’t have anything to write about, or that they find keeping it up to date too much effort. Well, a personal site doesn’t rot and you’re free to do whatever you want. Just a front page with contact details and your resume could be enough. A blog that hasn’t been updated since 2013? Doesn’t matter, since the content is still there, and whenever you feel like it you can go back to writing again. I’ve had my fair share of periods where I didn’t write much. In all of 2017 I published just as much as in one month of 2019, but hey, no one cares.
So just get started with your own site!
I will say that I happen to have discovered your blog via tomasino’s phlog aggregator, and read it regularly, but only by gopher.
Glad to hear that! There is a gopher RSS feed as well, if you ever find an RSS reader that supports gopher.
I honestly can’t remember when I last stumbled across your site or why, but your post has prompted me to finally add your feed to my feed reader. Thanks (from a fellow long-time static site writer)
This is cool! I’m going to try this on Nucleus as well, since that’s my daily embedded stack.
I’d rather have built-in honeykey support in SSH to log the suspicious attempt without exposing any attack surface. That could be used routinely on production systems.
This doesn’t give the attacker a shell. It forces the command “/usr/local/bin/honeykey kulinacs@honeypot” to run and then disconnects the session.
Updated, thanks. I’m quite concerned about the proposed implementation. There are too many moving parts around executing a script.
If it wasn’t a script but a statically linked binary (no dependencies), would that be better? Using command in authorized_keys is an extremely common way to automate tasks that need to be remotely prodded.
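For reference, the locked-down entry would look something like this (a sketch; the key itself is shortened, and the restriction options are the usual hardening set):

```
command="/usr/local/bin/honeykey kulinacs@honeypot",no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding ssh-ed25519 AAAA... honeykey
```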
Slightly. There’s still a lot of code being executed to create a user session and SSH is not perfect: https://www.cvedetails.com/vendor/120/SSH.html
Look at all the parameters in “2”: https://research.kudelskisecurity.com/2013/05/14/restrict-ssh-logins-to-a-single-command/
Parsing logs as @raymii suggested sounds 10x safer.
An authorized_keys command can be useful, but for security I’d rather use a chroot without root and just the thing the user needs to do. With an authorized_keys command you have to be careful: if you allow too much, the user might be able to append to or edit the authorized_keys file and give themselves other permissions.
Every solution has its use but I often try to look at what’s behind the question to build a better solution. In this case, information gathering and notification, which in my opinion can be done by parsing logs. Then you don’t even need a user on your system and can just check the log for that specific key.
Putting sshd in debug3 log mode does log the ssh keys used. I’ve done that before, and a little log parsing can get you far. No need for a special script.
This sounds much better. It would be very nice if SSH could detect honeykeys and log them with ERROR level.
Configure ssh to log to syslog, and filter honeykey messages to a separate log file that you alert on. You can then discard the extra debug messages in your syslog config.
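As a sketch of that (the fingerprint is a placeholder, and I’m assuming rsyslog here):

```
# sshd_config: log key offers via syslog
SyslogFacility AUTH
LogLevel DEBUG3

# /etc/rsyslog.d/10-honeykey.conf: route matching lines to their own file
:msg, contains, "SHA256:placeholder-honeykey-fingerprint" /var/log/honeykey.log
& stop
```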
I wasn’t expecting this article from my site to be picked up.
*looks up submitter raymii’s profile & website*
my primary focus is on building high-available cloud environments
Ah, so you’re no stranger to the HFS. The further you are away from it the better.
But how do you deal with the mosquito men?
I saw your other article submitted and liked your site, read most of the non-electronic articles and found the DF article to be cool.
I do need to update the website, I’m not building clouds anymore, almost a year into embedded C(++) now. Just hardware, valves, pumps and current for me. I like it way better than the sysadmin stuff.
“passive revocation” === expiration
I think everybody knows what expiration is and how it’s implemented. The point is that expiration can also be used deliberately as a revocation policy.
Yeah this title seems to be a good candidate for a moderator to fix.
FYI it looks like you have an encoding bug:
Kosovo heeft met Amerikaanse hulp 110 Kosovaren uit SyriÃ« teruggehaald
Floodgap’s web proxy uses Windows-1252 encoding by default, for what I presume to be compatibility reasons. Most clients default to UTF-8, however.
In my gopher clients it seems to work as intended: https://i.postimg.cc/dVq5pDVj/Screenshot-20190421-160850-Pocket-Gopher.png and https://i.postimg.cc/C1jDwHvB/Screenshot-20190421-161321-Diggie-Dog.png Lynx as a client works as well. Have you tried a Gopher client?
The browser expects an encoding and I’m not sure how the floodgap proxy handles that for plain text files that don’t specify one.
Interesting, so gopher clients assume all content is UTF-8 (or whatever encoding you’ve used here)?
I guess so? The Python script explicitly does UTF-8 and the clients I test with seem to as well. For the filenames I strip out everything except a-zA-Z.
Why not allow numbers? Right now there are names like Queen_Attends_Easter_Service_on___rd_Birthday.
No specific reason. I changed the regex so numbers are allowed now.
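For what it’s worth, that sanitising step can be as small as a tr invocation (a sketch of the idea, not your actual script):

```shell
# keep letters and digits, turn everything else into underscores
sanitize() { tr -c 'A-Za-z0-9\n' '_'; }

echo "Queen Attends Easter Service on 93rd Birthday" | sanitize
# -> Queen_Attends_Easter_Service_on_93rd_Birthday
```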
The one I wrote assumes UTF-8, as I found more pages using that than ISO-8859-1 or Windows-1252. It’s also pretty clear that UTF-8 is the way forward for encoding.
Hey, you’re behind the Boston Diaries, but rumor has it you’re not even from, in, or near Boston! I like reading your site and gopherhole.
Thanks. There is a story behind the name.
I’m amused to see a number of people I’ve started following on gopher I’ve already seen on Lobste.rs or Mastodon and didn’t know it.