In my day job I write automated tests for web applications and APIs using Selenium, so a lot of what was said about the web being a bad target for automation resonates with me.
Giving the elements of a web page a targetable structure isn’t actually that difficult if you add id attributes to them. It just seems so rare that people are interacting with their page HTML at a level where they think about that now. Every time I see an HTML mess with no structure or targetable elements, it’s almost always because a library/framework is abstracting the process of writing HTML away from the developers.
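As a rough illustration (the page URL and id here are made up), an element with an id is a one-line, readable locator in Selenium, whereas a page without them tends to force brittle XPath/CSS selectors:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Firefox()
    driver.get("https://example.com/login")  # hypothetical page

    # With id="login-button" in the markup, the locator is stable and self-describing
    driver.find_element(By.ID, "login-button").click()

    # Without an id you end up with something fragile like:
    # driver.find_element(By.XPATH, "//div[3]/form/div[2]/button")

    driver.quit()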
I’ll also say that the distinction between automation and programming made in this article seems strange to me. Automating any task is programming and the people doing it are programmers in some capacity.
Good stuff. But now I’m a little curious about what it takes to safely operate an exit node. Not using my home IP is a pretty obvious first requirement, but I don’t want to bring unnecessary grief upon my VPS provider either, or strain my relationship with them as a customer. Any recommendations or experience reports?
I’ve used Linode to run a Tor exit node for about 2 years now. It’s been pretty boring for the most part.
A lot of VPS providers have some sort of rules about how you can/can’t operate exits. As long as you are proactive in asking about their expectations and willing to accept some limitations, it’s pretty easy to find somewhere to host one. If you want to run an exit node with no reduced exit policy, allowing all traffic on all ports, you might have a hard time finding somewhere that will allow that.
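For context, a “reduced exit policy” is just a torrc ExitPolicy that only allows the ports least likely to generate abuse complaints. A rough sketch (not a complete torrc, and your provider’s rules may differ):

    ExitRelay 1
    # Allow common, low-complaint traffic...
    ExitPolicy accept *:80
    ExitPolicy accept *:443
    # ...and reject everything else instead of Tor's broader default policy
    ExitPolicy reject *:*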
Disclaimer: I worked for Linode and helped set up some of their internal guidelines for Tor nodes so I’m pretty biased by knowing the rules. Your mileage may vary
You are probably not allowed to use a generic VPS as an exit node, since you are exposing the provider to a significant risk of having the business disrupted due to law enforcement kicking in the data center door and confiscating any machine for an extended period of time.
Thanks for the speculation, I guess? But I’ve already seen the Tor Project’s ISP table, including the “Exit” column, and was hoping for better quality information.
This is a pretty weak guide, and I’d suggest (with no malice intended) that the author isn’t qualified to write this.
Just as one example for starters, it’s possible to hide processes:
https://sysdig.com/blog/hiding-linux-processes-for-fun-and-profit/
The author admits that the guide is pretty weak at the start.
This seems to be targeted at less experienced users and offers some basic advice. To someone who just has a WordPress site on a VPS, this kind of guide is incredibly useful.
If you are dealing with a talented adversary who is hiding their processes this won’t help. But a pretty big portion of compromises are people who left their password as “root123” and got owned by a bot.
On one hand, it’s kinda cool because there are no ads and better potential for predictable income.
On the other hand, no, this is my CPU and the various tabs and apps I have open make it slow enough already.
The upside to this assumes a responsible site doing this intentionally and disclosing it.
Most of the cases here are compromised websites, so they have whatever ads they normally have and are mining crypto on someone else’s behalf. I’ve seen this happen quite a few times now while working with compromised customers (I work for a company that provides VPS services).
While I think the idea of replacing the internet’s ad-based model with a crypto-mining one is interesting, that isn’t really the issue being discussed in the article.
It would be amusing if we started to have additional cores for offloading ad-mining. Or if people found a way to offload the mining to the Intel ME.
This is pretty interesting. I work for a hosting company, and even when working with people to remove content this has never been mentioned or used. I think this is one of those things where, unless it gains critical mass (and everyone does it), there isn’t much point, because it becomes hard to tell what is legal/missing.
Not really a fan of this idea. This article isn’t so much about “defending” your website as it is about attacking anyone who scans it. Vulnerability scanners are often run from servers that are themselves compromised, so retaliatory attacks like this can further victimize people who have already been owned :(
Still pretty neat on a technical level though.
Just because you’re being attacked from a compromised server doesn’t mean that you’re not being attacked.
About what I expect from a Debian release. Nothing too exciting from a user perspective, but some cool stuff going on for infra and desktop.
The reproducible build stuff is really cool, but I doubt most people will notice.
I actually worked at a hosting company and received some of the reports in this story, so it is pretty neat to see this post with the results.
I prefer vcsh, because it doesn’t leave symlinks scattered everywhere but instead shadows the cfg files in its repositories. It’s just tidier on the outside.
I don’t understand how this is better than symlinks? Both of these solutions seem equally messy. I also don’t really see much issue with symlinks, especially when managed by Stow, but that is just a matter of taste.
One very important design feature that vcsh has is that it stores the actual repositories separately from their working trees, so one can simply rm -fr $HOME/.config/vcsh/repo.d without any ill effects - your config files are still there, you’re “only” getting rid of their repositories’ history. With stow, if you remove the repository, you end up with dangling symlinks, not to mention a broken environment.
Also, vcsh is just a POSIX shell script - not that I have anything against Perl, but the latter might be outdated or not available at all on your system.
Taking into account all of the above, vcsh seems like a leaner and cleaner solution… but that’s only IMVHO.
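To make the difference concrete, here’s a rough sketch of both workflows (the repo and file names are just examples):

    # vcsh: the git dir lives under ~/.config/vcsh/repo.d, the work tree is $HOME itself
    vcsh init vim
    vcsh vim add ~/.vimrc
    vcsh vim commit -m 'Add vim config'
    # Deleting ~/.config/vcsh/repo.d/vim.git only loses history; ~/.vimrc stays put.

    # stow: files live in the package directory and $HOME only gets symlinks
    mkdir -p ~/dotfiles/vim && mv ~/.vimrc ~/dotfiles/vim/
    cd ~/dotfiles && stow vim    # creates ~/.vimrc -> dotfiles/vim/.vimrc
    # Deleting ~/dotfiles now leaves a dangling ~/.vimrc symlink behind.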
This is a pretty interesting review. I feel like most people end up sticking with whichever of these programs they encounter first, as the differences aren’t that great.
I agree with the author’s criticism that both choices of prefix keys are “bad”, but I also don’t think there is ever really a good choice there. No matter what key combination you choose it will likely conflict with some software and it will be infinitely frustrating when it does.
Once you grok the passthrough mechanic of screen it isn’t quite as terrible, but it still sucks when you end up in nested screen sessions or want to go to the start of a line. Also, it seems particularly egregious to have CTRL-A as the prefix since this is one of the main keys in emacs and readline, which are also GNU projects.
Control-Z. You don’t need job control inside a screen/tmux session because having multiple screens is job control. It’s literally been 20 years since I set that default and I can’t understand why they never fixed it.
I remapped tmux to use Ctrl-a over Ctrl-b because, after pass-through, I only ever really use Ctrl-a once, but would use Ctrl-b multiple times. Ctrl-a results in fewer keystrokes over time.
Ctrl and a are too far apart for my pinkie and ring finger to make it at all comfortable. I don’t know how you manage to put up with that!
I started using CTRL-space as my prefix a few years ago and never looked back. It is easy to type, and I have never encountered it elsewhere.
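For anyone who wants to try it, the remap is only a few lines in ~/.tmux.conf (Ctrl-Space shown here; swap in C-a if you prefer that trade-off):

    # ~/.tmux.conf
    unbind C-b                  # drop the default prefix
    set -g prefix C-Space       # use Ctrl-Space as the new prefix
    bind C-Space send-prefix    # press it twice to pass a literal Ctrl-Space through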
I tend to forget EMACS potentially has a keybinding for each key already. But EMACS can replace tmux anyway, by treating windows as panes.
It can replace some uses of tmux, but the ability to share a set of windows seamlessly with a collaborator is much more difficult without tmux.
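The simplest version of that sharing, assuming both users are on the same host and trust each other, is just a tmux socket with group permissions (the socket path and group name here are made up):

    # Host: start a session on a socket the collaborator's group can reach
    tmux -S /tmp/pair new-session -s pair
    chgrp devs /tmp/pair && chmod g+rw /tmp/pair

    # Collaborator: attach to the same socket and see the same windows
    tmux -S /tmp/pair attach-session -t pair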
I hope they were kidding about using dash as a shell. I pity anyone who would do this to themselves.
Get off my lawn. I used the port of the Plan9 rc shell for many years after using the basic Bourne shell and maybe an old ksh. I used to feel that if I needed a heavy shell I was doing something wrong.
But a colleague turned me on to zsh last year. After loading it up with tweaks (many from ohmyzsh, others local) and ignoring that I have several 4-5MB shells around, I quite like it.
I’ve never actually used Dash, just read that it was lightweight. What would be your recommendation?
You might find Dash a little too lightweight as an interactive shell. Most would suggest one of bash, zsh (my choice), fish or even ksh or tcsh if you’re a traditionalist.
Dash is a bare-bones POSIX shell for (portable?) scripting only.
Only features designated by POSIX, plus a few Berkeley extensions, are being incorporated into this shell.
Having used a bare-bones shell on mainframes, trust me when I say you’re going to want something heavier. If it’s where you spend a lot of your time, then it’s worth making it comfortable.
The usual suggestions – bash, zsh – are just fine. I like the idea of keeping the root account on a lightweight shell and not making it fancy. Compared to graphical desktop environments, every shell is remarkably lightweight anyway.
Dash was made by Debian developers to speed up the boot scripts. Portability was not a goal (initially at least).
Seconding mksh, or OpenBSD’s port of ksh. They’re lightweight, POSIX-compliant, and fairly minimalist.
These are honestly some of the best tools I have used for managing servers.
Ranger is amazing when you just want to browse a few files and having a terminal multiplexer can change your whole workflow.
A lot of the tools designed to make configuration management more declarative fall short for me, mostly because they end up feeling pretty imperative by the time I get them where I need them to be.
I really like NixOS’s approach because the options are a very nice declarative API, IMO. On top of that, if you have to build more complicated options, Nix gives you the power to abstract with functions. You can then offer nice, clean interfaces to users without them having to understand the nitty-gritty, which I feel is the goal of being declarative.
The Nix approach also makes rolling things back pretty painless, as it keeps every declarative configuration as a generation of the system. So if something goes wrong you can just switch to the previous generation.
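In practice the rollback is a one-liner, since every configuration switch becomes a new system generation (a sketch of the usual commands):

    # List the system generations NixOS has kept around
    sudo nix-env --list-generations --profile /nix/var/nix/profiles/system

    # Something broke after the last switch? Go back to the previous generation.
    sudo nixos-rebuild switch --rollback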
That sounds fantastic. Unfortunately not everyone has their choice of OS to run :)
Most times I’ve done that class of work in my career the OS / distro has been pre-chosen and I have to work with that.
In my current gig it’s Amazon Linux all the way.
While some of this is specific to Haskell, such as avoiding damaging stereotypes, I think most of this is good practice for any sort of study group. So I think this article may be better titled “What a Study Group is Not”.
I am currently participating in a small Common Lisp study group and almost all of these points still apply.
The argument that a local username is, well… just local is a bit weird. No one expects me to get ckeen@ on every mail server in the universe so no one can pose as me…
It seems weird to me as well, but this is a pretty major block for large companies. I don’t think Mastodon will ever be able to get any sort of organization to adopt it because of this.
As someone who used to run Parabola with linux-libre, I find myself agreeing with this rant pretty hard. Trying to use a hard-line freedom distro rapidly became a major obstacle to getting anything I actually needed done.
I like free software but I really wish there were more people taking a practical approach (and doing so publicly) to fight against the prevailing FSF “too holy to talk about proprietary software” stance.
I’d like to see better support for things like license filtering with easy-to-add exceptions, and sandboxing that lets people make their own choices about the degree of freedom they want, rather than just Trisquel vs Debian vs Ubuntu.