Connections from CloudFlare’s reverse proxy are dropped. Do not help one private company expand its control over all internet traffic.
This is my favorite line of their docs and good on someone for doing it.
One feature I would really love to see, though, is adding HTTP headers outside the HTML document (like how Netlify does it), as there are certain things you can’t add to the document, like X-Frame-Options I think. It can simplify your build phase since there are fewer things to bake into the HTML, files are smaller, and you can get a head start on preload/prefetch/preconnect links because the document doesn’t have to finish downloading and have its <head> parsed first.
Cloudflare users are getting man-in-the-middled by Cloudflare, for technical reasons.¹ Because they already have ~25% of internet traffic as customers,² they’re in a unique position to do cross-site tracking without any cookies or complex fingerprinting techniques. Of course, being able to do something does not mean that they are doing it. But history has proven that when companies and governments are able to do something, they do it.
Cloudflare runs a large part of the internet to “protect” sites from DDoS attacks – but they also host the very same DDoS webshop sites, where you can ruin someone’s business for the price of a cup of coffee. There have been thousands of articles about this.
Adding to that, Cloudflare has gone down and we saw a massive chunk of the net just fail because of a single point of failure. There’s also a ton of hCaptcha sudokus to solve for Cloudflare, for free, for the privilege of seeing the site behind it if you’re using Tor, a VPN service, or just live in a non-Western country. Then, as a result, they suggest you use their DNS and browser extension to ‘help’ with the situation and further collect even more data on users.
(For a while I made a living doing version control and configuration management in a high-assurance environment.)
I recall when Stefan first announced this. I was very happy that it was him, because I knew he was involved with other version control systems, so he wouldn’t just wade in and make silly mistake after mistake. I was right.
I am looking forward to got taking over the world.
Things belonging “somewhere else” is a good strategy for minimizing complexity.
And in excess, a good way to make sure nobody ever actually goes back and learns about the things you reference. Think of inline Wikipedia Citation needed versus a reference number.
I’m not sure why you would want to run this on OpenBSD rather than on an OS with a better storage layer (a proper LVM, ZFS, anything) that would give you much better performance, operations, and reliability guarantees.
For me, the history of privsep and now pledge() and unveil() is worth a lot. I don’t encounter performance, operations or reliability issues with my OpenBSD systems.
Has anyone found a nice pattern for where to create these additional directories? Like the author I create them 1 directory above but it pollutes this space that holds directories for many other projects…
Something like the following I guess should be melded into my head:
mkdir project-being-cloned && cd project-being-cloned && git clone ... # now start making trees?
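Roughly, continuing that pattern (a sketch only; $remote and the branch names are placeholders):
mkdir project && cd project
git clone $remote main            # the default-branch checkout lives in ./main
cd main
git worktree add ../feature-x     # creates a feature-x branch in a sibling directory
git worktree add ../bugfix-y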
i like to think of a project as more than just code, it has notes, references, sample data, blueprints and what not. so i start by creating a directory for the project and have all the code (worktrees) reside in one corner of this directory:
What I do is clone the project into a directory named after the default branch, i.e. main, inside a directory named after the project. That way, when I check out a worktree one directory above, it is on the same level as the main branch.
I’ve been initializing repositories from a bare clone for a few years now. The trick is to clone the git directory under, well, .git.
git clone --bare $remote $out/.git
cd $out
git worktree add main
git worktree add fix-something
# …
With this strategy, you also get a simple git integration with any file manager, i.e. directories correspond to branches and worktrees are not nested.
Unfortunately, Git currently writes an absolute path to $out/.git in the worktree. That can be fixed by manually editing worktree/.git to refer to gitdir: ../.git.
I like this idea but I have trouble updating the bare repository.
A plain git fetch won’t fetch new branches from the remote, and in a worktree something like git switch --track <remote>/<branch>, which I normally use in non-worktree setups to switch branches, is not available.
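One workaround I’m aware of (untested in this exact layout) is to give the bare clone a normal fetch refspec so remote-tracking branches exist; the branch name below is just a placeholder:
git config remote.origin.fetch "+refs/heads/*:refs/remotes/origin/*"
git fetch origin
git worktree add -b some-branch some-branch origin/some-branch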
I use $project for the main branch and then $project-$branch in the same parent directory. Also, I’ve noticed that git worktrees don’t work exactly the same as a regular repo. For instance, you can’t switch a worktree to an already checked-out branch, for some reason.
I struggle to understand the root cause the author is working around. If a Makefile says foo depends on bar.c, baz.c and honk.c, does foo not get built when it’s appropriate?
As both Git and S3 set the file lastModified dates to the time they ran, the build process either never ran (artifacts are newer than source), or always ran (sources are newer than artifacts).
To use the caching across multiple (ephemeral) build agents.
“When appropriate” means “if bar.c, baz.c or honk.c have changed since foo was last built, then foo needs to be rebuilt.” But, how do you test “have changed since foo was last built” if modification dates aren’t reliable? Answer: content change detection.
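As a rough illustration of what that can look like (a sketch, not the author’s actual setup): compare a hash of the inputs instead of trusting mtimes.
# rebuild foo only when the contents of its inputs change
new=$(cat bar.c baz.c honk.c | sha256sum | cut -d' ' -f1)
old=$(cat foo.inputs.sha256 2>/dev/null || true)
if [ "$new" != "$old" ]; then
    cc -o foo bar.c baz.c honk.c
    echo "$new" > foo.inputs.sha256
fi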
jcs runs his own business in a manner that lets him hack on OpenBSD and whatever other projects he wants to whenever he wants. This should be applauded and emulated, not dismissed with a capitalistic “too much free time” handwave.
Wow, I felt a good dose of self-righteousness here, and you also managed to veer the conversation toward whatever ideology you identify with, which is amazing.
I know who jcs is and what he stands for, and I’m both a fan and a very happy customer. Still, “too much free time 🤣” is a valid comment about this feat.
I apologize if my comment(s) rubbed you the wrong way. I’ll refrain from further comments as it’s way off-topic. Feel free to educate me about capitalism in private messages. Have a nice day.
I think pledge(), although convenient, is not a very good design as a syscall. Linux’s seccomp is more general and flexible, a good example of separating mechanism from policy. And with good tooling it isn’t difficult to use. The author even implemented pledge() using seccomp.
This is why there are many more pledge()’d programs in OpenBSD than seccomp applications in Linux. Because one is easier and safer to use than the other.
Ooh, you are hitting one of my pet offline-first fantasies here: Email (and mastodon posts) should be fetched in the background and stored on disk. I want to read and post as I please and it should flow out when I am back online.
I am looking forward to seeing what you come up with.
Personally, I struggle if my brain is drained but my body is not: then I need to go for a brisk walk. If I don’t, it impacts my sleep, appetite, mood, etc. But because my brain is tired, I have to force myself to go.
If only my body is tired I can easily read something interesting, play a game or watch a movie and still get a good sleep. I also heavily prioritize my daughter, girlfriend and nerd friends.
There seemed to be quite a bit of interest when I mentioned what I was doing in the What are you doing this week? post, so I thought I would polish off the blog post sharpish and get it shared!
I don’t normally write this much about a topic, but I had to stop myself from going too far into the reeds for this one. If there is a specific subject that people find interesting, I could deep dive though.
but I had to stop myself from going too far into the reeds for this one
If you ever felt like going far off in the reeds, I would click that link so hard. Did you tweak kern.bufcachepercent or whatever it’s called to keep YAML files in memory?
Thanks for the encouragement. I haven’t needed to change it from the default, mainly because the data volumes I have right now are relatively small.
One of my hobby projects is a clone of Lobste.rs written in this stack and while it is functional it isn’t ready for an onslaught of traffic yet.
Memory usage is of course something that gets constant attention though, so if it transpires I need to up kern.bufcachepercent then the option is there like you say.
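For reference, on OpenBSD that would be something like the following (a sketch; 50 is an arbitrary value and both commands need root):
sysctl kern.bufcachepercent=50                      # takes effect immediately
echo 'kern.bufcachepercent=50' >> /etc/sysctl.conf  # persists across reboots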
The youtube link is broken because I moved to vimeo when youtube started unconditionally showing ads on all videos. Here’s the vimeo link
edit: never mind, I figured out how to get the embed code for vimeo and updated the article itself to have the new video link.
edit2: this article (and the syntax highlighting, formatting, etc) is the main reason I have been dragging my feet to redo my personal web site. I want to avoid breaking this page. it’s encouraging to see that this is not in vain, and that it is worth preserving. cheers
I remember seeing this article around the time it was published and I think I wrote to you. It’s still one of the most impressive write-ups I have seen.
This is a great review, even setting OpenBSD aside! In particular:
New to the seventh generation is a Dolby Atmos four-speaker sound system, which thankfully does not need any awful hacks to get working, and produces a very full, loud sound. There are now two speaker grilles on the top of the keyboard deck in addition to the two on the underside of the laptop. The new sound system is pretty much the only reason I decided to get another X1 Carbon, as the speakers on my Matebook X are also Dolby Atmos powered and it’s hard to use any other laptop that doesn’t sound as good.
This is huge. I switched from Mac back to PC last year and bought an Alienware 17 R5 because I wanted the big display and scissor keyboard with lots of travel and good feedback. The laptop is overall really, really nice but… the built-in speaker is AW-FUL.
Why do so many PC makers think they can cheap out on this? You’re paying serious $$$ for a machine; is a decent-sounding speaker too much to ask?
Can relate.
The only 2-3 times I’ve ever used the laptop speakers was on business trips when I was alone in my hotel room and didn’t want to use headphones… and ofc when I didn’t have speakers with me.
Mac laptop speakers are of good enough sound quality that in ~15ish years of using Mac laptops I never felt the need for external speakers. It’s that good.
Given that the tech exists, I’d expect PC laptop makers to do the same.
Guess we have to agree to disagree then. Never owned any MacBook, but whenever someone was playing music in the office from a MBP.. I’d say Lenovo is 2-3/10 and a MBP is maybe 5-6/10. I never thought “Wow, this sounds awesome” - more like “Wow, this is only 40% terrible” ;)
NB: My main machine has never been a laptop, so I’m really used to a 5.1 system or big speakers or headphones.
Sure, it may be hard, but it is possible to give up graphical interfaces entirely—even in 2019.
Graphical browsers have the benefit of presenting a readable layout. Most webpages are difficult to navigate in w3m, unless they are extremely barebones. Looks like w3m is not fully pledged, either.
I live in the terminal for most non-browser things and being able to SSH from my wimpy laptop/desktop to a beast of a server is my killer feature.
Editing images, audio and video from the commandline is something I rarely do but I have seen people do it.
Editing images, audio and video from the commandline is something I rarely do but I have seen people do it.
I’m not sure command line tools are capable of editing images and video in any useful or meaningful sense. Adjusting color, touching up areas, checking focus, etc. are innately visual things to do, and by their nature don’t lend themselves to bulk processing and automation in most situations.
Depends on the edit, doesn’t it? If you have two images and you just want to place one next to the other, that’s easy. If you have an audio file and you want to splice out the section from 1:30 to 1:45 and cut the volume on everything after 3:00 to 50%, you don’t need anything visual for that.
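For what it’s worth, both of those edits map to short command lines; a rough, untested sketch with ImageMagick and ffmpeg (file names are placeholders):
convert left.png right.png +append side-by-side.png    # place two images next to each other
ffmpeg -i in.mp3 -t 90 -c copy part1.mp3                # keep 0:00-1:30
ffmpeg -ss 105 -i in.mp3 -c copy part2.mp3              # keep 1:45 onwards
ffmpeg -i "concat:part1.mp3|part2.mp3" -c copy spliced.mp3
ffmpeg -i spliced.mp3 -af "volume=0.5:enable='gte(t,180)'" out.mp3   # halve volume after 3:00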
I didn’t mention audio, and it does seem more amenable to command line processing than images and video because it doesn’t really have a visual component.
For images and video it does depend somewhat on what the task is. Certainly specific edits lend themselves to command line tools, but it’s such a small subset of possible edits I don’t think it’s very useful.
The image editing I’m most familiar with is photo touch-up and RAW conversion, and even the most basic adjustments like white balance or exposure would be incredibly tedious and error prone with command line tools.
In a sense the whole concept of image editing is pointless without a way to view the results anyway.
I agree with you, for an image here or there, when you aren’t quite sure exactly what you want done, a GUI image editor is generally easier.
When you need to edit more than 10 images… or automate your editing (as part of some application/website) then a CLI interface to do these things is beautiful.
If you know exactly what you want done, regardless of file count, then a CLI might be easier. Like @technomancy said, or stripping JPEG metadata or something.
So I think it has a lot more to do with what you are doing and why, as to the CLI being better or not for editing of images, movies, etc.
When you need to edit more than 10 images… or automate your editing (as part of some application/website) then a CLI interface to do these things is beautiful.
Interesting. Do you have any suggestions for a tool to do these things that is more amenable to the task of on-the-fly image editing than, say, GIMP or Photoshop?
I use python’s pil/pillow[0] for automation stuff, and for CLI imagemagick[1].
Both are great ways to mess around with images. There is ffmpeg[2] and vlc[3] for video, etc.
As I mentioned before, none of these CLI tools are particularly great for on-the-fly, not-sure-what-you-want-done sorts of tasks. But they are great tools to have in your toolbelt for automation or repetition.
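A tiny example of the kind of repetitive job where this shines (an illustrative sketch, not from the thread): strip metadata and downscale a folder of JPEGs with ImageMagick.
mkdir -p resized
for f in *.jpg; do
    # -strip drops EXIF/metadata; the trailing '>' means only ever shrink, never enlarge
    convert "$f" -strip -resize '1600x1600>' "resized/$f"
done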
One thing I’ve been experimenting with – very preliminarily, just for personal use, and not entirely in the terminal, to add some caveats – is moving some of my browser usage out of the browser. I don’t really like any of the textmode browsers, but when there is some kind of alternative API, you can interface with things in non-browser ways. For example, two websites I use very frequently are Wikipedia and OpenStreetMap, and they both have APIs.
For various reasons most of the experimentation with alternative interfaces seems to be on mobile. Wikipedia has an official app (which loads faster and has a nicer UI than the browser version), and there are various OSM apps. I don’t necessarily want to write full-fledged native apps on the desktop, but it’s nice that I have the option to bypass the browser layer if I want to look up a Wikipedia article by name and then display the text somewhere.
I have been doing the same, but on the terminal. There’s dict for dictionary lookups, translate-shell for Google Translate (and a few other translation services), I wrote my own scripts to get DuckDuckGo and Google search results, Wikipedia summaries, and I use rtv pretty often.
I still have a browser open most of the time, but just having them be an option is pretty nice - it cuts down on unnecessary searches, I have not needed to write translate.google.com in years, and it makes even very underpowered machines usable.
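For example, the Wikipedia summary case can be a one-liner against the REST API (a sketch assuming curl and jq are available; the article title is arbitrary):
curl -s "https://en.wikipedia.org/api/rest_v1/page/summary/OpenBSD" | jq -r '.extract'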
I don’t necessarily want to write full-fledged native apps on the desktop, but it’s nice that I have the option to bypass the browser layer if I want to look up a Wikipedia article by name and then display the text somewhere.
You could throw Tcl/Tk at the problem. Put a search bar at the top, add an htmllib widget, and bask in the HTML 2.0, browserless goodness.
This is a very well-written article. I really dislike GitHub and hate working with it. Especially for small changes, it is very cumbersome to open a pull request and deal with all the kitchen-sinking. I much prefer just sending a patch to a mailing list, where it can be discussed and merged.
Isn’t code browsing of the pull request on the web much more convenient than applying the patch locally? I’ve used both GitHub and GitLab pull request flows on a lot of commercial products, and it would’ve been a pain to go through the email process.
TBH, I can’t remember the last time I used email aside from automatic notifications and some headhunters (most already prefer LinkedIn and messengers anyway).
Here’s a video which demonstrates the email-based workflow using aerc. This isn’t quite ready for mass consumption yet, so I’d appreciate it if you didn’t repost it elsewhere:
Wow… I’ve known for years that the git+email workflow was behind a more distributed model for open source development, but all my experience is on github and so making the switch has felt too difficult. This article and this video together make me feel compelled (inspired?) to give it a go. aerc looks amazing.
There are only a small handful of installations in the wild, so you might run into a few bumps. Shoot an email to ~sircmpwn/sr.ht-discuss@lists.sr.ht or join #sr.ht on irc.freenode.net if you run into issues.
Having tried to record a few casts like this, I know how hard it is to do a take with few typos or stumbling on words. Well done.
Using a terminal emulator in the client is a cool idea :). I usually use either VS Code or Sublime Text, though this gives me an idea for a terminal ‘editor’ that just forwards to a GUI editor, but displays either a message, or mirrors the file contents.
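A minimal version of that “forwarding editor” could just be a wrapper script that blocks until the GUI editor is done with the file (a sketch; the --wait flags are the ones VS Code and Sublime Text ship with):
#!/bin/sh
# hand the file(s) to a GUI editor and wait, so callers like git treat it as a real editor
code --wait "$@"      # VS Code
# subl --wait "$@"    # or Sublime Text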
I have done the same thing albeit differently in my client[0]! I don’t have any casts handy, but the whole patch apply, test, HEAD reset, branch checkout/creation is handled by the client. I’ve also started patchwork integration.
[0] https://meli.delivery/ shameful plug, because I keep posting only about this lately. Guess I’m too absorbed in it.
This is a very well-written article. I really dislike GitHub and hate working with it. Especially for small changes, it is very cumbersome to open a pull request and deal with all the kitchen-sinking. I much prefer just sending a patch to a mailing list, where it can be discussed and merged.
I think a bridge would be nice, where emails sent can become pull requests with no effort or pointless github ‘forks’.
Worse yet, barely anyone remembers that Git is still a “real” distributed version control system and that “request pull” exists - and, yes, you didn’t mean to say “pull request”. The fact that GitHub called their functionality a “pull request” is somewhat annoying as well.
Edit: I’m glad the article mentions this in the P.S. section - and I should really read the entire article before I comment.
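For reference, the built-in form looks roughly like this (URL and refs are placeholders):
# prints a summary and diffstat of my-feature since v1.0 that you can paste into an email
git request-pull v1.0 https://example.com/me/project.git my-feature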
Maybe I’m old and bitter.. but I have serious concerns about how we as a community can get captured by Microsoft via things like GitHub and their Citus Data purchase. “We” struggled to keep up with free implementations of things like CIFS and now some popular open source resources are under Microsoft’s control.
We risk that all the people able and willing to do important work are all tied up on Microsoft products and don’t have the energy or legal freedom to work on open source.
I think we should be extremely careful. For many people e-mail means Google Mail, search means Google search, social network means Facebook/Instagram/WhatsApp.
It is not inconceivable that GitHub becomes synonymous with development, especially with the strong backing of Microsoft. Network effects are extremely strong and I think we are already at a point where a lot of (newer) developers don’t know how to do code reviews outside GitHub PRs, only consider putting their open source projects on GitHub for fear of missing out on contributions, and/or put their projects on GitHub since it gives the largest opportunity to get stars, which are good for their resume/careers.
This trend of tying more and more things from GitHub into GitHub makes things worse, since additions to GitHub are not a level playing field anymore. GitHub can make all the APIs that they need, 3rd parties have to use whatever APIs GitHub chooses to make available.
We should try to make more and more projects available through sr.ht, GitLab, and other ‘forges’ to ensure that there are healthy and viable alternatives.
I hesitate to reply since I don’t have much to say that goes beyond “me too”, but in this case I think the importance of the subject merits a supportive response anyway. I very much agree with these concerns and would like to thank everyone who’s raising them.
but the alternatives I know of are even worse. sourcehut doesn’t even offer HTTPS push:
Date: Fri, 16 Nov 2018 14:07:39 -0500
From: Drew DeVault <sir@cmpwn.com>
Subject: Re: Welcome to sr.ht!
On 2018-11-16 1:04 PM, Steven Penny wrote:
> I would prefer to write over https not ssh, is it possible
This is deliberately unsupported - SSH is more secure.
We risk that all the people able and willing to do important work are all tied up on Microsoft products and don’t have the energy or legal freedom to work on open source.
Is this risk related to GitHub Sponsors in any way?
I don’t remember SourceForge relying on network effects that much though. Sure, the source and releases were there, but I don’t think all of the development activity was tied up to it, was it?
It was also an all-in-one platform, and people who learned to contribute to one project could translate that knowledge to other projects.
At the time there were far fewer integrations between services and at least an order of magnitude fewer developers, so it doesn’t translate 1:1.
One advantage GitHub has is all the special treatment for tooling, but other than that I don’t see the network effect being too strong. Developers are the best equipped to escape. Projects are still independent from each other and it’s easy to migrate projects to GitLab if necessary. In fact, they must have seen a lot of projects leave already after the Microsoft acquisition and I bet they are being extra careful, which is good for us :)
Needless to say, SSH is no longer exposed to the general internet. We are rolling out a VPN as the main access to dev network
I see this often with SSH, RDP and it baffles me. It’s as if people think VPN services cannot have security bugs, be bruteforced or otherwise abused. I have dismantled several VPN solutions that were ‘protecting’ much safer services.
Bastion hosts, however, are a fine way of reducing the attack surface, and users can have one key for the bastion hosts and another key for the internal services they need. The ProxyJump feature is far too overlooked.
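For anyone who hasn’t used it, it’s a one-flag affair (host names here are made up):
ssh -J bastion.example.com app1.internal      # one-off jump through the bastion
# or persistently, in ~/.ssh/config:
#   Host *.internal
#       ProxyJump bastion.example.com
#       IdentityFile ~/.ssh/id_ed25519_internal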
This is my favorite line of their docs and good on someone for doing it.
One feature I would really love to see, though, is adding HTTP headers outside the HTML document (like how Netlify does it), as there are certain things you can’t add to the document, like X-Frame-Options I think. It can simplify your build phase since there are fewer things to bake into the HTML, files are smaller, and you can get a head start on preload/prefetch/preconnect links because the document doesn’t have to finish downloading and have its <head> parsed first.
I haven’t heard this before. Why is Cloudflare’s reverse proxy bad?
Cloudflare users are getting man-in-the-middled by Cloudflare, for technical reasons.¹ Because they already have ~25% of internet traffic as customers,² they’re in a unique position to do cross-site tracking without any cookies or complex fingerprinting techniques. Of course, being able to do something does not mean that they are doing it. But history has proven that when companies and governments are able to do something, they do it.
Cloudflare also contributes to the reduction of privacy. They maintain a list of IPs of Tor exit nodes. They force Tor users to solve a captcha for every page, or allow site operators to block the country “Tor” directly.
¹. Cloudflare will install captchas on your website during a DDoS, to limit access to legitimate users and weed out bots.
². This is my guess-timate
Cloudflare runs a large part of the internet to “protect” sites from DDoS attacks – but they also host the very same DDoS webshop sites, where you can ruin someone’s business for the price of a cup of coffee. There have been thousands of articles about this.
Adding to that, Cloudflare has gone down and we saw a massive chunk of the net just fail because of a single point of failure. There’s also a ton of hCaptcha sudokus to solve for Cloudflare, for free, for the privilege of seeing the site behind it if you’re using Tor, a VPN service, or just live in a non-Western country. Then, as a result, they suggest you use their DNS and browser extension to ‘help’ with the situation and further collect even more data on users.
(For a while I made a living doing version control and configuration management in a high-assurance environment.)
I recall when Stefan first announced this. I was very happy that it was him, because I knew he was involved with other version control systems, so he wouldn’t just wade in and make silly mistake after mistake. I was right.
I am looking forward to got taking over the world.
I feel like this kind of information belongs in the issue tracker, not the version control system.
Doing so might also help ensure documentation is updated correctly.
Things belonging “somewhere else” is a good strategy for minimizing complexity.
And in excess, a good way to make sure nobody ever actually goes back and learns about the things you reference. Think of inline Wikipedia Citation needed versus a reference number.
I’m not sure why you would want to run this on OpenBSD rather than on an OS with a better storage layer (a proper LVM, ZFS, anything) that would give you much better performance, operations, and reliability guarantees.
“Technology exists” is a bad reason to avoid developing more technology.
Variation in a server fleet is not entirely free.
For me, the history of privsep and now pledge() and unveil() is worth a lot. I don’t encounter performance, operations or reliability issues with my OpenBSD systems.
Has anyone found a nice pattern for where to create these additional directories? Like the author I create them 1 directory above but it pollutes this space that holds directories for many other projects…
Something like the following I guess should be melded into my head:
i like to think of a project as more than just code, it has notes, references, sample data, blueprints and what not. so i start by creating a directory for the project and have all the code (worktrees) reside in one corner of this directory:
so i suppose it starts with:
I really like this 🙂 Stealing!
I completely agree here, but I am curious why you then don’t think that all of these other things belong in your revision control system.
What I do is clone the project into a directory named after the default branch, i.e. main, inside a directory named after the project. That way, when I check out a worktree one directory above, it is on the same level as the main branch.
I usually have one worktree per project. E.g. projectA/linux, projectB/linux, etc.
I’ve been initializing repositories from a bare clone for a few years now. The trick is to clone the git directory under, well, .git.
With this strategy, you also get a simple git integration with any file manager, i.e. directories correspond to branches and worktrees are not nested.
Unfortunately, Git currently writes an absolute path to $out/.git in the worktree. That can be fixed by manually editing worktree/.git to refer to gitdir: ../.git.
I like this idea but I have trouble updating the bare repository. A plain git fetch won’t fetch new branches from the remote, and in a worktree something like git switch --track <remote>/<branch>, which I normally use in non-worktree setups to switch branches, is not available.
You could steal the suggested layout from Subversion:
Replace trunk with “master” or “main” as needed.
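For anyone who hasn’t seen it, the classic Subversion layout being referred to is roughly this (project name and branch names are placeholders):
project/
    main/          # what Subversion calls trunk
    branches/
        fix-foo/   # one worktree per branch
    tags/
        v1.0/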
I use $project for the main branch and then $project-$branch in the same parent directory. Also, I’ve noticed that git worktrees don’t work exactly the same as a regular repo. For instance, you can’t switch a worktree to an already checked-out branch, for some reason.
I struggle to understand the root cause the author is working around. If a Makefile says foo depends on bar.c, baz.c and honk.c, does foo not get built when it’s appropriate?
To use the caching across multiple (ephemeral) build agents.
“When appropriate” means “if bar.c, baz.c or honk.c have changed since foo was last built, then foo needs to be rebuilt.” But, how do you test “have changed since foo was last built” if modification dates aren’t reliable? Answer: content change detection.
Nice. This makes me wish flyctl would build and run on OpenBSD. I can see how it has its uses.
This is wonderful. I envy the dedication to long term projects like this.
I do too, but I can’t ignore the “too much free time” thoughts either. 🤣
jcs runs his own business in a manner that lets him hack on OpenBSD and whatever other projects he wants to whenever he wants. This should be applauded and emulated, not dismissed with a capitalistic “too much free time” handwave.
Wow, I felt a good dose of self-righteousness here, and you also managed to veer the conversation toward whatever ideology you identify with, which is amazing.
I know who jcs is and what he stands for, and I’m both a fan and a very happy customer. Still, “too much free time 🤣” is a valid comment about this feat.
I apologize if my comment(s) rubbed you the wrong way. I’ll refrain from further comments as it’s way off-topic. Feel free to educate me about capitalism in private messages. Have a nice day.
I think pledge(), although convenient, is not a very good design as a syscall. Linux’s seccomp is more general and flexible, a good example of separating mechanism from policy. And with good tooling it isn’t difficult to use. The author even implemented pledge() using seccomp.
This is why there are many more pledge()’d programs in OpenBSD than seccomp applications in Linux. Because one is easier and safer to use than the other.
Ooh, you are hitting one of my pet offline-first fantasies here: Email (and mastodon posts) should be fetched in the background and stored on disk. I want to read and post as I please and it should flow out when I am back online.
I am looking forward to seeing what you come up with.
Personally, I struggle if my brain is drained but my body is not: then I need to go for a brisk walk. If I don’t, it impacts my sleep, appetite, mood, etc. But because my brain is tired, I have to force myself to go.
If only my body is tired I can easily read something interesting, play a game or watch a movie and still get a good sleep. I also heavily prioritize my daughter, girlfriend and nerd friends.
There seemed to be quite a bit of interest when I mentioned what I was doing in the What are you doing this week? post, so I thought I would polish off the blog post sharpish and get it shared!
I don’t normally write this much about a topic, but I had to stop myself from going too far into the reeds for this one. If there is a specific subject that people find interesting, I could deep dive though.
If you ever felt like going far off in the reeds, I would click that link so hard. Did you tweak kern.bufcachepercent or whatever it’s called to keep YAML files in memory?
Thanks for the encouragement. I haven’t needed to change it from the default, mainly because the data volumes I have right now are relatively small.
One of my hobby projects is a clone of Lobste.rs written in this stack and while it is functional it isn’t ready for an onslaught of traffic yet.
Memory usage is of course something that gets constant attention though, so if it transpires I need to up kern.bufcachepercent then the option is there like you say.
The youtube link is broken because I moved to vimeo when youtube started unconditionally showing ads on all videos. Here’s the vimeo link
edit: never mind, I figured out how to get the embed code for vimeo and updated the article itself to have the new video link.
edit2: this article (and the syntax highlighting, formatting, etc) is the main reason I have been dragging my feet to redo my personal web site. I want to avoid breaking this page. it’s encouraging to see that this is not in vain, and that it is worth preserving. cheers
I remember seeing this article around the time it was published and I think I wrote to you. It’s still one of the most impressive write-ups I have seen.
I use rsync.net for off-site backups.
This is a great review, even setting OpenBSD aside! In particular:
This is huge. I switched from Mac back to PC last year and bought an Alienware 17 R5 because I wanted the big display and scissor keyboard with lots of travel and good feedback. The laptop is overall really, really nice but… the built-in speaker is AW-FUL.
Why do so many PC makers think they can cheap out on this? You’re paying serious $$$ for a machine; is a decent-sounding speaker too much to ask?
I am the complete opposite: I plug headphones in when I want sound. I hate that laptops ship with speakers at all.
You don’t EVER want to fill your room with the sound of the game you’re playing or the music you’re listening to?
Fascinating, captain :)
Can relate. The only 2-3 times I’ve ever used the laptop speakers was on business trips when I was alone in my hotel room and didn’t want to use headphones… and ofc when I didn’t have speakers with me.
So, that’s my point.
Mac laptop speakers are of good enough sound quality that in ~15ish years of using Mac laptops I never felt the need for external speakers. It’s that good.
Given that the tech exists, I’d expect PC laptop makers to do the same.
Guess we have to agree to disagree then. Never owned any MacBook, but whenever someone was playing music in the office from a MBP.. I’d say Lenovo is 2-3/10 and a MBP is maybe 5-6/10. I never thought “Wow, this sounds awesome” - more like “Wow, this is only 40% terrible” ;)
NB: My main machine has never been a laptop, so I’m really used to a 5.1 system or big speakers or headphones.
No disagreement really!
Ah, THIS makes perfect sense!
If you’ve got your ears/brain tuned to the sound of a full bore 5.3 setup, no laptop speaker is ever going to cut it.
Been thinking about buying a SMALL 5.3 system for my home office, but we’re super crunched for space and the center speakers all seem beastly big.
Graphical browsers have the benefit of presenting a readable layout. Most webpages are difficult to navigate in w3m, unless they are extremely barebones. Looks like w3m is not fully pledged, either.
I live in the terminal for most non-browser things and being able to SSH from my wimpy laptop/desktop to a beast of a server is my killer feature.
Editing images, audio and video from the commandline is something I rarely do but I have seen people do it.
I’m not sure command line tools are capable of editing images and video in any useful or meaningful sense. Adjusting color, touching up areas, checking focus, etc. are innately visual things to do, and by their nature don’t lend themselves to bulk processing and automation in most situations.
Depends on the edit, doesn’t it? If you have two images and you just want to place one next to the other, that’s easy. If you have an audio file and you want to splice out the section from 1:30 to 1:45 and cut the volume on everything after 3:00 to 50%, you don’t need anything visual for that.
I didn’t mention audio, and it does seem more amenable to command line processing than images and video because it doesn’t really have a visual component.
For images and video it does depend somewhat on what the task is. Certainly specific edits lend themselves to command line tools, but it’s such a small subset of possible edits I don’t think it’s very useful.
The image editing I’m most familiar with is photo touch-up and RAW conversion, and even the most basic adjustments like white balance or exposure would be incredibly tedious and error prone with command line tools.
In a sense the whole concept of image editing is pointless without a way to view the results anyway.
I agree with you, for an image here or there, when you aren’t quite sure exactly what you want done, a GUI image editor is generally easier.
When you need to edit more than 10 images… or automate your editing (as part of some application/website) then a CLI interface to do these things is beautiful.
If you know exactly what you want done, regardless of file count, then a CLI might be easier. Like @technomancy said, or stripping JPEG metadata or something.
So I think it has a lot more to do with what you are doing and why, as to the CLI being better or not for editing of images, movies, etc.
Interesting. Do you have any suggestions for a tool to do these things that is more amenable to the task of on-the-fly image editing than, say, GIMP or Photoshop?
I use python’s pil/pillow[0] for automation stuff, and for CLI imagemagick[1].
Both are great ways to mess around with images. There is ffmpeg[2] and vlc[3] for video, etc.
As I mentioned before, none of these CLI tools are particularly great for on-the-fly, not-sure-what-you-want-done sorts of tasks. But they are great tools to have in your toolbelt for automation or repetition.
0: https://pillow.readthedocs.io/
1: https://imagemagick.org/script/command-line-tools.php
2: https://ffmpeg.org/
3: https://www.videolan.org/
I feel like the ideal here is a GUI tool with an internal shell - not its own language, just several new shell builtins to handle the work.
One thing I’ve been experimenting with – very preliminarily, just for personal use, and not entirely in the terminal, to add some caveats – is moving some of my browser usage out of the browser. I don’t really like any of the textmode browsers, but when there is some kind of alternative API, you can interface with things in non-browser ways. For example, two websites I use very frequently are Wikipedia and OpenStreetMap, and they both have APIs.
For various reasons most of the experimentation with alternative interfaces seems to be on mobile. Wikipedia has an official app (which loads faster and has a nicer UI than the browser version), and there are various OSM apps. I don’t necessarily want to write full-fledged native apps on the desktop, but it’s nice that I have the option to bypass the browser layer if I want to look up a Wikipedia article by name and then display the text somewhere.
I have been doing the same, but on the terminal. There’s dict for dictionary lookups and translate-shell for Google Translate (and a few other translation services), I wrote my own scripts to get DuckDuckGo and Google search results and Wikipedia summaries, and I use rtv pretty often.
I still have a browser open most of the time, but just having them be an option is pretty nice - it cuts down on unnecessary searches, I have not needed to write translate.google.com in years, and it makes even very underpowered machines usable.
You could throw Tcl/Tk at the problem. Put a search bar at the top, add an htmllib widget, and bask in the HTML 2.0, browserless goodness.
are there fully pledged graphical browsers?
This is a very well-written article. I really dislike GitHub and hate working with it. Especially for small changes, it is very cumbersome to open a pull request and deal with all the kitchen-sinking. I much prefer just sending a patch to a mailing list, where it can be discussed and merged.
Isn’t code browsing of the pull request on the web much more convenient than applying the patch locally? I’ve used both GitHub and GitLab pull request flows on a lot of commercial products, and it would’ve been a pain to go through the email process.
TBH, I can’t remember the last time I used email aside from automatic notifications and some headhunters (most already prefer LinkedIn and messengers anyway).
Here’s a video which demonstrates the email-based workflow using aerc. This isn’t quite ready for mass consumption yet, so I’d appreciate it if you didn’t repost it elsewhere:
https://yukari.sr.ht/aerc-intro.webm
Wow… I’ve known for years that the git+email workflow was behind a more distributed model for open source development, but all my experience is on github and so making the switch has felt too difficult. This article and this video together make me feel compelled (inspired?) to give it a go. aerc looks amazing.
Consider giving sourcehut a try, too :)
Thanks for making sourcehut!
I took a look a few times but I can’t seem to find any obvious documentation to setup and configure it. I fully acknowledge that I may be blind.
I assume you don’t want to use the hosted version? If you want to install it yourself, instructions are here:
https://man.sr.ht/installation.md
There are only a small handful of installations in the wild, so you might run into a few bumps. Shoot an email to ~sircmpwn/sr.ht-discuss@lists.sr.ht or join #sr.ht on irc.freenode.net if you run into issues.
Having tried to record a few casts like this, I know how hard it is to do a take with few typos or stumbling on words. Well done.
Using a terminal emulator in the client is a cool idea :). I usually use either VS Code or Sublime Text, though this gives me an idea for a terminal ‘editor’ that just forwards to a GUI editor, but displays either a message, or mirrors the file contents.
I can’t do this either :) this is edited down from 10 minutes of footage.
I have done the same thing albeit differently in my client[0]! I don’t have any casts handy, but the whole patch apply, test, HEAD reset, branch checkout/creation is handled by the client. I’ve also started patchwork integration.
[0] https://meli.delivery/ shameful plug, because I keep posting only about this lately. Guess I’m too absorbed in it.
Ah, it’s this! I filed an issue because there doesn’t appear to be any public source, and I wanted to try it.
I think locally you can script your workflow to make it as easy as you want. However not many people take the time to do this (I haven’t either).
I think a bridge would be nice, where emails sent can become pull requests with no effort or pointless github ‘forks’.
Worse yet, barely anyone remembers that Git is still a “real” distributed version control system and that “request pull” exists - and, yes, you didn’t mean to say “pull request”. The fact that GitHub called their functionality a “pull request” is somewhat annoying as well.
Edit: I’m glad the article mentions this in the P.S. section - and I should really read the entire article before I comment.
Maybe I’m old and bitter.. but I have serious concerns about how we as a community can get captured by Microsoft via things like GitHub and their Citus Data purchase. “We” struggled to keep up with free implementations of things like CIFS and now some popular open source resources are under Microsoft’s control.
We risk that all the people able and willing to do important work are all tied up on Microsoft products and don’t have the energy or legal freedom to work on open source.
I think we should be extremely careful. For many people e-mail means Google Mail, search means Google search, social network means Facebook/Instagram/WhatsApp.
It is not inconceivable that GitHub becomes synonymous with development, especially with the strong backing of Microsoft. Network effects are extremely strong and I think we are already at a point where a lot of (newer) developers don’t know how to do code reviews outside GitHub PRs, only consider putting their open source projects on GitHub for fear of missing out on contributions, and/or put their projects on GitHub since it gives the largest opportunity to get stars, which are good for their resume/careers.
This trend of tying more and more things from GitHub into GitHub makes things worse, since additions to GitHub are not a level playing field anymore. GitHub can make all the APIs that they need, 3rd parties have to use whatever APIs GitHub chooses to make available.
We should try to make more and more projects available through sr.ht, GitLab, and other ‘forges’ to ensure that there are healthy and viable alternatives.
I hesitate to reply since I don’t have much to say that goes beyond “me too”, but in this case I think the importance of the subject merits a supportive response anyway. I very much agree with these concerns and would like to thank everyone who’s raising them.
I would love to ditch GitHub as:
it’s been ugly for 2 years now https://twitter.com/mdo/status/830138373230653440
it’s been bloated for several years
it’s closed source https://github.com/github/pages-gem/issues/160
but the alternatives I know of are even worse. sourcehut doesn’t even offer HTTPS push:
GitLab doesn’t offer contributions in the last year:
https://gitlab.com/gitlab-org/gitlab-ce/issues/47320
and their commits use… shudder infinite scrolling:
https://gitlab.com/gitlab-org/release/tasks/commits/master
sourcehut supports HTTPS cloning but only SSH pushing
Corrected, thanks - I want HTTPS clone and push - seems silly to offer only one
Is this risk related to GitHub Sponsors in any way?
GitHub is popular now. If they start abusing their power too much then there is plenty of competition.
Since you mention you’re old, do you remember when SourceForge was great and all the developers would host their projects there?
I don’t remember SourceForge relying on network effects that much though. Sure, the source and releases were there, but I don’t think all of the development activity was tied up to it, was it?
SourceForge also provided mailing lists and that was probably the primary code review and support channel for many projects.
SourceForge also had an issue tracker. It was a headache to migrate. For example, the Python project wrote custom tooling to migrate SourceForge issues.
It was also an all-in-one platform, and people who learned to contribute to one project could translate that knowledge to other projects.
At the time there were far fewer integrations between services and at least an order of magnitude fewer developers, so it doesn’t translate 1:1.
One advantage GitHub has is all the special treatment for tooling, but other than that I don’t see the network effect being too strong. Developers are the best equipped to escape. Projects are still independent from each other and it’s easy to migrate projects to GitLab if necessary. In fact, they must have seen a lot of projects leave already after the Microsoft acquisition and I bet they are being extra careful, which is good for us :)
Agreed. This should be obvious and I’m surprised people who care about free software are giving GitHub any attention at all.
And our battle cry will be “Remember Stacker”.
I see this often with SSH, RDP and it baffles me. It’s as if people think VPN services cannot have security bugs, be bruteforced or otherwise abused. I have dismantled several VPN solutions that were ‘protecting’ much safer services.
Bastion hosts, however, are a fine way of reducing the attack surface, and users can have one key for the bastion hosts and another key for the internal services they need. The ProxyJump feature is far too overlooked.
I’m not sure I understand the idea here.
We are supposed to generate unencrypted keypairs and leave the private keys floating around on our systems in the hopes of catching SSH key abuse?