The obvious question is why freenode was never registered as a charity. Remember: never donate to organizations not regulated as charities in their place of incorporation.
Libera, the new organisation, is registered under a Swedish non-profit.
For what it’s worth, neither libera nor freenode ever took cash donations etc. from normal users.
It sustained freenode for 20 years and was never the limiting factor. When you host dozens of large communities like Fedora, Gentoo, Python, etc., there are plenty of responsible and generous donors when it comes to getting the few, small servers that IRC requires.
Agreed. Let’s all take note, this is why we have nonprofits: so we can codify the sorts of arrangements people build to operate Communal Things Involving Money. Person Foo may start a fight and chase off Person Bar, and Person Baz may start neutral but then get pissed off and leave such a toxic environment… But a nonprofit provides a framework to make sure things can actually keep operating in a sane fashion. Otherwise you end up with Foo needing a lawyer to get the domain name, Bar needing to be hunted down and asked for the server passwords, and Baz accidentally being left as primary contact on the donation-linked bank account for three years. I speak from personal experience here.
It was, for a long-ish time, registered as a charity as the Peer-Directed Projects Center (PDPC). IIRC they dissolved that as the legal overhead was significant.
It’s worth noting though that (as of yet, hope it stays that way) you do not need to use their cloud offerings or even have an account with them. It may seem simpler for some use cases, but any cloud offering inherently involves you placing data under the control of a third party. They do try to push this on people though (as do most other vendors, too…), which I don’t like all that much. Being in control of my own data and infrastructure is important to me.
You can host your own controller, both on the internet or locally, or use their “setup app” which AFAIK emulates just enough of a controller on your phone to set up a single AP with a simple config. Once configured, you can even switch the controller off and let them run autonomously.
You should also not (I hope this does not need saying, but saying it anyway) expose your access points directly to the internet (allowing them to be accessed from outside of your network). UBNT also sells other HW (like security cameras) where this is more commonly done, but even there I’d advise against it. Use a VPN if you need direct access. I know of no single IoT/HW vendor with a “clean” or even “acceptable” security track record.
Just as a precaution I would recommend putting some form of additional authentication (eg. basic auth, which surprisingly even works with their L3 controller) in front of the controller when hosting it in a publicly accessible place though. It’s likely not the greatest piece of software either, as evidenced by its weird dependency requirements…
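To illustrate, a minimal sketch of what I mean, assuming nginx sits in front of a controller on its default port 8443 (hostnames and paths are examples; certificate directives and the proxy headers the controller’s WebSocket UI needs are omitted):

    # Create a credentials file for basic auth (htpasswd comes with apache2-utils)
    htpasswd -c /etc/nginx/unifi.htpasswd someuser

    # Minimal reverse-proxy fragment gating the controller behind basic auth
    cat > /etc/nginx/conf.d/unifi-proxy.conf <<'EOF'
    server {
        listen 443 ssl;
        server_name unifi.example.org;

        location / {
            auth_basic           "controller";
            auth_basic_user_file /etc/nginx/unifi.htpasswd;
            proxy_pass           https://127.0.0.1:8443;
        }
    }
    EOF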
All in all I’m not saying UBNT can do no wrong, but there are certainly worse offerings on the market regarding both security and features.
You should also not (I hope this does not need saying, but saying it anyway) expose your access points directly to the internet (allowing them to be accessed from outside of your network).
Yep absolutely!
So I saw in the HN discussion that you now need cloud authentication even for self-hosted software.
I will admit I was confused by this, because I know others that haven’t had to do cloud auth for their self-hosted setups, so maybe it’s only required with the latest update?
I got a USG on discount to play around with, and it didn’t seem like you need to do cloud auth when I set it up a few weeks ago, though they really push you towards it. I think you have to click “Advanced Setup”, choose “Local account”, and not enable remote access or something like that.
I don’t know about the Cloud Key specifically (I mean, it says ‘cloud’ right on the box), but last I checked and updated you could still download the controller installation packages directly, both from their download site as well as their apt repositories. Running that did not require any cloud setup. Hope that didn’t change :(
The hardware is still really quite good. I’m using a pair of U6-LRs at home (one in the attic, one in the basement) with a couple of US-8-60W switches, and the controller on a Raspberry Pi, and the coverage and performance are fantastic. Router is an EdgeRouter-4, which is Ubiquiti but not UniFi, so it doesn’t play with the same configuration tools, but I’m really happy with what it gives me too (rock solid, the config tree stuff really works, but also it’s a “real” linux system) so I’m not touching it.
As with others here, I’m not using cloud management — it’s a pretty damn cool feature in some scenarios but I don’t have a need for it.
I have outfitted both my place (an apartment) and my family’s (a multi-story house), as well as some customer sites, with Ubiquiti APs. I use them in an “L3 controller” setup where the Ubiquiti controller runs on a VM in my colocation rack, and all the APs report (via the internet) to it for their updates, configuration, etc. As far as I know, the recent “UBNT is serving ads” discussion only concerns the “hosted controller” variant (where they host the controller for you).
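For reference, this is roughly how an AP ends up reporting to a controller that is not on its LAN (from memory; the hostname is an example, and credentials/ports may differ on current firmware):

    # SSH into the factory-default AP (stock credentials used to be ubnt/ubnt)
    ssh ubnt@192.168.1.20

    # On the AP, point it at the controller's inform endpoint, which
    # traditionally listens on TCP 8080 (older firmware wants this inside mca-cli)
    set-inform http://unifi.example.org:8080/inform

    # The AP then shows up as pending adoption in that controller's site.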
The access points are grouped into “sites” by household, so I can create dedicated accounts for people that may need to change the Wifi password once in a while. They only do WiFi, routing is done separately (and with varying degrees of sophistication). The central management allows me to schedule and perform firmware upgrades which might otherwise get ignored (once installed, most people never look at their infrastructure again, nor do they want to). This is something most “controller-based” systems can do, though, not especially specific to UBNT.
For installation I did a basic site survey for AP placement, laid CAT6 to good locations for the APs and installed a central PoE switch to power them. Works fine so far.
The AC Lite hardware is decent for the price and manageability; the pricier models can be worth it if your clients support newer standards and your use case supports/requires higher transfer rates (i.e., a fast internet connection or lots of local traffic).
In general, I’d recommend running dedicated wiring for APs over any sort of mesh solution.
For me “repairability” (and availability of parts, both used and third-party) is one of the strongest arguments to use “older” hardware. Though “old” in this case is far from retrocomputing, more in the “not upgrading my 2010-2012 era systems anytime soon” sense. I also do like retrocomputing, but more for the fun and interest of it, not for production work.
My daily drivers are a set of T420s which I’ve opened up, swapped parts from, exchanged things, flashed the BIOS, installed a new CPU (it’s one of the last models with a socketed CPU), etc. Parts (and full replacement units) are plentiful and cheap, and it’s actually possible to do things. Speed is alright for my workload (mostly writing/compiling C and “normal” business email/web stuff, some VMs), and in my opinion, changing to a newer model would lose me more in convenience and opportunities than it would gain me. Not to mention it also saves plenty of money and, in a more remote sense, the environment by preventing e-waste.
I have little interest in my computers being “super-slim” at the cost of not being able to swap hard disks, RAM or batteries, and that seems to be what the majority of current offerings are optimizing for (at least in the mobile computing space).
Definitely agree. My “best” computer is a custom-built PC from around 2010. A decade later, it still plays most modern video games without many problems, and I have the freedom to upgrade it gradually in the future, if I ever need to.
Most things that represent me online, and everything I don’t want to lose and/or don’t want to entrust to any third party.
I moved everything (well, except the redundancies) to a private rack in a colocation data center at the beginning of 2020, after years of hosting most of the same things on a bunch of rented VPSes (some of which I still keep around for backup/redundancy purposes). Since my “daily driver” machines are on the same OS as all my servers, maintenance and interoperability is not really a problem (I update them all mostly in sync).
The important things in the setup are the mail (2 MXes) and web servers, in addition to source control (git) and backup storage as well as realtime communication (mostly IRC for me).
So an incomplete list would be:
Communication
  IRC (irssi in tmux)
  Mail
  VPN/SSH tunnels
Web
  Personal homepage(s)
  Inventory system
  Link/Bookmark management
  Image sharing
  Text sharing / Pastebin
  Business websites / Project websites
Source control (git)
Backup (rsync)
VMs for third party tools
  Unifi controller (wifi)
Additionally, there’s a NAS at home for local video streaming and additional backup storage.
I might have a particularly serious case of NIH syndrome; a rather large amount of the tooling I self-host is also written by me.
I’ve had the feeling that something is up with Qt for some time now, mostly due to the increasing focus on driving people into buying the corporate licenses…
When running the Qt installer, you’re asked to create an account. There used to be a button (non-obviously positioned, but it was there) to skip that, allowing you to install Qt without an account.
It seems that in recent “official” releases, that is no longer supported and you’re required to enter account details when installing.
I have taken to installing once and tar’ing up the resulting files for repeated installation, though that is probably against some point of the license agreement if done on a wider scale.
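For reference, the workaround is nothing more sophisticated than this (paths and versions are just examples):

    # After one interactive install, pack up the version tree for reuse
    tar czf qt-5.12.3.tar.gz -C "$HOME" Qt/5.12.3

    # On another machine, unpack it to the same location
    tar xzf qt-5.12.3.tar.gz -C "$HOME"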
I haven’t built with Qt on Windows since the Trolltech days. The last time I installed it on Mac via Homebrew (last summer, I think), it didn’t mention any account. (Sounds like that’d have been before they decided to force account creation upstream.) I’ve installed Fedora, Ubuntu and Manjaro packages from the respective system repositories for all of the open source Qt dev kit over the past 45 days. There was no mention of an account there either.
Is the installer you’re talking about mostly just used for Windows builds? Or is it packaging some closed source commercial trial version along with the open source edition?
I should add that I don’t do any of my own projects with Qt; there are just a few that I irregularly contribute to. So my experience comes only in the context of wanting to build/modify/debug/submit a patch every once in a while. Maybe people who do more regular/heavy OSS work with Qt need something in the upstream installer that I just wouldn’t.
It strikes me as a real head-scratcher that they’d take the publicity hit from requiring OSS users to create an account if most of them just use it from a package manager and never run across the requirement anyway, while at the same time I don’t see why an OSS user would need to touch the upstream installer, except possibly on Windows.
Is the installer you’re talking about mostly just used for Windows builds? Or is it packaging some closed source commercial trial version along with the open source edition?
Qt provides installers for the runtime including the open source components, headers, Qt Creator, etc. I use them to provide specific versions that may not yet (or no longer) be available via the system’s package repositories. The newer ones seem to require an account, the older ones didn’t (my test points being 5.9.3 and 5.12.3).
Working on and hopefully finishing the first presentable release of an RTP-MIDI backend for MIDIMonster. It’s already mostly finished save for the mDNS (“Bonjour”) discovery, which is somewhat finicky…
If I get frustrated by implementing Apple’s weird stuff, I might get around to finally writing an article on flashing coreboot onto my T420 and outfitting it with a newer Ivy Bridge Quadcore.
If you want to spare yourself the effort of creating system users & a system group for allowing git access (which can quickly get tedious as a function of the number of users), but also don’t want the complexity of gitolite or more sophisticated access control systems, try fugit.
It’s a pretty simple bash script that gets set as ‘forced command’ for SSH keys and does simple access control on a push/pull basis.
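To give an idea of the mechanism (this is not fugit itself, just a toy sketch of the forced-command approach; the script path and key comment are made up):

    # In ~/.ssh/authorized_keys of the shared git user, every key gets a
    # forced command carrying its access level:
    #   command="/usr/local/bin/git-gate read-only" ssh-ed25519 AAAA... alice

    #!/bin/bash
    # /usr/local/bin/git-gate - the client's actual request arrives in
    # SSH_ORIGINAL_COMMAND; $1 is the access level granted to this key.
    level="$1"
    case "$SSH_ORIGINAL_COMMAND" in
        "git-upload-pack "*)    # clone / fetch / pull - always allowed
            ;;
        "git-receive-pack "*)   # push - only for read-write keys
            [ "$level" = "read-write" ] || { echo "push denied" >&2; exit 1; }
            ;;
        *)
            echo "only git access is allowed" >&2
            exit 1
            ;;
    esac
    exec git-shell -c "$SSH_ORIGINAL_COMMAND"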
Very interesting websocket proxy project. The README.md is a good overview of the problem. I’ve used websockify to proxy vnc (novnc) to my home network. Does anyone know of any other websocket proxies out there?
There are a few to be found with the Github “websocket-proxy” tag, but most are simple affairs that disregard the differences laid out in the README (which is why I started working on websocksy in the first place).
noVNC works around these differences by introducing a buffering WebSocket abstraction on the JavaScript side, which allows proxy implementers to be somewhat lax about how they do the forwarding - however, other applications may not do so :)
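(For context, the websockify setup mentioned above typically boils down to a single command; addresses are examples:)

    # Listen for WebSocket connections on 6080 and forward the payload as a
    # plain TCP stream to the VNC server at home
    websockify 6080 192.168.0.10:5900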
Oh, I do this too (and I’m also German) :D It really is a good name for it.
I usually stick to a kind of internal guideline that variable names should not have fewer characters than the number of lines they are used on (with the rare exception of a loop index used so often it would make lines unreadably long), which has served me well over a few projects.
In my experience, mathematically trained programmers tend to like/use either really short or Greek names, most often directly adapted from the variable names used in papers.
Nice summary of a good README. One thing I’d add that really grinds my gears is projects not taking the time to add at least a single sentence explaining what the thing actually does, in a way that people who may not even be involved in the ecosystem at all can understand what problem it is trying to solve.
What most have is something along the lines of “Do what x does but with y”, with both blanks being libraries or concepts that people not involved in eg. Kubernetes, OpenStack or the newest JavaScript stuff might only know by another name, or which might be completely made up.
I try to do this with all my projects; just adding a succinct description I could give eg. my parents, plus two usage examples, does wonders for clarity in my opinion.
Not to be overly critical, but I’m not entirely sure what problems are solved by this - or, more accurately, why.
From the bullet points on the website I gather:
GPU rendering for scrolling
This is not a problem I ever had with any terminal emulator. Anywhere (well, except maybe Windows). I’ve had scrollback buffers be too small, but never had problems actually scrolling. And I have a lot of long-running terminal sessions.
Threaded rendering to minimize input latency
In my experience, threads don’t really do anything for non-multiplexed sources such as keyboards. And even for multiplexed sources, they mostly do more harm than good.
Not sure this is a good thing. (Insert XKCD about (n+1) Standards here)
tiling multiple terminal windows side by side […] without […] tmux
Which would probably (haven’t checked) require me to learn new keybindings.
Can be controlled from scripts or the shell prompt […]
Uh, it’s a terminal emulator. I would expect it can be.
[…] even over SSH
OK, why though?
[…] Kittens, [….] for example, […] Unicode input, Hints and Side-by-side diff.
I have a strong dislike of introducing unnecessary home-grown terminology and being all cutesy about it, but even so, if Unicode input is a plugin, I’m not sure about this.
startup sessions, […] working directories and programs to run on startup.
So, a shell. I don’t know, I thought this was a terminal emulator
Allows you to open the scrollback buffer […] for browsing the history comfortably in a pager or editor.
I’m pretty sure my needs are met with history, C-R and .zsh_history
It is most certainly a good thing; the terminal protocol has been stagnant, and until recently few terminals got off the couch to implement even basic, ancient features like cursor shaping.
Teaching users to have higher standards makes life easier for application developers, who must target the lowest common denominator. Kitty, iTerm2, mintty, libvte (gnome-terminal, termite, etc.), and libvterm (nvim, vim, pangoterm) have been raising the bar, so the lowest common denominator is now… higher.
Just like Internet Explorer 6 held back the web, stagnant terminals hold back terminal applications. urxvt is the IE6 of terminals. Isn’t it a bit ironic that iTerm2/Terminal.app (macOS) and mintty (Windows) have more features than any of the terminals you find on desktop unix?
(Insert XKCD about (n+1) Standards here)
Doesn’t apply here. Thomas Dickey (xterm/ncurses) sets the standard, except there’s a loophole: the most popular behavior wins (and applications must swallow it, standard be damned).
The kitty/libvte/iTerm2/libvterm authors have been cooperating, which means “popular”. And I’ve seen successful efforts to “upstream” choices to Dickey.
The libvterm author has compiled a spreadsheet of known terminal behaviors (contributions welcomed/encouraged).
I love these new behaviours! Until fairly recently, I figured I just can’t do some things in a terminal emulator, and then someone went ahead and implemented proper right-to-left support in one of the mentioned terminals. It’s frustrating to think of how long that’s been broken for me.
I was sharing this with a group, so excited to finally have this feature, and someone mentioned, “I learned to read backwards”. Why should we put up with these barriers in technology? I certainly used computers before I knew English.
It is most certainly a good thing; the terminal protocol has been stagnant, and until recently few terminals got off the couch to implement even basic, ancient features like cursor shaping.
I’d rather have those programs just start doing proper graphics, and stop trying to use the terminal as a bad graphics protocol.
I’ve always felt that the one advantage of the terminal was the ability to easily pipe data from a program to another (which I would consider a property of the shell rather than a property of the terminal) and that terminal UIs were the worst of both worlds: bad at piping and bad at rendering things. I’m sure you have way more experience with terminals than me though, so I’m wondering if you could expand on what you think the useful properties of a terminal are.
I agree that the feature list reads like a confusing mix of shell features plugged into a tty. However,
Supports […] features: “Graphics (images)”
why?
Why not? Why isn’t there a “standard” yet to output graphics in my tty? Wouldn’t it be nice to ls --thumbnails and have a list of thumbnails for images? icat images? Preview graphviz result? Have a dropdown with a preview when I hover a path in Vim and hit a shortcut? I don’t see why we still can’t do any of this nowadays. I for one would welcome graphics in my terminal.
As a security-minded user I would argue for separation of concerns. Image formats have long been a prime attack vector due to their internal complexities. I would not like a terminal emulator to be concerned with parsing image files, a task which specialized tools still get wrong sometimes. Sandboxing/jailing the image rendering process would also mean confining the terminal emulator. I also would like a very good terminal emulator, not a pretty meh image viewer with terminal capabilities.
Continuing down this road, where do you stop? SVGs support animation and scripting, should my terminal emulator pull in V8 or an entire chromium instance to run it? What are the implications of that? (And I’m aware that sadly that is exactly what many developers already do).
It’s possible to implement graphics support in other ways. Looking at Kitty, the protocol makes it so the terminal emulator only requires bitmap capabilities. All image parsing is done by the process writing to the tty. SVG support can be done simply by rendering it as a bitmap. Animation can be done the same way it is done with text. No need for complexity, or to trade security for something else, when you design something properly.
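To make that concrete, roughly what the writing process sends (from memory of the kitty protocol docs; a=T means transmit-and-display, f=100 means PNG, and the chunking required for larger images is omitted):

    # Sketch: display a small PNG in a kitty-compatible terminal by writing an
    # APC escape to the tty. f=100 marks the payload as PNG; the protocol also
    # accepts raw RGB/RGBA pixels if the client wants to do all parsing itself.
    printf '\033_Gf=100,a=T;%s\033\\' "$(base64 -w0 small.png)"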
I have a strong dislike of introducing unnecessary home-grown terminology and being all cutesy about it, but even so, if Unicode input is a plugin, I’m not sure about this.
I have found the kittens interface to be the best plugin/scripting interface of any terminal I’ve ever used.
The unicode input kitten allows you to insert by code point, and search/browse by character name. Think of it as vim’s digraph selector on steroids. It’s pretty cool.
The kitten system is also simple enough for me to write a quick password manager (using the OS keychain) in 150 lines of python (including a bunch of unnecessary stuff like “press any key to continue” and tab completion). I tried writing an iTerm plugin once, and I gave up quickly.
Kitty strikes a remarkably good balance between minimal (and low overhead) and fully-featured. I’m not sure it solves any unsolved problem per se, but it strikes a perfect balance for me. I think that’s because I’m not as interested in configuring my terminal as I’d need to be for urxvt to suit me perfectly, but I am interested in playing with very particular things.
I run Terminology for irssi, despite using XFCE4, because it can display images from URLs overlaid onto the terminal.
I don’t really love Terminology that much (or maybe it’s gotten better in other ways, eg. tabs, since the ancient version I’m stuck with), so it is strange that no one else is doing these cool things. Not even as an opt-in thing.
Not sure this is a good thing. (Insert XKCD about (n+1) Standards here)
Considering that the currently used standard for terminal input still relies on timing for the alt modifier and that it cannot do lossless input (see <tab> and <ctrl+i> being the same, for example), I’d argue that a new standard is very much needed.
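Easy to see for yourself (assuming xxd is installed):

    # Run xxd reading from the terminal, press Tab, then Ctrl+I, then Enter,
    # then Ctrl+D: both keys arrive as the identical byte 0x09, so an
    # application reading the tty cannot tell them apart.
    xxd -c 8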
I’m pretty sure my needs are met with history, C-R and .zsh_history
That has little to do with history, which is only about the commands you’ve entered and not their output. With kitty I can open the entire contents of the scrollback in my favourite text editor (kakoune, if you’re asking) to do powerful text manipulation, instead of being constrained by whatever primitives the terminal implemented (for example, termite’s limited vim mode). After all, the scrollback is just a big buffer of text, so opening it in a text editor is particularly suitable.
I mostly do what I need in order to progress on ‘hobby projects’, so I guess ‘Project work’ could be a hobby…
Doing this I’ve picked up a few hobbies in themselves ;)
Programming
Picked that one up really early and later decided that I don’t want to lose the fun of it by taking a job as software developer. Am now a systems/network administrator and very happy.
Photography
Seems to be pretty popular with the tech crowd ;) It can also be a huge money sink once you go for the more upscale DSLR systems and lenses (especially the lenses…. eyes Canon 70-200 2.8L IS)
Event lighting / Lighting design
I’ve done sound for a long time but decided to spare my ears in the future, so I stick to lighting now ;) It’s a very creative outlet with a huge technical component, and also allows you to go really deep into the matter (designing my own control interfaces and software (mainly because MA insists on stupid limitations), etc)
Amateur Electronics
Designing small circuits and PCBs for problems I run into (see above), programming microcontrollers for the fun of having limitations and hardware control, and of course fixing broken things where possible ;)
Creative / technical writing
Contrary to what I’ve heard from most people, I kind of enjoy writing documentation. For things that I like, that is, such as my own software or projects. For other people’s, meh.
Baking bread
Sometimes I just really crave the taste of fresh bread and one night decided to just start baking some. People liked it and it became a more or less regular thing for me. I like to get creative with the recipe and ingredients, which among others has led to Bhut Jolokia bread which only a select few could eat more than a bite of ;)
I also like to read, go on walking tours (pairs well with photography), like to work with animals, etc, but not to the point I’d consider it a hobby of mine. I’d like to get into metal casting (lost wax, etc), but I don’t have the space for it and handling molten metal in the inner city isn’t gonna make you popular with the police (or fire dept.), so I stick to 3D printing for the moment.
I sometimes write articles (which I kind of insist are not blog posts) about technical stuff at https://fabianstumpf.de/.
Some more articles are in the pipeline in various states of done-ness, most relating to computer networking or microcontroller programming. I might see about publishing some more of them soon.
This week will mostly be busy with job interviews, though I’d like to finish the rtpMIDI backend implementation for MIDIMonster. Ideally with native support for the AppleMIDI session protocol, though the whole specification is… not that good.
Other than that, my to-do list contains a whole host of items, including publishing a new article on my homepage that has been in the works for long enough now.
In preparation for tax season, I’m tweaking my plaintext accounting setup. I recorded all of 2017: every transaction that touched my bank or credit cards is queryable. I’m switching processors for 2018 to a setup that’s already saved me about two hours per month. I’d love to get to a point where I can do make irs1040 and it spits out the content of all of the boxes that my transaction data can reflect.
I’ve already got make networth and make cashflow working, complete with nice graphs in the terminal courtesy of iTerm2’s imgcat.
I’m hoping to generalize my setup so that I can release it eventually. I’ve found very few examples of workflows so I want to contribute mine to the world so that the plaintext accounting ecosystem is more approachable.
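For anyone wondering what such targets boil down to: a rough sketch with ledger-cli (journal file and account names are just examples; hledger works much the same way):

    # "net worth": balance of all asset and liability accounts
    ledger -f all.ledger balance ^Assets ^Liabilities

    # "cash flow": income and expenses for the current month
    ledger -f all.ledger balance ^Income ^Expenses --period "this month"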
Some friends and I collected quite a few interesting ledger-cli graphs (using gnuplot and a pie-charting tool of mine) on GitHub: https://github.com/cbdevnet/ledger-reports
Though some of them break when used with multiple commodities… Might have to get around to fixing that some time after my exams :)
Good suggestion, thanks! I’ve just opened an issue for that.
As I’m not quite comfortable with using my own data for presentation purposes, this will include generating a somewhat-random example ledger file and generating the demo statistics from that :)
“The notion of using a CLI program for doing this sort of accounting intrigues me a bit.”
It was the default on most cash registers of big chains for a long time. The ones I shop at in my area just now have all got the graphical apps. For some, it was mainframe, terminal-style stuff on the backend. So, CLI on each side. Here is one for UNIX I found when I looked up that stuff.
I actually think that the creation of many “new” languages in an effort to make programming more accessible to the “masses” backfired in a sense:
Yes, it has become harder to decide “which language to start with”, as everyone stands at the ready to put forth his or her own favorite toy of the moment.
The push for languages to be more easily learnable has also introduced a lot of abstraction.
This can be, on the one hand, a good thing: Things that “feel” easy to do (for example, getting the current exchange rate of Bitcoin) are just an import away. On the other hand, it hides a lot of the complexities that these things rely on, in some cases actively preventing people from understanding what it is these APIs and frameworks actually do.
In consequence, few of the people using these constructs would be able to implement them themselves, forcing them into a kind of learned helplessness: if there is no module for it, it’s obviously impossible. This finds its culmination in things like the famous left-pad package.
When people feel at home in an ecosystem, few tend to leave it. This eventually leads to languages and environments that were once meant for learning becoming increasingly full-featured (there’s nothing more permanent than a temporary fix…), leading to things like full-fledged applications in, eg., Scratch. The lack of formal education in things like software security and software engineering for most of these “new programmers” tends to further reinforce this territorial behavior.
This also creates a kind of class system, where programmers using “real” languages look down, for some reason or other, on people staying within these ecosystems. As it becomes less important to know exactly how the computer executes code in order to write it, fewer people care about how to actually work with their systems. This may be where the feeling that “Learning to program is getting harder” comes from. And the fact of the matter is, even though these abstracted environments (eg. the “cloud”) are important, someone is still going to have to create the tools that get you there: Operating systems, Browsers, Firmware on switches, routers, etc. Not everything can be taught with cloud-based REPLs.
If someone just wants to learn to program, they shouldn’t have to learn system administration first.
If someone just wants to learn to program, they shouldn’t have to learn operating system concepts first.
These are the core points of the article that I disagree with. I think that learning how to express thoughts in code (which is what is being addressed very well by the “new” languages) is only one part of programming. Learning how the system works, how to interact with eg. the command line in some form or other, how the environment finds its files, and even how file formats work are also important parts that often get left behind as the level of abstraction increases (up to the cloud, where nothing matters anymore and everything is an abstract resource).
Disclaimer: I may be wrong. These are just my feelings on the matter.
Currently preparing for my last two university exams and searching for a future job on the side.
As for software, recently implemented a miniature domain-specific language for our window-manager-manager (https://github.com/fsmi/rpcd), which is now being used to automate our signage display windows.
Up next is a custom wireless network infrastructure monitoring application (basically tracking WiFi clients via their access point association, regardless of vendor).
The obvious question is why freenode was never registered as a charity. Remember: never donate to organizations not regulated as charities in their place of incorporation.
Libera, the new organisation, is registered under a Swedish non-profit. For what it’s worth, neither libera nor freenode ever took cash donations etc. from normal users.
Freenode was Limited by Guarantee, which is English law jargon for a non-profit. A legal form guarantees nothing.
No it isn’t. Companies limited by guarantee are a common corporate choice for charities but being registered as a charity is a different thing.
Late answering, but not for profit isn’t the same thing as a charity.
This doesn’t sound like a sustainable model. Look at discord!?
How has this “charity only” crap prevailed when it’s been $ that funds infrastructure and development?
At least Patrick at Slackware takes my money (finally!!) but it’ll never be a Red Hat.
“never taken cash donations” does not mean “never taken donations”
Freenode has outlasted countless VC-backed chat startups, and Libera will outlast even more.
It sustained freenode for 20 years and was never the limiting factor. When you host dozens of large communities like Fedora, Gentoo, Python, etc., there are plenty of responsible and generous donors when it comes to getting the few, small servers that IRC requires.
What would this have prevented? You can own and sell a charity just as well as you can sell any corporation.
Not in England you can’t.
Agreed. Let’s all take note, this is why we have nonprofits: so we can codify the sorts of arrangements people build to operate Communal Things Involving Money. Person Foo may start a fight and chase off Person Bar, and Person Baz may start neutral but then get pissed off and leave such a toxic environment… But a nonprofit provides a framework to make sure things can actually keep operating in a sane fashion. Otherwise you end up with Foo needing a lawyer to get the domain name, Bar needing to be hunted down and asked for the server passwords, and Baz accidentally being left as primary contact on the donation-linked bank account for three years. I speak from personal experience here.
It was, for a long-ish time, registered as a charity as the Peer-Directed Projects Center (PDPC). IIRC they dissolved that as the legal overhead was significant.
There’s no UK de-registered charity with that name: https://register-of-charities.charitycommission.gov.uk/charity-search/-/results/page/86/delta/20/keywords/Peer+directed+projects+center/sorted-by/charity-name/asc
There was a non profit registered in the US with that name.
As of at least the last five years, Freenode never accepted monetary donations; only donations of servers.
It was, but the charitable organization didn’t actually bring in enough money to maintain its own existence, so it folded several years ago.
Right, but that means the charity was really a money collector, not the operator or holder of assets.
Ubiquiti is the most recommended, but this happened recently: https://krebsonsecurity.com/2021/03/whistleblower-ubiquiti-breach-catastrophic/
I am not sure the others are much better, but just FYI in case you haven’t seen it yet.
It’s worth noting though that (as of yet, hope it stays that way) you do not need to use their cloud offerings or even have an account with them. It may seem simpler for some use cases, but any cloud offering inherently involves you placing data under the control of a third party. They do try to push this on people though (as do most other vendors, too…), which I don’t like all that much. Being in control of my own data and infrastructure is important to me.
You can host your own controller, both on the internet or locally, or use their “setup app” which AFAIK emulates just enough of a controller on your phone to set up a single AP with a simple config. Once configured, you can even switch the controller off and let them run autonomously.
You should also not (I hope this does not need saying, but saying it anyway) expose your access points directly to the internet (allowing them to be accessed from outside of your network). UBNT also sells other HW (like security cameras) where this is more commonly done, but even there I’d advise against it. Use a VPN if you need direct access. I know of no single IoT/HW vendor with a “clean” or even “acceptable” security track record.
Just as a precaution I would recommend putting some form of additional authentication (eg. basic auth, which surprisingly even works with their L3 controller) in front of the controller when hosting it in a publicly accessible place though. It’s likely not the greatest piece of software either, as evidenced by its weird dependency requirements…
All in all I’m not saying UBNT can do no wrong, but there are certainly worse offerings on the market regarding both security and features.
Yep absolutely!
So I saw in the HN discussion that you now need cloud authentication even for self-hosted software.
See the main discussion here: https://old.reddit.com/r/Ubiquiti/comments/kslyh9/cloud_key_local_account/
I will admit I was confused by this, because I know others that haven’t had to do cloud auth for their self-hosted setups, so maybe it’s only required with the latest update?
I got a USG on discount to play around with, and it didn’t seem like you need to do cloud auth when I set it up a few weeks ago, though they really push you towards it. I think you have to click “Advanced Setup”, choose “Local account”, and not enable remote access or something like that.
I don’t do cloud auth with them. Local accounts all day long, on a dedicated VM operating as a controller.
I don’t know about the Cloud Key specifically (I mean, it says ‘cloud’ right on the box), but last I checked and updated you could still download the controller installation packages directly, both from their download site as well as their apt repositories. Running that did not require any cloud setup. Hope that didn’t change :(
Tbh, I’ve had one access point from them, and their management software package was so hilariously outdated that I returned it instantly.
Yes, this is one of the big things that’s making me question using them. I added your link to the original post so there’s more context.
The hardware is still really quite good. I’m using a pair of U6-LRs at home (one in the attic, one in the basement) with a couple of US-8-60W switches, and the controller on a Raspberry Pi, and the coverage and performance are fantastic. Router is an EdgeRouter-4, which is Ubiquiti but not UniFi, so it doesn’t play with the same configuration tools, but I’m really happy with what it gives me too (rock solid, the config tree stuff really works, but also it’s a “real” linux system) so I’m not touching it.
As with others here, I’m not using cloud management — it’s a pretty damn cool feature in some scenarios but I don’t have a need for it.
I have outfitted both my place (an apartment) and my family’s (a multi-story house), as well as some customer sites, with Ubiquiti APs. I use them in an “L3 controller” setup where the Ubiquiti controller runs on a VM in my colocation rack, and all the APs report (via the internet) to it for their updates, configuration, etc. As far as I know, the recent “UBNT is serving ads” discussion only concerns the “hosted controller” variant (where they host the controller for you).
The access points are grouped into “sites” by household, so I can create dedicated accounts for people that may need to change the Wifi password once in a while. They only do WiFi, routing is done separately (and with varying degrees of sophistication). The central management allows me to schedule and perform firmware upgrades which might otherwise get ignored (once installed, most people never look at their infrastructure again, nor do they want to). This is something most “controller-based” systems can do, though, not especially specific to UBNT.
For installation I did a basic site survey for AP placement, laid CAT6 to good locations for the APs and installed a central PoE switch to power them. Works fine so far.
The AC Lite hardware is decent for the price and manageability; the pricier models can be worth it if your clients support newer standards and your use case supports/requires higher transfer rates (i.e., a fast internet connection or lots of local traffic).
In general, I’d recommend running dedicated wiring for APs over any sort of mesh solution.
For me “repairability” (and availability of parts, both used and third-party) is one of the strongest arguments to use “older” hardware. Though “old” in this case is far from retrocomputing, more in the “not upgrading my 2010-2012 era systems anytime soon” sense. I also do like retrocomputing, but more for the fun and interest of it, not for production work.
My daily drivers are a set of T420s which I’ve opened up, swapped parts from, exchanged things, flashed the BIOS, installed a new CPU (it’s one of the last models with a socketed CPU), etc. Parts (and full replacement units) are plentiful and cheap, and it’s actually possible to do things. Speed is alright for my workload (mostly writing/compiling C and “normal” business email/web stuff, some VMs), and in my opinion, changing to a newer model would lose me more in convenience and opportunities than it would gain me. Not to mention it also saves plenty of money and, in a more remote sense, the environment by preventing e-waste.
I have little interest in my computers being “super-slim” at the cost of not being able to swap hard disks, RAM or batteries, and that seems to be what the majority of current offerings are optimizing for (at least in the mobile computing space).
Definitely agree. My “best” computer is a custom-built PC from around 2010. A decade later, it still plays most modern video games without many problems, and I have the freedom to upgrade it gradually in the future, if I ever need to.
Most things that represent me online, and everything I don’t want to lose and/or don’t want to entrust to any third party.
I moved everything (well, except the redundancies) to a private rack in a colocation data center at the beginning of 2020, after years of hosting most of the same things on a bunch of rented VPSes (some of which I still keep around for backup/redundancy purposes). Since my “daily driver” machines are on the same OS as all my servers, maintenance and interoperability is not really a problem (I update them all mostly in sync).
The important things in the setup are the mail (2 MXes) and web servers, in addition to source control (git) and backup storage as well as realtime communication (mostly IRC for me).
So an incomplete list would be:
Communication
  IRC (irssi in tmux)
  Mail
  VPN/SSH tunnels
Web
  Personal homepage(s)
  Inventory system
  Link/Bookmark management
  Image sharing
  Text sharing / Pastebin
  Business websites / Project websites
Source control (git)
Backup (rsync)
VMs for third party tools
  Unifi controller (wifi)
Additionally, there’s a NAS at home for local video streaming and additional backup storage.
I might have a particularly serious case of NIH syndrome; a rather large amount of the tooling I self-host is also written by me.
I’ve had the feeling that something is up with Qt for some time now, mostly due to the increasing focus on driving people into buying the corporate licenses…
When running the Qt installer, you’re asked to create an account. There used to be a button (non-obviously positioned, but it was there) to skip that, allowing you to install Qt without an account.
It seems that in recent “official” releases, that is no longer supported and you’re required to enter account details when installing.
I have taken to installing once and tar’ing up the resulting files for repeated installation, though that is probably against some point of the license agreement if done on a wider scale.
Yes, this was announced at the start of the year, and they quickly followed through on implementing it.
I haven’t built with Qt on Windows since the Trolltech days. The last time I installed it on Mac via Homebrew (last summer, I think), it didn’t mention any account. (Sounds like that’d have been before they decided to force account creation upstream.) I’ve installed Fedora, Ubuntu and Manjaro packages from the respective system repositories for all of the open source Qt dev kit over the past 45 days. There was no mention of an account there either.
Is the installer you’re talking about mostly just used for Windows builds? Or is it packaging some closed source commercial trial version along with the open source edition?
I should add that I don’t do any of my own projects with Qt; there are just a few that I irregularly contribute to. So my experience comes only in the context of wanting to build/modify/debug/submit a patch every once in a while. Maybe people who do more regular/heavy OSS work with Qt need something in the upstream installer that I just wouldn’t.
It strikes me as a real head-scratcher that they’d take the publicity hit from requiring OSS users to create an account if most of them just use it from a package manager and never run across the requirement anyway, while at the same time I don’t see why an OSS user would need to touch the upstream installer, except possibly on Windows.
Qt provides installers for the runtime including the open source components, headers, Qt Creator, etc. I use them to provide specific versions that may not yet (or no longer) be available via the system’s package repositories. The newer ones seem to require an account, the older ones didn’t (my test points being 5.9.3 and 5.12.3).
Working on and hopefully finishing the first presentable release of an RTP-MIDI backend for MIDIMonster. It’s already mostly finished save for the mDNS (“Bonjour”) discovery, which is somewhat finicky…
If I get frustrated by implementing Apple’s weird stuff, I might get around to finally writing an article on flashing coreboot onto my T420 and outfitting it with a newer Ivy Bridge Quadcore.
If you want to spare yourself the effort of creating system users & a system group for allowing git access (which can quickly get tedious as a function of the number of users), but also don’t want the complexity of gitolite or more sophisticated access control systems, try fugit.
It’s a pretty simple bash script that gets set as ‘forced command’ for SSH keys and does simple access control on a push/pull basis.
Disclaimer: I’m the main developer of fugit.
Very interesting websocket proxy project. The README.md is a good overview of the problem. I’ve used websockify to proxy vnc (novnc) to my home network. Does anyone know of any other websocket proxies out there?
There are a few to be found with the Github “websocket-proxy” tag, but most are simple affairs that disregard the differences laid out in the README (which is why I started working on websocksy in the first place).
noVNC works around these differences by introducing a buffering WebSocket abstraction on the JavaScript side, which allows proxy implementers to be somewhat lax about how they do the forwarding - however, other applications may not do so :)
Oh, I do this too (and I’m also German) :D It really is a good name for it.
I usually stick to a kind of internal guideline that variable names should not have fewer characters than the number of lines they are used on (with the rare exception of a loop index used so often it would make lines unreadably long), which has served me well over a few projects.
In my experience, mathematically trained programmers tend to like/use either really short or Greek names, most often directly adapted from the variable names used in papers.
Nice summary of a good README. One thing I’d add that really grinds my gears is projects not taking the time to add at least a single sentence explaining what the thing actually does, in a way that people who may not even be involved in the ecosystem at all can understand what problem it is trying to solve.
What most have is something along the lines of “Do what x does but with y”, with both blanks being libraries or concepts that people not involved in eg. Kubernetes, OpenStack or the newest JavaScript stuff might only know by another name, or which might be completely made up.
I try to do this with all my projects; just adding a succinct description I could give eg. my parents, plus two usage examples, does wonders for clarity in my opinion.
Examples (shameless plug): https://github.com/cbdevnet/midimonster
Nice repo and great README, thanks for sharing!
Not to be overly critical, but I’m not entirely sure what problems are solved by this - or, more accurately, why.
From the bullet points on the website I gather:
This is not a problem I ever had with any terminal emulator. Anywhere (well, except maybe Windows). I’ve had scrollback buffers be too small, but never had problems actually scrolling. And I have a lot of long-running terminal sessions.
In my experience, threads don’t really do anything for non-multiplexed sources such as keyboards. And even for multiplexed sources, they mostly do more harm than good.
why?
so does urxvt
Not sure this is a good thing. (Insert XKCD about (n+1) Standards here)
Which would probably (haven’t checked) require me to learn new keybindings.
Uh, it’s a terminal emulator. I would expect it can be.
OK, why though?
I have a strong dislike of introducing unnecessary home-grown terminology and being all cutesy about it, but even so, if Unicode input is a plugin, I’m not sure about this.
So, a shell. I don’t know, I thought this was a terminal emulator
I’m pretty sure my needs are met with history, C-R and .zsh_history
It is most certainly a good thing; the terminal protocol has been stagnant, and until recently few terminals got off the couch to implement even basic, ancient features like cursor shaping.
Teaching users to have higher standards makes life easier for application developers, who must target the lowest common denominator. Kitty, iTerm2, mintty, libvte (gnome-terminal, termite, etc.), and libvterm (nvim, vim, pangoterm) have been raising the bar, so the lowest common denominator is now… higher.
Just like Internet Explorer 6 held back the web, stagnant terminals hold back terminal applications. urxvt is the IE6 of terminals. Isn’t it a bit ironic that iTerm2/Terminal.app (macOS) and mintty (Windows) have more features than any of the terminals you find on desktop unix?
Doesn’t apply here. Thomas Dickey (xterm/ncurses) sets the standard, except there’s a loophole: the most popular behavior wins (and applications must swallow it, standard be damned).
The kitty/libvte/iTerm2/libvterm authors have been cooperating, which means “popular”. And I’ve seen successful efforts to “upstream” choices to Dickey.
The libvterm author has compiled a spreadsheet of known terminal behaviors (contributions welcomed/encouraged).
I love these new behaviours! Until fairly recently, I figured I just can’t do some things in a terminal emulator, and then someone went ahead and implemented proper right-to-left support in one of the mentioned terminals. It’s frustrating to think of how long that’s been broken for me.
I was sharing this with a group, so excited to finally have this feature, and someone mentioned, “I learned to read backwards”. Why should we put up with these barriers in technology? I certainly used computers before I knew English.
I’d rather have those programs just start doing proper graphics, and stop trying to use the terminal as a bad graphics protocol.
Kitty has implemented several extensions, such as colored undercurl (which is now also implemented in libvte).
Not to mention “just start doing proper graphics” is meaningless and would discard the actually useful properties of a terminal.
I’ve always felt that the one advantage of the terminal was the ability to easily pipe data from a program to another (which I would consider a property of the shell rather than a property of the terminal) and that terminal UIs were the worst of both worlds: bad at piping and bad at rendering things. I’m sure you have way more experience with terminals than me though, so I’m wondering if you could expand on what you think the useful properties of a terminal are.
I agree that the feature list reads like a confusing mix of shell features plugged into a tty. However,
Why not? Why isn’t there a “standard” yet to output graphics in my tty? Wouldn’t it be nice to ls --thumbnails and have a list of thumbnails for images? icat images? Preview graphviz result? Have a dropdown with a preview when I hover a path in Vim and hit a shortcut? I don’t see why we still can’t do any of this nowadays. I for one would welcome graphics in my terminal.
As a security-minded user I would argue for separation of concerns. Image formats have long been a prime attack vector due to their internal complexities. I would not like a terminal emulator to be concerned with parsing image files, a task which specialized tools still get wrong sometimes. Sandboxing/jailing the image rendering process would also mean confining the terminal emulator. I also would like a very good terminal emulator, not a pretty meh image viewer with terminal capabilities.
Continuing down this road, where do you stop? SVGs support animation and scripting, should my terminal emulator pull in V8 or an entire chromium instance to run it? What are the implications of that? (And I’m aware that sadly that is exactly what many developers already do).
It’s possible to implement graphics support in other ways. Looking at Kitty, the protocol makes it so the terminal emulator only requires bitmap capabilities. All image parsing is done by the process writing to the tty. SVG support can be done simply by rendering it as a bitmap. Animation can be done the same way it is done with text. No need for complexity, or to trade security for something else, when you design something properly.
That would be a good idea, yes.
Sadly, good (software) design has become something of an exception these days :)
I have found the kittens interface to be the best plugin/scripting interface of any terminal I’ve ever used.
The unicode input kitten allows you to insert by code point, and search/browse by character name. Think of it as vim’s digraph selector on steroids. It’s pretty cool.
The kitten system is also simple enough for me to write a quick password manager (using the OS keychain) in 150 lines of python (including a bunch of unnecessary stuff like “press any key to continue” and tab completion). I tried writing an iTerm plugin once, and I gave up quickly.
Kitty strikes a remarkably good balance between minimal (and low overhead) and fully-featured. I’m not sure it solves any unsolved problem per se, but it strikes a perfect balance for me. I think that’s because I’m not as interested in configuring my terminal as I’d need to be for urxvt to suit me perfectly, but I am interested in playing with very particular things.
I run Terminology for irssi, despite using XFCE4, because it can display images from URLs overlaid onto the terminal.
I don’t really love Terminology that much (or maybe it’s gotten better in other ways, eg. tabs, since the ancient version I’m stuck with), so it is strange that no one else is doing these cool things. Not even as an opt-in thing.
If you don’t understand, you don’t have to use it!
You should read more about what that means:
https://sw.kovidgoyal.net/kitty/remote-control.html
Considering that the currently used standard for terminal input still relies on timing for the alt modifier and that it cannot do lossless input (see <tab> and <ctrl+i> being the same, for example), I’d argue that a new standard is very much needed.
That has little to do with history, which is only about the commands you’ve entered and not their output. With kitty I can open the entire contents of the scrollback in my favourite text editor (kakoune, if you’re asking) to do powerful text manipulation, instead of being constrained by whatever primitives the terminal implemented (for example, termite’s limited vim mode). After all, the scrollback is just a big buffer of text, so opening it in a text editor is particularly suitable.
That probably refers to something fancy like an emoji picker. Definitely not to just typing non-ASCII characters from the keyboard.
I mostly do what I need in order to progress on ‘hobby projects’, so I guess ‘Project work’ could be a hobby…
Doing this I’ve picked up a few hobbies in themselves ;)
Programming
Picked that one up really early and later decided that I don’t want to lose the fun of it by taking a job as software developer. Am now a systems/network administrator and very happy.
Photography
Seems to be pretty popular with the tech crowd ;) It can also be a huge money sink once you go for the more upscale DSLR systems and lenses (especially the lenses… eyes Canon 70-200 2.8L IS)
Event lighting / Lighting design
I’ve done sound for a long time but decided to spare my ears in the future, so I stick to lighting now ;) It’s a very creative outlet with a huge technical component, and also allows you to go really deep into the matter (designing my own control interfaces and software (mainly because MA insists on stupid limitations), etc)
Amateur Electronics
Designing small circuits and PCBs for problems I run into (see above), programming microcontrollers for the fun of having limitations and hardware control, and of course fixing broken things where possible ;)
Creative / technical writing
Contrary to what I’ve heard from most people, I kind of enjoy writing documentation. For things that I like, that is, such as my own software or projects. For other people’s, meh.
Baking bread
Sometimes I just really crave the taste of fresh bread and one night decided to just start baking some. People liked it and it became a more or less regular thing for me. I like to get creative with the recipe and ingredients, which among others has led to Bhut Jolokia bread which only a select few could eat more than a bite of ;)
I also like to read, go on walking tours (pairs well with photography), like to work with animals, etc, but not to the point I’d consider it a hobby of mine. I’d like to get into metal casting (lost wax, etc), but I don’t have the space for it and handling molten metal in the inner city isn’t gonna make you popular with the police (or fire dept.), so I stick to 3D printing for the moment.
I sometimes write articles (which I kind of insist are not blog posts) about technical stuff at https://fabianstumpf.de/.
Some more articles are in the pipeline in various states of done-ness, most relating to computer networking or microcontroller programming. I might see about publishing some more of them soon.
This week will mostly be busy with job interviews, though I’d like to finish the rtpMIDI backend implementation for MIDIMonster. Ideally with native support for the AppleMIDI session protocol, though the whole specification is… not that good.
Other than that, my to-do list contains a whole host of items, including publishing a new article on my homepage that has been in the works for long enough now.
In preparation for tax season, I’m tweaking my plaintext accounting setup. I recorded all of 2017: every transaction that touched my bank or credit cards is queryable. I’m switching processors for 2018 to a setup that’s already saved me about two hours per month. I’d love to get to a point where I can do make irs1040 and it spits out the content of all of the boxes that my transaction data can reflect.
I’ve already got make networth and make cashflow working, complete with nice graphs in the terminal courtesy of iTerm2’s imgcat.
I’m hoping to generalize my setup so that I can release it eventually. I’ve found very few examples of workflows so I want to contribute mine to the world so that the plaintext accounting ecosystem is more approachable.
Some friends and I collected quite a few interesting ledger-cli graphs (using gnuplot and a pie-charting tool of mine) on GitHub: https://github.com/cbdevnet/ledger-reports
Though some of them break when used with multiple commodities… Might have to get around to fixing that some time after my exams :)
This looks really awesome! Any chance you could throw up some example images?
Good suggestion, thanks! I’ve just opened an issue for that. As I’m not quite comfortable with using my own data for presentation purposes, this will include generating a somewhat-random example ledger file and generating the demo statistics from that :)
I’d be very interested in seeing your setup when you’re done. The notion of using a CLI program for doing this sort of accounting intrigues me a bit.
A good start is to watch a recording of one of my talks and those of some others who have produced some content about it: https://www.youtube.com/results?search_query=plaintext+accounting&page=&utm_source=opensearch
I’m probably still a few months away from having my stuff sufficiently abstracted.
“The notion of using a CLI program for doing this sort of accounting intrigues me a bit.”
It was the default on most cash registers of big chains for a long time. The ones I shop at in my area just now have all got the graphical apps. For some, it was mainframe, terminal-style stuff on the backend. So, CLI on each side. Here is one for UNIX I found when I looked up that stuff.
There’s a difference between CLI and TUI. Most cash registers I’ve seen were TUI, IIRC.
Still, there’s a certain Spartan flavor to TUIs, such that they can be quite efficient for their tasks.
Ok, yeah, TUI was more what I was thinking of. The ones I dealt with were usually more efficient than their GUI replacements. Way more reliable, too.
I’ve uploaded a few videos of the tool in operation to Twitter:
MIDI->evdev https://twitter.com/twitter/statuses/965384962848456704
evdev->ArtNet https://twitter.com/twitter/statuses/965337284982837248
MIDI<>OSC https://twitter.com/twitter/statuses/882368440811606020
I actually think that the creation of many “new” languages in an effort to make programming more accessible to the “masses” backfired in a sense:
Yes, it has become harder to decide “which language to start with”, as everyone stands at the ready to put forth his or her own favorite toy of the moment. The push for languages to be more easily learnable has also introduced a lot of abstraction.
This can be, on the one hand, a good thing: Things that “feel” easy to do (for example, getting the current exchange rate of Bitcoin) are just an import away. On the other hand, it hides a lot of the complexities that these things rely on, in some cases actively preventing people from understanding what it is these APIs and frameworks actually do.
In consequence, few of the people using these constructs would be able to implement them themselves, forcing them into a kind of learned helplessness: if there is no module for it, it’s obviously impossible. This finds its culmination in things like the famous left-pad package.
When people feel at home in an ecosystem, few tend to leave it. This eventually leads to languages and environments that were once meant for learning becoming increasingly full-featured (there’s nothing more permanent than a temporary fix…), leading to things like full-fledged applications in, eg., Scratch. The lack of formal education in things like software security and software engineering for most of these “new programmers” tends to further reinforce this territorial behavior.
This also creates a kind of class system, where programmers using “real” languages look down, for some reason or other, on people staying within these ecosystems. As it becomes less important to know exactly how the computer executes code in order to write it, fewer people care about how to actually work with their systems. This may be where the feeling that “Learning to program is getting harder” comes from. And the fact of the matter is, even though these abstracted environments (eg. the “cloud”) are important, someone is still going to have to create the tools that get you there: Operating systems, Browsers, Firmware on switches, routers, etc. Not everything can be taught with cloud-based REPLs.
These are the core points of the article that I disagree with. I think that learning how to express thoughts in code (which is what is being addressed very well by the “new” languages) is only one part of programming. Learning how the system works, how to interact with eg. the command line in some form or other, how the environment finds its files, and even how file formats work are also important parts that often get left behind as the level of abstraction increases (up to the cloud, where nothing matters anymore and everything is an abstract resource).
Disclaimer: I may be wrong. These are just my feelings on the matter.
Currently preparing for my last two university exams and searching for a future job on the side.
As for software, recently implemented a miniature domain-specific language for our window-manager-manager (https://github.com/fsmi/rpcd), which is now being used to automate our signage display windows. Up next is a custom wireless network infrastructure monitoring application (basically tracking WiFi clients via their access point association, regardless of vendor).