I use zsh and its suite of completions, mostly. I would probably use bash’s more often if the various servers I administer installed the completion package (which is usually broken out into a separate package), but they don’t and I don’t care to push for that. (Again, I’m happy with zsh’s built-in library.)
Fish does a neat trick to supplement its completions – it parses the output of man! Pretty good for most commands, but some weird manpages will throw it for a loop. Those are few and far between IIRC, but I haven’t touched fish in a long while.
I use zsh, which has the most comprehensive coverage of completions, especially when you consider things like support for system commands on non-Linux systems. Note that zsh itself includes most of them; the separate zsh-completions project is only a small collection, and of lower quality.
Zsh’s is much the superior system, but you’d have to emulate a whole lot more to support its completions. Completion matches can have descriptions, which makes it vastly more useful. The process of matching what is on the command line against the candidates is much more flexible and is not limited to dividing the command line up by shell arguments - any arbitrary point can be the start and finish of each completion candidate. And as that implies, what is to the right of the cursor can also be significant.
My advice would be to take the zsh compadd builtin approach, which is more flexible and extensible than compgen/complete, do your own implementation of _arguments (which covers 90% of most completions) and similarly your own _files etc. It’d then be straightforward for people to write completions targeting both oilshell and zsh.
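For anyone who hasn’t seen the new-style system: a minimal completion script built on _arguments looks roughly like this. (This is a sketch, not a full completer; mytool and its options are invented for illustration.)

```zsh
#compdef mytool
# Each spec pairs an option with a [bracketed description] that shows up
# in the match listing; the part after the colon names the argument and
# says how to complete it ("_files" completes filenames).
_arguments \
  '(-v --verbose)'{-v,--verbose}'[enable verbose output]' \
  '--output=[write results to a file]:output file:_files' \
  '*:input file:_files'
```

Under the hood, _arguments ends up calling compadd with the right descriptions and match specs, which is why emulating compadd first makes sense.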
Hm interesting, yeah I am looking around the Completion/ dir in the zsh source now and it looks pretty rich and comprehensive.
I also just tried out zsh and I didn’t realize it had all the descriptions, which is useful too. Don’t they get out of date though? I guess most commands don’t change that much?
I recall skimming through parts of the zsh manual about a year ago, and from what I remember there are 2 different completion systems, and it seemed like there was a “froth” of bugs, or at least special cases.
I will take another look, maybe that impression is wrong.
I think the better strategy might be to get decent bash-like completion for OSH, and then convince someone to contribute ZSH emulation :)
I guess I am mainly interested in the shell system that has the best existing corpus of completion scripts, because I don’t want to boil the ocean and duplicate that logic in yet another system. zsh does seem like a good candidate for that. But I don’t understand yet how it works. Any pointers are appreciated.
I’ll look into _arguments… it might cover 90% of cases, but it’s not clear what it would take to run 90% of completions scripts unmodified.
The zsh descriptions do get out of date. The strings are copied by the completion script author, so if the --help text changes, the script will need to be updated too.
Zsh’s completion system is vast and old, the best combination. That’s why the 2 engines still exist today: a number of completion scripts are still in the old style. I believe that most of those underneath Completion/ use the newer system.
Of the 2 systems, the old, compctl system was deprecated 20 years ago. Everything under Completion/ uses the new system. I wouldn’t say there’s a “froth” of bugs - it is just that there is a lot to it.
It isn’t the descriptions so much as the options themselves that can get out of date. The task of keeping them up-to-date is semi-automated based on sources such as --help output and they are mostly well maintained.
After feeling some pain at how complicated all the software, IT, and enterprise architecture toolsets are, I got the idea for a dead-simple architecture sketching tool based on just nodes and named lines. The input for this tool is a text file containing triples in the form of subject-verb-object, like so:
# comments via shebang
# declare nouns (for better typo recovery)
internet, web front, app server, DB, Redis
# declare verbs (perhaps not necessary and may be dropped)
flows, proxies, reads/writes
# subject-verb-object are separated by more than 1 whitespace (" " or "\t")
# somewhat like Robot Framework does it
# prepositions (to, from, at, etc.) seemed redundant, so not using them
internet flows web front
web front proxies app server
app server reads/writes DB
app server reads/writes Redis
I’m not married to the syntax yet, but it seems fine after a few iterations of experimentation. A tool could then read this and produce a nice graph via Graphviz. And you don’t really need a tool; you could do these sketches on a piece of paper in no time. But when you want to store and share the design, a text file + tool is a good combo.
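FWIW, the dumb line-by-line parser is only a few lines. A hypothetical sketch in Python (function names are mine; it ignores the noun/verb declaration lines and just collects the triples):

```python
import re

def parse_triples(text):
    """Collect (subject, verb, object) triples from lines whose fields are
    separated by 2+ spaces/tabs; skip comments, blanks, and any line that
    doesn't split into exactly three fields (e.g. the declaration lines)."""
    triples = []
    for line in text.splitlines():
        line = line.rstrip()
        if not line or line.startswith("#"):
            continue
        fields = re.split(r"[ \t]{2,}", line)
        if len(fields) == 3:
            triples.append(tuple(fields))
    return triples

def to_dot(triples):
    """Render the triples as a Graphviz digraph, with verbs as edge labels."""
    body = "\n".join(f'  "{s}" -> "{o}" [label="{v}"];' for s, v, o in triples)
    return "digraph G {\n" + body + "\n}"
```

Pipe the output of to_dot into dot -Tsvg and you have your sketch.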
Tried to write a parser for this language with Rust’s pest, but that was a bit of a pain – pest’s error modes are quite odd: whenever my parser hits a problem, pest blows up at position 0, every time. Perhaps I’ll just write a dumb line-by-line parser by hand instead. And I’d like to implement it in Crystal rather than Rust. I also thought about doing it in Swift, but that doesn’t seem to work so well outside of macOS/Linux (I’m thinking BSDs here) yet, so nope.
Best part? The tool’s name is Architect Sketch.
Reminds me of mermaid. It has shortcomings but has worked for me most of the times I’ve had to diagram something over the past few years.
I’ve been working on something similar myself, with the goal of actually deploying infrastructure on it. I think this is a good avenue to explore, and pairs really well with rule engines and datalog systems.
This is something I’ve been thinking about a lot as well.
For my use case, I just want to dump, document, or understand how my multitude of web apps, daemons, and IoT devices all interact with one another.
I had built a rough prototype with ruby and shell scripts using similar syntax, but I couldn’t ever get Graphviz to generate a “pretty” layout for the whole thing.
For $WORK, we have just deployed a new version of our entire technology platform! Our clients are doing their first major migration starting on the 2nd (today, for most folks reading this), adding 1,000 users per week throughout the entire summer. This is migrating from a physical colo’d C# monolith stack to a hybrid Rails/C# microservice architecture spread across AWS and Azure (with some of it calling back to the older stack). It’s the culmination of many months of efforts by every team in the company.
I’m nervous, but I’ve been dedicated to load testing it using Selenium, Firefox, and AWS instances – 1,300 of them to be precise. It’s handled that simultaneous load well enough. I hope to finish collecting all of the details and publish a blog post on how I set up this pseudo-bot-net of mine.
My personal infrastructure at home and other personal projects have been put on hold since I lived and breathed this preparation work for the past couple months. This week I hope that between refreshing every monitoring page in the network I can spend some time kicking the development back into gear.
For people not familiar with Eclipse release convention: Eclipse releases every year in June. Previous releases were Neon in 2016 and Oxygen in 2017.
Ah, the website does a poor job mentioning that Photon is a release of the Eclipse IDE, and not some sub-project under the foundation’s umbrella. Thanks!
Winds looks awesome, but the dependency on a bunch of cloud hosted, closed-source PaaS doesn’t seem so great. There doesn’t seem to be any way for someone to completely self-host Winds.
Bitwarden is my tool of choice for this. I haven’t been a fan of other, more CLI-centric password managers, as they usually don’t have browser integration. The convenience of an in-browser UI to generate a random password, plus the prompts to save it when I submit forms, is very important IMO. Nothing else has come close to that while also being open source.
One thing that irks me about Bitwarden is having to provide an email address and getting an installation id & key if I’d like to self host it for myself. Please correct me if I’m wrong but from what I understand, even for using it without the “premium” features one still needs to perform this step.
If so, I think I’ll stick with my pass + rofi-pass + Password Store for Android combo for now.
This is true, though there are ways around it with a little work, since it is OSS. There are also a few 3rd-party tools, 2 of which are server implementations: bitwarden-go (https://github.com/VictorNine/bitwarden-go) and bitwarden-ruby (https://github.com/jcs/bitwarden-ruby).
There is also a CLI tool (https://fossil.birl.ca/bitwarden-cli/doc/trunk/docs/build/html/index.html)
Are you self-hosting it or using the hosted version? I’m somehow always sceptical of having hosted password storage, even if it’s encrypted and everything.
If it’s not encrypted, they see your secrets. If it is encrypted, they’re in control of your secrets. In a self-hosted setup, you are in control of your secrets. If encrypted, you might lose them. If sync’d to a third party (preferably multiple), you still might lose the key. If on scattered paper copies, each in a safe place, you probably won’t. For some failures, write-once (i.e. CD-R) or append-only storage can help, where a clean copy can be reproduced from the pieces.
That’s pretty much my style of doing this. It’s not as easy as 1Password or something, though. There’s the real tradeoff.
It is encrypted, here is a link on how the crypto works in english: https://fossil.birl.ca/bitwarden-cli/doc/trunk/docs/build/html/crypto.html
I agree Bitwarden is not quite as user-friendly (or as secure, if using local vaults) as 1Password, but for an OSS app, it’s definitely at the top of the list for user-friendliness among password managers.
I run a server locally on my LAN, and my phone/etc sync to it. I definitely don’t want my secrets out in the cloud somewhere, no matter how encrypted they might be.
As mentioned in the comments on the post, they have also fixed enabling word wrap and showing the status bar at the same time. I was always confused as to why those two settings were intermingled with each other.
I find it a little ironic that, after using the open-web browser, I am not able to inspect the sessionstore-backups/recovery.jsonlz4 file after a crash to recover some textfield data, because Mozilla Firefox uses a non-standard compression format that cannot be examined with lzcat, nor even with lz4cat from ports.
The bug report about this lack of open formats was filed 3 years ago, and suggests lz4 was actually standardised long ago, yet this is still unfixed in Mozilla.
Sad state of affairs, TBH. The whole choice of a non-standard format for user’s data is troubling; the lack of progress on this bug, after several years, no less, is even more so.
https://bugzilla.mozilla.org/show_bug.cgi?id=1209390#c10 states that when Mozilla adopted LZ4 compression there wasn’t a standard to begin with. Yeah, no one has migrated the format to the standard variant, which sucks, but it isn’t like they went out of their way to hide things from the user.
It was probably unwise for Mozilla to shift to using that compression algorithm when it wasn’t fully baked, though I trust that the benefits outweighed the risks back then.
This will sound disappointing to you, but your case is as edge-caseish as it gets.
It’s hard to prioritize those things over things that affect more users. Note that other browser makers have security teams larger than all of Mozilla’s staff. Mozilla has to make those hard decisions.
This jsonlz4 data structure is meant to be internal (but you’re still welcome to use the open-source implementation within Firefox to mess with it).
I got downvoted twice for “incorrect” though I tried my best to be neutral and objective. Please let me know, what I should change to make these statements more correct and why. I’m happy to have this conversation.
Priorities can be criticized.
Mozilla obviously has more than enough money to pay devs to fix this: just selling Mozilla’s investment in Cliqz GmbH would cover it.
But no, Mozilla sets its priorities as limiting what users can do, adding more analytics and tracking, and more cross promotions.
Third-party cookie isolation still isn’t fully done, while at the same time money is spent on adding more analytics to AMO, on Cliqz, on the Mr. Robot addon, and even on Pocket, which still isn’t open source.
Mozilla has betrayed every single value of its manifesto, and has set priorities opposite of what it once stood for.
That can be criticized.
Wow, that escalated quickly :)
It sounds to me that you’re already arguing in bad faith, but I think I’ll be able to respond to each of your points individually in a meaningful and polite way. Maybe we can uplift this conversation a tiny bit?
However, I’ll do this with my Mozilla hat off, as this is purely based on public information and I don’t work on Cliqz or Pocket or any of those things you mention. Here we go:
As someone who has also gotten into 1-3 arguments against Firefox, I guess you’ll always have to deal with nit-picking criticism, because you’ve written “OSS, privacy respecting, open web” on your chest. Still, it is obvious they won’t implement an lz4 file-upgrade mechanism (oh boy, is that funny when it’s only some tiny app and its sqlite tables), because there are much more important things than two users not being able to use their default tools to inspect the internals of Firefox.
Sure, but it’s obvious that somehow Mozilla has enough money to buy shares in one of the largest Advertisement and Tracking companies’ subsidiaries (Burda, the company most known for shitty ads and its Tabloids, owns CliqZ), where Burda retains majority control.
And yet, there’s not enough left to actually fix the rest.
And no, I’m not talking about Telemetry — I’m talking about the fact that about:addons and addons.mozilla.org use proprietary analytics from Google, and send all page interactions to Google. If I wanted Google to know what I do, I’d use Chrome.
Yet somehow Mozilla also had enough money to convert all its tracking from the old, self-hosted Piwik instance to this.
None of your arguments fix the problem that Mozilla somehow sees it as higher priority to track its users and invest in tracking companies than to fix its bugs or promote open standards. None of your arguments even address that.
about:addons code using Google analytics has been fixed and is now using telemetry APIs, adhering to the global control toggle. Will update with the link, when I’m not on a phone.
If your tinfoil hat is still unimpressed, you’ll have to block those addresses via /etc/hosts (no offense.. I do too).
I won’t comment on the rest of your comment, but this is really a pretty tiny issue. If you really want to read your sessionstore as a JSON file, it’s as easy as:
git clone https://github.com/Thrilleratplay/node-jsonlz4-decompress && cd node-jsonlz4-decompress && npm install && node index.js /path/to/your/sessionstore.jsonlz4
(That package isn’t in the NPM repos for some reason, even though the readme claims it is, but looking at the source code it seems pretty legit.)
Sure, this isn’t perfect, but dude, it’s just an internal data structure in a slightly non-standard format, and there are still open-source tools that can easily read it - looking at the source code, the format is only slightly different from regular lz4.
A lot of my recommendations have been stated elsewhere in this thread, so I won’t repeat those.
I very much like Puzzles for some quick gaming on the go. It’s a port of Simon Tatham’s collection of games which is already on multiple platforms.
This one is not in F-Droid’s repository, but it is fully OSS: the Lichess Android client. I play “correspondence” chess with friends and family fairly regularly, and we do so over Lichess.
Frozen Bubble, for more lightweight gaming on the go.
Solitaire CG, a collection of solitaire card games.
OctoDroid, for accessing GitHub on the go.
In theory, if someone were accurate and patient enough to input all of the sequences, this could actually happen on a real game boy. I’m amazed at everything involved with this.
Usually this is done by wiring the console’s buttons up to a microcontroller (e.g. TASBot). If someone can input this by hand, they’re probably not human :)
I’m not familiar with the Life community’s terminology, could someone give a primer?
Well, there is a lobster spaceship
While these are welcome and substantial improvements, I find myself continually baffled by the trend of putting messenger functionality in everything. App fatigue is real and I feel like we’re just perpetuating it.
You’re not wrong, but the value here isn’t an attempt to add “me-too” features to Nextcloud, from my understanding. The goal with Nextcloud Talk is to be able to have that messenger functionality in an entirely self-hosted place without relying on third parties. And Nextcloud is starting to develop a network effect significant enough that tying the messenger to Nextcloud is also valuable, instead of embedding XMPP/IRC/Matrix. (Though there is work being done to bridge Nextcloud and XMPP that I’m looking forward to.)
So who wants to adopt the lobster for lobste.rs?
why not zoidberg?
I’m up for donating to a pool for this.
Agreed with /u/gerikson, I’m up for a donation pool! Who wants to spearhead it?
I could put together a pool to try to hit the Silver or Gold level. The link would point back to a note on the about page. There would be no reward for donating besides the warm glow of knowing you’ve helped support an organization that is the source of so much error handling in our code.
Please take this ad-hoc poll by upvoting the single highest amount you’d donate towards this. Enough support and I’ll put something together. (If you made judicious use of your GPU a few years ago and have cryptocurrency to donate, please select the amount of USD you’d convert it into before sending it because I’m game for a fun lark, not a major project.) (Edit: tweeted)
This is in progress.
I’m using RedHat’s other virtualization product, oVirt, for my personal homelab. It’s quite smooth and very well made.
I looked into OpenStack as well, but there are so many disparate components that the installation instructions end up way too complex. I’ve been turned off by the sheer complexity of it all. (Many of the components are optional, which adds even more complexity.)
I plan on deploying OpenShift soon too, to learn me some Kubernetes. :)
@pushcx, or whoever this applies to: I wonder how much memory Lobsters is using?
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
lobsters 30353 0.7 3.0 399412 124800 ? Sl 00:00 6:20 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30359 0.8 4.2 449504 173164 ? Sl 00:00 6:39 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30364 0.7 3.0 396484 122156 ? Sl 00:00 6:24 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30368 0.7 3.0 398528 123688 ? Sl 00:00 6:16 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30372 0.7 3.1 400852 126092 ? Sl 00:00 6:15 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30376 0.7 3.0 397540 123052 ? Sl 00:00 6:19 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30380 0.7 4.6 465020 189584 ? Sl 00:00 6:25 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30384 0.7 3.0 398656 122660 ? Sl 00:00 6:15 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30388 0.8 3.0 399388 124456 ? Sl 00:00 6:33 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30392 0.7 3.0 399352 124268 ? Sl 00:00 6:05 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30396 0.8 3.0 396560 122720 ? Sl 00:00 6:35 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 30422 0.7 3.0 400140 124288 ? Sl 00:00 6:23 unicorn_rails worker -c config/unicorn.conf.rb -E production -D config.ru
lobsters 32327 0.0 2.4 235416 97696 ? Sl Jan10 0:12 unicorn_rails master -c config/unicorn.conf.rb -E production -D config.ru
It’s been two weeks since the service was bounced, so this is stable usage. I know there are issues with ps; if you have a preferred alternate measurement I can check it.
I don’t know which specific problems of ps you are referring to, but if you want to check the real memory cost of a process (USS/PSS) under Linux, smem might be a good tool.
This is mostly a press release/advertisement of a mobile app. An interesting one perhaps, but I think this is pretty much marketing spam.
I like this. It’s a good idea in theory, but the practical deployment of it is surprisingly difficult (tooling is missing), and any mistake means your website is blocked by the browser’s interstitial with no way around it. This concept needed more time to bake, perhaps with some tools/patches/plugins written first.
It’s also an attack vector. If a site is compromised an attacker can pin their own cert.
I’m not sure what kind of integration each of the sites would include. I think it would be nice to see a list of other Lobsters-powered websites, perhaps more like a wiki page than anything managed by the admins. A little bit of federation would be cool to port/collate the user’s profile across instances, but that’s about as far as I’d expect such integration to go.
This week I have an in-person interview as a follow up to a phone screening. I hope I get the job! Along with that, I will be submitting my resume to a few more tech companies (both remote positions and a couple I found in the Phoenix valley) for system administration positions. Fingers crossed!
In programming news, I’ll be creating a WebRTC signaling server and a TURN REST API proxy. I’m developing what amounts to a serverless P2P coder’s notepad, but unfortunately a fully serverless version isn’t possible. You still need a couple of servers on today’s internet – the signaling server and the TURN server. (Yay, NAT.) I already have a TURN server, but with just a few hardcoded credentials; not something I can ship along with the JS application code. So, using this RFC and coturn, I’ll be able to generate ephemeral credentials for each WebRTC session. Hopefully, anyway.
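The credential scheme in that memo is just an HMAC over a username that encodes an expiry timestamp, so the TURN server can verify and expire credentials statelessly. A rough Python sketch of what the proxy endpoint would compute (the function name and signature are my own, not from any library):

```python
import base64
import hashlib
import hmac
import time

def turn_credentials(shared_secret, user_id="", ttl=24 * 3600):
    """Ephemeral TURN credentials in the style of the "REST API For Access
    To TURN Services" memo, which coturn supports via its use-auth-secret /
    static-auth-secret options."""
    expiry = int(time.time()) + ttl
    # The username encodes the expiry (and optionally a user id), so the
    # TURN server can check freshness without any shared state.
    username = f"{expiry}:{user_id}" if user_id else str(expiry)
    # The password is the base64-encoded HMAC-SHA1 of the username,
    # keyed with the secret shared between proxy and TURN server.
    mac = hmac.new(shared_secret.encode(), username.encode(), hashlib.sha1)
    password = base64.b64encode(mac.digest()).decode()
    return username, password
```

The same shared secret goes into coturn’s static-auth-secret, so the proxy never has to talk to the TURN server at all; it just hands the pair to the JS client for its RTCPeerConnection config.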