While it's not a single platform, you might be interested in Tridactyl. It's a Firefox extension; some of its greatest hits:
Vim-style keybinds (link navigation, element selection, scrolling, tab/buffer switching)
Allows integration with the underlying system using a native messenger. You can send a YouTube video to mpv, or pass a region of text to a text-to-speech engine.
Keybinds are user-definable and can be composed, e.g. do x, then y, then pipe that to z.
Ever want to edit a text block (like this one) in the editor of your choice? Just hit Ctrl+I.
custom color themes
Has a scriptable way to define new functions, albeit a little messy. For example, alias tabsort jsb browser.tabs.query({}).then(tabs => tabs.sort((t1, t2) => t1.url.localeCompare(t2.url)).forEach((tab, index) => browser.tabs.move(tab.id, {index})))
defines a tabsort alias that, well, sorts tabs by URL.
I'm not affiliated with the project, but I do sing its praises every chance I get.
Thanks; I’ve heard of this but A) I don’t use vim or want to learn it and B) it runs on a browser whose rendering engine just had its entire team get fired and C) it’s also a google-funded browser.
Technically, Google did not do any work on WebKit; it just added V8, then later forked the thing. So it's all descendants of KHTML → WebKit, with Google contributing nothing to WebKit proper. And there's Gecko, which I guess can also plausibly be called Google-funded…
[The problem, of course, is that with people saying «Living Standard» without noticing it's an oxymoron, any browser will either have large compatibility issues (even if the good side of the Web is viewable even in Links2), or be a horrible mess because it chases a moving target that moves too fast to allow reflection on consistent design.]
I’m still not sure what to do about QtWebKit support - right now, I’m still waiting for something to happen (ideally a new QtWebKit release rebased on a newer upstream WebKit) given that it’s still in active (even though slow) development: https://github.com/qtwebkit/qtwebkit/commits/qtwebkit-dev
I want a Wikipedia for time and space. I want people to collaborate on a globe, marking territories of states, migrations of peoples, important events, temperatures, crop yields, trade flows. I want to pick which layers I want to see at one time. This sort of thing would give us a clearer, more integrated view of history, and show how connected we all are. A volcano eruption in Mexico led to crops failing around the world. New World crops led to an explosion of people in southern China, a previously less inhabited place. A Viking woman met natives in present-day Canada and also visited Rome on pilgrimage.
I also think about this one from time to time. In my view it's kind of like a Wikimapia for history; I usually think about how country borders and armies could be represented.
“systematically collects what is currently known about the social and political organization of human societies and how civilizations have evolved over time”
Animated diagrams. Something like Visio or Omnigraffle, but with the ability to easily show messages flying around, instances appearing and disappearing, clusters moving, etc.
People usually reach for PowerPoint or Keynote for this, which drives me up a wall. I'd rather have something that can directly produce a video file or animated GIF.
There was a CS undergraduate thesis project that did this at AppState in December 2019. I'm not able to find it right now, but it was pretty cool, similar to how 3blue1brown's stuff looks/works.
I often want to use animation to explain the dynamics of software architecture, either in terms of the interaction of parts during runtime, or the evolution of the architecture itself over time.
Lucidchart is definitely not perfect, but it has layers and a presentation mode that should come close to what you want, except for the video output.
But maybe if you screencap the presentation?
A simpler WebRTC alternative supported by major browsers.
Transferring video/audio client-server or even p2p using NAT traversal is not rocket science. The problem with WebRTC is that rather than making one spec that solves the specific problem at hand, it combines a bunch of older specs harking back to the 90s, but only partially implements them (SDP, STUN, DTLS, RTP, SRTP, SCTP). Seriously, it's like some old telco like Ericsson having a wet dream thinking their old research turds are OK in 2020.
Each standard comes with an RFC, but all the WebRTC implementations in the wild (the 3-4 we have in the browsers) use an undocumented subset of each. There is no definite way to implement a WebRTC stack: look at existing code, read specs, test, fail, decode packets, investigate.
SDP just isn't an OK way to transfer structured data (even XML would be better!). It also comes in flavors like "plan-b" and "unified", and neither is a sure way of supporting all browsers. Unified is apparently the modern way, but it's amazingly verbose for no reason (the organization of media around bidirectional "transceivers" is just absurd).
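To make the complaint about SDP as a data format concrete: SDP is just single-letter "type=value" lines, so all real structure lives in conventions layered on top. Here is a toy Python sketch (not a real parser; the sample SDP lines are typical but invented) showing how little the format itself gives you:

```python
# Toy illustration: SDP is just "type=value" lines, so a "parser" can only
# bucket values by single-letter key; all meaning lives in conventions.
SAMPLE_SDP = """\
v=0
o=- 4611731400430051336 2 IN IP4 127.0.0.1
s=-
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=rtpmap:111 opus/48000/2
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
"""

def parse_sdp(text):
    """Collect repeated single-letter keys into lists; every value stays an
    opaque string that needs its own ad-hoc sub-parser."""
    fields = {}
    for line in text.splitlines():
        if not line or "=" not in line:
            continue
        key, _, value = line.partition("=")
        fields.setdefault(key, []).append(value)
    return fields

fields = parse_sdp(SAMPLE_SDP)
# The "a" (attribute) lines carry most of the meaning, but only as strings:
print(fields["a"])
```

Note that even after this "parsing", every attribute line still needs its own bespoke grammar, which is exactly the problem.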
RTP was designed in an age not even remotely like 2020. Lots of words and effort are spent on multicast, unencrypted transfers, and imaginary A/V infrastructure that doesn't exist.
The WebRTC spec as adopted by the W3C is not really a spec, because rather than specifying anything, it reads as if someone took the original WebRTC C++ code wholesale and translated it into English. And it's not even correct: it says implementations should do X, but in practice everyone does Y.
WebRTC is trying to do a bit too many things. I'd like to split the spec into (at least) two parts: one that just concerns getting audio/video from point A to B (including NAT traversal), but doesn't really care about multi-participant calls, p2p, or god forbid proxying/routing of A/V.
SDP needs to be obliterated forever. Pretty much any information exchange format would be better than this.
STUN/TURN is OK for NAT traversal, but the spec could be reduced a lot to support only the problem at hand.
Getting video from A to B needs some feedback mechanism for dropped packets, requesting keyframes, etc. This is what RTP/RTCP does, but there's too much darn legacy.
Take RTP. Here's a fun detail. It has a "header extension" mechanism to allow for experimentation (https://tools.ietf.org/html/rfc3550#section-5.3.1); the spec specifically says it can be ignored and that most things should be transferred in the payload instead. But the payload is encrypted under SRTP, and some metadata is needed unencrypted in the header. Enter https://tools.ietf.org/html/rfc5285, where this experimental, ignorable header is now specified and carries absolutely crucial information for WebRTC. And it does so together with… (drumroll)… SDP! Each extension key is mapped to a URI that carries significant meaning, which is often outdated or undocumented, but very important.
An Emacs for the web – browser primitives, but with hooks and definitions that allow full user control over the entire experience, integrated with a good extension language, to allow for exploratory development. Bonus points if it can be integrated into Emacs;
a full-stack language development environment, from hardware initialization to user interface, that derives its principles (user transparency, hackability) from Smalltalk or Lisp machines instead of from the legacy of Unix.
Re 2: Sounds like Mezzano https://github.com/froggey/mezzano apparently. Actually running on arbitrary hardware is even harder, of course, because all the hardware is always lying…
Really, you’d bootstrap on QEMU or something, and then slowly slowly expand h/w support. If you did this, you could “publish” a hardened image as a unikernel, which would be the basis of a deployment story that is closer to modern.
ETA: I’m not sure I’d use Common Lisp as the language, but it’s certainly a worthwhile effort. The whole dream is something entirely bespoke that worked exactly as I want.
Well, Mezzano does publish a Qemu image, judging from discussions in #lisp it is quite nice to inspect from within, and judging from the code it has drivers for some specific live hardware… A cautionary tale, of course, is that in the Linux kernel most of the code is drivers…
Vacietis is actually the first step in the Common Lisp operating system project. I'd like to have a C runtime onto which I can port hardware drivers from OpenBSD with the minimal amount of hand coding.
Like, I’ve used w3 in the past, but I’m thinking more like xwidgets-webkit, which embeds a webkit instance in Emacs. I should start hacking on it in my copious free time.
That makes a lot of sense. This makes me think of XEmacs of old, ISTR it had some of those widget integrations built in and accessible from elisp.
Come to think of it, didn’t most of that functionality get folded into main line emacs?
I love emacs, a little TOO much, which is why I went cold turkey 4-5 years back and re-embraced vi. That was the right choice for me, having nothing at all to do with emacs, and everything to do with the fact that it represents an infinitely deep bright shiny rabbit hole for me to be distracted by :)
“If I can JUST get this helm-mode customization to work the way I want!” and then it’s 3 AM and I see that I’ve missed 3 text messages from my wife saying WHEN ARE YOU COMING TO BED ARE YOU INSANE? :)
I feel seen. Yeah, I basically live in Emacs; it informs both of my answers above. Basically, I want the explorability of Emacs writ large across the entirety of my computing.
Pretty much agree with your post. Removing the distinction between shell and terminal emulator would allow new and interesting modes of operation. One of them could be pausable and introspectable pipes. Another could be remote SSH sessions that have access to the same tools as the local one.
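As a rough sketch of what "introspectable pipes" could mean once the shell owns the whole pipeline instead of handing it to the kernel: each stage is a lazy generator the shell controls, so it can tap (and in principle pause) the stream between stages. The `Pipe` class and its methods are entirely hypothetical, just to illustrate the idea:

```python
# Hypothetical sketch: a pipeline whose intermediate values the "shell" can
# observe, because stages are generators it controls rather than opaque
# kernel pipes between processes.
class Pipe:
    def __init__(self, items):
        self._gen = iter(items)

    def map(self, fn):
        self._gen = (fn(x) for x in self._gen)
        return self

    def filter(self, pred):
        self._gen = (x for x in self._gen if pred(x))
        return self

    def peek(self, tap):
        """Introspection point: observe items as they flow past this stage."""
        def gen(src):
            for x in src:
                tap(x)
                yield x
        self._gen = gen(self._gen)
        return self

    def run(self):
        return list(self._gen)

seen = []
result = (Pipe(["foo.txt", "bar.rs", "baz.txt"])
          .filter(lambda p: p.endswith(".txt"))
          .peek(seen.append)          # inspect the stream mid-pipeline
          .map(str.upper)
          .run())
print(result)  # ['FOO.TXT', 'BAZ.TXT']
print(seen)    # ['foo.txt', 'baz.txt']
```

Because stages are lazy, a real implementation could also suspend iteration at any `peek` point, which is what pausable pipes would need.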
First paragraph of the post explains that I am not looking for powershell. It indeed is a big improvement over bash, but in areas I personally don’t care about.
If you read the post this isn’t what the OP is going for. Powershell brings some excellent new capabilities to the table with object pipelines, and has some nice new ideas around things like cmdlets and extensability, but his post goes into much more detail about user experience aspects Powershell doesn’t even come close to providing.
And ironically, because I'm "cutting" the interactive shell, it should be more possible than with bash or other shells, because we're forced to provide an API rather than writing it ourselves.
I had a discussion with a few people about that, including on lobste.rs and HN. The API isn’t very close now, but I think Oil is the best option. It can be completely decoupled from a terminal, and only run child processes in a terminal, whereas most shells can only run in a terminal for interactive mode.
Basically a new "application container" is very complementary to Oil. It's not part of the project, but both projects would need each other. bash likely doesn't have the hooks for it. (Oil doesn't either yet, but it has a modular codebase like LLVM, where parts can be reused for different purposes. In particular the parser has to be reused for history and completion.)
Amusingly, using :terminal in neovim changed a lot of things for me. I could then go to normal mode and go select text further up in the ‘terminal’. Awesome!
Yeah, this speaks to some of the power he references in his post that emacs brings to the table. IMO one of the things that makes neovim so impressive is that it takes the vim model but adds emacs class process control.
I’d love it if people would do more with the front end / back end capabilities neovim offers, beyond just using it for IDE integrations and the like.
A trimmed-down soft fork of Firefox, removing everything not necessary: everything marketing/SaaS-related like Sync, Pocket, studies, random advertising, and so on; all the unnecessary web API stuff like "payment requests", "battery", "basic authentication"; and APIs that no longer need support, like FTP.
My goal here is to move towards something that doesn’t have behavior I don’t like (nagware and privacy leaks), and has a smaller surface area for security vulnerabilities.
The hardest problem with this is keeping it easy to sync up with Firefox for the code that you haven't deleted (using an old version isn't an option because of security). My initial work on this (which I don't anticipate having time to pick back up) was a system that stored "patches" in a version control system, where a patch was a directory containing a list of files to delete/create, diffs to apply, and scripts to run. This made the inevitable "merge conflicts" substantially less of a problem, since the system could be substantially smarter than a diff tool (e.g. instead of applying a regex once and making a diff, I could write a script that said "apply this regex everywhere").
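A minimal sketch of that patch-directory idea, in Python: each patch directory holds a deletion list and a set of regex rewrite rules to apply everywhere, so upstream churn doesn't invalidate a stored diff. The file names (`delete.txt`, `rules.json`) are invented for illustration, not from the actual system:

```python
import json
import os
import re

def apply_patch_dir(tree_root, patch_dir):
    """Apply one 'patch' directory to a source tree: delete listed files,
    then apply regex rewrites to every remaining file.
    Hypothetical layout: delete.txt (paths) + rules.json ([[pattern, repl]])."""
    delete_list = os.path.join(patch_dir, "delete.txt")
    if os.path.exists(delete_list):
        with open(delete_list) as f:
            for rel in f.read().split():
                target = os.path.join(tree_root, rel)
                if os.path.exists(target):
                    os.remove(target)

    rules_file = os.path.join(patch_dir, "rules.json")
    if os.path.exists(rules_file):
        with open(rules_file) as f:
            rules = json.load(f)
        for dirpath, _, files in os.walk(tree_root):
            for name in files:
                path = os.path.join(dirpath, name)
                with open(path) as f:
                    text = f.read()
                for pattern, repl in rules:
                    # "apply this regex everywhere": robust to upstream
                    # line-number churn, unlike a stored unified diff
                    text = re.sub(pattern, repl, text)
                with open(path, "w") as f:
                    f.write(text)
```

The point of the sketch is the last comment: a rule like "replace every call to this telemetry function" keeps working across upstream releases, where a literal diff would conflict.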
In terms of reducing surface area, I’d love to see a build without WebRTC, or even websockets. Things like the recent report about ebay doing local machine portscans seemed to be viewed as poor behavior on their part rather than a fundamental firewall breach allowed by overzealous incorporation of poorly thought out standards. The web is rapidly becoming insecure by design, but the whole reason to use the web is to provide a sandbox for untrusted remote content.
If I were developing it, I would be pretty worried that a change like that would break too much of the web to be useful. A "better" browser is only useful if it actually has users.
You'd have to actually measure how many sites that would break to be sure. One option might be (if it were reasonably easy to maintain, not sure) to put it all behind a webcam-like permission that needs to be granted to each site and is requested upon use.
There is a fork called Pale Moon. The issue is that the code base is too large for the maintainers, and it probably suffers from very old memory-safety bugs that no longer apply to Firefox (some recent CVEs still apply to Pale Moon). Also, it doesn't implement many newer APIs (WebExtensions, WebRTC, etc.), which is the design choice you suggest.
Something actually good for photo management and editing.
RAW processing must be done on the GPU, as in vkdt, but, like, written in Rust, using wgpu or something. There is already a RAW loader in Rust; use it.
don't use imgui-type things for the UI, noooooo, just use GTK please and make it look nice, simple, and GNOME-ish
what is infuriating about Lightroom and every clone is that they treat exports as fire-and-forget. If you browse the library, it re-renders from RAW all the time, maybe using some kind of cache (badly). I do want to manage the JPEGs! In fact I want to see the JPEGs most of the time, but if a RAW exists for a photo I want a "redevelop" button to exist, of course. And the discovery of the RAW origin of the JPEG must be really "bulletproof", no matter how I move stuff around the FS, as long as it's below the directories the software knows about.
conversely, software like digiKam that aims to organize all the things is usually not great at RAW
why does everything suck over NFS? I store photos on NFS shares; please take that into account and access everything optimally. Don't mmap the photos. Don't use SQLite databases that don't work over NFS. Argh.
All the putative Lightroom replacements are terrible. I would happily pay what I pay to Adobe to literally anyone else for software that did the 60% of what Lightroom does that I use.
Organizing, developing, importing, and exporting. And I don't do that much developing in Lightroom, but I do use Photoshop, which opens a whole other kettle of fish.
why does everything suck over NFS? I store photos on NFS shares; please take that into account and access everything optimally. Don't mmap the photos. Don't use SQLite databases that don't work over NFS. Argh.
I feel your pain!
I’ve been continually disappointed at how poorly many Linux apps perform when run on an NFS file share.
Also, I do not speak for my employer, but I work for the AWS EFS team and my service presents itself as an NFS file share, so we see a fair bit of customer pain around this as well.
It surprises me how little innovation I see in the remote filesystem space these days given that we have pervasive gig speed networking growing on trees and computers so fast that the processing load of the NFS protocol is largely inconsequential for many use cases.
I work in the data space and everything is moving to “the cloud” and that typically means object stores. People have wasted too much money on things like HDFS, they just want to store files somewhere and not have to think about it. (The pain that these are not really file systems is largely ignored)
Yeah it sometimes surprises people how many use case really work well when implemented on a file system, especially one that has reasonable capabilities for locking and the like.
That’s true everywhere from the back end where people are storing raw seething data all the way up to the consumer user experience where Apple tried DESPERATELY for the first decade of its life to hide the fact that there was a UNIX-ish userland behind the curtain with a bog standard traditional filesystem, and even they’ve gone back on that by providing their “Files” app and interface in recent releases.
Files are just an incredibly useful way to think about data from just about every angle. Certainly not THE ONLY way or THE BEST way because such descriptors are pointless when talking about abstractions.
In a sense, files are a reasonable (of course almost nothing is perfect, and files are not) implementation of a core and almost unavoidable notion of compartmentalisation, or a Bunch Of Stuff, together with the clearly desirable idea of using various tools on the same Bunch Of Stuff… Hard to avoid that completely!
Seconded on every Lightroom clone being… not great. Sadly I've stuck with actually paying for Lightroom, but I would gladly pay $80+ one time for an adequate replacement.
I do want to manage the JPEGs! In fact I want to see the JPEGs most of the time, but if a RAW exists for a photo I want a "redevelop" button to exist, of course
You’ve probably tried it, and it has its own warts, but digikam does track the jpeg and allows you to re-import raw images. Just letting you know if you haven’t tried it before :)
Of course. The remarks about Lightroom and its clones definitely apply. Also its GPU support is incomplete and based on OpenCL – with some advanced usage of OpenCL that’s not supported by Mesa Clover (the “images” feature especially).
Things I want and/or have worked a little on–that hopefully already exist and that I just haven’t heard of yet. :)
A proper client-server map tool for pen-and-paper RPGs that supports quick sketching of environments or detailed art. Maptool is what we used to use at a friend's house, but it's kinda clunky… the setup with a projector on the table was not, though. :)
A personalized web spider rancher and search engine that seeds itself based on your browsing history. I'd also want to be able to federate it or allow collaboration with other individuals or groups.
An open-source VR engine that isn’t Unity or Unreal (or webshit). I want something I can quickly prototype ideas on, like lovr, but that isn’t a walled garden and isn’t pushing me to using the current dominant model of game development tooling.
A self-hosted IFTTT alternative. I’ve already written one, and while an interesting step it isn’t what I want.
A self-hosted geocoding database. Something like Pelias but like I’m super lazy and just want it done somehow.
A good time-series database that can do explicit interpolation and extrapolation, and give 'quality' estimates for the data returned. I've worked on one or two of these, and everybody makes them too complicated (using ELK or Cassandra or Big Data shit or whatever) but also somehow manages to miss the use case of an engineer who is taking signal measurements and needs to do reliable math on them. I just want Redis, but for boring scalar and vector measurements that fit into a schema. I have a whole rant about this, and I'm also probably ignoring some solution that already does this!
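The core of that wish, interpolation plus an honest quality flag, is small enough to sketch. A toy Python version (the `max_gap` threshold and the quality labels are invented, just to show the shape of the API an engineer would want):

```python
from bisect import bisect_left

def sample(series, t, max_gap=10.0):
    """Linearly interpolate `series` (sorted [(time, value)] pairs) at time t.
    Returns (value, quality): 'measured' for exact hits, 'interpolated'
    between close points, 'extrapolated' beyond the ends, and 'poor' when
    the bracketing points are further apart than max_gap."""
    times = [p[0] for p in series]
    i = bisect_left(times, t)
    if i < len(times) and times[i] == t:
        return series[i][1], "measured"
    if i == 0 or i == len(times):
        # Hold the nearest endpoint, but flag it so downstream math can
        # reject it instead of silently trusting a made-up number.
        nearest = series[0] if i == 0 else series[-1]
        return nearest[1], "extrapolated"
    (t0, v0), (t1, v1) = series[i - 1], series[i]
    v = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    quality = "interpolated" if (t1 - t0) <= max_gap else "poor"
    return v, quality

data = [(0.0, 1.0), (5.0, 3.0), (60.0, 4.0)]
print(sample(data, 2.5))   # (2.0, 'interpolated')
print(sample(data, 30.0))  # falls in the 5.0..60.0 gap -> quality 'poor'
print(sample(data, -1.0))  # (1.0, 'extrapolated')
```

The point is that the quality label travels with the value, so "reliable math" downstream can weight or discard samples instead of treating every returned number as a measurement.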
A tool I can point at a directory of music and just say "sort out all of this shit and please remove obvious redundancies". I have a bunch of music files collected over two decades, and the one thing I can guarantee is that none of the ID3 tags are consistent, none of the filenames are consistent, and none of the directory structures are consistent… I just want a program to go do the needful somehow. Machine Learning. GPT. AI. BIG DATA. idgaf.
A good replacement/rewrite of Balsamiq Mockups that is documented for other people, runs on Linux, and isn't on the Cloud or some gnarly Adobe platform. I really miss this tool… I paid for it and I don't really have any systems that run it easily anymore. :(
I am a huge fan of Godot for fast prototyping. I haven't experimented with the VR support much, but it is open source and extensible, so even if it has issues now, they should be resolved in the future.
A tool I can point at a directory of music and just say “sort out all of this shit and please remove obvious redundancies”
Have you taken a look at beets yet? It uses the MusicBrainz database (community created and edited) and works great for me. If you want a gui, I have heard good things about MusicBrainz Picard.
+1 for Beets. It’s built for detail obsessed control freaks like us who want to specify EXACTLY how we want our music categorized, de-duped and stored.
I use and love it to organize my music for eventual indexing and playback with Plex.
Oh, thanks for the reminder, I wanted to have another look at Huginn. I actually started to build something like this nearly 10 years ago but it didn’t get very far because I lost interest very quickly. And I noticed it didn’t actually benefit me a lot.
My idea was more something to be described in current terms as a mix of cronjobs, IFTTT, and Alexa - but all text-based, like a very fancy IRCBot. Not sure if Huginn 100% maps to that, but I think it’s close.
Oh, I don’t mean to sound like I’m griping! I’m going to be using it soon. :)
It's just a little small in what it supports, which is to be expected. I'll also have to talk to it over enet for my purposes, instead of, like, websockets. I'm a total n00b at Lua, so there may be an obvious solution here I'm just missing.
There is not an abundance of alternatives. I am not particularly fond of the API model (the same goes for LÖVE, though). I have not done anything substantial in lovr myself, mostly played with its internals, using it as a test platform for how VR compositing of independent clients would work and what the needs would be API-wise in a VR desktop-like setting.
Is that even possible? Is there a sufficiently evolved protocol specification and even maybe test suites to allow you to create your own implementation without sharing source code?
In the absence of a spec the easiest way to do clean room reversing is to have someone read the code and write a spec. Then, only people who have never seen the code read this spec and write the new code.
There are some other relevant reversing strategies, but the above is likely to be sufficient for ZFS. You get a second implementation and a spec, well on the way to being a standard at that point.
My ignorance is showing here, but this FEELS like an incredibly difficult problem space for that approach. Just from reading and talking to others, I get the sense that there are a huge number of subtleties around the various subsystems of ZFS and how they interact.
I currently use MyFitnessPal for calorie-tracking and BigOven for recipe storage, and I cordially detest them both. They’re both buggy, slow, lacking a lot of features that I’d like to have, and “unmaintained” in that special SaaS way where vendors just stop fixing bugs or adding features once they have cash flow going. They also both cover like 80% of the same ground - BigOven has a way to calculate the nutrition numbers for a recipe but no daily calorie log, while MFP has no way to store recipes as anything more than a list of ingredients.
I want to replace the pair of them with a lightning-fast local-first app, but I just never seem to get around to it.
I second detesting MFP, but I really haven’t found anything better. It’s one of the few services I use where the mobile app is really the only way to use it (they have a web app, which is completely unusable IMO). Also very frustrating that they don’t have a public API. I spent some time reverse engineering their mobile sync API, but gave up after a while…
I did some research into making something similar, and found https://world.openfoodfacts.org/ as an interesting data source, but haven’t had the time/motivation to build anything yet.
I’ve tried a whole bunch of MFP alternatives and I keep coming back to MFP (which I also do not like) because it just has more stuff in its database - I can scan almost anything I buy (in the UK) and MFP will know the nutritional details.
If you think it might be useful, but found a dedicated CRM app a bit much, have you tried using the notes field in your phone’s address book? I use it to jot down names of kids & spouses and things like “vegan”, “teetotal”, “pronounced […]” etc. They’re synced everywhere automatically and they’re searchable in a hurry from your phone.
I think it may seem creepy because of associations with corporations and marketing.
However, when I actually think about it… Would my life be richer and better if I was more consistent about staying in touch with people? Almost certainly!
I tried this but had difficulty getting the self hosted version to work. As far as creepy, I think of it as just a memory extension. It isn’t anything someone with a good memory couldn’t do, just helps mortals to remember birthdays, peoples’ interests, etc.
Alternative code formatter for Rust that works gently and cooperatively, like gofmt.
rustfmt is a blunt tool with lots of edge cases that destroy readability on purpose, because rustfmt always chooses its own heuristics over any human input.
(E.g. gofmt preserves the author's decision about whether a construct should be a one-liner or span multiple lines. rustfmt will explode expressions when it feels like it, and make spaghetti lines if it estimates they'll fit.)
rustfmt is a blunt tool with lots of edge cases that destroy readability on purpose, because rustfmt always chooses its own heuristics over any human input.
I found that if rustfmt makes my code less legible, it often wasn’t the formatting that was the problem but my code.
If your code isn't great, then improving it is the job of compiler warnings and clippy lints, which should clearly explain how to make it better. A formatter isn't a linter, and it's not supposed to give you vague negative hints by making your code less readable.
Formatters are supposed to make code more readable and free users from worrying about these details. If you need to rewrite your code to work around the formatter's issues, the formatter is failing on both counts.
An Oberon compiler targeting WASM that runs in the browser and emulates the Oberon System’s unique user interface.
A total rewrite of sam.
A reimplementation of Inferno using WASM instead of Dis and Limbo.
An implementation of a small VM hypervisor exporting “standard” VirtIO devices, and then a small theoretically-runnable-on-bare-hardware-but-really-just-virtualized single-user operating system for it. Basically a modern reimagining of VM/CMS.
Actually Kinda Working On But Without Any Sense of Urgency:
A ray-tracer in Rust
A terrain generator, also in Rust, that outputs terrain in a format that can be consumed by the aforementioned ray-tracer
A new text editor (seriously I have a problem)
Things That Are On The Back Burner But As Soon As Work Either Endorses Them Or Calms Down Enough To Give Me Time To Work On Them:
An expert system shell in Python
A super top-secret (in the conversational sense, not actually involving clearance) project that is going to be incredible but I can’t say anything more about right now involving extremely high-speed pattern matching
Reworking the TCP reassembly portion of our product to be even faster (it’s already one of the faster ones out there, if I may be so bold)
An implementation of ‘/dev/compute’ to expose GPUs and similar coprocessors on 9front. The goal would be to drop all the legacy baggage around graphics in the interface, and simply expose the GPU cores directly to the OS, providing a way to pass the GPU a chunk of code and a block of memory, and letting it do its thing.
You’d get high performance compute then, and fast graphics could then be done via pure software rendering on top of that.
GPUs are getting sufficiently general purpose that this is likely to work well.
More automated security auditing tools for crawling public code repositories such as npm or crates.io, looking for anomalies and warning people if they use suspicious packages. I tried doing this myself but there’s other stuff I want to do too.
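The auditing idea can start as dumb heuristics over registry metadata. A toy Python sketch, operating on an npm-registry-style record (the field names and thresholds here are assumptions for illustration; a real tool would fetch metadata from the registry API and use far better signals):

```python
# Toy heuristics over npm-style package metadata (field names assumed).
def audit(meta):
    warnings = []

    # Lifecycle scripts run arbitrary code on `npm install`: a classic
    # vector for malicious packages.
    scripts = meta.get("scripts", {})
    for hook in ("preinstall", "install", "postinstall"):
        if hook in scripts:
            warnings.append(f"runs arbitrary code at install time via '{hook}'")

    # Obscure packages deserve more scrutiny (threshold is arbitrary).
    if meta.get("weekly_downloads", 0) < 100:
        warnings.append("very low download count")

    # Crude typosquat check: same letters as a popular package, rearranged.
    name = meta["name"]
    for popular in ("react", "lodash", "express"):
        if name != popular and sorted(name) == sorted(popular):
            warnings.append(f"possible typosquat of '{popular}'")

    return warnings

print(audit({
    "name": "lodahs",
    "weekly_downloads": 12,
    "scripts": {"postinstall": "node ./collect.js"},
}))
```

Even heuristics this crude catch real attack patterns (install-script payloads, typosquats); the hard part the wish describes is running them continuously over whole registries.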
A biiiiiig pile of Linux+Rust graphics infrastructure stuff. Lots of low-hanging fruit to be plucked there still.
Something to explore the design space of “more minimal web”. Not quite as minimal as Gemini, probably something that uses HTTP but better/different text format and more minimal API.
Something to explore the design space of "human-writable encoding format better than JSON". There are plenty of machine-writable ones, and TOML is very nice for configuration files, but "easy to produce deeply nested structures and less screwy than JSON" is still TBD.
Something that solves the same problems that systemd does but is more minimal.
Something that solves the same problems that pulseaudio does but is lower level.
It seems like the Bevy project has shown that wgpu is a nice way to interface with graphics APIs in a high-level way.
What do you think still needs work in the graphics ecosystem in Rust? Personally, I’d like to see more development on some of the GUI crates like Druid and Iced. Do you think there’s still a lot missing on the game front? Curious to hear your thoughts as someone who maintains a game crate.
The current de-facto windowing crate is winit, which is very good but sometimes causes friction in games, at least the sorts of games I want to write. It makes design compromises around events that are kind of inconvenient, some for the sake of GUI applications, some for the sake of as close to zero overhead as possible. I think it would be interesting to try to make something with 80% of the functionality and 20% of the size. miniquad includes bindings to the Sokol library, which does exactly this sort of thing, but it's C.
It's not actually graphics, but cpal and rodio are the de facto portable audio output crates, and they could use more love than they get.
A good GUI crate would be a potentially killer feature. Iced is as close as we have, but it could use more maintainers.
With regards to more minimal web, I invite you to check out my project, which also touches upon human-writable data structures.
It is a forum building system which works in Mosaic, Netscape, Lynx, Opera, and IE, among others. Newer browsers get nifty enhancements such as actions which don't reload the page, but accessibility and compatibility are high priorities.
The “API” is based on several text-based tokens, like referencing posts with >> and hashtags with bindings.
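A minimal sketch of how such text tokens might be parsed; this is my guess at the grammar described above (">>N" referencing a post, "#word" as a hashtag), not the actual project's code:

```python
import re

# Guessed token grammar: ">>123" references post 123, "#tag" is a hashtag.
POST_REF = re.compile(r">>(\d+)")
HASHTAG = re.compile(r"#(\w+)")

def extract_tokens(body):
    """Pull structured references out of a plain-text post body."""
    return {
        "refs": [int(n) for n in POST_REF.findall(body)],
        "tags": HASHTAG.findall(body),
    }

print(extract_tokens("Agreed with >>42 and >>7, see #retrocomputing"))
# {'refs': [42, 7], 'tags': ['retrocomputing']}
```

The appeal of this style of "API" is that the tokens degrade gracefully: a browser from 1994 still shows readable text, while newer clients can turn the refs into links.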
Ah, the fun question :) I have several, but here’s my notes on the Todo/Email/Calendar of my dreams:
“Thought experiment 1: What if I dumped my whole inbox into my current todo software (Todoist)? This would suck, because it's a crappy email client.
Thought experiment 2: What if I dumped all my todos into Inbox? This would also kinda suck, because it’s missing a few features that a good todo should have…
That second one told me what I need: Inbox has "do not show me this thing until a date", i.e. the first date something should be on your radar, and Todoist has "item is due on this date". For a good todo I need both.
Imagine a tool whose goal is “what am I doing in the future?”, with each item having a block of text associated with it, and a model that lets you send items to other people. Emails become new items in your “today” list, with a start date of right now and an unspecified end date. You can do the Inbox-style “put off this item until its actual usable date”. Most traditional todo items are either “do this by this date” or “do this at some point”. The former have an end date but no start date, and the latter have neither date, and so need some other interface for exactly where they get put.
Other notes:
You possibly have some local rules for tagging, prioritisation and other info added to incoming email.
All calendar items can be a todo with a defined start/end date
Being able to relate todo items in a DAG way (many separate graphs allowed, but no cycles within a single todo item graph) to allow for all forms of project sequencing is also awesome.
“
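The two-date model above (a radar/start date plus a due date, with calendar items as the fully-dated case) could be sketched roughly like this; it is a minimal illustration and all names are hypothetical:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Item:
    title: str
    start: Optional[date] = None  # first date this should be on your radar
    due: Optional[date] = None    # "item is due on this date"

def visible_today(item: Item, today: date) -> bool:
    # Inbox-style snooze: hide items whose start date is still in the future.
    return item.start is None or item.start <= today

def bucket(item: Item) -> str:
    if item.start and item.due:
        return "calendar"   # defined start and end: effectively a calendar entry
    if item.due:
        return "deadline"   # "do this by this date"
    if item.start:
        return "snoozed"    # surfaces on its start date
    return "someday"        # "do this at some point": needs some other placement
```

An incoming email would then be `Item(subject, start=date.today())`, landing in today’s list with no end date.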
Try Emacs! org-mode can manage a todo list, including showing all of your future tasks/events in a calendar. It also has some great searching and tagging features. You can set up a mail client like mu4e to file incoming mails into an org file. I’m not sure about sending items to other people, org can export to HTML or plaintext so I’m sure you could set something up.
The only downside is that it’s Emacs, and Emacs has a learning spiral instead of a learning curve.
What I want is the ability to treat application components like building blocks that I can address, control, and most importantly string together with a common data interchange between them, allowing me to compose complex customized workflows.
The Amiga did it with ARexx, Apple does it with the Open Scripting Architecture and AppleEvents, and Windows does it with PowerShell and several other more ancient mechanisms like OLE and even DDE.
But UNIX folks get this blank look when I try to talk about this and inevitably come back with “Uh. Pipelines?”.
Pipelines are amazingly powerful and I’ve built my career on UNIX so I’ll be one of the first to gush about the power of the “everything is a string of bytes” philosophy, but there are some things that are very hard to express with that one simple data structure.
D-Bus looks like it would handle the message passing bits. It’s not clear to me how that would handle the whole idea of applications exporting composable verbs though.
Well, D-Bus seems to have a notion of providing a service, and some built-in mechanisms for access control; and it does look like some applications declare D-Bus endpoints for communication between parts of the application.
It might be that D-Bus is easier to wrap for quick scripts than its predecessors: KDE’s DCOP (somewhat comparable to DDE), CORBA with GNOME’s Bonobo on top of it, and KParts.
like A/B = C, then I can drag B over to the RHS and it becomes A = BC.
Say I can define BC = Z; if I select BC, then I can choose to substitute BC with Z, yielding A = Z.
The rules of the algebra are intricate in systems like tensor algebra, so the user should be able to define rules for algebraic manipulation, all of them accessible via drag and drop.
E.g. in matrix algebra, given
Ax + b = y
you can do
hcat(A, 1) * vcat(x, b) = y
Now newA * newb = y
newA’ newA * newb = newA’ y
and if we can say that newA’ newA is invertible, then
newb = (newA’ newA)^-1 newA’ y
Normally this sort of manipulation works in scalar algebra, but in matrix algebra you need additional rules. Still, all of that could be done via a GUI!
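To sanity-check the manipulation above, here is the same derivation in numpy, assuming b is a scalar offset absorbed into the parameter vector (the least-squares reading of the rewrite):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b_true = 3.0
y = A @ x_true + b_true          # Ax + b = y

# Absorb the offset: [A | 1] @ [x; b] = y
newA = np.hstack([A, np.ones((A.shape[0], 1))])

# Normal equations: (newA' newA) newx = newA' y, assuming newA' newA is invertible
newx = np.linalg.solve(newA.T @ newA, newA.T @ y)

assert np.allclose(newx[:3], x_true) and np.isclose(newx[-1], b_true)
```

Each step in the code corresponds to one drag-and-drop manipulation in the hypothetical GUI.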
Does this mean a check-in/check-out system? I guess for the scanning part nobody can help you with physical setup… But you can attach an Android smartphone with Binary Eye or something, scan everything incoming, and enable «forward all scan data to URL» with whatever you have around to receive and collect the data.
I guess you could scan receipt before and after each batch of bought things to show these are incoming, and have a different marker to scan with things running out.
Sounds like the receiver might be a reasonably simple script pushing everything into an SQL database — or doing nothing, if you prefer parsing the logs. Maybe having webserver logs with data would make getting around to actual processing easier…
Then maybe indeed install Binary Eye and start scanning? Once you have some data, the barrier of entry to actually processing it will become lower… (and even if unprocessed data doesn’t help you find expiring items, it will later help you estimate the range of consumption rates of various items)
Cooked items are kind of less of a problem, as once you have a barcode for each rough type (which can be Data Matrix or something — yay multiformat scanning), the overhead of check-in/check-out is not large compared to cooking. I guess for fruit you could check in each batch…
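The receiver really can be tiny. Below is a hedged sketch of a scan endpoint backed by SQLite; the query-parameter names, table schema, and URL shape are all assumptions that would need adjusting to whatever the scanner app actually sends:

```python
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

DB = "pantry.db"

def init_db(path=DB):
    con = sqlite3.connect(path)
    con.execute("""CREATE TABLE IF NOT EXISTS scans
                   (ts TEXT DEFAULT CURRENT_TIMESTAMP,
                    direction TEXT,   -- 'in' = bought, 'out' = used up
                    code TEXT)""")
    con.commit()
    return con

class ScanHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical request format: /scan?content=<barcode>&dir=in
        q = parse_qs(urlparse(self.path).query)
        code = q.get("content", [""])[0]
        direction = q.get("dir", ["in"])[0]
        con = init_db()
        con.execute("INSERT INTO scans (direction, code) VALUES (?, ?)",
                    (direction, code))
        con.commit()
        self.send_response(200)
        self.end_headers()

def serve(port=8080):
    HTTPServer(("0.0.0.0", port), ScanHandler).serve_forever()
```

Consumption-rate estimates later become a GROUP BY over this one table.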
There are a few, but all of them for some reason choose to keep the same key-mappings, even when they make little sense. I think a lot of Vim defaults are there because of backwards compatibility. Kakoune would have been nice, except they messed up the workflow by going with the selection-first approach.
File annotation tool.
I am not sure what form this should take, but often I want to write notes about various pdf files I am reading and store the pdf and notes in the same place. There is papis but it does a bit too much in my opinion, and is more for storing meta-data (which is also important) instead of actual notes.
Terminal based email client.
There are a few, but they are a bit too complicated. mutt, for example, is designed to work with various other tools that you have to set up first. And then you have to set up MIME type handling, and you need yet another piece of software to hold your contacts. Some simple (from the user’s perspective) alternative would be nice in my opinion, even if less powerful.
A git-like tool (or a git-like interface) to replace make
Right now with Make you have to declare dependencies between scripts within the Makefile. I am not sure if it’s possible, but it would be nice to have a separate tool that handles dependencies. In this vision you would simply have some scripts to execute. The tool would record when each script was last executed, and would additionally allow declaring dependencies via the command line (like tool add script1 script2). And of course other commands to display the DAG, list targets that are out of date, etc.
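A minimal sketch of that tool, with a JSON state file recording last-run times and dependencies declared from the command line (all names and the on-disk format are hypothetical):

```python
import json, os, subprocess, time

STATE = ".runstate.json"  # hypothetical state file next to the scripts

def load():
    return json.load(open(STATE)) if os.path.exists(STATE) else {"ran": {}, "deps": {}}

def save(state):
    json.dump(state, open(STATE, "w"))

def add_dep(state, script, dependency):
    # "tool add script1 script2": script depends on dependency
    state["deps"].setdefault(script, []).append(dependency)

def out_of_date(state, script):
    # Stale if it never ran, or if any dependency ran after it last did.
    last = state["ran"].get(script)
    if last is None:
        return True
    return any(state["ran"].get(d, float("inf")) > last
               for d in state["deps"].get(script, []))

def run(state, script):
    # Depth-first: bring dependencies up to date, then run the script itself.
    for d in state["deps"].get(script, []):
        if out_of_date(state, d):
            run(state, d)
    if out_of_date(state, script):
        subprocess.run(["sh", script], check=True)
        state["ran"][script] = time.time()
```

Displaying the DAG and listing stale targets then fall out of `state["deps"]` and `out_of_date`.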
Re: 2: I think if you say the existing tools do too much, it would be interesting if you explained why having filename.pdf.txt with notes is doing too little.
Re: 4: Hm, I am not sure, if two different scripts modify some file, how is this handled from the point of view of the file’s staleness (then it turns out that the file is a log file so the operations are not even idempotent, but that surely needs special-casing)?
Re 2: filename.pdf.txt is slightly too little in terms of organisation. It is similar to what I am doing now (I have two files, one for notes and another one for metadata besides pdf). The problems for me start when I start adding some fields (lines really) in notes or meta data. Then older files have missing information and when I grep for something I am not sure if some results are not returned because they didn’t match, or simply because I forgot to add that specific type of information.
I guess a template with standardised fields would be enough. I am not too sure on the specifics. All I know is that my current approach is a bit clunky. And papis seemed overkill and not-enough at the same time. I suppose one part of creating new software is figuring out details like that.
Re 4: In case of a log file I guess you would write to it any time the script is executed. In that case IMO there is no need to add a dependency in the system at all. Whenever something is rebuilt the log will be appended. But there is probably no need to execute some script just because the log was updated.
If two scripts write to the same output (i.e. append lines one after the other) then it might be tricky. I haven’t solved all the cases in my head, just wondering if it’s feasible or not. But I guess maybe you can introduce something here, like not declaring that output file to be out-of-date until the whole pipeline of a re-run is finished?
Re: missing fields — template indeed won’t save you as it gets updated, maybe you need a pass that would check what fields from the current template are missing in the old metadata files?
Re: dependencies: I think the system should discover such dependency, no? Otherwise it’s just a CLI to edit a Makefile… And so the question is what it will discover for multiple non-idempotent scripts modifying the same file (but let’s say the user doesn’t add an ignore because semantically there is idempotence)
STL Library. I have a whole ton of STL files from the various patreons I follow. I’d love to be able to collect and tag the various STLs so I can find what I’m looking for quickly. For instance, I’ve got one STL in one subfolder of a subfolder that is a dragonborn ranger. I may say one day, hey, I need a ranger — well, good luck finding that in “June Artisans guild 2020/Dragonborns/dbr02.stl” or whatever. Tagging the individual models is the real trick here. Linking to multiple files is needed, as some of them are split, and some even have pre-supported versions. I’d love to have a preview as well.
Same goes with Audiobooks. I have audiobooks in several places around my various drives. I’d love to be able to collect them into an app and catalog them in situ. Just allow me to gather metadata about the book, organize in the program and not have to collate my files.
Sounds like what you want is actually a way of handling the file tagging.
What I tried doing at some point is: have files live in whatever places they live, then have an SQL database with paths and tags, then have a virtual filesystem so I can just cd into a query looking up some tags, then use whatever basic thing-handling tool that understands the basic idea «this directory, let me browse the stuff in it».
I do have the virtual FS part now (QueryFS), but I never moved beyond very basic metadata extraction, as I learned to put files into consistent places and find them there faster than I learned to build a consistent tagging structure. But this is a function of what files I store and how I handle them, of course.
(I still use that setup for streams of files, be it emails or Lobste.rs discussions — fetch, index into DB, cd into selection, handle with a fitting tool/script, rm to dismiss — which just marks as read)
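The paths-plus-tags database behind such a setup fits in a few lines of SQLite; the query below is the “all of these tags” lookup a virtual directory could be backed by (schema and helper names are only illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE files (id INTEGER PRIMARY KEY, path TEXT UNIQUE);
CREATE TABLE tags  (file_id INTEGER, tag TEXT);
""")

def tag_file(path, *tags):
    con.execute("INSERT OR IGNORE INTO files (path) VALUES (?)", (path,))
    fid = con.execute("SELECT id FROM files WHERE path = ?", (path,)).fetchone()[0]
    con.executemany("INSERT INTO tags VALUES (?, ?)", [(fid, t) for t in tags])

def find(*tags):
    # Files carrying ALL of the given tags.
    q = """SELECT path FROM files JOIN tags ON files.id = tags.file_id
           WHERE tag IN ({}) GROUP BY files.id
           HAVING COUNT(DISTINCT tag) = ?""".format(",".join("?" * len(tags)))
    return [row[0] for row in con.execute(q, (*tags, len(tags)))]

tag_file("minis/dbr02.stl", "dragonborn", "ranger", "presupported")
tag_file("minis/elf01.stl", "elf", "ranger")
print(find("ranger", "dragonborn"))  # → ['minis/dbr02.stl']
```

A virtual FS then just maps a directory name like `ranger+dragonborn` onto this query.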
Yeah! That’s a cool idea; it’s sort of an abstracted tool set for these ideas. While this would be OK for me to use, it would be hard to get my fiancee sold on this method. Maybe step one is setting up something based on this. Or maybe a generic library application that takes care of things like “watch this directory, add new files to sort queue”, and lets you define metadata fields for whichever library you’re trying to create. It then takes care of the querying and linking for you.
Both inotify and multiple configurable FS indexers come to mind…
I guess if you set up something you like, it might be feasible to distill what you need to get more of your family sold on it, and wrap it in a minimalistic GUI?
Basically: finding things should be done simply by browsing the virtual FS in a file manager or the tool’s own directory picker. At least that’s what I do, except in shell. I think if you can sell people on the approach, such UI solution will be deemed acceptable — you are not guaranteed to convince people that tag-setting, regardless of UI, is worth the benefit, of course…
I assume the initial background setup can stay your exclusive task for a long time, hopefully you don’t reinstall the system from zero weekly.
So the question is tagging. I believe you can find an indexer with mixed inotify and crawling, which achieves a good balance between fixing omissions in its knowledge after problems and not hogging the entire IO throughput… As you, by definition, have an SQL table of all the tagged files, and hopefully get an SQL table of all the found files, it should be easy to get all the files in need of hand-tagging — and feed them to a virtual directory, naturally. Generally a GUI for going through a directory and recording what tags the user wants to assign to each entry should be feasible, if you already know there is a workflow worth that effort… A virtual directory helps here, as a viewer tool can be launched side by side in case of doubt.
PS. QueryFS is in a state of being very useful to me but without a clear plan to make it useful to others; in particular, that means feature requests have a good chance of being implemented.
3: A hard-realtime UI that is allowed to make you wait for things, but only where it actually makes sense. Nothing’s allowed to run in the background and affect latency in your text editor / shell / window management / whatever.
I am afraid that applying a readability-equivalent is not so straightforward…
I am consuming most of the web content via a pipeline of: parse HTML → make a Readability-lite copy and original copy → HTML-to-text in a specific way I have chosen for my personal comfort. So I am not incredibly far from living in the world you want. In some cases the readability-lite copy is a nice help, but in many many cases it is a complete failure, and also it is clear that it failed at a choice that is actually hard.
As a very primitive example, on some pages collapsing the comment-related fluff is the first thing to do, and on some others it’s the real value of the page.
3D mechanical CAD that is good enough, like KiCad is good enough for electronics. I don’t necessarily need it to handle a 747 or a car factory, but a kit plane or a robot would be nice. The lack of a good FOSS CAD package is what’s stopping me switching full time to Linux; it’s why we have Windows laptops at work to go with Ubuntu desktops, and why I have a personal MacBook Pro with the this-is-definitely-going-to-burn-me-one-day cloud storage of Fusion 360. I would like to lose this mess completely and just live in Linux full time. I just think it’s still a bit too far over the critical distance from the pain points programmers have to receive the attention required to solve it well. Maybe the 3D printer boom will help things along.
Have you tried Solvespace? It’s not as sophisticated as Fusion360, but it’s in the same paradigm, easy to learn, and I’ve found it useful for designing robot assemblies.
A Namecoin-like software that generates HTTPS certificates, so HTTPS Everywhere can be backed by proof-of-work as opposed to the goodwill of the HTTPS Everywhere foundation
A new framework for cross-renderengine (OpenGL/Vulkan/Metal) and cross-platform (Mac/Windows/Linux) GPU-accelerated video effects. Basically the next generation of https://github.com/resolume/ffgl/.
A good binary file diff viewer, that is optimized for viewing flash memory, such as dumps of microcontroller memory.
Arrange the bytes in a grid, and ‘OR’ all the bytes in columns together. This will indicate if a single byte or pattern was scribbled over the top of existing data.
Different colors for changed bytes with extra bits set or cleared. This is a good indication of whether you are looking at the aftermath of a partial erase, or a memory cell that is losing its ability to hold data.
Understand the block size, to indicate which blocks are changed / corrupted, which blocks are unchanged, which blocks are blank.
I’ve written the tool myself using ANSI color escapes, then piping through aha for HTML output. But it’s much slower than it should be.
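The column-OR and set/cleared-bit classification described above are only a few lines each; a sketch (not the author’s actual tool):

```python
def column_or(dump, width=16):
    # OR all bytes in each column of a width-wide grid. A column whose OR
    # differs from the baseline hints that a byte or repeating pattern was
    # scribbled over existing data.
    cols = [0] * width
    for i, b in enumerate(dump):
        cols[i % width] |= b
    return cols

def diff_bits(old, new):
    # Per byte: (bits newly set, bits newly cleared). Bits drifting toward
    # 0xFF suggest a partial erase (NOR flash erases to all-ones), while
    # scattered flips suggest a cell losing its ability to hold data.
    return [((n & ~o) & 0xFF, (o & ~n) & 0xFF) for o, n in zip(old, new)]
```

Mapping this output through ANSI colors reproduces the described viewer; block-awareness is then a matter of chunking the dump by the device’s erase-block size.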
https://luna-lang.org - I tried to get hired by them but we couldn’t find common points with my expertise areas unfortunately.
A non-Electron lean WYSIWYG Markdown editor with outliner features, with plugin support, for stuff like ASCIIMath, ditaa, ASCII UML diagrams, runnable code snippets, etc. (I know, I could learn emacs and org-mode… problem is, I am already fairly advanced in vim, and I don’t suspect evil mode has all the features I use…) Ideally with WordStar keyboard shortcuts.
A non-Electron GUI email client, with similar features as notmuch, with easy tagging of emails using emoticons/icons and instant filtering by tag combinations (ideally all icons visible at once) and allowing me to easily edit received messages so that I can keep only the crucial parts (but the rest of the whole email text could still be shown “grayed out”).
A git GUI allowing easy rebasing and splitting of commits via drag and drop on a tree visualization. Also with easy browsing of history and blame-digging (a.k.a. how did this bug/suspicious code get to look like it does now?).
A car driving simulator using Panini projection and Minecraft-like world editing, possibly on hex grid, shared in wiki-like way so that people could map their cities and train driving in them. With ability to represent non-flat roads, slightly uphill/downhill, up to steep narrow streets of Italian towns.
A microkernel-based OS working on Raspberry Pi 4 (possibly a set of missing drivers for Genode OS).
A REPL for Nim similar in power and features to OCaml’s utop.
With regards to a non-electron GUI email client, take a look at https://github.com/astroidmail/astroid. It is essentially a frontend for notmuch and the developer is very responsive.
I am not saying my dreams are widely shared, or good projects to take up… but I will answer the questions as stated.
Like a spreadsheet, only data-block-first. Multidimensional arrays come first, then they are laid out to show them best (unlike spreadsheets, where a huge 2D sheet is primary and arrays are specified somewhat clumsily on top of it). Of course the ranges are also named; operations are likely to be a mix of how spreadsheets work nowadays, how normal code is written in Julia/Python/R/… and some things close to APL/J. No idea whether this can be made more useful (for someone) than the existing structured-iteration libraries, maybe with somewhat better output/visualisation code…
A DVCS that does not regress compared to Subversion. I want shallow (last month) and narrow (just this directory) checkouts supported on the level that the workflow that makes sense is just versioning entire $HOME and then sometimes extracting a subset as a project to push. Although no idea if the next iteration of Pijul will approach that.
A hackable 2D game with proper orbital mechanics take-off-to-landing (including aerodynamic flight with a possibility of stalling before landing etc.). Orbiter definitely does more than I want, but for me a 2D version would feel like a nicer, more casual thing. And probably 2D has better chances of not needing Wine…
Writers have tools for sketching out and reshuffling a story; for proofs and for code documentation there is more weight on the notion of what depends on what, and sometimes one can reverse a dependency or replace it with a forward declaration. Sketching and experimenting around all that could probably be aided by some kind of tool, but I have no idea what it would look like. I guess it would have something in common with Tufts VUE…
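Returning to the data-block-first spreadsheet idea above: a toy numpy sketch of the data model — named multidimensional blocks plus named derived results, with layout deferred entirely (this only illustrates the model, not the envisioned tool):

```python
import numpy as np

sheet = {}

def define(name, value):
    # A named data block: the array comes first; layout is a separate concern.
    sheet[name] = np.asarray(value)

def formula(name, fn, *args):
    # A named result computed from other named blocks (recomputed eagerly here;
    # a real tool would track dependencies and recompute like a spreadsheet).
    sheet[name] = fn(*(sheet[a] for a in args))

define("price", [[1.0, 2.0], [3.0, 4.0]])
define("qty",   [[10, 0], [5, 2]])
formula("total", lambda p, q: (p * q).sum(axis=1), "price", "qty")
print(sheet["total"])  # → [10. 23.]
```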
A DVCS that does not regress compared to Subversion. I want shallow (last month) and narrow (just this directory) checkouts supported on the level that the workflow that makes sense is just versioning entire $HOME and then sometimes extracting a subset as a project to push
git can technically do this (using worktrees, subtrees, sparse checkouts etc.) - but the UI for it … does not exist. It seems like a low-hanging fruit to implement this (and one which some friends with whom I collaborate on monorepo tooling may end up picking at some point).
The thing git fails at completely on the data-model level is that it insists a branch is a pointer. In fact a branch is more of a property of a commit, which leads to much, much better handling of history, and as a curious (but convenient) implication also brings the possibility of multiple local heads for a single branch…
Of course all-$HOME versioning is likely to benefit from a more careful approach to branches, and maybe treating not just content but also changes as more hierarchical structures with possibility to swap a subtree of changes in place, but I really do not believe in anything starting from git here…
Your spreadsheet concept basically already exists in Apple Numbers. Spreadsheets there don’t take up the whole page but instead are placed individually as a subset of the page.
To your point on DVCS, there are big companies that do have this kind of thing available, but I’m not sure how much of it is open-sourced.
Re: Apple Numbers: Hm, interesting (not interesting enough to touch macOS, but I should look up whether they support more dimensions in all that etc.). Although I would expect the background computational logic to be annoyingly restrictive, but that could be independent of the layout.
Re: DVCS: what I hear is very restrictive actually, more about how to handle an effectively-monorepo without paying the worst-case performance cost than something thinking in terms of how to structure the workflow to be able to extract a natural subproject as a separate project retroactively.
I mean, data flows are cool, sure, but I am fine writing them in one of the ton of ways in text, though.
They don’t solve the data entry + presentation issue per se (layout of computation structure and layout of a data set are different issues), and structuring a proof looks way out of scope for such a tool.
ETA: of course a data flow language done well is cool (any language paradigm done well is cool), I just don’t have a use case.
With Luna the idea is that you can write in text if you want, then jump to graphical instantly and tweak, then jump back to text, etc. with no loss of information.
As to the rest, I guess I don’t know the domains well enough to really grasp your needs & pain points :) just wanted to share FWIW, in case it could get you interested. Cheers!
Sure, I understood that capability to switch between representations losslessly, I just need a reason to do significant mouse-focused work (which, indeed, is not said anywhere in my comment) so using this capability would always be a net loss for me personally.
A cloud-free IoT device framework/os. There’s so many cheap Chinese IoT devices out there that are just taking some off the shelf software and tossing it on lightly customized hardware. If there were some software that didn’t require a server to operate I have to imagine there’d be some that would pick it up and could slowly start to change consumer IoT from a privacy & security nightmare to what it was originally supposed to be.
Unfortunately, managed to finagle my dream project at my day job into existence so all of my mental energy has been going into that. (Which coincidentally, is making a cloud-focused IoT platform a little less cloud-focused.)
Have you heard of/used Homebridge? I think its main thing is HomeKit-specific (so, Apple products), which works for me, but it also has a web UI available where you can manage your IoT devices too.
I have an odd collection of Philips and Xiaomi smart devices and am able to keep them all safely off the internet and controllable through all our devices at home, it’s nice!
Offline, local control is one of the big selling points for BLE, especially with the mesh spec finalized and (at least starting to) be more and more common. Getting consistent hardware/implementations/performance, on the other hand, still feels way too difficult. Similar can be said for Weave - makes a ton of sense but is genuinely not a fun thing to work with.
I’m not sure why but I find the DIY systems (Home Assistant, openHAB) abrasive and, for me at least, flaky.
My dream web app would be an ability to pay open source maintainers some amount of money, whatever they feel is appropriate, to either ask questions to or even have them review an idea. My work, like most people’s here I would imagine, is entirely reliant on open-source projects maintained by often great people. However, getting help in these projects is a nightmare. It’s either a mailing list that gets ignored, or a Slack where it’s often the blind leading the blind.
I would love some sort of GitHub integration that says “for $500 one of the maintainers will look at your bug report and give you some ideas on why you might be seeing these problems”. I don’t want some sort of feature bounty, because I really like that new features are PRs. However, sometimes I just need someone who has spent a lot of time with the software to take a look and let me know if I missed something, or whether I need to make a PR and where it might make sense to start. Or even to say “look, what you are doing is terrible and we don’t recommend our software for that”. I think it’s a missing link between traditional “enterprise support” and the current state of development.
The software industry is too geared towards Google-scale. Self-hosted software is often PHP with some database that needs to keep running in the background. Backup is never mentioned in the installation manual.
If I had the time I would:
Build a new kind of flat-file database that handles streaming updates and syncs to S3.
Build a suite of tools that run on top of it and can scale to zero when not used.
As far as I know, some HDD firmware will buffer & reorder writes for performance reasons; meaning a power outage can (rarely) cause an ext4 “journaled write complete” to get written without the actual content.
Well, at least most of them mostly honour write barriers, and at least we have grounds to call drives that lie about barriers lying garbage that they are (no-barrier sequences of writes are fair game to reorder, though). Lying is an important part here, of course. With S3 the normal mode of operation is officially expected to have temporary inconsistencies even in the best case, which is fine for many use cases, but maybe even more annoying than the modern hard drive behaviour in this specific one.
That may be, I have no idea. I’ve definitely read that writing a database backend on modern file systems and drives is a nightmare.
On S3, you wouldn’t even need a power outage in order to cause consistency issues, though. Hence why I suggest not trying to write a database that stores data in it. It’s really not designed for that.
A window manager that I can save window layouts and “summon” them onto screens (this is important: my workstation has two heads, but I often work on my laptop in the garden and I want to have the same screens there).
I’ve got something well-gross “working” but it’s an unhealthy blend of autoexpect and stashing stuff in window properties that I can pick up in i3-cmd+xdotool+xwd scripts, a little x11vnc with a clipped window (left screen) and I think with better integration with the window manager I could do better, but I’m busy.
I’d like an operating system I can run on a laptop that I can:
leave plugged in without the battery swelling
run google hangouts, zoom, goto meeting, and whatever other video conferencing tools I need for work and family
run slack
run 1password (or another non-lastpass password manager)
use a real package manager on
program in ruby, go, java, python, c, etc without horrendous amounts of pain
I guess Ubuntu is my best bet? I still have yet to find a single linux-using coworker who can reliably join google hangouts video calls though (and I have no control over what video tool I use for work, but I CAN control my OS)
I’m specifically frustrated with my nearly $3000 2018 macbook pro which has a swollen battery and cannot simultaneously handle a video call, slack and my IDE. I do recognize I’m being grumpy though.
To be fair, that’s probably an issue with the video calling software, Slack, and your IDE. Though IDEs at least have an excuse for eating some resources.
My tip is to use Slack in a browser tab if you can, though this doesn’t allow you to call through it.
I run Fedora on a Thinkpad and carry an iPad for slack and other text/video conferencing tools. Oddly enough, I landed here out of frustration with Apple laptops released since 2016. I’d been happily using Mac laptops since the ’90s, prior to that.
The only place Fedora falls down for me (for the items on your list) is the reliability of conferencing tools. I’d say the ones you list are fine for me about 80% of the time, but soaking up the rest is worth the cost of the low-end iPad I use to do it. Hangouts/Meet, Zoom, Goto Meeting, Teams, etc. all work well there. Plus the camera is better, and Slack sucks less on iPad than anywhere else I’ve used it. Slack and Discord don’t chew battery there like they did on Mac and Linux, either.
My non-lastpass password manager of choice is Bitwarden, fwiw. It did a decent job importing my large, very heavily used, 11-years-old (all my passwords since 2007!) 1password database, but there were quite a few empty entries to clean up after initial import.
I still have not found any app for my notes. It should be a simple, fast and responsive, and beautiful app that syncs my notes, works on Linux, macOS, and iOS, and has a web client.
There are two million notes apps out there; why doesn’t any one of them get it right?
I’d really like a HTTP proxy inside Emacs so that I can navigate and edit requests and responses with all of the usual text editing tricks. It would be something like a cross between mitmproxy and magit.
A programmatic diagram / graphic builder, like Processing that also lets you edit the diagram with a mouse, like OmniGraffle, and the source updates as you do.
I’ve had to draw diagrams with lots of repeating pieces in omnigraffle, which gets tedious to edit if you need to make a change that isn’t just a property change. It should be a function that can be repeated in any location, and then updates live everywhere when I change it.
And of course most editors aren’t nearly as great at helping you select things - I miss being able to select e.g. all dotted lines, when I’m not in omnigraffle.
In the other direction, drawing lines between boxes, and making selections are much better with a mouse.
While there’s lots of examples of graphics languages, there are clearly some interesting issues about how to translate common mouse operations through into source. I’d like to dig into it, but really I just want to use it - I came up with the idea because I want to make more better diagrams about the things I’m actually working on.
A simple terminal task manager / time tracker with curses based interface with tagging, contexts, quick per-task notes and automatic priority calculation. TaskWarrior is close, but not really there.
A Peer To Peer strike app to coordinate workers along an international supply chain to perform a Machine-Learning optimized chessboard strike.
It would work like this: users can map the logistic of the supply chains (production times, travel times, buffers, salaries, contract types and so on) through a simple app. When enough people along the supply chain are on-board, a strike can be proposed. If enough people/organizations/unions agree, the app will call the strike and maximize the disruption on the supply chain and minimize the hours of strike, weighted by the vulnerability of the workers along the supply chain.
Gigantic open problems:
how to fairly assess vulnerability across a global supply chain with a shared measure
how to give rewards/punishments for groups/individuals that declare they will strike and then they won’t (both a problem of measurability and incentives)
Electron without electron. I can see the benefits that electron has had in terms of making desktop applications easier to build for a whole lot of people. But I hate the cruft and janky dependencies of NodeJS and the whole dependence on embedding Chromium. I want a native desktop framework that has a good solid HTML/CSS interpreter and an embedded UI programming language which has more type-safety, doesn’t end up being a resource hog, and is somewhat aligned with the language used for the lower-level programming of the application. It should compile down to a manageable static binary size. CSS, LESS and JS concepts of UI development work for people; people understand them, so it has to be something parallel to that, but with all the extra cruft stripped off and handled by the underlying language the framework is built in. It should also expose a GL context into the HTML/views, to render highly customized graphics or bypass the HTML rendering engine from the backend.
Also an out of the box CRDT (or similar) based embeddable database library with a gossip/viral-like syncing protocol for deploying desktop, embedded and mobile applications in places where connectivity is typically pretty bad and it’s easier to shunt data along through other devices until connectivity is stable.
Qt5 is nice, but if you go the QML way it's sort of a bastardization of HTML/CSS, and when I've used it, it doesn't feel as easy to use as it could be. I'd like a frontend developer to be able to build out the UI with something they can easily pick up, something that has parallels to the browser context.
Also, Qt5 licensing is awkward to understand, which I think is why it doesn't get as much uptake as it should. It's a great framework/application-building platform, but the license is just too confusing for people who want to dip their toe in. When you hold up GUI frameworks side by side, Qt5 looks like a really robust platform for cross-platform development, but a lot of people get scared away by the license terms. There's not much Qt can do about that, since it comes from the libraries they use, and the reason the framework is as good as it is is that commercial sales have underpinned the work.
But there are a lot of semantics about how C++ works in there, and about how the whole QML + JSON/JS hybrid thing works, that just irk me. I want to be able to hire a frontend developer from the pool of available web developers and have minimal friction in their transition to a desktop context (this is one of the main reasons electron is as popular as it is). There are some awesome frontend devs/designers out there, and I think we sort of shat the bed on making usable, understandable tools for them to build UIs on native platforms.
Qt has QWebEngine but that just embeds chromium again, so you’re back to basically the same as electron.
An intelligent tiling window manager that can automatically re-arrange (on demand) based on the content of its windows (i.e. it would be aware where your browser / terminal has no significant content).
An open educational computing system with OS & compiler (+ simple hardware) that can be understood in its entirety by a single human being. http://www.projectoberon.com comes close, but a bit more modern and less archaic.
A reimagined web browser that builds on itself (i.e. starts with some SVG/PDF level primitives and creates higher level webcomponents with increasing complexity from that, all with a single coherent logical syntax (like s-expressions f.i.)).
A UI framework targeted at power users, like a modern-age ncurses that isn't restricted to the terminal, with excellent support for extensible and scriptable apps, support for (but also limits on) graphics, and lots of focus on keybindings and layered VI-like modes.
Most of them are games, and even if I could program them, I'd probably need help in other areas. A Pikmin clone, an X-COM-style game about robbing banks, something like Theme Hospital but for a whole town, some small puzzles…
As for software what I really want is to improve Haiku & Scryer Prolog projects.
I did it once! The end result was nice, but almost always you find yourself needing something more, and adding it yourself ruins the style completely. Also, most of the packs seem to be focused on fantasy RPGs or another cliché, which is OK but I don't like it very much. For now, the best solution I think is minimalism (not pixel art, more like vectorized art, with just two or three colors in high-res figures), which in games is not very popular, but I can get by.
An alternative twitter in which all current posts are displayed on the screen at the same time, using a kind of tree layout.. (Well, I imagine bending it around visually to make a wheel layout, but that’s just display-level fluff.) All posted messages that start with “You know,” share a common root. When some starting letters are common to a lot of posts (in the last 24 hours?) then those letters are displayed with a larger font.
So, trending topics and hash tags and such would arise naturally from the medium.. so long as people put their hashtags in the front. (I thought about arranging it by longest common substring anywhere within the message, but I wasn’t sure what that would look like.)
If some really long common starting string exists, then the whole message is displayed on the home page in a large font.
Users would navigate by typing. If you type “you know,”, that’s a filter.. You’d only see messages that start with that.
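The grouping and filtering described above are simple to sketch in plain Python (the prefix depth, sizing formula, and all names here are made up; the tree/wheel layout and rendering are of course the real work):

```python
from collections import Counter

def prefix_buckets(posts, depth=10):
    """Group posts by their first `depth` characters (case-folded),
    counting how many posts share each starting string."""
    return Counter(p[:depth].casefold() for p in posts)

def font_size(count, base=12, step=4):
    # assumption: a shared prefix renders larger the more posts start with it
    return base + step * (count - 1)

def filtered(posts, typed):
    """Typing is navigation: the typed string acts as a prefix filter."""
    return [p for p in posts if p.casefold().startswith(typed.casefold())]

posts = [
    "You know, cats are great",
    "You know, I never liked Mondays",
    "you know, this thread is long",
    "Hot take: tabs > spaces",
]
buckets = prefix_buckets(posts)
```

Hashtags-in-front would then cluster naturally, since they are just another popular starting string.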
Something that makes writing/understanding very large amounts of YAML easy.
Bonus points if I can feed it something like… all Kubernetes options and available annotations and their types, and get some form of auto-complete and validation.
I currently use different software and services to get news in different topics:
Tiny Tiny RSS, an RSS/Atom aggregator, with many subscriptions (single developer blogs, company blogs, news aggregators, comics)
social networks (Twitter, Reddit, Mastodon)
services to share links (this website, a private Discord server with friends…)
This is too much, I often miss important news or read things I could avoid reading at all.
There are also sources I follow that could be used to trigger actions automatically:
release notes of software could be used to trigger rebuild and deploy to my servers
new comic releases could be republished to some places
I realize that all this stuff could be assembled to build a framework that I could use to retrieve information from different sources, normalize it, sort it by importance, trigger actions. Some things I’d like to do:
be able to programmatically add a source which produces certain types of content (this could be important daily news, an article on a topic, a tutorial, a comic, a software release, a message sent by someone, etc…)
detect when multiple pieces of content refer to the same piece of information, which happens frequently when following multiple news aggregators
be notified of some important things like an uncommon topic becoming hot on different social platforms (e.g. related to world tensions, etc…)
have a single front-end to follow news (Tiny Tiny RSS suffers from a few issues)
centralize everything I’ve read in a single place (history on different browsers, social media links and likes, etc…)
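The same-story detection above can be sketched with something as crude as token-set overlap on headlines (the 0.5 threshold and the whole approach are illustrative guesses; a real system would also compare URLs, timestamps, and bodies):

```python
import re

def tokens(title):
    """Normalize a headline into a set of lowercase word tokens."""
    return set(re.findall(r"[a-z0-9]+", title.lower()))

def same_story(a, b, threshold=0.5):
    """Jaccard similarity of title tokens above a guessed threshold."""
    ta, tb = tokens(a), tokens(b)
    if not ta or not tb:
        return False
    return len(ta & tb) / len(ta | tb) >= threshold

def dedupe(titles):
    """Keep only the first occurrence of each story."""
    kept = []
    for t in titles:
        if not any(same_story(t, k) for k in kept):
            kept.append(t)
    return kept
```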
I’ve looked at different software that could provide some of the needed features (e.g. Weboob), but my conclusion is that I need to write specifications for a type system for applications to be based on, so that I can adapt existing libraries and front-ends to build a larger project (since I will never be able to build everything from scratch).
Recently, I’ve been thinking that for implementing content aggregators and converters, the FaaS “paradigm” seems promising, so I’m looking at knative, although this project isn’t stable yet. Coupled with GitLab CI, I feel like I could easily deploy working code and finally start making something (I’ve been writing down ideas and talking about this for at least 4 years).
A programming language that:
a) Allows me to express side effects
b) Compiles with a static syscall sandbox based on those side effects
c) Is actor-oriented, with first class support for spawning actors in separate, isolated processes
d) Doesn’t have a traditional main, but instead has first class, static dependency injection
e) Has move semantics
Pony and Rust get super close to this in some capacity. For me this would just be a dream. It would give me what I love from Java (yes, I love things from Java), Rust, Pony, Erlang, etc, in one place.
A cloud storage solution that maintains xattr tags (for use in search, etc).
Along with that, something like WinFS - a better way to query a filesystem
A replacement for APRS “SmartBeaconing” that makes more efficient use of bandwidth and provides more accurate projected positions.
SmartBeaconing is an improvement over “just transmit position every 5 minutes”, but in my opinion it’s both too complicated and not smart enough. The idea is to control the difference between the last-transmitted position and the current position using two basic rules: 1) transmit more often when going faster, and less often when moving slowly or not moving at all, and 2) transmit early when making a turn, if the product of the turn angle and the speed exceeds some threshold.
The problems with it are that it takes seven tunable parameters to do that, with different values required to get reasonable operation at different typical speeds (e.g. walking vs. biking vs. driving), and that both of its core algorithms are dirty hacks that only almost do what they’re supposed to. The speed part has a completely unnecessary parameter (the “slow speed”) that results in a discontinuity in the plot of distance-between-beacons vs. speed, the preference for beaconing at high speed biases projected positions, and the corner pegging mis-handles slow turns especially if it decides to beacon before a turn is complete.
There are only five parameters here (A, B, C, Tmin, and Tmax), and not all of them necessarily have to be exposed to the user. A, Tmin, and Tmax have units of time; C has units of distance, and B is dimensionless. The idea is that a beacon should go out if:
Your position has diverged far from your projected track, in any direction (sensitivity controlled by C)
You’ve made a course change that will soon cause your position to diverge (sensitivity controlled by A and C)
You’ve traveled a long enough distance on a constant track that you’re probably in range of some new stations (sensitivity controlled by B and C)
You’ve done nothing but you need to beacon anyway to stay on people’s screens (Tmax)
And of course Tmin is a hard rate limit to prevent abusing the network.
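Those rules translate almost directly into code. Here is a sketch in Python; only the parameter roles come from the description above, so the exact formulas for the turn and constant-track rules, and all the default values, are my guesses:

```python
import math

def should_beacon(elapsed, divergence, speed, heading_change_deg,
                  dist_on_track, A=30.0, B=100.0, C=50.0,
                  Tmin=10.0, Tmax=1800.0):
    """Decide whether to transmit a position beacon.

    elapsed            seconds since last beacon
    divergence         metres between actual and dead-reckoned position
    speed              metres/second
    heading_change_deg degrees of course change since last beacon
    dist_on_track      metres travelled on an (almost) constant track
    """
    if elapsed < Tmin:        # hard rate limit to protect the network
        return False
    if elapsed >= Tmax:       # stay on people's screens
        return True
    if divergence > C:        # rule 1: off the projected track
        return True
    # rule 2 (guessed formula): this turn would put us more than C
    # off the old track within the next A seconds
    predicted = speed * A * abs(math.sin(math.radians(heading_change_deg)))
    if predicted > C:
        return True
    # rule 3 (guessed formula): long constant-track run, probably in
    # range of new stations
    if dist_on_track > B * C:
        return True
    return False
```

With these defaults, rule 3 fires after B*C = 5 km of straight-line travel; the optimization step described below would pick real values.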
The parts that I don’t have time for these days:
Find some GPS datasets from cars, bicyclists, and hikers and write a program to optimize the parameters over those tracks, minimizing sum of squared errors subject to constraints on allowable packet rate. Find plausible values, find out how sensitive they are, and find out if there are some that can just be hardcoded and hidden from the user without doing much harm.
Build the algorithm into Direwolf.
Convince Kenwood or someone that they should use it.
A good desktop app for storing, analyzing, and annotating chess games.
While there are plenty of things that clear the bar of functionality, there are none that are really nice to use; their user interfaces tend to be cluttered, cryptic, or both. Most of the necessary components – PGN parsing/writing/annotation, UCI engine interface, move validation and board generation – are even available under open-source licenses, but I just don’t have the time to pick out the right set and wire them together.
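To illustrate how thin the container format itself is, here is a minimal stdlib sketch of the PGN layer (tag pairs plus movetext, ignoring variations and NAGs; a real app would of course reuse one of the existing open-source parsers):

```python
import re

def parse_pgn(text):
    """Minimal reader for a single PGN game: tag pairs + movetext.
    Strips comments; does not handle nested variations."""
    tags = dict(re.findall(r'\[(\w+)\s+"([^"]*)"\]', text))
    movetext = re.sub(r'\[.*?\]', '', text)        # drop tag pairs
    movetext = re.sub(r'\{[^}]*\}', '', movetext)  # drop {comments}
    moves = [m for m in movetext.split()
             if not re.fullmatch(r'\d+\.+|1-0|0-1|1/2-1/2|\*', m)]
    return tags, moves

game = '''[Event "Casual"]
[Result "1-0"]

1. e4 e5 2. Nf3 {a comment} Nc6 1-0'''
tags, moves = parse_pgn(game)
```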
Some sort of native widgets library that actually works cross-platform (GTK, Qt, Cocoa, Windows), works well, and is still maintained. Electron doesn’t count.
I’m not super familiar with LCL (or Pascal for that matter), but this does look similar to what I’m looking for. I’ve also seen libui thrown out there, but last I heard, it was in maintenance mode and not very complete.
Well, Pascal is very readable, and FPC dialect includes the modern conveniences of imperative / object oriented programming with some basic generics support, and C-like FFI is literally just a declaration of a function + name of the implementing library.
So even if you do not want to write the entire application in Pascal, you can just write the immediate GUI-handling code in Pascal (LCL uses a class hierarchy, so wrapping it is not a completely trivial task) and FFI the real logic.
I would say LCL has quite a nice library of UI elements; and there are some third-party components, too.
File synchronization service with on-demand local file loading.
I always wanted a network file synchronization mechanism where all files, directories, symbolic links etc. are locally visible but the data of the files is only loaded on demand. There would be a command (and context menu item for graphical desktops) to load and unload specific files or directory trees for the local system. Once loaded locally, it should transparently and continuously synchronize the files with the server.
With traditional remote disk mounts there’s no local storage space wasted, but the usage experience suffers from the network dependency. File synchronization services are more pleasant to use since all files are local, but they waste local storage space. This would combine the advantages of both.
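The on-demand part can be sketched with a sidecar-stub convention (entirely invented here; a real implementation would hook the filesystem, e.g. via FUSE, so loading happens transparently on open):

```python
import os
import shutil

STUB_SUFFIX = ".stub"   # invented sidecar marker meaning "data not loaded"

def unload(path, remote):
    """Push the file's data to the remote store and keep only a
    zero-byte placeholder locally; the file name stays visible."""
    shutil.copy2(path, remote)
    open(path, "w").close()                 # truncate the local copy
    open(path + STUB_SUFFIX, "w").close()   # mark it as stubbed

def load(path, remote):
    """Materialize a stubbed file from the remote store on demand."""
    if os.path.exists(path + STUB_SUFFIX):
        shutil.copy2(remote, path)
        os.remove(path + STUB_SUFFIX)
```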
libprojfs might help you on Linux? (disclaimer: wrote a bunch of it) You can build it without the C# extension points and make a responsive, virtualised filesystem mount.
That looks promising. After a quick look at the project description, it seems to cover the crucial part: providing the generalized APIs/libs needed to build such a synchronization mechanism. I’m gonna need to find time to dive into this. Thx.
I took it for a spin approximately 3 years ago. I hadn’t noticed that it has since gained the exact feature that I described. Thx for your hint. Would be extra nice if it also ran on OpenBSD.
A distributed store-and-forward network protocol to replace email/Dropbox, but with API hooks for negotiating real time P2P sessions (slack/hangouts/etc), and community moderation (a la mastodon server federation) to address spam.
Bit late to the thread, but I just want to let this out.
I want a graphical code editor, one that is not electron, has extensive plugin support (so that it can even be called a “light IDE” with the right plugins installed), is cross platform, and most importantly, is cross CPU architecture.
Sublime Text does all of those except for “cross CPU architecture”, and honestly ever since I got into ARM computers I didn’t even genuinely feel the need for something that can replace it, but as it stands now, ST runs a bit slow with qemu-user-static on my PBP, and nothing much can hold its place. I use vim for the most part, but I need something graphical for better productivity.
A simple usable RADIUS server that takes a tiny config file, if any. Aim for 60-300 seconds from downloading/building the binary to running service.
An IRCd with the same goals.
Both in a compiled language.
A website with data and charts. Take data series and plot them on the same charts, easy to remix existing datasets and plot new information on other datasets. So you can point out e.g. political events with commit rates in open source projects, etc.
a serious graph file system, ideally on NFS.
E.g., I want to cd into the same folder from many locations. I know something similar can perhaps be built via (sym)links, but I feel they just aren’t the same.
An event pipeline of all events, in my life and in the world. I want to correlate wake up times with external temperature and news and school closings, etc etc
a whiteboard software that is actually similar to the real thing
a graph architectural decision tree that shows selections and alternatives and trade offs and allows to compute quantities across traversals
something like a “Kirchhoff circuit laws” builder that computes maximum theoretical uptime/latency for distributed architectures
Re: graph file system — what do you really want from it? After all, symlinks are cool… Of course, there is this issue of reachability and garbage collection otherwise.
If you just want a graph — maybe have an mkdir wrapper that creates uniquely named directories in a stash, and the name you give is immediately the name of a symlink, maybe that symmetry would help?
(I kind of have something remotely similar with a query-based filesystem, but there graph structure is irrelevant and queries to stuff stored e.g. in SQL are the focus, so I cannot easily interpolate what you want…)
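That mkdir-wrapper idea can be sketched in a few lines (the stash layout and function names are invented; this is the symlink approximation, not true first-class directory hardlinks):

```python
import os
import uuid

def gmkdir(edge_path, stash):
    """Create the real directory (the node) in a flat stash, and expose
    it through a symlink (the edge); more edges can be added later."""
    os.makedirs(stash, exist_ok=True)
    node = os.path.join(stash, uuid.uuid4().hex)  # unique node name
    os.mkdir(node)
    os.symlink(node, edge_path)
    return node

def glink(existing_edge, new_edge):
    """Point another edge (symlink) at the node behind an existing edge."""
    os.symlink(os.path.realpath(existing_edge), new_edge)
```

Garbage collection of unreachable nodes is left as the hard exercise, as noted above.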
I’m sure it’s a terrible idea, just thinking about permissions makes my head spin. In part I’d be curious about the academics of it; on the other hand, we already have a lot of linking going around, so why not go bold with it and make it a first-class thing? Perhaps edges can have properties too…
Well, symlinks just apply to directories the logic already applied to files. There are things-stored (inodes), they have permissions. There are navigational links (hardlinks for files, symlinks for directories), they have names and locations. The reason first-class indistinguishable-from-first hardlinks for directories are not widely used is reachability: for files it is enough to count references, for directories you need a true GC to handle unreachable loops.
So if you just do that store-directories-separately-and-symlink, the permissions applied to the real targets would work just fine.
If you want edge attributes and stuff like that, then I guess you need to start by finding a graph database you really like w.r.t. its data model, then specify the FS representation that that could actually work with unaware tools, then it is a question of a virtual FS for browsing such a DB. But graph DB comes first in this case.
An independent web browser that isn’t based on Google-funded code and has full keyboard control.
While it’s not a single platform, you might be interested in Tridactyl. It’s a Firefox extension; some of its greatest hits:
Vim-style keybinds (link navigation, element selection, scrolling, tab/buffer switching)
Allows integration with the underlying system using a native messenger. You can send a YouTube video to mpv, or pass a region of text to a text-to-speech engine.
Keybinds are user-definable and can be composed, e.g. do x, then y, pipe that to z.
Ever want to edit a text block (like this one) in the editor of your choice? Just hit ctrl + i.
Custom color themes.
Has a scriptable way to define new functions, albeit a little messy, e.g.
alias tabsort jsb browser.tabs.query({}).then(tabs => tabs.sort((t1, t2) => t1.url.localeCompare(t2.url)).forEach((tab, index) => browser.tabs.move(tab.id, {index})))
will define an alias tabsort that, well, sorts tabs by domain name.
I’m not affiliated with the project, but I do sing its praises every chance I get.
Thanks; I’ve heard of this but A) I don’t use vim or want to learn it and B) it runs on a browser whose rendering engine just had its entire team get fired and C) it’s also a google-funded browser.
Your wording allows currently Apple-funded WebKit (and I think there are some WebKit wrappers that qualify already), is it intentional or not?
Given that webkit was the original rendering engine for Chrome I would not count it as being independent, no.
I mean, I still use it, but I would prefer to have a healthy selection of engines rather than just a bunch of Chrome descendants.
Technically, Google did not do any work on WebKit itself; it just added V8, then later forked the thing. So it’s all descendants of KHTML → WebKit, with Google contributing nothing to WebKit proper. And Gecko, which I guess can also plausibly be called Google-funded…
[The problem, of course, is that when people say «Living Standard» without noticing it’s an oxymoron, any browser will either have large compatibility issues (even if the good side of the Web is viewable even in Links2), or be a horrible mess because it chases a moving target that moves too fast to allow reflection on consistent design.]
qutebrowser
It was about a 3-day learning curve for me, but I’ve loved it ever since.
Full keyboard control, but some things you can’t do with the mouse :)
Qt WebEngine, which is what Qutebrowser runs on, is essentially just Chrome’s layout engine. Nearly everything is Chrome nowadays.
You’re right about the layout engine, but it’s much better than Chrome all-around. No phoning home, built-in adblock, respect for keyboard users.
It can also run on QtWebKit, but I think they’re going to phase that out, as it’s quite outdated.
https://github.com/qutebrowser/qutebrowser/issues/4039
I’m still not sure what to do about QtWebKit support - right now, I’m still waiting for something to happen (ideally a new QtWebKit release rebased on a newer upstream WebKit) given that it’s still in active (even though slow) development: https://github.com/qtwebkit/qtwebkit/commits/qtwebkit-dev
I want a Wikipedia for time and space. I want people to collaborate on a globe, marking territories of states, migrations of peoples, important events, temperatures, crop yields, trade flows. I want to pick which layers I want to see at one time. This sort of thing would give us a clearer, more integrated view of history, and show how connected we all are. A volcano eruption in Mexico led to crops failing around the world. New World crops led to an explosion of people in southern China, a previously less inhabited place. A Viking woman met natives in present-day Canada and also visited Rome on pilgrimage.
Excellent idea. I have an idea for something similar but less… featured. My idea is about time and space tagged news.
I also think about this one from time to time; in my view it’s kinda like Wikimapia for history. I usually wonder how country borders and armies could be represented.
The Seshat global history databank is a bit similar to this (great) idea.
Animated diagrams. Something like Visio or Omnigraffle, but with the ability to easily show messages flying around, instances appearing and disappearing, clusters moving, etc.
People usually reach for PowerPoint or Keynote for this, which drives me up a wall. I’d rather have something that can directly create a video file or animated gif.
Bonus points if the storage format is plain text and plays nicely with version control.
With such preferences I would consider whether there are enough modules for Asymptote to write the drawing/animation code there efficiently…
Perhaps something built around Mermaid.js could work. Or a graph library for D3
My sibling comments have mentioned it, but here is the link to 3blue1brown’s manim.
https://reanimate.github.io/ perhaps?
Thanks for that link, Reanimate looks amazing.
There was a CS undergraduate thesis project that did this at AppState in December 2019. I’m not able to find it right now, but it was pretty cool. Similar to how 3blue1brown’s stuff looks/works.
I looked at the code that 3blue1brown released. It’s perfect for his needs.
SVG can easily do this.
Keynote can export both video and gif formats.
Do you mean something like Canva? https://www.canva.com/graphs/
Curious, what is your use case? Is this the model systems or to animate text or?
I often want to use animation to explain the dynamics of software architecture, either in terms of the interaction of parts during runtime, or the evolution of the architecture itself over time.
Although it’s total overkill, if I had to make animated diagrams I would use Blender.
Probably not exactly what you want but your description made me think of LOOPY. I’ve not tried it myself at all either, just found it interesting.
Lucidcharts is definitely not perfect, but has layers and a presentation mode that should come close to what you want, except for the video out. But maybe If you screencap the presentation?
A simpler WebRTC alternative supported by major browsers.
Transferring video/audio between client and server, or even p2p using NAT traversal, is not rocket science. The problem with WebRTC is that rather than making one spec solving the specific problem at hand, it combines a bunch of older specs harking back to the 90s, but only partially implements them (SDP, STUN, DTLS, RTP, SRTP, SCTP). Seriously, it’s like some old telco like Ericsson having a wet dream thinking their old research turds are OK in 2020.
Each standard comes with an RFC, but all WebRTC implementations in the wild (the 3-4 we have in the browsers) use an undocumented subset of each. To implement a WebRTC stack there is no definite way – look at existing code, read specs, test, fail, decode packets, investigate.
SDP just isn’t an ok way to transfer structured data (even XML would be better!). It also comes in flavors like “plan-b” and “unified” and neither is a sure way of supporting all browsers. Unified is apparently the modern way, but it’s amazingly verbose for no reason (the organisation of media around bidirectional “transceivers” is just absurd).
RTP was designed in an age not even remotely like 2020. Lots of words and effort are spent on multicast, unencrypted transfers, and imaginary A/V infrastructure that doesn’t exist.
WebRTC as adopted by the W3C is not really a spec, because rather than specifying anything, it reads like someone took the original WebRTC C++ code wholesale and transcribed it into English words – and it’s not even correct. It says implementations should do X, but in practice everyone does Y.
Why not dream of a simple video chat, period? No need for browsers in this equation.
True true… i’m being realistic :)
Laughed so hard on this one! :D.
I’m really interested in what you think: do you know any better way to implement what WebRTC does, considering the general use-case scenario?
WebRTC is trying to do a bit too many things. I’d like to split the spec into (at least) two parts. One that just concerns getting audio/video from point A to B (including NAT traversal), but doesn’t really care about multi-participant calls, p2p, or god forbid proxying/routing of A/V.
SDP needs to be obliterated forever. Pretty much any information exchange format would be better than this.
STUN/TURN is OK for NAT traversal, but the spec can be reduced a lot to support only the problem at hand.
Getting video from A to B needs some feedback mechanism for dropped packets, asking for keyframes, etc. This is what RTP/RTCP does, but there’s too much darn legacy.
Take RTP. Here’s a fun detail. It has a “header extension” mechanism to allow for experimentation: https://tools.ietf.org/html/rfc3550#section-5.3.1 – the spec specifically says it can be ignored and most things should be transferred in the payload instead. But the payload is encrypted under SRTP, and some metadata is needed unencrypted in the header. Enter https://tools.ietf.org/html/rfc5285, where this experimental and ignorable header is now specified and carries absolutely crucial information for WebRTC. And it does it together with… (drumroll)… SDP! Where each extension key is coded to a URI that carries significant meaning, which is often outdated/undocumented but very important.
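To make the “STUN can be reduced a lot” point concrete: the Binding Request that NAT traversal actually needs is a fixed 20-byte header (per RFC 5389). A sketch:

```python
import os
import struct

MAGIC_COOKIE = 0x2112A442   # fixed value from RFC 5389

def stun_binding_request():
    """Build a minimal STUN Binding Request: message type 0x0001,
    zero-length body, magic cookie, 96-bit random transaction ID.
    Sending this over UDP and reading XOR-MAPPED-ADDRESS from the
    reply is most of what NAT traversal really requires."""
    txn_id = os.urandom(12)
    return struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

pkt = stun_binding_request()
```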
Two things.
An Emacs for the web – browser primitives, but with hooks and definitions that allow full user control over the entire experience, integrated with a good extension language, to allow for exploratory development. Bonus points if it can be integrated into Emacs;
a full stack language development environment from hardware initialization to user interface that derives its principles (user transparency, hackability) from Smalltalk or LISP machines, instead of from the legacy of Unix.
Nyxt may be what you are looking for. More info here & here.
Oooh, indeed. That is significantly closer to what I want.
Re 2: Sounds like Mezzano https://github.com/froggey/mezzano apparently. Actually running on arbitrary hardware is even harder, of course, because all the hardware is always lying…
That seems interesting!
Really, you’d bootstrap on QEMU or something, and then slowly slowly expand h/w support. If you did this, you could “publish” a hardened image as a unikernel, which would be the basis of a deployment story that is closer to modern.
ETA: I’m not sure I’d use Common Lisp as the language, but it’s certainly a worthwhile effort. The whole dream is something entirely bespoke that worked exactly as I want.
Well, Mezzano does publish a Qemu image; judging from discussions in #lisp it is quite nice to inspect from within, and judging from the code it has drivers for some specific live hardware… A cautionary tale, of course, is that in the Linux kernel most of the code is drivers…
Not something that Mezzano is currently trying to do afaik, but there was a project, Vacietis, to compile C to CL, with the idea of being able to re-use BSD drivers that use the bus_dma API. From http://lisp-univ-etc.blogspot.com/2013/03/lisp-hackers-vladimir-sedach.html :
#1 emacs forever.
Would something like w3.el be a starting point for this, or are you envisioning something that doesn’t really fit with any existing elisp package?
Like, I’ve used w3 in the past, but I’m thinking more like xwidgets-webkit, which embeds a webkit instance in Emacs. I should start hacking on it in my copious free time.
That makes a lot of sense. This makes me think of XEmacs of old, ISTR it had some of those widget integrations built in and accessible from elisp.
Come to think of it, didn’t most of that functionality get folded into main line emacs?
I love emacs, a little TOO much, which is why I went cold turkey 4-5 years back and re-embraced vi. That was the right choice for me, having nothing at all to do with emacs, and everything to do with the fact that it represents an infinitely deep bright shiny rabbit hole for me to be distracted by :)
“If I can JUST get this helm-mode customization to work the way I want!” and then it’s 3 AM and I see that I’ve missed 3 text messages from my wife saying WHEN ARE YOU COMING TO BED ARE YOU INSANE? :)
I feel seen. Yeah, I basically live in Emacs; it informs both of my answers above. Basically, I want the explorability of Emacs writ large across the entirety of my computing.
A mostly text-based shell Interface to my computer, which is not stuck in the last century: https://matklad.github.io/2019/11/16/a-better-shell.html
Interesting things happen with arcan-tui and userland. Powershell and powershell-a-likes are not the answer.
Userland is really nifty and definitely breaks some new ground in UNIX user/computer interaction.
Pretty much agree with your post. Removing the distinction between shell and terminal emulator would allow new and interesting modes of operation. One of them could be pausable and introspectable pipes. Another one could be remote SSH sessions that have access to the same tools as the local one.
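A pausable, introspectable pipe can be approximated today by routing the bytes through the shell process instead of a kernel pipe. A rough Python sketch (the tap here just records lines, but it could equally pause, filter, or display them; function names are invented):

```python
import subprocess
import sys

def run_pipeline_with_tap(cmd1, cmd2):
    """Run cmd1 | cmd2, but route the bytes through this process so
    the 'pipe' itself can be observed mid-flight. A shell built this
    way could expose every pipe for introspection."""
    p1 = subprocess.Popen(cmd1, stdout=subprocess.PIPE)
    p2 = subprocess.Popen(cmd2, stdin=subprocess.PIPE,
                          stdout=subprocess.PIPE)
    tapped = []
    for line in p1.stdout:       # the tap: every line is visible here
        tapped.append(line)
        p2.stdin.write(line)
    p2.stdin.close()
    out = p2.stdout.read()
    p1.wait()
    p2.wait()
    return out, tapped

# using python itself for both stages so the example is self-contained
out, tapped = run_pipeline_with_tap(
    [sys.executable, "-c", "print('hello')"],
    [sys.executable, "-c", "import sys; print(sys.stdin.read().upper(), end='')"],
)
```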
Try power shell
First paragraph of the post explains that I am not looking for powershell. It indeed is a big improvement over bash, but in areas I personally don’t care about.
If you read the post this isn’t what the OP is going for. Powershell brings some excellent new capabilities to the table with object pipelines, and has some nice new ideas around things like cmdlets and extensability, but his post goes into much more detail about user experience aspects Powershell doesn’t even come close to providing.
I would like Oil to be able to support this kind of thing, and at least in theory it’s one of the most promising options.
And ironically, because I’m “cutting” the interactive shell, it should be more possible than with bash or other shells, because we’re forced to provide an API rather than writing it ourselves.
I had a discussion with a few people about that, including on lobste.rs and HN. The API isn’t very close now, but I think Oil is the best option. It can be completely decoupled from a terminal, and only run child processes in a terminal, whereas most shells can only run in a terminal for interactive mode.
Related comment in this thread: https://lobste.rs/s/8aiw6g/what_software_do_you_dream_about_do_not#c_fpmlmo
Basically a new “application container” is very complementary to Oil. It’s not part of the project, but both projects would need each other. bash likely doesn’t have the hooks for it. (Oil doesn’t either yet, but it has a modular codebase like LLVM, where parts can be reused for different purposes. In particular the parser has to be reused for history and completion.)
Amusingly, using :terminal in neovim changed a lot of things for me. I could then go to normal mode and select text further up in the ‘terminal’. Awesome! I mapped it to CTRL-Z to get consistent behaviour between terminal and non-terminal Neovim.
Yeah, this speaks to some of the power he references in his post that emacs brings to the table. IMO one of the things that makes neovim so impressive is that it takes the vim model but adds emacs class process control.
I’d love it if people would do more with the front end / back end capabilities neovim offers, beyond just using it for IDE integrations and the like.
You’re basically describing a regular computing environment.
Sounds like your idea and my idea have some interesting possibilities when combined :)
A trimmed-down soft fork of Firefox, removing everything not necessary: everything marketing/SaaS-related like sync, Pocket, studies, random advertising, and so on; all the unnecessary web API stuff like “payment requests”, “battery”, “basic authentication”; and APIs that don’t need support anymore, like FTP.
My goal here is to move towards something that doesn’t have behavior I don’t like (nagware and privacy leaks), and has a smaller surface area for security vulnerabilities.
The hardest problem with this is keeping it easy to sync up with Firefox for the code that you haven’t deleted (using an old version isn’t an option because of security). My initial work on this (which I don’t anticipate having time to pick back up) was a system that stored “patches” in a version control system, where a patch was a directory that contained a list of files to delete/create, diffs to apply, and scripts to run. This meant it was substantially less of a problem dealing with the inevitable “merge conflicts”, since the system could be substantially smarter than a diff tool (e.g. instead of applying a regex once and making a diff, I could make a script that said “apply this regex everywhere”).
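For what it’s worth, the core of that patch-directory mechanic is small. A toy sketch (everything here is hypothetical: the file names delete.txt and patch.py are my invention, not the original system’s):

```python
import os
import runpy
from pathlib import Path

def apply_patch_dir(patch_dir: Path, tree: Path) -> None:
    """Apply one 'patch' directory to a source tree.

    A patch here is a directory that may contain:
      - delete.txt : newline-separated paths (relative to the tree) to remove
      - patch.py   : an arbitrary script, run with TREE in its environment,
                     so it can be smarter than a diff (e.g. apply a regex
                     everywhere rather than at one recorded spot)
    """
    delete_list = patch_dir / "delete.txt"
    if delete_list.exists():
        for rel in delete_list.read_text().splitlines():
            rel = rel.strip()
            if rel:
                target = tree / rel
                if target.exists():
                    target.unlink()
    script = patch_dir / "patch.py"
    if script.exists():
        os.environ["TREE"] = str(tree)
        runpy.run_path(str(script))
```

A real version would also want diff application and conflict reporting, but the directory-of-operations model is the interesting part.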
In terms of reducing surface area, I’d love to see a build without WebRTC, or even websockets. Things like the recent report about ebay doing local machine portscans seemed to be viewed as poor behavior on their part rather than a fundamental firewall breach allowed by overzealous incorporation of poorly thought out standards. The web is rapidly becoming insecure by design, but the whole reason to use the web is to provide a sandbox for untrusted remote content.
If I was developing it, I would be pretty worried that a change like that would break too much of the web to be useful. A “better” browser is only useful if it actually has users.
You’d have to actually measure how many sites it would break to be sure. One option might be (if it was reasonably easy to maintain, not sure) to throw it all behind a webcam-like permission that needs to be given to each site and is requested upon use.
There is a fork called Pale Moon. The issue is that the code base is too large for the maintainers and it probably suffers from very old memory hazard bugs that don’t apply to Firefox anymore (some recent CVE still apply to Pale Moon). Also it doesn’t implement many new APIs (WebExtensions, WebRTC, etc…), which is a design choice you suggest.
Something actually good for photo management and editing.
All the putative Lightroom replacements are terrible. I would happily pay what I pay to Adobe to literally anyone else for software that did the 60% of what Lightroom does that I use.
Can you expand on the things in Lightroom you find most useful?
Organizing, developing, importing and exporting. And I don’t do that much developing in Lightroom, but I do use Photoshop, which is a whole other kettle of fish.
Do you / others hate capture one? Seemed pretty good from my limited use but not mentioned in any of these replies.
This is not the answer you want, but
… is because NFS itself sucks.
I feel your pain!
I’ve been continually disappointed at how poorly many Linux apps perform when run on an NFS file share.
Also, I do not speak for my employer, but I work for the AWS EFS team and my service presents itself as an NFS file share, so we see a fair bit of customer pain around this as well.
It surprises me how little innovation I see in the remote filesystem space these days given that we have pervasive gig speed networking growing on trees and computers so fast that the processing load of the NFS protocol is largely inconsequential for many use cases.
I work in the data space and everything is moving to “the cloud” and that typically means object stores. People have wasted too much money on things like HDFS, they just want to store files somewhere and not have to think about it. (The pain that these are not really file systems is largely ignored)
Yeah, it sometimes surprises people how many use cases really work well when implemented on a file system, especially one that has reasonable capabilities for locking and the like.
That’s true everywhere from the back end where people are storing raw seething data all the way up to the consumer user experience where Apple tried DESPERATELY for the first decade of its life to hide the fact that there was a UNIX-ish userland behind the curtain with a bog standard traditional filesystem, and even they’ve gone back on that by providing their “Files” app and interface in recent releases.
Files are just an incredibly useful way to think about data from just about every angle. Certainly not THE ONLY way or THE BEST way because such descriptors are pointless when talking about abstractions.
In a sense, files are a reasonable (of course almost nothing is perfect, and files are not) implementation of a core and almost unavoidable notion of compartmentalisation, or a Bunch Of Stuff, together with the clearly desirable idea of using various tools on the same Bunch Of Stuff… Hard to avoid that completely!
Second on every LightRoom clone being… not great. Sadly I’ve stuck with actually paying for LightRoom, but I would gladly pay $80+ one time for an adequate replacement.
No, my complaints apply to Lightroom itself too — at least back when I used it, export worked exactly the same way there.
I actually like the editing part of RawTherapee more than Lightroom :P
I’m just getting started. The auto button is much better in LightRoom :’)
You’ve probably tried it, and it has its own warts, but digikam does track the jpeg and allows you to re-import raw images. Just letting you know if you haven’t tried it before :)
Have you seen Darktable? I think it covers most of your bases.
Of course. The remarks about Lightroom and its clones definitely apply. Also its GPU support is incomplete and based on OpenCL – with some advanced usage of OpenCL that’s not supported by Mesa Clover (the “images” feature especially).
Things I want and/or have worked a little on–that hopefully already exist and that I just haven’t heard of yet. :)
A proper client-server map tool for pen-and-paper RPGs, that supports quick sketching of environments or detailed art. Maptool is what we used to use at a friend’s house but it’s kinda clunky…the setup with a projector on the table was not though. :)
A personalized web spider rancher and search engine that seeds itself based on your browsing history. I’d also want to be able to federate it or allow collaboration with other individuals or groups.
An open-source VR engine that isn’t Unity or Unreal (or webshit). I want something I can quickly prototype ideas on, like lovr, but that isn’t a walled garden and isn’t pushing me to using the current dominant model of game development tooling.
A self-hosted IFTTT alternative. I’ve already written one, and while an interesting step it isn’t what I want.
A self-hosted geocoding database. Something like Pelias but like I’m super lazy and just want it done somehow.
A good time-series database that can do explicit interpolation, extrapolation, and give ‘quality’ estimates to the data returned. I’ve worked on one or two of these, and everybody makes them too complicated (using elk or cassandra or Big Data shit or whatever) but also somehow manages to miss the usecase of an engineer who is taking signal measurements and needs to do reliable math on them. I just want Redis but for boring scalar and vector measurements that fit into a schema. I have a whole rant about this–and I’m also probably ignoring some solution that already does this!
A tool I can point at a directory of music and just say “sort out all of this shit and please remove obvious redundancies”. I have a bunch of files of music collected over two decades and the one thing I can guarantee is that: none of the ID3 tags are consistent, none of the filenames are consistent, none of the directory structures are consistent…I just want a program to go do the needful somehow. Machine Learning. GPT. AI. BIG DATA. idgaf.
A good replacement/rewrite of Balsamiq Markup that is documented for other people and runs on Linux and isn’t on the Cloud or some gnarly Adobe platform. I really miss this tool…I paid for it and I don’t really have any systems that run it easily anymore. :(
…there are a bunch more, but yeah.
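On the time-series item above: a toy illustration of what “explicit interpolation with quality estimates” could mean in practice. The API is entirely made up; the point is that a read returns both a value and an honest statement of where it came from:

```python
import bisect

def query(samples, t):
    """Return (value, quality) at time t from sorted (time, value) samples.

    quality is ('measured', 0.0) for an exact hit, otherwise
    ('interpolated', gap_width), so the caller can judge how much
    to trust the number before doing math on it.
    """
    times = [s[0] for s in samples]
    i = bisect.bisect_left(times, t)
    if i < len(times) and times[i] == t:
        return samples[i][1], ("measured", 0.0)
    if i == 0 or i == len(times):
        raise ValueError("t outside sample range; extrapolation not sketched here")
    (t0, v0), (t1, v1) = samples[i - 1], samples[i]
    frac = (t - t0) / (t1 - t0)
    # linear interpolation; quality reports the gap between bracketing samples
    return v0 + frac * (v1 - v0), ("interpolated", t1 - t0)
```

Extrapolation would be the same shape with a third quality kind; the design point is that quality travels with every returned number.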
Have you formed opinions on the godot VR editor? It’s a little rough and still needs the libraries from Valve to work nicely.
I found the debug builds had too much tearing and it sets off my motion sickness, so I haven’t kept at it.
I am a huge fan of godot for fast prototyping. I haven’t experimented with the VR support much but it is open source and extensible so even if it has issues now, they should be resolved in the future.
I have not! I’m still getting my apartment and devspace setup after a move.
Have you taken a look at beets yet? It uses the MusicBrainz database (community created and edited) and works great for me. If you want a gui, I have heard good things about MusicBrainz Picard.
+1 for Beets. It’s built for detail obsessed control freaks like us who want to specify EXACTLY how we want our music categorized, de-duped and stored.
I use and love it to organize my music for eventual indexing and playback with Plex.
What about Huginn [1]?
I’m a huge fan of that, and will probably give it a shot. But then again, I really wish it was Elixir. Another project to do.
:-\
Elixir would be ideal to build this at scale (high frequency or many users).
Oh, thanks for the reminder, I wanted to have another look at Huginn. I actually started to build something like this nearly 10 years ago but it didn’t get very far because I lost interest very quickly. And I noticed it didn’t actually benefit me a lot.
My idea was more something to be described in current terms as a mix of cronjobs, IFTTT, and Alexa - but all text-based, like a very fancy IRCBot. Not sure if Huginn 100% maps to that, but I think it’s close.
https://github.com/huginn/huginn ?
What are your gripes with LoVR?
Oh, I don’t mean to sound like I’m griping! I’m going to be using it soon. :)
It just is a little small in what it supports, which is to be expected. I’ll also have to talk to it over enet for my purposes, instead of like websockets. I’m a total n00b at lua so there may be an obvious solution here I’m just missing.
There is not an abundance of alternatives - I am not particularly fond of the API model (same goes for löve though). I have not done anything substantial in lovr myself; I mostly played with its internals, using it as a test platform for how VR compositing of independent clients would work and what the needs would be API-wise in a VR-desktop-like setting.
Clean room ZFS reimplementation in public domain
Is that even possible? Is there a sufficiently evolved protocol specification and even maybe test suites to allow you to create your own implementation without sharing source code?
In the absence of a spec the easiest way to do clean room reversing is to have someone read the code and write a spec. Then, only people who have never seen the code read this spec and write the new code.
There are some other relevant reversing strategies, but the above is likely to be sufficient for ZFS. You get a second implementation and a spec – well on the way to being a standard at that point.
My ignorance is showing here but this FEELS like an incredibly difficult problem space for that approach. Just from reading and talking to others I get the sense that there are a huge number of subtleties around the various subsystems of ZFS and how they interact.
Sounds like a mammoth effort, but a worthy one?
Oh yeah, I didn’t say it would be easy :) I did put it in a list of things I don’t have time for…
I think it could be done, but it’s not a summer project.
I currently use MyFitnessPal for calorie-tracking and BigOven for recipe storage, and I cordially detest them both. They’re both buggy, slow, lacking a lot of features that I’d like to have, and “unmaintained” in that special SaaS way where vendors just stop fixing bugs or adding features once they have cash flow going. They also both cover like 80% of the same ground - BigOven has a way to calculate the nutrition numbers for a recipe but no daily calorie log, while MFP has no way to store recipes as anything more than a list of ingredients.
I want to replace the pair of them with a lightning-fast local-first app, but I just never seem to get around to it.
We have Fitatu here and it seems to do both, but yeah, it’s local really.
I second detesting MFP, but I really haven’t found anything better. It’s one of the few services I use where the mobile app is really the only way to use it (they have a web app, which is completely unusable IMO). Also very frustrating that they don’t have a public API. I spent some time reverse engineering their mobile sync API, but gave up after a while…
I did some research into making something similar, and found https://world.openfoodfacts.org/ as an interesting data source, but haven’t had the time/motivation to build anything yet.
I’ve tried a whole bunch of MFP alternatives and I keep coming back to MFP (which I also do not like) because it just has more stuff in its database - I can scan almost anything I buy (in the UK) and MFP will know the nutritional details.
A CRM for personal relationships
I have read about https://www.monicahq.com/ as an example. Never tried it. Have you tried it?
Personally I find the concept a bit … autistic/creepy, but still have considered it as possibly useful tool.
Well, yeah, that’s me. Thanks for the link.
If you think it might be useful, but found a dedicated CRM app a bit much, have you tried using the notes field in your phone’s address book? I use it to jot down names of kids & spouses and things like “vegan”, “teetotal”, “pronounced […]” etc. They’re synced everywhere automatically and they’re searchable in a hurry from your phone.
I think it may seem creepy because of associations with corporations and marketing.
However, when I actually think about it… Would my life be richer and better if I was more consistent about staying in touch with people? Almost certainly!
I tried this but had difficulty getting the self hosted version to work. As far as creepy, I think of it as just a memory extension. It isn’t anything someone with a good memory couldn’t do, just helps mortals to remember birthdays, peoples’ interests, etc.
…the more I think about this the more I want it
I found this one a while ago: https://www.monicahq.com/ (not affiliated)
It needs a lot more automation to become useful IMO.
Thanks
Why do you need this, if I my ask?
Help me follow up with my friends and family
Alternative code formatter for Rust, which works gently and cooperatively like gofmt. rustfmt is a blunt tool with lots of edge cases that destroy readability on purpose, because rustfmt always chooses its own heuristics over any human input. (E.g. gofmt preserves the author’s decision about whether a construct should be a one-liner or span multiple lines; rustfmt will explode expressions when it feels like it, and make spaghetti lines if it estimates they’ll fit.)

I found that if rustfmt makes my code less legible, it often wasn’t the formatting that was the problem but my code.
I’m sure that’s not always the case, though.
If your code isn’t great, then improving it is the job of compiler warnings and clippy lints, which should clearly explain how to make it better. A formatter isn’t a linter, and it’s not supposed to give you vague negative hints by making your code less readable.
Formatters are supposed to make code more readable and free users from worrying about these details. If you need to rewrite your code to work around the formatter’s issues, the formatter is failing on both counts.
Pie-in-the-Sky:
Actually Kinda Working On But Without Any Sense of Urgency:
Things That Are On The Back Burner But As Soon As Work Either Endorses Them Or Calms Down Enough To Give Me Time To Work On Them:
An implementation of ‘/dev/compute’ to expose GPUs and similar coprocessors on 9front. The goal would be to drop all the legacy baggage around graphics in the interface, and simply expose the GPU cores directly to the OS, providing a way to pass the GPU a chunk of code and a block of memory, and letting it do its thing.
You’d get high performance compute then, and fast graphics could then be done via pure software rendering on top of that.
GPUs are getting sufficiently general purpose that this is likely to work well.
Look at Vulkan if you haven’t already, it’s pretty close to the “code + chunk of memory” model so it might be interesting.
More automated security auditing tools for crawling public code repositories such as npm or crates.io, looking for anomalies and warning people if they use suspicious packages. I tried doing this myself but there’s other stuff I want to do too.
A biiiiiig pile of Linux+Rust graphics infrastructure stuff. Lots of low-hanging fruit to be plucked there still.
Something to explore the design space of “more minimal web”. Not quite as minimal as Gemini, probably something that uses HTTP but better/different text format and more minimal API.
Something to explore the design space of “human-writable encoding format better than JSON”. There’s plenty of machine-writable ones, and TOML is very nice for configuration files, but “easy to produce deep nested structures and less screwy than JSON” is still TBD.
Something that solves the same problems that systemd does but is more minimal.
Something that solves the same problems that pulseaudio does but is lower level.
Lots of other things.
It seems like the Bevy project has shown that wgpu is a nice way to interface with graphics APIs in a high-level way.
What do you think still needs work in the graphics ecosystem in Rust? Personally, I’d like to see more development on some of the GUI crates like Druid and Iced. Do you think there’s still a lot missing on the game front? Curious to hear your thoughts as someone who maintains a game crate.
winit, which is very good but causes some friction in games sometimes, at least the sorts of games I want to write. It makes design compromises around events that are kind of inconvenient, some for the sake of GUI applications, some for the sake of getting as close to zero overhead as possible. I think it would be interesting to try to make something with 80% of the functionality and 20% of the size. miniquad includes bindings to the Sokol library, which does exactly this sort of thing, but it’s C. cpal and rodio are the de facto portable audio output crates, and they could use more love than they get.

With regards to the more minimal web, I invite you to check out my project, which also touches upon human-writable data structures.
It is a forum building system which works in Mosaic, Netscape, Lynx, Opera, and IE, among others. Newer browsers get nifty enhancements such as actions which don’t reload the page, but accessibility and compatibility is a high priority.
The “API” is based on several text-based tokens, like referencing posts with >> and hashtags with bindings.
The base data format is txt.
A link to it would be interesting!
It’s in my profile :)
Ah, the fun question :) I have several, but here’s my notes on the Todo/Email/Calendar of my dreams:
“Thought experiment 1: What if I dumped all my inbox into my current todo software (Todoist)? This would suck because it’s a crappy email client. Thought experiment 2: What if I dumped all my todos into Inbox? This would also kinda suck, because it’s missing a few features that a good todo should have…
That second one told me what I need: Inbox has “do not show me this thing until a date”, i.e. the first date something should be on your radar, and Todoist has “item is due on this date”. For a good todo I need both.
Imagine a tool whose goal is “what am I doing in the future?”, with each item having a block of text associated with it, and a model that lets you send items to other people. Emails become new items in your “today” list, with a start date of right now and an unspecified end date. You can do the Inbox-style “put off this item until its actual usable date”. Most traditional todo items are either “do this by this date” or “do this at some point”. The former have an end date but no start date, the latter have neither, and so they need some other interface for exactly where they get put.
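That two-date model is small enough to sketch directly. A minimal illustration (field names are my own, not from any existing app):

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

@dataclass
class Item:
    text: str
    start: Optional[date] = None  # don't show before this (Inbox-style snooze)
    due: Optional[date] = None    # must be done by this (Todoist-style deadline)

def visible_today(items: List[Item], today: date) -> List[Item]:
    """Items on today's radar: anything whose start date has arrived
    (or that was never snoozed)."""
    return [i for i in items if i.start is None or i.start <= today]

def from_email(subject: str, today: date) -> Item:
    """An incoming email becomes an item starting right now, no end date."""
    return Item(text=subject, start=today)
```

“Do this at some point” items have neither date and would need the extra interface mentioned above; the filter only solves the snooze half.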
Other notes:
Try Emacs! org-mode can manage a todo list, including showing all of your future tasks/events in a calendar. It also has some great searching and tagging features. You can set up a mail client like mu4e to file incoming mails into an org file. I’m not sure about sending items to other people, org can export to HTML or plaintext so I’m sure you could set something up.
The only downside is that it’s Emacs, and Emacs has a learning spiral instead of a learning curve.
Application scripting for UNIX.
What I want is the ability to treat application components like building blocks that I can address, control, and most importantly string together with a common data interchange between them, allowing me to compose complex customized workflows.
The Amiga did it with AREXX, Apple does it with the Open Scripting Architecture and AppleEvents, and Windows does it with Powershell and its several other more ancient mechanisms like OLE and even DDE.
But UNIX folks get this blank look when I try to talk about this and inevitably come back with “Uh. Pipelines?”.
Pipelines are amazingly powerful and I’ve built my career on UNIX so I’ll be one of the first to gush about the power of the “everything is a string of bytes” philosophy, but there are some things that are very hard to express with that one simple data structure.
How far does D-Bus go towards what you’re looking for?
D-Bus looks like it would handle the message passing bits. It’s not clear to me how that would handle the whole idea of applications exporting composable verbs though.
Well, D-Bus seems to have a notion of providing a service, and some built-in mechanisms for access control; and it does look like some applications declare D-Bus endpoints for communication between parts of the application.
It might be that D-Bus is easier to wrap for quick scripts than its predecessors: KDE’s DCOP (somewhat comparable to DDE), CORBA with Gnome’s Bonobo on top of it, and KParts.
Thanks I’ll dig into it more.
I do like that there are bindings for every language under the sun.
Software for manipulating math equations with drag and drop, which also lets you define your own algebraic rules, like in group theory.
Can you expand on this?
Like A/B = C: then I can drag B over to the RHS and it becomes A = BC.
Say I can define BC = Z; if I select BC, then I can choose to substitute BC with Z, yielding A = Z.
The rules of the algebra are intricate in systems like tensor algebra, so the user should be able to define rules for algebraic manipulation, all of them accessible via drag and drop.
E.g. in matrix algebra, given
Ax + b = y
you can do
hcat(A, 1) * vcat(x, b) = y
Now newA*newb = y
newA’ newA*newb = newA’y
and if we assume that newA’ newA is invertible, then
newb = (newA’ newA)^-1 newA’y
Normally this can be done in non-matrix algebra. But in matrix algebra you need additional rules. But all of that can be done via a GUI!
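The substitution half of this can be prototyped with a tiny term-rewriting core. A toy sketch where expressions are nested tuples and the GUI would only decide where a user-defined rule applies:

```python
def rewrite(expr, lhs, rhs):
    """Replace every subterm structurally equal to lhs with rhs.

    Expressions are nested tuples like ('*', 'B', 'C'); this is the engine
    behind "select BC, substitute it with Z" -- the user-defined rule
    BC = Z supplies the (lhs, rhs) pair, the GUI supplies the selection.
    """
    if expr == lhs:
        return rhs
    if isinstance(expr, tuple):
        return tuple(rewrite(sub, lhs, rhs) for sub in expr)
    return expr
```

A real tool would need pattern variables and side conditions (e.g. “only if invertible”) rather than literal matching, but the recursive rewrite is the skeleton.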
very interesting! Might play around with this idea
This is more than just software, but I’d love a scanner on my fridge/pantry that I could scan all my food into.
Then I could see what is in the fridge, and, more importantly, what’s about to go bad, without looking through it.
It’d also make making a grocery list a snap, since a big chunk of my food is staples.
Check out https://grocy.info/
Oooh, thanks!
Does this mean a check-in/check-out system? I guess for the scanning part nobody can help you with physical setup… But you can attach an Android smartphone with Binary Eye or something, scan everything incoming, and enable «forward all scan data to URL» with whatever you have around to receive and collect the data.
I guess you could scan a receipt before and after each batch of bought things to show these are incoming, and have a different marker to scan for things running out.
Sounds like the receiver might be a reasonably simple script pushing everything into an SQL database — or doing nothing, if you prefer parsing the logs. Maybe having webserver logs with data would make getting around to actual processing easier…
(of course, good luck with loose fruit here)
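The receiving script really can be small. A sketch of the “push everything into an SQL database” idea (the table name and the in/out direction markers are my invention); Binary Eye’s forward-to-URL feature would hit an endpoint that calls something like record_scan:

```python
import sqlite3
import time

def record_scan(db_path, code, direction):
    """Append one scan event; direction is 'in' (check-in) or 'out'
    (set by whichever marker barcode was scanned last)."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS scans (ts REAL, code TEXT, direction TEXT)"
    )
    con.execute(
        "INSERT INTO scans VALUES (?, ?, ?)", (time.time(), code, direction)
    )
    con.commit()
    con.close()

def on_hand(db_path):
    """Net count per barcode: check-ins minus check-outs."""
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT code, SUM(CASE direction WHEN 'in' THEN 1 ELSE -1 END) "
        "FROM scans GROUP BY code"
    ).fetchall()
    con.close()
    return dict(rows)
```

Because it’s an append-only event log, the raw data also supports the consumption-rate estimates mentioned later, not just the current inventory.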
Yes, check in check out. I wouldn’t mind doing the scanning, frankly. I guess I would have to scan things going out.
Yes, loose fruit or cooked items would be problematic.
Then maybe indeed install Binary Eye and start scanning? Once you have some data, the barrier of entry to actually processing it will become lower… (and even if unprocessed data doesn’t help you find expiring items, it will later help you estimate the range of consumption rates of various items)
Cooked items are kind of less of a problem: once you have a barcode for each rough type (which can be Data Matrix or something — yay multiformat scanning), the overhead of check-in/check-out is not large compared to cooking. I guess for fruit you could check in each batch…
There are a few, but all of them for some reason choose to leave the same key-mappings, even when they make little sense. I think a lot of Vim defaults are there because of backwards compatibility. Kakoune would have been nice, except they messed up the workflow by going with the selection-first approach.
I am not sure what form this should take, but often I want to write notes about various pdf files I am reading and store the pdf and notes in the same place. There is papis, but it does a bit too much in my opinion, and is more for storing meta-data (which is also important) than actual notes.

There are a few, but they are a bit too complicated. mutt for example is designed to work with various other tools that you have to set up first. And then you have to set up mime extension handling, and you need yet another piece of software to hold your contacts. Something simple (from the user’s perspective) would be nice in my opinion, even if less powerful.

Right now with Make you have to declare dependencies between scripts within the Makefile. I am not sure if it’s possible, but it would be nice to have a separate tool that handles dependencies. In this vision you would simply have some scripts that execute. The tool would then save dates for when each script was last executed, and in addition would allow you to declare dependencies via the command line (like tool add script1 script2). And of course other commands to display the DAG, list targets that are out of date, etc.

Can you elaborate? What’s the downside to this?
There is this nice write-up that reflects my experience pretty well, so if you don’t mind I will just link to it: https://github.com/noctuid/dotfiles/blob/master/emacs/editing.org#why-not-kakoune
Re: 2: I think if you say the existing tools do too much, it would be interesting if you explained why having filename.pdf.txt with notes is doing too little.
Re: 4: Hm, I am not sure, if two different scripts modify some file, how is this handled from the point of view of the file’s staleness (then it turns out that the file is a log file so the operations are not even idempotent, but that surely needs special-casing)?
Re 2: filename.pdf.txt is slightly too little in terms of organisation. It is similar to what I am doing now (I have two files, one for notes and another one for metadata besides pdf). The problems for me start when I start adding some fields (lines really) in notes or meta data. Then older files have missing information and when I grep for something I am not sure if some results are not returned because they didn’t match, or simply because I forgot to add that specific type of information.
I guess a template with standardised fields would be enough. I am not too sure on the specifics. All I know is that my current approach is a bit clunky. And papis seemed overkill and not-enough at the same time. I suppose one part of creating new software is figuring out details like that.
Re 4: In the case of a log file, I guess you would write to it any time the script is executed. In that case IMO there is no need to add a dependency in the system at all: whenever something is rebuilt the log will be appended, but you probably don’t need to state that some script should execute whenever the log is updated.
If two scripts write to the same output (i.e. append lines one after the other) then it might be tricky. I haven’t solved all the cases in my head, just wondering if it’s feasible or not. But I guess maybe you can introduce something here, like not declaring that output file to be out-of-date until the whole pipeline of a re-run is finished?
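The bookkeeping for such a tool is pretty small, for what it’s worth. A toy sketch (the command names and the JSON state file are my invention) that stores declared dependencies and last-run times, and lists what is out of date:

```python
import json
import os
import time

class DepTool:
    """Toy dependency tracker for scripts, outside of Make.

    add() declares that `script` depends on `dependency`,
    ran() records an execution time,
    stale() lists scripts whose dependencies ran more recently than they did.
    """

    def __init__(self, state_file):
        self.state_file = state_file
        if os.path.exists(state_file):
            with open(state_file) as f:
                self.state = json.load(f)
        else:
            self.state = {"deps": {}, "last_run": {}}

    def add(self, script, dependency):
        self.state["deps"].setdefault(script, []).append(dependency)
        self._save()

    def ran(self, script, when=None):
        self.state["last_run"][script] = time.time() if when is None else when
        self._save()

    def stale(self):
        out = []
        for script, deps in self.state["deps"].items():
            mine = self.state["last_run"].get(script, 0)
            if any(self.state["last_run"].get(d, 0) > mine for d in deps):
                out.append(script)
        return out

    def _save(self):
        with open(self.state_file, "w") as f:
            json.dump(self.state, f)
```

The shared-output-file question above is exactly what this doesn’t answer: it tracks script run times, not file contents, so two non-idempotent writers to one file would need extra rules on top.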
Re: missing fields — template indeed won’t save you as it gets updated, maybe you need a pass that would check what fields from the current template are missing in the old metadata files?
Re: dependencies: I think the system should discover such dependency, no? Otherwise it’s just a CLI to edit a Makefile… And so the question is what it will discover for multiple non-idempotent scripts modifying the same file (but let’s say the user doesn’t add an ignore because semantically there is idempotence)
I think aerc might be a good fit.
Evince and Okular can annotate PDFs.
Where do I start.
STL Library. I have a whole ton of STL files from the various patreons I follow. I’d love to be able to collect and tag the various STLs so I can find what I’m looking for quickly. For instance, I’ve got one STL in a subfolder of a subfolder that is a dragonborn ranger. I may say one day, hey, I need a ranger; well, good luck finding that in “June Artisans Guild 2020/Dragonborns/dbr02.stl” or whatever. Tagging the individual models is the real trick here. Linking to multiple files is needed, as some of them are split and some even have pre-supported versions. I’d love to have a preview as well.
Same goes with Audiobooks. I have audiobooks in several places around my various drives. I’d love to be able to collect them into an app and catalog them in situ. Just allow me to gather metadata about the book, organize in the program and not have to collate my files.
STLvault http://stlvault.com/ is aiming at this.
There’s a nautilus plugin for that :>
I love you.
Sounds like what you want is actually a way of handling the file tagging.
What I tried doing at some point is have files live in whatever places they live, then have an SQL database with paths and tags, then have a virtual filesystem so I can just cd into a query looking up some tags, then use whatever basic thing-handling tool that can understand the basic idea «this directory, let me browse the stuff in it»
I do have the virtual FS part now (QueryFS), but I never moved beyond very basic metadata extraction, as I learned to put files into consistent places and find them there faster than I learned to build a consistent tagging structure. But this is a function of what files I store and how I handle them, of course.
(I still use that setup for streams of files, be it emails or Lobste.rs discussions — fetch, index into DB, cd into selection, handle with a fitting tool/script, rm to dismiss — which just marks as read)
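For anyone curious, the database side of such a setup is compact. A sketch with an assumed two-table schema (files and tags are names I picked); lookup is what a virtual filesystem like QueryFS would list for a tag query:

```python
import sqlite3

def tag_db(path=":memory:"):
    """Open (or create) the tag database: one row per file, many tags per file."""
    con = sqlite3.connect(path)
    con.executescript("""
        CREATE TABLE IF NOT EXISTS files (id INTEGER PRIMARY KEY, path TEXT UNIQUE);
        CREATE TABLE IF NOT EXISTS tags  (file_id INTEGER, tag TEXT);
    """)
    return con

def tag(con, path, *tags):
    """Record that a file (wherever it lives on disk) carries the given tags."""
    con.execute("INSERT OR IGNORE INTO files(path) VALUES (?)", (path,))
    fid = con.execute("SELECT id FROM files WHERE path = ?", (path,)).fetchone()[0]
    con.executemany("INSERT INTO tags VALUES (?, ?)", [(fid, t) for t in tags])

def lookup(con, *tags):
    """Paths carrying ALL the given tags: what the virtual directory would list."""
    placeholders = ",".join("?" * len(tags))
    q = ("SELECT path FROM files JOIN tags ON files.id = tags.file_id "
         f"WHERE tag IN ({placeholders}) "
         "GROUP BY path HAVING COUNT(DISTINCT tag) = ?")
    return [row[0] for row in con.execute(q, (*tags, len(tags)))]
```

The files stay wherever they already live; only paths and tags go into the database, and the virtual FS turns a lookup into a browsable directory.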
Yeah! That’s a cool idea, it’s sort of an abstracted tool set for these ideas. While this would be OK for me to use, it would be hard to get my fiancée sold on this method. Maybe step one is setting up something based on this. Or maybe a generic library application that takes care of things like “watch this directory, add new files to the sort queue”, where you can then define metadata fields for whichever library you’re trying to create. It then takes care of the querying and linking for you.
Both inotify and multiple configurable FS indexers come to mind…
I guess if you set up something you like, it might be feasible to distill what you need to get more of your family sold, and wrap it in a minimalistic GUI?
Basically: finding things should be done simply by browsing the virtual FS in a file manager or the tool’s own directory picker. At least that’s what I do, except in shell. I think if you can sell people on the approach, such UI solution will be deemed acceptable — you are not guaranteed to convince people that tag-setting, regardless of UI, is worth the benefit, of course…
I assume the initial background setup can stay your exclusive task for a long time, hopefully you don’t reinstall the system from zero weekly.
So the question is tagging. I believe you can find an indexer with mixed inotify and crawling, which achieves a good balance between fixing omissions in its knowledge after problems and not hogging the entire IO throughput… As you, by definition, have an SQL table of all the tagged files, and hopefully you get an SQL table of all the found files, it should be easy to get the list of files in need of hand-tagging. And feed them to a virtual directory, naturally. Generally, a GUI for going through a directory and recording what tags the user wants to assign to each entry should be feasible, if you already know there is a workflow worth that effort… A virtual directory helps here, as a viewer tool can be launched side by side in case of doubt.
PS. QueryFS is in a state of being very useful to me but without a clear plan to make it useful to others; in particular, that means feature requests have a good chance of being implemented.
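The “files in need of hand-tagging” query mentioned above is just a set difference between the indexer’s table and the tag table. A sketch with hypothetical table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE found_files (path TEXT PRIMARY KEY);  -- filled by the crawler / inotify watcher
CREATE TABLE file_tags  (path TEXT, tag TEXT);     -- filled by hand-tagging
INSERT INTO found_files VALUES ('/docs/a.pdf'), ('/docs/b.pdf'), ('/docs/c.pdf');
INSERT INTO file_tags  VALUES ('/docs/a.pdf', 'tax');
""")

# Everything the indexer found that has no tag yet:
untagged = [row[0] for row in conn.execute(
    "SELECT path FROM found_files EXCEPT SELECT path FROM file_tags"
)]
print(sorted(untagged))
```

Feeding that result set to a virtual directory is then the same mechanism as any other query.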
Looking into this now as well; it’s going to take me a bit to grok it properly. FUSE is something I have never taken the time to look into.
As a feature of desktop environments: Focus follows mind.
2: A web browser that:
3: A hard-realtime UI that is allowed to make you wait for things, but only where it actually makes sense. Nothing’s allowed to run in the background and affect latency in your text editor / shell / window management / whatever.
I am afraid that applying a readability-equivalent is not so straightforward…
I am consuming most of the web content via a pipeline of: parse HTML → make a Readability-lite copy and an original copy → HTML-to-text in a specific way I have chosen for my personal comfort. So I am not incredibly far from living in the world you want. In some cases the readability-lite copy is a nice help, but in many, many cases it is a complete failure; and it is also clear that it failed at a choice that is genuinely hard.
As a very primitive example, on some pages collapsing the comment-related fluff is the first thing to do, and on some others it’s the real value of the page.
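A toy version of the HTML-to-text step, using only the standard library (the real pipeline’s readability heuristics are the hard part; this just drops script/style content and collapses whitespace):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}  # non-content elements to drop entirely

    def __init__(self):
        super().__init__()
        self.depth = 0      # nesting depth inside skipped elements
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self.depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0 and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html):
    p = TextExtractor()
    p.feed(html)
    return " ".join(p.chunks)

print(html_to_text("<p>Hello <b>world</b></p><script>var x=1;</script>"))
```

The genuinely hard decisions (is this comment section fluff or the real value of the page?) happen before this step, in the readability-lite copy.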
3D mechanical CAD that is good enough, like KiCad is good enough for electronics. I don’t necessarily need it to handle a 747 or a car factory, but a kit plane or a robot would be nice. The lack of a good FOSS CAD package is what’s stopping me switching full time to Linux; it’s why we have Windows laptops at work to go with Ubuntu desktops, and why I have a personal MacBook Pro with the this-is-definitely-going-to-burn-me-one-day cloud storage of Fusion 360. I would like to lose this mess completely and just live in Linux full time. I just think it’s still a bit too far over the critical distance from the pain points programmers have to receive the attention required to solve it well. Maybe the 3D printer boom will help things along.
Have you tried Solvespace? It’s not as sophisticated as Fusion360, but it’s in the same paradigm, easy to learn, and I’ve found it useful for designing robot assemblies.
Have you tried FreeCAD? I have played around with it and for small models it seems to be sufficient.
I want to work with Core War but have no time to do so.
I’ve had a ton of fun with r2wars recently. It’s the radare2 equivalent of CoreWar.
A Namecoin-like software that generates HTTPS certificates, so HTTPS Everywhere can be backed by proof-of-work as opposed to the goodwill of the HTTPS Everywhere foundation
A new framework for cross-renderengine (OpenGL/Vulkan/Metal) and cross-platform (Mac/Windows/Linux) GPU-accelerated video effects. Basically the next generation of https://github.com/resolume/ffgl/.
A good binary file diff viewer, that is optimized for viewing flash memory, such as dumps of microcontroller memory.
Arrange the bytes in a grid, and ‘OR’ all the bytes in each column together. This will indicate whether a single byte or pattern was scribbled over the top of existing data.
Different colors for changed bytes with extra bits set or cleared. This is a good indication of whether you are looking at the aftermath of a partial erase, or a memory cell that is losing its ability to hold data.
Understand the block size, to indicate which blocks are changed / corrupted, which blocks are unchanged, which blocks are blank.
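The column-OR idea is cheap to prototype. A sketch, assuming the dump is viewed as a grid of a given width:

```python
def column_or(data: bytes, width: int) -> list[int]:
    """OR together every byte in each column of a width-wide grid."""
    cols = [0] * width
    for i, b in enumerate(data):
        cols[i % width] |= b
    return cols

# A zeroed region with one byte scribbled into it:
page = bytearray(b"\x00" * 32)
page[3] = 0x5A
print(column_or(bytes(page), 8))  # only column 3 lights up
```

The same loop, run per erase-block instead of per column, gives the changed/unchanged/blank block classification.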
I’ve written the tool myself using ANSI color escapes, then piping through
aha
for HTML output. But it’s much slower than it should be.

https://luna-lang.org - I tried to get hired by them, but unfortunately we couldn’t find common points with my areas of expertise.
A non-Electron, lean WYSIWYG Markdown editor with outliner features and plugin support, for stuff like ASCIIMath, ditaa, ASCII UML diagrams, runnable code snippets, etc. (I know, I could learn Emacs and org-mode… problem is, I am already fairly advanced in vim, and I don’t suspect evil mode has all the features I use…) Ideally with WordStar keyboard shortcuts.
A non-Electron GUI email client, with similar features as notmuch, with easy tagging of emails using emoticons/icons and instant filtering by tag combinations (ideally all icons visible at once) and allowing me to easily edit received messages so that I can keep only the crucial parts (but the rest of the whole email text could still be shown “grayed out”).
A git GUI allowing easy rebasing and splitting of commits via drag and drop on a tree visualization. Also with easy browsing of history and blame-digging (a.k.a. how did this bug/suspicious code get to look like it does now?).
A car driving simulator using Panini projection and Minecraft-like world editing, possibly on hex grid, shared in wiki-like way so that people could map their cities and train driving in them. With ability to represent non-flat roads, slightly uphill/downhill, up to steep narrow streets of Italian towns.
A microkernel-based OS working on Raspberry Pi 4 (possibly a set of missing drivers for Genode OS).
A REPL for Nim similar in power and features to OCaml’s utop.
Also, did I mention https://luna-lang.org?
With regards to a non-electron GUI email client, take a look at https://github.com/astroidmail/astroid. It is essentially a frontend for notmuch and the developer is very responsive.
I am not saying my dreams are widely shared, or good projects to take up… but I will answer the questions as stated.
Like a spreadsheet, only data-block-first. Multidimensional arrays come first, then they are laid out to show them best (unlike spreadsheets, where a huge 2D sheet is primary, and arrays are kind of clumsily specified on top). Of course the ranges are also named; operations are likely to be a mix of how spreadsheets work nowadays, how normal code is written in Julia/Python/R/…, and some things close to APL/J. No idea whether this can be made more useful (for someone) than the existing structured-iteration libraries, maybe with a bit better output/visualisation code…
A DVCS that does not regress compared to Subversion. I want shallow (last month) and narrow (just this directory) checkouts supported well enough that the workflow that makes sense is just versioning the entire $HOME, then sometimes extracting a subset as a project to push. Although I have no idea if the next iteration of Pijul will approach that.
A hackable 2D game with proper orbital mechanics from take-off to landing (including aerodynamic flight, with the possibility of stalling before landing, etc.). Orbiter definitely does more than I want, but for me a 2D version would feel like a nicer, more casual thing. And 2D probably has better chances of not needing Wine…
Writers have tools for sketching out and reshuffling the story; for proofs and for code documentation there is more weight on the notion of what depends on what, and sometimes one can reverse a dependency, or replace it with a forward declaration. Sketching and experimenting around all that can probably be aided by some kind of tool, but I have no idea what it would look like. I guess it would have something in common with Tufts VUE…
git can technically do this (using worktrees, subtrees, sparse checkouts etc.) - but the UI for it … does not exist. It seems like a low-hanging fruit to implement this (and one which some friends with whom I collaborate on monorepo tooling may end up picking at some point).
The thing git fails at completely on the data-model level is that it insists a branch is a pointer. In fact a branch is more of a property of a commit, which leads to much, much better handling of history, and as a curious (but convenient) implication also brings the possibility of multiple local heads for a single branch…
Of course all-$HOME versioning is likely to benefit from a more careful approach to branches, and maybe treating not just content but also changes as more hierarchical structures with possibility to swap a subtree of changes in place, but I really do not believe in anything starting from git here…
Your spreadsheet concept basically already exists in Apple Numbers. Spreadsheets there don’t take up the whole page but instead are placed individually as a subset of the page.
To your point on DVCS, there are big companies that do have this kind of thing available, but I’m not sure how much of it is open-sourced.
Thanks!
Re: Apple Numbers: Hm, interesting (not interesting enough to touch macOS, but I should look up whether they support more dimensions in all that etc.). Although I would expect the background computational logic to be annoyingly restrictive, but that could be independent of the layout.
Re: DVCS: what I hear about is actually very restricted, more about how to handle an effectively-monorepo without paying the worst-case performance cost than about how to structure the workflow to be able to extract a natural subproject as a separate project retroactively.
As to the first and last points, maybe https://luna-lang.org would be interesting to you? (I am a huge fanboi of them.)
One more data flow language?
I mean, data flows are cool, sure, but I am fine writing them in one of the ton of ways available in text.
They don’t solve the data entry + presentation issue per se (layout of computation structure and layout of a data set are different issues), and structuring a proof looks way out of scope for such a tool.
ETA: of course a data flow language well done is cool (any language paradigm well done is cool), I just don’t have a use case.
With Luna the idea is that you can write in text if you want, then jump to graphical instantly and tweak, then jump back to text, etc. with no loss of information.
As to the rest, I guess I don’t know the domains well enough to really grasp your needs & pain points :) just wanted to share FWIW, in case it could get you interested. Cheers!
Sure, I understood the capability to switch between representations losslessly; I just lack a reason to do significant mouse-focused work (which, indeed, is not said anywhere in my comment), so using this capability would always be a net loss for me personally.
A cloud-free IoT device framework/OS. There are so many cheap Chinese IoT devices out there that just take some off-the-shelf software and toss it on lightly customized hardware. If there were some software that didn’t require a server to operate, I have to imagine some of them would pick it up, and we could slowly start to change consumer IoT from a privacy & security nightmare into what it was originally supposed to be.
Unfortunately, I managed to finagle my dream project at my day job into existence, so all of my mental energy has been going into that. (Which, coincidentally, is making a cloud-focused IoT platform a little less cloud-focused.)
Have you heard of/used Homebridge? I think its main thing is HomeKit-specific (so, Apple products), which works for me, but it also has a web UI available where you can manage your IoT devices too.
I have an odd collection of Philips and Xiaomi smart devices and am able to keep them all safely off the internet and controllable through all our devices at home, it’s nice!
I absolutely agree with this.
Offline, local control is one of the big selling points for BLE, especially with the mesh spec finalized and (at least starting to) become more and more common. Getting consistent hardware/implementations/performance, on the other hand, still feels way too difficult. The same can be said for Weave: it makes a ton of sense but is genuinely not fun to work with.
I’m not sure why but I find the DIY systems (Home Assistant, openHAB) abrasive and, for me at least, flaky.
A semantically aware diff tool for Python that would work in tandem with git to aid refactoring work.
Instead of a load of line changes you would get, e.g., “moved function get_foo from bar.py to baz.py”.
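A crude first cut of the “moved function” detection is possible with Python’s own ast module: compare where top-level functions live in the old and new versions of the tree (the file contents here are inlined, hypothetical stand-ins for the two git revisions):

```python
import ast

def top_level_functions(source):
    """Names of functions defined at module top level."""
    return {node.name for node in ast.parse(source).body
            if isinstance(node, ast.FunctionDef)}

# Hypothetical before/after snapshots of a repository:
old = {"bar.py": "def get_foo():\n    return 1\n", "baz.py": ""}
new = {"bar.py": "", "baz.py": "def get_foo():\n    return 1\n"}

old_locs = {name: f for f, src in old.items() for name in top_level_functions(src)}
new_locs = {name: f for f, src in new.items() for name in top_level_functions(src)}

for name in old_locs.keys() & new_locs.keys():
    if old_locs[name] != new_locs[name]:
        print(f"moved function {name} from {old_locs[name]} to {new_locs[name]}")
```

A real tool would also compare function bodies (so renames and edits-during-move are detected), but the AST comparison is the core idea.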
My dream web app would be the ability to pay open source maintainers some amount of money, whatever they feel is appropriate, to either ask them questions or even have them review an idea. My work, like most people’s here I would imagine, is entirely reliant on open-source projects maintained by often great people. However, getting help in these projects is a nightmare. It’s either a mailing list that gets ignored, or a Slack where it’s often the blind leading the blind.
I would love some sort of GitHub integration that says “for $500 one of the maintainers will look at your bug report and give you some ideas on why you might be seeing this problem”. I don’t want some sort of feature bounty, because I really like that new features are PRs. However, sometimes I just need someone who has spent a lot of time with the software to take a look and let me know if I missed something, or, if I need to make a PR, where it might make sense to start. Or even to say “look, what you are doing is terrible and we don’t recommend using our software for that”. I think it’s a missing link between traditional “enterprise support” and the current state of development.
Single-tenant SaaS software that scales to zero.
The software industry is too geared towards Google scale. Self-hosted software is often PHP with some database that needs to keep running in the background. Backup is never mentioned in the installation manual.
If I had the time I would:
I don’t recommend trying to build a database on top of something without transactions and with only eventual consistency.
Like a hard disk drive with its own firmware?
Hard disk drives should provide immediate read after write consistency of all data. S3 does not.
As far as I know, some HDD firmware will buffer & reorder writes for performance reasons; meaning a power outage can (rarely) cause an ext4 “journaled write complete” to get written without the actual content.
Well, at least most of them mostly honour write barriers, and at least we have grounds to call drives that lie about barriers lying garbage that they are (no-barrier sequences of writes are fair game to reorder, though). Lying is an important part here, of course. With S3 the normal mode of operation is officially expected to have temporary inconsistencies even in the best case, which is fine for many use cases, but maybe even more annoying than the modern hard drive behaviour in this specific one.
That may be, I have no idea. I’ve definitely read that writing a database backend on modern file systems and drives is a nightmare.
On S3, you wouldn’t even need a power outage to cause consistency issues, though. Hence why I suggest not trying to write a database that stores data in it. It’s really not designed for that.
An interactive shell for Oil :) It exists in Python and works well, but I don’t have time to translate it to C++, because of the issues laid out here:
http://www.oilshell.org/blog/2020/08/risks.html
All the algorithms are figured out:
I would like help porting it; alternatively, as I propose in the blog post, it could be moved up to the “user level”.
latest release: https://www.oilshell.org/release/0.8.pre10/
A window manager that I can save window layouts and “summon” them onto screens (this is important: my workstation has two heads, but I often work on my laptop in the garden and I want to have the same screens there).
I’ve got something quite gross “working”, but it’s an unhealthy blend of autoexpect and stashing stuff in window properties that I can pick up in i3-cmd + xdotool + xwd scripts, plus a little x11vnc with a clipped window (left screen). I think with better integration with the window manager I could do better, but I’m busy.
I was not aware that X11 had a KV store on windows. You monster.
Oh yeah! Check this bad boy out:
Gross, right? But now I just press alt-enter to get another terminal in the same directory that I’m pointing to.
Calvin’s law: All distributed systems expand until they contain a process-local key-value store. (See: OTP)
Woah. This is an amazing and deep rabbit hole for customization!
I’d like an operating system I can run on a laptop that I can:
I guess Ubuntu is my best bet? I still have yet to find a single Linux-using coworker who can reliably join Google Hangouts video calls, though (I have no control over what video tool I use for work, but I CAN control my OS).
Give Parrot OS a try.
I’m guessing macOS is not on the cards?
I mean, this is a Macintosh?
I’m specifically frustrated with my nearly $3000 2018 macbook pro which has a swollen battery and cannot simultaneously handle a video call, slack and my IDE. I do recognize I’m being grumpy though.
lol fair enough.
To be fair, that’s probably an issue with the video calling software, Slack, and your IDE. Though IDEs at least have an excuse for eating some resources.
My tip is to use Slack in a browser tab if you can, though this doesn’t allow you do call through it.
I run Fedora on a Thinkpad and carry an iPad for slack and other text/video conferencing tools. Oddly enough, I landed here out of frustration with Apple laptops released since 2016. I’d been happily using Mac laptops since the ’90s, prior to that.
The only place Fedora falls down for me (for the items on your list) is the reliability of conferencing tools. I’d say the ones you list are fine for me about 80% of the time, but soaking up the rest is worth the cost of the low-end iPad I use to do it. Hangouts/Meet, Zoom, Goto Meeting, Teams, etc. all work well there. Plus the camera is better, and Slack sucks less on iPad than anywhere else I’ve used it. Slack and Discord don’t chew battery there the way they did on Mac and Linux, either.
My non-LastPass password manager of choice is Bitwarden, FWIW. It did a decent job importing my large, very heavily used, 11-year-old (all my passwords since 2007!) 1Password database, but there were quite a few empty entries to clean up after the initial import.
I still have not found any app for my notes. It should be a simple, fast, responsive, and beautiful app that syncs my notes and works on Linux, macOS, iOS, and has a web client. There are two million notes apps out there; why doesn’t any one of them get it right?
Have you looked at / considered Joplin ?
I use it daily on Mac, iOS, Windows and Linux. For my personal universe, all the versions sync via WebDAV to my NAS.
For my work universe, I maintain another notes DB behind my cloudy overlords firewall using their WebDAV infra for syncing.
As to a web interface there’s joplin-web - I haven’t set this up. YET :)
I’d really like a HTTP proxy inside Emacs so that I can navigate and edit requests and responses with all of the usual text editing tricks. It would be something like a cross between mitmproxy and magit.
I’m not sure if this is what you are looking for… Maybe it is! https://github.com/skeeto/skewer-mode
At some point I’m going to create “complex” software following the UNIX philosophy, using https://mkws.sh/pp.html to render to the web. Think of it as a web interface for something like https://github.com/leahneukirchen/mblaze in the case of an email client, or a web interface for something like https://adi.tilde.institute/cbl/ in the case of an analytics system.
Full Stack (hardware to desktop environment) rearchitecture with low latency as the primary focus.
A programmatic diagram / graphic builder, like Processing that also lets you edit the diagram with a mouse, like OmniGraffle, and the source updates as you do.
I’ve had to draw diagrams with lots of repeating pieces in omnigraffle, which gets tedious to edit if you need to make a change that isn’t just a property change. It should be a function that can be repeated in any location, and then updates live everywhere when I change it. And of course most editors aren’t nearly as great at helping you select things - I miss being able to select e.g. all dotted lines, when I’m not in omnigraffle.
In the other direction, drawing lines between boxes, and making selections are much better with a mouse.
While there are lots of examples of graphics languages, there are clearly some interesting issues about how to translate common mouse operations back into source. I’d like to dig into it, but really I just want to use it - I came up with the idea because I want to make more and better diagrams about the things I’m actually working on.
A simple terminal task manager / time tracker with curses based interface with tagging, contexts, quick per-task notes and automatic priority calculation. TaskWarrior is close, but not really there.
Have you looked into https://orgmode.org?
I am not a fan of Emacs. I prefer smaller tools and vim.
What are you missing in taskwarrior? I am using it with tasksh as the “frontend” and like it a lot
A Peer To Peer strike app to coordinate workers along an international supply chain to perform a Machine-Learning optimized chessboard strike.
It would work like this: users can map the logistics of the supply chains (production times, travel times, buffers, salaries, contract types, and so on) through a simple app. When enough people along the supply chain are on board, a strike can be proposed. If enough people/organizations/unions agree, the app will call the strike, maximizing the disruption of the supply chain while minimizing the hours of strike, weighted by the vulnerability of the workers along the supply chain.
Gigantic open problems:
Electron without Electron. I can see the benefits Electron has had in terms of making desktop applications easier to build for a whole lot of people. But I hate the cruft and janky dependencies of NodeJS and the whole dependence on embedding Chromium. I want a native desktop framework that has a good, solid HTML/CSS interpreter and an embedded UI programming language which has more type safety and doesn’t end up being a resource hog, while being somewhat aligned with the language used for the lower-level programming of the application. It should compile down to a manageable static binary size. CSS, LESS, and JS concepts of UI development work for people; people understand them, so it has to be something parallel to that, but stripped of all the extra cruft that should be handled by the underlying language the framework is built in. It should also expose a GL context into the HTML/views, to render highly customized graphics / bypass the HTML rendering engine from the backend.
Also an out of the box CRDT (or similar) based embeddable database library with a gossip/viral-like syncing protocol for deploying desktop, embedded and mobile applications in places where connectivity is typically pretty bad and it’s easier to shunt data along through other devices until connectivity is stable.
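The CRDT core of such a library can be tiny. For example, a last-writer-wins register, whose merge is commutative, associative, and idempotent, so replicas can gossip their states in any order and still converge (a sketch of the concept, not a production design):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LWWRegister:
    """Last-writer-wins register: the CRDT 'hello world'."""
    value: object
    stamp: tuple  # (logical_time, replica_id); the id breaks ties deterministically

    def merge(self, other):
        # Higher stamp wins; merge order never matters, so gossip is safe.
        return self if self.stamp >= other.stamp else other

a = LWWRegister("draft v1", (1, "phone"))
b = LWWRegister("draft v2", (2, "laptop"))
assert a.merge(b) == b.merge(a) == b  # convergence regardless of sync direction
print(a.merge(b).value)
```

Real embeddable CRDT stores layer maps, lists, and counters over the same merge discipline; the hard engineering is in the gossip transport and storage, not the merge rule.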
What’s a good candidate for cross-platform GUI? Qt?
Qt5 is nice, but if you go the QML way it’s sort of a bastardization of HTML/CSS, and when I’ve used it, it doesn’t feel as easy to use as it could be. I’d like a frontend developer to be able to build out the UI with something they can easily pick up, something that has parallels to the browser context.
Also, Qt5 licensing is awkward to understand, which I think is why it doesn’t get as much uptake as it should. It’s a great framework/application-building platform, but I think the license is just too confusing for people who want to dip their toe in. When you hold GUI frameworks up beside each other, you look at Qt5 and see a really robust platform for cross-platform development, but then you see the license terms, and a lot of people get scared away by them. There’s not a lot Qt can do about that, as it’s because of the libraries they use; and the reason the framework is as good as it is is that they’ve had commercial sales to underpin the work.
But there are a lot of semantics about how C++ works in there, and about how the whole QML + JSON/JS hybrid thing works, that just irk me. I want to be able to hire a frontend developer to work on an application from the pool of available web developers, with minimal friction in their transition to a desktop context (this is one of the main reasons Electron is as popular as it is). There are some awesome frontend devs/designers out there, and I think we sort of shat the bed on making usable/understandable tools for them to build UIs on native platforms.
Qt has QWebEngine but that just embeds chromium again, so you’re back to basically the same as electron.
An intelligent tiling window manager that can automatically re-arrange (on demand) based on the content of its windows (i.e. it would be aware where your browser / terminal has no significant content).
An open educational computing system with OS & compiler (+ simple hardware) that can be understood in its entirety by a single human being. http://www.projectoberon.com comes close, but a bit more modern and less archaic.
A reimagined web browser that builds on itself (i.e. starts with some SVG/PDF level primitives and creates higher level webcomponents with increasing complexity from that, all with a single coherent logical syntax (like s-expressions f.i.)).
A UI framework targeted at power users, like a modern-age ncurses that isn’t restricted to the terminal, with excellent support for extensible and scriptable apps, support for (but also limits on) graphics, and lots of focus on keybindings and layered vi-like modes.
Most of them are games, which, even if I could program them, I will probably need help with in other areas: a Pikmin clone, an X-COM-style game about robbing banks, something Theme Hospital-like but for a whole town, some small puzzles…
As for software, what I really want is to help improve the Haiku & Scryer Prolog projects.
have you tried asset packs like the stuff on kenney.nl to build stuff out with premade assets?
I did it once! The end result was nice, but almost always you find yourself needing something more, and adding it yourself ruins the style completely. Also, most of the packs seem to be focused on fantasy RPGs or another cliché, which is OK but I don’t like very much. For now, the best solution I think is minimalism (not pixel art; more like vectorized art, with just two or three colors in high-res figures), which in games is not very popular, but I can get by.
csdoc
Like
pydoc
for C#.

An alternative Twitter in which all current posts are displayed on the screen at the same time, using a kind of tree layout. (Well, I imagine bending it around visually to make a wheel layout, but that’s just display-level fluff.) All posted messages that start with “You know,” share a common root. When some starting letters are common to a lot of posts (in the last 24 hours?), those letters are displayed in a larger font.
So trending topics and hashtags and such would arise naturally from the medium, so long as people put their hashtags at the front. (I thought about arranging it by the longest common substring anywhere within the message, but I wasn’t sure what that would look like.)
If some really long common starting string exists, then the whole message is displayed on the home page in a large font.
Users would navigate by typing. If you type “you know,”, that’s a filter: you’d only see messages that start with that.
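The “common root grows a bigger font” idea can be sketched as a simple prefix count (a toy: real grouping would use a trie and variable-length prefixes, not a fixed cut-off):

```python
from collections import Counter

def trending_prefixes(posts, length=9):
    """Count shared starting strings; bigger counts get a bigger font on screen."""
    return Counter(post[:length].lower() for post in posts)

posts = [
    "You know, cats are great",
    "You know, it is Friday",
    "You know, the trie writes itself",
    "#rustlang is trending",
]
counts = trending_prefixes(posts)
print(counts.most_common(1))  # the "you know," root dominates
```

Typing a filter like “you know,” is then just selecting one bucket; hashtags at the front fall out of the same mechanism for free.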
A mono repo version control system: http://beza1e1.tuxen.de/monorepo_vcs.html
That’s more or less how version control works in the game industry; it’s why most game studios use either Perforce or Plastic.
Something that makes writing/understanding very large amounts of YAML easy.
Bonus points if I can feed it something like… all Kubernetes options and available annotations and their types, and get some form of auto-complete and validation.
Maybe Dhall is what you are looking for.
Or https://cuelang.org
The
acme
editor, but for photo management. I’m still dreaming up what it looks like.

I currently use different software and services to get news on different topics:
This is too much, I often miss important news or read things I could avoid reading at all.
There are also sources I follow that could be used to trigger actions automatically:
I realize that all this stuff could be assembled into a framework that I could use to retrieve information from different sources, normalize it, sort it by importance, and trigger actions. Some things I’d like to do:
I’ve looked at different software that could provide some of the needed features (e.g. Weboob), but my conclusion is that I need to write the specification of a type system for applications to be based on, so that I can adapt existing libraries and front-ends to build a larger project (since I will never be able to build everything from scratch).
Recently, I’ve been thinking that the FaaS “paradigm” seems promising for implementing content aggregators and converters, so I’m looking at Knative, although that project isn’t stable yet. Coupled with GitLab CI, I feel like I could easily deploy working code and finally start making something (I’ve been writing down ideas and talking about this for at least 4 years).
A programming language that: a) allows me to express side effects; b) compiles with a static syscall sandbox based on those side effects; c) is actor-oriented, with first-class support for spawning actors in separate, isolated processes; d) doesn’t have a traditional main, but instead has first-class, static dependency injection; e) has move semantics.
Pony and Rust get super close to this in some capacity. For me this would just be a dream. It would give me what I love from Java (yes, I love things from Java), Rust, Pony, Erlang, etc, in one place.
If I had endless time I would build it.
A cloud storage solution that maintains xattr tags (for use in search, etc). Along with that, something like WinFS - a better way to query a filesystem
A replacement for APRS “SmartBeaconing” that makes more efficient use of bandwidth and provides more accurate projected positions.
SmartBeaconing is an improvement over “just transmit position every 5 minutes”, but in my opinion it’s both too complicated and not smart enough. The idea is to control the difference between the last-transmitted position and the current position using two basic rules: 1) transmit more often when going faster, and less often when moving slowly or not moving at all, and 2) transmit early when making a turn, if the product of the turn angle and the speed exceeds some threshold.
The problems with it are that it takes seven tunable parameters, with different values required to get reasonable operation at different typical speeds (e.g. walking vs. biking vs. driving), and that both of its core algorithms are dirty hacks that only almost do what they’re supposed to. The speed part has a completely unnecessary parameter (the “slow speed”) that results in a discontinuity in the plot of distance-between-beacons vs. speed; the preference for beaconing at high speed biases projected positions; and the corner pegging mishandles slow turns, especially if it decides to beacon before a turn is complete.
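For reference, the two SmartBeaconing rules described above can be sketched like this (parameter names and default values are illustrative, not taken from any particular implementation):

```python
def smartbeacon_interval(speed, slow_speed=5, fast_speed=60,
                         slow_rate=1800, fast_rate=180):
    """Rule 1 (seconds between beacons): shorter interval at higher speed.

    slow_speed is the extra parameter criticized above; between the clamps,
    the interval is inversely proportional to speed.
    """
    if speed <= slow_speed:
        return slow_rate
    if speed >= fast_speed:
        return fast_rate
    return fast_rate * fast_speed / speed

def corner_peg(turn_angle, speed, turn_threshold=2000):
    """Rule 2: beacon early when turn angle x speed exceeds a threshold."""
    return turn_angle * speed > turn_threshold

print(smartbeacon_interval(3), smartbeacon_interval(30), corner_peg(90, 40))
```

Even in this stripped-down form you can see the discontinuity risk at the slow_speed cutoff and the speed bias in the corner rule.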
My idea for a replacement is simple:
There are only five parameters here (A, B, C, Tmin, and Tmax), and not all of them necessarily have to be exposed to the user. A, Tmin, and Tmax have units of time; C has units of distance, and B is dimensionless. The idea is that a beacon should go out if:
The parts that I don’t have time for these days:
A good desktop app for storing, analyzing, and annotating chess games.
While there are plenty of things that clear the bar of functionality, there are none that are really nice to use; their user interfaces tend to be cluttered, cryptic, or both. Most of the necessary components – PGN parsing/writing/annotation, UCI engine interface, move validation and board generation – are even available under open-source licenses, but I just don’t have the time to pick out the right set and wire them together.
I’d love a web version of Mutt. As lean as the original, but with support for the file formats the browser already handles.
Some sort of native widgets library that actually works cross-platform (GTK, Qt, Cocoa, Windows), works well, and is still maintained. Electron doesn’t count.
LCL might be the closest… (Lazarus Component Library, a part of Lazarus IDE for Free Pascal Compiler)
I’m not super familiar with LCL (or Pascal for that matter), but this does look similar to what I’m looking for. I’ve also seen libui thrown out there, but last I heard, it was in maintenance mode and not very complete.
Well, Pascal is very readable, and the FPC dialect includes the modern conveniences of imperative/object-oriented programming with some basic generics support, and a C-like FFI is literally just a declaration of a function plus the name of the implementing library.
So even if you do not want to write the entire application in Pascal, you can just write the immediate GUI-handling code in Pascal (LCL uses a class hierarchy, so wrapping it is not a completely trivial task) and FFI the real logic.
I would say LCL has quite a nice library of UI elements; and there are some third-party components, too.
File synchronization service with on-demand local file loading.
I always wanted a network file synchronization mechanism where all files, directories, symbolic links etc. are locally visible but the data of the files is only loaded on demand. There would be a command (and context menu item for graphical desktops) to load and unload specific files or directory trees for the local system. Once loaded locally, it should transparently and continuously synchronize the files with the server.
With traditional remote disk mounts there’s no local storage space wasted, but the usage experience suffers from the network dependency. File synchronization services are more pleasant to use since all files are local, but they waste local storage space. This would combine the advantages of both.
libprojfs might help you on Linux? (disclaimer: wrote a bunch of it) You can build it without the C# extension points and make a responsive, virtualised filesystem mount.
That looks promising. After a quick look at the project description, it seems to cover the crucial part: providing the generalized APIs/libs needed to build such a synchronization mechanism. I’m gonna need to find time to dive into this. Thx.
Have you seen Seafile?
I took it for a spin approximately 3 years ago. I hadn’t noticed that it has since gained the exact feature I described. Thx for the hint. Would be extra nice if it also ran on OpenBSD.
An open source distributed cloud with a ledger to assign value to storage and processing.
An open source and distributed app store for apps that I could single click install on my home server that’s a member of this collective cloud.
A feed reader that can process thousands of feeds and surface the news I find important or read-worthy.
Software for documenting workflows in pseudocode that works really well.
I’m having trouble imagining this. What would be the difference between using this software and writing a Markdown document with embedded code blocks?
A distributed store-and-forward network protocol to replace email/Dropbox, but with API hooks for negotiating real time P2P sessions (slack/hangouts/etc), and community moderation (a la mastodon server federation) to address spam.
Bit late to the thread, but I just want to let this out.
I want a graphical code editor, one that is not electron, has extensive plugin support (so that it can even be called a “light IDE” with the right plugins installed), is cross platform, and most importantly, is cross CPU architecture.
Sublime Text does all of those except for “cross CPU architecture”, and honestly ever since I got into ARM computers I didn’t even genuinely feel the need for something that can replace it, but as it stands now, ST runs a bit slow with qemu-user-static on my PBP, and nothing much can hold its place. I use vim for the most part, but I need something graphical for better productivity.
A note taking and set organization app specifically for comedians.
A simple usable RADIUS server that takes a tiny config file, if any. Aim for 60-300 seconds from downloading/building the binary to running service.
An IRCd with the same goals.
Both in a compiled language.
A website with data and charts: take data series and plot them on the same chart, making it easy to remix existing datasets and plot new information against them. So you can correlate, e.g., political events with commit rates in open source projects, etc.
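Most of the “remixing” is just aligning series on a shared axis before plotting; a minimal sketch, assuming each dataset is a mapping from date to value (the sample data here is made up):

```python
def align(*series):
    """Join any number of {key: value} series on their common keys,
    yielding rows ready to plot on one chart."""
    common = sorted(set.intersection(*(set(s) for s in series)))
    return [(k,) + tuple(s[k] for s in series) for k in common]

# hypothetical datasets: monthly commit counts and political events
commits = {"2019-01": 40, "2019-02": 55, "2019-03": 30}
events  = {"2019-02": 1, "2019-03": 0}
rows = align(commits, events)   # rows share one x-axis for co-plotting
```

The hard part of such a site is less this alignment step and more curating datasets into compatible units and granularities so remixes are meaningful.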
a serious graph file system, ideally on NFS. E.g., I want to cd into the same folder from many locations. I know perhaps something similar can be built via (sym)links, but I feel they just aren’t the same.
An event pipeline of all events, in my life and in the world. I want to correlate wake up times with external temperature and news and school closings, etc etc
a whiteboard software that is actually similar to the real thing
a graph architectural decision tree that shows selections and alternatives and trade offs and allows to compute quantities across traversals
something like a “Kirchhoff circuit laws” builder that computes maximum theoretical uptime/latency for distributed architectures
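The last item maps onto standard series/parallel availability formulas (assuming independent failures; the topology below is a made-up example):

```python
def serial(*avail):
    """Availability of components in series: all must be up."""
    p = 1.0
    for a in avail:
        p *= a
    return p

def parallel(*avail):
    """Availability of redundant components: at least one must be up."""
    p = 1.0
    for a in avail:
        p *= (1.0 - a)
    return 1.0 - p

# e.g. a load balancer in front of two redundant app servers and one DB
uptime = serial(0.999, parallel(0.99, 0.99), 0.995)
```

A “circuit laws” builder would let you draw the architecture graph and compute these products across traversals automatically, rather than by hand as above.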
Re: graph file system — what do you really want from it? After all, symlinks are cool… Of course, there is this issue of reachability and garbage collection otherwise.
If you just want a graph — maybe have an mkdir wrapper that creates uniquely named directories in a stash, and the name you give is immediately the name of a symlink, maybe that symmetry would help?
(I kind of have something remotely similar with a query-based filesystem, but there graph structure is irrelevant and queries to stuff stored e.g. in SQL are the focus, so I cannot easily interpolate what you want…)
I’m sure it’s a terrible idea; just thinking about permissions makes my head spin. In part I’d be curious about the academics of it; on the other hand, we already have a lot of linking going around, so why not go bold with it and make it a first-class thing? Perhaps edges can have properties too…
Well, symlinks just apply to directories the logic already applied to files. There are things-stored (inodes), they have permissions. There are navigational links (hardlinks for files, symlinks for directories), they have names and locations. The reason first-class indistinguishable-from-first hardlinks for directories are not widely used is reachability: for files it is enough to count references, for directories you need a true GC to handle unreachable loops.
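The file side of that reference counting is easy to demonstrate (Unix-only sketch):

```python
import os
import tempfile

def nlink_demo():
    """Show the kernel's reference counting for files: a second
    hardlink bumps st_nlink, and the data survives until the last
    link is gone."""
    d = tempfile.mkdtemp()
    a = os.path.join(d, "a")
    with open(a, "w") as f:
        f.write("data")
    b = os.path.join(d, "b")
    os.link(a, b)               # second name for the same inode
    count = os.stat(a).st_nlink
    os.unlink(a)                # data still reachable through b
    with open(b) as f:
        survived = f.read()
    return count, survived
```

Directories get no such `os.link`: two directories hardlinking each other would form a cycle whose link counts never reach zero, which is exactly the unreachable-loop problem that would require a real GC.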
So if you just do that store-directories-separately-and-symlink, the permissions applied to the real targets would work just fine.
If you want edge attributes and stuff like that, then I guess you need to start by finding a graph database you really like w.r.t. its data model, then specify an FS representation of it that unaware tools could actually work with; then it is a question of a virtual FS for browsing such a DB. But the graph DB comes first in this case.