I feel like we often hear about what people are actually working on, so I thought this question might lead to an interesting discussion, and it's also meant to encourage some people who usually aren't able to share something to speak up. :)
The virtual console / terminal emulator / shell thing. On the systems level, it'd be great if we had a simple API to build text-based applications (batch or interactive) which isn't based on emulating escape codes for ancient hardware. On the user's side, it'd be cool to have a shell with non-blocking commands, a command palette, modern keyboard shortcuts, etc. Bonus points if you can make this a new stable interface to write apps against, the way win32 or, ahem, escape codes are stable.
A relational database without SQL and nulls, and with a wasm-style minimal relational language with a text/binary format which is a target for ORMs/high-level interactive query languages. It's insane that SQL injection is a thing.
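To make that last point concrete: if queries travel as a small structured language plus a separate parameter list instead of as strings, there is nothing for injected text to escape from. The sketch below is purely hypothetical; the node names and encoding are made up for illustration, not any existing database's format.

```python
# Hypothetical mini relational AST: the query is data, parameters travel out of band,
# so untrusted input can never be re-parsed as query text.
from dataclasses import dataclass

@dataclass
class Scan:
    table: str

@dataclass
class Filter:
    child: object
    column: str
    op: str
    param: int          # index into the parameter list, never an inline literal

@dataclass
class Project:
    child: object
    columns: list

def encode(node):
    """Flatten the plan into a nested-list text form (a real format could be binary)."""
    if isinstance(node, Scan):
        return ["scan", node.table]
    if isinstance(node, Filter):
        return ["filter", encode(node.child), node.column, node.op, ["param", node.param]]
    if isinstance(node, Project):
        return ["project", encode(node.child), node.columns]
    raise TypeError(node)

# Equivalent of: SELECT name FROM users WHERE id = ?   (? bound to untrusted input)
plan = Project(Filter(Scan("users"), "id", "=", 0), ["name"])
params = ["42; DROP TABLE users"]    # stays inert data no matter what it contains
print(encode(plan), params)
```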
If stdout to the shell is a pipe, then invoking say ls --color will inherit the pipe as stdout. This means isatty(stdout) will return false, which means you wonât get color.
With the headless shell, the GUI can create a TTY and send the FD over the Unix domain socket, and ls will have the TTY as its stdout! This works!
You don't know when the output of ls ends and the output of the next command begins.
With the headless shell you can pass a different TTY every single time. Or you can pass a pipe.
You don't know where the prompt begins and ends, and where the output of ls begins, etc.
With the headless shell, you can send commands that render the prompt, return it, and display it yourself in a different area of the GUI.
Also, with the headless shell, you can make a GUI completion and history interface. In other words, do what GNU readline does, but do it in a GUI, etc. This makes a lot of sense since, say, Jupyter notebook and the web browser already have GUIs for history and completion.
(Note there is a bash Jupyter kernel, but it's limited and doesn't appear to do any of these things. It appears to scrape stdin/stdout. If anyone has experience I'd be interested in feedback.)
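A quick way to see the isatty() point above: the same child process answers differently depending on whether its stdout is a pipe or a pseudo-terminal the parent created. This is a minimal POSIX-only illustration, not Oil's actual mechanism.

```python
import os, subprocess, sys

CHILD = [sys.executable, "-c", "import sys; print(sys.stdout.isatty())"]

# stdout is a pipe: the child sees isatty() == False, so tools like ls --color=auto
# would switch color off.
piped = subprocess.run(CHILD, stdout=subprocess.PIPE)
print("pipe:", piped.stdout.decode().strip())                # -> False

# stdout is one end of a pty we created: the child sees isatty() == True, which is
# what handing the command a real TTY file descriptor buys you.
controller, worker = os.openpty()
subprocess.run(CHILD, stdout=worker)
os.close(worker)
print("pty: ", os.read(controller, 1024).decode().strip())   # -> True
os.close(controller)
```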
Terminals offer "capabilities", stuff like querying the width and height, or writing those weird escapes that change the color. I would guess there would either be no capabilities available to a headless shell, or maybe their own limited set of capabilities emulated or ignored in the UI. I haven't looked at the source so this is merely speculation.
Well, a typical usage would be to have a GUI process and a shell process, with the division of labor like this:
GUI process starts the "headless shell" process (osh --headless), which involves setting up a Unix domain socket to communicate over.
GUI process allows the user to enter a shell command. This is just a text box or whatever.
GUI process creates a TTY for the output of this command.
GUI process sends the command, along with the file descriptor for the TTY over the Unix Domain socket to the headless shell, which sets the file descriptor state, parses, and executes the command
GUI process reads from the other end of the TTY and renders terminal escape codes
So the point here is that the shell knows nothing about terminals or escape codes. This is all handled in the GUI process.
You could have a shell multiplexer without a terminal multiplexer, etc.
If none of the commands needed a terminal, then the GUI doesnât even need a terminal. It could just do everything over pipes.
So there is a lot of flexibility in the kinds of UIs you can make; it's not hard-coded into the shell. The headless shell doesn't print the prompt, and it doesn't handle completion or history, etc. Those are all UI issues.
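The steps above boil down to a small amount of POSIX plumbing. Here is a self-contained sketch of just the interesting part: the "GUI" creates a pty, ships one end plus a command line to the "shell" process over a Unix domain socket with SCM_RIGHTS, and renders whatever comes back. This only illustrates the mechanism; Oil's real headless protocol has its own framing and commands.

```python
import array, os, socket, subprocess

def send_fd(sock, fd, payload):
    """Send one file descriptor alongside a message using SCM_RIGHTS."""
    sock.sendmsg([payload],
                 [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd]))])

def recv_fd(sock):
    """Receive a message and the file descriptor attached to it."""
    msg, ancdata, _, _ = sock.recvmsg(1024, socket.CMSG_SPACE(4))
    fds = array.array("i")
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data)
    return msg, fds[0]

gui_sock, shell_sock = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

if os.fork() == 0:
    # "Headless shell" side: knows nothing about terminals or escape codes.
    gui_sock.close()
    cmd, out_fd = recv_fd(shell_sock)
    subprocess.run(cmd.decode().split(), stdout=out_fd)  # command writes to the TTY it was given
    os.close(out_fd)
    os._exit(0)

# "GUI" side: owns terminal handling and decides where each command's output goes.
shell_sock.close()
controller, worker = os.openpty()                  # a fresh TTY just for this command
send_fd(gui_sock, worker, b"ls --color=auto")
os.close(worker)
os.waitpid(-1, 0)
# Fine for short output; a real GUI would read incrementally and render the escapes itself.
print(os.read(controller, 65536).decode(errors="replace"))
```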
I'd like a much smaller version of the web platform, something focused on documents rather than apps. I'm aware of a few projects in that direction but none of them are in quite the design space I'd personally aim for.
Well, "we" tried that with PDF and it still was infected with featureitis, and Acrobat Reader is yet another web browser. Perhaps not surprising considering Adobe's track record, but if you factor in their proprietary extensions (there's JavaScript in there, 3D models, there used to be Flash and probably still is somewhere..) it followed the same general trajectory and timeline as the W3C soup. Luckily much of that failed to get traction (tooling, proprietary and web network effect all spoke against it) and thus it is still thought of more "as a document".
This is another example of "it's not the tech, it's the economy, stupid!" The modern web isn't an adware-infested cesspool because of HTML5, CSS, and JavaScript; it's a cesspool because (mis)using these tools makes people money.
Yeah exactly, for some examples: Twitter stopped working without JS recently (what I assume must be a purposeful decision). Then I noticed Medium doesn't either - it no longer shows you the whole article without JS. And Reddit has absolutely awful JS that obscures the content.
All of this was done within the web platform. It could have been good, but they decided to make it bad on purpose. And at least in the case of Reddit, it used to be good!
Restricting or rewriting the platform doesn't solve that problem - they are pushing people to use their mobile apps and sign in, etc. They will simply use a different platform.
(Also note that these platforms somehow make themselves available to crawlers, so I use https://archive.is/, ditto with the NYTimes and so forth. IMO search engines should not jump through special hoops to see this content; conversely, if they make their content visible to search engines, then it's fair game for readers to see.)
I'll put it like this: I expect corporate interests to continue using the most full-featured platforms available, including the web platform as we know it today. After all, those features were mostly created for corporate interests.
That doesn't mean everybody else has to build stuff the same way the corps do. I think we can and should aspire for something better - where by better in this case I mean less featureful.
The trick here is to make sure people use it, for a large value of people. I was pretty interested in Gemini from the beginning and wrote some stuff on the network (including an HN mirror), and I found that pushing back against markup languages, uploads, and some form of in-band signaling (compression etc.) ends up creating a narrower community than I'd like. I fully acknowledge this might just be a "me thing" though.
EDIT: I also think you've touched upon something a lot of folks are interested in right now, as evidenced by both the conversation here and the interest in Gemini as a whole.
That doesn't mean everybody else has to build stuff the same way the corps do.
I agree, and you can look at https://www.oilshell.org/ as a demonstration of that (both the site and the software). But all of that is perfectly possible with existing platforms and tools. In fact it's greatly aided by many old and proven tools (shell, Python) and some new-ish ones (Ninja).
There is value in rebuilding alternatives to platforms for sure, but it can also be overestimated (e.g. fragmenting ecosystems, diluting efforts, what Jamie Zawinski calls CADT, etc.).
Similar to my "alternative shell challenges", I thought of a "document publishing challenge" based on my comment today on a related story:
The challenge is whether the platform can express a widely praised, commercial multimedia document:
Yeah, there are good reasons this is my answer to "if you could" and not "what are your current projects". :)
I like the idea of that challenge. I don't actually know whether my ideal platform would make that possible or not, but situating it with respect to the challenge is definitely useful for thinking about it.
Indeed - tech isn't the blocker to fixing this problem. The tools get misused because the economic incentives overpower the ones from the intended use. Sure, you can nudge development in a certain direction by providing references, templates, frameworks, documentation, what have you - but whatever replacement comes along also needs to provide enough economic incentives to minimise the appeal of abuse. Worse still, it has to be deployed at a tipping point where the value added exceeds the inertia and network effect of the current Web.
I absolutely believe that the most important part of any effort at improving the situation has to be making the stuff you just said clear to everyone. It's important to make it explicit from the start that the project's view is that corporate interests shouldn't have a say in the direction of development, because the default is that they do.
I think the interests of a corporation should be expressible and considered through some representative, but given the natural advantage an aggregate has in terms of resources, influence, "network effect"... they should also be subject to scrutiny and transparency that match their relative advantage over other participants. Since that rarely happens, the effect instead seems to be that the Pareto Principle sets in and the corporation becomes the authority in "appeal to authority". They can then lean back and cash in with less effort than anyone else. Those points are moot though if the values of the intended tool/project/society aren't even expressed, agreed upon or enforced.
Yes, I agree. I do think that this is largely a result of PDF being a corporate-driven project rather than a grassroots one. As somebody else said in the side discussion about Gemini, that's not the only source of feature creep, but I do think it's the most important factor.
I do like the general idea of Gemini. I'm honestly still trying to put my thoughts together, but I'd like something where it's guaranteed to be meaningful to interact with it offline, and ideally with an experience that looks, you know... more like 2005 than 1995 in terms of visual complexity, if you see what I mean. I don't think we have to go all the way back to unformatted text, it just needs to be a stable target. The web as it exists right now seems like it's on a path to keep growing in technical complexity forever, with no upper bound.
TCP/IP/HTTP is fine (I disagree with Gemini there). It's HTML/CSS/JS that are impossible to implement on a shoestring.
The web's core value proposition is documents with inline hyperlinks. Load all resources atomically, without any privacy-leaking dependent loads.
Software delivery should be out of scope. It's only needed because our computers are too complex to audit, and the programs we install keep exceeding their rights. Let's solve that problem at the source.
It's of course totally fine to disagree, but I genuinely believe it will be impossible to ever avoid fingerprinting with HTTP. I've seen stuff, not all of which I'm at liberty to talk about. So from a privacy standpoint I am on board with a radically simpler protocol for that layer. TCP and IP are fine, of course.
I agree wholeheartedly with your other points.
That is a really cool project! Thank you for sharing it!
Sorry, I neglected to expand on that bit. My understanding is that the bits of HTTP that can be used for fingerprinting require client (browser) support. I was implicitly assuming that we'd prune those bits from the browser while we're reimplementing it from scratch anyway. Does that seem workable? I'm not an expert here.
I've been involved with Gemini since the beginning (I wrote the very first Gemini server) and I was at first amazed at just how often people push to add HTTP features back into Gemini. A little feature here, a little feature there, and pretty soon it's HTTP all over again. Prune all you want, but people will add those features back if it's at all possible. I'm convinced of that.
Pretty much. At least Gemini drew a hard line in the sand and didn't try to prune an existing protocol. But people like their uploads and markup languages.
, but I'd like something where it's guaranteed to be meaningful to interact with it offline
This is where my interest in store-and-forward networks lies. I find that a lot of the stuff I do on the internet is pulling down content (reading threads, comments, articles, documentation), and I push content (respond to things, upload content, etc.) much less frequently. For that situation (which I realize is fairly particular to me) I find that a store-and-forward network would make offline-first interaction a first-class citizen.
I distinguish this from IM (like Matrix, IRC, Discord, etc.) which is specifically about near-instant interaction.
Instead of having this Frankenstein's monster of different OSs and different programming languages and browsers that are OSs and OSs that are browsers, just have one thing.
There is one language. There is one modular OS written in this language. You can hot-fix the code. Bits and pieces are stripped out for lower powered machines. Someone who knows security has designed this thing to be secure.
The same code can run on your local machine, or on someone else's machine. A website is just a document on someone else's machine. It can run scripts on their machine or yours. Except on your machine they can't run unless you let them and they can't do I/O unless you let them.
There is one email protocol. Email addresses can't be spoofed. If someone doesn't like getting an email from you, they can charge you a dollar for it.
There is one IM protocol. It's used by computers including cellphones.
There is one teleconferencing protocol.
There is one document format. Plain text with simple markup for formatting, alignment, links and images. It looks a lot like Markdown, probably.
Every GUI program is a CLI program underneath and can be scripted.
(Some of this was inspired by legends of what LISP can do.)
I need some elaboration here. Why would it be a threat to have everyone use the same OS and the same programming language and the same communications protocols?
Pithy as that sounds, it is not convincing for me.
Having many different systems and languages in order to have security by obscurity by having many different vulnerabilities does not sound like a good idea.
I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.
It is not security through obscurity, it is security through diversity, which is a very different thing. Security through obscurity says that you may have vulnerabilities but you've tried to hide them so an attacker can't exploit them because they don't know about them. This works as well as your secrecy mechanism. It is generally considered bad because information disclosure vulnerabilities are the hardest to fix and they are the root of your security in a system that depends on obscurity.
Security through diversity, in contrast, says that you may have vulnerabilities but they won't affect your entire fleet. You can build reliable systems on top of this. For example, the Verisign-run DNS roots use a mixture of FreeBSD and Linux and a mixture of bind, unbound, and their own in-house DNS server. If you find a Linux vulnerability, you can take out half of the machines, but the other half will still work (just slower). Similarly, a FreeBSD vulnerability can take out half of them. A bind or unbound vulnerability will take out a third of them. A bind vulnerability that depends on something OS-specific will take out about a sixth.
This is really important when it comes to self-propagating malware. Back in the XP days, there were several worms that would compromise every Windows machine on the local network. I recall doing a fresh install of Windows XP and connecting it to the university network to install Windows update: it was compromised before it was able to download the fix for the vulnerability that the worm was exploiting. If we'd only had XP machines on the network, getting out of that would have been very difficult. Because we had a load of Linux machines and Macs, we were able to download the latest roll-up fix for Windows, burn it to a CD, redo the install, and then do an offline update.
Looking at the growing Linux / Docker monoculture today, I wonder how much damage a motivated individual with a Linux remote arbitrary-code execution vulnerability could do.
Sure, but is this an intentional strategy? Did we set out to have Windows and Mac and Linux in order that we could prevent viruses from spreading? It's an accidental observation and not a really compelling one.
In short, there must be more principled ways of securing our computers than hoping multiple green field implementations of the same application have different sets of bugs.
A few examples come to mind though - Heartbleed (which affected anyone using OpenSSL) and Spectre (anyone using the x86 platform). Also, Microsoft Windows for years had plenty of critical exploits because it had well over 90% of the desktop market.
You might also want to look up the impending doom of bananas, because over 90% of bananas sold today are genetic clones (it's basically one plant) and there's a fungus threatening to kill the banana market. A monoculture is a bad idea.
Yes, for humans (and other living things) the idea of immunity through obscurity (to coin a phrase) is evolutionarily advantageous. Our varied responses to COVID are one such immediate example. It does have the drawback that it makes it harder to develop therapies, since we see population specificity in responses.
I don't buy that we need to employ the same idea in an engineered system. It's a convenient back-ported bullet-list advantage of having a chaotic mess of OSes and programming languages, but it certainly wasn't intentional.
I'd rather have an engineered, intentional robustness to the systems we build.
To go in a slightly different direction - building codes. The farther north you go, the steeper roofs tend to get. In Sweden, one needs a steep roof to shed snow buildup, but where I live (South Florida, just north of Cuba) building such a roof would be a waste of resources because we don't have snow - we just need a shallow angle to shed rain water. Conversely, we don't need codes to deal with earthquakes, nor does California need to deal with hurricanes. Yet it would be so much simpler to have a single building code in the US. I'm sure there are plenty of people who would love to force such a thing everywhere if only to make their lives easier (or for rent-seeking purposes).
We have different houses for different environments, and we have different programs for different use cases. This does not mean we need different programming languages.
I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.
In principle, yeah. But even the best security engineers are human and prone to fail.
If every deployment was the same version of the same software, then attackers could find an exploitable bug and exploit it across every single system.
Would you like to drive in a car where every single engine blows up, killing all inside the car? If all cars are the same, they'll all explode. We'd eventually move back to horse and buggy. ;-) Having a variety of cars helps mitigate issues other cars have, while still having problems of its own.
In this heterogeneous system we have more bugs (assuming the same rate of bugs everywhere) and fewer reports (since there are fewer users per system) and a more drawn-out deployment of fixes. I don't think this is better.
Sure, you'd have more bugs. But the bugs would (hopefully) be in different, distinct places. One car might blow up, another might just blow a tire.
From an attacker's perspective, if everyone drives the same car and the attacker knows that the flaws from one car are reproducible with a 100% success rate, then the attacker doesn't need to spend time/resources on other cars. The attacker can just reuse and continue to rinse, reuse, recycle. All are vulnerable to the same bug. All can be exploited in the same manner reliably, time after time.
To go by the car analogy, the bugs that would be uncovered by drivers rather than during the testing process would be rare ones, like, if I hit the gas pedal and brake at the same time it exposes a bug in the ECU that leads to total loss of power at any speed.
I'd rather drive a car a million other drivers have been driving than drive a car that's driven by 100 people. Because over a million drivers it's much more likely someone hits the gas and brake at the same time and uncovers the bug, which can then be fixed in one go.
We would need to put some safety measures in place, and there would have to be processes defined for how you go about suggesting/approving/adding/changing designs (that anyone can be a part of), but otherwise, it would be a boon for the human race. In two generations, we would all be experts in our computers and systems would interoperate with everything!
There would be no need to learn new tools every X months. The UI would be familiar to everyone, and any improvements would be forced to go through human testing/trials before being accepted, since it would be used by everyone! There would be continual advancements in every area of life. Time would be spent on improving the existing experience/tool, instead of recreating or fixing things.
I would also like to rewrite most stuff from the ground up. But monocultures aren't good. Orthogonality in basic building blocks is very important. And picking the right abstractions to avoid footguns. Some ideas, not necessarily the best ones:
proven correct microkernel written in rust (or similar borrow-checked language), something like L4
capability based OS
no TCP/HTTP monoculture in networks (SCTP? pubsub networks?)
are our current processor architectures anywhere near sane? could safe concurrency be encouraged at a hardware level?
seL4 is proven correct by treating a lot of things as axioms and by presenting a programmer model that punts all of the bits that are difficult to get correct to application developers, making it almost impossible to write correct code on top of. It's a fantastic demonstration of the state of modern proof tools; it's a terrible example of a microkernel.
I know there was the work in ~2014, but being able to run things that were multi-threaded, have a much simpler language, and support the guile ecosystem where emacs libraries would just be guile libraries makes my heart happy.
Doesn't make a lot of sense to actually do, and I understand the limitations.. but whoooo boy have I been lusting for this for years and years.
Why Guile instead of Common Lisp? Elisp is far closer to Lisp than to Scheme; Lisp has multiple compatible implementations while Guile (and every other useful Scheme) is practically incompatible with any other useful Scheme, because the RnRS series, even R6RS, tend to underspecify.
GNU's focus on Scheme rather than Common Lisp for the last 20 years has badly held it back. Scheme is great for teaching and implementation: it is simple and clean and pure. But it is too small, and consequently different implementations have to come up with their own incompatible ways of doing things.
While the Common Lisp standard is not as large as I would like in 2021, it was considered huge when it came out. Code from one implementation is compatible with that of others. There are well-defined places to isolate implementation-specific code, and compatibility layers for, e.g., sockets and threading, exist.
Because it's my favorite flavor of Scheme, and I enjoy programming in Scheme much more than Common Lisp. I do not like Lisp-2s, but I do like defmacro (gasp) so who knows.
Emacs being rewritten in Common Lisp would also be awesome.
Multiple times I've considered, and then abandoned, taking microemacs and embedding a Scheme in it. The reality is that it's basically a complete rewrite, which just doesn't seem worth it... And you lose all compatibility with the code I use today.
Guile-emacs, though, having multiple language support, seems to have a fighting chance at a successful future, if only it was "staffed" sustainably.
I mean, by the reductivist's view, yes. Edwin serves the single purpose of editing Scheme code with an integrated REPL. You could, of course, use it beyond that, but that's not really the goal of it, so practically no one does (I am sure there are some Edwin purists out there).
My interest in this as a project is more from a lean, start-from-scratch standpoint. I wonder what concepts I'd bring over from emacs. I wonder if I'd get used to using something like tig, instead of magit. I wonder if the lack of syntax highlighting would actually be a problem... The reason I've never made a dent in this is because I don't view reflection on how I use things like this as deeply important. They're tools...
I'm going to go more heretical: emacs with the core in Rust and TypeScript as the extension language. More tooling support, more libraries, more effort into optimizing the VM.
Lisps are alright but honestly I just don't enjoy them. There's that maxim about how you have to be twice as clever to debug code as you are to write it, so any code that's as clever as you can make it is undebuggable.
I don't think YAML is the best choice for this. Most of the time users will want to see tables rather than a nested format like YAML. I guess it is a bit nicer to debug than JSON, but ideally the user would never see it. If it was going to hit your terminal it would be rendered for human viewing.
YAML is also super complex and has a lot of extensions that are sparsely supported. JSON is a much better format for interop.
On the other hand the possibility of passing graphs between programs is both intriguing and terrifying.
I was recently trying to figure out how, from inside a CLI tool I'm building, to determine whether the program was outputting to a screen for a user to view, or to a pipe for another program to consume... Turns out it's not as straightforward as I thought. I do believe the modern Rust version of cat, bat, can do this. Because my thought is... why not both?
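For what it's worth, the detection half of that turns out to be a one-liner, and the "why not both" behavior is usually an auto mode plus an explicit override, roughly like this (a sketch; the mode values just mirror the familiar --color=auto/always/never convention rather than any particular tool's flags):

```python
import sys

def want_pretty_output(mode: str = "auto") -> bool:
    """Human-facing or machine-facing output?

    "auto" follows isatty(); "always"/"never" let the user override the guess,
    which is the "why not both" part.
    """
    if mode == "always":
        return True
    if mode == "never":
        return False
    return sys.stdout.isatty()    # True for a person at a terminal, False for a pipe

if want_pretty_output():
    print("colors, aligned columns, progress bars...")
else:
    print("plain\ttab-separated\trows")   # stable format for the next program in the pipeline
```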
It depends on your definition of viable, of course, but Sailfish, even with its annoyances, has been my only or primary mobile OS for about eight years now. It is pretty solid these days. But I do use the version with an android emulator for banking apps and such.
A store-and-forward messaging system. I often find myself in areas of dodgy connectivity and would love to have a way to "catch up" when I'm connected, head back out, and then queue up responses for when I have some connectivity back. I'd also like some QoS for messages that are important to me (like from my partner).
This might not be quite what you're looking for, but Scuttlebutt was built to do exactly that.
From memory, the creator did quite a lot of sailing, and wanted a way to receive information via mesh, potentially from other boats, who could propagate data back to shore.
I remembered starring on github a SAT solver written in Rust 3 or 4 years ago, it looked really promising. I was about to tell you to look into it, until I realized you were the author of it. :D
I know that Z3 does much more than Minisat, but I'm wondering: is it worth it? (= All the extra Z3 features)
There's a bunch of other SAT solvers in Rust (and this one is mostly just a port of minisat done by someone else, that I forked to add a few things; I can't claim authorship).
I know that Z3 does much more than Minisat, but I'm wondering: is it worth it? (= All the extra Z3 features)
Yes! SMT solvers are more convenient to use than SAT solvers in many situations, and I think it'll increasingly be the case in the future (e.g. bitvectors are often as powerful as their SAT encoding equivalent). In some cases, you have a clear encoding of your problem into SAT, in which case it might be better. This book has a lot of examples using both a SAT solver and Z3.
Beyond that, SMT solvers are one order of magnitude more advanced than SAT solvers, I'd say. They're full of features to handle theories (hence the name), and give users a nice high-level language to express their problem, rather than a list of clauses. SAT has its place (see for example Marijn Heule's work on using clusters for parallel SAT solving to solve really hard math problems), but for most users I think SMT is the easiest to use, because of all these additional features. Amazon AWS has formed a large team of SMT experts and is using cvc5 for crypto or authentication stuff, not sure which exactly.
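To make the "nice high-level language" point concrete, here is a tiny example with Z3's Python bindings (assuming the z3-solver package is installed); the same constraints fed to a plain SAT solver would have to be hand-encoded as CNF clauses over individual bits:

```python
from z3 import BitVec, Solver, sat

x = BitVec("x", 32)
y = BitVec("y", 32)

s = Solver()
s.add(x & (x - 1) == 0, x != 0)   # x is a power of two
s.add(x + y == 2**16, y > x)      # plus some arithmetic over 32-bit words

if s.check() == sat:
    m = s.model()
    print(m[x], m[y])             # one satisfying assignment
```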
No, Rust is better suited to this kind of very high-performance tool :-). I do have an SMT solver in OCaml in the works, but you can't really compete with the C/C++ crowd in terms of performance.
The web. It's a Gibsonian chaotic mess without being cool. 1995 cyberpunk aesthetics and fluidity. I should be swimming in data, not staring at a screen with an awkward posture, almost drooling.
If I tasked you with finding where a specific feature is implemented (to verify its correctness), given only a web browser and access to their source repository on GitHub, you'll have a miserable time.
I once had such a plan, in the form of Octavo, written in Rust. However, I lost interest a little (also I need to eat, and I am not a student with a bunch of free time anymore).
If by "rewrite" you mean jump back in time and be in charge of making it (that is, if I'd have to actually do the work and live with the consequences of that world), I would replace C with a C that didn't have null, had a string type, and probably had objects. It's the obvious answer, it's probably going to be a good thing for everyone else on the list, and while I don't think I'd like living pre-internet, I'd be okay to take the hit for the team.
If by "rewrite" you mean magically replace something that exists now with something changed, I think I'd probably rewrite the MIT license to be this ungodly poisonous copyleft thing and watch the world burn.
I have big ambitions here, so I probably won't ever be able to work on them:
Desktop environment with a slate of applications that fit. Do something new; or at least if you're going to copy the past, copy what's interesting, not do "Windows 98, forever" like so many other desktops tend to. Relatedly, it's interesting that there was no parallel evolution of the GUI. There is pre-Xerox (the Mother of all Demos) and post-Xerox. Even stuff you wouldn't think would be influenced, like a Lisp Machine, draws from Xerox, even down to the smallest of widgets.
A server operating system with, again, a set of applications, because Unix is an unavoidable black hole currently. As above, it's OK to copy the past if it's interesting and rarely trodden. Perhaps it'd be worth addressing commercial concerns (i.e. containerization) by approaching them from the start instead of bolting them on later.
This is an idea I've been toying with for some time. Basically an HTML rendering engine meant for GUIs, like Sciter. But instead of JavaScript, you'd control everything using the host language (probably Rust). If you want JS you'd have to somehow bind that to Rust.
I think this could really work out. However: I've dealt with XML and HTML parsers for years in the past, and I'm not sure I'm ready yet to dive into the mess that is HTML again.
All of it. I want something that is like Emacs in inspectability and plasticity, but with the care and attention of classic Mac OS. I don't care if nobody else wants it, or wants to use it. I don't care if it annoys Unixheads. I don't care if it's hard to communicate with the rest of the world.
I'd be really excited about rewriting Terraform, something that's not Nix but also not Lisp and has some better state/secret/API endpoint functionality. I feel Terraform is lacking many features, some of which can be filled in by Terragrunt, but things like API logs, applying/reverting/... would be incredibly useful to add.
Email. I am already considering how to do it using ActivityPub or something federated for longform asynchronous messaging, maybe with some newsletter-like follow system that could be an alternative to RSS if it caught on.
Also, specifically no interop with email. Ideally I don't want any of email's spam and legacy to flood into the new system. Not sure if and when I will get this fleshed out, but it's always on my mind.
Programming teachings to avoid breaking brains and let people do most of their work without state.
Communications between computers. I want Artoo to plug in to my port and be able to navigate around my help docs! Also want to bring projects between computers Iron Man style.
The expectations of embedded device makers that nobody will ever want to jack in to the device and drive it with external software. That applies to at least ovens, furnaces, washing machines, and clock radios.
Smalltalk. Imagine if the host system files were first class ways to host code in a smalltalk, with similar first class support for dynamic libraries (or other executables so at least you could link them in).
Programming teaching to make the inner state of more programs, including intermediate stages in transformations/compilations, visible and possibly editable. See how Excel does this to great effect.
TypeScript - I would love a TypeScript that performs well at scale. I don't think I'd ever finish if I tried it though; the scope of the project is quite vast now (language features, editor integrations, refactoring, etc).
I have not thought about this deeply but I have a feeling that some of the things that make typescript great (e.g. the ability to incrementally port a javascript codebase) are the things that make it slow. For example, being able to omit the return type of a function likely forces a lot of things to happen in series that could otherwise happen in parallel.
Those tradeoffs make sense in a world that's mostly JavaScript, but do they still make sense in a world that's mostly TypeScript? I think not. Since there has been a culture shift towards static typing, the constraints on the tool have also shifted.
Why not C#? As far as I can see, that's a fast language that TypeScript was very much influenced by.
It's obviously not compatible with JS, but anything that is won't be fast. JS can be fast in some cases with heroic JITs, but it will always have slow cases due to its semantics, which can't be changed without breaking the language. (This was the motivation behind Dart too.)
Ah, I didn't make myself very clear. I was talking about the performance of the compiler rather than the output of the compiler. Though it would be nice if that was faster too :)
It's not 30ms, it's more like 6 seconds when you run something like hdfs dfs -ls /
To be clear, the above is the client warming up before it begins to contact the server. The server will respond to anything in an instant as there is no warm-up to respond to requests.
I'm not sure if I'd rebuild it in Rust or GoLang, but most definitely not Python. There are a lot of CPU-intensive operations hiding in Hadoop.
There's actually an HDFS client written in Go. It's awesome, as there is no JVM warmup time when you execute it, unlike the HDFS client that ships with Hadoop. Waiting 6 seconds every time you want to list the contents of another folder is frustrating. Someone now just needs to port everything else.
I for one would not admit emojis into unicode. Maybe let whatever vendors want standardize something in the private use areas. But reading about new versions of unicode and the number of emojis added has me wondering about the state of progress in this world.
MSN Messenger had emoji-like things 20+ years ago, but they were encoded as [[:picture name:]]. This works, because they are pictures, not characters. Making them characters causes all sorts of problems (what is the collation order of lower-case lambda, American flag and poop in your locale? In any sane system, the correct answer is "domain error").
Computers have been able to display small images for at least a decade before Unicode even existed; trying to pretend that they're characters is a horrible hack. It also reinvents the problems that Chinese and other ideographic languages have. A newspaper in a phonographic language can introduce a neologism by rearranging existing letters; one in an ideographic language has to either make a multi-glyph word or wait for their printing press to be updated with the new symbols. If I want a new pictogram in a system that communicates images, I can send you a picture. If I want to encode it as Unicode then I need to wait for a new Unicode standard to add it, then I need to wait for your and my software to support it.
On the contrary, shipping new emoji is a great way to trick people into upgrading something when they might not otherwise be motivated. If you have some vulnerability fixes that you need to roll out quickly, bundle them alongside some new emoji and suddenly the update will become much more attractive to your users. Works every time. All hail the all-powerful emoji.
The main rewrite project that I will never get to is reinventing NeWS on top of a modern set of abstractions. A full modern colour space, a text layout engine (OpenType fonts, proper typesetting with something like SILE built in), along with the application-specific bits provided as WebAssembly (no hand-written PostScript or fun attempts to generate PostScript from another language, just compile your view objects to run on the display server), and a UI toolkit that makes views store only ephemeral state so that you can always disconnect from one display server and reconnect to another, per application. I'd want to write a browser-based implementation and a bare-metal one, so that you could write apps for this system that could run on any existing platform but also natively.
The rewrite project that I might actually do is writing a distributed filesystem for Confidential Computing where data is stored in encrypted cloud blob storage and metadata in CCF, so the entire thing has strong confidentiality and integrity guarantees (the cloud provider is trusted only for availability, and even that can be mitigated by mirroring the blob storage across multiple providers).
I would like a Xorg that is easier to build and doesn't do so many things I don't need. I understand why some of those things are there historically, but it's one of the last remaining resource hogs on my system and I am sure there is some way to slim it down.
Screensharing and collaboration tools that don't suck. I spend a lot of time collaborating, mentoring and helping folks when I have something to contribute or even need help of my own. I deal with a wide range of skillsets, from very computer illiterate to system architects at major corporations, but it seems we always run into some unforeseen issue with setup or other technical difficulties.
I use mainly Microsoft Teams (which is actually pretty decent) and Teamviewer, and even Discord on occasion, but they all have strengths and weaknesses.
I have briefly played with Jitsi and it seems OK, but there don't really seem to be any open standards for this stuff out there. RDP, as painful as it is for me to admit, is the one consistent protocol that works well. VNC has been around for a long time but is laggy, and the amount of derivatives makes my head spin!
Rewriting other people's stuff is cool because then you can say you did it yourself. I'm ALL about that! Even if it is worse, at least it is YOURS. I've done tons of that. But now I'm at the point where the majority of what I use is my own stuff... and if I were to rewrite it now, I'd probably just break functionality for a long time then eventually repeat the same ugliness again that made me want to rewrite it in the first place.
A few months ago, I was tempted to rewrite my GUI library. But that library took a long time to write and a rewrite would too... and probably come out bad as well. And when I got specific about what I really hated most, a migration path actually came to mind that I could realistically do in a month. I wrote about it in my blog too: http://dpldocs.info/this-week-in-d/Blog.Posted_2021_05_03.html#minigui and now I'm a little happier with it without a rewrite.
That said, there are still a few things on my todo list, like a custom calendar, and a new browser UI (doing it from scratch is too much, but at least my own UI over a webview would help a lot with the pain I have using them), which aren't exactly rewrites but are still pieces of my custom environment that aren't complete at this time.
I thought about that a while ago, but Godot's text handling was incredibly primitive. I saw some recent blog posts that look as if it's improved massively. There are a few other things that are probably possible to implement, but not there yet in Godot. For example, if I understand correctly, the standard structured data controls don't do lazy loading in Godot. I didn't find a table view in the API, but the tree view looks as if it requires you to populate it up-front. The last GUI application I wrote needed to be able to create a table view with tens of millions of rows. This was trivial with Cocoa / GNUstep, because the GUI framework just needed rows to be provided as they became visible.
I have a few quick tests for a GUI framework:
Can I apply arbitrary semantic annotations to a range with their default rich text representation and then have those mapped to concrete styling by something later in my pipeline?
Can I create two text views and have content flow from one to the other?
Can I create a scroll view and embed everything else that I have inside it?
Can I create a table view with a million rows and scroll it smoothly?
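On the tens-of-millions-of-rows point: the pattern that makes it trivial is the framework asking a data source for rows on demand, rather than being handed the whole list up front. A rough shape of that contract, with names invented purely for illustration (not any real toolkit's API):

```python
from dataclasses import dataclass

class TableDataSource:
    """The view owns scrolling; it only ever asks for what is about to be on screen."""
    def row_count(self) -> int: ...
    def row_at(self, index: int) -> list: ...

@dataclass
class HugeDataSource(TableDataSource):
    n: int = 50_000_000                       # tens of millions of rows, never materialized

    def row_count(self) -> int:
        return self.n

    def row_at(self, index: int) -> list:
        # Computed (or fetched from disk/db) lazily, only when the row scrolls into view.
        return [f"row {index}", f"value {index * 7 % 1000}"]

def render_viewport(source: TableDataSource, first: int, visible: int = 40):
    """What a scroll event boils down to: ask only for the rows in the viewport."""
    for i in range(first, min(first + visible, source.row_count())):
        print(source.row_at(i))

render_viewport(HugeDataSource(), first=31_337_000)
```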
Rewriting the way GPU drivers work with OpenGL and Vulkan, etc. Instead of drivers being required in the OS, they would be ROM drivers on the GPUs, and they would work with the software without having to install a binary blob driver or anything. That way ALL OSes are GPU accelerated. You can flash the ROMs to update the GPU drivers.
I mention Plan 9 in a couple of places, but I think it mostly works over LANs (fast, reliable networks); something that works on more diverse hardware would be more useful (similar to the web, git, BitTorrent, etc.).
I'd like to completely reimagine, redesign and reimplement a modern operating system that, having learned the lessons of UNIX, Linux etc., would be centered on modern concepts in computer science (no more "everything is a file"), designed for interoperability between desktop, mobile and IoT. It's more about creating a new, more modern and mature standard that can overcome POSIX, Windows, Mac/iOS (BSD) and the bizarre Android SDK, and then developing a whole new OS that learns from all these lessons.
I want to make an iTunes 4.x-era inspired music player for local music libraries. No other music player has ever come close to matching my mental model for how music should be organized and having the feature set I want, so it seems if I want anything like it in 2021 that is reliable and not trying to upsell me on streaming music subscriptions, I'm just gonna have to write my own. I started work on the Mac version a few weeks ago.
I'll go a different way with this and take more of a broad software design "rewrite":
JSON. Not really "from scratch", just the little things like trailing commas that are annoying enough in real life (see the snippet below) to drive people to horrifying monstrosities like YAML that almost infinitely raise the bar of entry for implementations.
Or, rewinding a bit further: the entire syntactic side of the web stack, informed by... well, Lisp. S-expressions (with some universal DOCTYPE-like header syntax and/or namespacing mechanism) everywhere, entirely avoiding this ridiculous zoo of [SG|X]ML + a vaguely C-like veneer on a scripting language lazily cobbled together in a week + a subset of that being the de-facto structured data syntax.
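The trailing-comma complaint in concrete form; Python's json module just follows the spec here, and this is what every strict parser does:

```python
import json

json.loads('{"a": 1, "b": 2}')        # fine
try:
    json.loads('{"a": 1, "b": 2,}')   # one stray comma after an edit...
except json.JSONDecodeError as err:
    print(err)                        # ...and the whole document is rejected
```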
I sometimes wish I had the time to rewrite Rust with dependent typing and with monads, and have it compile to readable C99.
Some goals: low overhead, safety.
But in addition to this, the same operators for async and results, since they both would be monads. And also more expressive types. (Vec<len=n > 1>.head() returns the value, since it is proven that it has at least one element.) Also, since it compiles to C, there is no need to "rewrite in newlang"; you can use almost any library as is.
My thought probably requires some time travel, so it's obviously not realistic.
I would love to have a base cloud API that is supported by all major cloud providers. It would need to be something more than a library with specific language bindings; it would have to be defined as a (g?)RPC or REST API first, and have the tooling built around this. It would provide ways to deal with common compute, storage and network tasks in a way where you declare the dependencies between the base units, so we don't need to orchestrate outside the API for deployments that fit this base mold.
The API would need to support extension points that would allow for the unique services and capabilities of each cloud provider. My hope for this is that the base API design would influence these extensions, and we would likely have a more consistent API even for the extensions.
The value I would hope this base cloud API would provide is removing a ton of complexity from all the systems we now build to deal with any cloud system. I imagine if Terraform/Ansible/Chef/Puppet/Salt didn't have to implement these independently, they could focus on building out their own niche configuration philosophy more. Additionally, I think this base would simplify a self-hosted cloud (on-prem, home or security lab). In the same way it may reduce vendor tie-in, but hopefully in a way that doesn't hamper innovation at the cloud providers.
The whole bloody web stack. It's an insane pile of dodgy shit.
No, I don't mean rewrite Firefox or something like that. A complete tear-down and redesign.
Developers always want to rewrite code and think that almost everything that has been written before is bad. This can be correct, but is often wrong.
This subjective opinion needs more substance to not be brushed off as yet another developer calling something "bad", IMO.
Yet it's the third most upvoted opinion, so I suspect there's something to it!
I'll take a stab at my view of why JohnCarter is right. "The web" today could mean anything along a spectrum of
There are two dramatically different models at play: on one end, the website as document makes perfect sense. But when you're building a WebGL/WASM app that lets you plug your piano in, what does a DOM get you anyways?
And we live in the annoying middle: HTML/JS/CSS has become the lowest common denominator, but devs are free to pick and choose from higher in the spectrum without any real negative feedback - which means a browser is no longer something any one person can really understand, let alone build. As a result, only the world's largest advertising company can really keep up with delivering one.
So if I could wave the magic wand and build the world from scratch I'd either make the browser subsume the entire OS - give me a rumpkernel that boots Firefox, no POSIX, no nothin' - OR I'd draw a line and make browsers "things that can render hypermedia" or "things that can do arbitrary network connections/rendering and drawing", and if you tried to make one do the other you'd be laughed out of the room.
Slight misunderstanding here... @owen is on to it.
I don't think I can rewrite Firefox better. Don't want to even try.
Look at it this way. If I printed out the whole stack of standards the current Firefox or Chrome is based on and told you to read them...
It would be a stack of paper insanely high; a fair bit of it would not be complied with, a vast stack would be in the per-browser docs, and another large bit would not even be documented.
A large chunk of the reason for the massive ecosystem of JavaScript frameworks is that they have been tested into the shape of something that sort of works on the main browsers.
If you're looking for consistency of design or architectural principles... you might find traces in the older stuff... and then a flood of "ah, shit. Let's just maximally kludge what we have to make cool shit" thereafter.
I believe the point of consistency of design and of architectural principles is to select the trade-off points you wish to hit and then permit the implementer and clients to rely on massively simplifying assumptions.
I.e. if you do that well, you explicitly take some use cases off the table, but make everything literally several orders of magnitude simpler and easier to understand, implement and use.
I agree with the first bit, but not the second: almost everything in the software world is bad. The bit people are often wrong about is the idea that rewriting it will give something less bad.
I would like a CalDAV protocol which isn't tied to WebDAV.
Same, now that you mention it.
Please! This would be fantastic!
Not exactly what you're asking for, but have you seen the JMAP protocol?
https://datatracker.ietf.org/doc/html/draft-ietf-jmap-calendars
Without XML, and one that everybody complies with too :)
I daydream about writing a web browser. Make network requests, parse, render. A modern, competitive browser would be more than a lifetime of work, but getting something that can halfway render an ugly version of simpler pages (say, this site) seems like a fun couple-month project.
You might be a little late to the party, but SerenityOS is building just that. I forget what it's called, but they are building their own web browser from scratch. Maybe you can still find some interesting issues to work on (I don't know if it can render lobste.rs currently). At least it may serve as inspiration and proof that it is possible :)
Same! I don't believe it has to be "modern/competitive"; https://suckless.org/project_ideas/ recommends
Someone showed up in a chat I'm in and started doing just that. For a school project, they said.
I was wailing and warning and explaining, but now they can parse HTML and CSS and have started doing layout. Youth must have been fun. ;)
Fwiw, we pointed them to http://browser.engineering/ and https://htmlparser.info/ and https://limpet.net/mbrubeck/2014/08/08/toy-layout-engine-1.html
Sounds like netsurf
Or dillo, or links. I really loved using the latter under NetBSD on my old underpowered iBook G4. That machine was so slow that Firefox was a total resource hog (and it occasionally had weird issues where colors would be inverted due to some endian issue). Dillo development seems to have stagnated unfortunately - I thought it was a really exciting project when I first learned about it (circa 2003).
I've actually kinda done that, back in 2013ish. It was able to more-or-less render my old website http://arsdnet.net/htmlwidget4.png and even tried to do dlang.org http://arsdnet.net/htmlwidget3.png
I wrote a dom.d module that parses all kinds of trash HTML and can apply CSS and do form population etc., and a script.d that can run code to play with that DOM, and simpledisplay.d that creates windows... so I then thought maybe it wouldn't be too hard to actually render some of that, so I slapped together https://github.com/adamdruppe/arsd/blob/master/htmlwidget.d too. But like I didn't have a very good text render function at the time, which is why it is a particular font with some weird word wrapping (it wanted each element to be a single rectangle). The script never actually worked here.
The table algorithm is complicated too, so I just did a simplified version. Then of course the CSS only did basics; like, the float left there was the bare minimum to make that one site work. But still... it kinda worked. You could even type in forms.
I'm tempted to revisit it some day. I've expanded my library support a lot since then, could prolly do a better job. Realistically though, links and dillo would still be better lol, but still it is kinda cool that I got as far as I did. The only libraries used if you go down are basic win32/xlib and of course BSD sockets. The rest is my stuff (all in that same repo btw).
Did you see https://serenityos.org/ already? They are re-writing an OS and a Browser from scratch. It looks like a lot of fun.
Typesetting systems. It's interesting to think about the differences between TeX and html, one predating scrolling and one designed for screens rather than paper. What would a simple typesetting system look like that was built with a minimalist ethos, for scrolling, without perfect hyphenation and pixel-perfect page boundaries?
I've been playing around with SILE recently. While it still has some rough edges, it has been refreshing coming from LaTeX. I don't know if you've already looked into it.
Going the other way - an easier to deploy/run TeX system - have you seen Tectonic?
I'm using it on and off with some existing documents and was pleasantly surprised.
I have seen it. I will admit that I haven't dug too deep into it. I respect the effort, however the clean-slate implementation of SILE (as opposed to Tectonic's port of XeTeX to Rust) offers some advantages.
Documents can be either in TeX-like syntax or XML. (Meaning they can be generated by a program and be valid) Also the native support of SVG (instead of the convoluted Tikz) is a killer feature for me. But in general, SILE is more lightweight.
the virtual console / terminal emulator / shell thing. On the systems level, it'd be great if we had a simple API to build text-based applications (batch or interactive) which isn't based on emulating escape codes for ancient hardware. On the user's side, it'd be cool to have a shell with non-blocking commands, command palette, modern keyboard shortcuts, etc. Bonus points if you can make this a new stable interface to write apps against, the way win32 or, ahem, escape codes are stable
relational database without SQL and nulls, and with a wasm-style minimal relational language with a text/binary format which is a target for ORMs/high-level interactive query languages. It's insane that SQL injection is a thing.
Oil has the start of this, called "headless mode"!
http://www.oilshell.org/blog/2021/06/hotos-shell-panel.html#oils-headless-mode-should-be-useful-for-ui-research
It's a shell divorced from the terminal. A GUI / TUI can communicate with the shell over a Unix domain socket. There is a basic demo that works!
One slogan is that a shell UI should have a terminal (for external commands); it shouldn't be a terminal.
As mentioned recently I need people to bang on the other side of this to make it happen, since I am more focused on the Oil language, etc.
I'm curious how this is different from just running the shell and talking to its stdout/stdin?
A few different issues, not exhaustive:
- If stdout to the shell is a pipe, then invoking say `ls --color` will inherit the pipe as stdout. This means `isatty(stdout)` will return false, which means you won't get color.
- With the headless shell, the UI can hand the command a real terminal file descriptor instead, so `ls` will have the TTY as its stdout! This works!
- The UI can also tell where the output of `ls` ends and the output of the next command begins, where the output of `ls` begins, etc.
Also, with the headless shell, you can make a GUI completion and history interface. In other words, do what GNU readline does, but do it in a GUI, etc. This makes a lot of sense since say Jupyter notebook and the web browser already have GUIs for history and completion.
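To make the pipe-vs-TTY point concrete, here's a minimal sketch (plain Python, not Oil code, and assuming GNU ls is available): the UI process keeps the reading end for itself but hands the child command a real pseudo-terminal for stdout, so isatty() is true inside the child and color still works.

```python
# Minimal sketch (not Oil code): a UI process gives a child command a real TTY
# for stdout while reading the output itself, so `ls --color` still sees a TTY.
import os
import pty
import subprocess

# Allocate a pseudo-terminal pair: the UI keeps `master`, the child gets `slave`.
master, slave = pty.openpty()

# The child writes to the TTY (slave side), so isatty(stdout) is true inside it
# and tools like `ls --color=auto` emit color escapes.
proc = subprocess.Popen(["ls", "--color=auto"], stdout=slave, stderr=slave)
os.close(slave)  # the UI only needs the master end

# The UI reads the raw output (escapes included) and can render it however it likes.
output = b""
while True:
    try:
        chunk = os.read(master, 4096)
    except OSError:  # raised on some platforms once the slave end is closed
        break
    if not chunk:
        break
    output += chunk
proc.wait()
print(output.decode(errors="replace"))
```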
(Note there is a bash Jupyter kernel, but it's limited and doesn't appear to do any of these things. It appears to scrape stdin/stdout. If anyone has experience I'd be interested in feedback)
Terminals offer "capabilities", stuff like querying the width and height, or writing those weird escapes that change the color. I would guess there would either be no capabilities available to a headless shell, or maybe their own limited set of capabilities emulated or ignored in the UI. I haven't looked at the source so this is merely speculation.
Well a typical usage would be to have a GUI process and a shell process, with the division of labor like this: the GUI process starts the shell process (e.g. `osh --headless`), which involves setting up a Unix domain socket to communicate over.
So the point here is that the shell knows nothing about terminals or escape codes. This is all handled in the GUI process.
You could have a shell multiplexer without a terminal multiplexer, etc.
If none of the commands needed a terminal, then the GUI doesn't even need a terminal. It could just do everything over pipes.
So there is a lot of flexibility in the kinds of UIs you can make - it's not hard-coded into the shell. The headless shell doesn't print the prompt, and it doesn't handle completion or history, etc. Those are all UI issues.
To be fair, SQL injection is only a thing if you use your DB driver insanely wrong.
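For what it's worth, "using the driver correctly" mostly just means placeholders instead of string building. A tiny sketch with Python's built-in sqlite3 module (the table and the hostile input are made up for illustration):

```python
# Sketch: parameterized queries vs. string concatenation, using Python's stdlib sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # hostile input

# Wrong: interpolating user input into SQL is what makes injection possible.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(unsafe).fetchall())   # returns every row

# Right: the driver sends the value separately from the query text.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```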
I agree on the need for a relational query language that's better than SQL
I'd like a much smaller version of the web platform, something focused on documents rather than apps. I'm aware of a few projects in that direction but none of them are in quite the design space I'd personally aim for.
Well, "we" tried that with PDF and it still was infected with featureitis, and Acrobat Reader is yet another web browser. Perhaps not surprising considering Adobe's track record, but if you factor in their proprietary extensions (there's javascript in there, 3D models, there used to be Flash and probably still is somewhere..) it followed the same general trajectory and timeline as the W3C soup. Luckily much of that failed to get traction (tooling, proprietary and web network effect all spoke against it) and thus it is still more thought of "as a document".
This is another example of "it's not the tech, it's the economy, stupid!" The modern web isn't an adware-infested cesspool because of HTML5, CSS, and JavaScript; it's a cesspool because (mis)using these tools makes people money.
Yeah exactly, for some examples: Twitter stopped working without JS recently (what I assume must be a purposeful decision). Then I noticed Medium doesn't either - it no longer shows you the whole article without JS. And Reddit has absolutely awful JS that obscures the content.
All of this was done within the web platform. It could have been good, but they decided to make it bad on purpose. And at least in the case of Reddit, it used to be good!
Restricting or rewriting the platform doesn't solve that problem - they are pushing people to use their mobile apps and sign in, etc. They will simply use a different platform.
(Also note that these platforms somehow make themselves available to crawlers, so I use https://archive.is/, ditto with the NYTimes and so forth. IMO search engines should not jump through special hoops to see this content; conversely, if they make their content visible to search engines, then it's fair game for readers to see.)
I'll put it like this: I expect corporate interests to continue using the most full-featured platforms available, including the web platform as we know it today. After all, those features were mostly created for corporate interests.
That doesn't mean everybody else has to build stuff the same way the corps do. I think we can and should aspire for something better - where by better in this case I mean less featureful.
The trick here is to make sure people use it, for a large value of people. I was pretty interested in Gemini from the beginning and wrote some stuff on the network (including an HN mirror), and I found that pushing back against markup languages, uploads, and some form of in-band signaling (compression etc) ends up creating a narrower community than I'd like. I fully acknowledge this might just be a "me thing" though.
EDIT: I also think you've touched upon something a lot of folks are interested in right now, as evidenced by both the conversation here and the interest in Gemini as a whole.
I appreciate those thoughts, for sure. Thank you.
I agree, and you can look at https://www.oilshell.org/ as a demonstration of that (both the site and the software). But all of that is perfectly possible with existing platforms and tools. In fact it's greatly aided by many old and proven tools (shell, Python) and some new-ish ones (Ninja).
There is value in rebuilding alternatives to platforms for sure, but it can also be overestimated (e.g. fragmenting ecosystems, diluting efforts, what Jamie Zawinski calls CADT, etc.).
Similar to my "alternative shell challenges", I thought of a "document publishing challenge" based on my comment today on a related story:
The challenge is if the platform can express a widely praised, commercial multimedia document:
https://ciechanow.ski/gears/
https://ciechanow.ski/js/gears.js (source code is instructive to look at)
https://news.ycombinator.com/item?id=22310813 (many appreciative comments)
Yeah, there are good reasons this is my answer to "if you could" and not "what are your current projects". :)
I like the idea of that challenge. I don't actually know whether my ideal platform would make that possible or not, but situating it with respect to the challenge is definitely useful for thinking about it.
Oops, I meant NON-commercial! That was of course the point.
There is non-commercial content that makes good use of recent features of the web.
Indeed - tech isn't the blocker to fixing this problem. The tools get misused because the economic incentives overpower the ones from the intended use. Sure, you can nudge development in a certain direction by providing references, templates, frameworks, documentation, what have you - but whatever replacement comes along also needs to provide enough economic incentives to minimise the appeal of abuse. Worse still, it needs to be deployed at a tipping point where the value added exceeds the inertia and network effect of the current Web.
I absolutely believe that the most important part of any effort at improving the situation has to be making the stuff you just said clear to everyone. It's important to make it explicit from the start that the project's view is that corporate interests shouldn't have a say in the direction of development, because the default is that they do.
I think the interests of a corporation should be expressible and considered through some representative, but given the natural advantage an aggregate has in terms of resources, influence, "network effect", … they should also be subject to scrutiny and transparency that match their relative advantage over other participants. Since that rarely happens, the effect instead seems to be that the Pareto Principle sets in and the corporation becomes the authority in "appeal to authority". They can then lean back and cash in with less effort than anyone else. Those points are moot though if the values of the intended tool/project/society aren't even expressed, agreed upon or enforced.
Yes, I agree with most of that, and the parts I don't agree with are quite defensible. Well said.
Yes, I agree. I do think that this is largely a result of PDF being a corporate-driven project rather than a grassroots one. As somebody else said in the side discussion about Gemini, that's not the only source of feature creep, but I do think it's the most important factor.
I'm curious about what direction that is too. I've been using and enjoying the Gemini protocol and I think it's fantastic.
Even the TLS seems great since it would allow some simple form of client authentication, but in a very anonymous way
I do like the general idea of Gemini. I'm honestly still trying to put my thoughts together, but I'd like something where it's guaranteed to be meaningful to interact with it offline, and ideally with an experience that looks, you know… more like 2005 than 1995 in terms of visual complexity, if you see what I mean. I don't think we have to go all the way back to unformatted text, it just needs to be a stable target. The web as it exists right now seems like it's on a path to keep growing in technical complexity forever, with no upper bound.
I have some thoughts in this area:
TCP/IP/HTTP is fine (I disagree with Gemini there). It's HTML/CSS/JS that are impossible to implement on a shoestring.
The web's core value proposition is documents with inline hyperlinks. Load all resources atomically, without any privacy-leaking dependent loads.
Software delivery should be out of scope. It's only needed because our computers are too complex to audit, and the programs we install keep exceeding their rights. Let's solve that problem at the source.
I've thought about this enough to make a little prototype.
It's of course totally fine to disagree, but I genuinely believe it will be impossible to ever avoid fingerprinting with HTTP. I've seen stuff, not all of which I'm at liberty to talk about. So from a privacy standpoint I am on board with a radically simpler protocol for that layer. TCP and IP are fine, of course.
I agree wholeheartedly with your other points.
That is a really cool project! Thank you for sharing it!
Sorry, I neglected to expand on that bit. My understanding is that the bits of HTTP that can be used for fingerprinting require client (browser) support. I was implicitly assuming that we'd prune those bits from the browser while we're reimplementing it from scratch anyway. Does that seem workable? I'm not an expert here.
I've been involved with Gemini since the beginning (I wrote the very first Gemini server) and I was at first amazed at just how often people push to add HTTP features back into Gemini. A little feature here, a little feature there, and pretty soon it's HTTP all over again. Prune all you want, but people will add those features back if it's at all possible. I'm convinced of that.
So you're saying that a new protocol didn't help either? :)
Pretty much. At least Gemini drew a hard line in the sand rather than trying to prune an existing protocol. But people like their uploads and markup languages.
Huh. I guess the right thing to do, then, is design the header format with attention to minimizing how many distinguishing bits it leaks.
Absolutely. There is nothing very fingerprintable in minimal valid http requests.
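As a rough illustration, here's a hand-rolled minimal HTTP/1.1 request over a plain socket (a sketch, not a recommendation). The distinguishing bits mostly come from everything real clients add on top of this: User-Agent, Accept-* headers, cookies, TLS handshake details, and so on.

```python
# Sketch: a minimal valid HTTP/1.1 request carries very few distinguishing bits.
# Real browsers add User-Agent, Accept-Language, cookies, TLS fingerprints, etc.
import socket

host = "example.com"
request = (
    f"GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"       # the only header HTTP/1.1 requires
    "Connection: close\r\n"   # keep the exchange simple
    "\r\n"
)

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n")[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```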
This is where my interest in store-and-forward networks lies. I find that a lot of the stuff I do on the internet is pulling down content (read threads, comments, articles, documentation) and I push content (respond to things, upload content, etc) much less frequently. For that situation (which I realize is fairly particular to me) I find that a store-and-forward network would make offline-first interaction a first-class citizen.
I distinguish this from IM (like Matrix, IRC, Discord, etc) which is specifically about near instant interaction.
I agree.
Have you looked at the gemini protocol?
I have, see my other reply.
The whole damn thing.
Instead of having this Frankenstein's monster of different OSs and different programming languages and browsers that are OSs and OSs that are browsers, just have one thing.
There is one language. There is one modular OS written in this language. You can hot-fix the code. Bits and pieces are stripped out for lower powered machines. Someone who knows security has designed this thing to be secure.
The same code can run on your local machine, or on someone else's machine. A website is just a document on someone else's machine. It can run scripts on their machine or yours. Except on your machine they can't run unless you let them, and they can't do I/O unless you let them.
There is one email protocol. Email addresses can't be spoofed. If someone doesn't like getting an email from you, they can charge you a dollar for it.
There is one IM protocol. It's used by computers including cellphones.
There is one teleconferencing protocol.
There is one document format. Plain text with simple markup for formatting, alignment, links and images. It looks a lot like Markdown, probably.
Every GUI program is a CLI program underneath and can be scripted.
(Some of this was inspired by legends of what LISP can do.)
Goodness, no - are you INSANE? Technological monocultures are one of the greatest non-ecological threats to the human race!
I need some elaboration here. Why would it be a threat to have everyone use the same OS and the same programming language and the same communications protocols?
One vulnerability to rule them all.
Pithy as that sounds, it is not convincing for me.
Having many different systems and languages in order to have security by obscurity by having many different vulnerabilities does not sound like a good idea.
I would hope a proper inclusion of security principles while designing an OS/language would be a better way to go.
It is not security through obscurity, it is security through diversity, which is a very different thing. Security through obscurity says that you may have vulnerabilities but you've tried to hide them so an attacker can't exploit them because they don't know about them. This works as well as your secrecy mechanism. It is generally considered bad because information disclosure vulnerabilities are the hardest to fix and they are the root of your security in a system that depends on obscurity.
Security through diversity, in contrast, says that you may have vulnerabilities but they won't affect your entire fleet. You can build reliable systems on top of this. For example, the Verisign-run DNS roots use a mixture of FreeBSD and Linux and a mixture of bind, unbound, and their own in-house DNS server. If you find a Linux vulnerability, you can take out half of the machines, but the other half will still work (just slower). Similarly, a FreeBSD vulnerability can take out half of them. A bind or unbound vulnerability will take out a third of them. A bind vulnerability that depends on something OS-specific will take out about a sixth.
This is really important when it comes to self-propagating malware. Back in the XP days, there were several worms that would compromise every Windows machine on the local network. I recall doing a fresh install of Windows XP and connecting it to the university network to install Windows update: it was compromised before it was able to download the fix for the vulnerability that the worm was exploiting. If we'd only had XP machines on the network, getting out of that would have been very difficult. Because we had a load of Linux machines and Macs, we were able to download the latest roll-up fix for Windows, burn it to a CD, redo the install, and then do an offline update.
Looking at the growing Linux / Docker monoculture today, I wonder how much damage a motivated individual with a Linux remote arbitrary-code execution vulnerability could do.
Sure, but is this an intentional strategy? Did we set out to have Windows and Mac and Linux in order that we could prevent viruses from spreading? It's an accidental observation and not a really compelling one.
I've pointed out my thinking in this part of the thread https://lobste.rs/s/sdum3p/if_you_could_rewrite_anything_from#c_ennbfs
In short, there must be more principled ways of securing our computers than hoping multiple green field implementations of the same application have different sets of bugs.
A few examples come to mind though: Heartbleed (which affected anyone using OpenSSL) and Spectre (anyone using the x86 platform). Also, Microsoft Windows for years had plenty of critical exploits because it had well over 90% of the desktop market.
You might also want to look up the impending doom of bananas, because over 90% of bananas sold today are genetic clones (it's basically one plant) and there's a fungus threatening to kill the banana market. A monoculture is a bad idea.
Yes, for humans (and other living things) the idea of immunity through obscurity (to coin a phrase) is evolutionarily advantageous. Our varied responses to COVID are one such immediate example. It does have the drawback that it makes it harder to develop therapies, since we see population specificity in responses.
I don't buy that we need to employ the same idea in an engineered system. It's a convenient back-ported bullet-list advantage of having a chaotic mess of OSes and programming languages, but it certainly wasn't intentional.
I'd rather have an engineered, intentional robustness to the systems we build.
To go in a slightly different direction: building codes. The farther north you go, the steeper roofs tend to get. In Sweden, one needs a steep roof to shed snow buildup, but where I live (South Florida, just north of Cuba) building such a roof would be a waste of resources because we don't have snow; we just need a shallow angle to shed rain water. Conversely, we don't need codes to deal with earthquakes, nor does California need to deal with hurricanes. Yet it would be so much simpler to have a single building code in the US. I'm sure there are plenty of people who would love to force such a thing everywhere if only to make their lives easier (or for rent-seeking purposes).
We have different houses for different environments, and we have different programs for different use cases. This does not mean we need different programing languages.
In principle, yeah. But even the best security engineers are human and prone to fail.
If every deployment was the same version of the same software, then attackers could find an exploitable bug and exploit it across every single system.
Would you like to drive in a car where every single engine blows up, killing all inside the car? If all cars are the same, they'll all explode. We'd eventually move back to horse and buggy. ;-) Having a variety of cars helps mitigate issues other cars have, while still having problems of their own.
In this heterogeneous system we have more bugs (assuming the same rate of bugs everywhere) and fewer reports (since there are fewer users per system) and a more drawn-out deployment of fixes. I don't think this is better.
Sure, you'd have more bugs. But the bugs would (hopefully) be in different, distinct places. One car might blow up, another might just blow a tire.
From an attacker's perspective, if everyone drives the same car and the attacker knows that the flaws from one car are reproducible with a 100% success rate, then the attacker doesn't need to spend time/resources on other cars. The attacker can just reuse and continue to rinse, reuse, recycle. All are vulnerable to the same bug. All can be exploited in the same manner reliably, time after time.
To go by the car analogy, the bugs that would be uncovered by drivers rather than during the testing process would be rare ones, like, if I hit the gas pedal and brake at the same time it exposes a bug in the ECU that leads to total loss of power at any speed.
I'd rather drive a car a million other drivers have been driving than drive a car that's driven by 100 people. Because over a million drivers it's much more likely someone hits the gas and brake at the same time and uncovers the bug which can then be fixed in one go.
Sounds a lot like https://en.wikipedia.org/wiki/Genera_(operating_system)
Yes, that's probably the LISP thing I was thinking of, thanks!
I agree completely!
We would need to put some safety measures in place, and there would have to be processes defined for how you go about suggesting/approving/adding/changing designs (that anyone can be a part of), but otherwise, it would be a boon for the human race. In two generations, we would all be experts in our computers and systems would interoperate with everything!
There would be no need to learn new tools every X months. The UI would be familiar to everyone, and any improvements would be forced to go through human testing/trials before being accepted, since it would be used by everyone! There would be continual advancements in every area of life. Time would be spent on improving the existing experience/tool, instead of recreating or fixing things.
I would also like to rewrite most stuff from the ground up. But monocultures aren't good. Orthogonality in basic building blocks is very important. And picking the right abstractions to avoid footguns. Some ideas, not necessarily the best ones:
A solved problem. seL4, including support for capabilities.
seL4 is proven correct by treating a lot of things as axioms and by presenting a programmer model that punts all of the difficult bits to get correct to application developers, making it almost impossible to write correct code on top of. It's a fantastic demonstration of the state of modern proof tools; it's a terrible example of a microkernel.
FUD unless proven otherwise.
Counter-examples exist; seL4 can definitely be used, as demonstrated by many successful uses.
The seL4 foundation is getting a lot of high profile members.
Furthermore, Genode, which is relatively easy to use, supports seL4 as a kernel.
Someone wrote a detailed vision of rebuilding everything from scratch, if you're interested:
https://urbit.org/
I never understood this thing.
I think that is deliberate.
And one leader to rule them all. No, thanks.
Well, I was thinking of something even worse - design by committee, like for electrical stuff, but your idea sounds better.
We already have this, dozens of them. All you need to do is point guns at everybody and make them use your favourite. What a terrible idea.
Emacs in Guile.
I know there was the work in ~2014, but being able to run things that were multi-threaded, have a much simpler language, and support the guile ecosystem where emacs libraries would just be guile libraries makes my heart happy.
Doesn't make a lot of sense to actually do, and I understand the limitations.. but whoooo boy have I been lusting for this for years and years.
Why Guile instead of Common Lisp? Elisp is far closer to Lisp than to Scheme; Lisp has multiple compatible implementations while Guile (and every other useful Scheme) is practically incompatible with any other useful Scheme, because the RnRS series, even R6RS, tend to underspecify.
GNU's focus on Scheme rather than Common Lisp for the last 20 years has badly held it back. Scheme is great for teaching and implementation: it is simple and clean and pure. But it is too small, and consequently different implementations have to come up with their own incompatible ways of doing things.
While the Common Lisp standard is not as large as I would like in 2021, it was considered huge when it came out. Code from one implementation is compatible with that of others. There are well-defined places to isolate implementation-specific code, and compatibility layers for, e.g., sockets and threading, exist.
I'd also love to see a Common Lisp Emacs. One day, I'm hoping that CLOS-OS will be usable, and using it without Emacs is kind of unthinkable.
Because it's my favorite flavor of scheme, and I enjoy programming in scheme much more than common lisp. I do not like lisp-2's, but I do like defmacro (gasp) so who knows.
Emacs being rewritten in common lisp would also be awesome.
Multiple times I've considered, and then abandoned, taking microemacs and embedding a Scheme in it. The reality is that it's basically a complete rewrite, which just doesn't seem worth it… And you lose all compatibility with the code I use today.
Guile-emacs, though, having multiple language support, seems to have a fighting chance at a successful future, if only it was "staffed" sustainably.
Isn't that basically Edwin?
I mean, by the reductivist's view, yes. Edwin serves a single purpose of editing scheme code with an integrated REPL. You could, of course, use it beyond that, but that's not really the goal of it, so practically no one does (I am sure there are some Edwin purists out there).
My interest in this as a project is more as a lean, start-from-scratch standpoint. I wonder what concepts I'd bring over from emacs. I wonder if I'd get used to using something like tig, instead of magit. I wonder if the lack of syntax highlighting would actually be a problem… The reason I've never made a dent in this is because I don't view reflection on how I use things like this as deeply important. They're tools…
I'm going to go more heretical: emacs with the core in Rust and Typescript as the extension language. More tooling support, more libraries, more effort into optimizing the VM.
Lisps are alright but honestly I just don't enjoy them. That maxim about how you have to be twice as clever to debug code as you do to write it, so any code that's as clever as you can make it is undebuggable.
this exists, doesn't it?
The last implementation work was ~2015 according to git.
If you know of this existing, let me know!
Rewrite the whole Unix tool space to emit and accept ND-JSON instead of idiosyncratic formats.
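As a sketch of what that could look like, here's a toy ls-alike (hypothetical, not an existing tool) that emits one JSON object per line, so downstream tools can filter on fields with jq or another ND-JSON-aware program instead of parsing whitespace-aligned columns:

```python
# Toy sketch of an ND-JSON-emitting "ls": one JSON object per entry, one per line.
# Downstream consumers can then do e.g. `... | jq 'select(.size > 4096) | .name'`.
import json
import os
import stat
import sys

path = sys.argv[1] if len(sys.argv) > 1 else "."
for entry in sorted(os.scandir(path), key=lambda e: e.name):
    info = entry.stat(follow_symlinks=False)
    print(json.dumps({
        "name": entry.name,
        "size": info.st_size,
        "mode": stat.filemode(info.st_mode),
        "mtime": int(info.st_mtime),
        "is_dir": entry.is_dir(follow_symlinks=False),
    }))
```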
I'm a HUGE fan of libucl. It supports several constructs, including JSON.
I was thinking YAML in that it's still human readable at the console but also obviously processable, and JSON is a subset.
YAML is a terrible format that should literally never be used. :-) There's always a better choice than YAML.
Strict yaml, without the stupid gotchas
I don't think yaml is the best choice for this. Most of the time users will want to see tables rather than a nested format like yaml. I guess it is a bit nicer to debug than JSON, but ideally the user would never see it. If it was going to hit your terminal it would be rendered for human viewing.
Yaml is also super complex and has a lot of extensions that are sparsely supported. JSON is a much better format for interop.
On the other hand the possibility of passing graphs between programs is both intriguing and terrifying.
I was recently trying to figure out how, from inside a CLI tool I'm building, to determine whether a program was outputting to a screen for a user to view, or a pipe for another program to consume… Turns out it's not as straightforward as I thought. I do believe the modern Rust version of cat, bat, can do this. Because my thought is… why not both?
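For the screen-vs-pipe question, the usual answer (and, I believe, roughly the check bat and friends use) is to test whether stdout is a TTY and pick the output format accordingly; a small sketch:

```python
# Sketch: decide between human-friendly and machine-friendly output at runtime.
import json
import sys

rows = [{"name": "example", "size": 123}]

if sys.stdout.isatty():
    # Attached to a terminal: print something pretty for a person.
    for row in rows:
        print(f"{row['name']:<20} {row['size']:>10}")
else:
    # stdout is a pipe or file: emit ND-JSON for the next program.
    for row in rows:
        print(json.dumps(row))
```

A --format flag can then override the auto-detection, which covers the "why not both" case.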
this is a good idea and wouldn't even be that much work
I wish there was a viable mobile phone platform that had decent devices and wasn't tied to either Apple or Google.
It depends on your definition of viable, of course, but Sailfish, even with its annoyances, has been my only or primary mobile OS for about eight years now. It is pretty solid these days. But I do use the version with an android emulator for banking apps and such.
A store-and-forward messaging system. I often find myself in areas of dodgy connectivity and would love to have a way to "catch up" when I'm connected, head back out, and then queue up responses for when I have some connectivity back. I'd also like some QoS for messages that are important to me (like from my partner).
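Not a protocol, but the client side of that wish is basically a durable outbox: writes land in a local queue immediately, and a flush pass delivers them (highest priority first) whenever connectivity returns. A hypothetical sketch, names made up:

```python
# Hypothetical sketch of a client-side outbox for a store-and-forward messenger:
# writes go to a local queue immediately; a flush pass sends them when online.
import json
import time
from pathlib import Path

OUTBOX = Path("outbox.ndjson")

def queue_message(to: str, body: str, priority: int = 0) -> None:
    """Append a message to the local outbox; always succeeds, even offline."""
    record = {"to": to, "body": body, "priority": priority, "queued_at": time.time()}
    with OUTBOX.open("a") as f:
        f.write(json.dumps(record) + "\n")

def flush(send) -> None:
    """Try to deliver queued messages, highest priority first; keep what fails."""
    if not OUTBOX.exists():
        return
    pending = [json.loads(line) for line in OUTBOX.read_text().splitlines() if line]
    pending.sort(key=lambda m: -m["priority"])  # crude QoS: important contacts first
    remaining = [m for m in pending if not send(m)]  # send() returns True on success
    OUTBOX.write_text("".join(json.dumps(m) + "\n" for m in remaining))
```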
This might not be quite what you're looking for, but Scuttlebutt was built to do exactly that.
From memory, the creator did quite a lot of sailing, and wanted a way to receive information via mesh, potentially from other boats, who could propagate data back to shore.
I am also very interested in this.
Matrix, p2p matrix in particular, should handle that. It does allow for a modern NNTP replacement.
I would try and reimplement Z3 in Rust. It would probably take 10 years and most of my hair, though.
I remembered starring on github a SAT solver written in Rust 3 or 4 years ago, it looked really promising. I was about to tell you to look into it, until I realized you were the author of it. :D
I know that Z3 does much more than Minisat, but I'm wondering: is it worth it? (= All the extra Z3 features)
There's a bunch of other SAT solvers in rust (and this one is just mostly a port of minisat done by someone else, that I forked to add a few things, I can't claim authorship).
Yes! SMT solvers are more convenient to use than SAT solvers in many situations, and I think it'll increasingly be the case in the future (e.g. bitvectors are often as powerful as their SAT encoding equivalent). In some cases, you have a clear encoding of your problem into SAT, in which case it might be better. This book has a lot of examples using both a SAT solver and Z3.
Beyond that, SMT solvers are one order of magnitude more advanced than SAT solvers, I'd say. They're full of features to handle theories (hence the name), and give users a nice high-level language to express their problem, rather than a list of clauses. SAT has its place (see for example Marijn Heule's work on using clusters for parallel SAT solving to solve really hard math problems), but for most users I think SMT is the easiest to use, because of all these additional features. Amazon AWS has formed a large team of SMT experts and is using cvc5 for crypto or authentication stuff, not sure which exactly.
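To make the "high-level language instead of a list of clauses" point concrete, here's a tiny example using Z3's Python bindings (the z3-solver package), stating a problem over 32-bit bitvectors directly rather than hand-encoding it to SAT:

```python
# Small example of why SMT is comfortable: state the problem over bitvectors,
# no manual CNF encoding. Requires the z3-solver package (`pip install z3-solver`).
from z3 import BitVec, Solver, sat

x = BitVec("x", 32)
y = BitVec("y", 32)

s = Solver()
s.add(x * y == 0x12345678)   # find a nontrivial factorization mod 2^32
s.add(x != 1, y != 1)

if s.check() == sat:
    m = s.model()
    print(m[x].as_long(), m[y].as_long())
```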
Not in OCaml?!
No, rust is better suited to this kind of very high performance tool :-). I do have an SMT solver in OCaml in the works, but you can't really compete with the C/C++ crowd in terms of performance.
History, specifically when I decided to write software as a career
I'd like to rewrite history, specifically when other people decided to write software as a career!
My life
feel you
The web. It's a Gibsonian chaotic mess without being cool. 1995 cyberpunk aesthetics and fluidity. I should be swimming in data, not staring at a screen with an awkward posture, almost drooling.
You have a way with words, sir! Reading this made my morning.
OpenSSL. I hate working with OpenSSL.
If I tasked you with finding where a specific feature is implemented (to verify its correctness), given only a web browser and access to their source repository on GitHub, you'd have a miserable time.
I came here to also say OpenSSL. And libcurl.
I once had such a plan in the form of Octavo, written in Rust. However I lost interest a little (also I need to eat, and I am not a student with a bunch of free time anymore).
Isn't https://mesalink.io kinda that?
There's a lot of such projects, but they haven't displaced OpenSSL yet.
If by "rewrite" you mean jump back in time and be in charge of making it (that is, if I'd have to actually do the work and live with the consequences of that world), I would replace C with a C that didn't have null, had a string type, and probably had objects. It's the obvious answer, it's probably going to be a good thing for everyone else on the list, and while I don't think I'd like living pre-internet, I'd be okay to take the hit for the team.
If by "rewrite" you mean magically replace something that exists now with something changed, I think I'd probably rewrite the MIT license to be this ungodly poisonous copyleft thing and watch the world burn.
Don't forget 64-bit timestamps!
I have big ambitions here, so I probably won't ever be able to work on them:
A GUI toolkit that is as easy to use as HTML but as memory-efficient as native.
For some reason I'm reminded of XUL. And I think I just heard a Firefox developer cry out in terror.
the big question there would be what features do you consider essential from html?
This is an idea I've been toying with for some time. Basically an HTML rendering engine meant for GUIs, like Sciter. But instead of Javascript, you'd control everything using the host language (probably Rust). If you'd want JS, you'd have to somehow bind that to Rust.
I think this could really work out. However: I've dealt with XML and HTML parsers for years in the past, and I'm not sure I'm ready yet to dive into the mess that is HTML again.
All of it. I want something that is like Emacs in inspectability and plasticity, but with the care and attention of classic Mac OS. I don't care if nobody else wants it, or wants to use it. I don't care if it annoys Unixheads. I don't care if it's hard to communicate with the rest of the world.
I'd be really excited about rewriting Terraform - something that's not Nix but also not Lisp, and has some better state/secret/API endpoint functionality. I feel Terraform is lacking many features, some of which can be filled in by Terragrunt, but things like API logs, applying/reverting/… would be incredibly useful to add.
Email, I am already considering how to do it using activitypub or something federated for longform asynchronous messaging, maybe with some newsletter-like follow system that could be an alternative to RSS if it caught on.
Also specifically no interop with email - ideally I don't want any of email's spam and legacy to flood into the new system. Not sure if and when I will get this fleshed out, but it's always on my mind.
Programming teachings to avoid breaking brains and let people do most of their work without state.
Communications between computers. I want Artoo to plug in to my port and be able to navigate around my help docs! Also want to bring projects between computers Iron Man style.
The expectations of embedded device makers that nobody will ever want to jack in to the device and drive it with external software. That applies to at least ovens, furnaces, washing machines, and clock radios.
Smalltalk. Imagine if the host system files were first class ways to host code in a smalltalk, with similar first class support for dynamic libraries (or other executables so at least you could link them in).
Programming teaching that makes the inner state of more programs, including intermediate stages in transformations/compilations, visible and possibly editable. See how Excel does this to great effect.
Typescript - would love a TypeScript that performs well at scale. I don't think I'd ever finish if I tried it though; the scope of the project is quite vast now (language features, editor integrations, refactoring, etc).
I have not thought about this deeply but I have a feeling that some of the things that make typescript great (e.g. the ability to incrementally port a javascript codebase) are the things that make it slow. For example, being able to omit the return type of a function likely forces a lot of things to happen in series that could otherwise happen in parallel.
Those tradeoffs make sense in a world that's mostly JavaScript, but do they still make sense in a world that's mostly TypeScript? I think not. Since there has been a culture shift towards static typing, the constraints on the tool have also shifted.
Why not C#? As far as I can see that's a fast language that TypeScript was very much influenced by.
It's obviously not compatible with JS, but anything that is won't be fast. JS can be fast in some cases with heroic JITs, but it will always have slow cases due to its semantics, which can't be changed without breaking the language. (This was the motivation behind Dart too.)
Ah, I didn't make myself very clear. I was talking about the performance of the compiler rather than the output of the compiler. Though it would be nice if that was faster too :)
Re-write HDFS in something that doesn't run on the JVM. Same for Hive.
What on earth needs HDFS but can't take 30ms of JVM startup time?
It's not 30ms, it's more like 6 seconds when you run something like hdfs dfs -ls /
To be clear, the above is the client warming up before it begins to contact the server. The server will respond to anything in an instant as there is no warm-up to respond to requests.
Go? Rust? Python?
I'm not sure if I'd rebuild in Rust or GoLang but most definitely not Python. There are a lot of CPU-intensive operations hiding in Hadoop.
There's actually an HDFS client written in Go. It's awesome as there is no JVM warmup time when you execute it, unlike the HDFS client that ships with Hadoop. Waiting 6 seconds every time you want to list the contents of another folder is frustrating. Someone now just needs to port everything else.
Sorry, I was trolling a bit with Python. That said, doesn't recent JDK have native compilation support?
Unicode. Seriously, I'd rewrite the Unicode specifications from scratch.
What would you change?
I would go back much further and redesign the alphabet and English spelling rules.
I for one would not admit emojis into unicode. Maybe let whatever vendors want to standardize something in the private use areas. But reading about new versions of unicode and the number of emojis added has me wondering about the state of progress in this world.
Customers demand emojis. Software vendors have to implement Unicode support to accommodate that. Unicode support is more widespread.
I take that as a win.
Besides, sponsoring emoji funds Unicode development to some extent.
MSN Messenger had emoji-like things 20+ years ago, but they were encoded as [[:picture name:]]. This works, because they are pictures, not characters. Making them characters causes all sorts of problems (what is the collation order of lower-case lambda, American flag and poop in your locale? In any sane system, the correct answer is "domain error").
Computers had been able to display small images for at least a decade before Unicode even existed; trying to pretend that they're characters is a horrible hack. It also reinvents the problems that Chinese and other ideographic languages have. A newspaper in a phonographic language can introduce a neologism by rearranging existing letters; one in an ideographic language has to either make a multi-glyph word or wait for their printing press to be updated with the new symbols. If I want a new pictogram in a system that communicates images, I can send you a picture. If I want to encode it as unicode then I need to wait for a new unicode standard to add it, then I need to wait for your and my software to support it.
On the contrary, shipping new emoji is a great way to trick people into upgrading something when they might not otherwise be motivated. If you have some vulnerability fixes that you need to roll out quickly, bundle them alongside some new emoji and suddenly the update will become much more attractive to your users. Works every time. All hail the all-powerful emoji.
Sure, let software vendors push security updates with emojis. Unicode the standard doesn't need to do that.
The main rewrite project that I will never get to is reinventing NeWS on top of a modern set of abstractions. A full modern colour-space, text layout engine (OpenType fonts, proper typesetting with something like SILE built in), along with the application-specific bits provided as WebAssembly (no hand-written PostScript or fun attempts to generate PostScript from another language, just compile your view objects to run on the display server), and a UI toolkit that makes views store only ephemeral state so that you can always disconnect from one display server and reconnect to another, per application. I'd want to write a browser-based implementation and a bare-metal one, so that you could write apps for this system that could run on any existing platform but also natively.
The rewrite project that I might actually do, is writing a distributed filesystem for Confidential Computing where data is stored in encrypted cloud blob storage and metadata in CCF, so the entire thing has strong confidentiality and integrity guarantees (the cloud provider is trusted only for availability, and even that can be mitigated by mirroring the blob storage across multiple providers).
I would like a Xorg that is easier to build and doesn't do so many things I don't need. I understand why some of those things are there historically, but it's one of the last remaining resource hogs on my system and I am sure there is some way to slim it down.
Screensharing and collaboration tools that don't suck. I spend a lot of time collaborating, mentoring and helping folks when I have something to contribute or even need help of my own. I deal with a wide range of skillsets from very computer illiterate to system architects at major corporations, but it seems we always run into some unforeseen issue with setup or other technical difficulties.
I use mainly Microsoft Teams (which is actually pretty decent) and Teamviewer, and even Discord on occasion, but they all have strengths and weaknesses.
I have briefly played with Jitsi and it seems ok, but there don't really seem to be any open standards for this stuff out there. RDP, as painful as it is for me to admit, is the one consistent protocol that works well. VNC has been around for a long time but is laggy and the amount of derivatives makes my head spin!
Moreover Google Meet is ridiculously CPU-hungry on Firefox. Of course, the official "solution" is to use Chrome instead…
Actually I probably wouldn't, in part because I've been there, done that, and know rewrites rarely deliver on the other promises. I wrote up a rant about this not long ago: http://dpldocs.info/this-week-in-d/Blog.Posted_2021_03_08.html#adam's-rant
Rewriting other people's stuff is cool because then you can say you did it yourself. I'm ALL about that! Even if it is worse, at least it is YOURS. I've done tons of that. But now I'm at the point where the majority of what I use is my own stuff… and if I were to rewrite it now, I'd probably just break functionality for a long time then eventually repeat the same ugliness again that made me want to rewrite it in the first place.
A few months ago, I was tempted to rewrite my gui library. But that library took a long time to write and a rewrite would too… and probably come out bad as well. And when I got specific about what I really hated most, a migration path actually came to mind that I could realistically do in a month. I wrote about it in my blog too: http://dpldocs.info/this-week-in-d/Blog.Posted_2021_05_03.html#minigui and now I'm a little happier with it without a rewrite.
that said there are still a few things on my todo list, like a custom calendar, and a new browser ui (doing it from scratch is too much, but at least my own ui over a webview would help a lot with the pain I have using them). Which aren't exactly rewrites but still pieces of my custom environment that aren't complete at this time.
I'm not confident this is a good idea, but I'd like to see GUIs rewritten with the Godot Engine in order to find out if it actually is a good idea.
https://medium.com/swlh/what-makes-godot-engine-great-for-advance-gui-applications-b1cfb941df3b
I thought about that a while ago, but Godot's text handling was incredibly primitive. I saw some recent blog posts that look as if it's improved massively. There are a few other things that are probably possible to implement, but not there yet in Godot. For example, if I understand correctly, the standard structured data controls don't do lazy loading in Godot. I didn't find a table view in the API but the tree view looks as if it requires you to populate it up-front. The last GUI application I wrote needed to be able to create a table view with tens of millions of rows. This was trivial with Cocoa / GNUstep, because the GUI framework just needed rows to be provided as they became visible.
I have a few quick tests for a GUI framework:
Very few things do well on all of these.
Rewriting the way GPU drivers work with OpenGL and Vulkan, etc. Instead of drivers being required in the OS, they are ROM drivers on the GPUs and they work with the software without having to install a binary blob driver or anything. That way ALL OSes are GPU accelerated. You can flash the ROMs to update the GPU drivers.
Kubernetes. Travis / GHA / BuildKite / the cloud job system of the week. Terraform.
They're all the same thing.
The answer was probably Plan 9.
Long comment about that: https://lobste.rs/s/ww7fw4/unix_shell_history_trivia#c_mjcz7m (with disclaimers up front, etc.)
I mention Plan 9 in a couple of places, but I think it mostly works over LANs (fast, reliable networks); something that works on more diverse hardware would be more useful (similar to the web, git, BitTorrent, etc.)
Humanity
I'd like to completely reimagine, redesign and reimplement a modern operating system that, having learned the lessons of UNIX, Linux etc., would be centered on modern concepts in computer science (no more "everything is a file"), designed for interoperability between desktop, mobile and IoT. It's more about creating a new, more modern and mature standard that can overcome POSIX, Windows, Mac/iOS (BSD) and the bizarre Android SDK, and then developing a whole new OS that learns from all these lessons.
I want to make an iTunes 4.x-era inspired music player for local music libraries. No other music player has ever come close to matching my mental model for how music should be organized and having the feature set I want, so it seems if I want anything like it in 2021 that is reliable and not trying to upsell me on streaming music subscriptions, I'm just gonna have to write my own. I started work on the Mac version a few weeks ago.
https://www.enqueueapp.com/
https://github.com/Overcyn/Enqueue
Hadn't seen this one. It looks pretty nice but still a bit off from what I want. Thanks for sharing though!
I'll go a different way with this and take more of a broad software design "rewrite":
JSON. Not really "from scratch", just the little things like trailing commas that are annoying enough in real life to drive people to horrifying monstrosities like YAML that almost infinitely raise the bar of entry for implementations.
or, rewinding a bit further: the entire syntactic side of the web stack, informed by… well, Lisp. S-expressions (with some universal DOCTYPE-like header syntax and/or namespacing mechanism) everywhere, entirely avoiding this ridiculous zoo of [SG|X]ML + a vaguely C-like veneer on a scripting language lazily cobbled together in a week + a subset of that being the de-facto structured data syntax.
Fascbook
I sometimes wish I had the time to rewrite Rust with dependent typing and with monads, and have it compile to readable C99.
Some goals: low overhead, safety.
But in addition to this, the same operators for async and results, since they both would be monads. And also more expressive types (Vec<len=n > 1>.head() returns the value, since it is proven that it has at least one element). Also, since it compiles to C, there's no need to "rewrite in newlang"; you can use almost any library as is.
My thought probably requires some time travel, so it's obviously not realistic.
I would love to have a base cloud API that is supported by all major cloud providers. It would need to be something more than a library with specific language bindings; it would have to be defined (g?)RPC or REST API first, and have the tooling built around this. It would provide ways to deal with common compute, storage and network tasks in a way where you declare the dependencies between the base units, so we don't need to orchestrate outside the API for deployments that fit this base mold.
The API would need to support extension points that would allow for the unique services and capabilities of each cloud provider. My hope for this is that the base API design will influence these extensions and we would likely have a more consistent API even for the extensions.
The value I would hope this base cloud API would provide is removing a ton of complexity from all the systems we now build to deal with any cloud system. I imagine if Terraform/Ansible/Chef/Puppet/Salt didn't have to implement these independently, they could focus on building out their own niche configuration philosophy more. Additionally, I think this base would simplify a self-hosted cloud (on-prem, home or security lab). In the same way it may reduce vendor tie-in, but hopefully in a way that doesn't hamper innovation at the cloud providers.
Ada compiler and GUI toolkit