But I’d rather be stabbed by a butter knife than with a sword.
I’d rather be slashed with a butter knife, but not stabbed. The butter knife’s bluntness would require way more force to stab, whereas a sword probably goes through smoothly…
Basically, you’re being brutally assaulted either way if you’re being stabbed with a butter knife.
If you’re actively being stabbed, I think Rust is the least of your immediate problems. But I definitely would prefer to be stabbed by a Ceramic knife/sword to avoid the complications of Rust.
yes it is a platitude, but imo slightly better than considering a language as being generically “unsafe” (right after promoting Rust as the solution to go with…)
just another day at these forums I guess, sometimes I wonder if these rust promoters aren’t just bots that look out for C language posts and fire away ¯\_(ツ)_/¯
I mean, I get where you’re coming from; I didn’t like the comment you’re responding to either. But I do think there’s truth to the core issue: we’ve tried this C thing for quite a while now, and we’re basically still running into the same issues as decades ago. I don’t think Rust is necessarily the answer to all of these problems either though, but that’s a different story.
Personally I think this entire thing is just off-topic here; it’s just whining about C on something that happens to be written about C with no real substance, so I flagged the previous (now-deleted) comment as such.
But I do think there’s truth to the core issue: we’ve tried this C thing for quite a while now, and we’re basically still running into the same issues as decades ago.
Serious question: are we running into the same issues over and over again because of the number of C codebases (e.g. net new problems)? Or are we running into the same problems over and over again in the same set of (< 100 widely used) codebases because they suck horribly, and reoffend?
I am not arguing that porting those codebases to Rust isn’t viable, but what also seems viable is rearchitecting portions of said codebases to minimize potential security problems, adopting coding standards, sanitizers, investment into Frama-C, etc.
There are probably billions of lines of C out there as part of tens of thousands of codebases that exist without any first-hand knowledge of the original decisions, requirements, design, etc. Porting that code to Rust might as well be a project from scratch, starting at requirements, which will likely fail because Joe in accounting (who needs the software) doesn’t actually know that the system does X to account for Y… the “tribal” knowledge is long dead.
However, some smart kid touches the very important C code now and then to update typos or some constants and occasionally needs to debug and patch a segfault. They can build and distribute new releases. They also likely have enough familiarity to apply recommendations from analysis tools and can make decisions on “this part of the code has scary business logic in it, I’ll leave it alone.”
This raises the question, though: is this codebase even an assessed security risk?
Serious question: are we running into the same issues over and over again because of the number of C codebases (e.g. net new problems)? Or are we running into the same problems over and over again in the same set of (< 100 widely used) codebases because they suck horribly, and reoffend?
I don’t know exactly. The number of common (C) applications and libraries exposed to untrusted input/the network is actually not all that large in total, so it kind of makes sense it’s a comparatively small set we keep hearing about. As far as I know, there are very few significant codebases that have avoided these problems entirely.
As for the rest: I don’t think we should start rewriting C to $something_else en masse. That would be silly, for the reasons you mentioned. I accept the reality that C will be with us for a long time and just continue writing in C where it makes sense, but I also think that some newer languages would actually be better in the vast majority of cases.
here used as a response to “my favourite gun doesn’t kill good people”.
which is not something new, since C has a track record of getting attacked by most other languages whenever these languages bring something “new(ish)” while making the tradeoff of complicating the simple machine abstraction that C provides (which was imo Dennis Ritchie’s greatest achievement).
this was just my poor miserable attempt at trying to avoid the discussion derailing into a Rust discussion…
It needs to be said that the people behind Soapbox and Spinster are TERFs. Alex Gleason is a TERF, and if I need to I can pull up plenty more receipts, but I’d rather not link to their stuff.
Now is a really bad time to be a trans person, their basic civil rights are under attack, it’s getting hard for them to use the bathroom (i.e. go outside) or get basic medical care in part because of people like this coordinating harassment campaigns. We really can’t support people who spend their time creating forums to attack trans people, and using or promoting their software is definitely part of that.
That’s sad. I keep seeing Pleroma recommended as an alternative to the bloat and operational complexity of Mastodon, but whenever I’ve looked, the install documentation is just abysmal.
This is such a tricky problem space. I initially had the thought that “Oh hey it’s FLOSS, somebody who wasn’t a hatemonger could fork a new project and inherit their work” but is it fruit from the tainted tree at that point?
¯\_(ツ)_/¯
How about Honk instead?
Even lighter than Pleroma, and I do not know anything to the author’s detriment.
EDIT: Also, someone could do a hard fork of Soapbox, and maybe that would be ok, but this is promoting the project itself, not a hard fork thereof.
Looks neat but… No stars? That’s… Unfortunate. I don’t love the idea of not being able to say “Hey I really like your post!”.
I mean, I get it, it removes the whole negative dopamine release loop but… Hrrm.
It DOES look deliciously light and easy to set up!
I don’t love the idea of not being able to say “Hey I really like your post!”.
You can always say that directly, usually means a lot more than just a number.
You know that’s a REALLY good point.
If you REALLY like the post, that warrants a comment, which might spur some good conversation that wouldn’t happen if you just starred and moved on.
I DO think removing that quick dopamine hit from the “like” response could lead to consuming the content in a more thoughtful, meaningful way.
I’ll add setting Honk up to my projects list :)
Its configuration isn’t great. I have a Dockerfile that runs it and SoapBoxUI:
https://github.com/sumdog/bee2/tree/master/dockerfiles/SoapBox
Mastodon is easier to scale up with zero-downtime releases (you can launch more web or sidekiq containers), but it’s a hog and not as robust as Pleroma. Pleroma has some better rules engines, but they’re more complex to configure. I hope with the SoapBox fork that Alex tries to do standard release containers and make the configuration easier. He’s done a lot of work.
🇬🇧 The UK geoblock is lifted, hopefully permanently.
For further context, have an update:
https://rabbitu.de/articles/security-disclosure-2
Well this is neat, I may adapt part of this for a ttrpg mini game @jackdoe.
Hi everyone! Happy to lend a hand.
^^^
This is handy too https://www.ec2throughput.info/
Well this would have been handy a few months back when we migrated our ingestion layer to AWS.
Kept running into odd auto-scaling issues because we were using t3a.large instances, which, without burst credits, have a very low actual max throughput.
Nice to finally see this written down. I experienced this when we had the idea to put a sync-to-customers server on a micro instance, because of course the service needs basically no CPU. We had to upgrade to a large instance just because we could not get data out. 60 Mbit/s explains everything; that is about the same upload speed as my home DSL.
Echoing @fs111 I’d note that it’s not good community manners to make your first post a self-post.
I should keep in mind to post more third-party content than first-party.
I’d disagree, I think this is fine. There’s no rule about this, so “bad community manners” is more of a personal preference.
Good content is good content, no matter whether it’s first party or third party.
Now is this good content? I personally feel it’s a little boring, and since it’s a company blog, that may make people suspect blogspam. But telling OP that the problem would be solved if they posted some other people’s content first before posting their own won’t help the quality of their content. Specific feedback about this content (not meta issues) would help future content.
Noted. I will be sure to keep that in mind from now on.
Here are two rough uBlock rules to remove most of those stupid furry pictures. This makes the blog much more bearable for me. I hope it helps somebody.
Regarding the topic itself: The author raises a valid point. I am often surprised how unprofessional even large corporations are in this regard. In the long term, quite a few whitehats will probably choose to become grey- or blackhats. Not only is the pay much better and consistent (you don’t have to nudge companies to pay promised bounties like an idiot), it also usually is safer, given there have been quite a few cases where companies “killed the messenger” and sued the security researcher instead of thanking them for the free service.
Was there a need to post these ublock rules, though? Or a need to call the pictures “stupid”? It seems fairly unkind to me and not really living up to the standards of this site.
Do you really expect anyone to read any further, if that’s how you open your comment? You could have added this separately, as a P.S. or something. Oh, and without getting personal.
Let me quote you from 1 month ago:
Now I feel like I need some ublock rules for users.
The rules you provided also filter out some of the screenshots, so you end up missing important context by doing this. Not sure I would recommend hobbling the communication to others, just because of a weird personal aversion to cartoon animals.
I am making the call as a moderator that I believe both the motivation and effect of this comment are at odds with the type of community we are trying to be. In particular, there seems to be some sort of ideology in play here - I won’t try to guess at what, I don’t want to put words in your mouth.
If you don’t like an article, don’t read it, but it is not appropriate to use lobste.rs as a place to share tips on how to excise the author’s identity from their work. @Absolucy, this goes for you as well.
Additionally, the furry community is a vibrant one that prizes creative expression, and one I am proud to be an ally to. With this in mind, it is not appropriate to spread rhetoric falsely claiming that an entire demographic is inherently sexual, as your other comment below does. I also do note that, though you’ve said nothing that would positively indicate this, in many cases attacks on the furry community function as coded attacks on the queer community; the rhetoric you’re using could apply, without modification, to both, and is equally inappropriate and counter-factual in both cases.
My apologies to everyone for how long it took to put a formal response together. These things can have lasting impact, so despite the fact that it was several days ago, the mod team deemed it important enough to respond to.
FRIGN, by some miracle you have not had a formal warning before, so this is your warning: If this happens again, you will be banned.
Don’t worry, I’ve recognized that I shouldn’t have posted what I did, and I do not plan to do anything like that again. Sorry for causing trouble, and I wish everyone here to have a nice day.
Thank you.
the ublock rules work just as well if you don’t throw unnecessary hurtful insults. i appreciate the illustrations 🤷.
The fact that the rules exist is fine.
Posting them on the lobste.rs submission of the very blog post in which the author asks people not to do that is weirdly passive-aggressive behaviour from both you & FRIGN.
I think the author is being a bit passive-aggressive here. The internet should be such that everyone can choose to block any content they want, with all advantages and disadvantages, even if it blocks part of the content because the content creator makes it deliberately hard to block certain elements on their website.
I also don’t see @Absolucy cowering and crying over being called a “jackass”. I don’t think calling something “stupid”, as I did, is something to get so hung up about. The author @soatok appears to be immature and thin-skinned to react in such a way, which is sad given his article is really excellent.
As a side remark: furry art is not just cartoon animals. The overwhelming context I see it used in is of a lewd or pornographic nature, and I don’t want to be reminded of that when I’m just interested in the technical content. I don’t mean to accuse the author of any such intent of sexualization, which is evidently not the case, but it’s sadly how I perceive furry art on the internet.
It’s not a personal attack against @soatok that I want to block the furry art, it’s just that I don’t like furry art.
We all know your actual beliefs. Your excuses don’t hide them.
How you respond to security researchers says everything about you.
Thank you for your much more refined ruleset! I really appreciate it! :)
EDIT: Nevermind seems I didn’t read the docs/source clearly enough heh.
Query: any thoughts on adding options to alias vim/vi to nvim in the hm module? The default nvim module already does this and it’d be handy to do so with your flake as well.
they claim that Rocket is production-ready. Is anyone using it in production?
https://rust-lang.org for example.
I’m not intentionally moving the goalpost here, but I just really don’t know what they mean by “production-ready”.
Assuming this is the right repo: https://github.com/rust-lang/www.rust-lang.org
rust-lang.org requires nightly and performs no database interactions; what data it has is stored in yaml files and loaded into memory. I would expect it, at a minimum, to work with stable. If all they mean by “production-ready” is “well, we put it on the internet”, I don’t think that’s a meaningful signal.
For context, I have a handful of Rocket services at my dayjob and have found it to be a bad experience.
At $job we were using it for several fairly high volume/traffic services for quite a while.
We did eventually move off it to warp due to the lack of maintenance/forward movement on the 0.5 release for the last 1.5 years~
Since Python was mentioned but not polars: https://github.com/pola-rs/polars is a data frame library for Rust (it can be used as a Python lib as well) that is a welcome alternative to pandas. For processing, slicing, grouping etc. tabular data it provides an interesting expression-based interface that makes Rust a better fit for data processing and some data analysis tasks. It provides a number of cargo features that allow picking the functionality you need.
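To give a flavour of the expression-based interface, here is a minimal sketch. It is illustrative only: the column names and data are invented, it assumes the lazy cargo feature is enabled, and recent polars releases spell the method group_by (older ones used groupby).

```rust
use polars::prelude::*;

fn main() -> PolarsResult<()> {
    // Build a small in-memory frame with the df! macro.
    let df = df![
        "city" => ["berlin", "berlin", "paris"],
        "temp" => [18.0, 21.0, 24.0]
    ]?;

    // Describe the computation as expressions; collect() executes the plan.
    let out = df
        .lazy()
        .group_by([col("city")])
        .agg([col("temp").mean().alias("avg_temp")])
        .collect()?;

    println!("{out}"); // prints the aggregated DataFrame
    Ok(())
}
```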
I can second this, I’ve been doing a number of data slicing and aggregation tasks at home lately so I’ve taken to using polar-rs quite a bit. If you’re familiar with the apache arrow format and data model you should be able to grok polar-rs decently enough.
So in order to make your site slightly more accessible to screen readers, you’ll make it completely inaccessible to browsers without JavaScript?
i was born without javascript and life has been so hard for me
Accessibility isn’t just about disorders.
I think @river’s point is that it’s most important to accommodate limitations due to circumstances beyond the user’s control. And these are limitations that can prevent people from getting or keeping a job, pursuing an education, and doing other really important things. In all cases that I’m aware of, at least within the past 15 years or so, complete lack of JavaScript is a user choice, primarily made by very techy people who can easily reverse that choice when needed. The same is obviously not the case for blindness or other disabilities. Of course, excessive use of JavaScript hurts poor people, but that’s not what we’re talking about here.
If using <details> made the site impossible to use for blind people, that would obviously be much more important, but here the complaint is that… the screen reader reads it slightly wrong? Is that even a fault of the website?

Fair point. Personally, I wouldn’t demand, or even request, that web developers use JavaScript to work around this issue, which is probably a browser bug, particularly since it doesn’t actually block access to the site.
On the other hand, if a web developer decides to use JavaScript to improve the experience of blind people, I wouldn’t hold it against them. IMO, making things easier for a group of people who, as @river pointed out, do have it more difficult due to circumstances beyond their control, is more important than not annoying the kind of nerd who chooses to disable JS.
Well, disabling JS is not always a choice. Some browsers, such as Lynx or NetSurf, don’t support it. But yeah, I generally agree.
I suppose it’s possible that some people have no choice but to use Lynx or Netsurf because they’re stuck with a very old computer. But for the most part, it seems to me that these browsers are mostly used by tech-savvy people who can, and perhaps sometimes do, choose to use something else.
And what percentage of those lynx users is tech-savvy blind people? Or blind people who are old and have no fucks left to give about chasing the latest tech? There are, for instance, blind people out there who still use NetTamer with DOS. DOS, in 2022. I’m totally on board with their choice to do that. Some of these folks aren’t particularly tech savvy either. They learned a thing and learned it well, and so that’s what they use.
Many users who need a significant degree of privacy will also be excluded, as JavaScript is a major fingerprinting vector. Users of the Tor Browser are encouraged to stick to the “Safest” security level, which disables dangerous features; most notably, it turns JavaScript off entirely.
Even if it were purely a choice in user hands, I’d still feel inclined to respect it. Of course, accommodating needs should come before accommodation of wants; that doesn’t mean we should ignore the latter.
Personally, I’d rather treat any features that disadvantage a marginalized group as a last resort. I prefer selectively using <details> as it was intended (as a disclosure widget) and would rather come up with other creative alternatives to accordion patterns. Only when there’s no other option would I try a progressively-enhanced JS-enabled option. I’m actually a little ambivalent about <details> since I try to support alternative browser engines (beyond Blink, Gecko, and WebKit). Out of all the independent engines I’ve tried, the only one that supports <details> seems to be Servo.

JavaScript, CSS, and (where sensible) images are optional enhancements to pages. For “apps”, progressive enhancement still applies: something informative (e.g. a skeleton with an error message explaining why JS is required) should be shown and overridden with JS.
(POSSE from https://seirdy.one/notes/2022/06/27/user-choice-progressive-enhancement/)
I mean, not for nothing, but I’m fairly certain you can constrain what can be executed in your browser from the website.
I’m certainly okay with a little more JS if it means folks without sight or poorer sight can use the sites more easily.
In my experience (the abuse of) JavaScript is what often leads to poor accessibility with screen readers. Like, why can I not upvote a story or comment on this site with either Firefox or Chromium? ISTR I can do it in Edge, but I don’t care enough to spin up a Windows VM and test my memory.
We need a bigger HTML, maybe with a richer set of elements or something. But declarative over imperative!
I use Firefox on desktop and have never had a problem voting or commenting here.
The fallback is always full-page reloads. If you want interactivity without that, you need a general-purpose programming language capable of capturing and expressing the logic you want; any attempt to make it fully declarative runs into a mess of similar-but-slightly-different one-off declaration types to handle all the variations on “send values from this element to that URL and update the page in this way based on what comes back”.
Yes, but do you use a screenreader? I do.
Sure, but most web applications are not and do not need to be fully interactive. Like with this details tag we’re talking about here? It’s literally the R in CRUD and the kind of thing that could be dealt with by means of a “richer HTML”.
On modern browsers, the <details> element functions builtin, without JS. In fact, that’s the entire point of adding the element.
Yes, and the article recommends against using <details>.

Great post, very cool project.
It’s crazy how ideas are just floating around.
For the past few months I’ve been thinking about what I call “single-server web apps” and “single-binary web apps”.
What I’m interested in is simplifying the deployment of web apps on services like Linode or Digital Ocean by just uploading and running a single binary that contains everything needed, including the DB and scheduled tasks. I was thinking about using Go with embedded HTML, CSS, and images, and SQLite as the DB. One could also use something like Litestream to make sure the DB is safe in the event of a major server failure, but that would require a “second server/binary”.
I don’t really know what this is, but the concept feels very appealing to me. It kinda reminds me of the PHP days where you just uploaded a script and opened the browser. I know PHP still exists, but it requires a web server and configuration to run. The idea of a single binary feels even more portable than PHP.
Also https://redbean.dev/ is a very inspiring and interesting project.
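For illustration, here is roughly what the “single binary” idea looks like in code. This sketch is in Rust rather than the Go I mentioned (Go’s //go:embed plays the same role as include_str! does here), and it is a toy, not a real server: index.html is a hypothetical file that has to exist next to the source at build time.

```rust
use std::io::{Read, Write};
use std::net::TcpListener;

// The page is compiled into the binary itself; "index.html" is a
// hypothetical file sitting next to this source file at build time.
const INDEX: &str = include_str!("index.html");

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("0.0.0.0:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // Read (and ignore) the request; a real server would parse it.
        let mut buf = [0u8; 4096];
        let _ = stream.read(&mut buf)?;
        let response = format!(
            "HTTP/1.1 200 OK\r\nContent-Type: text/html\r\nContent-Length: {}\r\n\r\n{}",
            INDEX.len(),
            INDEX
        );
        stream.write_all(response.as_bytes())?;
    }
    Ok(())
}
```

A real version would embed the CSS and images the same way and open an SQLite file next to the binary, but the deployment story is the point: copy one file to the server and run it.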
These exist. They are called unikernels and lots of people are using them.
What is the advantage over CGI?
CGI would assume that you are in a multi-process environment. Most unikernels are single-process (many are multi-threaded though). CGI would also assume all the usual unix trappings such as users, interpreters, shells, etc.
The most obvious benefit is ease of use. There are no shells or interactivity with unikernels, so when you deploy it, it’s either running or it’s not. While you can configure the server, there isn’t ssh running where you can pop in and do extra configuration post-deploy.
Then there is security. CGI quite literally is the canonical “shelling out” mechanism. CGI and the languages that heavily used it in the 90s and mid aughts were fraught with numerous security issues because of this practice. You have to deal with all the input/output sanitization and lots of edge cases.
Then there is performance. CGI performs woefully, since you have to spawn a new process per request, and on more modern systems that use “big data”, dealing with heaps in the tens of gigabytes, that becomes ridiculously bad.
Anyways, very few people actually use ‘cgi’ as it were today. For languages that need to rely on a front-end proxy like nginx (because they are single-process, single-thread, like ruby, python, node, etc.), the front-end (nginx) siphons off incoming requests and pushes them to your app back-end (your actual application).
Unikernels work really well for languages that deal with threads, like Go, Rust, Java, etc. They work well for scripting languages too, but in the setup I just described the back-end becomes individual VMs instead of worker processes. They basically force the end-user to fit their workloads to the appropriate instance type.
Isn’t that anti ease-of-use? I like to be able to go in and dig around when something goes wrong.
IMO this makes debugging significantly easier than deploying to a linux system. If I throw on my devops hat and start thinking about all the times pagerduty woke me up at 2am, half the time is spent figuring out which process is causing an issue. Something is opening up too many sockets too fast? Open up lsof to figure out which process it is. I can’t ssh in because logfiles are overflowing the disk? Now I have to hunt down the cronjob that I didn’t know existed that didn’t have proper log rotation on. In unikernel land there really is only one process, so you know which program is causing the issue. Instrument it, ship your logs out, and you are going to solve the vast majority of issues quickly.
Also there are other cases where debugging is significantly easier. Since it’s a single process system you can literally clone the vm in production (say a live database), download the image and attach gdb to it and now you not only are going to find the root of the problem but you are going to do so in a way that is non-destructive and doesn’t touch prod.
As an aside, the ease-of-use I was referring to was not about debugging (although that is insanely easy) but about deployment/administration as compared to the dumpster fire of k8s/“cloud native”. Unikernels shift all the administration responsibilities to the cloud of choice. So while you can configure networking/storage to your heart’s content, you don’t have to actually manage it. Most people don’t understand this point about unikernels; they think a k8s-like tool is necessary to use them, which is totally not true. This won’t make sense to most people until they actually go deploy them for the first time, and then it clicks.
I’d argue that’s a combination of A) familiarity, and B) linux OS having built out robust introspective/investigative tooling over decades.
A unikernel has the advantage that many of those investigative tools are for problems that no longer exist, and the disadvantage that it no longer has those tools baked-in for the problems that it does still have.
E.g. you don’t have du, but you also don’t have a local filesystem that can bring down the server when you accidentally fill it with logs/tempfiles.

Most unikernels actually do work with filesystems, as most applications that do anything want to talk to one. Databases, webservers, etc. Logging is dealt with in mostly two ways: 1) you don’t care and just rely on what goes out the serial console, which is stupid slow, so not really a prod method, or 2) you do care and ship it out via syslog to something like elastic, papertrail, splunk, etc.
So how do you find out that your app e.g. crashes because some database key was NULL?
You have logging and/or crash reporting and you do something useful with them. It’s your problem to do though, but that’s not really any different than deploying to a traditional stack.
OK, but where do the logs go? If they go to another server, which then stores them on a filesystem you’ve just kicked the can elsewhere.
Last I checked kicking the can down the road is most of IT :)
Sure, but you can’t argue that e.g. you don’t have a filesystem to manage.
If you wanted to use Rust (or see about statically linking in a sqlite VFS via cgo) you could look at https://github.com/backtrace-labs/verneuil for sqlite replication. It is completely in-process and exposes both a Rust and a C interface. It works quite well for all my home use cases, like replicating my blog’s sqlite db to an s3 blob store for easier rollbacks.
Yep. The Lemur Pro I’m typing this on has similarly atrocious speakers.
Yeah they’re usable but not the greatest, they occasionally sound tinny but otherwise work in a pinch.
One of the first things I do when I set up linux on a new laptop is disable the onboard sound. They should just stop shipping laptops with built-in speakers; they’re all pretty bad, and pluggable/bluetooth speakers or headphones will almost always be a better experience.
My T450 had alright speakers, and my (work) MacBook Pro has good ones. The S76 speakers are bad even by the standard of laptop speakers.
It reminds me of https://litestream.io/
It does, but different! It’s specifically called out in the README.
Packaging some new Prometheus exporters for my home setup for nix and probably working through more of my anime and movie backlog.
Maybe deep clean the house as well.
which exporters are those? I currently only use the node-exporter for my servers.
A deluge torrent client exporter, modifying my network exporters, followed by seeing about getting a custom build of jellyfin going with a Prometheus endpoint enabled for stats on my media library.
I know enough of ddevault to understand why he went with IRC instead of Matrix. But I think it is the wrong choice. There’s a reason why sr.ht uses git instead of CVS (or RCS). Similarly IRC should be replaced with Matrix.
I’m not sure Matrix is obviously better than IRC, especially not on a protocol level; it might have more features than IRC right now, but I think half of the point of the project was to try and fix that disparity (?)
We already have lots of Matrix clients that work pretty well; why should people not be allowed to work on IRC clients, too, especially since we don’t have as much development going on there?
I find that a very weird statement. If you’re talking about an org of a certain size, you can say it makes sense they choose X over Y. But this is a very small org that provides a service to its paying customers, most probably because they use IRC themselves. Nothing should be replaced if people are happy to use it.
Matrix is still, AFAIK, Not Great™ to operate because the server guzzles resources and lacks a lot of moderation features (which can be hard to implement due to the DAG).
So on a resource front I think it definitely depends on the server implementation you use.
I’ve moved to using the conduit home server implementation which is using 500MB RES in some high volume channels.
I don’t have numbers on hand but dendrite and synapse both used gigs of RES mem iirc.
Now admittedly, compare that to an ircd, whose usage is no doubt much lower.
As far as mod features yeah matrix can use more things in the spec, which probably will be hard to implement.
FWIW I’m running Synapse with a ton of open channels across multiple homeservers and I’m not running into any resource issues on a VPS with 2 GB of RAM (previously 1, but it was running a bit tight) and some swap. I expect Dendrite, the new Go homeserver, to cut resource usage down significantly once it stabilizes.
Synapse has improved a lot.
I don’t think this is a relevant critique. I do, personally, think that Matrix is the better protocol and I use it myself, but sr.ht uses IRC themselves and is just offering a service to its paying customers to use IRC. If you find that valuable, you can pay for it or use it along with your existing sr.ht account, and if you don’t, you don’t need to. 🤷 . If they used XMPP instead, they could even offer similar XMPP services.
Do you mind clarifying what parallels you’re drawing between CVS vs Git and IRC vs Matrix?
CVS and IRC are hosted on a central server, Git and Matrix are distributed/federated.
That makes sense; I’m not sure that alone is an argument that Matrix is an unqualified better choice than IRC though. Matrix is a very heavy ecosystem that has relatively few implementations, actually setting up a homeserver is an arduous process, and the homeservers tend to be pretty resource-intensive, which presents scalability issues, especially for a more “independent” service which is not backed by a cloud monopolist with compute resources coming out the wazoo. IRC is not federated, but the relative ease with which IRC servers are spun up and their undemanding operational requirements make them far more effectively decentralised as an ecosystem than Matrix.
Matrix also seems not to fit a lot of the ‘ethos’ that sourcehut espouses: in-house developed software that’s for-purpose and aiming to be pragmatic in both terms of use and design, often using technologies and workflows that free software developers already regularly use. IRC fits into this category much more neatly than Matrix does. It also feels to me personally that the advantages of federation are not as pronounced in real-time (synchronous) chat as in source control.
what do you mean by this? IRC networks consist of many interconnected servers run by different people
IRC is a closed federation. Matrix/XMPP are open federations (that can be limited by allow/denylists or firewalls)
IRC servers can have allow/denylists and firewalls too. What makes it closed and the others open?
IRC servers must trust each other, so only trusted servers are allowed in the federation. Matrix, thanks to the state resolution protocol, can work without server administrators trusting each other. Much like email. Spam can still be a problem.
So Matrix and SMTP servers are vulnerable to spam attacks if they allow untrusted servers, and IRC servers are vulnerable to more attacks like kills from malicious operators. So yes, IRC requires a higher level of cooperation between servers, but it’s a matter of degree.
Federation between untrusted servers means spam cannot be solved on the server level. Matrix has some plans to solve spam with a reputation-based system.
I don’t know what you mean by “on the server level.”
IRC can be distributed through server-to-server connections, but IRC is not a federated protocol because these networks share a common view of users and channels for as long as they are connected, and there is no way to bridge communications with other networks at the level of the protocol itself.
Compare and contrast with XMPP and Matrix, where it’s perfectly possible to communicate with others on federated servers that have absolutely no relation to your homeserver, and there is a clear delineation of the ownership of identities and rooms.
I don’t see any substance to the idea that IRC servers can’t federate with eachother while XMPP/Matrix servers can. All federated protocols form networks which can become fragmented by mismatches between software and policies.
You also contrast “a common view of users and channels” with a “clear delineation of the ownership of identities and rooms.” This arises from a fundamental difference in what the protocols offer, namely that XMPP offers persistent identities while IRC does not, but that has no bearing on whether a protocol supports federation. For a non-federated contrast to IRC/XMPP/Matrix, see ICQ.
A more obvious choice in that case might be XMPP. Or even SMTP (see delta chat)
IRC is federated
perhaps you could lay out why ddevault would disagree, considering that he is unable to respond here (due to a series of incidents that lobste.rs has decided to keep secret)
https://drewdevault.com/2019/07/01/Absence-of-features-in-IRC.html
The site isn’t listening on ports 22 or 514. They expect us to set our .plan files over HTTPS?
Yes. Either via UI or curl.
I’m still waiting for somebody to write a little CLI that does all these tests and prints out a report. Imagine how many thousands of hours it would save if that tool came installed by default on all systems.
Bonus: when an error happens, it prints out the command to replicate that exact test.
Welp this should be fun to write.
yes please! I’m too lazy and sidetracked with other things at the moment.
I prefer to see this type of project, which builds upon what it considers the good parts of systemd, instead of the outright refusal and dismissal that I’ve mostly seen.
Same. Too often I see “critiques” of systemd that essentially boil down to personal antipathy against its creator.
I think it makes sense to take into account how a project is maintained. It’s not too dissimilar to how one might judge a company by the quality of their support department: will they really try to help you out if you have a problem, or will they just apathetically shrug it off and do nothing?
In the case of systemd, real problems have been caused by the way it’s maintained. It’s not very good, IMO. Of course, some people go (way) too far in this with an almost visceral hate, but you can say that about anything: there are always some nutjobs who go way too far.
Disclaimer: I have not paid close attention to how systemd has been run and what kind of communication has happened around it.
But based on observing software projects both open and closed, I’m willing to give the authors of any project (including systemd) the benefit of the doubt. It’s very probable that any offensive behaviour they might have is merely a reaction to suffering way too many hours of abuse from the users. Some people have an uncanny ability to crawl under the skin of other people just by writing things.
There’s absolutely a feedback loop going on which doesn’t serve anyone’s interests. I don’t know “who started it” – I don’t think it’s a very interesting question at this point – but that doesn’t really change the outcome at the end of the day, nor does it really explain things like the casual dismissal of reasonable bug reports after incompatible changes and the like.
I think that statements like “casual dismissal” and “reasonable bug reports” require some kind of example.
tbf, Lennart Poettering, the person people are talking about here is a very controversial personality. He can come across as an absolutely terrible know-it-all. I don’t know if he is like this in private, but I have seen him hijacking a conference talk by someone else. He was in the audience and basically got himself a mic and challenged anything that was said. The person giving the talk did not back down, but it was really quite something to see. This was either at Fosdem or at a CCC event, I can’t remember. I think it was the latter. It was really intense and over the top to see. There are many articles and controversies around him, so I think it is fair that people take that into account, when they look at systemd.
People are also salty because he basically broke their sound on linux so many years ago, when he made pulseaudio. ;-) Yes, that guy.
Personally I think systemd is fine, what I don’t like about it is the eternal growth of it. I use unit files all the time, but I really don’t need a new dhcp client or ntp client or resolv.conf handler or whatever else they came up with.
In my experience, most people who hate systemd also lionize and excuse “difficult” personalities like RMS, Linus pre-intervention, and Theo de Raadt.
I think it’s fine to call out abrasive personalities. I also appreciate consistency in criticism.
Why?
At least because it’s statistically improbable that there are no good ideas in systemd.
Seems illogical to say projects that use parts of systemd are categorically better than those that don’t, considering that there are plenty of bad ideas in systemd, and they wouldn’t be there unless some people thought they were good.
Where did I say that though?
Obviously any project that builds on a part of systemd will consider that part to be good. So I read this as a categorical preference for projects that use parts of systemd.
There have been other attempts at this: uselessd (which is now abandoned) and s6 (which still seems to be maintained).
I believe s6 is more styled after daemontools rather than systemd. I never looked at it too deeply, but that’s the impression I have from a quick overview, and also what the homepage says: “s6 is a process supervision suite, like its ancestor daemontools and its close cousin runit.”
A number of key concepts are shared, but it’s not like systemd invented those.
It’s a fork, not a rewrite. It’s not trivial to port systemd to a new os, and rewriting all in a new language would be a significant effort on top. For a one person project, let’s not expect miracles :-)
I realize I’m probably replying to bait but you do realize the BSDs are written almost entirely in C and will be for the foreseeable future, right
That sounds like part of the problem, not part of the requirement to me.
Considering *BSD’s religious repudiation of free software like GCC, wouldn’t polishing Rust (with its LLVM backend) be more “strategically” fitting than trying to support C with all of the GCCisms more popular codebases come with?
Theo answered this a few years back.
https://marc.info/?l=openbsd-misc&m=151233345723889&w=2
Fortunately someone did write it - not just ls, but most of the coreutils: https://github.com/uutils/coreutils
It was created in 2013, which makes the 2017 post look not very well informed. He is right about the compilation time, though.
(from the mail written by Theo in 2017)
Ripgrep was already around when this was written, and judging by the git commits, it took a few months to write to a working state. Perfected over several years, though. @burntsushi is that at all accurate?
It depends on the BSD; FreeBSD might be willing to do so because they’re willing to deploy new things, but OpenBSD is very conservative and TdR seems skeptical of languages that focus on memory safety. NetBSD might be willing, but it needs portability to architectures they kinda-support like VAX first.
(My stance on OpenBSD’s “security focus” is they’re focused on plugging holes in the boat, not structural improvements like formal methods or memory safe languages.)
Maybe; but actual viable replacements to C for stuff like kernel work is quite a new thing. It certainly wasn’t around when OpenBSD got started, or for most of its history. And whether we like it or not: C will be with us for a long time even if everyone would agree that we need to rewrite it all in $safer_language.
Some stuff in OpenBSD is written in Perl by the way, like the package manager.
Free software predates GNU, they don’t get to have a monopoly on it. I’m perfectly happy with GCC for now, especially since it includes numerous backends that LLVM doesn’t. Clang is fine, though. Getting moved beyond being a Tier 3 Rust platform if you’re not Linux seems to have become even harder recently.
Presumably, as Rust matures and its standard library grows, it becomes a bigger push to meet the quality bar.
OpenBSD supports many hardware platforms that Rust does not. C is the correct choice, given their goals.
If the RESF spent as much time submitting patches to all the software they so graciously could fix as they do making snarky comments, we’d all be living in their memory safe utopia by now.
What are these patches supposed to do?
If there is C/C++ code, there is little more to be done than to decommission it. Sending patches would be the opposite of that.
Quite a lot can be done, actually. Safety-critical C code can be formally verified for correctness and memory safety using Frama-C and ACSL, for example.
And UBSan is one flag away from providing runtime checks for a big surface of undefined behaviours. Compilers already warn about UB that can be reasoned about at compile time, and modern C has syntax to help them in a few corner cases.
But in the end this is very much about how a problem is decomposed into the provided elements of the machine abstraction that you chose to use. Being simple and easier to reason about is a very big plus that Rust folks simply disregard because they are all way smarter than the lot of us and are very proud of the big mess that Rust has become with all of their FOMO features (half-baked closures, unworkable recursive data types, memory layouts that all boil down to a big heap pool getting trashed at runtime, etc…).
OpenBSD and BSDs in general are easy to understand and read, and these kinds of systemd alternative implementations are very welcome.
Now let’s all take a moment to guess one language that does not have a single alternative compiler or implementation…
To be fair there is at least one alternative compiler for Rust technically, mrustc.
Sure, but maintaining Rust on half a dozen platforms that the core Rust team doesn’t care about is an enormous task requiring deep compiler knowledge. It’s not just “a patch”, because Rust is rapidly evolving and your patch will bitrot. It’s a huge open ended commitment.
I wouldn’t expect the OpenBSD team to have the resources or expertise to take on this huge task, and criticizing them for not doing this (using C instead) is off base. If you want systemd rewritten in Rust, (which may entail porting and maintaining Rust on all the platforms that systemd and Linux support) then by all means go ahead and keep us informed of your progress. Criticizing other people for not doing it seems off.
In the absence of porting Rust to all the platforms C supports, C (and to a large extent C++) continue to be the best systems languages if you want to support all of the platforms.
Dead to you. Not dead to its users or the industrial machines still using it.
I mean, speaking as someone with deep familiarity with weird platforms, these ISAs are either used for existing applications that don’t care about Rust (why would your VMS BASIC application need Rust?), kept going by hobbyists out of inertia (i.e. compare ia64 to m68k hobbyists; the former are basically dead, yet the latter has enough bodies for LLVM and Rust on a commercially/industrially dead platform), or maintained by a company that hasn’t gotten around to throwing money at the problem yet (i.e. AIX).
Considering the wide enough adoption, especially for people still using these non x86 platforms, I think it’s clear they have built credible operating systems.
Come on, man, I get it. I know Rust, I use it, and I like it. I want it to succeed so we can leave C and especially C++ behind us. But you can’t just go around and tell people what platforms to support in their project. That’s just not nice. That’s why people get mad at the RESF. That’s why they get hostile.
You can’t just show up at someone’s door and say hey, you know that thing you’ve been working on for the last two years or so – yeah, the one I’ve been working on in my spare time, late at night, after the wife’s gone to sleep – yeah, man, sucks to be you, we don’t support it so it’s dead, why don’t you do something more productive with your time instead? It’s just not nice.
Do you see how it’s not nice? How would you feel if, whenever someone posted something on a Rust mailing list, someone who has never used Rust, doesn’t know Rust, and doesn’t even care about it, for that matter, would chime in to say hah, you know what you call a language that makes you jump through hoops to get a doubly-linked list? Useless.
Nevermind that it makes it hard for those of us who like the language but aren’t die-hard fans in the Silicon Valley bubble to push for its adoption to our customers or inside our organisations. It’s just nasty. Please, if you won’t stop it, will you at least try to find a more elegant way to go about it?
It gives all of us a bad name and honestly, it’s ridiculous. If making everything more secure is so important, and Rust is such a sure way to go about it, that it’s worth being nasty to people, why not just wait for the inevitable demise of insecure software? Surely after enough CVEs people will just get bit one too many times and switch from OpenBSD to Redox OS or whatever on their own. So why not contribute to Redox OS instead of telling people how to go about writing OpenBSD?
What has changed in the FOSS community to create such ignorance?
There are more platforms besides amd64 and arm64 and they all have their value and people still use them in production. And some of them have no Rust backend for whatever reason but that doesn’t mean they’re dead!
On one hand, yes, I agree; I’m also tired of unsafety. On the other hand, Rust isn’t memory-safe, just less unsafe than C, so you’re selling snake oil. (Or, rather, you’re asking people to kill snakes and press their own homemade snake oil because you’re confident that it’s better than bear oil. Yeah, bear oil is bad, but so is snake oil.)
Arduino is not a dead platform. RISC-V is not a dead platform.
Right now, Rust is immature, unstable and has shitty platform support. Rust is so immature and unstable that it doesn’t have a formal definition, and there is only one viable implementation, whose constantly changing code defines the language semantics. This makes it a bad choice for people who want a mature language that provides deep guarantees that their code will still compile and run in 10 or 20 years. In the long term, Rust will likely become mature enough to challenge C in these respects, but not now.
As it stands, Rust guarantees that your code will still compile and run in 10 or 20 years just as much as C does, thanks to editions and semver. In the rare cases it doesn’t, it’s because your code depended on some bad semantic bug, the same way C code will stop running properly because $compiler started to exploit some UB in your code to optimize it (or some new warning makes your -Werror build fail).

A language is only as good as its community, and look at where you left Rust standing…
Way to go, troll. If it makes you feel better, you are not alone; there are a ton of others like you in Rust who are just not up for the debate and don’t provide any useful knowledge or expertise about what is being argued.
I know it’s the chicken-and-egg dilemma we’re so familiar with in the FOSS world, but software that runs today is often a safer and more reliable choice than software that might be written tomorrow. If my goal were to write an init system that provides systemd compatibility today, why would I spend time writing patches that have absolutely nothing to do with that problem?
And I mean a lot of time, we’re talking about sending substantial patches to the compiler and runtime of a language that rivals C++ in complexity, even if it’s nowhere near the same level of brain damage. It takes weeks just to get your bearings on that source tree, and that’s assuming you already know Rust very well.
There are problems in this world that people can, and want to solve today, not when Rust gets the things they need, whatever those things are.
Okay, first of all, we all know adding new platform support to any language, but especially an unstable one, is not a one-time investment. That’s evidenced by the fact that there are three tiers of platform support (with sub-tiers!) in Rust. Tier 1 ppc64 support is not going to happen because the people who write an init system send an occasional compiler patch.
Second, some people just want to write an init system. Not a compiler and not a language runtime. Just an init system. That’s their project. And the “it’s open source” argument applies both ways. Want to see Tier 1 ppc64 support in Rust? Great, send patches. Want to see an init system written in Rust? Great, well, send patches.
It’s not bait, it’s just an uninformed commenter that likes to repeat orange site narratives. Obviously, if we are still using C in 2021, it’s because there are no performant and portable alternatives. But in their mind, everybody uses an x64 CPU and has 16 gigs of RAM to run all their electron apps.
Give me a (non obscure) language that fits the description, then.
For the sake of discussion, I would say there is at least one language that fits your requirement: Ada.
But beyond Ada, I would like to see more effort in the Frama-C direction, or the (seemingly defunct) Brainy Code Scanner effort.
Sincerely, I think it is way more realistic to keep developing in C and using tools such as mentioned above (or even Infer), than rewriting it in a new language, be it Ada, ATS, F*, Rust, or whatever the new hype is.