Since this loads all of its code live from an HTTPS website through the browser, its security boils down to the security of TLS and the WebPKI. On top of that, the user is also trusting the website operator, the openpgpjs codebase, and the proof verification implementation.
Instead of verifying a file through this, one could download it from an HTTPS URL with equivalent or better security. For encryption, the same holds for an HTTPS form.
Maybe I misinterpret the site author’s intent, but to me this looks like just a convenience profile page plus some utilities. The same argument holds for Keybase, which also has verify and encrypt pages. So while the “browser crypto is broken by design” point is valid, it’s actually nothing new. I do admit that the author could clarify this better on the page.
Or are you pointing out that there is no standalone app like Keybase has for the non-browser usage?
Very interesting design decision. I’ve seen it previously implemented in JavaScript through generators, and I wonder why other languages (e.g. Rust) didn’t take this route.
I agree, and I have pretty strong feelings about this.
I mostly use three instances; two of the three Silence mastodon.social (i.e. you can’t see content from that instance unless you follow a user from there). One of those does so because I admin the instance. On the instance that doesn’t Silence m.s, I silence it myself at the user level. I also refuse to interact with or accept followers from mastodon.social users.
I’ve never had any problems with this setup, and I encourage everyone to do so.
However I fully recognize that this is easier said than done. Many folks probably have friends on mastodon.social which makes outright not interacting with them a non-starter.
I do think, though, that m.s is too big, and I agree with the honestly kinda harsh take that a lot of folks on m.s are probably just using it as a way to say “hey, see, i’m on mastodon, i did the minimum amount of effort to appear cool” before instantly plugging a Twitter crossposter into their account.
Sorry to make this post even longer, but there’s another theme I’m noticing in this thread that really irritates me about some people’s reaction to fedi.
You’re not joining just a server when you choose a Fediverse instance. You’re joining a community. That’s the entire point. It’s a group of communities. Stop thinking of them as just servers that happen to host you.
They’re communities with their own themes, goals, policies and expectations.
You can’t go into it thinking you can just jump in and get fed a bunch of people to follow or hashtags to look at, like Twitter provides. You, gasp, have to put in some effort in order to get benefits back from these communities.
And Fedi is better for it.
Time to purge the brain poison that Twitter and Facebook and the rest have left behind, which makes people think social interaction is now this trivial thing we don’t need to put any effort into anymore.
Put effort in, and you’ll get the massive benefits out.
You’re not joining just a server when you choose a Fediverse instance. You’re joining a community. That’s the entire point. It’s a group of communities. Stop thinking of them as just servers that happen to host you.
Have you considered that not everyone shares your opinion on the Fediverse and that it’s actually OK? Some people don’t consider themselves part of any small community and just want to interact with others on any random topics that they’re interested in.
Crossposters are annoying but I don’t see anything wrong with normal people setting up accounts on generic servers.
This isn’t just my opinion, this is how fedi is supposed to work. One of the main reasons A LOT of people use it is because they’re sick of being siloed into one massive social network site and miss a sense of community.
if you don’t want that, then fedi isn’t for you. plain and simple.
you can’t ignore the social aspects of a social network by saying “I don’t think that’s how it should work” and then being surprised when nobody else wants things to behave that way.
This isn’t just my opinion, this is how fedi is supposed to work.
According to who? Could you point me to the part of the ActivityPub spec that clarifies that?
One of the main reasons A LOT of people use it is because they’re sick of being siloed into one massive social network site and miss a sense of community.
A LOT of people use Facebook too; should the internet be just for browsing Facebook? Nope. You can use Fedi your way, but don’t reject people who don’t want to close themselves into small communities just because some people think “this is how fedi is supposed to work”.
if you don’t want that, then fedi isn’t for you. plain and simple.
Why? If I don’t join a furries instance or the BSD network, “fedi isn’t for me”? Can’t I just use it to communicate with other people regardless of which instance they’re on? For me this seems to be in the spirit of the Fediverse: federation.
you can’t ignore the social aspects of a social network by saying “I don’t think that’s how it should work” and then being surprised when nobody else wants things to behave that way.
I’m not surprised; I’m actually surprised that people want to join one specific instance, as if using BSD or drawing were the one characteristic that defines them. What if they migrate to Plan9? Should they move their account? It seems I’m not alone with this issue: https://lobste.rs/s/d4t4ex/centralisation_mastodon#c_wjdcsr
For the record I don’t have problems with communication on the Fediverse, thanks for your concerns.
According to who? Could you point me to the part of the ActivityPub spec that clarifies that?
Stop trying to apply purely technical solutions to social problems. It doesn’t work.
You didn’t answer my question, and I didn’t apply any technical solution to social problems, so I’m not sure why you bring this up. Let me repeat it: who said that this is “how fedi is supposed to work”?
One of those is because I admin the instance. On the instance that doesn’t Silence m.s, I silence it myself on a user-level. I also refuse to interact with or accept followers from mastodon.social users.
You are blocking people only because they are using the largest instance?
It’s already annoying that you have to pick a server at all, but even worse are the seemingly random (to the user) silencing policies. I am on mastodon.social (but rarely use it), because I had no idea what other server to pick. There were servers with a clear description that did not align with my interests, but then there were many more where I wouldn’t have the slightest clue why I would pick one over another. It’s a very weird thing to ask of someone who is completely new to a social network and doesn’t have a clue what the implications of picking an instance are. So I picked the largest, because it probably has the smallest chance of going under.
At any rate, one day I decided to follow Drew DeVault because I like sr.ht and sort of keep an eye on Sway. Turns out I couldn’t, because mastodon.social decided to block/silence/whatever him without a clear explanation. This definitely soured my opinion of Mastodon. So, we have federation, but actually you cannot federate with a lot of folks because of $REASONS most people do not want to care about.
I fear that as Mastodon and the larger fediverse grows, these issues will only get larger, because most people do not want or cannot host their own instances and will be at the mercy of instance administrators who will block/silence any instance that they have some grudge with, are too centralized, or that they don’t politically agree with.
Edit: the parent post has been extended substantially, so this comment may be outdated.
You are blocking people only because they are using the largest instance?
This kind of behavior is exactly why I recommend that people just use mastodon.social and be done with it (now mastodon.online). Each random instance has its own admin quirks that would put off normal people in the long run.
So it sounds like you’re just looking for a Twitter clone. Sounds like fedi isn’t right for you, then, huh?
Why? I’ve been happily using Mastodon for months, with a lot of nice interactions. Should I stop because it doesn’t match your description of how it should look?
I think you should stop prescribing how a social network should work on a technical level, or how folks should moderate their own servers, in order to fit your vision of it perfectly.
It’s very funny, because I thought it was exactly you who was prescribing how I should use it (“So it sounds like you’re just looking for a Twitter clone. Sounds like fedi isn’t right for you, then, huh?”). I never asked for any changes, so why did you bring up “how folks should moderate their own servers, in order to fit your vision of it perfectly”? I also did not present any “vision”; on the contrary, it seems that it’s you who is presenting one.
No, I’m describing how communities on Mastodon behave.
You’re the one who is so caught up in the idea that admins making decisions is somehow the worst thing ever, even though it’s something that has been a part of every single website and online community since the beginning of time.
I’m done.
You’re the one who is so caught up in the idea that admins making decisions is somehow the worst thing ever, even though it’s something that has been a part of every single website and online community since the beginning of time.
It would be good if you quoted what I actually said instead of assuming I said something that I didn’t. In my experience smaller instances carry much more risk than bigger ones, which can actually handle misbehavior better. (I assume you’re talking about the link I pasted: https://mastodon.social/@Gargron/100639540096793532 )
I’m done.
Have a nice day! 👋
You are blocking people only because they are using the largest instance?
I’m not blocking them. They can still view my instance’s content (and my instance is a small community, and it’s meant to be that way). Their content just won’t show up on my instance, which keeps things quiet and peaceful when looking at the Federated Timeline (a timeline that shows content from all the instances that your instance “knows about”).
I decided to follow Drew DeVault because I like sr.ht and sort of keep an eye on Sway. Turns out I couldn’t, because mastodon.social decided to block/silence/whatever him without a clear explanation
Do you feel this way about folks who are suspended from Twitter for violating their policies? I don’t know the exact situation, but I know of Drew well enough to not be at all surprised that he violated a policy on mastodon.social and therefore was suspended from the instance entirely. That’s how it works. Violate an instance’s policies, get suspended. If you care that much, find an instance that is friendlier to the folks who you want to interact with and join it. Nobody’s stopping you, and nobody’s stopping other instances from deciding they don’t want to have anything to do with that sort of thing either.
This is how literally every other community for the history of the internet has worked. You cannot pretend it is something new with fedi.
If a community does not want you, they’ll get rid of you. That’s how things work. Forums, Twitter, reddit subreddits, etc. etc. etc. etc.
This definitely soured my opinion of Mastodon. So, we have federation, but actually you cannot federate with a lot of folks because of $REASONS most people do not want to care about
No, this is on you for having chosen an instance that’s heavily moderated. Go pick another one that’s more lenient.
No, this is on you for having chosen an instance that’s heavily moderated.
How could I know?
Go pick another one that’s more lenient.
How do I know? The stated rules are often so general that they may or may not imply heavy moderation. Can I already move my account without losing any followers or toots?
Can I already move my account without losing any followers or toots?
Not toots, but you won’t lose followers. Migrating an account to a new instance is easy.
You will indeed lose followers unless all servers that your followers are on support the Move protocol, which is not guaranteed.
How does it behave if someone on instance A follows you while you’re on instance B, then you move your account to instance C, and instance A blocks / doesn’t federate with instance C? I assume you’ll lose that follower?
Yep, this is everything I’ve wanted to say and more. People assume Mastodon is a drop-in Twitter alternative. No! It’s more than that. It’s more personal, engagements are more real.
I too block mastodon.social on my instance (I run it). I even wrote a rant about people joining it—you’re right, it’s just a way to look cool on Twitter.
Choosing an instance has kind of stymied me from getting into Mastodon. I totally get that it’s decentralized. So is email. But I choose an email provider for reasons like reliability, price and features … not because I’m a gamer or Swedish or into BSD or furry fanfic. I also have an email address whose domain name I own, so my identity isn’t handcuffed to my choice of provider.
So being asked by the Mastodon sign-up process to choose an identity based on a (small) choice of interests or subculture identifications is weird and difficult and off-putting.
I also get that Mastodon instances are kind of like small communities, and there’s value in that. But making this such a big part of onboarding negates network effects, making Mastodon less attractive (to most) compared to centralized communities where you just sign up and don’t have to first figure out which 50 users you’d most like to talk with.
So is email. But I choose an email provider for reasons like reliability, price and features … not because I’m a gamer or Swedish or into BSD or furry fanfic.
Choosing a fediverse instance is not like choosing an email provider; it’s more like choosing a neighbourhood to live in. It won’t prevent you from going other places, but the people around you will be those with whom you interact the most, generally. Of course, something like mastodon.social is like any big city, a sea of pretty much anonymous users.
But I choose an email provider for reasons like reliability, price and features … not because I’m a gamer or Swedish or into BSD or furry fanfic.
I can totally relate to this, and I suggest people either host their own (if they have a domain name) or select a big instance, so that they don’t have issues like this one: https://mastodon.social/@Gargron/100639540096793532
Mastodon is an absolute pig to self-host assuming you care about backups & reliability. I’d like to pay someone to take care of that, but that’s uneconomical for a single user because it can’t share resources across domain names.
Agreed. It seems we’re in the infancy of the Fediverse, as all available solutions have issues here or there. I actually wrote my own ActivityPub client/server that’s minimal (almost no server code). For the record, there are hosted Mastodon instances: https://masto.host/
masto.host starts at 7 euro a month. That is not economical. It’s what providers need to charge because the mastodon code A) doesn’t implement host-sharing, and B) is written in rails (to be clear, I love rails, but it’s not cheap to run).
I’m running honk, which works sort-of-alright.
Multiple string types and relevant traits beget multiple conversion functions:
I did miss Cows in the string types comparison too, and although I agree with the sibling comment that Rust just shows you footguns that are really there, I agree that this can be a little confusing. String types, and references in general, seriously impact how one designs an API, so this is not a trivial matter.
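To make the Cow point concrete, here’s a minimal sketch (the escape_spaces helper is made up) of the usual conversions and of where Cow lets an API skip allocating:

```rust
use std::borrow::Cow;

// Returns the input untouched when possible and allocates a new
// String only when a change is actually needed.
fn escape_spaces(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_"))
    } else {
        Cow::Borrowed(input)
    }
}

fn main() {
    let owned: String = "hello".to_string(); // &str -> String: allocates
    let borrowed: &str = &owned;             // String -> &str: free, via deref
    println!("{}", borrowed);
    println!("{}", escape_spaces("no_spaces_here")); // borrows, no allocation
    println!("{}", escape_spaces("has spaces"));     // allocates an owned copy
}
```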
Sometimes rustc needs me to add where Self: Sized to my static (…)
I believe it has something to do with object safety. This issue actually surfaces one thing that people keep talking about: no written standard for Rust exists.
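For anyone hitting the same diagnostic, here’s a minimal sketch (the Greeter trait is made up) of the usual cause: a method that takes or returns Self by value must opt out of the vtable with where Self: Sized, otherwise the trait can’t be used as a trait object:

```rust
trait Greeter {
    fn greet(&self) -> String;

    // Takes and returns Self by value, which a `dyn Greeter` can't do;
    // the bound excludes it from the vtable and keeps the trait object safe.
    fn renamed(self, name: &str) -> Self
    where
        Self: Sized;
}

struct Bot(String);

impl Greeter for Bot {
    fn greet(&self) -> String {
        format!("hello from {}", self.0)
    }
    fn renamed(self, name: &str) -> Self {
        Bot(name.to_string())
    }
}

fn main() {
    // Compiles only because `renamed` carries the `Self: Sized` bound.
    let boxed: Box<dyn Greeter> = Box::new(Bot("alpha".to_string()));
    println!("{}", boxed.greet());
    println!("{}", Bot("beta".to_string()).renamed("gamma").greet());
}
```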
I actually enjoy reading books about languages, but the existing Rust books just scratch the surface of Rust as a language. I think I’ve read all the Rust books available on the internet, but only one got deep enough to cover the differences between Fn, FnOnce and FnMut, and more specialized std APIs like PhantomData.
I actually enjoy reading books about languages, but the existing Rust books just scratch the surface of Rust as a language. I think I’ve read all the Rust books available on the internet, but only one got deep enough to cover the differences between Fn, FnOnce and FnMut, and more specialized std APIs like PhantomData.
I really liked Programming Rust by Jim Blandy and Jason Orendorff. It also covers Fn* and PhantomData.
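For anyone following along, my rough mental model of the distinction the book explains (a minimal sketch):

```rust
fn call_fn(f: impl Fn()) { f(); f(); }            // shared borrow: callable many times
fn call_fn_mut(mut f: impl FnMut()) { f(); f(); } // mutable borrow: may mutate state
fn call_fn_once(f: impl FnOnce()) { f(); }        // takes ownership: a single call

fn main() {
    let greeting = String::from("hi");
    call_fn(|| println!("{}", greeting)); // captures greeting by shared reference

    let mut count = 0;
    call_fn_mut(|| count += 1); // captures count by mutable reference
    println!("count = {}", count);

    let owned = String::from("moved");
    call_fn_once(move || drop(owned)); // consumes owned, so FnOnce only
}
```

Every Fn is also FnMut, and every FnMut is also FnOnce, which is why call_fn_once would accept any of the three closures above.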
Yes, that’s the book I’ve been referring to :) And I liked it very much too. I’m glad that Jim is writing the 2nd edition now; I’ll blindly buy it and read it. Even though I’ve outgrown the initial pains, the writing style is very nice, and the last chapter about integration with libgit2 was really impressive.
I had high hopes for The Rust Programming Language book, thinking it would be more like a spec covering everything, but unfortunately some advanced concepts are only briefly mentioned.
That’s how I felt with the book too; I ended up having to ask my friends a bunch of questions about concepts that aren’t in the book… I should give Programming Rust a try.
Depending on your journey you may already know a lot of what’s in Programming Rust, but thinking from a longer perspective, it’s the one book I’d recommend for learning Rust. (Rust in Action has nice fragments, especially building a kernel, but it’s still not finished and not exhaustive enough.)
Maybe that’s just me, but it seems most Rust book authors assume that because Rust is complex, people want a “for dummies” book, so they spend a lot of time on the borrow checker and syntax trivia instead of giving a proper in-depth treatment. I’d kill for a proper Advanced Rust book instead of collecting random knowledge from across the internet.
And I liked it very much too. I’m glad that Jim is writing the 2nd edition now; I’ll blindly buy it and read it.
Same here! I was really happy to see that they are keeping the book in sync with recent Rust changes.
I stopped working on the JS-based git engine I was developing when GitHub bought npm. Seemed like it was time to return after two years of hosting my code on Gitlab, Gitea, and Rhodecode.
However, I like how Sorcia is looking thus far.
JS-based git engine? I’d be very interested in that. Actually, I did a small repo viewer in JS back then (it partially cloned the repo in the browser). Do you have your work pushed somewhere? (A quick glance at your GitHub did not reveal anything, but I probably missed it :) )
Nope, I don’t have it publicly available. It was still in the discovery phase but it was very clean visually. I may embark upon it in the future.
Have you considered comparing your setup to something like sbupdate? sbupdate signs the kernel EFI image directly, and one can boot that file without intermediaries (i.e. GRUB or systemd-boot).
sbupdate is only a wrapper around sbsign and EFISTUB generation for Arch, so it’s fairly limited and not really flexible. My intention with sbctl is to provide a complete experience from key generation to enrollment.
Okay, thanks for your comment! So to check if I got you right: sbctl should also cover what sbupdate does (like signing the EFI Linux kernel via a hook on update)? Because I’d happily migrate to sbctl (especially if it becomes available in the official Arch repos).
It should, yes.
Because I’d happily migrate to sbctl (especially if it becomes available in the official Arch repos).
Maybe in the future when I have done a proper version 1 release and gotten some feedback on the tool.
Great! I’ll bookmark it and test it when time allows. If something is broken I’ll ping you via e-mail :)
If something is broken, why not file a bug in the public github thing for it? Chances are it might be broken for someone else too.
Looks interesting. Is there a similar app that can check passwords stored in pass?
gopass has an audit feature. It flags entries with messages like:

- Detected a shared secret for:
- Password is empty or all whitespace:
- Password is mangled, but too common / from a dictionary:
- Password is too short:
- Password is too systematic:
Not sure, as I’ve never used Pass. However, if it has an export feature, you could do that and pivot the data in something like Excel.
Also interesting: JEP 369: Migrate to GitHub. Nice that one of the goals is “ensure that OpenJDK Community can always move to a different source-code hosting provider”.
Oh, that’s even more interesting. Too bad the heavy use of the GitHub API can quickly result in vendor lock-in, where you cannot migrate easily. I don’t have any complaints about the API itself, but it’s sad that even though we have a distributed version control system, it cannot be used without a collection of proprietary services.
Lock-in is true no matter what though. The moment you go farther than plain git hosting, you will take on dependencies which will cause you trouble.
Even with a self-hosted solution you will eventually get into trouble keeping up to date as OSes go out of support and/or security patches dry up.
As long as you have a valid migration plan up your sleeve, github is as good or bad as any other third-party solution and arguably so much more convenient and time-saving compared to a first-party solution that the trade-offs are still worth it.
You’d rather have a PR review system that works well, is available now, and is known to many users than spend multiple months of development to end up with an inferior solution that lacks user engagement and binds resources not available elsewhere - a solution which you might have to throw away anyway a few years down the road because the platform you chose went out of support or doesn’t run on supported OSes.
Yes. You don’t have control over GitHub’s roadmap. But compare the self-hosted landscape from 2007, when GitHub launched, to what is best practice and available now, 13 years later. And then compare the effort you would have gone through to keep up with those dependencies to what it would have taken to keep up with GitHub.
GitHub is the more stable platform. And they have a lot of incentive to keep it that way.
Yes, I would love it if free and open platforms could be as feature-ful, accepted by users, and easy to maintain, but they aren’t, and thus, at least for now, the positives for the project (and thus for its users and developers) do outweigh the drawbacks. And with every passing year, the trust put in GitHub by its users only increases.
@ianloic I chuckled loudly when I read this. 😂
This is missing one important piece that Keybase got right: one key per device. Nowadays people have multiple devices, and it’s natural that they want to sign or encrypt using any one of them. If the system only allows one key per identity, then users will have to transfer keys, and transferring keys is what puts them in danger. The less a key moves, the safer it is. This design just works against that.
Agreed. I guess you can verify new devices by signing their keys with a previously verified key pair. This provides a chain of trust back to the first desktop app, for example.
(Edit: typo 🐾)
I half wonder if keys.pub (started by an ex-(?) Keybase employee) is an attempt at “forking” Keybase in light of the Zoom acquisition. This came out very recently, with the Zoom acquisition late last week. Timing-wise it’d make sense, and feature-wise – well, yeah, one key per device is great. I guess, even if this conspiracy theory were true, I don’t see it on the “what’s next?” list.
Interesting, because I had the exact same thought: https://github.com/gabriel lists themselves as “formerly keybase”, and the project looks identical to Keybase in technical aspects. If I had to guess, he was the technical mind behind Keybase, got annoyed by the acquisition news, and rewrote the backend from scratch. The first commit is dated 5.12.2019.
The first commit is dated 5.12.2019.
That seems to invalidate the idea, really. The acquisition just happened, and it would not have been a thing that spanned a year. Though, I guess the “shopping for an acquisition” could have happened back then, and he, as a “Founding Engineer”, would (probably) have been informed of such a major strategic change.
The other possibility is that this was started as a “test bed repo” – we test out ideas here before committing to them in the product, but it’s close enough to the product that we know whether or not it’ll work…
Either way, the server still isn’t open source as far as I can tell. Would love to see a… as much as I hate to say it, decentralized blockchain version of this all. sigh
Why would it be rewritten? The idea I am after is decentralization. Keybase and Keys.pub (as far as I can tell), have a central server that is closed off, which is a problem as they are now owned by Zoom, who people don’t trust. Keybase uses a Merkle Tree to build a “chain” anyway. Why not distribute it, and provide more “zero trust” mechanisms to avoid this problem? I guess there’s the problem of trusting the validations, but that happens client side in keybase, so.. maybe it’s solved? Disclaimer: I am not a cryptographer, and have not worked through this idea.
As I understand it, an actual Merkle-tree cryptocurrency is at the whim of the majority, and if any one interested party could spin up enough devices to become the majority, then they may write the future as they see fit.
Oh yes! Hmm. I see how this could be a problem, BUT, keybase already has a design which requires a signature from a key you own to add a new key, for instance, and (probably?) also to revoke an existing key. So, I don’t think there is a consensus problem? But maybe a malicious actor could refuse to propagate updates that don’t match their special sig verification, making it possible for user revoked keys to still be seen on some replicas? But even then, the user will not encrypt/sign messages with a revoked key, which makes me think that this isn’t a real risk? Am I missing something?
This is an excellent concept and an aspect I didn’t catch originally. Key management is actually a fascinating topic I should read up on a bit more. Thanks for pointing it out.
On the orange site someone posted a link to the sample page https://metacode.biz/openpgp/key#0x6A957C9A9A9429F7 as well as the source: https://github.com/wiktor-k/openpgp-proofs#openpgp-proofs
Edit: reading through the docs, it seems that to add Lobste.rs as an identity provider, the user JSON link should also return the Access-Control-Allow-Origin: * HTTP header, so that the JSON is readable from the web UI. Would that be something the lobste.rs devs are willing to add?
I did try this anonymous room but it fails on Firefox on Linux. Example: https://anonymous.cheogram.com/prosody@conference.prosody.im?nick=erwanguorg
The anonymous stuff only works for chatrooms hosted on our server, such as: https://anonymous.cheogram.com/discuss@conference.soprani.ca
If we allowed anonymous users out into the federation, we could become quite the source for SPAM unless additional measures were taken.
Thanks for the info! Maybe it would be better to spell this out explicitly, either on the welcome screen or when the connection fails? Because from the initial impression it looks quite broken.
I don’t really like it but I mainly use Arch. It’s a bit clunky and I have to configure way more stuff than I’d want to these days, but:
This is all for personal use – I don’t exercise any pickiness for work use. I use whatever distro is needed, for whatever purpose. Most of my customers are on Ubuntu so I have an Ubuntu laptop on the shelf next to my desk. It gets zero use on weekends. It works pretty well, mostly because I rarely update it. I… every time I touch an Ubuntu machine it breaks. I don’t know why. By now I’m convinced it’s not Ubuntu’s fault, I’m sure I’m not holding it right or whatever.
What I’d love in a distro and would definitely make me switch:
Arch covers 4 out of these 5 so… yeah.
Every 8-10 months or so I try various other distros that are supposed to be cool, but there’s enough broken stuff that I end up going back to my clunky Arch install. Lots of systems try to be really smart, and it’s hard to do smart things with a system that’s developed by a hundred different teams in a hundred different places. Ubuntu, Fedora and even Debian fall flat on their faces way more often than I have patience for anymore. The only one that I really liked was Void Linux. I’ve got my eyes on that (and Slackware 15, of course :-) ).
I have another machine that I use for non-work stuff pretty often, a laptop running OpenBSD. I love it but there’s lots of Linux-only tech that I occasionally dabble in on my non-work machine, so I can’t really switch full-time, I guess (but I am keeping an eye on vmm. The moment I can compile a kernel in less than forever, and maybe run Wayland and Wine, I’m so out).
My Linux history was roughly as follows:
I also liked FreeBSD a lot back in the day, and ran it at home for a while, too, but I really can’t remember when anymore…
I don’t use Arch, but I do use their documentation and wiki, which are often among the best sources of information on the oddities of whatever piece of software I happen to be dealing with.
Excellent comment and argumentation. I’d disagree on systemd, though. Having migrated from Gnome to Sway on Arch, systemd user services are a real boon to me. I use them to run several services when Sway starts: sway starts sway-session.target, which pulls in all the user dependencies: mako (notifications), redshift, udiskie (disk mounting). I also have a restic backup as a template service (and timer!), so that backups go to S3 and Backblaze on schedule. I’ve got calendar sync there (vdirsyncer), mpd, and some cleanup tasks. Everything is clean and readable, dependencies work, logs are there.
Is it possible to do it all without systemd? Yeah. But systemd hits the spot for me in terms of readability and flexibility.
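Not my actual files, but a minimal sketch of how one of those user units can hang off the session target (names and paths are illustrative):

```ini
# ~/.config/systemd/user/mako.service
[Unit]
Description=Mako notification daemon
# Stop this service whenever sway-session.target goes down.
PartOf=sway-session.target

[Service]
ExecStart=/usr/bin/mako

[Install]
WantedBy=sway-session.target
```

Sway’s config then just needs something like exec systemctl --user start sway-session.target, and everything enabled under the target comes up with the session.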
Oh, like I said, I don’t avoid systemd religiously – in fact, in a professional setting (I mostly do embedded systems for a living), I don’t avoid it at all.
I’d rather avoid it on my machine purely due to personal preferences:
On systems where systemd’s features are relevant, I think the trade-off is usually worth it. Yes, it breaks more often than I’d want, but it still breaks way less often than a web of thirty 500-line init scripts written by developers with limited Linux experience. And it’s definitely easier to debug, too. And it makes a bunch of things related to power management a lot easier. And, of course, on systems that aren’t mine, how much I trust its development team is pretty much irrelevant.
(Edit: I also 100% understand why it’s the tool of choice for large distributions. Distributions like Arch, Debian or Ubuntu or Fedora package thousands of services. systemd absolutely makes it easier for hundreds of volunteers to make things that work better together, and makes it easier for people to package their software, too.)
On my personal machine, though, I haven’t found the trade-off to be worth making (basically, it doesn’t really “hit the spot” :-) ). However, that’s pretty much a product of my own preferences and of the way I use computers. I’m sure it’s a good trade-off for other people.
Thanks for your detailed explanation! I just wanted to highlight that I find user services of systemd particularly useful. I don’t have much experience debugging non-systemd services, but having written both unit files and init scripts (or upstart files) for work, I’d rather maintain the unit file :)
I do agree that it could be simpler. Sometimes I feel like we’re sitting on a big pile of gunpowder with our primitive tools (monolithic kernels, unsafe languages), but such is life.
I just wanted to highlight that I find user services of systemd particularly useful.
We’re definitely in agreement here, I find a lot of things about systemd useful, too, just not for most of the things that I use my home machine for, or at least not the way I use it. That’s likely a byproduct of the fact that many of my usage patterns are stuck in the 90s, I just didn’t find much of a reason to update them :).
I don’t have much experience debugging non-systemd services, but having written both unit files and init scripts (or upstart files) for work, I’d rather maintain the unit file :)
In my experience: the baseline for debugging init scripts and upstart files is higher, but the peak line is lower. That is, I find that the most common problems that you encounter are way easier to debug on systemd than they are on other systems. The excellent debugging and profiling tools definitely help (but they’re also, IMHO, a byproduct of the sheer complexity of the system: upstart & friends can do without them because you can debug most common problems between vi and looking at log files). But more complex problems, especially when they’re related to bugs in the core system (and there are lots of them, not because of sloppy coding – the systemd codebase is pretty clean actually – but simply because there’s lots of code), are way harder to troubleshoot on systemd.
But the “breakage” line is also a lot higher on systemd. For example, synchronizing services and targets is something that it gets right 99% of the time and you rarely have to debug it. That was easier to get wrong on other systems (although maybe I just sucked at it more than I do today). On the other hand, that 1% is a nightmare to debug.
Anyways, the quirks of my machine aside, I think systemd is generally a step in the right direction. I think whatever will come after it is gonna be pretty damn great.
The C people finally understood this when they made Go.
This thread covers that part of a talk by Rob Pike.
https://news.ycombinator.com/item?id=4705051
On a slightly different but very related topic… another interesting viewpoint is Anders Hejlsberg’s. When talking about TypeScript in a video (I don’t have the reference handy), he discussed the existing ECMAScript import syntax. That choice makes building a completion system in an editor very hard (read: impossible), because the user specifies the package/module last.
Yeah, I do kinda wish ES had gone with from 'foo' import {bar}; instead of import {bar} from 'foo';
When talking about TypeScript in a video (I don’t have the reference handy), he discussed the existing ECMAScript import syntax. That choice makes building a completion system in an editor very hard (read: impossible), because the user specifies the package/module last.
Yes, having seen that video, it hurts me every time I need to write TypeScript imports. This only shows that language design should be an incremental, feedback-loop-driven process in which the entire development environment is considered. As a positive example: C# LINQ queries start with from and end with select, guiding IDE completions, instead of the reverse just because “SQL does it”.
Another tip for language designers: the compiler should not be a “black box”; instead, build it as a service that can be queried by the IDE (cf. the Language Server Protocol). This was also mentioned by Anders.
Some more links about it here:
http://www.oilshell.org/blog/2017/12/15.html#appendix-lexing-the-c-language
it’s variously called:
There’s an interesting section on the parser for the CompCert C compiler. I guess they parse twice so that the second parse can be more obviously correct (?)
Re: CompCert - that is funny, because the original C designers wanted the language to be compilable in a single pass.
They’ve succeeded — too well. It has to be parsed in a single pass, and can’t be cleanly separated into tokenization and AST-building passes.
I think clang still uses separate steps. See https://en.m.wikipedia.org/wiki/Lexer_hack and especially the references to Eli Bendersky’s site on that page.
The C people have called out two big mistakes in the language since almost the start: making declaration mirror usage (i.e., to get an int out of int (*fn[7])(int), evaluate (*fn[i])(42)), and the operator precedence of the bitwise stuff.
This is a surprisingly well-written article, thanks for sharing! (The entire blog also looks very professional.) It seems Chip-8 is really popular, as Rust in Action also featured a Chip-8 emulator.
This issue can be somewhat mitigated with Suborigins, unfortunately only supported in Chrome right now.
Another issue, a counter-example: letting people register subdomains under a domain you control can lead to subtle issues, like being able to provide OpenPGP keys for the root domain: https://tools.ietf.org/html/draft-koch-openpgp-webkey-service-09#section-3.1 (imagine Mallory registering the name openpgpkey under example.com and being able to intercept all requests for keys in the example.com domain).
There is another way of solving this issue, using an HTML tag which points to the right git repository, but this requires the Git platform to support it. It doesn’t feel like a good design decision to require Git platforms to add this special case for Go modules. Not only does it require upstream changes in the Git platforms, but it also requires the deployed software to be upgraded to a recent version, which is not always a fast process in enterprise deployments.
Not a Go programmer, but from what I can tell this doesn’t require any changes from Git platforms: on the contrary, one can have a static site with <meta> imports (say, on example.com) that point to your Git platform of choice (say gitlab.example.com). That the <meta> tag is used and embedded in Git platforms is just a convenience, not a hard requirement.
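Concretely, the page served at the vanity import path only needs one tag; a minimal sketch with made-up names (the go tool requests the page with ?go-get=1):

```html
<!-- Served at https://example.com/mylib (hypothetical names) -->
<meta name="go-import" content="example.com/mylib git https://gitlab.example.com/me/mylib">
```

The content attribute is just “import-prefix vcs repo-root”, so pointing the same import path at a different host later is a one-line change.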
Yeah, people do this all the time; it’s how e.g. arp242.net/uni resolves to github.com/arp242/uni (or, in the future, perhaps somewhere else). You can just generate these with a little script.
Thanks for the confirmation. Yep, this kind of thing actually makes perfect sense to me: it’s decentralized, makes the package names look good, and yet allows redirection at will. Too bad this design is rarely seen in package manager ecosystems.
You may want to check the links on https://adol.pw/ - CV and Contact lead to 403 Forbidden pages.
Yes, I’m in the process of making those again, but I don’t know when I’m going to finish it. Thank you for noticing, though :)
Thanks for the dockerfile here. I just started playing with Rocket earlier today after having finished up the Rust book. I felt like Rocket might be a good first “real” project framework. Getting something static is easy enough, but I still think I have more reading to do regarding lifetimes.
Thanks for the dockerfile here.
Unfortunately, until cargo supports downloading dependencies as a separate step, that Dockerfile is highly inefficient: for any code change it will have to download and rebuild all crates over and over (thus not utilizing the Docker layer cache).
Depending on your model, you may also want to avoid using nightly (which this Dockerfile uses).
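A common workaround (a sketch, not the article’s actual Dockerfile) is to build the dependencies against a dummy main.rs first, so Docker caches that layer until the manifests change:

```dockerfile
# Nightly only because Rocket required it at the time; swap in a stable
# image if your framework allows it.
FROM rustlang/rust:nightly AS builder
WORKDIR /app
# Dependency layer: re-runs only when Cargo.toml/Cargo.lock change.
COPY Cargo.toml Cargo.lock ./
RUN mkdir src && echo 'fn main() {}' > src/main.rs && cargo build --release
# Source layer: the only thing rebuilt on ordinary code changes.
COPY src ./src
RUN touch src/main.rs && cargo build --release
```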
Rocket, sadly, doesn’t support async functions yet.
Nix to the rescue :). I use a combination of crate2nix, which uses buildRustCrate to realize each crate in its own Nix store path, plus dockerTools.buildLayeredImage to build a Docker image from a Nix derivation.
Most Docker image builds only take 1-2 seconds that way.
Thanks for the links. Any recommendations outside of warp that are worth looking at?
Rocket, sadly, doesn’t support async functions yet
It’s not that sad because async Rust is generally a poor engineering decision unless you’re building a load balancer.
Quoting from your linked comment:
Only good for load balancers that don’t do any CPU work anyway, which is exactly what the company behind tokio does. async-std got hijacked by ferrous to try to sell more Rust trainings, because it introduces so many hazards that you need to be educated to avoid.
I’m not building a load balancer but indeed a service that is mostly I/O bound, as it accesses other services. Informing people about the trade-offs of async is okay, but claiming that it was designed by Ferrous to “sell more Rust trainings” is not really a technical argument.
Nice post!
I did that, and while it was nice for security there were numerous practical problems with selecting which PCRs to use: if you use too few, the benefit disappears; if you use too many (e.g. taking the currently booting kernel into account), you may need to enter recovery keys frequently (e.g. Arch updates kernels every week or so).
Additionally, the TPM chip that I used (Dell XPS 13) randomly failed.
Ultimately I scrapped the solution, but I may return to it with some adjustments.
Yep. Especially interesting is having the boot partition on a USB drive and setting the machine up to boot into Windows if the USB drive is absent.
Predicting the future kernel checksum value isn’t so hard; you just do the PE/COFF checksumming on the kernel.
It’s documented as part of the Microsoft Authenticode spec, https://docs.microsoft.com/en-us/windows/win32/debug/pe-format#appendix-a-calculating-authenticode-pe-image-hash
Grawity has written a tool that helps you do all of this and seal TPM secrets against future PCR values: https://github.com/grawity/tpm_futurepcr
I have an implementation of PE/COFF checksumming in my Go UEFI library: https://github.com/Foxboron/goefi/blob/master/efi/pecoff/checksum.go#L23
Great, thanks for clearing the matter up.
Is this something that actually works? Are you using it? Why isn’t it in extra/community? :)
I think it works! I believe grawity has been trying to use it.
I’m not. I have been largely focusing on fixing my secure boot stuff and will look more into the TPM stuff when I’m happy with secure boot.
I’d probably consider it experimental, honestly. Use it at your own risk rather than as a finished solution.
Good to know. I’ve read online about TPM enforcement being a pain in the arse due to non-standard PCR definitions by specific manufacturers, but I didn’t know it was made worse by these random failures.