the device is still better equipped to handle drops and mishandling compared to more fragile devices (such as the MacBook Air or Framework).
In my experience this isn’t true (at least for the Framework), and the post doesn’t provide any proof for this claim.
I’ve owned a ThinkPad X230, which is almost the same as the X220 apart from the keyboard and slightly newer CPU. I currently own a Framework 13. Although I didn’t own them both at the same time, and I also have no proof for the counter-claim, in my experience the Framework is no more fragile than the X230 and I feel equally or more confident treating the Framework as “a device you can snatch up off your desk, whip into your travel bag and be on your way.”
(I remember the first week I had the X230 I cracked the plastic case because I was treating it the same as the laptop it had replaced, a ThinkPad X61. The X61 really was a tank; there’s a lot to be said for metal outer cases…)
The rugged design and bulkier weight help put my mind at ease - which is something I can’t say for newer laptop builds.

Confidence and security are subjective feelings, so if owning a chunky ThinkPad makes someone feel this way then good for them. Not to mention I think it’s awesome to keep older devices out of e-waste. However, I don’t think there’s any objective evidence that all newer laptops are automatically fragile because they’re thin - that’s perception as well.
I owned both an X220 and an X230 and I found the X230 to be much less durable than the X220, so the framework comparison might not quite stand up.
Oh that is good to know, thanks. I’d assumed they were mostly the same construction.
It’s a reasonable null hypothesis that a thicker chassis absorbs more shock before electronic components start breaking or crunching against each other. Maintaining the same drop resistance would require the newer components to be more durable than the older ones, which is the opposite of what I’d expect since the electronics are smaller and more densely packed.
How does the framework’s keyboard & trackpad measure up against the Thinkpad’s?
It’s been years since my x220 died, but IMO the trackpad on the framework is leaps and bounds better than the trackpad on the x220. (Though, the one caveat is that the physical click on my framework’s trackpad died, which is a shame since I much prefer to have physical feedback for clicking. I really ought to figure out how hard that would be to fix.)
The x220’s keyboard is maybe slightly better, but I find just about any laptop keyboard to be “usable” and nothing more, so I’m probably not the right person to ask.
x220 keyboard is peak laptop keyboard
try x200 it’s better.
From my recollection: keyboard of the X230 about the same, trackpad of the Framework better (under Linux).
The X230 switched to the “chiclet” keyboard so it’s considered less nice than the X220 one (people literally swap the keyboard over and mod the BIOS to accept it). I think they are both decent keyboards for modern laptops, reasonable key travel, and don’t have any of the nasty flex or flimsiness of consumer laptop keyboards. But not the absolute greatest, either.
I remember the X230 trackpad being a total pain with spurious touches, palm detection, etc. None of that grief with the Framework, but that might also be seven-ish years of software development.
I’ve tried both.
The Framework’s input devices are, for me, very poor.
This is something I’ve been meaning to play around with myself. I think it would be possible to build a thing to get real signed certs to devices using the existing public infrastructure. Basically it would:
Have device (D) generate a keypair, produce a CSR, send the CSR to server (S).
S verifies that the CSR is for <hash of public key>.foobar.example.com.
S gets a signed cert using an ACME-DNS verification.
S adds a public DNS entry for <hash of public key>.foobar.example.com for D’s local IP.
S sends the signed cert down to D.

Then, as long as you’re on the same local network, you can go to <hash of public key>.foobar.example.com and see a lovely little padlock icon. If you’re using something other than HTTP, you might also be able to do a combination of hole-punching or UPnP port-opening and SRV records to make it work with public IPs too.

For initial connection you probably want the devices to host foobar.local that any device can respond to that just shows a webpage of links to all the <hash of public key>.foobar.example.com devices that exist on your local network.

The biggest downside of all this is that it still doesn’t solve the bootstrapping problem because the device needs an internet connection to get the cert. So, you still need some other secure method to connect to a fresh device to (for example) give it your SSID and PSK to even get on your network. Though, it seems like WebTransport could conceivably be a solution to that because, as I understand it, it’ll let you make a connection with just a public key that doesn’t need to be signed by a CA.
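To make the naming step concrete, here is a minimal Rust sketch of how a device could derive its per-device hostname. The scheme above doesn’t pin down the hash or encoding, so SHA-256, lowercase hex, truncation to a single DNS label, and the sha2 and hex crates are all assumptions here; the server’s check in the second step would then just be string equality against the CSR’s subject name.

```rust
// Sketch only: SHA-256 + truncated hex is one reasonable choice, not something
// the scheme above requires. Assumes the `sha2` and `hex` crates.
use sha2::{Digest, Sha256};

/// Derive the per-device hostname from the device's public key bytes.
fn device_hostname(public_key_der: &[u8]) -> String {
    let digest_hex = hex::encode(Sha256::digest(public_key_der));
    // Truncate to 32 hex chars (128 bits) so it fits comfortably in one DNS label.
    format!("{}.foobar.example.com", &digest_hex[..32])
}

fn main() {
    // Stand-in bytes; a real device would hash its actual public key encoding.
    let fake_key = b"example public key bytes";
    println!("{}", device_hostname(fake_key));
}
```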
Not working! :D
The second half of the week will be: https://www.convergence-con.org/
Until then working on whatever strikes my fancy.
I want to finish coalescing my notes into just some markdown files in folders; they’ve been scattered across so many different locations and formats as I’ve tried out different options for taking digital notes. I’ve got the most annoying of them almost done: Bookstack, which stores notes in a mysql db as pre-formatted HTML with no built-in export functionality. -_-
I also finally got a standalone crate using esp-rs & esp-wifi to build and connect to the internet. I’ve got a small pile of project ideas that I want to play with now that I finally have the un-fun part out of the way.
Note in particular that including plaintext nonce values with messages would break our requirement that all protocol traffic be indistinguishable from random noise.

I feel like I’m missing something in their justification for TCP. How does the message sequence number being part of the TCP header make a substantial difference from having a plaintext sequence number in a custom defined message structure sent over UDP?
In my mind, either way there’s a sequence number in plaintext in your IP packet.
The protocol is designed so that it can’t be fingerprinted. The traffic is obviously running over TCP (that can’t be hidden), but you can’t tell that it is Theseus protocol. Including plaintext in a custom defined message structure in the data part of the packet would allow for fingerprinting.
Bouncing back and forth between playing with threadiverse (threaded fediverse) and full on decentralized tech because I can’t decide between long term and short term stuff.
At the moment playing around with the rust libp2p libraries. If anyone has experience using them and has any pointers, I’d love to hear from you. I’m so far finding the whole thing incredibly confusing: there are so many layers of indirection, and I’m struggling with which parts I’m supposed to be using and what they’re actually doing under the hood.
I’m also toying around with stuff for an alternative Lemmy frontend. I have two different ideas and haven’t decided which I’m going to play with first: either a simple js-free/-lite web frontend, or an IMAP (JMAP?) API that lets you point a mail client at it and interact with the threadiverse as if it’s one big set of mailing lists.
I haven’t touched libp2p in years but that was basically my experience as well. Quite disillusioning. There was a good panel on it in RustConf a few years back (2018???) that helped me a lot but I don’t remember whether or not it was recorded.
Something that watches right-wing news, stores it, captions it, and puts the scripts on a publicly searchable database so people can easily find out what X person said about Y, Z years ago, and provides the clips as needed.
All news?
You’re free to set up your own fantasy project.
Sure thing, just pointing out it is naïve to think only right wing politicians lie and contradict themselves.
At least in the UK, there’s a fairly distinct difference. The broadly centrist publications often have somewhat dubious interpretations of statistics and make unjustifiable leaps of logic. The clearly left-biased publications will cherry-pick statistics that support their arguments and make huge leaps of logic that amount to ex falso quodlibet. The right-biased publications just eschew facts entirely and make stuff up. If you compare centre-right and centre-left publications, they will often share the same statistics and will draw different conclusions from them that support their own narratives. If you compare centre-left to further-left, you’ll still see mostly the same sources, just with more extreme conclusions. If you compare centre-right to further-right, you’ll see the sources that directly contradict the narrative discounted and twitter polls or equivalents replacing reputable surveys to make a point (or, the more common dodge, replacing statistics about X with polls of people’s perception of X, where the sample is readers of a publication that runs scare stories about X on their front page).
I’d love for someone to figure out a way of making news publications financially accountable for the damage from intentionally spreading outright lies that isn’t also a mechanism for the government to censor a free press.
I think the media landscapes site is a good starting point if you’re thinking about how the UK media environment could be different.
I think the main issue in the UK is that most news is owned privately by very rich men. Their publications represent their class interests and so our media is very strongly biased towards conservatism and capitalism. I’d like there to be rules banning any individual or org from owning more than a small part of the news market and I’d like to see some initiatives that lead to more news orgs being co-operatives and to more good local news orgs (like The Mill in Manchester).
So it’s clear where I’m coming from, here are the large national media orgs that I think are good and what I think their political bias is: liberal/conservative: Financial Times; socialist: Novara Media, JOE; and I don’t think we have any good liberal centrist ones, but Channel 4 News and The Guardian are okay I guess. Every other large national news org in the UK is, IMO, somewhere between total crap (The Sun) and only a bit crap (The Mirror).
Thanks for that added level of detail. Good thing I’m not that naïve. But to go one level deeper, I’ll just add it’s also naïve to imagine they do so in equal quantity and magnitude.
I mean, there’d probably be room in the budget, but you should always focus your efforts where it’d do the most good.
Today’s most good may be tomorrow’s least.
Politicians lie and contradict themselves, regardless of their affiliation or “side”.
NYTimes foreign policy reporting is so bad though. It’s a shame that they’re considered the standard bearer. Before the Afghanistan withdrawal, they had a frontpage piece saying, “Generals say withdrawal could lead to civil war!” This is the exact opposite of the reality here on Planet Earth. On Earth, Afghanistan has been in civil war on and off since the 70s and the withdrawal led to the end of the civil war because the Taliban finally won after having their opponents propped up for the last twenty years. NYTimes foreign policy reporting is all fed by neocon hawks, so they can’t do basic reporting of what is actually happening in the world. :-( I mean, obviously the Taliban sucks, but please report on reality and not the idle musings of a bunch of guys with billions of dollars to slosh around that can’t beat a bunch of poor Afghan religious fanatics.
Presumably this was Goering’s argument against the US entering the second great war (“You’ll prolong the war! It would be better if we get our lebensraum!”) and this is the argument that Russia is currently making against US and allied support for Ukraine. Also the argument against supporting pro-Democracy movements in Iran and Syria.
What I’ve learned from listening to organizations like the Taliban, ISIS, Al Qaeda, the Russian government and other sundry dictatorships is that they understand both the “left” and the “right” in the US and have some rudimentary idea of how to influence some “left” and “right” voters to their advantage.
I know all this brainwashing makes medium sized waves on social media and the opinion pages of newspapers, but I don’t know if this actually moves strategy. It might actually all be a resource drain on those organizations, because they spend a certain amount of resources influencing online opinion in the US/Europe which doesn’t translate to strategic or even tactical advantage, because even God doesn’t know how democracies work, much less people whose core belief is that democracy doesn’t work.
Accurately reporting on the state of Afghanistan is not any form of Goering’s argument.
You are def correct, I guess I was thinking more about lies than credulous parroting of govt and corporate folks and general status quo maintenance. There is a difference in that the Times will acknowledge their earlier position, tho I dunno if it will inform their future behavior. The advantage of transcribing right-wing talk radio, livestreams, broadcasts, and similar ephemeral communication is that right now the worst shit is just lost: heard by their audience but unavailable for accountability.
Other news outlets also have interesting biases. For example, The Guardian is one of the most left-leaning newspapers in the UK (probably the most left-leaning broadsheet). If you read articles that they write on copyright law, their bias towards copyright maximalism is very obvious, in spite of the fact that they’re advocating for rules that disadvantage workers and empower publishers, which is directly antithetical to their usual bias.
I think the Guardian’s positions make more sense if you see it as mostly a liberal paper with a bit of leftism rather than a left paper. This explains their very spotty support for workers’ rights and public commons. Their environmental coverage is pretty good, tho.
FWIW, that kinda exists: https://www.snapstream.com/
Just not the publicly searchable database part.
Getting my desk clock to show my work calendar so I can stop missing the one meeting I have per day.
Fortunately that’s more possible than it sounds because my desk clock is an LED matrix with an ESP8266 controller backpack. If anyone has experience parsing iCal files in arduino-land, any tips?
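The core of the parsing is small enough to sketch. Below is a rough, std-only Rust version of the minimal handling a calendar clock would need: unfold continuation lines, then grab DTSTART and SUMMARY from each VEVENT. Everything here is illustrative rather than firmware-ready, and real feeds also need time zone and recurrence handling that this skips.

```rust
// Minimal iCal (RFC 5545) event extraction sketch. Ignores TZID, RRULE, escaping.
fn parse_events(ics: &str) -> Vec<(String, String)> {
    // RFC 5545 folds long lines; continuations start with a space or tab.
    let mut unfolded: Vec<String> = Vec::new();
    for line in ics.lines() {
        if (line.starts_with(' ') || line.starts_with('\t')) && !unfolded.is_empty() {
            // Drop the single folding character and glue onto the previous line.
            unfolded.last_mut().unwrap().push_str(&line[1..]);
        } else {
            unfolded.push(line.trim_end().to_string());
        }
    }

    let mut events = Vec::new();
    let (mut in_event, mut start, mut summary) = (false, String::new(), String::new());
    for line in &unfolded {
        match line.as_str() {
            "BEGIN:VEVENT" => { in_event = true; start.clear(); summary.clear(); }
            "END:VEVENT" => {
                if in_event { events.push((start.clone(), summary.clone())); }
                in_event = false;
            }
            _ if in_event => {
                // Property names may carry parameters, e.g. "DTSTART;TZID=...:2023...".
                if let Some((name, value)) = line.split_once(':') {
                    if name == "DTSTART" || name.starts_with("DTSTART;") { start = value.to_string(); }
                    if name == "SUMMARY" || name.starts_with("SUMMARY;") { summary = value.to_string(); }
                }
            }
            _ => {}
        }
    }
    events
}

fn main() {
    let sample = "BEGIN:VCALENDAR\r\nBEGIN:VEVENT\r\nDTSTART:20230101T090000Z\r\nSUMMARY:Standup\r\nEND:VEVENT\r\nEND:VCALENDAR\r\n";
    for (start, summary) in parse_events(sample) {
        println!("{start} {summary}");
    }
}
```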
What timing! My Desk/Homelab/‘Workshop’, which is far cleaner than it has any right to be because, purely by coincidence, I spent several hours cleaning it yesterday. Not pictured, the banker’s box full of crap that needed to not be on my desk, but didn’t otherwise have a home…
Desktop is just Gnome with a default Gnome and/or Fedora wallpaper.
Setup is 1st gen Framework Laptop w/ 2 4K monitors via a CalDigit TB3+ dock. (Some part of this setup is unreliable though, not a recommendation.) The keyboards are: (front) a first-round ErgoDox Wireless from SliceMK with solar, (back left) my backup main wireless keyboard for when I need to type things without the mental overhead of learning a new layout, and (back right) a wired CODE Keyboard hooked up to my gaming computer in the rack that is almost entirely unused except for choosing which OS to boot since the EDW is Bluetooth’d to that computer.
The printer is a still entirely stock Prusa MK3S in an incomplete DIY Lack enclosure. It’s absolutely a workhorse despite the level of neglect I provide for it.
The rack contains 4 servers. First is the aforementioned gaming computer, a Ryzen 5900X + Radeon 6900XT that I was lucky enough to get ahold of for list price in December 2020 and only had to sit outside in Minnesota for 3-5 hours before dawn each day for a week.
The one below that is my workhorse that hosts all my self-hosted services like Matrix chat for my wife and I, an RSS reader, notes stuff, and whatnot.
The next one is a pile of parts that I could never get to be stable until a couple of months ago, when I got a new CPU in a BF sale. It’s mostly for messing around with.
The last one is my NAS. It also has a Ryzen processor and has 64 GB of ECC RAM. It hosts just an NFS server, lol. This was built with the ambition for it to be my end game, so it’s intentionally super OP. I really need to get more moved over to that one. Right now my plan is that one will host all my internal services and server #2 will get moved to a DMZ network and only host things that I want to be publicly available.
Not visible, just sitting on top of the stack is the old NAS, an ODROID HC2, that’s doing way more on an old Samsung phone CPU. But that one is hanging out on an old unsupported debian because armbian dropped support, so I want to get everything moved off of it.
I’m a big fan of the Lack stack!
Playing around with libp2p, which I’m hoping to use to create a fully p2p mesh overlay network (aka a thing to set up lots of point-to-point wireguard tunnels).
I’ve got a wireguard setup going for always being on my home network, but it takes too much managing for my liking, so I want to make something that can be completely automatic. But something that doesn’t require giving access to my whole network to some closed-source vc-funded control system that pinky promises it won’t do whatever. Basically, I miss OG Hamachi.
Headscale?
That’s definitely plan b. But this whole thing started because I tested out tailscale for a work thing and was unhappy with the inflexibility of the client. I’ve also got a few separate networks I’d like to have and having to put up a headscale server for each seems painful.
This is incredibly neat.
I actually stumbled on this about two weeks ago and was really excited to play with it before I realized I’m not allowed to. Do you have any plans to open source it?
Thanks!
About plans to open source it:
We are still figuring out which use cases we can bring to market, and we prefer to keep our options open for the moment.
I’ve been using difftastic occasionally for the past few months and it’s already become an invaluable tool for me when I’m trying to figure out what actually changed in big patches to rust code. The default diffing algorithm seems to love to latch onto two different single-line {s and decide that they’re an unchanged line, which then causes a bunch of unrelated code to get scattered between the actual changes.

For anyone who wants to try it out without mucking with their normal workflow, I’ve set up git config --global alias.difft "-c diff.external=difft diff", which allows me to just run git difft when I want to use difftastic.

The only thing I wish I could get it to do is be used by interactive diffing, like git add -p, because it’s so much better at finding unchanged lines. Though, I’m not sure that’s even possible. Anyone know?

There’s no equivalent of git add -p yet, but several users have asked for it. I’m not sure if it’s possible yet, or to what extent git lets you override how it splits patches.

I wonder if this could be hilariously sidestepped by setting your MTU to some higher value. Of course, fragmentation will occur, but those 10 initial packets will carry a lot more data than 14kb.
You can, but there is no promise anything will handle a larger MTU.
For networks you completely control (like, say, an intranet app), perhaps a large MTU is reasonable. On the public internet, it’s unlikely to work out very consistently.
This comment reminds me that the XBOX 360 set some weird MTU like 1492 or something as a default.
1492 is the maximum MTU when using PPPoE. Someone working on it probably just decided it was better to default to that so things would work well for people with DSL.
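For a rough sense of where the 14kb and 1492 figures come from, here is a quick back-of-the-envelope calculation. It assumes plain IPv4 and TCP headers with no options and the common 10-segment initial congestion window; none of those numbers come from the thread itself.

```rust
// Back-of-the-envelope numbers behind the ~14kB and 1492 figures.
// Assumes 20-byte IPv4 + 20-byte TCP headers (no options) and an initial
// congestion window of 10 segments (the common default since RFC 6928).
fn main() {
    let overhead: u32 = 20 + 20; // IPv4 + TCP headers
    for mtu in [1500u32, 1492, 9000] {
        let mss = mtu - overhead;
        println!("MTU {mtu:>4}: MSS {mss}, ~{} bytes in the first 10 segments", 10 * mss);
    }
    // PPPoE eats 8 bytes of every Ethernet frame, hence the 1492 default.
    println!("1500 - 8 (PPPoE) = {}", 1500 - 8);
}
```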
requires a reasonably modern x86 CPU. It depends on certain performance counter features that are not available in older CPUs.

Oh? Is this no longer dependent on having an Intel CPU? Might finally be time for me to play around with this.
Does anyone know if you get not-entirely-unintelligible results with Rust binaries? A cursory search didn’t turn up any mentions of support or explicit statements of non-support.
Yes, there’s support for Ryzen (3??)
Also:
rr -d rust-gdb
Omg I didn’t know you could use rust-gdb, thank you!!
rr (somewhat experimentally I iirc) supports M1: https://github.com/rr-debugger/rr/pull/3144
Oh, wow, that is great news. Do you know if it works in (macos-hosted) vms?
It doesn’t work in VMs.
Spending the day finishing up some DIY sliding rack rails made from some lightly modified, cheap, full extension drawer slides and 3d printed brackets for the servers that already exist in my small server rack. Once that’s done, thinking on how I’m going to make the keyholes that the new server case I just picked up requires in the slides. Right now thinking either a template for routing them directly with a dremel or just drilling a bigger hole and printing a keyhole shape.
After that, building the new server. It’s going to be my new NAS. I happened on a remarkably good deal for a 2U case with 8x hotswap drive bays that just fits the depth of my rack. So, I took this as an opportunity to make my endgame NAS. It’s a Ryzen build with ECC RAM and a motherboard with IPMI (out-of-band remote management) and native SATA connections for each drive bay. As long as nothing replaces the 3.5 inch drive as the cheapest storage form factor, I can’t imagine ever needing to replace this. (Future processors that are cheap and much more power efficient notwithstanding, but even then I wouldn’t need to replace it.)
And finally, figuring out what I’m going to do for software. My old NAS is running an out of date armbian, so I might just do Debian. Most of my other stuff is Fedora, but I think I want something with a slower release cadence. It’s a shame that CentOS isn’t a thing anymore. Maybe it’s time to try out Rocky Linux?
I finally happened on a good deal on a short-depth rackmount NAS case. Now I need to decide what I’m going to fill it with. I want it to be quiet and not consume a whole bunch of power, but if I’m going to spend money on this I would really like this to be pretty much my end game NAS, so I want 10 Gb eth, ECC RAM, and probably some sort of remote management. So I’m probably going to spend the week waffling between the two extremes of getting an ASRock Rack mobo & 5th Gen Ryzen or just throwing an old mini-itx board with a crappy Atom CPU in there that I already have sitting around.
Oh, and also figuring out backing up stuff to burnable blu-ray media so that I can take my current hot and cold backup drives and make them both hot without giving up having a cold backup. If anyone here does disc-based backups, I’d love to hear your process.
Starting “Day 5” now that it’s my 3rd week of working half time (and having 2 company holidays in the first 2 weeks) and trying to figure out what the product this company makes actually does. There’s clearly a lot going on in it, way more than I had figured out before I started. Part of it, I assume, is just getting used to the early stage startup lifestyle of flying blind and making it up as I go along.
Christmas with the family and playing video games.
Partly ending my sabbatical with a new part-time gig doing odd projects, improving docs, and helping write blog posts [edit: and “turning a pile of tech and algorithms into a nice to use product”, lol] for a startup that’s making an end-to-end software/hardware testing platform. 20 hours a week for $92,500 plus benefits was an offer I couldn’t pass up.
Other than that, probably cleaning my apartment because it has gotten out of hand, and playing the Wintersday festivities in Guild Wars 2.
I’ve been using gRPC for about the past 4 years and by far the most important thing I wish we had understood at the outset is that you can use gRPC with a variety of other encodings.
Protobuf is a mess (https://reasonablypolymorphic.com/blog/protos-are-wrong/) and you should avoid it if at all possible; you can still use gRPC without it.
I’ve been working with Protobufs for a few years now. The author’s criticisms are valid, but I’d change the emphases. The formal “niceness” of the type system has generally not been an issue in my usage. Optionals, enums, and submessages using ints and strings are what are used for practically everything. What is more of a problem is that every field in a Protobuf definition is usually chosen to be an optional, and, I believe, is optional by default in the latest version. This is best practice to prevent deserialization failures if the field is omitted, especially if the send-side schema is a newer version that no longer requires the field. Optionals make you check for presence, which at best is annoying and at worst encodes a brittle schema defined in code. It also makes it hard to use the type in interior code as you’re not sure which fields are filled out. The author touches on the problem of having transport and internal types; optionals pose a major problem with using Protobufs for internal types. At the crux of it, I think the problem is that Protobufs is implicitly supposed to be used by senders/receivers with evolving versions of the schema, but doesn’t provide any extra features to make that process smooth. There have been plenty of horror stories of systems crashing b/c of mismatched proto expectations.
Um; I can only say that since I understood it, I see all-optional as an important feature, not a bug. To me it conveys the same semantics as the “zero by default” semantics of struct fields in Go. The trick is to define your fields in a way where a zero value is a perfectly sensible and meaningful common default. Now, for some values like string names zero (empty string) might not make much sense - but then you probably need more complex validation anyway, so e.g. some validation annotations with a code generator for them (protoc-gen-validator or whatever it was called) might be the next useful step. And for even more advanced ones, you have to check them in code anyway, unless the expectation is that protobufs would be a formal proofs language. That said, as described in https://aip.dev/203, there are annotations like ‘REQUIRED’, though they have quite nuanced semantics, and those make quite some sense to me as such. And here I do indeed miss having an automatic protoc generator for their validation - but hopefully one will appear sooner or later (and I sometimes wonder, would it be so hard to write myself?)
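As a plain-Rust illustration of the presence-versus-default trade-off being argued here (no protobuf library involved, and the field names are invented for the example):

```rust
// Plain Rust, no protobuf crate: just the two representations the thread is
// arguing about, with a made-up "balance" field.

// "Zero by default": decoding never fails, but 0 is ambiguous between
// "sender didn't set it" and "sender really meant 0".
#[derive(Default, Debug)]
struct AccountImplicit {
    balance_cents: i64,
}

// Explicit presence: every reader has to handle None, which is where the
// extra checking (and boilerplate) comes from.
#[derive(Default, Debug)]
struct AccountExplicit {
    balance_cents: Option<i64>,
}

fn main() {
    let a = AccountImplicit::default();
    // Is this a zero balance, or a field the sender never populated?
    println!("implicit: {}", a.balance_cents);

    let b = AccountExplicit::default();
    match b.balance_cents {
        Some(cents) => println!("explicit: {cents}"),
        None => println!("explicit: balance not set by sender"),
    }
}
```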
I wish I’d known this before I left my last gig.
I also dislike protobuf, but although it’s a mess I think the one exception to “you should avoid it if at all possible” is if you’re using java on both ends. That doesn’t make it any less of a mess, but it’s mostly a mess in exactly the same way that the java type system is a mess so it ends up being beneficial.
Or at least that’s what I heard from the cloud devs doing java on the other (unfortunately only theoretical) end of the gRPC interface that I was working on from Rust.
My experience comes from using it with the JVM on both ends (but Clojure rather than Java) and we still had a lot of headaches. In particular the part where you send it a nil in an integer field and it silently converts it to zero is mind-bogglingly bad. Or where you send a negative number in a field that’s defined as an unsigned int and it silently accepts it; what the hell.
sounds like annotating deserialized stuff in kotlin as non-null, only for the deserialization system to not care about that for obvious reasons - those are the times I miss rust so much
Woof. I guess I was slightly more fortunate. We ended up only sending and receiving with Rust, so everything was super explicit to get things into the types needed by the generated protobuf message structs.
So many messages with foo and foo_is_set fields though.

Just curious if you recommend any specifically? I’ve tried once or twice to look into gRPC, but each time I was put off by protobuf.
We use EDN at work to communicate between Clojure services; that’s the only one I have experience with, but I’ve heard people like msgpack too.
Would you link any references to using gRPC with EDN or msgpack encodings?
I believe you guys have multiple good reasons, so could you tell me why you don’t just use JSON with a schema validator instead of this? I find the inspectability, flexibility and interoperability of JSON makes it a better choice over anything else, so in my attempt to not be a frog in a well, could you let me know under what constraints it makes more sense to use gRPC?
JSON would have been a big improvement over protobuf too, but being able to seamlessly encode UUIDs and Dates directly was more important to us than being able to support non-Clojure services.
I’m only talking about the encoding within gRPC; whether to use gRPC vs REST is a completely different question that unfortunately was decided above my pay grade. If it were up to me we would have used EDN over REST.
Got it. Thank you. Maybe they have some magical wisdom I’m missing.
Putting in my notice at my current job, then presumably working on all the ‘urgent’ things my boss was expecting me to do but hadn’t told me about….
Outside of work, going to continue trying to learn Haskell by doing Advent of Code!
Congrats! If you can afford it, I hope you’re able to take some time off before the next gig.
Congratulations! Break a leg.
I’ve been doing this too! It’s been strange, mainly since I’m not used to Haskell’s approach to problems. I have some familiarity with thinking about things functionally, but not as much as Haskell requires. I guess that’s what practice is for!