Thanks for sharing this! Balatro is one of my favorite indie games of recent years, and the fact that it’s written in LÖVE makes it all the more amazing given the incredibly high quality of the game.
It’s the kind of game you can recommend without any qualms: the price is very reasonable, and there are none of the inducements to get the player to spend more and more money that are so common in many other modern games.
The success of Balatro is absolutely incredible, and very well deserved. It’s a pity that it seems to have taken such a physical toll on the developer.
Balatro’s probably the most accessible roguelike deckbuilder I have ever played, and it really is a ton of fun. In a lot of ways it feels like ‘outsider art’ to the genre - the blog post mentioned that the developer didn’t really play any similar games during development and it really shows. Balatro really is a breath of fresh air, and brings a lot of innovation to the table.
I genuinely wonder what you consider to be Balatro’s innovations? The joker mechanic is no different from “all orcs get +1” effect cards, I think. (For context, I’ve only gotten as far as unlocking all the decks.)
The biggest innovation is the core game mechanic of playing poker hands and receiving a score based on a multiplier.
Most roguelike deck builders have you starting off with a fairly small, fairly weak deck that you increase in power by adding cards to. Balatro innovates here by starting the player off with a standard 52-card deck!
Another innovation is the way scaling works over time - the most powerful ways to scale are all “meta” - scaling that is preserved between different rounds.
The different types of rewards, such as the Tarot cards, are also very interesting.
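As a rough illustration of the scoring structure described above (a base score for the hand and a multiplier, with jokers adjusting either), here is a toy sketch. The hand values and joker effects are made up for illustration, not Balatro’s actual numbers:

```python
# Toy sketch of a Balatro-style scoring loop: a played hand has base
# chips and a base multiplier; jokers then add chips, add mult, or
# multiply mult. All values here are invented for illustration.

BASE_SCORES = {
    "pair": (10, 2),
    "straight": (30, 4),
    "flush": (35, 4),
}

def score_hand(hand_type, jokers):
    chips, mult = BASE_SCORES[hand_type]
    for kind, value in jokers:
        if kind == "add_chips":
            chips += value
        elif kind == "add_mult":
            mult += value
        elif kind == "times_mult":
            mult *= value
    return chips * mult

# A flush with one "+30 chips" joker and one "x2 mult" joker:
print(score_hand("flush", [("add_chips", 30), ("times_mult", 2)]))  # -> 520
```

The multiplicative jokers are what make the "meta" scaling mentioned above explode: additive bonuses grow scores linearly, while stacked multipliers compound.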
Not sure it counts as innovative, but Balatro is a game for the sake of it. It doesn’t have a story, doesn’t have a message, doesn’t have lore. It’s just a game.
An extremely well designed and fun game that you can play whenever you want, for as long as you want, with some mechanics you likely already know (poker hands) and some you don’t (jokers). It nailed infinite replayability with Endless Mode without making it (too) frustrating; it’s just bonus challenge.
Besides that, it has all the ingredients of a roguelike; nothing really innovative here in my opinion, and that’s ok.
For me, a number of the “innovations” are simple details that make the game more compelling and fun, plus the fact that these design choices aren’t oriented around getting some poor victim to open one more loot crate while draining their wallet.
Like the way the multiplier graphs burst into flames, the sound effects and the like.
I have two questions. First: would blocking HTTP/1.0 bar people running very old hardware and browsers from viewing your site?
I think the answer is yes, and I can totally appreciate the counter of “I don’t care”, but maybe you should? I feel like as technologists we project a bit and assume that everyone in the world has the same shiny x86_64/ARM64 hardware we do, when in fact there are vast swathes of the computing public living quite comfortably on our leftovers.
I for one want to welcome those people to my blog if they’re so inclined.
Second: your blog is a read-only service. What even is the functional advantage of blocking forged client credentials?
I know for myself, I couldn’t care less if folks have their client fingerprint set to ZetaEpsilon-Fnord. You can read my blog with an internet-connected toaster if you want. I don’t care.
I’m sure I’m missing something here and look forward to being educated by my much more security savvy fellow crustaceans :)
(I’m the author of the linked-to article.) Any website that requires HTTPS is effectively blocked for people running very old hardware and browsers, and it’s only going to get worse as time goes on; as noted by other people, at a minimum old browsers need some sort of HTTP to (modern) HTTPS proxy. I don’t know if the existing proxies for this rewrite between HTTP/1.0 and HTTP/1.1, but they probably could. If you want to specifically remain accessible to very old hardware and browsers, you need to stick to HTTP and deal with whatever fallout happens, like modern browsers looking at you funny. But you’re also going to be limited in a number of other ways, since old browsers don’t support a lot of modern CSS and so on.
The pragmatic reason to potentially block bad clients, however you detect them, is that you don’t want to allow spammers, AI scrapers, content copying farms, and other people who are up to no good to (easily) scrape and collect your website. Depending on how your site is implemented and where it’s hosted, you may also have bandwidth or CPU usage/load concerns that are reduced if you can stop the horde of scrapers and other things that are eating away at your server. (And not all blogs are read-only; consider comments and comment spammers.)
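For what it’s worth, the HTTP/1.1-to-HTTP/1.0 rewrite such a proxy would need is mostly mechanical: merge any chunked body into a plain one, set an explicit Content-Length, and drop the 1.1-only hop-by-hop headers. A minimal Python sketch of just that rewrite step (not a full proxy, and the header list is illustrative rather than exhaustive):

```python
# Sketch of the downgrade step an HTTP/1.1 -> HTTP/1.0 proxy would
# perform on a response before handing it to an old client.

def dechunk(body: bytes) -> bytes:
    """Join a Transfer-Encoding: chunked body into a plain body."""
    out, rest = b"", body
    while rest:
        size_line, _, rest = rest.partition(b"\r\n")
        size = int(size_line.split(b";")[0], 16)  # chunk sizes are hex
        if size == 0:
            break
        out, rest = out + rest[:size], rest[size + 2:]  # skip trailing CRLF
    return out

def downgrade(status_line: str, headers: dict, body: bytes):
    """Rewrite a 1.1-style response so an HTTP/1.0 client can consume it."""
    headers = {k.lower(): v for k, v in headers.items()}
    if headers.pop("transfer-encoding", "").lower() == "chunked":
        body = dechunk(body)
    for hop in ("connection", "keep-alive", "upgrade", "te"):
        headers.pop(hop, None)  # hop-by-hop headers don't survive the hop
    headers["content-length"] = str(len(body))
    return status_line.replace("HTTP/1.1", "HTTP/1.0", 1), headers, body
```

A real proxy would also have to handle trailers, `Connection`-listed headers, and persistent upstream connections, but this is the core of it.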
Forgive my ignorance, but isn’t the mirrord approach, and in particular the ‘reverse mirror’ that they discuss, a giant gaping security hole?
I thought that part of the point of things like k8s was sealing your infra off in a way that makes attack more difficult, because outside clients simply do not have access to the pods without using the officially sanctioned, monitored, and locked down forms of ingress?
mirrord is a dev tool; it’s not intended to be deployed in a production cluster. And I don’t think the point of k8s-like infra is to specifically seal off the infrastructure so much as to automate all the things you might do to scale and manage it. It does offer the sealing, but I see it more as a side benefit than the main point.
That said, I totally get what you’re saying - opening up the pod “and everything it has access to” sounds like a major attack surface.
I think that’s the consequence of everybody using Kubernetes without fully understanding what it entails. I remember working on services that you’d “deploy to production” by copying the build artifacts. Then you’d go debug on production as well. I mean, production was only one server. Almost still at the “pet” level (from the meme pets-cattle-insects, or however it went). So you knew your “production”, and visiting it like this was neither unusual nor unjustified.
But in the decades between, I learned to build my stuff so that it’s pretty well tested before production, and the isolation is all at the service boundaries. So if my api had a problem with a service in a cluster, I could be pretty much certain that it’s either that service, or more commonly, configuration between that service and mine.
And with the observability that you get almost out of the box with a lot of stuff these days, it’s usually enough to kube-forward or kube-shell into the pod, confirm your hypothesis, and bug out, without the need for some sophisticated tool like the article describes.
All that said, I’ve done stupid things in prod 20 years ago, I’ve done it this week, and I’ll probably be doing it 20 years from now. Sometimes it’s just refreshing to SSH into prod and drop that table by mistake, and then dig for backups for the next three days.
But the recent Mastodon upgrade has caused a significant amount of performance degradation, and I think the only way to really solve it is going to be to throw a lot of money into hardware.
I sometimes think that Mastodon is an albatross around the neck of the Fediverse’s success. I’m glad that there are now quite a number of other instance serving software packages out there, but Masto is still far and away the leader in terms of sheer numbers, and I myself had the experience of running my own instance for a few years only to have the sheer weight and complexity of the Mastodon software make me throw up my hands and stop trying.
I hope other lighter-weight alternatives like GoToSocial start to take off, so we can see fewer announcements like this and more growth in the Fedi space.
Yeah, Mastodon had a big head start, but nowadays the additional resource load and extra sysadmin load make it really hard to recommend over alternatives if you’re going to start a new server. I expect it’s mostly still just dominating due to inertia at this point.
But another big factor is open registrations; I think this makes it much, much more likely for an instance to shut down. Here’s a thread from the admin of another masto instance that’s been around since 2017: https://toot.cafe/@nolan/113394607836985100 and he credits his instance’s longevity to having closed registrations. He also says of the Mastodon version upgrade process: “I get anxious during big upgrades and my hands literally shake”, which is night and day vs the experience I have with my own gotosocial server I’ve been running over the past 2 years: https://technomancy.us/201
I’ve been running Honk for a while now and it’s pretty good. My main complaint is that I had to patch in support for avatars, because Ted is a weirdo who prefers generating gravatar-esque (but less easily distinguished) icons instead.
I’m still annoyed by the lack of a good way to refer to “a Fediverse-based microblogging service in the style of Mastodon but not necessarily running the Mastodon software”, though.
The “not showing replies” was a deal breaker for me. This, and server rules that change on the whim of the server admin: one day I noticed that some rules had been added to the regulations that placed me in the category of people who are explicitly not welcome. So, being a good law-abiding citizen, I removed my account.
Oil has been a game changer for me. No more fancy dancy tree setups. Just stay out of my way! :)
Smartyank, because I personally find the endless faffing about with Neovim’s nine potential clipboards frustrating and pointless.
Telescope - This one has been a huge productivity boost because in addition to offering superlative fuzzy find for files and folders it also helps me learn/navigate/understand help, keymaps, and a host of other things.
Conjure is the kind of seriously polished, capable REPL that Vim users used to envy Emacs users for :) It supports a million different Lisp variants, plus Python, Rust, Lua, and Julia.
I’d like to shoutout mini.nvim, which is an excellently written and well-maintained collection of small plugins bundled under one umbrella. Most of my plugins have been replaced by mini.nvim alternatives at this point.
Note that in the (soon to be released) Neovim 0.10 release, commenting (e.g. vim-commentary, Comment.nvim) and LSP inlay hints are builtin features.
We are also actively investigating how to “make LSP better”, which includes adding more default keybindings and potentially upstreaming some functionality from nvim-lspconfig (though the exact details on this are still hazy). Snippets and LSP auto-completion are both on the roadmap to be upstreamed as well (note, however, that these will be fairly minimal implementations in core, the idea being that core provides the base framework that plugins can build more functionality on top of).
The ultimate goal is to eliminate (or at least reduce) the impression that Neovim is difficult to get up and running with LSP features.
More default keybindings for LSP would be fantastic in my opinion. Getting LSP set up and configured with cmp was definitely the hardest part of getting started with Neovim for me, even as someone well-versed in custom editor configuration.
Having been soaking in the ecosystem for a while now my impression is that Neovim’s LSP integration isn’t complex at all, it’s just large in terms of surface area.
That can feel intimidating when you’re trying to get rolling and are already absorbing myriad other aspects of Neovim.
Thank you and the rest of the core contributors for all the amazing work you do. Honestly Neovim has brought me a ton of joy over my last couple of years with it.
To me the issue here is transparency. I’d love to see the algorithm they use open sourced, but then I guess that opens it up to being evaded by the abusers of the world?
I’d also SUPER love to see a very robust mechanism for handling false positives. I know I’d be much more willing to consent to having this happen on my phone if I felt confident my life wouldn’t be ruined because some algorithm said I was a pederast and no human could be bothered to see that instead I was posting pictures from an art museum.
As a right to privacy fan I’m super dubious about anything like this, but I also feel pretty strongly that we may all need to bend a bit and try, rather than rejecting these ideas utterly, to find ways that could make enforcement a more constructive and less destructive process.
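To make the false-positive worry concrete: scanners like this are generally understood to compare perceptual hashes of images against a database, flagging anything within some distance threshold (that framing is a simplifying assumption on my part, not a description of any vendor’s actual algorithm). A toy sketch of why the threshold cuts both ways:

```python
# Toy illustration of threshold matching on hashes. Real perceptual
# hashes (PhotoDNA, NeuralHash, etc.) are far more sophisticated, but
# the trade-off is the same: a looser distance threshold catches more
# altered copies AND more innocent near-misses (false positives).

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hash values."""
    return bin(a ^ b).count("1")

def matches(image_hash, database, threshold):
    """Flag the image if it is 'close enough' to any known hash."""
    return any(hamming(image_hash, h) <= threshold for h in database)

db = [0b1011_0110_0101_0011]  # pretend database of known-bad hashes

print(matches(0b1011_0110_0101_0011, db, 0))  # exact copy: flagged
print(matches(0b1011_0110_0101_0111, db, 0))  # 1 bit off: missed
print(matches(0b1011_0110_0101_0111, db, 4))  # looser threshold: flagged
```

The museum-photo scenario is exactly the loose-threshold case: an unrelated image that happens to land within the distance bound, which is why a human review path matters.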
I’m really curious about how it compares with the X1 Carbon (my current laptop, which is nice but a bit too wide/big for my taste). The keyboard on the Z13 looks a lot cheaper in the pictures.
Funnily enough, I had the X1C9 from my previous employer as a work laptop just before the new job and getting the Z13. I mostly use the Z13 as a desktop - connected via USB C dock to a couple of displays, and an external keyboard and mouse. So I am not a heavy user of the built-in keyboard.
That said, it’s certainly ‘different’ from the X1C9 keyboard (which I also predominantly used in a dock).
I don’t find either machine to be problematic to type on. What is a problem with the Z13 is the trackpad. I am quite the TrackPoint(TM) fan, and as such use my right thumb on the middle mouse button to scroll with the nipple. The Z13 has a uni-button across the top of the touchpad, with three zones, one for each button: left, centre, and right.
It’s far, far too easy for my thumb to drift and instead of scrolling, I end up selecting chunks of text on a page I’m trying to scroll. It’s somewhat maddening. I mitigate this by forcing myself to use the touchpad for scrolling and crying inside. YMMV
I really enjoyed the touch screen in my brief and ultimately doomed dalliance with a Thinkpad recently.
I wish Apple would pick up on this and include it on Mac laptops.
I’m a keyboard all the things kind of person, but if I MUST point and drag/click, being able to point at the ACTUAL thing is much easier than using a mouse with my fine/gross motor impairment.
Saving you a web search: it’s yet another failed no-code platform.
Amazon Honeycode is a fully managed service that allows you to quickly build mobile and web apps for teams—without programming. Build Honeycode apps for managing almost anything, like projects, customers, operations, approvals, resources, and even your team.
I hate to say this as I know it’s been beaten to death here but I find it incredibly difficult to take anyone who still blogs on Medium seriously, and this article does nothing to change that.
It’s visually appealing in certain respects but as @student said the author does very little to actually make their case and from my reading also contradicts themselves in several spots.
I think many of us can get behind the UNIX philosophy of small narrowly scoped tools working together. I don’t see this article expanding on that point very much.
Writing a (to most people, I’m sure) boring Django application for tracking all my health-related medical crap, because doing it all freeform in my notes is both a drag and not easily searchable/reusable.
I’m really enjoying how easy it is to get going with Django. I’m sure there are problems at scale just like with anything else, but I was up and running with some custom tailored models and a simple CRUD interface in a couple of hours.
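Not Django, but as a plain-Python sketch of why custom-tailored models beat freeform notes for this kind of thing (the record type and field names here are entirely hypothetical):

```python
# A structured record you can filter, sort, and reuse -- the thing
# freeform notes can't give you. Field names are hypothetical.
from dataclasses import dataclass
from datetime import date

@dataclass
class Measurement:
    taken_on: date
    kind: str        # e.g. "blood_pressure", "weight"
    value: float
    note: str = ""

log = [
    Measurement(date(2024, 3, 1), "weight", 82.5),
    Measurement(date(2024, 3, 2), "blood_pressure", 128.0, "morning"),
    Measurement(date(2024, 3, 8), "weight", 82.1),
]

def history(log, kind):
    """The 'easily searchable' part: all readings of one kind, in order."""
    return sorted((m for m in log if m.kind == kind),
                  key=lambda m: m.taken_on)

print([m.value for m in history(log, "weight")])  # -> [82.5, 82.1]
```

In Django the dataclass becomes a `Model`, the list becomes a table, and the `history` function becomes a one-line queryset filter, which is roughly why the CRUD interface comes together in a couple of hours.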
I installed Parabola Linux onto my Remarkable tablet, but it doesn’t have wifi firmware (Parabola being an FSF-approved distro and thus linux-libre), so the only means of getting internet access on the tablet is via USB. If I can get it to work, then NATing my USB to wifi (and setting the appropriate default gateway) should do that, at least long enough to pacman -S a compiler and whatever dependencies I need.
I’m ignoring my alternative options of 1) flashing a custom OS image with the proprietary wifi firmware included and hoping I don’t brick the device by accident, and 2) spending $80 on the Technoethical libre-firmware wifi dongle (mainly because I suspect it won’t work, just like the USB-to-ethernet adapter didn’t work).
It’s less principle, and more that Parabola was the only pre-made OS image for the RM. If there was an Arch or Debian port I’d have gladly used that. Also, I’m relying heavily on RCU to do the hard stuff for me.
kickstart.nvim co-maintainer here.
My initial reaction was “Oh god another distro” but I have to admit they’ve built some interesting looking affordances for LSP and similar installation and maintenance.
I’ve been feeling lately that while it’s fine to start with one of these canned “distributions”, if you’re a software developer or adjacent, taking the time to learn Neovim/Lua well enough to roll your own configuration is a worthy exploration to undertake.
Hi! What makes this project unique and interesting?
I only ask because it’s EOL and no longer actively developed, so I’m curious why you thought other Rustaceans might want to take notice.
Hi! It’s something that I actively use on my network to bridge channels, and I’ve found linking servers to bridge networks (and even other protocols) to be a really interesting idea to begin with. (Realistically, we’re trying to write such functionality into a custom IRCd, but it’s been going relatively slowly.) At the least, this was pretty big in the IRC world at one point, though at this moment I’m admittedly not the best judge of the general interest of other Rustaceans.
This is delightful. I have an Atari 800XL in the other room, I may sit down and type this in out of pure nostalgia :)
What an incredible journey — I immediately bought the game for iOS and spent the rest of the evening playing.
Honk is neat but being able to say “Hey I like this!” feels like table stakes to me and that’s not included in Honk’s opinionated view of the world :)
His bat and ball, so he gets to build whatever he wants, but not my cup of tea :)
The Mastodon architecture decisions seem really ill-considered on many levels:
And all of this architecture is so deeply locked in that I don’t see any of it changing.
On the varieties front, there’s also snac2.
I’ll be That Guy and say that I’m not really interested in increasing the number of internet-exposed C programs I run 😬
The “AI” industry is very literally a bunch of parasites, in every sense. I can’t wait to see its end.
Don’t hold your breath. Even when the current bubble bursts, that just means you’ll see a temporary ebb in activity.
This particular grift is here to stay IMO.
This seems really neat. I wonder if you could use it for the performance-intensive bits and SymPy for prototyping and less perf-intensive tasks.
Mini looks very interesting to me, thanks.
(Hacking Neovim Lua just makes me happy :)
I know I’m gonna get flambéed for this, but so be it.
An awesome laptop like that with a 400 nit display? WHY???
Macs and some PC laptops come with 1000 nit displays.
And before the “Just get Matte” crowd rears up, why can’t I have my cake AND eat it too? :)
Not a flame, but a different perspective. I have never once looked at the number of nits as a major factor in the decision-making process when buying a laptop. Indeed, it’s not even ‘not a major factor’, it’s not a factor at all. I don’t look at that stat in the specifications at all, never have.
Hi Popey! No flame I can see, it’s always a pleasure hearing from you.
Different perspective indeed. I realize that my needs are very niche. I’m blind in one eye, low vision in the other - 20/80 but with a very restricted field of vision.
Having a nice, bright display is critically important to me. The 300 nit Thinkpad T15 I owned for a while had a display that totally washed out for me if a photon even came within shouting distance.
I recognize that my needs are not everyone’s needs, and I suppose this is why I gravitate towards Mac hardware despite the fact that I wish the world ran on Linux :)
I also bought the Z13, but unlike Martin, I picked up the high-resolution touch screen version. It’s a beauty.
I’m really curious about how it compares with the Carbon X1 (my current laptop, which is nice but a bit too wide/big for my taste). The keyboard on the Z13 looks a lot cheaper in the pictures.
Funnily enough, I had the X1C9 from my previous employer as a work laptop just before the new job and getting the Z13. I mostly use the Z13 as a desktop - connected via USB C dock to a couple of displays, and an external keyboard and mouse. So I am not a heavy user of the built-in keyboard.
That said, it’s certainly ‘different’ from the X1C9 keyboard (which I also predominantly used in a dock).
I don’t find either machine to be problematic to type on. What is a problem with the Z13 is the trackpad. I am quite the TrackPoint(TM) fan, and as such use my right thumb on the middle mouse button to scroll with the nipple. The Z13 has a uni-button across the top of the touchpad, with three zones, one for each button - left, centre, and right.
It’s far, far too easy for my thumb to drift and instead of scrolling, I end up selecting chunks of text on a page I’m trying to scroll. It’s somewhat maddening. I mitigate this by forcing myself to use the touchpad for scrolling and crying inside. YMMV
Thanks! This confirms my feeling: the Z13 is more a regular modern laptop while the X1 keeps the Thinkpad mojo.
I really enjoyed the touch screen in my brief and ultimately doomed dalliance with a Thinkpad recently.
I wish Apple would pick up on this and include it on Mac laptops.
I’m a keyboard all the things kind of person, but if I MUST point and drag/click, being able to point at the ACTUAL thing is much easier than using a mouse with my fine/gross motor impairment.
Saving you a web search: it’s yet another failed no-code platform.
Thanks for this. I worked at AWS for 6 years and had never heard of this.
But that’s the AWS approach. Let a thousand flowers bloom, and crush all the ones that don’t hockey stick within a year or so of GA.
No snark, but: are any of those not snake oil?
Appreciate the description! Based on the name I thought it was AWS expanding canary token support…
I hate to say this, as I know it’s been beaten to death here, but I find it incredibly difficult to take anyone who still blogs on Medium seriously, and this article does nothing to change that.
It’s visually appealing in certain respects but as @student said the author does very little to actually make their case and from my reading also contradicts themselves in several spots.
I think many of us can get behind the UNIX philosophy of small narrowly scoped tools working together. I don’t see this article expanding on that point very much.
Writing a (to most people, I’m sure) boring Django application for tracking all my health-related medical crap, because doing it all freeform in my notes is both a drag and not easily searchable/reusable.
I’m really enjoying how easy it is to get going with Django. I’m sure there are problems at scale just like with anything else, but I was up and running with some custom-tailored models and a simple CRUD interface in a couple of hours.
Trying and failing to NAT my USB to my wifi. Uuuuuugh.
I don’t even know if I’ve done it correctly, the USB device might be configured incorrectly and be the real cause of the failure to connect.
Would you be willing to explain the use case for this? What problem are you trying to solve?
I installed Parabola Linux onto my Remarkable tablet, but it doesn’t have wifi firmware (Parabola being an FSF-approved distro and thus linux-libre), so the only means of getting internet access on the tablet is via USB. If I can get it to work, then NATing my USB to wifi (and setting the appropriate default gateway) should do that, at least long enough to
pacman -Sa compiler and whatever dependencies I need. I’m ignoring my alternative options of 1) flashing a custom OS image with the proprietary wifi firmware included and hoping I don’t brick the device by accident, and 2) spending $80 on the Technoethical libre-firmware wifi dongle (mainly because I suspect it won’t work, just like the USB-to-ethernet adapter didn’t work).
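For the record, the host-side config I’ve been attempting looks roughly like this (a sketch, not a working recipe - the interface names usb0/wlan0 and the 10.11.99.x subnet are assumptions based on the Remarkable’s usual USB networking; check yours with `ip addr`):

```shell
# Enable forwarding between interfaces on the host.
sudo sysctl -w net.ipv4.ip_forward=1

# NAT traffic leaving via wifi, and allow forwarding both ways.
sudo iptables -t nat -A POSTROUTING -o wlan0 -j MASQUERADE
sudo iptables -A FORWARD -i usb0 -o wlan0 -j ACCEPT
sudo iptables -A FORWARD -i wlan0 -o usb0 \
  -m state --state RELATED,ESTABLISHED -j ACCEPT

# Then on the tablet: point the default route at the host's USB address
# (something like `ip route add default via 10.11.99.2`) and set a
# nameserver in /etc/resolv.conf.
```

If the packets still go nowhere after that, it points back at my suspicion that the USB gadget interface itself is misconfigured rather than the NAT.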
Wow! Your willingness to perform acrobatics in the name of principle is truly impressive. GOOD LUCK with all this!
You should write it up when you’re done.
It’s less principle, and more that Parabola was the only pre-made OS image for the RM. If there was an Arch or Debian port I’d have gladly used that. Also, I’m relying heavily on RCU to do the hard stuff for me.