Threads for cjs

  1. 1

    Dealing with certificate woes at work right now. It is a miracle that any client can talk securely to any given server, considering the permutations of trusted CAs out there and the neglected maintenance thereof.

    1. 23

      AFAICT there’s nothing Bitwarden-specific in the article.

      I disagree with the argument that storing TOTP secrets in your password manager is a bad idea. You absolutely should be using a good password or phrase on your pw manager*, but scattering your TOTP codes across Authenticator apps is more likely to cause you grief. The whole point of the password manager is to have a single place to track all of these things and a single thing to worry about backing up/recovering/being able to migrate in the future. Better to have one really secure basket than half a dozen baskets of unknown quality.

      • and possibly a second hardware factor
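
      The mechanics back this up: a TOTP code is derived purely from the shared secret and the current time, so wherever the secret lives is the factor. A minimal Python sketch, using the third-party pyotp library:

        import pyotp

        secret = pyotp.random_base32()  # what the provisioning QR code encodes
        totp = pyotp.TOTP(secret)
        print(totp.now())  # the same 6-digit code on any device that holds the secret

      An authenticator app and a password manager are both just stores for that secret; the quality of the basket is what matters.
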
      1. 14

        To me it looks like one of those well-intended pieces of advice we had before password managers: change your password often. In principle it was not bad advice. But in combination with password policies that made it nearly impossible to memorize the password, we ended up with passwords on sticky notes.

        In principle it is correct that if a password manager vault with TOTPs in it is compromised, it’s game over. But the proposed solution adds so much friction that I’m afraid people would have disabled TOTP altogether if they were required to use a different device/PM for it.

        I think the author overstates the probability of vault compromise for an average user. It’s much more likely that the Facebook/Twitter/whatever DB will be leaked than that an average user will be targeted. In that scenario, for most people it’s better to have TOTP enabled and saved in the same PM.

        1. 9

          I’ve seen people argue against TOTP, or argue against calling it a “second factor”, solely because the secrets can be stored in a password manager. But I’ve never seen someone argue against hardware keys in the same way, despite the fact that you can easily plug one in and leave it there forever.

          Personally I’d rather people use TOTP than SMS, and if people do actually go install and use a password manager and also store their TOTP secrets in it, well, they’re still better off than the average person today with reused passwords and no 2FA at all.

          1. 1

            Well, hardware keys usually require you to physically touch them - it’s a “something you have” factor, whereas password + TOTP is two “something you know” factors. I’m not advocating against storing TOTP in a password manager, I agree with the top comment, but there is a lot more to be gained by using a hardware key than pretty much any other form of authentication*.

            On an unrelated note, 1Password has decided that it wants me to use it to fill in this text field on lobste.rs… not sure what that’s about!

            *this is a flippant comment I haven’t thought too hard about.

            1. 2

              If you leave the key plugged in, then access to the machine == access to the key. It’s like leaving the password on a piece of paper stuck above the monitor. And in a lot of cases, that access also gets them access to the password manager, leading to complete compromise.

              So I don’t agree with the people who judge TOTP-in-password-manager so harshly. Does it mean a compromise of the password manager is a full compromise of passwords and TOTP? Sure. Is a compromise of the password manager among the top threats to mitigate given that credential re-use and a bunch of other things are still rampant? Nope. “Password manager gets compromised” is waaaaaaaaay down the list of threats I’m going to worry about. So when I see people effectively trying to steer ordinary users away from password managers because of something that’s such a low-priority threat, I tend to push back.

              1. 1

                It’s still a second factor though: it requires access to the machine, so it’s indeed “something you have”.

                And while theoretically it might be, I disagree with the paper comparison. The comparison would only hold if the password on the paper were both a second password and somehow changing, so that it ensures the “something you have” part.

                Something you have can of course be stolen physically; that’s the whole point of it being a physical factor.

                Just a response to the first part. I am not saying it’s bad; I haven’t really made up my mind about it. The above was just about the physical/“something you have” factor.

                1. 1

                  Why would physical access give them access to the password manager? They’d have the “something you have” but still not the “something you know” needed to get into your password manager. A hardware token is pretty useless on its own, so I disagree that it’s anything at all like leaving a piece of paper stuck to the monitor - you need username and password before you ever get to use the security key.

          1. 7

            Can someone please explain the pros and cons of wiring your home with fiber? This article seems to skip explaining why they are going to the trouble of doing this.

            1. 13

              Pros:

              • Fast. You can lay fibre today that has longer reach at the same speed, or higher speed at the same reach, than copper.
              • Headroom. If you lay multi-mode fibre then you can almost certainly replace the physical interfaces at the ends without replacing the fibre. In contrast, if you lay Cat-6 today, GigE is your limit, if you lay Cat-7, the same is true for 10 GigE. The bottleneck for modern fibre is the transceivers at the end, not the cable.
              • Cost. Fibre is a lot cheaper than the Ethernet cabling that will handle high speeds.

              Cons:

              • Cost. You need optical transceivers at the endpoints. These are more expensive than electrical ones, at least at the lower speeds.
              • Compatibility. Server NICs all support pluggable transceivers for optical connections, but most laptop / consumer-electronics devices don’t. This means that you’ll probably want a switch with an optical upstream and an electrical downstream (or, ideally, a mix of optical and electrical) for rooms where you want the speed.
              • Diminishing returns. The jump from 10 Mb/s coax (shared bus) to a 100 Mb/s, full-duplex, switched network was huge. This is fast enough for multiple HD streams. The jump from 100 Mb/s to 1 Gb/s is smaller and you basically notice it only for large uploads or downloads (e.g. installing a game or backing up a disk). The jump to anything bigger needs workloads that I don’t have yet to matter. Possibly some immersive VR / AR thing will need that much, but for video, compression has improved quality a lot more than bandwidth increases have in the last couple of decades. An H.264 720p stream now needs less bandwidth than an artefact-ridden MPEG-1 320x240 stream used to.

              If I were doing a new build, I’d be very tempted to lay fibre and cat6, so that I never have to think about it ever again.

              1. 11

                You also can’t do PoE for things like cameras or access points.

                1. 6

                  In contrast, if you lay Cat-6 today, GigE is your limit, if you lay Cat-7, the same is true for 10 GigE.

                  Very minor nitpick – I believe 2.5GbE and 5GbE were designed to run on Cat-5e and Cat-6 respectively.
                  ref: https://en.wikipedia.org/wiki/2.5GBASE-T_and_5GBASE-T#Technology

                  1. 2

                    You can actually even do 10GbE over Cat-5e if the run is short enough.

                  2. 3
                    • Cost. You need optical transceivers at the endpoints. These are more expensive than electrical ones, at least at the lower speeds.

                    From what I can tell, if you already have SFP slots, optical transceivers are cheaper. For example

                    IMO the biggest downside is that fiber is more of a pain to work with. You can unplug/replug your copper cables as much as you want, but you have to be careful not to scratch your fiber connectors or bend the cable (yes, I know you can also kink a copper cable).

                    1. 1

                      Thanks. It’s been a little while since I looked and the price for the optical transceivers has come down by over an order of magnitude since then.

                    2. 2

                      If you’re concerned about EMI (i.e. TEMPEST), then fibre also doesn’t have those emissions. The US federal government deploys a lot of fibre for that reason.

                    3. 5

                      In case you happen to be a ham — fiber is RFI-immune. Longer copper ethernet runs can be prone to radiating wideband noise, as well as receiving RFI from nearby transmitters (which then degrades the network connection). Using shielded twisted pair is an option, but it’s nearly as expensive as fiber, and nearly as annoying to terminate as fiber. And, existing OM4 or OM5 fiber looks like it will manage 100Gbit links over in-house distances, which makes it more future-proof than Cat6A or even Cat8.

                      1. 3

                        Apparently they have a 25gbit connection so I guess you need this to even begin to take advantage of it.

                        Seems like a crazy amount of bandwidth though - 25gbit, 10gbit AND a 5G backup.

                        1. 2

                          Isn’t this too fast to write on a regular SSD? If we take the 550MB/s write speed, how would you benefit from a 25gbit connection?
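
                          Back-of-the-envelope, in Python:

                            # 25 Gbit/s expressed in bytes/s vs. a 550 MB/s SATA SSD
                            line_rate = 25e9 / 8  # ~3.1 GB/s on the wire
                            ssd_write = 550e6     # ~0.55 GB/s to disk
                            print(line_rate / ssd_write)  # ~5.7: the link outruns the disk several times over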

                          1. 3

                            Use the memory. And Init7 charges the same amount for 1G, 10G and 25G

                            1. 1

                              There are plenty of uses I can think of that wouldn’t involve writing that to disk, mostly to do with raw video transmission: security systems, playing video games on a central beefy computer from a portable peripheral (Steam and PlayStation support this), keeping video on a NAS and editing from around the house…

                              But yeah, that’s a ton of bandwidth.

                            2. 1

                              Here I am with my only option being a cable connection. I don’t use the fastest plan at 400/40 Mbit, but the higher-speed plans only give me more download.

                              1. 3

                                I use a 4G sim card here because the ADSL is so slow!

                                1. 1

                                  I have friends in more rural areas where 4G is better than anything wired they can get. I understand your pain.

                          1. 9

                            This seems to be a well thought out proposal, and is pretty close to how I’ve seen structured logging implemented in other languages.

                            The issue I have with it is that structured logging is a bit dated; all projects I have been part of have moved to using OpenTelemetry to output traces/spans. We have it configured to output to the console in local development, and in other environments send to an OpenTelemetry collector.

                            I have yet to see a situation where I have traces, and I’ve thought “what this needs is a log message”.

                            Logging is also not to be confused with console output as part of a CLI etc.; that’s still important!

                            1. 5

                              We have people pushing at work to replace logging with tracing, and it seems like a mixed bag. We don’t have consistent collectors for every layer of our architecture, and we apparently can’t/won’t collect traces without sampling for various reasons.

                              It also seems like the otel library people break their API on minor versions, which was frustrating as someone that had to integrate their code but definitely isn’t an expert.

                              Traces are cool when every service in your environment has the library configured, isn’t sampling, and your collectors are able to keep up.

                              The other use case that I cannot see traces ever replacing is any kind of access/request log that may be searched for audit purposes.

                              Ok apologies for the tracing rant, I’m glad that they are convenient and work for your use cases 👍

                              1. 2

                                Everyone has different opinions :)

                                I think the sampling question is a really interesting one, regardless of whether you are doing it on logs or traces. For instance, we sample a lot of http traffic based on status code (and endpoint), not all traffic is equal!

                                As for the breakages, that could be down to pre 1.0.0 versions? Not totally sure on this, although I have definitely had issues when docs haven’t matched actual APIs.

                                One other nice side effect is when a downstream service suddenly starts appearing in your traces because they’ve started adding otel themselves; we benefit even without their data, but getting more just makes it even easier to debug (“when we send this particular kind of request, it hits a different codepath which is slow; let’s talk to them and see if we can change something in one or both of our systems”).

                              2. 3

                                Not all codebases need tracing, and the simplicity of logs is still good. I originally set up a tracing client in a new product at the start of this year but never used it, because logging was just easier to use with the existing Datadog integration and my aggregated logs match my local CLI logs.

                                Tracing is useful and it has its place for sure but not every service is large or complex enough to warrant the additional engineering overhead.

                                1. 3

                                  I’ve found on all projects that I’ve preferred tracing, even without any real infrastructure behind it (i.e. just JSON to the console).

                                  To me, the engineering overhead of tracing is just using otel libraries vs. a structured logging library; in other words, around the same cost of implementation.

                                2. 1

                                  OpenTelemetry tracing is basically a superset of structured logging. The differences boil down to:

                                  1. An optional end time (which turns an event into a span)
                                  2. Hierarchy (basically the parent span ID)

                                  In my mind I don’t see the two as fundamentally different. Tracing is just a slightly evolved form of structured logging. So I definitely agree that if you have tracing you don’t need a second set of structured logging.
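
                                  To make the comparison concrete, a minimal Python sketch using the opentelemetry-sdk package (the span names and attributes are made up for illustration):

                                    from opentelemetry import trace
                                    from opentelemetry.sdk.trace import TracerProvider
                                    from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

                                    # Console exporter: prints spans as JSON, much like structured log lines.
                                    provider = TracerProvider()
                                    provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
                                    trace.set_tracer_provider(provider)
                                    tracer = trace.get_tracer("demo")

                                    with tracer.start_as_current_span("handle_request") as parent:
                                        parent.set_attribute("user.id", "42")  # the usual structured-log fields
                                        with tracer.start_as_current_span("db_query"):  # child records the parent span ID
                                            pass
                                        parent.add_event("cache_miss", {"key": "user:42"})  # an event: a log line with trace context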

                                  1. 3

                                    I think that’s pretty spot on with the hierarchy being most important. I’d also add that there is another difference: removal of duplicate properties between all the spans in a trace.

                                    I’ve also seen tracing described as “Structured Logging on steroids”, which I think is pretty accurate too :)

                                1. 2

                                  It’s reassuring to see that others are bothered by the noise. I also did the dynamat upgrade on my Advantage, but haven’t felt the need to explore different key switches (I have the Advantage 2 LF model).

                                  At this point I’m waiting for the Advantage 360 before dropping any more money on keyboard things…

                                  1. 8

                                    Emacs is not really an editor either! I believe that Emacs is the ultimate editor-building material.

                                    Some people put it in terms of “it’s basically a Lisp machine environment”, and they’re not wrong (albeit it’s not a particularly good Lisp machine, I guess :-P).

                                    I’ve been trying to get myself to use VS Code on and off for a few years now, as Emacs is not exactly thriving tech these days and I don’t suppose it’s going to get much better in the next fifteen years. Aside from muscle memory and weird UI choices (why in the world are we using three lines’ worth of screen real estate to show the current file name and three icons!?), the fact that it doesn’t have the kind of introspection and interactivity Emacs has is what’s giving me the most headaches.

                                    I want to automate some tedious code generation, or generate some test data? That’s five lines of Emacs Lisp, bind them to a key (I can do all of that in the scratch buffer) and there we go. As far as I can tell, if I want to achieve the same thing with VS Code I pretty much have to create a custom extension, which I then have to package and install. That’s… quite an involved process for a one-off function.

                                    1. 10

                                      Emacs is a view of a better, alternate reality, where our computing environments were completely under our control. It makes a bunch of compromises to the world-as-it-exists, and has many hideous warts, because it’s old, but it’s the best computing environment widely available, by my metrics.

                                      1. 6

                                        I have never used emacs (ok, I “used” it for about a month, ~20 years ago, but I never got the hang of it), but I use acme (from Plan 9). While acme couldn’t be further from emacs in terms of UI, UX, and philosophy, they are both equally far from “regular” editors like vscode and whatever. Once you have used an environment like this, all the “regular” editors seem equally crippled to you, equally not interesting, and equally missing the point of what’s important. Their differences become trivial.

                                        1. 6

                                          I really like acme, too. By the time I got a chance to really play with Plan 9 I already knew Emacs and I was too far gone to the other side but I do wish Emacs played better with the outside world, the way acme does. For a while I actually tried to hack an acme-mode for Emacs that would allow Emacs to interact with the outside world the way acme does but it didn’t get too far. This was like ten years ago though, maybe things are better now.

                                          IMHO, while the differences in UI are tremendous, the difference in philosophy, for lack of a better term, is not as deep. Acme is obviously more “outward”-oriented, in line with its Unix heritage, whereas Emacs is far more “inward”-oriented, partly because it comes from a different world, partly because it sought to re-create much of that different world under a different environment. But Emacs’ “internal” environment, while operating in terms of different abstractions than those of Unix (files, processes, pipes) into which Acme so seamlessly plugs, is remarkably similar to Acme’s.

                                          For example, one of the things I like the most, and which I find myself using quite frequently, is an 8-line function that replaces an S-exp with its value. This allows me to write something like One hour from now it'll be (format-time-string "%H:%M" (seconds-to-time (time-add (current-time) 3600))) in a buffer, hit F12 at the end of the line, and get all the parentheses replaced with what they evaluate to, yielding One hour from now it'll be 01:10 in my buffer. This is basically Acme’s command execution, except it’s in terms of Emacs Lisp functions, not processes + some conventional commands like Cut, Snarf etc. It’s… either more, or less powerful, depending on how much Emacs Lisp that does what you need you have around.

                                          (I know it’s a silly example, it’s just the easiest concrete one I could come up with on the spot. Over time I used it for all sorts of things – e.g. when testing a bytecode VM, I could write something like test_payload = (bytecode-assemble "mov r8, r12"), hit F12, and get test_payload = [0xDE, 0xAD, 0xBE, 0xEF] or whatever)

                                          But ultimately it still boils down to a text editor plus an environment from which to assemble a “text processing workbench”, except with different parts – Emacs has its own, Acme allows you to borrow them from the Unix environment. IMHO the latter approach is better. Lots of things, like real code completion and compiler integration support only landed in Emacs once it got reasonable support for borrowing tools from the outside.

                                          As for VS Code… ironically, the thing that keeps me away from it the most is that honestly it’s just not a very good text editor :-D. Not only could I live without Emacs Lisp, I’d be extremely happy without Emacs Lisp, the language that embodies the simplicity of Common Lisp, the stunning feature set of Scheme, and the notorious openness and flexibility of RMS in one unholy Lispgoblin whose only redeeming feature is that at least it’s not Vimscript. Most of the Emacs Lisp code I write is one-off code that gets thrown away within minutes and I could do without it, I don’t really use much of Emacs’ renowned flexibility (in fact, last time I tried to, I yelled at rustic for like half a day). But in order to get things that I now think are basic, like easy back-and-forth navigation between arbitrary positions in different files, a kill ring, a save register, easily splitting/unsplitting views, oh and if possible a user interface that doesn’t light up and blink like a Christmas tree, I need to install a few dozen extensions, and that doesn’t look like an editor that’ll hold up for fifteen years…

                                          1. 1

                                            Did you mean to say “the simplicity of Scheme and the stunning feature set of Common Lisp” instead?

                                            1. 4

                                              No, that was sarcasm :-D. Emacs Lisp has a bunch of interesting pitfalls (e.g. for the longest time it only had dynamic scoping) and is not a very straightforward Lisp, much like Common Lisp. However, its feature set is fairly limited, much like Scheme’s (albeit IMHO in poorer taste; I’d rather write Scheme most of the time). Hence the “simplicity” of Common Lisp and the “stunning feature set” of Scheme :-).

                                              1. 1

                                                😂 Alright!!

                                            2. 1

                                              The inward/outward comparison is apt.

                                              I ended up in acme after 15+ years of emacs, partly out of curiosity after watching an rsc screencast and partly because I needed a break from my yearly cycle: juggle elisp packages to add features, tinker, tinker too much, declare configuration bankruptcy and repeat again.

                                              I’m old enough that being able to quickly reuse existing tools is more valuable than the satisfaction of reimplementing them or integrating them into an editor. I do use other editors occasionally, and they are shiny and feel cool and certainly better at some tasks. But I end up back in the pale yellow expanse of tag bars and columns, because there’s another tool I need to interact with and I know I can get it done with files and pipes.

                                              I sometimes think about how much time and effort goes into editors and tooling integration, and wonder how we got to such a point.

                                              1. 1

                                                The “outward” vs. “inward” distinction WRT Bell Labs/Unix/C and PARC/Lisp/Smalltalk is a fascinating one. Reminds me of Rob Pike’s experience report after visiting PARC: https://commandcenter.blogspot.com/2019/01/notes-from-1984-trip-to-xerox-parc.html.

                                            3. 4

                                              This mirrors my own experiences with VS Code. It’s great, but it simply doesn’t have Emacs’ flexibility. The things that VS Code’s designers have envisioned people doing are great and easy to do, but if you try to do things besides that, you’re out of luck. VS Code does do a lot of things really well, but I keep coming back to Emacs (these days, Spacemacs) at the end of the day.

                                              1. 1

                                                That’s five lines of Emacs Lisp, bind them to a key (I can do all of that in the scratch buffer) and there we go.

                                                I found this line particularly intriguing as a vim user. Given that I also have some interest in learning Common Lisp, I may just have to give Emacs a try.

                                                1. 1

                                                  Is this not something you can do with vim? I’m not poking fun at it, I’m asking because it seems like a pretty small thing to change editors over. It may be far more useful to do this from the environment you already know.

                                                  I mean, I didn’t have some grand realization that Emacs is the best editor and thus attained eternal understanding of the Tao of text editors, it’s just that when I installed my first Linux distro, I installed it from one of those CDs that computer magazines came with and the CD was scratched or had sat too much in the sun or whatever, so it had a few corrupted packages that wouldn’t install. vim was one of them, emacs wasn’t, so I learned emacs (and for the longest time I didn’t really use it much – I still mostly used Windows for a couple of years, where I used UltraEdit). In 15+ years of seriously using Emacs, I don’t think I’ve ever used a feature my vim-using frenemies couldn’t replicate, or vice-versa. Other than sane, non-modal editing, of course :-P.

                                                  1. 2

                                                    I installed it from one of those CDs that computer magazines came with and the CD was scratched or had sat too much in the sun or whatever, so it had a few corrupted packages that wouldn’t install. vim was one of them, emacs wasn’t

                                                    I love this.

                                                    It’s very similar to my Emacs origin story. I was a dvorak typist (still am!) when I was in university and I knew I had to pick between vim and emacs. I looked at vim and it was like “just use hjkl; they are right there on the home row so it’s easy to press” and I was like “no they’re not!” and realized by process of elimination I might as well use emacs instead.

                                                    1. 2

                                                      The reason I use vim is very similar to the reason you use emacs: it’s the first one we learned. In your case because your CD was damaged, in my case because my friend was using vim. There’s no harm in giving the other an honest try. I just don’t know when I’ll have the time.

                                                      1. 1

                                                        Oh. Well in that case, have fun with emacs! The tentacle fingers are gonna come in super handy once you grow them.

                                                        (They did tell you about the tentacle fingers right?)

                                                1. 6

                                                  A word of caution: embedding the secrets in your apps is a big NO NO in AWS land. I am afraid this solution goes exactly against what you are supposed to do. In a real setup:

                                                  • you’ll create a role to access the bucket
                                                  • your app will call the STS (Security Token Service) assume_role_with_web_identity or similar (after successful authentication; you can use OpenID or SAML, or other federation if that’s your thing), as sketched below
                                                  • THAT will give you a set of tokens you can use to deal with the bucket.

                                                  Note that Amplify is likely the easiest way to deal with this currently.
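
                                                  If it helps make the flow concrete, a minimal Python/boto3 sketch (the role ARN, environment variable, and bucket name are hypothetical):

                                                    import os
                                                    import boto3

                                                    # AssumeRoleWithWebIdentity is an unsigned call: the OIDC token from
                                                    # your identity provider is the proof of identity, so no AWS secret
                                                    # needs to be embedded in the app.
                                                    sts = boto3.client("sts")
                                                    resp = sts.assume_role_with_web_identity(
                                                        RoleArn="arn:aws:iam::123456789012:role/app-bucket-access",  # hypothetical
                                                        RoleSessionName="my-app",
                                                        WebIdentityToken=os.environ["WEB_IDENTITY_TOKEN"],  # hypothetical: from your auth flow
                                                    )
                                                    creds = resp["Credentials"]  # short-lived AccessKeyId / SecretAccessKey / SessionToken

                                                    s3 = boto3.client(
                                                        "s3",
                                                        aws_access_key_id=creds["AccessKeyId"],
                                                        aws_secret_access_key=creds["SecretAccessKey"],
                                                        aws_session_token=creds["SessionToken"],
                                                    )
                                                    s3.get_object(Bucket="my-bucket", Key="some/key")  # allowed only by the role's policy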

                                                  1. 4

                                                    Came here to post basically the same thing - storing credentials is the wrong approach; assume a role that’s scoped to exactly what it needs and leave secret issuance to STS.

                                                    1. 4

                                                      I don’t understand how I can build my projects against this. If I’m going to call assume role, I need to have credentials that let me call that, right? So something needs to be stored somewhere.

                                                      Here are some examples of things I have built or want to build with S3:

                                                      • A backup script that runs nightly on a VPS via cron and sends data to S3. I want to set this up once and forget about it.
                                                      • A GitHub Actions workflow that runs on every commit, builds an asset of some sort and stores that in S3. This needs to work from stable credentials that are stored in GitHub’s secrets mechanism.
                                                      • A stateless web application deployed to Vercel that needs to be able to access assets in a private S3 bucket.
                                                      • Logging configuration: I want to use a tool like Papertrail and give it the ability to write gathered logs to an S3 bucket that I own

                                                      None of these cases feature an authenticated user session of any type - they all require me to create long-lived credentials that I store in secrets.

                                                      Can I use assume role for these? If so, how?

                                                      1. 1

                                                        For GitHub Actions you can now use OIDC to assume a role rather than long-lived credentials: https://docs.github.com/en/actions/deployment/security-hardening-your-deployments

                                                        1. 1

                                                          That does look like a good option for GitHub Actions - where my code is running in a context that has an authenticated session I can exchange for another token - but it doesn’t help for cron scripts or anything where I want my code to run in a situation that doesn’t have access to credentials that can be exchanged in that way.

                                                          Confession: I’ve read that GitHub documentation on OIDC a couple of times now and it gives me the impression that actually implementing it would take me the best part of a day to figure out (mostly at the AWS end) - it seems really hard! I wish it weren’t.

                                                      2. 1

                                                        There was an article that went around the other week about using AWS IoT to get temporary credentials for machines in a home lab: https://ideas.offby1.net/posts/automating-letsencrypt-route53-using-aws-iot.html

                                                      1. 1

                                                        Thank you!

                                                        1. 1

                                                          Workers has a ways to go; they still don’t have any kind of production logging capabilities, and troubleshooting problems at scale can be painful.

                                                          1. 12

                                                            These look really neat. Can’t quite tell a lot from the pictures, but I expect some sort of servicing guide to be excerpted later. Lots of OCP green “touch this”, like you can see on the screws around the CPU.

                                                            (Tagged the story as illumos because it looks like it is, based on https://github.com/oxidecomputer/illumos-gate/tree/cross.vmm-vm.wip )

                                                            1. 6

                                                              Also this: https://github.com/oxidecomputer/propolis

                                                              They mentioned that a hypervisor is part of the assumed stack, and it looks like it’s bhyve on illumos.

                                                              1. 7

                                                                Does anything in particular make Illumos a better hypervisor than Linux? I have no particular reason to believe either is better, except that Linux gets a lot more developer hours.

                                                                1. 2

                                                                  I would not be at all surprised if the maturity of the Illumos ZFS implementation was a bigger consideration than bhyve vs. kvm. Storage is just as much a part of this story as CPU or chassis, so having a rock-solid underlying filesystem that can support all those VMs efficiently seems like a good default.

                                                                  1. 3

                                                                    As far as I know, ZFS on Illumos and ZFS on Linux are the same these days (as of 2.0). Of course that wasn’t true when Oxide started, so you could be right.

                                                                    Thinking on it more, Bryan Cantrill does have years of experience running VMs on Illumos (SmartOS) at Joyent, and years more experience with Illumos / Solaris in general. Although I think SmartOS mixed in Linux KVM for virtualization, not bhyve.

                                                                    Ultimately I guess it doesn’t matter. Hosted applications are VMs. As long as it works, no one needs to care whether it’s Illumos, Linux, or stripped down iMacs running Hypervisor.framework.

                                                                    Another point for Illumos: eBPF has come a long way, but DTrace already works. Since they made a DTrace USDT library for Rust I think it’s safe to assume DTrace influenced their choice to use Illumos.

                                                                2. 2

                                                                  So this has me wondering what happens if they decide they need arm64 hardware? How portable is Illumos?

                                                                  1. 1

                                                                      It supports SPARC CPUs thanks to its Sun heritage, so it should be portable to ARM when needed, no?

                                                                  2. 1

                                                                    Missed opportunity for zones, then!

                                                                  3. 3

                                                                    illumos

                                                                    Did you expect Cantrill to support epoll and dnotify?

                                                                    1. 8

                                                                      epoll is actually terrible and was a huge argument against Linux game servers where I used to work.

                                                                      io_uring is pretty sane as an interface though.

                                                                  1. 2

                                                                    Plan9 noob here: I switched to Acme about a week ago (not 9front, just Acme), and my scroll wheel broke like 2 days later. Too much middle clicking! I think this is the 2nd mouse Acme has killed.

                                                                    First, I tried AcmeSAC.app, which is ok (where are my files?), but cd /usr/local/plan9; ./INSTALL works great on OS X (easy to find files, fonts work..).

                                                                    1. 1

                                                                        Let me say I am in the same boat with AcmeSAC.app! I loved it, and it was my introduction to Inferno and Plan 9. Unfortunately, it is not maintained any more, and I have no idea how to update it to the latest Inferno sources. I couldn’t even get it to compile.

                                                                      While using it on Mac, I found that the following worked if you had a trackpad: (copy = b1+b2 and paste = b2+b3 in ACME)

                                                                      • Option + click is middle click (b2)
                                                                      • Command + click is right click (b3)

                                                                        For cut, select text with the trackpad, then click option without releasing the click.

                                                                        For paste, click or select text with the trackpad, then click command without releasing the click.

                                                                        For copy, select the text, then click and release option first, then click and release command, which copies the text.

                                                                      1. 2

                                                                        On newer versions of plan9port, a 3-finger tap will simulate button 2, and a 4-finger tap will do the 2-1 chord for a given text selection to send it to another command. Command-x/c/v all work for cut/copy/paste as well.

                                                                    1. 1

                                                                        Sadly this is using an ARMv7 CPU, which is a 32-bit architecture, and not an AArch64 CPU.

                                                                      1. 2

                                                                        the second revision is using an i.MX8M so should be 64-bit: https://mntre.com/media/news_md/2019-05-20-reintroducing-reform.html

                                                                        1. 1

                                                                            Nice, that’s the same processor (family at least) as the Librem 5… theoretically it should be possible to run distros that support it (currently the best supported being PureOS, but also some others like postmarketOS) on the laptop without major changes too!