Threads for threkk

  1. 8

    This strongly reminds me of the common misconceptions about names. Getting text messages in Antarctica is a technical problem because many companies believe that every person has a mobile phone, which is likely a smartphone (and probably has an App Store account in the same country where the service is located, but that is my personal pain).

    1. 7

      The most amazing part is that the whole Stable Diffusion setup, which was a complicated process to install and get working, has turned in about a month into a simple app that you download to your computer. The true power of open source.

      1. 4

        Not sure if related, but the Gemini protocol makes use of TLS certificates to handle authentication.

        1. 3

          It’s authentication, Jim, but not as we know it. TLS certs are more a replacement for cookies in Gemini than anything else. That’s because each cert is self-signed, so there’s no real way to securely authenticate a certificate.

          Most clients allow a user to create a TLS keypair (they’re called identities in Lagrange) that can then be used as an identity for a specific gemsite, like for commenting or uploading.
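          To make that concrete, here’s a minimal sketch of a client presenting such an identity, using Node’s built-in tls module (the host and file names are hypothetical):

            // Gemini is TLS on port 1965; a request is just the URL followed by CRLF.
            import { connect } from "node:tls";
            import { readFileSync } from "node:fs";

            const socket = connect({
              host: "example.gemsite",            // hypothetical gemsite
              port: 1965,                         // standard Gemini port
              cert: readFileSync("identity.crt"), // self-signed keypair = your identity
              key: readFileSync("identity.key"),
              rejectUnauthorized: false,          // servers are self-signed too (trust on first use)
            });

            socket.on("secureConnect", () => {
              socket.write("gemini://example.gemsite/\r\n");
            });
            socket.on("data", (chunk: Buffer) => process.stdout.write(chunk));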

          1. 1

            Thank you!

          1. 4

            iPhone 14 “Pro” still, for some reason, has a Lightening port which is limited to USB 2.0 speeds. Why!?

            1. 8

              I honestly can’t think of a single situation in which I would ever transfer data to my phone using a cable. In fact, I can’t think of a single situation in which the cable has been used for anything other than charging the thing since… I don’t know, maybe the iPhone 3G?

              1. 4

                It may be out of fashion but there remains a surprisingly comprehensive tool built into macOS Finder for syncing your data with an iPhone. For someone who doesn’t want to use the cloud for contacts, music, photos etc. they could get on perfectly well with a Mac, iPhone and Time Machine for backups. And they might appreciate higher speeds!

                1. 3

                  The speed of cloud syncing is usually irrelevant because it happens in the background. The problem with Apple is they don’t provide a “sync now, dammit” button for the times when you actually do want it to sync now.

                  1. 2

                    This used to bug me, but for the last couple years the syncing has been so low latency I haven’t had problems. Maybe my experience is in some way better than average, but for me syncing 50 new photos means waiting a minute tops.

                    1. 1

                      For me, it sometimes means waiting for days. Also, occasionally it just gets stuck, and the only solution seems to be to delete and recreate the photos library…

                      1. 1

                        I have been using the same photo library since 2013, resynced onto new MacBooks and iPhones over the years through iCloud. The initial sync takes an eternity, as it’s now ~150 GB, but other than that, no problems. I wonder what’s causing our differing experiences.

                        1. 1

                          Same here. Something else has to be going on for 4ad.

                  2. 1

                    My experience for many years has been that wifi provides substantially higher throughput than any hardwired connection less than gigabit ethernet. Faster than Lightning, faster than any USB, faster than Thunderbolt. Not yours?

                  3. 2

                    If you’re using the ProRes recording, it would be nice to be able to ingest footage to a Windows PC in a timely manner

                    EDIT: or, hell, to a Mac without resorting to AirDrop

                    1. 1

                      When I buy a new phone, I typically take a backup of the old one and restore it to the new one via cable. For the other three years, yeah, charging only.

                    2. 2

                      iPhone 14 “Pro” still, for some reason, has a Lightening port which is limited to USB 2.0 speeds. Why!?

                      I couldn’t find any information there saying that it is limited to USB 2.0 speeds. USB-A-to-Lightning (no e BTW) cables are USB 2.0, but they will provide a USB-C-to-Lightning cable with the new iPhone 14.

                      1. 1

                        GSMarena lists the phones as USB 2.0 proto.

                        1. 1

                          I was hoping for an authoritative source.

                      2. 1

                        Backwards compatibility. I think their plan is to move to wireless charging and wifi/bluetooth for everything eventually. Just closing off all physical connectors completely, much like they did with Apple Watch.

                        1. 6

                          They will do whatever to avoid complying with the EU…

                          1. 8

                            I’d argue it has nothing to do with the EU. Their perspective (I believe) is that cables are stupid and need to die. They clearly have won that war around headsets. They haven’t managed to with charging/sync cables yet, though I’d argue they haven’t really started waging that war, as they don’t quite have everything they need in place.

                            I’ll be entirely unsurprised if within the next 5-ish years iPhones literally have no physical connections whatsoever. Maybe they still have a button or three, but I imagine even those will likely die eventually too. They basically said it in their presentation today: “You shouldn’t know where hardware and software ends.” -@calvin’s comment above.

                            1. 2

                              60 GHz is pretty sweet, per a friend who’s messed around with it, and Apple’s using it in the Watch for some limited stuff. A Qi-plus-60 GHz-only iPhone is quite likely. It’ll take some getting used to, but I expect it’ll become normal pretty quickly.

                              1. 1

                                They clearly have won that war around headsets.

                                Have they? Wired headphones still win for shared hardware use (charging and re-pairing is annoying) and for musicians (delay is and always will be there).

                                1. 2

                                  delay is and always will be there

                                  But is it noticeable? I’d be surprised if even a 1ms delay is noticeable to a human (for audio - it’s the absolute maximum latency for haptics) and that’s at the upper bound of local wireless latency unless there’s a lot of EM noise causing retransmissions.

                                  Almost all of my gaming now is via the Xbox cloud gaming service, where the game runs on a computer in a datacenter 12ms away from my Xbox, the controller is connected to the Xbox via Bluetooth, and the audio goes via a DTS-encoded digital connection to a decoder, which then sends it to an analogue amplifier. In spite of all of the things there that are causing delays (the Internet round trip between pressing a button and getting a frame / audio sample back, in particular), I can’t tell that a game isn’t running locally (unless it’s a Series X game, in which case I notice that it looks better than if rendered locally).

                                  1. 4

                                    A 1ms delay is absolutely fine, that’s less than what you get from a MIDI-connected electronic drumkit. Also, wireless in-ear monitors have been a thing in studios for years now. I think their delays are a little higher than that. I don’t know the exact figures (I haven’t been in a band since high school, so I’m not exactly up to date with the latest technology :-P), but the professional musicians I know swear it’s comparable to the delay from a digital effects pedal, so… 2-3ms, maybe a little higher?

                                    There’s no “minimum threshold” for musicians – the latency at which it starts to mess with your playing/singing depends a lot on setup and instruments. Also, because any setup has delays, you kind of learn to deal with them – singers routinely deal with 10-15ms delays in studios, and it probably sucks the worst for them, because they get to hear their voices immediately through their sinuses and face bones. As long as the delays are constant, you can handle them surprisingly well.

                                    (Edit: FWIW, the thing that really kills it, even at low latency figures, is jitter. Way back, when I was much younger and in a band and couldn’t afford a real bass amp, I’d lug my old Pentium to practice and use… gnuitar, I think (I started with something else but I can’t recall what; I switched to gnuitar after a few years). The poor thing could give me 20ms of latency (and really bad sound) on a good day, and it was good enough. 10ms, or 15ms, are just fine, but if there’s jittering between 10 and 15ms, I’m no longer able to reliably play behind or ahead of the beat.)

                                    1. 3

                                      The delay of Bluetooth headphones is on the order of hundreds of milliseconds. It’s so bad that video players have to compensate for it. The wireless audio stuff for the pro market is UHF, and last I checked it was all analog.

                                      I have no idea why Bluetooth headphones, including Apple’s W2 headphones, are so bad and have such high latency, but they do. You can certainly have low-latency wireless audio, just not with any existing consumer standard.

                                      1. 2

                                        It really depends on the codec / OS processing though. For example, the difference between Windows and Linux/pipewire is insane. The latter has next to no delay for cases like media playback.

                                        1. 1

                                          Yeah, I have no idea what kind of latency Bluetooth headsets have; I’m all wired, but only as a matter of personal preference (easier troubleshooting, no batteries to die/recharge, etc.). Hundreds of ms would drive me nuts just for everyday listening :-D.

                                      2. 2

                                        It really depends on what you’re doing. For example, if you try to play an electric guitar with effects done on the computer and played back into headphones, anything above 5ms is clearly noticeable. But most systems can’t achieve even that out of the box (you need either exclusive mode on Windows or pipewire/jack on Linux).

                                        For games, I guess even if the sound is one frame behind it’s not going to be that easy to tell.

                                      3. 1

                                        I agree wired headsets have their use cases, and they likely always will.

                                        I would argue they absolutely have won that war. People are still buying iPhones and iPads in huge numbers, even though none of them have headphone jacks anymore.

                                        Many Android phones have followed Apple down this path as well and no longer ship with headphone jacks.

                                  2. 2

                                  Backwards compatibility with what? Charging cables? A watch doesn’t take video or photos, or store files. Fast transfer speeds for video are pretty important to most workflows, and wireless is not going to be able to keep up. The backwards compatibility argument doesn’t hold up when you consider that literally everything else has moved onto the USB-C standard, including their iPads and MacBooks.

                                  Edit: if they were really concerned, they’d make a USB-C male to Lightning female adapter. Problem solved.

                                    1. 9

                                      The drawers full of Lightning cables iPhone users already have.

                                      1. 3

                                      100%. Plus all the various Lightning devices out there in the world. I think @ngp’s definition of backwards compatibility and ours are quite different.

                                      @ngp My perspective is, they consider wires stupid. They have waged war (and won) when it comes to wired headsets. They haven’t quite started the war on charging/sync cables, but they are working on it. I imagine they didn’t expect it to take quite so long, as I agree Lightning is looking a bit long in the tooth now, but it would be idiotic of them to switch to USB-C for a few years while they finish off their “cables are terrible” war.

                                      2. 4

                                    The USB-C port is still far more fragile than the Lightning port. It’s easier to replace a Lightning cable than to take apart an iPhone and desolder a USB-C port and replace it because the little pins in the middle have been damaged.

                                        1. 2

                                      This, a million times! The Lightning connector is pretty much bulletproof.

                                          1. 1

                                        It definitely hasn’t been in my iPhones. Earlier this week, I woke up next to a phone that hadn’t been charged despite being plugged in all night.

                                            1. 1

                                              Sounds like you need to clean the port or replace the cable. The only times I’ve had a connection issue have been when there’s been fluff in the port or the cable has a discoloured pin and will only connect the other way around. They don’t seem to be fixable when that happens but I don’t know if I’m missing a technique that works.

                                  1. 2

                                    I haven’t read through all of the post yet but it looks exactly like the tutorial I’ve thought about writing countless times while helping some colleagues with React! It clearly demonstrates lots of pitfalls I’ve seen people fall into with re-renders and even has some nice visualizations. I’ll definitely share this with my colleagues.

                                    1. 2

                                      I have to admit that I am myself one of those people updating apps to avoid re-renders.

                                    1. 1

                                      The depth of this article and its research is equally impressive and scary. I wonder how long the whole process took.

                                      1. 4

                                        We’ve been working on some updates that will allow Deno to easily import npm packages and make the vast majority of npm packages work in Deno within the next three months.

                                        On the one hand, good on them for recognizing a major limitation and doing something about it. On the other hand…

                                        import express from "npm:express@5";
                                        

                                        This syntax introduces yet another module resolution algorithm in addition to and incompatible with the ones that already exist in:

                                        • The browser spec
                                        • Node
                                        • webpack, Vite, and other bundlers and build tools
                                        • The TypeScript language server

                                        I’m sure they’d like to avoid reinventing package.json, but it seems like there ought to be someplace outside of the source where package installations can be managed instead of hacking npm into the module name.

                                        1. 5

                                          I don’t see it as THAT bad… it builds on the same logic as the node:xxx modules in Node.js, which is the closest thing to a standard in backend JavaScript.

                                          1. 3

                                            but it seems like there ought to be someplace outside of the source where package installations can be managed

                                            I agree 100%. But that’s sort of a fundamental issue with Deno’s whole approach. In reality, it’s extremely useful to use abstract package names in the source and provide a mapping between abstract package name and concrete package implementation externally. Deno’s (and Go’s) rejection of that idea is unfortunate imo. Go has mostly reversed their direction, where the import URLs are now abstract package identifiers which are resolved using go.mod; maybe Deno should do the same.

                                            1. 2

                                              Deno does have a go.mod equivalent: https://deno.land/manual/linking_to_external_code/import_maps

                                              (This is a standard and was not invented by Deno. https://wicg.github.io/import-maps/)

                                              As well as a lock file for integrity checking: https://deno.land/manual/linking_to_external_code/integrity_checking

                                              (This is Deno-specific)
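                                              For reference, an import map is just a small JSON file, something like this (the esm.sh mapping is an illustrative choice, not the only option):

                                                {
                                                  "imports": {
                                                    "express": "https://esm.sh/express@5"
                                                  }
                                                }

                                              Passed via deno run --import-map=import_map.json main.ts, it lets the source use the abstract specifier import express from "express"; while the concrete resolution stays external.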

                                            2. 3

                                              Yeah, like: one thing that really confused me was why they didn’t do something like import express from npm("express@5"), which, although I suspect it isn’t technically valid import syntax, has the benefit that it could simply expand out to e.g. https://esm.sh/express@5, and therefore keep the existing, clean import system Deno already has.

                                              I feel for Deno. They’re between a rock and a hard place on innovating v. breaking all backwards compatibility. But I feel as if this is a small step in the wrong direction that’ll be very, very hard to unwind from.

                                            1. 2

                                              I was shopping for a test runner not long ago. I saw the new Node test runner and experimented with it a bit, but gave it a pass mostly because of its experimental status.

                                              It kind of reminds me of a test runner I once used called tape, which also generates TAP output. My initial reaction when I found tape was, “Brilliant! Yes, let’s use an existing, standardized test output format.” In practice, I found that I didn’t actually care that the test runner was separate from the output formatter. All I cared about was whether the output showed me what I needed when a test failed. In JavaScript, tests are frequently deep equality checks, either in data that can be serialized as JSON or occasionally in presentation components written in a template or markup language. Someone wrote a TAP output formatter to display Jest-like diffs between expected vs. actual deeply nested values, but I wasn’t able to get it to work with tape. Everything past the second or third layer of a given data structure was incorrectly rendered as NULL. I haven’t tried deep equality checks on the Node test runner yet, but that will be the first thing I check if I ever give it another look.
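                                              For anyone who hasn’t seen it, TAP output is plain text along these lines (a hand-written illustration, not output from a real run):

                                                TAP version 13
                                                1..2
                                                ok 1 - parses config
                                                not ok 2 - deep equality of nested data
                                                  ---
                                                  expected: { a: { b: [1, 2] } }
                                                  actual:   { a: { b: null } }
                                                  ...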

                                              TypeScript support (referred to in this article obliquely as transpilation) is also pretty important. I don’t expect the Node test runner to ever support TypeScript since Node doesn’t have first class TypeScript support anywhere else. Lately I’ve been using ts-jest, but I’m kind of regretting it due to some ambiguity about the correct file extension to use on TS module imports, which is arguably a flaw in TypeScript itself rather than ts-jest. Were it not for that, I would be pretty happy with my choice. When a test written in TypeScript fails, I’m far more confident that the cause is a flaw in the source rather than a typo in the test code.

                                              Is anyone else here writing unit tests in TypeScript? What’s your setup?

                                              1. 2

                                                 Personally I have been using vitest. It is more straightforward to set up than jest, and supports TypeScript. Unless you need some very specific feature or integration of jest, I would pick vitest over jest in any situation.
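                                                 As a sketch of how little setup it needs (the file and test names here are made up), a test file is just:

                                                   // example.test.ts - Vitest picks up *.test.ts files; TypeScript works out of the box
                                                   import { describe, it, expect } from "vitest";

                                                   describe("deep equality", () => {
                                                     it("shows a structured diff on failure", () => {
                                                       expect({ a: { b: [1, 2] } }).toEqual({ a: { b: [1, 2] } });
                                                     });
                                                   });

                                                 Then run it with npx vitest.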

                                                1. 2

                                                  And it supports esm out of the box, which I could not get working with jest at all.

                                                  1. 1

                                                     To be honest, in my experience, getting Jest to do anything but the basics is like trying to break a rock with your head.

                                                    1. 1

                                                      Hah. I haven’t had to configure Jest myself until this recent experience trying to use it in a new esm-only project, but as a user it’s always worked perfectly for me in the past. 🤷🏽‍♀️

                                                2. 2

                                                  We’re using Jest, and it’s pretty terrible. Slow, memory leaks galore, and diffs in test failures appear to be text-based, so it’s often hard to understand which part of the diff is actually meaningful. First time I’ve heard of vitest, might be worth giving it a try on one of our larger projects.

                                                1. 1

                                                  I’m using node:test with node:assert for a project with minimal testing needs. One challenge I ran into was the lack of lifecycle hooks like afterEach, as my tests need to clean up after running (even on failure). Luckily, beforeEach and afterEach just landed in main.
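                                                   For reference, a sketch of what that looks like once a release includes the hooks (based on the API that landed in main):

                                                     import { test, beforeEach, afterEach } from "node:test";
                                                     import assert from "node:assert/strict";

                                                     beforeEach(() => { /* set up fixtures */ });
                                                     afterEach(() => { /* clean up; runs even when a test fails */ });

                                                     test("deep equality", () => {
                                                       assert.deepStrictEqual({ a: [1, 2] }, { a: [1, 2] });
                                                     });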

                                                  1. 2

                                                     This is what I was talking about :) It just needs some polishing in the rough areas to become a very decent testing solution.

                                                  1. 3

                                                      I have an M1 Air because I hate the Touch Bar, but I think those are gone now on Pros. To be perfectly frank, any of them since they fixed the keyboard issues are going to be great in terms of build quality. It really comes down to what you want to spend.

                                                    1. 1

                                                        You just convinced me to get an Air as my next device.

                                                    1. 15

                                                        I have a MacBook Pro M1 at work, and it is an amazing machine: silent, light, and incredibly powerful. I have a quite decent personal Windows machine, which I got during the dark ages of the MacBooks, that feels like a turtle next to it. The next personal machine I am buying, once my Windows machine passes away, is going to be whatever is the latest Mx on the market.

                                                      1. 9

                                                          +1. If you need a bit of computing power, go for a MacBook Pro. The M1 in there has more cores, and thus more power, than e.g. the MacBook Air with M2. I’m doing fresh builds of Firefox in less than 10 minutes on an MBP, compared to 3 minutes on a maxed-out Ryzen Threadripper or over 60 on a ThinkPad X390.

                                                        1. 1

                                                          I also have an M1 MBP at work. It’s great and, yes, almost always silent. But I’d hardly call it light—that’s probably its biggest downside in my book.

                                                        1. 6

                                                            I think a framework like this will help improve adoption (in the end, you need some product to drive the adoption of a language, like RoR for Ruby or Flutter for Dart), but I am sad that the only “official” way so far to deploy your code is using their hosting platform.

                                                          1. 6

                                                              Yes, that’s what I thought too. The new open source feels like it is becoming the freemium model. “Freemium source”, you heard it here first, folks.

                                                            But, all sarcasm aside, I do also feel glad that the money goes to the people who actually build the things and write the code.

                                                            1. 4

                                                                I believe one can just point the deno binary at a thing and run it. There’s no magic; you could make a Dockerfile with like 3 lines and host it on fly.io or whatever.
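                                                                Something like this, roughly (a sketch; the image tag, permission flag, and entry file are assumptions):

                                                                  FROM denoland/deno:alpine
                                                                  WORKDIR /app
                                                                  COPY . .
                                                                  ENTRYPOINT ["deno", "run", "--allow-net", "main.ts"]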

                                                              1. 2

                                                                  There’s no vendor lock-in anywhere. Just run the Deno runtime wherever, however you want.

                                                                1. 2

                                                                    I think they meant Deno Deploy, from the example project.

                                                              1. 36

                                                                I owe Atom my decision to start using Vim. I used it around 2015 and it crashed when opening big JSON files. Tired of that, I looked for an editor which didn’t freeze my computer when dealing with big files, ended up learning Vim, and I have not stopped since then (I guess I don’t know how to exit :^) )

                                                                1. 3

                                                                  nvi is the next step

                                                                1. 6

                                                                  Beware when you have conflicts in the dependency files (package.json, go.mod, requirements.txt…). Do not remove dependencies until the very end of the merge, when you are sure that they are not needed. Add any dependency suggested during the conflict resolution and decide at the end if it is necessary.

                                                                  Or just create file .gitattributes with:

                                                                  your.lockfile merge=binary
                                                                  

                                                                  Then Git will only mark the file as conflicted without inserting conflict markers, which allows you to simply run your-dep-tool update to regenerate the lock file for the new configuration. This makes working with lock files much neater and cleaner.
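                                                                  With that in place, a conflicted merge reduces to something like this (your-dep-tool and the file name are placeholders, as above):

                                                                    git merge main            # lock file is marked conflicted, with no markers inside
                                                                    your-dep-tool update      # regenerate the lock file from the merged manifests
                                                                    git add your.lockfile
                                                                    git merge --continue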

                                                                  1. 1

                                                                    Wow, I’ve never heard of that. That’s a great tip!

                                                                    1. 1

                                                                      This one is a great tip! This aligns with the recommendation of regenerating files.

                                                                    1. 23

                                                                      If I had to give two real high-dollar items for making git branching less painful:

                                                                      • Rebase, not merge, your feature branch. Do not squash.
                                                                      • Rebase your feature branch against master/main/whatever at least every day.

                                                                      I’d also suggest knowing when to cut losses–if you have a zombie branch a few months old, take the parts that are super neat and throw away the rest.
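                                                                      Concretely, the daily ritual is just this (remote and branch names are whatever your project uses):

                                                                        git fetch origin
                                                                        git rebase origin/main    # small, frequent conflicts instead of one giant one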

                                                                      1. 7

                                                                        This is pretty much it. GitHub exacerbates this problem because its pull request workflow based on branches is broken. What you really want is a pull request on a cherry-picked range of commits onto another branch. That way you can have commits from your branch flowing into main while you continue to develop along your branch.

                                                                        1. 2

                                                                          Indeed, any of this advice would have solved the problem. Instead, I ended up spending almost a month just trying to untangle the mess.

                                                                          1. 3

                                                                            My condolences, friend. Happens to all of us eventually.

                                                                          2. 2

                                                                          Some time ago I set up an alias for git cherry-pull <mainline>, which (rebase style) lets you assign each commit to a new branch, pushes them, then opens the ‘new pull request’ page for each branch.

                                                                            I should dust off the code and publish it.

                                                                            [edit] A colleague pointed out that a script needs to be ~perfect because it obscures the details of the git operations, while doing it longhand keeps them front of mind. Might explain why I rarely use it anymore.

                                                                            1. 1

                                                                              Sounds nifty! Is the interface like rebase interactive? And I’d recommend a new name, because google autocorrupts it to cherry-pick.

                                                                              1. 1

                                                                                I’ve written similar scripts. The two things that I kept hitting are:

                                                                                1. Getting this to play nicely with rebase -i and merging/splitting commits is…not trivial.
                                                                                2. Automation in PRs is often triggered by the main/master branch, so all but your bottom PR doesn’t get checked.
                                                                            2. 3

                                                                          It seems to be fairly unmaintained, but I really like git-imerge for long-lived branches. It does pairwise merges of each of your commits against each of the upstream ones, giving you an NxM matrix of all possible merges. It builds this as a frontier from the top-left (shared parent) to the bottom-right (final merge). You get to resolve conflicts exactly on the pair of your commit vs. theirs that introduced them. You then have the full history for bisecting if the end result doesn’t work. You can then choose one of three ways for it to clean up the resulting history:

                                                                              • The equivalent of a git merge.
                                                                              • The equivalent of a git rebase.
                                                                              • A rebase-with-history, where it gives you a rebase but also sets the parent of each of your commits such that downstream users can still merge from your branch.
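                                                                          From memory, usage runs roughly like this (option and branch names as I recall them; check the README):

                                                                            git imerge start --goal=merge --name=my-merge their-branch
                                                                            # resolve the single pairwise conflict it stops on, git add the files, then:
                                                                            git imerge continue
                                                                            # repeat until the frontier completes, then simplify the history:
                                                                            git imerge finish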
                                                                              1. 1

                                                                            I’ve tried git-imerge recently, in a situation where two branches had diverged by hundreds of commits, but many commits were common, cherry-picked from one side to the other.

                                                                            The performance was catastrophic. After half an hour of it eating my CPU with absolutely no progress information, I investigated externally and saw imerge was painstakingly creating one tag per commit, so it had created hundreds of tags and still wasn’t finished. Not knowing how long that would take, what its next step would be, or how long that would take either, I understood it was simply not designed for my case, so I stopped it, cleaned up the mess it had created, and uninstalled it.

                                                                                1. 2

                                                                              The worst case for me was merging 8 months of LLVM changes into a tree that had had active development over that time. Several thousand upstream commits, over a hundred local ones. It took about two weeks of CPU time but, critically, only about half an hour of my time. Fixing the conflicts was incredibly easy because it showed me the precise pair of commits where I and upstream had modified the same things. I’d done a similar merge previously without the aid of git-imerge and it took well over a week of my time.

                                                                                  In general, if I can trade my time for CPU time, I’m happy. I can trivially buy more CPU time, I can’t buy more of my time.

                                                                              2. 2

                                                                                I’d recommend squashing before the rebase simply so there’s only one commit you have to resolve conflicts on.

                                                                                But yes, rebase often. This is the way.

                                                                              1. 3

                                                                                I wonder what will happen when he discovers that VS Code is everywhere nowadays…

                                                                                1. 1

                                                                                  This week I finished a small tool to help me learn numbers in Korean. I shared it among my classmates and got some feedback to improve the UX, so I will spend some time this weekend fixing stuff.

                                                                                  1. 1

                                                                                    I have my AWS certification renewal exam in a couple of months, so it is time again to refresh my knowledge of hard drive types for EC2…

                                                                                    1. 7

                                                                                      Funny story: I was born and raised in a French household and went to French schools until my master’s, yet I’ve been told by a few people in town that they thought I was a native English speaker because I’ve apparently developed an accent when I speak French—likely from working in English companies and having an Anglophone family—and I now find it more difficult to find the right words in French than in English.

                                                                                      1. 2

                                                                                        I can relate to that. Spanish speaker, living and working in English for the last third of my life: communication in my mother tongue has often become a challenge because I tend to forget expressions that use uncommon words I used to know, or it feels more natural to use English expressions that I use all the time. At this point I’m starting to believe I am forgetting it.

                                                                                        1. 2

                                                                                          I’m Croatian, and even though my English is flawed, I’m doing quite fine, better than a lot of non-English speakers. I rarely have to look for words or expressions, and I’m mostly even thinking in English if I’m speaking English. But I moved to Germany recently, and with German it is a huge effort. I feel after 8 years I can do okay, but after a long day of taking in German, I feel mentally exhausted.

                                                                                          But the funniest thing is, I’m now sometimes getting stuck with my English because I’m looking for how to translate something from German, not from Croatian! I don’t know why that is, but it’s very interesting.

                                                                                          1. 1

                                                                                            I often get people very confused as to where I’m from, as when I speak English I have a weird mixture of accents. I still have my native accent in my native language (Welsh), but don’t get to use it often, as I live in Norway. So at some point, I managed to pick up some accents that can’t be placed, and I’m not really sure how that happened. It doesn’t bother me much; it’s fun to hear people guess all the random places they think I’m from.

                                                                                            The big downside is that people assume I’m a native English speaker, and they’re surprised when I tell them that I did all my education up to university in Welsh. It took me a long time to learn how to write properly in English.

                                                                                            1. 1

                                                                                              Heh, seems kind of like being Icelandic almost, except I also did my bachelor’s at university in Icelandic.

                                                                                          1. 1

                                                                                            Some time ago I created a little app to help you solve Wordle puzzles. Some friends suggested some improvements, so I will spend some time implementing them.