1. 9

    Absolutely nothing.

    I’m a week into putting my unlimited vacation to the test by taking a month off to figure out if I hate my job or am just super burnt out.

    I’m hoping that I have enough energy to start playing video games properly soon.

    1. 2

      Good luck on figuring it out. I know that feeling and it’s not fun. Try to take some time for yourself that’s not job-related!

      1. 1

        Thanks!

    1. 1

      After 3 years of work, we’ve finally released: https://community.smartthings.com/t/announcement-smartthings-edge-for-devices-and-automations/229555

      Now, I’m supporting the release and trying to knock off a couple of bugs before our code freeze for the next release: https://community.smartthings.com/t/smartthings-edge-developer-beta-known-issues-and-bug-tracking/230389

      It feels so good to finally be able to talk about this.

      Edit: I’m also slowly trying to figure out how to dump all the game carts I still have from my childhood so I can legally emulate them in preparation for getting a Steam Deck.

      1. 2

        At the moment, I’m sitting on my balcony while it (finally) rains, sipping on a mug of Masala Chai, and reading Rust for Rustaceans.

        It’s a good day.

        1. 12

          Nothing. I just decided that I’m too burnt out to keep going when I sat down to start working this morning. I’ve got unlimited vacation, so fuck it, I’m taking two weeks off.

          1. 4

            From someone who once took months to act on burn-out: great job on recognizing the situation and doing something about it :)

            1. 1

              Thanks, unfortunately I’ve known I was on the way to burning out for quite a while. These two weeks definitely aren’t going to be enough to get me fully back to 100%, but I’m hoping they’re enough to get me through a big release we have coming up, after which I’m either going to take a month or two off or just quit if they won’t let me do that.

              1. 3

                I don’t know you, or the details about your situation, but if you need two months, you should take two months right now. Fuck that big release. Even if it can reasonably be expected that it is your responsibility to make sure it goes well, your health is still more important and the company should be able to deal with that situation. That is their responsibility. (After all, you could also walk under a bus tomorrow.)

                Also consider consulting a professional specialized in these matters. It may help speed up recovery.

          1. 6

            Other than MacBooks, are there any UltraBook (or relatively slim) laptops using a 16:10 aspect ratio? I can work with 16:9 on a 13” display and love my PBP and ThinkPads, but I prefer a bit more vertical space (hell, I prefer my iPad Pro’s 4:3 at that size).

            1. 5

              The latest (gen 9) Thinkpad X1 carbon switches to 16:10.

              1. 4

                I use an old IBM ThinkPad with a 4:3 1024x768 display. It’s pretty great. (And I can look at it without squinting!) But it’s twenty years old, so I can’t watch videos on it or use modern web browsers.

                That said, I’m happy that vendors are finally exploring aspect ratios other than 16:9, which is arguably the worst one, at least for laptops.

                1. 1

                  And I can look at it without squinting

                  I thought that Thinkpads from the 4:3 era predated LED backlights; isn’t it extremely dim? I’ve honestly been tempted to pick up an older 1600x1200 but the idea of going back to a CCFL just seems like a step too far.

                  1. 2

                    In a sunny room, it’s pretty dim, but workable. I use light themes predominantly. Not sure what kind of backlights it has exactly. Definitely worse than my X1 Carbon 3rd gen.

                    But I’d personally take a dim screen over a high-dpi screen. The X1 sees little to no use because GUI scaling never works well and everything is too small without it.

                    1600x1200 might not be too bad, though, depending on the size.

                    1. 2

                      You can run a HDPI display at a lower resolution, and it generally looks amazing since the pixels are so small you see none of them (whereas that’s all you see when running a 1024x768 ~13” native display)

                      1. 1

                        Well, you can only run it at half resolution, right? Doesn’t work out too well unless you have really high dpi. 1920x1080/2 is 960x540, which is a very low resolution for 13".

                        But I don’t know what you mean about pixels. I don’t “see” the pixels on any of my laptops, regardless of resolution. The only screen I’ve ever been able to visually see the pixels on was the pre-Retina iPhone.

                        1. 1

                          Well, you can only run it at half resolution, right? Doesn’t work out too well unless you have really high dpi. 1920x1080/2 is 960x540, which is a very low resolution for 13”.

                          HDPI is not a resolution, it’s pixel density. I don’t think you’re limited to /2 scaling. I’ve certainly done that (e.g. 4k display at 1080p), but also have run a 4k display at 1440p or a 1080p display at 1280x720.

                          But I don’t know what you mean about pixels. I don’t “see” the pixels on any of my laptops, regardless of resolution.

                          Strange, I see them on my partner’s 1366x768 IPS thinkpad x230 display. Maybe it’s one of those things that once you see, you can’t unsee it.

                          1. 2

                            HDPI is not a resolution, it’s pixel density.

                            Yes, I know, that’s why I specified the size of the screen as well as the resolution.

                            I’ve certainly done that (e.g. 4k display at 1080p), but also have run a 4k display at 1440p or a 1080p display at 1280x720.

                            Hm. A 1920x1080 display should not be able to – properly – run at 1280x720 unless it is a CRT. Because each pixel has an exact physical representation, it won’t align correctly (and the screen will thus be blurry) unless the resolution is exactly half of the native resolution (or a quarter, sixteenth etc.).

                            Strange, I see them on my partner’s 1366x768 IPS thinkpad x230 display. Maybe it’s one of those things that once you see, you can’t unsee it.

                            Yeah, strange! As I said, I saw them on the iPhone <4, so I sort of know what you’re talking about, but I’ve never seen them elsewhere.

                            Perhaps it really depends on some other factor and has little to do with dpi after all?

                      2. 2

                        My home desktop has a lovely 1600x1200 20” monitor that we pulled off the curb for free. It’s actually such a pleasure to use; too bad so much modern software is designed specifically for widescreen.

                  2. 4

                    The frame.work laptop is 3:2 (And the pricing seems not too bad either - looks close to double the performance of my ‘12 Retina MBP, nicely configured, for ~US$1300, but my MBP is still running fine for what I’m using it for)

                    https://frame.work/products/laptop-diy-edition

                    1. 2

                      The XPS 13 has a 16:10 display now and even has an OLED option. Developer Edition (aka it with Ubuntu): https://www.dell.com/en-us/work/shop/dell-laptops-and-notebooks/new-xps-13-developer-edition/spd/xps-13-9310-laptop/ctox139w10p2c3000u

                      I’ve been eyeing it up for a while now myself.

                      1. 3

                        Note that, IIRC, OLED laptop displays are kind of weird on Linux because the traditional model of ‘just dim the backlight’ doesn’t work. I don’t know what the current state of the world is, but I definitely remember a friend complaining about it a year-ish ago. I personally wouldn’t go for it unless I could confirm that there was something working well with that exact model.

                        1. 4

                          Thanks for the heads up. I’m not seeing anything that definitively says it’s fixed now, but it does sound like there’s at least a workaround: https://github.com/udifuchs/icc-brightness

                          Hopefully by the time I actually decide to get one there will be proper support.

                          1. 1

                            Huh, I thought the display panels would translate DPCD brightness control into correct dimming. Looks like I might be right: e.g. for the Thinkpad X1 Extreme’s AMOLED panel there is now a quirk forcing DPCD usage.

                        2. 1

                          Pretty much everything in the Microsoft Surface line is 3:2.

                          1. 1

                            All the 3:2 displays I’ve seen have been glossy; do any exist in matte?

                          2. 1

                            X1 nano

                          1. 4

                            All these compiler errors make me worry that refactoring anything reasonably large will get brutal and demoralizing fast. Does anyone have any experience here?

                            1. 20

                              I’ve got lots of experience refactoring very large rust codebases and I find it to be the opposite. I’m sure it helps that I’ve internalized a lot of the rules, so most of the errors I’m expecting, but even earlier in my rust use I never found it to be demoralizing. Really, I find it rather freeing. I don’t have to think about every last thing that a change might affect, I just make the change and use the list of errors as my todo list.

                              1. 6

                                That’s my experience as well. Sometimes it’s a bit inconvenient because you need to update everything to get it to compile (can’t just test an isolated part that you updated) but the confidence it gives me when refactoring that I updated everything is totally worth it.

                              2. 9

                                  In my experience (more with OCaml, but they’re close), errors are helpful because they tell you what places in the code are affected by the refactoring. The ideal scenario is one where you make the initial change, then fix all the places that the compiler errors at, and when you’re done it all works again. If you used the type system to its best, this scenario can actually happen in practice!

                                1. 4

                                    I definitely agree. Lots of great compiler errors make refactoring a joy. I somewhat recently wanted to add line+col numbers to my error messages and simply made the breaking change of adding the location field to my error type, then fixed compile errors for about 6h. When the code compiled for the first time it worked! (Save a couple of off-by-one errors.)

                                    It is so powerful to be able to trust the compiler to tell you the places where you need to make changes when doing a refactoring, and to catch a lot of the other errors you might make as you quickly rip through the codebase. (For example, even if you get similar errors for missing arguments in C++, quickly jumping to random places in the codebase makes it easy to introduce lifetime issues, since you don’t always grasp the lifetime constraints of the surrounding code as quickly as you think you have.) It is definitely way nicer than dynamic languages, where you get hundreds of test failures and have to map those back to the actual location where the problem occurred.

                                2. 7

                                    In my experience refactoring is one of the strong points of Rust. I can “break” my code anywhere I need it (e.g. make a field optional, or remove a method, etc.), and then follow the errors until it works again. It sure beats finding “undefined is not a function” at run time instead.

                                  The compiler takes care to avoid displaying multiple redundant errors that have the same root cause. The auto-fix suggestions are usually correct. Rust-analyzer’s refactoring actions are getting pretty good too.

                                  1. 3

                                    Yes. My favourite is when a widely-used struct suddenly gains a generic parameter and there are now a hundred function signatures and trait bounds that need updating, along with possibly infecting any other structs that contained it. CLion has some useful refactoring tools but it can only take you so far. I don’t mean to merely whinge - it’s all a trade-off. The requirement for functions to fully specify types permits some pretty magical type inference within function bodies. As sibling says, you just treat it as a todo list and you can be reasonably sure it will work when you’re done.

                                    1. 2

                                      I think generics are kind of overused in rust tbh.

                                    2. 2

                                        I just pick one error at a time and fix them. Usually it’s best to comment out as much broken code as possible until you get a clean compile, then work one at a time.

                                      It is a grind, but once you finish, the code usually works immediately with few if any problems.

                                      1. 2

                                          No, it makes refactors much better. Part of the reason my coworkers like Rust is because we can change our minds later.

                                        All those compile errors would be runtime exceptions or race conditions or other issues that fly under the radar in a different language. You want the errors. Some experience is involved in learning how to grease the rails on a refactor and set the compiler up to create a checklist for you. My default strategy is striking the root by changing the core datatype or function and fixing all the code that broke as a result.

                                        1. 1

                                          As a counterpoint to what most people are saying here…

                                            In theory the refactoring is “fine”. But the lack of a GC (meaning that object lifetimes are a core part of the code), combined with the relatively few tools you have to nicely monkeypatch things, means that “trying out” a code change is a lot more costly than, say, in Python (where you can throw a descriptor onto an object to try out some new functionality quickly, for example).

                                          I think this is alleviated when you use traits well, but raw structs are a bit of a pain in the butt. I think this is mostly limited to modifying underlying structures though, and when refactoring functions etc, I’ve found it to be a breeze (and like people say, error messages make it easier to find the refactoring points).

                                        1. 2

                                          I’m at a cabin for the week not thinking about work and mostly not thinking about my projects. Though I now have a subscription through my work to read (AFAIK) any O’Reilly book I want, so I might read about audio electronics because my EE degree is pretty much going to waste while I do nothing but write software for work and building my own DAC sounds like fun.

                                          1. 1

                                            A matrix room with no other people in it.

                                            1. 1

                                              Matrix cat sez “im in ur room reading ur secretz”

                                              1. 10

                                                    That datasheet is a thing of beauty. Things are written in plain English, diagrams are in color, and there are inline code snippet examples in both asm and C, most of which link to a file in a GitHub repo that shows how they fit into a larger program.

                                                    Seriously, that is so many orders of magnitude better than any datasheet I’ve ever had to read before.

                                                1. 1

                                                  I totally agree. They really spent some quality time on it. I think this is going to help them sell a lot of these.

                                                2. 2

                                                      This looks even better than TI datasheets (which I found to be very readable for a software guy).

                                                1. 1

                                                        Doing a few workshops from Remoticon. Going to finally learn how to do something with my SDR that has been sitting in a box for too long, and learn something about RF debugging.

                                                  1. 1

                                                    At work we’re getting close enough to releasing ██████████████████ that it feels like a real thing. It has been my only focus for more than a year so it’s a really good feeling. Right now I’m basically doing nothing but writing docs, guides, and examples.

                                                          At home I finally sold my old 3D printer and reclaimed the space it was taking up. I finished the base of my customized Lack enclosure for my new printer this morning, which means I finally have a little more desk space back. This week I’m going to work on making the top enclosure structure. I want to do it a bit differently than the standard design to get a bit more height and make it look a bit more minimalist.

                                                    1. 3

                                                            A cloud-free IoT device framework/OS. There are so many cheap Chinese IoT devices out there that are just taking some off-the-shelf software and tossing it on lightly customized hardware. If there were some software that didn’t require a server to operate, I have to imagine there’d be some manufacturers that would pick it up and could slowly start to change consumer IoT from a privacy & security nightmare into what it was originally supposed to be.

                                                            Unfortunately, I managed to finagle my dream project at my day job into existence, so all of my mental energy has been going into that. (Which, coincidentally, is making a cloud-focused IoT platform a little less cloud-focused.)

                                                      1. 1

                                                        Have you heard of/used Homebridge? I think its main thing is HomeKit-specific (so, Apple products), which works for me, but it also has a web UI available where you can manage your IoT devices too.

                                                        I have an odd collection of Philips and Xiaomi smart devices and am able to keep them all safely off the internet and controllable through all our devices at home, it’s nice!

                                                        1. 1

                                                          I absolutely agree with this.

                                                                Offline, local control is one of the big selling points for BLE, especially with the mesh spec finalized and (at least starting to) become more and more common. Getting consistent hardware/implementations/performance, on the other hand, still feels way too difficult. The same can be said for Weave - it makes a ton of sense but is genuinely not fun to work with.

                                                                I’m not sure why, but I find the DIY systems (Home Assistant, openHAB) abrasive and, for me at least, flaky.

                                                        1. 3

                                                          Lua also provides some functionality that could make this even more serialization-y. For instance you could have a file like this:

                                                          Body{x = 125.000, y = 70.000, x_vel = 0.000, y_vel = 1.000, mass = 80.000, rad = 4.500}
                                                          Body{x = 175.000, y = 70.000, x_vel = 0.000, y_vel = -1.000, mass = 80.000, rad = 4.500}
                                                          Body{x = -30.000, y = 70.000, x_vel = 0.000, y_vel = -1.000, mass = 0.500, rad = 1.000}
                                                          Body{x = 330.000, y = 70.000, x_vel = 0.000, y_vel = 1.000, mass = 0.500, rad = 1.000}
                                                          

                                                                  That’s a valid Lua program that’s just calling the Body function on each line. Lua allows you to omit the () on calls whose only argument is a string literal or a table constructor, so that’s just Body({...}). That means that this would work just as well for the loop case that they also showed:

                                                          for i = 1, 32 do
                                                             for j = 1, 32 do
                                                                Body{x = 20 * i, y = 20 * j, mass = -0.1, rad = 2}
                                                             end
                                                          end
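
                                                                  For completeness, here’s a minimal sketch of what the loading side could look like; the bodies table and the bodies.lua filename are just made-up names for illustration:

                                                                  local bodies = {}
                                                                  -- Each Body{...} line in the data file ends up here as a plain table.
                                                                  function Body(t)
                                                                     bodies[#bodies + 1] = t
                                                                  end
                                                                  dofile("bodies.lua")  -- run the data file; it simply calls Body once per line
                                                                  print("loaded " .. #bodies .. " bodies")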
                                                          
                                                          1. 3

                                                            How small is the irreparable damage? Is there a picture of it?

                                                            1. 22

                                                              I’m fairly sure the fuses are a part of the CPU die, so they’re only several microns in size.

                                                              1. 9

                                                                        @dstaley is right, it’s just an extra small metal trace somewhere inside the die. Like any other fuse, you put a high enough voltage across it and it pops. Then the CPU can just check the continuity with a lower voltage to see if it has been blown or not.

                                                                This has some die photos of one example: https://archive.eetasia.com/www.eetasia.com/ART_8800717286_499485_TA_9b84ce1d_2.HTM

                                                                1. 7

                                                                          Like others have said, these fuses are on the CPU die itself. Fuses like this are actually quite common on microcontrollers for changing various settings, or for locking the controller to disallow it from being programmed after it’s received the final production programming.

                                                                  1. 6

                                                                    The Xbox360 also did something similar with its own “e-fuses.” I assume it’s standard practice now.

                                                                    1. 4

                                                                      Yup, it’s entirely standard for any hardware root of trust. There are a couple of things that they’re commonly used for:

                                                                      First, per-device secrets or unique IDs. Anything supporting remote attestation needs some unique per-device identifier. This can be fed (usually combined with some other things) into a key-derivation function to generate a public-private key pair, giving a remote party a way of establishing an end-to-end secure path with the trusted environment. This is a massive oversimplification of how you can spin up a cloud VM with SGX support and communicate with the SGX enclave without the cloud provider being able to see your data (the most recent vulnerability allowed the key that is used to sign the public key along with the attestation to be compromised). There are basically two ways of implementing this kind of secret:

                                                                      1. PUFs. Physically Unclonable Functions are designs that take some input (can be a single bit) and produce an output that is stable but depends on details beyond the manufacturing tolerances of a particular process. The idea is that two copies of exactly the same mask will produce versions that generate different outputs. PUFs are a really cool idea and an active research area, but they’re currently quite big (expensive) and not very reliable (so you need a larger area and some error correction to get a stable result out of them).
                                                                      2. Fuses. Any hardware root of trust will have a cryptographic entropy source. On first boot, you read from this, filter it through something like Fortuna (possibly implemented in hardware) to give some strong random numbers, and then burn something like a 128- or 256-bit ID into fuses. Typically with some error correction.

                                                                      The MAC address (as @Thra11 pointed out) is a simple case of needing a unique identifier.

                                                                              The second use is monotonic counters for roll-back protection. A secure boot chain works (again, huge oversimplifications follow) by having something tiny that’s trusted, which checks the signature of the second-stage boot loader and then loads that. The second-stage checks the signature of the third stage, and so on. Each one appends the values that they’re producing to a hash accumulator. Again, with a massive oversimplification, you may end up with hash(hash(first stage) + hash(second stage) + hash(third stage) …), where hash(first stage) is computed in hardware and everything else is in software (and where each hash function may be different). You can read the partial value (or, sometimes, use a key derived from it but not actually read the value) at any point, so at the end of second-stage boot you can read hash(hash(first stage) + hash(second stage)) and can then use that in any other crypto function, for example by starting the third-stage image with the decryption key or signature for the boot image encrypted with a key derived from the hashes of all of the allowed first and second-stage boot chains. You can also then use it in remote attestation, to prove that you are running a particular version of the software.

                                                                      All of this relies on inductive security proofs. The second stage is trusted because the first stage is attested (you know exactly what it was) and you trust the attested version. If someone finds a vulnerability in version N, you want to ensure that someone who has updated to version N+1 can never be tricked into installing version N.

                                                                      Typically, the first stage is a tiny hardware state machine that checks the signature and version of a small second-stage that is software. The second-stage software can have access to a little bit of flash (or other EEPROM) to store the minimum trusted version of the third-stage thing, so if you find a vulnerability in the third-stage thing but someone has already updated with an image that bumped the minimum-trusted-third-stage-thing-version then the second-stage loader will refuse to load an earlier version. But what happens if there’s a vulnerability in the second-stage loader? This is typically very small and carefully audited, so it shouldn’t be invalidated very often (you don’t need to prevent people rolling back to less feature-full versions, only insecure ones, so you typically have a security version number that is distinct from the real version number and invalidate it infrequently). Typically, the first-stage (hardware) loader keeps a unary counter in fuses so that it can’t possibly be rolled back.

                                                                      1. 1

                                                                        (You likely know this, but just in case:)

                                                                        What you describe above is a strong PUF; weak PUFs (that do not take inputs) also exist, and - in particular - SRAM PUFs (which you can get from e.g. IntrinsicID) are pretty reliable.

                                                                        (But indeed, lots of PUFs are research vehicles only.)

                                                                    2. 4

                                                                      Examples of fuses I’ve seen used in i.MX6 SOCs include setting the boot device (which, assuming it’s fused to boot from SPI or onboard mmc effectively locks out anyone trying to boot from USB or SD card), and setting the mac address.

                                                                  1. 6

                                                                                I use a self-hosted instance of https://tt-rss.org/, and have been for several years. Both with the standard web-ui & the android app. It’s fine. I really enjoy having my read history synced between my various devices. It’s not the most elegant UI, and it has some quirks, especially in the web-ui, but it gets the job done well enough. I’ve tried a few others, but haven’t come across anything that works quite as well.

                                                                    1. 2

                                                                      Also good luck if you wade into the official forums for support or a bug report.

                                                                      I’ve also been using it for years because it simply works. Wanted to change servers and use the docker container but I postponed that because that was absolutely not working and I am not in the mood to argue with the maintainer. Not sure what I will do, but I use it together with NewsPlus on Android and don’t really want to change that setup. (That Android app hasn’t been updated for ages but I bought it and will use it as long as it works, because I love it.)

                                                                      1. 1

                                                                                    linuxserver.io had tt-rss as a container they supported, but had to stop due to reasonable(?) changes asked of the repo maintainer. The forums seem to be rather hostile. I’ve taken to just cloning and building the image myself (which, IIRC, the maintainer argued is what everyone wants to do anyway), but that’s categorically the opposite of what I want to do. I want a trusted repository from which to pull a minimal image that is up to date.

                                                                        Sad links of despair:

                                                                        1. 1

                                                                          Yes, I also skimmed or read all of those. Some changes were integrated after weeks of discussion but for some reason or other I couldn’t get it to work, just 2-3 weeks ago (could be my setup, sure).

                                                                          1. 1

                                                                            Ahh, if all you want is an image. Feel free to use mine!

                                                                            https://hub.docker.com/r/dalanmiller/tt-rss

                                                                      2. 1

                                                                        Same here. There is an official package in Arch Linux, I use that.

                                                                        1. 1

                                                                          I also self-host Tiny Tiny RSS. On iOS I use Fiery Feeds which has a much better UI.

                                                                        1. 3

                                                                          I don’t get the excitement for the PinePhone, and I’ve owned a Palm Pre, a Nokia N900 and a Sailfish Jolla. The problem with all of these is mostly software. For the N900 and Jolla (I think) you had poor documentation and had to create RPMs. Tolerating this is far too much of an ask for the average mobile dev, and without them you’ll have 2 or 3 apps a day instead of the thousands on other platforms. You need to ship an IDE with a “Build now” button that packages it for you, and a second button to upload it to your (free) developer account.

                                                                          Succeeding on the mobile landscape enough to have a 2nd gen model, or even keeping their software updated, is going to take sales to more than hobbyists - and that means quality software tools and documentation. IMHO the hardware is mostly secondary, since most mobile chipsets these days can deliver a good enough “first version” to prove the model. I hope I’m wrong, but I’m not hopeful for the PinePhone.

                                                                          1. 2

                                                                            You need to ship an IDE with a “Build now” button that packages it for you, and a second button to upload it to your (free) developer account

                                                                                          I don’t think that Jolla is really that far off with this. Using the Sailfish SDK you have a build now button, and then a run/deploy button that installs it and runs it directly on the phone connected over a USB cable. You don’t even need to know what RPMs are: in fact, one of the deployment options skipped them entirely in favour of just rsyncing the files. Unless you mean the actual “production” deployment: then yes, you need to build all the RPMs and submit them to a website manually. Harbour is criminally underdeveloped.

                                                                            As for poor documentation, agreed: Sailfish app development is full of tribal knowledge :/

                                                                            1. 2

                                                                              And it’s a shame, because Sailfish seems to have failed without having the basics in for developers - what did they expect? It’s like having a website with a malfunctioning shopping cart and wondering why you went bankrupt.

                                                                              1. 3

                                                                                              Funny you should mention that specifically: over 5 years after it was first released, Sailfish still has no support for paid apps, and I remember it being asked for at least as long as I’ve lurked on #mer-meeting for the weekly community chats.

                                                                                              And then there’s the missing APIs… Qt has a standard library for displaying tiled maps with simple overlays. A Map { } is literally an import away. For some reason, that API is still not allowed in Harbour, so if you want a map-using app in the official Jolla Store, good luck rolling your own map renderer. And examples like these go on and on, to the point where the de-facto store with state-of-the-art apps is the unofficial https://openrepos.net/, with actual dependency management, no artificial restrictions and even trivial things like being notified of user comments about your apps.

                                                                            2. 2

                                                                                          I’m personally excited by it because it’s a $200 phone that’s actively manufactured with (nearly) mainlined drivers. I’ve got the Braveheart version and it’s amazing to have a phone that I know I won’t have to recycle just because Google decided to stop releasing updates for it.

                                                                                          AFAIK, with most phone SoCs there are non-open-source drivers provided by the manufacturer that can’t be upstreamed, which makes you dependent on the manufacturer to provide updates.

                                                                                          Also, I don’t see the limited selection of apps as a strong negative. Can you really say with a straight face that the vast majority of those 1000s of apps are beneficial to you in any way? I don’t personally see this as a more is better situation. You just need to search any store for “flashlight” to see that it’s really more of a problem than a benefit. I’d much rather have an opinionated repository of applications where someone has done at least a minimum amount of vetting to check that the apps it contains aren’t actively and explicitly harmful. And with the pinephone anyway it’s not like you’re opting into a walled garden, it’s more like a selection of different gardens with paving stones that someone has laid to show where they have checked it’s safe to step. But you can always walk wherever you want (and just go pipe some curl to sh because a readme told you to).

                                                                              1. 1

                                                                                Also, I don’t see the limited selection of apps as a strong negative. Can you really say with a straight face that the vast majority of those 1000s of apps are beneficial to you in any way? I don’t personally see this as a more is better situation.

                                                                                At the end of the day, those apps are needed for the PinePhone to have a future. Or else you’ll just have a repeat of the kind of apps that F-Droid already has.

                                                                                1. 3

                                                                                  I don’t think having thousands of flashlight apps that all want to track your location and phone history is ‘needed’.

                                                                                  1. 1

                                                                                    Location and phone history tracking aside, apps that hobbyists don’t necessarily want are maybe a path to success. Otherwise it’s a repeat of WebOS, Maemo and Sailfish.

                                                                                    1. 1

                                                                                      What makes you think that Pine64 (and Purism, for that matter) are measuring success based on market share vs. Android/iOS? Dismissing alternative mobile operating systems because they don’t have a goal of immediate world domination is silly. These options can still be successful even if your grandma isn’t using it.

                                                                                      1. 2

                                                                                        I’m not advocating for world domination, just staying afloat long enough for us to see this going somewhere. As I’ve mentioned, I’ve owned a number of alt phones and they all end up folding. I see nothing different about this one.

                                                                                        1. 1

                                                                                          Sailfish/Jolla hasn’t folded. WebOS and Maemo/Meego/Tizen folded because they were trying to achieve world domination, and therefore had a massive uphill battle to win in order for the companies investing in them to see it as a success.

                                                                                          1. 2

                                                                                            Jolla doesn’t make hardware anymore, right? Would that be an acceptable outcome for Pine64?

                                                                                                        Regardless, hardware doesn’t really matter in the end. It’s all about software and solving problems for users. Relying on free software is not a winning strategy long term. Hence the “year of Linux on the desktop” recurring joke. If we rely on the average FLOSS app on mobile to be the poster child for the PinePhone, people will just flock to other platforms because they work better. Design is not open-source software’s forte.

                                                                                  2. 1

                                                                                    There’s clearly a large space between what’s in the google play store and the f-droid repos. I agree with you that a phone that only had access to f-droid wouldn’t be successful. (And I say that as someone that gets as many apps as I can from f-droid.) But I think the pinephone is better off nearer the f-droid end than the play store end.

                                                                                                  I feel like desktop Linux is a better comparison since most of the OS options for the pinephone are basically that with a compressed UI. Places like Flathub have both open source and closed source software. There are many more recognizable apps available. f-droid is much more focused on open source only because the play store already exists, so it doesn’t need to cater to users looking to use closed source software.

                                                                                                  We’re getting really far away from your original question that I was giving my answer to, so I’ll just say (and I may not have made this super clear in my earlier reply): I’m excited for the pinephone, not because the ecosystem as it exists today is ready to be my one and only phone, but because the hardware that does exist seems to be a great vehicle for the software ecosystem to mature on. The fact that the kernel portions are all either already mainline or well on their way means that it won’t get left behind in the same way the previous best options would.

                                                                                    That combined with the fact that pine aren’t trying to do everything themselves and are leaving the software up to the community means that development of the higher-level parts of the software stack that don’t yet exist will continue to be made almost no matter what.

                                                                                    1. 2

                                                                                      I install 100% of the apps I use from f-droid. Sure, it means I miss out on the latest android app trends (at least until there’s a FOSS clone or client on f-droid), but the device I have now is still far more functional even with this ‘limitation’ than one from 10 years ago.

                                                                                      1. 1

                                                                                                      Yeah, it’s definitely possible and I’d be right there with you if it weren’t for the fact that my job depends on having access to an app that requires the play store APIs (or an iPhone). However, I don’t really think that a phone that only had access to f-droid would be enough of a commercial success to sustain its own development costs, as much as I’d love to see one succeed. It’s just too much of a niche of a niche of a niche.

                                                                                        1. 4

                                                                                                        The key thing about the pinephone is that it doesn’t need to be a commercial success. It’s a labour of love from a company that already has a thriving income from their SoC business. IIRC they’re even selling the hardware at cost. So this isn’t a one-shot-or-bust project like most other Linux phones - they can provide the breathing time for a community to gel around the platform and maybe solve the chicken-and-egg problem of not having any software because there isn’t any supported hardware.

                                                                              1. 6

                                                                                Cutoff Multivals

                                                                                                          The one that kills me a little in Lua comes from this: It’s impossible to set an assert message on a function that returns multiple values.

                                                                                > function getthings() return 5, 4 end
                                                                                > assert(getthings())
                                                                                5	4
                                                                                

                                                                                That all works fine, but what if I want my assert to have more details about where I was when that function failed?

                                                                                > assert(getthings(), "no things when $FOO")
                                                                                5	no things when $FOO
                                                                                

                                                                                :/

                                                                                1. 3

                                                                                  That’s frustrating, I hadn’t run into that with assert specifically! Thanks for providing that example. It’s a good instance of a case where multivals contribute to the large number of tradeoffs when determining argument order for your Lua functions.

                                                                                  1. 2

                                                                                    Please note that the “opposite” way of using assert is intended: somewhat resembling Go, it’s a common idiom for a failed function to return 2 values: nil, "some error message". You can then plug it straight into assert like below (as shown in “PIL”):

                                                                                    local f = assert(io.open(filename))
                                                                                    

                                                                                    and expect assert to print the error message in case of a problem.
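
                                                                                                              For the producing side of that idiom, here’s a minimal sketch (it reuses the getthings name from above; the failure condition is made up):

                                                                                                              local function getthings()
                                                                                                                 local ok = false  -- pretend the lookup failed
                                                                                                                 if not ok then
                                                                                                                    return nil, "no things when $FOO"
                                                                                                                 end
                                                                                                                 return 5, 4
                                                                                                              end
                                                                                                              local a, b = assert(getthings())  -- raises "no things when $FOO" on failure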

                                                                                    As a side note, the freely available old edition of PIL is for a super ancient version of Lua, but I’d still resoundingly recommend reading it for all the tricks and ideas it communicates.

                                                                                    1. 2

                                                                                      Yep, for sure. I was thinking of exactly that when I mentioned adding more context.

                                                                                                                I’ve got the 5.3 ed of PIL and also highly recommend it. It’s actually one of my favorite books. It’s one of the only books I’ve ever read that manages to not be overly verbose, but also doesn’t have big gaps leaving you confused.

                                                                                      If anyone else is interested in picking it up Feisty Duck has a DRM free ebook version of it.

                                                                                    2. 2

                                                                                      What do you think of this?

                                                                                      assertm = function(x, ...)
                                                                                        assert(unpack(arg), x)
                                                                                        return unpack(arg)
                                                                                      end
                                                                                      
                                                                                      1. 1

                                                                                        Oh, interesting, message first. Yeah, that definitely seems to work. Thanks for the tip!

                                                                                        (Sidenote, that code needs a s/arg/.../g.)
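
                                                                                                                  For reference, a minimal corrected sketch with that substitution applied (the unpack calls drop out, since ... can be forwarded directly):

                                                                                                                  assertm = function(x, ...)
                                                                                                                     assert(..., x)  -- only the first value is checked; x is the error message
                                                                                                                     return ...
                                                                                                                  end
                                                                                                                  a, b = assertm("no things when $FOO", getthings())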

                                                                                      2. 2

                                                                                        It’s not a problem unless you try to do something like this:

                                                                                        a, b = assert(getthings(), "no things when $FOO")
                                                                                        

                                                                                        and then you can just do that instead:

                                                                                        a, b = getthings()
                                                                                        assert(a, "no things when $FOO")
                                                                                        
                                                                                        1. 1

                                                                                                              Oh, for sure, that’s why it only kills me a little. I didn’t mean that it was impossible to write it in a way that accomplishes what I want; it was more just an example of the problem in the article that I had run into. (And it took me far too long to realize what was happening before having the blinding realization that it simply couldn’t work the way I wanted it to.)

                                                                                      1. 5

                                                                                                                Slightly off topic (especially since this spin doesn’t seem to actually have a 32 release yet), but is anyone here running Fedora Silverblue? I’m considering moving over to that, but I’ve been burned before. For anyone who is running it, I’m curious: how much of an impediment do you feel it is to your everyday development?

                                                                                                                The last “interesting” OS I tried was NixOS, and I eventually came to the conclusion that it was getting in my way more than it was really helping me. Mostly this came down to installing dev tools. I found I wasn’t working on things that were interesting to me because it was too much of a pain to get tools installed. (Rust (latest versions), Arduino (I think having it at all?), and LuaSocket (it was getting built wrong; I gave up trying to find where it wanted the -D LUA_COMPAT_APIINTCASTS compile option after 3 hours) are what come to mind.)

                                                                                        If there is anyone running silverblue here, do you ever have similar kinds of issues?

                                                                                        1. 3

                                                                                          but is anyone here running Fedora Silverblue

                                                                                                                  I only tried Silverblue on a spare hard disk that I have lying around. I think it is really a big step forward and I like what they are doing. I am reading the Silverblue forums semi-regularly and it seems that Fedora Toolbox (which is used to create containers for doing development in) breaks every now and then. It seems that Silverblue is still a second-class citizen to regular Fedora. On the other hand, given the nature of Silverblue these problems are easily solved by booting into an older snapshot for the short timeframe it takes them to fix such a glitch. Unfortunately, I do not have more data points than that. Besides that, it is not possible to run Nix on the root filesystem by default, because / is immutable.

                                                                                          I would monitor the Silverblue forums for a while, because it gives a good idea of what kind of problems to expect.

                                                                                          Rust (latest versions)

                                                                                          I know that this post not about Nix. But with the Mozilla Nix overlay, you can get the latest stable/beta/nightly: mozilla.latest.rustChannels.{stable,beta,nightly}. You can also use the overlay to get any arbitrary stable or nightly version. See the following:

                                                                                          https://discourse.nixos.org/t/pin-rust-version/5812/2 https://discourse.nixos.org/t/pin-rust-version/5812/3

                                                                                          I use NixOS on various machines, but I would really recommend newcomers to use Nix for a while on a familiar distribution. NixOS is so much more fun if you have climbed part of the Nix learning curve (know the Nix language, know your way around nixpkgs). That way you can always revert back to what you know if trying to do it the Nix way takes too much time.

                                                                                          Sorry for the digression ;).

                                                                                          1. 3

                                                                                            I have been running Silverblue on my desktop and my laptop since late January. I enjoy it.

                                                                                            I resisted the docker fad for a long time for many reasons, but mainly because I thought the implementation of docker was unfortunate, and the ways people used it was cumbersome and error-prone. Podman solves the former, and Toolbox solves the latter.

                                                                                            There are a few rough edges. Toolbox switching isn’t as nice as it could be (there could be Terminal integrations that would make this nicer), toolbox shits a lot of things in your environment (at least one of these conflicts with Ruby on Rails, so I have to unset VERSION to be able to run migrations), and a few other tiny things.

                                                                                            The documentation is still sparse.

                                                                                            Overall, I’m very happy with it and will continue to use it. This is the first time I’ve used anything other than Debian since before the bo release.

                                                                                            1. 3

                                                                                              but is anyone here running Fedora Silverblue?

                                                                                              I’d jumped around a couple distros for various reasons (temporal recounting over the last ten years):

                                                                                              • Fedora: wanted to follow along with RH (my early days of Linux)
                                                                                              • Arch: wanted to be able to consume as “pure” a systemd stack as I could to get a good feel for things
                                                                                              • Debian: wanted to converge my workstations (testing) and servers (stable+backports)
                                                                                              • Fedora: wanted to really start adopting podman+toolbox

                                                                                              I jumped back into Fedora, via Silverblue, with F31, and I’ve been using F31 and F32 interchangeably as necessary (when a package in F32 wasn’t working well I could always just use my pinned F31 instance).

                                                                                              With Arch and Debian I was effectively rolling my workstation, which is a comfort: if you’re using newer hardware you want new kernels, and you often want to get your hands on something without having to package it yourself. Silverblue basically marries the principles of a released system with the principles of rolling in a way that I find completely and utterly acceptable for my use cases. I am able to effectively “ride” the releases of Fedora without having to do a precarious upgrade or reinstall.
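
                                                                                              Concretely, “riding” a release is roughly (a sketch; the ref matches the one in the status output below):

                                                                                                sudo ostree admin pin 0                                # keep the current F31 deployment around, just in case
                                                                                                rpm-ostree rebase fedora:fedora/32/x86_64/silverblue   # point the base at the F32 compose
                                                                                                systemctl reboot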

                                                                                              I’ll say, I am likely layering in many more packages than you’d typically see people recommend.

                                                                                              [agd@enoch ~]$ rpm-ostree status
                                                                                              State: idle
                                                                                              AutomaticUpdates: disabled
                                                                                              Deployments:
                                                                                              ● ostree://fedora:fedora/32/x86_64/silverblue
                                                                                                                 Version: 32.20200428.0 (2020-04-28T01:00:38Z)
                                                                                                              BaseCommit: 3304e379ff5090a15816af207dbcc82f0db0cd4883216ede8f4957a499e30df8
                                                                                                            GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
                                                                                                         LayeredPackages: baobab beets beets-plugins boxes cheese chromium darktable eog evince evolution ffmpeg file-roller firewall-config gimp git-lfs gmpc gnome-boxes gnome-builder gnome-calculator gnome-firmware gnome-screenshot
                                                                                                                          gnome-shell-extension-gpaste gnome-shell-extension-pomodoro gnome-sound-recorder gnome-tweaks htop hugo ipmitool keepassxc libreoffice make mpd mpdscribble nautilus-image-converter numix-icon-theme-circle
                                                                                                                          numix-icon-theme-square oathtool opensc openssl p7zip p7zip-gui p7zip-plugins pass peek rawtherapee seahorse seahorse-nautilus simple-scan sshuttle system-config-printer vim vlc youtube-dl
                                                                                                           LocalPackages: sublime-text-3210-1.x86_64 code-1.43.2-1585036535.el7.x86_64 rpmfusion-free-release-32-0.3.noarch rpmfusion-nonfree-release-32-0.4.noarch sublime-merge-1119-1.x86_64
                                                                                              
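                                                                                              Layering and un-layering those is just (a sketch; package names are taken from the list above and the local RPM path is hypothetical):

                                                                                                rpm-ostree install htop keepassxc                       # layer packages on top of the base commit
                                                                                                rpm-ostree uninstall htop                               # drop a layered package again
                                                                                                rpm-ostree install ./sublime-merge-1119-1.x86_64.rpm   # local RPMs show up under LocalPackages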

                                                                                              I’m of the mindset that I choose to run a distribution because I trust the packaging guidelines and the packagers of the software. This means that I’m quite partial to the Fedora packages. I’ve been using flatpak where necessary, but I am only consuming packages that are either:

                                                                                              • packaged by upstream in a way that I think is better than the equivalent package in Fedora
                                                                                              • not packaged in Fedora in a reasonable way (e.g. mumble; see the sketch below)
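
                                                                                              For that second category, the workflow ends up as something like this (a sketch; the Flathub remote URL and the app ID are my assumptions):

                                                                                                flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
                                                                                                flatpak install flathub info.mumble.Mumble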

                                                                                              My only real struggle is that Fedora IoT is the “headless” version of Silverblue (if you want to think of it that way), and it’s difficult to get kernel modules (e.g. ZFS) into it. I’d love to be able to install Fedora on my servers and have ZFS available, but be able to “ride the releases” by pulling composes rather than reinstalling.

                                                                                              The last “interesting” OS I tried was NixOS, and I eventually came to the conclusion that it was getting in my way more than it was really helping me.

                                                                                              I dipped my toes into Nix right before going to Silverblue and had the same sentiment.

                                                                                            1. 16

                                                                                              So Microsoft GitHub is now doing the “lower the price so the competition dies” trick in this market as well. Interesting.

                                                                                              1. 27

                                                                                                A company responding to market pressures and pricing their products more competitively. Truly an evil ploy 😒🙄

                                                                                                1. 8

                                                                                                  Wouldn’t you say it’s unfair competition to be able to dump infinite money into a business area in order to drive out competitors? That’s way past aggressive pricing.

                                                                                                  1. 4

                                                                                                    Wouldn’t you say it’s unfair competition to be able to dump infinite money into a business area in order to drive out competitors? That’s way past aggressive pricing.

                                                                                                    It depends on how much you do it and for how long. Most startups start by selling below cost. The joke about Amazon in the ‘90s was that they make a loss on each sale, but make it up in volume. The typical marker for anticompetitive behaviour is whether the low price is long-term sustainable. If you are selling below cost because you expect to be able to lower your costs via economies of scale, that’s fine. If you’re cross-subsidising from another revenue stream and just trying to push your competitors out of business, that typically isn’t.

                                                                                                    As I understand it [1], GitHub is independently profitable, primarily from the enterprise offerings. The free offering is one of the highest return-on-investment advertising campaigns that any company has ever offered (Gillette sending free razors to everyone in the UK who appeared as male on the electoral roll one year is close). Pretty much everyone coming out of university with a vague interest in programming has GitHub experience and I would be shocked if that didn’t translate into a load of companies buying the enterprise offerings. Even the $21/month/dev offering is a lot cheaper for most companies than doing the same thing in-house (compare that to even the salary of one person full time maintaining the infrastructure and you need quite a lot of devs for that to reach the break-even point).

                                                                                                    [1] Disclaimer: I work for Microsoft Research, so may be considered biased, but I have no visibility into GitHub.

                                                                                                    1. 2

                                                                                                      Bitbucket’s been like this forever, right?

                                                                                                      “Offer basic service for free, advanced features behind paywall” is not really an odd concept, and it doesn’t require infinite money pits. As a (relatively small, granted) team we evaluated this change and decided to keep on paying for the paid service because we wanted the features it was providing.

                                                                                                      I also remember a thing about how GH makes a bunch of money on its on-premises offering, and I imagine that pricing is not changing at all.

                                                                                                    2. 9

                                                                                                      A company responding to market pressures with no regard for profit, against competitors that don’t have vast resources backing them, is a net detriment to the market. Similarly large companies (Google, Facebook) have no reason to get into the market, and smaller companies (GitLab, sourcehut) can’t easily compete with Microsoft operating at a loss. This is a classic monopoly tactic.

                                                                                                      1. 5

                                                                                                        I’m not so sure it’s the case that GitHub “has no regard for profit”; in the HN thread Nat said they’ve been wanting to do this for a while, but had to wait for enterprise revenue to be high enough. The existing pricing for BitBucket and GitLab is similar to the new GitHub pricing; GitHub was actually quite expensive before. The new pricing seems reasonable and fair to me, and is competitive. I see no evidence of it being sponsored by Windows sales, for example.

                                                                                                        GitLab seems to be doing quite well with $100M revenue, Atlassian has $1.2 billion revenue (can’t find numbers for BitBucket specifically), sourcehut will always remain a niche product due to its idiosyncrasies (which is not just fine, but great; niche markets deserve good products too). So I’m not especially worried about any of those.

                                                                                                        I’m also not hugely enthusiastic about large companies becoming ever larger, and would have preferred it if GitHub had remained independent. I think we probably have some common ground here. But what I’m a little bit tired of is that everything GitHub does these days is seen as part of some sort of malicious plan, and the assumption that everything they do is done in bad faith. Certainly in this case, it seems like a normal, common-sense business decision to me.

                                                                                                        Is there a potential for Microsoft to abuse their power with GitHub? Sure! But thus far I’ve seen no indications of this. I agree we should be watchful for this (and ideally we should have better anti-trust laws), but I think we must also keep a level head and not jump to conclusions over every small thing. As someone who started using Linux/BSD systems in the early 2000s I have plenty of gripes with Microsoft (being sent a .doc file was a proper hassle back then), but pretty much all of the leadership has changed and Microsoft is not the same company. Referring to long-since abandoned strategies like EEE is, quite frankly, just inappropriate. I have actually flagged that comment as “unkind”, because random accusations without evidence are not appropriate IMO, even when directed at companies.

                                                                                                        CC, since this also replies to your comments: @nomto @caleb @azdle

                                                                                                        1. 2

                                                                                                          I wrote a whole in-depth response but then, upon re-reading, I realized that we pretty much have no common ground on which to discuss this.

                                                                                                          I have actually flagged that comment as “unkind”, because random accusations without evidence are not appropriate IMO, even when directed at companies.

                                                                                                          Y’all are on some real bootlicker shit over here.

                                                                                                          1. 1

                                                                                                            I can see you’re committed to constructive discourse where everyone is free to voice their opinions without fear of being insulted; not so much to convince each other, but to at least understand each other’s positions better. Thank you!

                                                                                                          2. 1

                                                                                                            But what I’m a little bit tired of is that everything GitHub does these days is seen as part of some sort of malicious plan, and the assumption that everything they do is done in bad faith.

                                                                                                            Everything that GitHub does these days is part of some sort of malicious plan. That’s how business works (at this scale and in this part of the economy, at any rate).

                                                                                                        2. 4

                                                                                                          It’s a ploy to eliminate competition and expand private control over the infrastructure used by developers. Whether you think it’s evil depends on your values.

                                                                                                        3. 5

                                                                                                          The interesting part is that they chose to do it after their Enterprise business got big enough to subsidize it, not as a loss-leader using Microsoft money. It seems like the strategy to keep GitHub and Microsoft relatively separated has allowed GitHub to continue to connect very well with their target audience. Someone on HN mentioned Cloudflare as another company that has done a similarly good job of understanding who they’re marketing to and making changes that makes their target market happy.

                                                                                                            1. 12

                                                                                                              Do you have any examples of GitHub or Microsoft extending git so that it’s incompatible with non-GitHub/Microsoft clients?

                                                                                                              1. 11

                                                                                                                I don’t know if/don’t think that this is a case of EEE, but FWIW, I’ve had a lot of trouble explaining to people past a certain level of management (read: who have not programmed in quite some time) that git and Github are different things. I’ve worked in a place where virtually everyone with a say in budget, tooling and whatnot hadn’t used a version control system since back when SVN was pretty fresh, and some of the things that I had lots of trouble with (read: needed countless hours and countless meetings) were:

                                                                                                                • Git is a VCS, Github is a tool that uses git. (This was all happening while I was lending a hand with a very tortuous transition to git and virtually everyone referred to it as “the transition to github”, even though we were actually using Gitlab!)
                                                                                                                • git is not developed by Microsoft.
                                                                                                                • Github is not the enterprise/SaaS version of git, git is not the free/community version of Github.
                                                                                                                • Gitlab is not a free/self-hosted/community edition of Github.
                                                                                                                • You don’t need something like Github or Gitlab to use git.
                                                                                                                • The pull request-oriented workflow of Github is just one of the possible workflows, and you can do it without Github or Gitlab.

                                                                                                                Some of these I’m pretty sure I never managed to really get across. The last meeting I attended before leaving that place saw a bunch of questions like “can we upgrade from Gitlab to Github” and “Can the CLI version of Github (NB: git. That guy meant git.) create pull requests?”

                                                                                                                I don’t really follow the politics of these things because I can’t really say I care – VCSs come and go, I self-host git for myself but otherwise I use whatever my customers want to use and I’m happy with it. But if Microsoft wanted to do the EEE thing, the fruit is definitely ripe.

                                                                                                                1. 3

                                                                                                                  The fact that GitHub runs the git.io URL shortener is pretty darn deceptive, IMHO.

                                                                                                                  1. 2

                                                                                                                    I’m not so worried about that in the case of git/GitHub to be honest, since it’s primarily a development tool. If devs decide they want a different tool en-masse, then usually they will get it (…eventually). This is pretty much what happened with svn → git.

                                                                                                                  2. 11

                                                                                                                    It’s not git, but the other various services tacked on (issues, the workflow, CI, etc.) that have basically become synonymous with ‘git hosting’, which require more and more effort to break free from once you become invested in using it.

                                                                                                                    1. 21

                                                                                                                      That’s not “Embrace, extend, extinguish”, that’s just building a successful product that people find pleasant to use. There is no “Microsoft git” and you can download all your data from GitHub. If you want to make the argument that there should be more competition in the market, then okay, fair enough. But again, very different from EEE.

                                                                                                                      There is a massive difference because EEE is all about forcing people into using a product and is malicious, whereas building a very popular product isn’t. There is nothing forcing you to use GitHub. If you want to use any competitor, then you have 100% freedom in doing so.

                                                                                                                      GitHub is also quite far removed from being a monopoly. If anything, then lowering their prices is proof of that; monopolists don’t lower prices.

                                                                                                                      more and more effort to break free from once you become invested in using it.

                                                                                                                      This is true for anything. I stuck to tcsh for years because converting my extensive tcsh config to zsh would be a lot of work, as would re-learning all the tcsh tricks I knew. Even now I just stick with Vim even though Spacemacs is probably better just because I’m so invested in it.

                                                                                                                      1. 4

                                                                                                                        There is a massive difference because EEE is all about forcing people into using a product and is malicious, whereas building a very popular product isn’t. There is nothing forcing you to use GitHub. If you want to use any competitor, then you have 100% freedom in doing so.

                                                                                                                        But if you want to contribute to a project, and their workflow is centred on Github (pull requests, CI, etc.), then you are kind of required to comply. And all that infrastructure is also not that easy to move around – or at the very least it’s an effort that would require a great dissatisfaction with GitHub.

                                                                                                                        1. 6

                                                                                                                          But if you want to contribute to a project, and their workflow is centred on Github (pull requests, CI, etc.), then you are kind of required to comply.

                                                                                                                          In Microsoft’s defense, that was true of GitHub long before Microsoft took over.

                                                                                                                          1. 1

                                                                                                                            I wasn’t “attacking” Microsoft, but rather GitHub. The change in ownership is more of a formality to me ^^.

                                                                                                                          2. 4

                                                                                                                            But if you want to contribute to a project, and their workflow is centred on Github (pull requests, CI, etc.), then you are kind of required to comply.

                                                                                                                            This is true for any workflow. I really don’t like mailing lists or IRC for example, but if that’s what a project uses then I’m “required to comply” just as much as you are “required to comply” with my GitHub workflow (although I won’t turn down patches sent over email, if that works better for you).

                                                                                                                            Unfortunately, there is no way to satisfy everyone here; different people just have different preferences, and the GitHub workflow works well for many.

                                                                                                                            1. 1

                                                                                                                              Sure, but you don’t need an account for mailing lists, you don’t have to sign anything. Also, due to its decentralized nature, it’s easier to prevent lock-in.

                                                                                                                              GitHub workflow works well for many.

                                                                                                                              Exactly! This pushes developers to adopt GitHub, as they fear (and I have experienced myself) that any other platform will have less interactions (bug reports, patches, etc.).

                                                                                                                              1. 1

                                                                                                                                You need an email account, and you typically need to subscribe to the email list (resulting in a lot of email in my inbox I don’t care about). It also doesn’t offer things like a good code review UI, which are IMO much easier in a GitHub-like UI, especially for larger patches. I appreciate it works better for some, but there’s a lot of friction involved for many.

                                                                                                                                If you’re really opposed to the GitHub-style UI, then my suggestion would be to work on an alternative which doesn’t have the downsides you see, but also removes the friction and UX issues that many really do experience. “Everyone is doing it wrong” is not really very constructive; people usually do it “wrong” for a reason, so best to address that.

                                                                                                                                This pushes developers to adopt GitHub, as they fear (and I have experienced myself) that any other platform will have less interactions (bug reports, patches, etc.).

                                                                                                                                The same applies not just to GitHub, but also git itself. I much prefer mercurial myself, but there’s much more friction involved for (potential) contributors. Related thing I wrote a few years ago: I don’t like git, but I’m going to migrate my projects to it

                                                                                                                                The problem with these kind of tools that everyone needs to use, is that a lot of people don’t really like using and learning multiple of them, so there may be kind of a natural tendency to go towards a single tool. There are certainly some advantages with having these kind of “industry standards”.

                                                                                                                                1. 1

                                                                                                                                  It’s true that subscribing to mailing lists can be annoying. But personally, I don’t have a “everyone is doing it wrong” approach, as I think that sourcehut is building towards a very good system that both works for web-oriented and mail-oriented users.

                                                                                                                                  And regarding git, I think the main difference is tool vs. service. Git is free software; I don’t need permission to use it, nor could it be revoked. GitHub is a platform with its own interests. But other than that, I understand your point. I too find hg interesting, but what keeps me from transitioning is mainly that, in Emacs, Magit is too comfortable to git up.

                                                                                                                          3. 2

                                                                                                                            That’s not “Embrace, extend, extinguish”, that’s just building a successful product that people find pleasant to use. There is no “Microsoft git” and you can download all your data from GitHub. If you want to make the argument that there should be more competition in the market, then okay, fair enough. But again, very different from EEE.

                                                                                                                            There is a massive difference because EEE is all about forcing people into using a product and is malicious, whereas building a very popular product isn’t. There is nothing forcing you to use GitHub. If you want to use any competitor, then you have 100% freedom in doing so.

                                                                                                                            Everything you say also applies to the classic examples of EEE like extending HTML in IE. Every example of EEE is “building a successful product that people find pleasant to use,” so I don’t know why you juxtapose those things. Users of IE in the 90s had 100% freedom in switching to Netscape too. If you think these are fine justifications, you simply have no problem with EEE.

                                                                                                                            And there is “Microsoft git,” it’s called “hub.”

                                                                                                                            1. 2

                                                                                                                              Extending HTML is different because it forced Netscape and other vendors to “catch up” or their product would be “defective” (in the eyes of the user, since it didn’t render websites correctly). This is the devious part of the “Extend” phase: it seems like it’s adding useful new features, but it’s done with the intention of making the competitor look “broken”.

                                                                                                                              As I said, GitHub has made no attempts to extend git in that way, or even hinted at attempts to do so.

                                                                                                                              1. 1

                                                                                                                                Adding helpful new features always has the effect of making the competitor look broken, and we have no way of evaluating intentions in either case. Extending git with pull requests makes repo.or.cz look defective because you can’t send pull requests with hub to a repo hosted there. It’s not different.

                                                                                                                                1. 1

                                                                                                                                  It’s just some UI to improve the process, not an incompatibility. To me it sounds like you’re basically saying “you can’t improve your product to make it easier to use, because that will make competitors seem bad”, which I find a rather curious line of thinking.

                                                                                                                                  1. 1

                                                                                                                                    I’m not saying anything about what a company can and can’t do. Hub is not compatible with standard git hosting, so that seems like an incompatibility to me.

                                                                                                                                    You seem to have decided that EEE is inherently bad and malicious, yet it was a phrase originally used proudly by Microsoft employees. They were proud because they viewed their actions exactly the way you view the current GitHub developments. If you have no problem with proprietary git extensions, what’s wrong with upgrading a browser with proprietary extensions to enable video playback in a web page?

                                                                                                                                    1. 1

                                                                                                                                      Yeah, a solution that works for both would be best. I’m not entirely sure if SourceHut will be that – at least from the perspective of a “web hipster” like me – but I’m keeping an eye on it. You can already do that with GitHub to some degree as well, btw; for example Vim sends all issues to the mailing list, and you can (and many people do) reply from there. You can probably do something similar with PRs if you want.

                                                                                                                                      You seem to have decided that EEE is inherently bad and malicious, yet it was a phrase originally used proudly by Microsoft employees. They were proud because they viewed their actions exactly the way you view the current GitHub developments. If you have no problem with proprietary git extensions, what’s wrong with upgrading a browser with proprietary extensions to enable video playback in a web page?

                                                                                                                                      Like I said, I don’t think it’s the same since the git protocol isn’t modified. It’s more similar to the video popup thingy Firefox added a while ago: it didn’t modify anything about the underlying protocols and standards, but it did modify the UI based on those standards.

                                                                                                                                      I can see where you’re coming from since you’re “forced to use GitHub”, but isn’t that the case for any issue tracker I add? If I self-host some Ruby on Rails issue tracker, and maybe a code review system, then you’re “forced” to use that too, right? I’m not sure how different that would be to GitHub?

                                                                                                                                      At the end of the day, I think by far the most important issue is that git remains the open and free protocol and tool that it is today; issue tracker, code review, and whatnot are all very convenient and nice, but they’re really just auxiliary features of relative low importance to the actual code. By far the most important thing is that everyone is able to clone, share, and modify the software freely, and GitHub doesn’t stand in the way of that at all as far as I can see.

                                                                                                                                      1. 1

                                                                                                                                        I’m still not clear what problem you have with using otherwise-ignored HTML to embed useful features in a web page. Microsoft didn’t modify HTTP.

                                                                                                                                        1. 1

                                                                                                                                          A webpage is inaccessible if I view it in a browser which doesn’t implement the feature (how inaccessible depends on the details), whereas git is still the same git with GitHub.

                                                                                                                                          1. 1

                                                                                                                                            That is true of any advance in web standards. Web pages which use those standards are inaccessible from browsers which don’t implement those features.

                                                                                                                            2. 1

                                                                                                                              That’s not “Embrace, extend, extinguish”, that’s just building a successful product that people find pleasant to use. There is no “Microsoft git” and you can download all your data from GitHub. If you want to make the argument that there should be more competition in the market, then okay, fair enough. But again, very different from EEE.

                                                                                                                              There is a massive difference because EEE is all about forcing people into using a product and is malicious, whereas building a very popular product isn’t.

                                                                                                                              If we ignore the pricing, it’s not “extinguish”, but it’s pretty clearly “embrace” and at least a little bit of “extend”.

                                                                                                                              There is nothing forcing you to use GitHub. If you want to use any competitor, then you have 100% freedom in doing so.

                                                                                                                              Yes, currently that is true. But if Microsoft is pricing GH below cost, it will make it hard for those commercial competitors to make enough money to continue existing.

                                                                                                                              GitHub is also quite far removed from being a monopoly. If anything, then lowering their prices is proof of that; monopolists don’t lower prices.

                                                                                                                              Pricing yourself lower than your costs is exactly how you use money to build a monopoly though.

                                                                                                                              All that being said, I don’t think anyone is worried about them “extinguishing” git, because you can’t extinguish open source software. But it definitely doesn’t look good for GH’s commercial competitors.

                                                                                                                          4. 2

                                                                                                                            Applied to a service, what they’d do is something to get people to put their critical assets in it, build their business processes around it, and eliminate the better competition somehow if possible, with lock-in as the result. Once locked in, they start jacking up prices, reducing quality, selling them out to advertisers, etc.

                                                                                                                            Microsoft has a long history of that for its own products and its acquisitions. I decided to recommend nobody depend on Github the second that… they were a SaaS startup. They usually become evil after acquisition or I.P.O. If not a startup, then the second Microsoft bought them.

                                                                                                                      1. 3

                                                                                                                        For all my home systems I just pick something from https://en.wikipedia.org/wiki/List_of_Greek_mythological_figures They started off very punny/topical, but I’ve been doing this for at least 15 years, so they’re getting to be more and more of a stretch. My first super powerful gaming computer was Zuse, but my recent VM server is Hera (goddess of childbirth). My NAS, where all my backups go, is Soter (male spirit of safety, preservation, and deliverance from harm).