1. 5

    I’m guessing I’m one of not that many developers who has gone through the waterfall-to-agile transition multiple times, at several different companies.

    I like a lot about agile. First, emphasizing automated testing was huge. Developers have always liked writing tests, but we didn’t have a way to prove to our managers that it was important, first-tier work. Second, I loved getting rid of overly complicated project plans, and just switching to a backlog with small/medium/large tasks. The up-front project plans were never made in consultation with the people doing the work, and became increasingly divorced from reality as time went on.

    There are plenty of things I dislike, too. When a company switches to agile, they tend to end up with a whole fleet of project managers, product managers, product owners, scrum masters, consultants, etc. – all with questionable credentials and who charge a staggering amount of money. Worse, when a company switches to agile, they tend to lay off or fire all their most experienced developers, who are narrow subject matter experts and don’t fit cleanly into a team. This has almost universally disastrous results.

    I think agile played a huge role in getting traditional business-people to take software seriously, and assign adequate resources to it. But it’s also a tremendously destructive force that can destroy functioning organizations. I wouldn’t necessarily call it a failure, but it’s not a straightforward success either.

    1. 1

      When a company switches to agile, they tend to end up with a whole fleet of project managers, product managers, product owners, scrum masters, consultants, etc. – all with questionable credentials and who charge a staggering amount of money. Worse, when a company switches to agile, they tend to lay off or fire all their most experienced developers, who are narrow subject matter experts and don’t fit cleanly into a team. This has almost universally disastrous results.

      I’ve never “switched to agile” but these kinds of problems smell like exactly the kind of agile-not-agile failure mode that the article is talking about

      1. 2

        Seconding: firing developers in order to hire more managers sounds like the exact opposite of “Individuals and interactions over processes and tools” and “Responding to change over following a plan”.

        The No True Scotsman Fallacy doesn’t apply if the “Scotsman” in question turns out to be German.

        1. 2

          Honestly, No True Scotsman is wildly over-applied, and when a term has been co-opted it’s even stickier. Maybe “true agile” or “true OOP” are uncommon, but since they do exist and are wildly different from their pop-sci variants, they need their own names, or something. Saying “ah, well, in practice agile doesn’t work because no one can do it”, when you’re only willing to include in the analysis people who aren’t even trying or have no idea what they should be trying, is basically the reverse of the No True Scotsman fallacy.

    1. 13

      This is neat, and I think it’s great that folks are doing the work - hopefully, by the time Apple stops supporting these devices, Linux will be there, so anybody who wants to keep using the hardware for as long as it lasts can do so.

      But what I would really love to see is ARM hardware that rivals the Apple M1 and supports Linux out of the box. I keep hoping that the Raspberry Pi folks (or somebody) will finally release a version that’s beefy enough for real work, with support for NVMe drives and so forth.

      It’s probably a fantasy, but I’d really love to see hardware that’s as good as or at least close to as good as Apple’s but for Linux. Especially WRT battery life…

      1. 3

        I agree so much. I’d even be fine with something less beefy, as long as it allows for crazy battery life. Like a 32-core ARM with really good throttling and power management.

        1. 3

          I agree that we need a good ARM competitor, but it’s going to take a long time. None of the other ARM vendors are pursuing the laptop market at all. There are some strong server ARM companies, but they have a way bigger power budget and don’t have to make laptop-compatible device drivers. Microsoft has made ARM Surface notebooks for years, but it never pushed for a high-performance processor and never captured enough of the market to get much third-party software support.

        1. 10

          I’ve also been totally consumed by the same obsession. It’s costing a lot of money, too: I want to be able to distribute VQGAN models across multiple GPUs, which is far beyond my minimal PyTorch know-how. So I’m now looking to pay a contractor.

          I have this dream of making 2000x2000-pixel murals and printing them to canvas. AWS has EC2 configs with 96 GB of GPU memory. I can’t stop thinking about this, and it’s disrupting my life.

          But it’s also exhilarating. I know it’s “just” an AI generator, but I’m still proud of the stuff I “make”. Here are some of my favorites:

          My daughter wants to be an artist. What should I tell her? Will this be the last generation of stylists, and we’ll just memorize the names of every great 20th century artist to produce things we like, forever?

          I worry about this too, but also am excited to see what artists do when they have these tools. And I think it’ll make artists turn more to things like sculpture and ceramics and other forms of art that are still out of the machine’s reach.

          EDIT: also, a friend and I have been making games based on this: “guess the city from this art” or “guess the media franchise”. It does really funny stuff to distinctive media styles, like if you put in “homestar runner”

          1. 5

            Just my random observation but “your” pieces and the post’s all give the vague appearance of something running through endless loops of simulacra. Said another way, they all share similar brush strokes.

            I think we’re headed into the (while looking at a Pollock) — “humph, my AI could have painted that!” era

            1. 4

              There are a bunch of known public Colab notebooks, but one is very popular. It’s fast, but it has this recognizable “brush stroke” indeed. Some GAN jockeys are tweaking the network itself, though, and they easily get very different strokes at decent speeds. You don’t even need to know neural-network math to tweak it, just the will to dive in: break stuff, get a feel for what you like. If this is to become a staple artist’s tool, it’ll have to be like that, more than just feeding it queries.

            2. 3

              These are cool. The “Old gods” one especially… if that was hung in your house and you told me you’d purchased it from an artist I wouldn’t blink. When you make them, are you specifying and tweaking the style, and then generating a bunch, and then hand-picking the one you like?

              1. 3

                Starting out I was just plugging intrusive thoughts into colab to see what I’d get. If it didn’t produce something interesting (not many do) I’d try another prompt. Recently I spent a lot of time writing a “pipeliner” program so I can try the same prompt on many different configs at once. I got the MVP working on Monday, but I’m putting it aside a while so I can focus on scaling (it only works on one GPU, so can’t make anything bigger than 45k square pixels or so)

                1. 1

                  Are you saying you’ve managed to get this to run locally? All the guides I’ve found are simply how to operate a Colab instance.

                  1. 2

                    I got it running locally, but I don’t have a GPU, so I upload it to an EC2 instance. I recently found that SageMaker makes this way easier and less burdensome, though.

              2. 1

                There are neural nets intended specifically for upscaling images. Pairing one of these with VQGAN image generation (which is pretty low res) might let you make larger scale art without a huge unaffordable GPU.

              1. 2

                This is neat! For now, there are still lots of older laptops from mainstream brands that are user-repairable. I still use a 2014 Macbook and have replaced parts in it several times. But, in a few years, all these will be very out of date and there will be a real need.

                1. 1

                  At one point, there was a decent amount of research on software engineering practices. For example, I have this book on my shelf (I needed it for a class I took in grad school). However, it seems to have gone out of style around roughly 2000 with the mass switch to agile/scrum. Part of the problem is a lot of this work was funded by the US DoD and by IBM, neither of which embraced agile/scrum along with the rest of the industry. The newer crop of tech superstar companies seems to be much more secretive about its engineering practices, and also less interested in systematically studying them.

                  1. 2

                    Thank you for posting this, I’ve been having so much fun for the past week playing with it

                    1. 2

                      I’ve been writing a program to run batches on AWS. I hope it will make useful modifier phrases (like “unreal engine”) easier to find: try ten candidates on ten different prompts and see if any patterns emerge.
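
                      The batching idea boils down to a cross product. A toy sketch (the prompts, modifier phrases, and config values are all invented for illustration):

```python
from itertools import product

prompts = ["a lighthouse at dusk", "old gods"]   # hypothetical prompts
modifiers = ["unreal engine", "oil on canvas"]   # candidate phrases to test
configs = [{"steps": 300}, {"steps": 600}]       # hypothetical run settings

def build_jobs():
    """One job per (prompt, modifier, config) combination."""
    return [
        {"prompt": f"{p}, {m}", **cfg}
        for p, m, cfg in product(prompts, modifiers, configs)
    ]
```

                      Each job dict can then be shipped off to a GPU instance; the grid makes it easy to see whether a modifier helps across many prompts rather than just one.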

                    1. 3

                      This is pretty neat. The cool part isn’t that the POWER ISA is open source, but that the entire toolchain is open source, including the HDL and the cell libraries. I gather these have traditionally been extremely secretive and expensive.

                      1. 4

                        I made a similar setup in an old house when I was selling it and it was unoccupied for about 2 months in the middle of winter. Believe it or not, someone touring the house actually turned the heat off when it was far below freezing, so it was extremely valuable!

                        1. 1

                          The Raspberry Pi 4 is about as fast as a current-generation Celeron (used in lots of low end laptops or fanless PCs), so it’s actually not too underpowered for this…

                          1. 2

                            I worked on a horrible product where QA people were expected to file two bugs a day, and engineers were expected to fix two bugs a day. It occurred to me that the QA people and engineers could team up and create easy busywork for each other indefinitely….

                            1. 2

                              I distinctly remember having read once that this actually happened at Microsoft (?) in the 1980s (?) but I can offer no evidence.

                              1. 3

                                Maybe this? Not exactly what you describe, but close.

                                https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-steps-to-better-code/

                                What they realized was that the project managers had been so insistent on keeping to the “schedule” that programmers simply rushed through the coding process, writing extremely bad code, because the bug fixing phase was not a part of the formal schedule. There was no attempt to keep the bug-count down. Quite the opposite. The story goes that one programmer, who had to write the code to calculate the height of a line of text, simply wrote “return 12;” and waited for the bug report to come in about how his function is not always correct. The schedule was merely a checklist of features waiting to be turned into bugs. In the post-mortem, this was referred to as “infinite defects methodology”.

                            1. 2

                              Just a note – this paper is from 2012. Methods for identifying clients surreptitiously have gotten way, way more advanced since.

                              1. 4

                                Great info!

                                I once saw a project that dynamically adjusted the number of processes/threads in the server in order to maximize request throughput using gradient descent… I wish that were easier to do…
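
                                A minimal sketch of that idea, with an invented measure_throughput stand-in for the real measurement (every name here is hypothetical): nudge the worker count up or down, and keep a change only when measured throughput improves.

```python
import random

def measure_throughput(n_workers: int) -> float:
    # Hypothetical stand-in for a real measurement (requests/sec),
    # peaking at an unknown optimum, with some noise.
    return -(n_workers - 16) ** 2 + random.uniform(-1, 1)

def tune_workers(n: int = 4, steps: int = 50) -> int:
    """Hill-climb the worker count, keeping changes that help."""
    best = measure_throughput(n)
    for _ in range(steps):
        candidate = max(1, n + random.choice([-1, 1]))
        observed = measure_throughput(candidate)
        if observed > best:
            n, best = candidate, observed
    return n
```

                                Strictly speaking this is a finite-difference hill climb rather than gradient descent, which is about as close as you can get when throughput is only observable, not differentiable.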

                                1. 1

                                  You can do this with any web API that lets you mint a JWT, which is a lot of them, including every cloud provider. That said, the usability of JWTs leaves a lot to be desired: Vault isn’t really designed for this purpose, and neither are most command-line and other tools that access the APIs.
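
                                  For illustration, here’s roughly what “minting a JWT” amounts to at the wire level, using only the stdlib and an HS256 shared secret (the claim names and secret are made up; real cloud APIs typically use RS256 with a service-account key):

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def mint_jwt(claims: dict, secret: bytes) -> str:
    """Build an HS256 JWT: b64url(header).b64url(payload).b64url(signature)."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

token = mint_jwt({"sub": "vault-demo", "exp": int(time.time()) + 300},
                 b"shared-secret")
```

                                  The clunky part is everything around this: distributing the secret or key, and getting every tool in the chain to accept the token.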

                                  1. 1

                                    Amazing that they’ve kept that capability for 30 years!

                                    1. 3

                                      Open source serves many purposes:

                                      • as a building block for many other things (where it suffers the tragedy of the commons: no one wants to pay for it)
                                      • as a strategic lever for both big and small companies to force adoption of their preferred tools and approaches
                                      • as a “standards organization” where companies can cooperate on non-strategic components so they can lower costs and compete better on the strategic components (not that different from IEEE or ANSI in that way)
                                      • as a low cost R&D laboratory where companies can try out lots of things and see if they gain traction, without the stigma of a commercial project failing
                                      • as a way for people new to the tech industry to prove their skills and get jobs
                                      • finally, as an outlet for brilliant and creative people to learn new things and express themselves outside corporate rules

                                      It’s ok to have different opinions about each one of these.

                                      1. 1

                                        I’m not really a serious electronics hobbyist, but I know a lot of them, and the most popular board for basic hacker projects is the Adafruit Feather. It has a lot of IOs, it doesn’t need Linux (and all the pains of running Linux), and it’s really fast (enough to drive displays and speakers directly). The Raspberry Pi Pico is sort of similar but it has a lot fewer IOs. As you start building bigger projects, IOs become the limiting factor, and you have to start using multiple MCUs and connecting them, which gets more and more complicated….

                                        Another one I want to see succeed is WebFPGA. It turns out the software used to program FPGAs is notoriously clunky and huge (intended for professionals), so WebFPGA is a cheap custom FPGA board + a web interface that lets you just edit VHDL in the browser and see the results immediately.

                                        1. 1

                                          I’m curious how much of this is driven by a desire to derail screen scraping.

                                          1. 4

                                            I’m scratching my head here. Do people actually screen-scrape Google Docs?

                                            1. 3

                                              Even if they did, why would Google want to disable this? Are there docs out there being accessed by thousands of bots at the same time?

                                              1. 1

                                                Answer is: probably

                                            2. 4

                                              Or adblocking.

                                              1. 3

                                                I am wondering if it is to get decent performance on low-spec Chromebooks.

                                                1. 1

                                                  As someone who works on rich editors for the web (https://notion.so), this move makes complete sense for both performance and reliability. With Canvas/WebGL and a custom framework, Docs has much more complete control over rendering and render latency, and much lower exposure to user-agent differences. And, as a megacorp with effectively infinite engineers, they can bear the much higher cost of building everything themselves.

                                                1. 2

                                                  One thing that’s helped me is having my whole dev environment set up with a Vagrant script, so I can use it in a VM on various work, home and cloud computers and still have access to all the tools I’m used to.

                                                  Another useful thing is code-server, which is a browser interface for VSCode (which is already an Electron app so it’s not too hard to put in a browser). You can run it inside Vagrant (maybe on a cloud instance) and have access to a familiar IDE.
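
                                                    For anyone curious what combining the two looks like, a minimal sketch of such a Vagrantfile (the box name, port, and install command are assumptions; check the code-server docs for the current install method):

```ruby
# Hypothetical Vagrantfile: one Ubuntu box provisioned with code-server,
# reachable from the host browser at localhost:8080.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"
  config.vm.network "forwarded_port", guest: 8080, host: 8080
  config.vm.provision "shell", inline: <<-SHELL
    curl -fsSL https://code-server.dev/install.sh | sh
    # Bind to all interfaces so the forwarded port works from the host.
    sudo -u vagrant nohup code-server --bind-addr 0.0.0.0:8080 &
  SHELL
end
```

                                                    After `vagrant up`, the same editor and toolchain are a browser tab away from any machine that can reach the VM.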

                                                  1. 1

                                                    That’s a fantastic idea! If I end up using VSCode at some point – I’ve bounced off it a few times, mostly because “emacs” :) – I may mimic that.

                                                    1. 1

                                                      A little late to the discussion, but I recently switched to VSCode and I am using https://github.com/whitphx/vscode-emacs-mcx for key bindings. It’s not quite perfect of course, but nothing that trips me up on a day to day basis.

                                                      I’ve been using Emacs since ’97 or so off and on (mostly on - I used Eclipse when I was doing Java work) and so the muscle memory is quite hardwired at this point ;) I do miss recording macros on the fly (though I might guess something like that exists for VSCode already).

                                                      I do drop into a terminal and run emacs -nw or mg (usually for writing git commit messages).

                                                  1. 1

                                                    OK, I have a question about this design.

                                                    First, the key is deterministically derived using the U2F device. (It’s always the same key.) That means the key could be stolen if you’re accidentally using a compromised SSH client, for instance. Unlike a key on a smart card or a Yubikey in PIV mode, where the root key never leaves the device.

                                                    Presumably to mitigate this risk, Github also requires a TOTP one-time token if you’re using U2F. You have to push the button on your device, it spits out a one-time token that GitHub can verify.

                                                    But then what value does U2F add in the first place, if you still need to also use TOTP?

                                                    Maybe I’m misunderstanding something here.

                                                    1. 3

                                                      The key is generated via FIDO2 (the successor to U2F, with backwards compatibility), and it’s not deterministic. For registration, the key takes as parameters the relying party name (usually ssh:// for SSH keys, https://yoursite.com for websites), a challenge for attestation, the wanted key algorithm, and a few optional extras. The key responds with a KeyID (an arbitrary piece of data that should be provided back to the key when you want to use it), the public key data, and the challenge signed with that key.

                                                      The KeyID usually holds the actual private key, encrypted with an internal private key of the security key, which is decrypted on the device to actually use it. The KeyID is assumed to be unique, so it should be (and generally is) generated using a secure RNG on the security key.

                                                      The flow with SSH is quite simple: when you create an SSH key backed by a security key, a key gets generated on the device, the KeyID gets stored as the private key file, and the public key file stores the returned public key. When connecting over SSH, the server you are authenticating to issues a challenge, which gets passed to the key along with the appropriate KeyID, and the response is sent back to the server. No need for any additional TOTP tokens, which are less secure.
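
                                                      The key-wrapping trick can be sketched in a few lines. This is strictly a toy model: it uses HMAC and XOR as symmetric stand-ins for the asymmetric signatures and AEAD encryption a real FIDO2 authenticator uses, and every name is invented.

```python
import hashlib, hmac, secrets

DEVICE_SECRET = secrets.token_bytes(32)  # never leaves the "authenticator"

def register(relying_party: str):
    """Mimic registration: mint a per-site key, return (key_id, verifier)."""
    per_site_key = secrets.token_bytes(32)
    # The KeyID is the per-site key "encrypted" under the device secret
    # (XOR with a derived stream -- a toy stand-in for real AEAD wrapping).
    stream = hashlib.sha256(DEVICE_SECRET + relying_party.encode()).digest()
    key_id = bytes(a ^ b for a, b in zip(per_site_key, stream))
    return key_id, per_site_key  # the server stores both

def sign(relying_party: str, key_id: bytes, challenge: bytes) -> bytes:
    """Mimic authentication: unwrap the per-site key, answer the challenge."""
    stream = hashlib.sha256(DEVICE_SECRET + relying_party.encode()).digest()
    per_site_key = bytes(a ^ b for a, b in zip(key_id, stream))
    return hmac.new(per_site_key, challenge, hashlib.sha256).digest()
```

                                                      The point is that the authenticator stores nothing per site: everything it needs comes back inside the KeyID, which is why a cheap key can hold “unlimited” credentials.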

                                                      1. 1

                                                        First, the key is deterministically derived using the U2F device. (It’s always the same key.) That means the key could be stolen if you’re accidentally using a compromised SSH client, for instance. Unlike a key on a smart card or a Yubikey in PIV mode, where the root key never leaves the device.

                                                        How can this make sense? Surely the U2F device has access to a suitable CSPRNG.

                                                      1. 1

                                                        Java is actually the fastest, and Go only the middle of the pack despite being the original and flagship implementation of gRPC? Very surprising.

                                                        1. 2

                                                          No, the benchmark results are sorted by average latency, of which the Java implementation has the lowest. When you look at the other latencies (and memory usage), the Java implementation isn’t the best. (I think the Rust implementation does very well considering all metrics.)

                                                          I wish those tables were sortable by different columns. Also, a nice visualization could communicate the benchmark results much better.
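
                                                          Re-sorting is easy to do offline if you paste the rows into a script. A sketch with invented numbers (not the actual benchmark figures), just to show why the “winner” changes with the sort column:

```python
rows = [  # made-up results, only to illustrate the shape of the data
    {"impl": "java", "avg_ms": 1.2, "p99_ms": 9.5, "mem_mb": 950},
    {"impl": "rust", "avg_ms": 1.4, "p99_ms": 3.1, "mem_mb": 120},
    {"impl": "go",   "avg_ms": 1.9, "p99_ms": 5.0, "mem_mb": 300},
]

def rank(rows, column):
    """Order implementations by a single metric (lower is better)."""
    return [r["impl"] for r in sorted(rows, key=lambda r: r[column])]
```

                                                          With numbers like these, Java tops the average-latency sort while Rust tops tail latency and memory, which is exactly the ambiguity a single-column sort hides.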