1. 11

  2. 7

    My main concern is two fold for ML products:

    1. Code interviews are going to be a bigger minefield for the interviewer.

    2. Developers will become code janitors full-time and developers part-time. That is, the project/product manager will generate a half-working program, and the developers will be there to fix it.

    1. 6

      Re. 2: that transformation has already largely taken place. I started coding in 1987, professionally in 2000.

      When I’m coding in 2023 most of the time I’m gluing together preexisting libraries, and dealing with bugs in other people’s code. That was not at all the case when I started out.

      1. 1

        Gluing together pre-existing libraries was what OOP advocates like Brad Cox were promising in the ‘80s for most software engineers (with a smaller group writing those components). Brad’s vision was always that end users, not professional programmers, would do a lot of the final assembly.

        1. 1

          That’s the promise/premise of “low code/no code” as well. The public-facing rationale is shortening the lead time between customer wishes and the software solution; the hidden one is lessening the reliance on scarce, expensive software developers.

      2. 1

        Code interviews are going to be a bigger minefield for the interviewer.

        Just last week I put a few of our interview questions into ChatGPT and it solved most of them fairly reasonably. I received one slightly misguided answer that would still have been okay in a live coding interview. The questions were all basic to intermediate SQL questions: essentially a series of stakeholder questions with some stub data to query.

        We decided that we couldn’t let people do them as a take home. There was too much risk that someone could fake it. Even a screen sharing session might result in some trickery on a second screen.
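
        Roughly the kind of thing I mean (the table, data, and stakeholder question here are invented for illustration, sketched with Python’s sqlite3):

        ```python
        import sqlite3

        # Hypothetical stub data; the schema and the question are made up.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
            INSERT INTO orders VALUES
                (1, 'acme', 120.0),
                (2, 'acme', 80.0),
                (3, 'globex', 250.0);
        """)

        # Stakeholder question: "Which customer spent the most in total?"
        row = conn.execute("""
            SELECT customer, SUM(amount) AS total
            FROM orders
            GROUP BY customer
            ORDER BY total DESC
            LIMIT 1
        """).fetchone()
        print(row)  # ('globex', 250.0)
        ```

        A candidate faking it only needs to paste the question and the schema into ChatGPT, which is exactly why we pulled it as a take-home.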

        1. 3

          I think the solution is interviews that ask for broad understanding.

          I personally have always been interviewed in quite open discussions (LeetCode-style interviews are very uncommon in Germany); only one out of six interviews in my career involved a take-home assignment. As a senior, when I was asked to join interviews myself, I mostly asked questions that (imo) should show whether a candidate understands general concepts, e.g. “We have problem X, which technologies do you think could solve it?” or “You mentioned Y and Z, what advantages/disadvantages does each of them have in this use case?”.

          At least I would guess that it is hard to hear the question, type it into ChatGPT, wait for and understand the answer, and then respond, all with a small enough latency that it would not seem weird in the interview.

          1. 1

            Just last week I put a few of our interview questions into ChatGPT and it solved most of them fairly reasonably. I received one slightly misguided answer that would still have been okay in a live coding interview. The questions were all basic to intermediate SQL questions: essentially a series of stakeholder questions with some stub data to query.

            We had someone scam us for a week at my last employer (I never interviewed the candidate), but they refused to turn on their camera and started giving different responses.

            So now not only does the interviewer need to keep things like that in mind, but I also need to ensure that the person isn’t using ChatGPT and Copilot on their end to deliver answers to my problems. Maybe this will give rise to killing remote work: “Sorry, I need you to come into our office for the interview.”

          2. 1

            Perhaps it would promote the habit of designing better-constrained interfaces to rein in the complexity, and force more effort into writing comprehensive unit tests.
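
            For instance (a minimal sketch; the names and numbers are invented):

            ```python
            from dataclasses import dataclass

            # A narrowly-typed interface is easier to review, and easier to
            # pin down with tests, than one that accepts arbitrary dicts.
            @dataclass(frozen=True)
            class Discount:
                percent: int  # constrained: whole percentage points only

                def __post_init__(self):
                    if not 0 <= self.percent <= 100:
                        raise ValueError("percent must be between 0 and 100")

            def apply_discount(price_cents: int, discount: Discount) -> int:
                """Return the discounted price, rounded down to whole cents."""
                return price_cents * (100 - discount.percent) // 100

            # A unit test pins down the behaviour any generated change must keep.
            assert apply_discount(1000, Discount(25)) == 750
            ```

            The constrained type rejects nonsense inputs at construction time, so the AI-generated code has less surface area to get wrong.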

          3. 3

            GitHub Copilot X is on the horizon, and with it a new generation of more productive, fulfilled, and happy developers

            I don’t know, man. All this AI makes me feel incredibly stupid and generally unskilled. Every time the AI just solves something effortlessly, the only conclusion that comes to mind is how pointless learning that thing is now, because the computer can do it on its own. I should’ve gotten a degree in marketing or psychiatry. At least in those fields the client is always human.

            1. 3

              Once the AI has enough purchasing power, do you really think we will be selling to humans anymore?

              1. 1

                I agree. And even if human skill is still required to review AI-generated code, trawling through endless AI PRs looking for bugs all day long doesn’t sound enticing at all.

              2. 2

                I still think this misses what I want. I want to say something like: edit this existing project to do xxx, add the tests, and run them. Then create a PR for me with a good merge message. I will review the PR.

                1. 2

                  I use GitHub Copilot (sans X) most often for writing tests and other boilerplate. I agree with your take!

                  Sometimes it reminds me of Prolog. We define the types, objects, relations, etc. Then we ask the tool for something based on what is already there.
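
                  The Prolog flavour could be sketched like this (in Python, with invented facts, just to illustrate the analogy):

                  ```python
                  # Facts, as in Prolog: parent(tom, bob). parent(bob, ann).
                  parents = {("tom", "bob"), ("bob", "ann")}

                  def grandparent(gp, gc):
                      # Rule: grandparent(GP, GC) :- parent(GP, P), parent(P, GC).
                      return any((gp, p) in parents and (p, gc) in parents
                                 for p in {child for _, child in parents})

                  # Query: grandparent(tom, ann)?
                  print(grandparent("tom", "ann"))  # True
                  ```

                  We state what is true and what relations hold; the “ask the tool” step is just querying against what is already there.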