1. 5

    PHP: Got me started on a different career path :)

    1. 3

      This Matt Might post takes a more in-depth look at how to implement some db operations in bash: http://matt.might.net/articles/sql-in-the-shell/

      He uses awk to implement joins here, but the join tool makes an appearance in a footnote.
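      For the curious, the heart of the awk approach is just a lookup table keyed on the join column. A minimal sketch of that hash-join idea in Python (the data here is made up for illustration):

      ```python
      # Toy inner join on the first column of two "tables",
      # mirroring the awk lookup-table approach from the post.
      left = [("alice", "engineering"), ("bob", "sales")]
      right = [("alice", "london"), ("carol", "berlin")]

      # Build an index keyed on the join column...
      index = {key: rest for key, *rest in left}

      # ...then stream the other table through it, keeping matches.
      joined = [(key, *index[key], *rest) for key, *rest in right if key in index]
      print(joined)  # [('alice', 'engineering', 'london')]
      ```

      The `join` tool does essentially this, except it expects both inputs pre-sorted on the key.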

      1. 2

        Starting a collaborative music project with a friend several thousand miles away. Excited to see what we come up with musically, but also excited to see if we can find a reasonably efficient way of sharing ideas/assets in the near absence of real-time collaboration. Could be really cool!

        1. 2

          These types of posts are great! Logic has a special place in my heart. I studied it several years prior to getting into programming so I really enjoy when it all comes full circle.

          The first three examples can also be analyzed and solved with a good old truth table. I wrote a little sentential logic “prover” project[1] that I’ll use to demonstrate. It’s a Python library called sentential.

          Example 1: Both of us are Knaves.

          The author writes: “A and B are both False, if and only if A is True.”

          This can be translated into sentential logic as: a <-> (¬a & ¬b). Alternatively: (¬a & ¬b) <-> a

          from sentential import Proposition
          
          both_are_knaves = Proposition('''a <-> (¬a & ¬b)''')
          both_are_knaves.pretty_truth_table()
          +-------+-------+-----------------+
          | a     | b     | a <-> (¬a & ¬b) |
          +-------+-------+-----------------+
          | True  | True  | False           |
          | True  | False | False           |
          | False | True  | True            |
          | False | False | False           |
          +-------+-------+-----------------+
          

          In order to find our satisfying condition(s), we inspect the table for rows where the whole expression evaluates to True.

          # Filter rows to find only cases where the whole expression is True
          both_are_knaves.pretty_truth_table(cond=lambda row: row['expr_truth_value'] == True)
          +-------+------+-----------------+
          | a     | b    | a <-> (¬a & ¬b) |
          +-------+------+-----------------+
          | False | True | True            |
          +-------+------+-----------------+
          

          And there’s our first solution: a = False, b = True
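          Incidentally, for puzzles this small you don’t even need a library – the whole truth table can be brute-forced in a couple of lines of plain Python:

          ```python
          from itertools import product

          # a <-> (not a and not b): keep the rows where A's claim
          # is true exactly when A is a knight.
          solutions = [(a, b) for a, b in product([True, False], repeat=2)
                       if a == ((not a) and (not b))]
          print(solutions)  # [(False, True)]
          ```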

          Example 2: Both of us are Knights or both of us are Knaves

          You can build these up piecewise as well – it need not all be done in one shot.

          “both of us are knights”: (a and b)

          “both of us are knaves”: (¬a and ¬b)

          A is making this claim, so we use the biconditional (<->) again: a <-> ((a and b) or (¬a and ¬b))

          both_are_knaves_OR_both_are_knights = Proposition("""a <-> ((a and b) or (¬a and ¬b))""")
          
          both_are_knaves_OR_both_are_knights.pretty_truth_table(cond=lambda row: row['expr_truth_value'] == True)
          +-------+------+----------------------------------+
          | a     | b    | a <-> ((a and b) or (¬a and ¬b)) |
          +-------+------+----------------------------------+
          | True  | True | True                             |
          | False | True | True                             |
          +-------+------+----------------------------------+
          

          Two possible solutions this time: [a = True, b = True], [a = False, b = True].

          Example 3: A claims to be a Knave, then claims B is a Knave

          This example can be expressed as: (a and ¬a) and (a and b).

          As the author notes, the initial contradiction prevents this proposition from being satisfiable, so there are no rows of the truth table where the expression = True.
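          A quick brute-force check over all four assignments (plain Python, no library required) confirms that nothing satisfies the expression as written above:

          ```python
          from itertools import product

          # (a and not a) and (a and b): the embedded contradiction
          # guarantees an all-False truth-table column.
          solutions = [(a, b) for a, b in product([True, False], repeat=2)
                       if (a and not a) and (a and b)]
          print(solutions)  # [] -- unsatisfiable
          ```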

          I actually picked up Smullyan’s To Mock a Mockingbird in the past year or so hoping I could tackle some of the problems with provers. Perhaps I should take a look at that again…

          Notes:

          [1] I suppose one may say it is an “interpreter” (it computes truth tables and understands how to interpret logical expressions in a sense) or a “prover” (it implements resolution using the set-of-support strategy). I wrote it a few years ago for personal edification and because I just had to scratch an itch. Weird itch, I know. (Disclaimer: It is a pet project – it should not be relied on for anything approaching real work!)

          1. 2

            Spending some time writing/recording music and geeking out with folks over on linuxmusicians.com.

            1. 13

              Some of the psychology you cite is very pertinent and interesting but I really disagree with some of your conclusions.

              I think that if you as a candidate freeze up when asked to perform a basic example of what you’d be doing for your job, then that’s an interview skill you should consider building rather than advocating doing away with that class of interview question.

              I know a LOT of talented people who have great trouble with whiteboarding, and I get it. I REALLY do because I used to be in that category.

              But like any other fight / flight response where no actual mortal peril is involved, you can work through the anxiety and learn to process the situation differently, and once you do, it’s incredibly empowering!

              When I’m doing an interview and I can tell that my candidate is freaking out, I typically ask them to talk me through what they think a solution would flow like, without any of the syntax involved.

              And honestly, what job anywhere is free of this kind of pressure?

              [Full Disclosure: I work for a Major Cloud Company and our interview process is highly data driven and also includes some whiteboarding. I think this hiring strategy has been extremely successful, but obviously YMMV and different companies are different.]

              1. 15

                The problem is, it’s not only about anxiety. It could be, and often is, any number of things: active stressors (illness, divorce, etc.), biochemical stuff (low blood sugar, hormones), even the weather (too hot or too cold). I remember that 10 years ago, when I had an interview at booking.com, my head hurt so badly that I could hardly speak.

                And honestly, what job anywhere is free of this kind of pressure?

                Almost no job has that kind of pressure. It’s totally artificial.

                When you measure people this way, you have no option but to ask academic questions. For example, the local Microsoft tech interviews I took part in were created by assistants from the local faculty of mathematics, and had questions such as “what is the probability that the frog survives crossing the road” and other crap like that. I had finished that same math university five years earlier and I still couldn’t manage many of the questions. Meanwhile, freshly finished students nailed them, with zero practical experience (it really was zero). So this type of interview is biased toward fresh graduates, and academia itself has entirely different goals and an agenda far removed from everyday IT reality (since then I’ve had a number of professors working on my team, each time with disappointing results). Or remember the Google interview of the author of brew?

                I used exclusively the let’s-talk-about-random-IT-stuff type of interview, and I think it worked OK. I also ask for online profiles – GitHub, GitLab, etc. – since there is no substitute for seeing (or not seeing!) the code itself. There were a few people who were great in conversation but sucked badly on the job, but overall my initial feeling after the conversation was at least 90% on the spot.

                1. 2
                  And honestly, what job anywhere is free of this kind of pressure?
                  

                  Almost no job has that kind of pressure. It’s totally artificial.

                  Couldn’t disagree more. I’ve worked in this industry for 30 years, admittedly only 5 or so of those years as an actual ‘software developer’ (mostly release engineer, devops, and before that highly code-centric sysadmin).

                  I can’t think of a single job I’ve ever worked that didn’t have “OK. We need you to produce. NOW” moments.

                  If you’ve worked jobs that don’t have that kind of pressure, I envy you, but I’m unsure whether or not your experience is one that many others share.

                  1. 15

                    Certainly many jobs require deploying a patch, developing a new schema, reacting to a fire in the moment on deadline, etc., but I think few require you to be able to converse about it to strangers who can fire you, in those circumstances. ;)

                    My work is not complicated. My experience with whiteboard interviewing has been with insignificant companies that do not do hard engineering. When I froze up and couldn’t think of the name for modulo after having just used the operator, the interviewer decided I wasn’t a good fit… I think less technical or complicated jobs use whiteboard just as much as Google, and that’s frustrating.

                    1. 11

                      Most of them won’t then stare at you while you do your job on an unfamiliar tool you’re only using so they can decide whether to promote or fire you.

                      Instead, they give you some requirements, you leave, you work on them alone at a computer, maybe talk to team members, maybe send status reports (mostly typed), and eventually deliver the results to them.

                      These are the skills the interview process should assess. That’s why a lot of shops do take-home assignments, or simply skip whiteboarding. That said, whiteboarding in a non-hostile environment can be great for testing presentation and collaboration skills.

                      1. 4

                        Exactly.

                        When multiple people STARE AT ME while I’m working (even in a familiar environment), I can’t work productively.

                        Also, nobody ever comes to you with a random problem and asks you to solve it yesterday in one hour. You always have some context, some continuity.

                        1. 2

                          I can appreciate where you’re coming from, and I’ve done take-home assignments in the past. I stand by my comments about whiteboarding as a valuable interviewing tool. If interviewers are getting hung up on details, they’re missing the point; and as a prospective interviewee, expecting that you’ll never be asked to give a sample of your work on the spot doesn’t seem realistic to me.

                          However, if all of you can restrict your job search to companies that never do whiteboarding, good on you!

                          1. 2

                            The problem with whiteboarding is that while it measures something, the something it measures is not the thing you’ll care about if you hire them. Which in turn is why there are books out there that teach whiteboard interview coding as a separate skill from actual programming, and why even prestigious universities now include a unit on interview coding as a separate skill from programming.

                            Which raises the inevitable question: why not actually test for the skills you’ll care about on the job? If you don’t test for the job skills you’ll hire people who don’t have the job skills.

                            1. 3

                              why not actually test for the skills you’ll care about on the job?

                              That is what they’re trying to do. It takes a lot of time to find out if someone can actually do a good job as part of your team, and the only way to really test it is to employ them for a few months (A few months of probation is quite common for this). Given that you can’t afford to make that sort of investment in every candidate that applies, companies use whiteboarding and other forms of technical interview to try and guess whether a candidate might have suitable skills before investing more time in them.

                              1. 0

                                That is what they’re trying to do.

                                Well, no. What they’re trying to do is cargo-cult what they think Google is doing, because they think “Google is big and successful, so if our company does what Google does, our company will be big and successful too”.

                                Of course, Google openly admits their process has a high false-negative rate (it flunks qualified people), but they get enough applicants that they can afford to reject some qualified people. The average company isn’t in that position and can’t afford it.

                                And Peter Norvig has explained that Google found a negative correlation between doing well on competitive programming tasks and performance on the job, which throws a wrench in any argument that on-the-spot coding under pressure measures something useful.

                                Interviewing as typically practiced in tech is fundamentally broken. It measures the wrong things, draws the wrong conclusions from what it does measure, and is more or less random. I think it’s long past time to stop defending that.

                                1. 2

                                  I never said that the processes were effective, nor am I defending them. I am merely pointing out that they are ultimately an attempt (however ineffective) to select candidates with relevant skills and reject those without.

                                  “Why not actually test for the skills you’ll care about on the job?” is unhelpful in that the intention is blatantly obvious, but offers absolutely no suggestion of how to achieve it.

                                  1. 5

                                    “Why not actually test for the skills you’ll care about on the job?” is unhelpful in that the intention is blatantly obvious, but offers absolutely no suggestion of how to achieve it.

                                    There are a ton of articles and talks floating around on how to do better tech interviews – I’ve even written/presented some of them, based on my own experience helping to rebuild interview processes at various places I’ve worked – and people could quite easily find them if they wanted to.

                                    But here goes. Again.

                                    As I see it, there are several fundamental problems, and various ways to avoid them.

                                    Problem #1 is playing follow-the-leader. People implement processes based on what they think bigger/more successful companies are doing, without considering the tradeoffs involved or indeed whether those processes had anything to do with the size/success of those companies. Google’s admitted high false-negative rate is the quintessential example: they really can afford to throw away qualified applicants, because tomorrow another hundred will have submitted applications. The typical tech company can’t afford this, or can’t afford other unquestioned assumptions baked into large-company interview processes.

                                    The solution here is to question the assumptions and expose the tradeoffs. The extremes are “Never hire an unqualified person” and “Never pass on a qualified person”. Google optimizes for the former at basically all costs. Many companies, on the other hand, need to push the needle further toward the latter, which means a more forgiving process that doesn’t flunk people as quickly or for reasons as minor as large companies do. True story: I know one person who flunked Google because they had to write and then read out a bash script over the phone and the interviewer mistranscribed it. I know another person who flunked at Twitter because they provided one of two textbook algorithms for the problem posed, but the interviewer only knew the other and didn’t bother looking more deeply than just “that’s not the answer I know”.

                                    Those should be unforgivable interviewer errors at any company, but are especially unforgivable at companies which can’t afford to just throw qualified applicants into the trash can.

                                    Problem #2 is poor targeting. A lot of interview processes, especially at BigTech, explicitly or implicitly target fresh graduates, by quizzing on the sorts of things fresh graduates will have top-of-mind. Many of those things are not top-of-mind for working programmers who’ve been in the industry a while, since they’re rarely used in actual day-to-day programming (this includes a lot of algorithm and data-structure questions, since most working programmers don’t implement those from scratch on a routine basis). This biases away from experienced programmers, and creates a self-reinforcing cycle: you hire a bunch of recent grads, and they come in and start interviewing people which pushes even more toward preferring recent grads, so you hire even more of them, and… then one day you look around and wonder why it’s so hard to find experienced people. This is especially bad in the places that do algorithm challenges, because usually they’re posing things that want you to come up with a solution that took top people in the field decades to come up with while not under any particular pressure, and they want it from you in 20 minutes. On a whiteboard. While they watch. The only way to pass these is to “cheat” by already knowing the answers in advance, which you do either by reading a book about interview problems, or by being a recent grad who just passed a bunch of exams on the material.

                                    The solution here is to interview based on things that are actually relevant to day-to-day programming. You can, if you want to, find out about someone’s problem-solving skill while using questions and problems that involve things real working programmers actually do.

                                    Speaking of which, problem #3: far too many interview problems are contrived and unrealistic.

                                    You can do interviews based on real-world practical problems. Two of the best interview processes I’ve ever gone through did exactly this: one had a work-sample test where they gave you a simplified version of an actual problem from the domain they worked in, the other did a collaborative session where you had to debug a piece of code extracted from their real system and find the real problems in it. Putting together an interview based on these kinds of problems doesn’t take a ton of time, and gives you a much more realistic idea of how someone will perform in your company than the million and first shibboleth problem that tries to test for “fundamentals” but really only checks whether someone was taught the test.

                                    Problem #4 is measuring things that don’t matter. Whiteboard design can be useful, but whiteboard coding isn’t. Algorithm regurgitation isn’t. Trivia isn’t. Having open-source contributions on GitHub isn’t. Having lots of free time to do competitive “challenges” isn’t. And a lot of “soft” factors like confidence aren’t.

                                    Measure the things that matter. Measure how well someone can ask questions about a problem or communicate ideas on how to solve it. Measure how well someone works with others (pair programming can make a great interview session if you do it right). Measure how well someone finds and uses resources to help them work through a problem. Measure how well someone interacts with non-engineer colleagues. We’ve all worked with people who were good and people who weren’t so good; figure out what the good ones had in common and measure for that. It’s almost never going to be things like “they were really good at writing linked-list implementations on a whiteboard”.

                                    Here are some concrete ideas for more useful interviews.

                                    First, always let the candidate use a real computer with real programming tools and access to references, Google, and Stack Overflow. I make a point of telling people that I’ve written significant chunks of Django’s documentation but I still have that documentation open in tabs when I’m working; it’s outright nonsense to forbid that while also claiming you measure realistic performance.

                                    Second, use technical sessions that avoid the problems outlined above. Here are ideas:

                                    • Code review. Bring a piece of code (as realistic as possible) and have the candidate work through it with you. Have them demonstrate that they can read and understand it, ask productive questions about it, and offer constructive and useful feedback on it.
                                    • Pair programming. Bring something with a known bug, have them debug and fix it with you. Have them demonstrate that they understand how to approach it, search for and identify the problem, and work up a fix.
                                    • Give them notes from a real problem, and be prepared to answer questions about it, and have them write a brief post-mortem for it. Have them demonstrate they can take in the information you’re giving them, usefully probe for anything missing, and synthesize it all into an explanation of what happened.
                                    • Bring them a rough feature spec and ask them to refine it and break it down into assignable units of work. Have them demonstrate they can ask good questions about it, figure out the needs and the tradeoffs involved, and come up with a sensible plan for how to go after it.

                                    Third, use non-technical sessions! And not just a “culture fit” lunch with the team. Have them do a session with a PM or designer or other non-engineer colleagues to see how they interact and watch for signs of whether they’ll have productive working relationships with those folks.

                                    Finally, standardize your evaluations. It’s OK if there are different possible combinations of sessions (some people may prefer to do a take-home work sample, others may prefer to pair live and in person, etc.), but it’s not OK if interviewers have different rubrics for grading. Write out, explicitly, the qualities you’re looking for, in specific terms (i.e., not “confidence” or “professionalism” – those are vague weasel words that shouldn’t be allowed anywhere near an interview scorecard). Write out how interviewers are supposed to look for and evaluate those qualities. Set explicit baseline and exceeds-expectations bars for each session. Write scripts for interviewers to practice on and follow when presenting problems. Have interviewers practice running the sessions with current employees, and have some of your acting “candidates” try to pull sessions off-script or fail, to make sure interviewers know how to handle those cases gracefully.

                                    And above all, treat candidates like people. Someone you’re interviewing should be seen as a colleague you just haven’t worked with yet. Designing a process to be adversarial and to treat everyone as a con man will yield miserable and unproductive interviews.

                                    Now, I got voted “-2, troll” on my previous comment for citing sources showing that the typical coding interview doesn’t measure things that actually matter and in fact selects for things that correlate negatively with on-the-job performance. But I could cite plenty more. This video, for example, features a former Google employee who at one point recounts the story of a frustrated recruiter revealing to a hiring committee he served on that they’d all just voted “no hire” on slightly-anonymized versions of their own interview packets, and how that exposed the brokenness of the process. This article, from a company that provides interviewing tools, goes into detail on just how unreliable a seeming proxy for real on-the-job skills can be (key takeaway: scores assigned to candidates by interviewers were all over the place, and inconsistent even for the candidates with the highest mean performances, fully one-third of whom outright bombed at least one interview).

                                    Interviewing is broken. It can be fixed. Both of these should be uncontroversial facts, but the fact that multiple people here saw them as “trolling” is indicative of the real problem.

                                    1. 2

                                      I regret I have but one upvote to give for your comment.

                                      1. 1

                                        Excellent comment. I particularly like the part about practices that focus on recent grads becoming self-sustaining. That could be actionable under anti-discrimination laws. I’ll try to remember that.

                                        If the -2 was what I think it was, it might be because your intro had a tone that came across as aggressive, dismissive, or something similar. People here are more sensitive to that than in most places, both because most of us want civility and because of what a large group deems inclusive speech. Your comments will get a better reception if you present your counterpoints without any rhetoric that comes off as a personal attack or dismissal.

                              2. 0

                                In my CV you can see that I have created a number of large services used by entire countries, 24/7. This can easily be checked, even in real time (I have admin access to all of them, which I can demonstrate immediately). You want me to whiteboard? No!

                                Furthermore, I will make sure all of my senior IT friends and colleagues know how much you suck as a company (in this country, you can count the seniors on your fingers at the moment), so good luck finding one.

                                It looks to me that many people do not get it: senior developers are celebrities today.

                                Me? I’d rather collect peanuts than make somebody else rich(er) from my work without fair compensation, respect and professionalism.

                                1. 6

                                  Me? I’d rather collect peanuts than make somebody else rich(er) from my work without fair compensation, respect and professionalism.

                                  Your response made me take a step back and think about what we’ve all been saying here, and I came to a couple of conclusions.

                                  1. I am not a software developer per se. I work in the devops space. I do write software, but it’s nothing even remotely on the order of magnitude that the average Crustacean does. I write simple process automation scripts in Python and occasionally Ruby or Bash. This informs both my world view and the kinds of things I would ask people to whiteboard. As in, they are not at all algorithmically hard, things like “Print the duplicates in this list of numbers” and the like.

                                  2. Reading all of you express such vehement opposition to the idea certainly has me questioning its wisdom when interviewing software devs, and also wondering if the experiences you’ve all had were at the hand of people who weren’t very mindful of candidate experience in how they were conducting their interviews in general.

                                  In any case, it’s all very good food for thought, and I will now shut up on this topic and think on all of this.

                                  1. 5

                                    Based on my own experience, it is not that uncommon to find someone with “years of programming experience” on their resume who nevertheless has trouble solving basic programming tasks. This is because experience comes in a ton of different shapes and sizes. For that reason, before I can be okay with hiring someone in a technical capacity, I really need some kind of evidence that they can write code well. Whether that’s looking at code they wrote previously, giving them a new coding assignment, a whiteboard interview, a presentation or maybe just prior experience working with them, I don’t really care. But there needs to be something. If I just took people’s word for it, the results would be pretty bad.

                                    I think people in this thread generally underestimate just how many people are out there that claim coding experience but fail to meet some really minimal standards. I’m talking about things like Fizz Buzz. Some kind of evaluation process is necessary in my experience.

                                    Personally, I think the person you’re talking to is being way too unreasonable.

                                    Everyone gets way too hung up on this shit. There’s a saying that goes something like, “all models are wrong, but some are useful.” It applies just as well to hiring practices. You can’t get it right all of the time, and some of your techniques might indeed yield false negatives. This is basically impossible to measure, but since a false positive is generally perceived as being much more costly than a false negative, you wind up trying to bias toward rejecting folks that might be otherwise qualified in favor of avoiding false positives.

                                    The whole thing is super hand-wavy. People just seem to get obsessed with the fact that Hiring Practice X is wrong in one dimension or another. And they’re probably right. But so what? All hiring practices are wrong in some dimension. And even this differs by experience. For example, a lot of people love to pooh-pooh whiteboard interviewing because it’s not reflective of what the job is actually like. But that’s not true for me: I use the whiteboard all the time. It’s a pretty useful skill to be able to go up to a whiteboard and communicate your ideas. Obviously, the pressure of evaluation isn’t there when you’re on the job, but I don’t see how to relieve that other than by limiting yourself to hiring people you already know.

                                  2. 1

                                    In my CV you can see that I have created number of large services to be used by entire countries, 24/7.

                                    I want to remind you that this doesn’t matter at all in terms of good software engineering. There’s lots of crap tech out there with high usage, and there are countries that demand even crappier tech. High uptake doesn’t mean anything except that you worked on something with high uptake – whether due to you or, more likely, to someone else, given how businesses and governments decide what to use. If you doubt this, I’ll hit you with my COBOL argument: why not hire someone who knows the language that billions of lines of mission-critical code are written in? It must be better than almost everything else, if the argument from uptake and net worth in critical areas is meaningful.

                                    Or maybe that’s just socio-economic forces at work in a world where we need to consider other things. ;)

                                2. 1

                                  I think the whole point of the article here is that there is no one best type of interview. Some candidates have anxiety attacks when mild pressure is applied, and do much better in a lower-pressure situation like a take-home assignment, and don’t mind spending the time on it. Some candidates have families or other obligations and can’t spend (unpaid) hours outside of their normal job writing up a solution to somebody’s take-home assignment, but do just fine solving problems on whiteboards. Probably some others have issues with both of those and need something else again.

                                3. 9

                                  I think that’s a very different situation when you’re already comfortable with your environment.

                                  1. 9

                                    Couldn’t disagree more with you, either. I’ve also been in this industry for 30 years with almost all of them being an actual ‘software developer’ and I can count on one hand the number of times that I’ve had “produce now!” moments. Perhaps I’ve been lucky, but in my experience, these are rare, 1% times, when there’s a demo or something the next day and you’re trying to get something out the door. Given that, why exactly should we measure this? Even given those high pressure situations, you’re not standing alone, at a whiteboard or in front of a panel, with no resources ( google, library, other engineers ), no context ( mystery problem… you don’t get to know until you have 2 hours to solve! ) and no backup plan. Even with all of those caveats, I have NEVER had to cough up a random data structure with no reference material/resources/etc. in less than an hour or two EXCEPT in an interview.

                                    1. 1

                                      What I’m hearing is that a lot of people have had really bad experiences with whiteboard interviewing.

                                      If the interviewer is hung up on syntax, they’re IMO doing it wrong.

                                      1. 2

                                        If someone presents a problem for you, do you immediately recall all algorithms you’ve learned and start whiteboarding that? That’s generally what happens in these whiteboarding sessions that I’ve been in. I don’t remember a ton of stuff off-hand and search for it all the time. Should that count against me? If it does, I don’t really want to work at that place anyway.

                                        1. 1

                                          I realized (as I wrote in another reply to someone else) that there’s a disconnect here.

                                          I’m not a software developer. I do devops work, which means I write a lot of VERY simple process-oriented automation scripts in Python and occasionally Ruby or Bash.

                                          When I do whiteboarding with candidates, the most algorithmically complex I get is “print the duplicates in this list of numbers” but mostly it’s things like “Here’s a json blob that lives on an http server. Write code that grabs the blob and connects to each host in a group given on the command line” type things.
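                                          For concreteness, that duplicates question can be answered in a few lines; here’s one sketch (assuming each repeated value should be reported once, in first-seen order):

```python
from collections import Counter

def find_duplicates(numbers):
    # Count occurrences, then keep every value seen more than once.
    # Counter preserves first-seen order, like any dict in Python 3.7+.
    counts = Counter(numbers)
    return [value for value, count in counts.items() if count > 1]

print(find_duplicates([1, 3, 2, 3, 1, 4]))  # [1, 3]
```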

                                          But even so I certainly see where you’re all coming from, and can appreciate the idea that there are definitely better tools out there, especially for cases where you’re not doing hiring at scale.

                                          Which makes me wonder, how might one apply all this anti-whiteboarding sentiment in a large corporate environment? How do you get take home exams, pairing, or whatever to be effective when you need to hire 50 people in a month?

                                          1. 2

                                            I used to work at Capital One and we were always trying to hire people. Each team largely handled its own hiring, though, and we had .. so many teams. Some teams do a whiteboarding session, some do a take home assignment, some do a little pair programming exercise. It really depends on the team and the people.

                                            I’m not anti-whiteboarding for things like what you mentioned, but if someone is asking me to regurgitate a CS algorithm that I haven’t touched in decades, I don’t really get the point.

                                    2. 3

                                      Hmm. I’m not up to 30 years experience at this point (closer to 20), but I’ve never had this happen to me. Even when production bugs hit, it’s not a “We need you to produce. NOW” scenario, but an all-of-us circling the wagons type of deal to track it down. Even in startups where I was the only person writing code, it was never like that. That seems really unusual to me. I don’t know what country you’re in, but for context, I’m in the US.

                                      1. 4

                                        OK. We need you to produce. NOW

                                        Oh, you work for the type of people who think programming is like manufacturing, so you can measure it by the number of bricks in the wall, LOC, or something. I don’t accept that kind of job. I know better. I’ve witnessed it dozens of times at companies I consult for, and that kind of job is always substandard.

                                        I envy you

                                        It’s a better idea to fight for your rights (our rights, really).

                                        but I’m unsure whether or not your experience is one that many others share

                                        I am sure it’s not. I am sure it’s the opposite. But that is where I draw the line. I don’t mind fixing simple things here and there ASAP, but designing an entire app/service, or parts of one, without adequate time is no bueno.

                                        1. 4

                                          Oh, you work for the type of people who think programming is like manufacturing, so you can measure it by the number of bricks in the wall, LOC, or something. I don’t accept that kind of job. I know better. I’ve witnessed it dozens of times at companies I consult for, and that kind of job is always substandard.

                                          This is kind of a cheap shot and also? It’s not true.

                                          But even if it was, people work for different reasons. I live in a society that doesn’t provide much of a net if you fall, and I have some medical conditions which require some fairly expensive care.

                                          So, yes I work for a company that pays well. I put a LOT of blood, sweat and tears into my job, and I won’t apologize for that.

                                          Might it not be the kind of place you’d want to work? I don’t doubt that, and I’m glad you’ve been able to find work that suits your particular needs and wants, but please consider that not everyone is you before making broad statements about people and workplaces when you have zero information.

                                          1. 4

                                            Besides the broad generalizations, I find myself between both of you. majkinetor strikes a chord when he says one must fight for one’s rights. I’d add that once one finds oneself (if ever) in a position wherein one can enlighten the company masses, then one should do what one can. But feoh isn’t incorrect either. Sometimes you just gotta play the game. I have only worked for places that tend to think of programming in terms of lines of code. It’s really why I am always looking for a new place to work. And, yes, I hate interviewing as well, which makes my conundrum annoying.

                                            1. 1

                                              But even if it was, people work for different reasons. I live in a society that doesn’t provide much of a net if you fall, and I have some medical conditions which require some fairly expensive care.

                                              We’re speaking in generalizations here; any kind of behavior may be adequate in a specific context.

                                              So, yes I work for a company that pays well. I put a LOT of blood, sweat and tears into my job, and I won’t apologize for that.

                                              Who asked you to? When I speak, I speak about myself and what I think; I am not giving lectures or passing judgement.

                                              when you have zero information

                                              Not zero; I’ve been in this business more than 30 years (also with medical conditions). Don’t be angry :)

                                              1. 2

                                                Not zero; I’ve been in this business more than 30 years (also with medical conditions). Don’t be angry :)

                                                I can see where I came off as defensive there. I’m not angry, I’m just surprised at the expectations some folks have around the interview process.

                                                If you can find work with those expectations, that’s fantastic.

                                        2. 1

                                          I think you’re combining two different things here - having a candidate write things on a whiteboard in an interview, and interviewing a candidate based on solving highly abstract problems that have little relation to the actual day-to-day work of building software, stuff like writing binary tree manipulation algorithms.

                                          I think the second is a ridiculous thing to judge candidates based on almost all of the time, unless of course you’re doing one of the few jobs that actually involves doing stuff like that.

                                          The first though, is a perfectly good way to determine whether candidates can actually produce code.

                                          1. 0

                                            The first though, is a perfectly good way to determine whether candidates can actually produce code.

                                            Producing code on a whiteboard? I had that at university on exams, and many “programmers” came out of it without using a computer a single time during their entire studies (I know: my sister was one of them, and never turned a computer on until she finished her computer science degree! and got a job). Doesn’t inspire…

                                        3. 4

                                          Mine is kind of like the protagonist’s job in Office Space, but with more SIP trunking.

                                          1. 3

                                            At most places I’ve worked pressure was 99% of the simmering kind. Less than 1% I’ve had that flashpoint pressure but frequently it was artificially created to “increase productivity”. Basically the only time when you have to deliver NOW is during major production incidents.

                                            I’d bet that even at your “Major Cloud Company” the percentage of people involved in solving a major production incident is in the low double digits.

                                            I have frozen during whiteboard interviews and I’m far from a shy person. Day to day I’m more likely to be the kind of person that makes others freeze - and I don’t say that as some sort of self-praise, quite frequently it’s not a good trait.

                                            In my opinion, even with your highly data driven process, if you work for a “Major Cloud Company” you can just ask for a painting competition among your candidates and you’d get the same results. Your company probably pays a very high salary compared to the market so competition would be fierce and people willing to train for 6 months on the intricate details of painting while holding a full-time job at the same time would probably make at least half-decent coders :)

                                            TL;DR: It’s the high salaries and very competitive recruitment pipeline that produce results, not necessarily the actual interviewing details, in my opinion.

                                            1. 1

                                              I share your opinion. Note that, in my mind, the “increase productivity” methods actually are very harmful in the long run.

                                            2. 1

                                              But like any other fight / flight response where no actual mortal peril is involved, you can work through the anxiety and learn to process the situation differently, and once you do, it’s incredibly empowering!

                                              I’ve been thinking about this point for several days now.

                                              I haven’t been in the habit of thinking about interviewing in terms of a set of skills one can acquire/improve. However, perhaps if I can get myself into this mindset, it will help me to do something other than just walk into a new place and fly by the seat of my pants :)

                                              1. 1

                                                I’ve been thinking about this point for several days now.

                                                I haven’t been in the habit of thinking about interviewing in terms of a set of skills one can acquire/improve. However, perhaps if I can get myself into this mindset, it will help me to do something other than just walk into a new place and fly by the seat of my pants :)

                                                I am sincerely grateful for the fact that at least one person took the advice I was giving in the spirit in which it was meant :)

                                                It most definitely is a skill, and here’s how I know that:

                                                There was a period in my career when I was much better at interviewing than actually doing the work. My work ethic has never been in question, and I will work to the point of stupidity at times, but I had made some rather short-sighted choices about which skills to build (learning new technologies in a very shallow way VERY quickly) over the skills I should have focused on (attaining mastery in a few key tools and subject areas).

                                            1. 2

                                              I was getting started with shellcheck recently and liked it. I started wondering how I might get some form of “integration” with vim using whatever builtin facilities were available. I found this SO answer. I hadn’t known about :read (or :0read for that matter), or expand('%'), before.

                                              I like to use tabs, so I swapped tabnew for new, then wrapped it up in a command in my .vimrc.

                                              command Schk execute 'tabnew | 0read ! shellcheck' expand('%')

                                              Now I can run shellcheck on a bash script I’m working on with the results appearing in a new tab.

                                              Nice that the building blocks for something like this are exposed in a functional (if ultimately not highly discoverable) way.
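                                              A related approach (not from the SO answer; sketched from vim’s quickfix machinery, so adjust to taste) is to point 'makeprg' at shellcheck’s gcc-style output, so :make drops you directly on each finding:

```vim
" In ~/.vimrc, for shell buffers only: shellcheck's gcc output format
" matches vim's default 'errorformat'
autocmd FileType sh setlocal makeprg=shellcheck\ -f\ gcc\ %
" :make populates the quickfix list; :cnext / :cprev step through findings
```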

                                              1. 1

                                                “But it’s possible to run two protocols over the same port with some smarts in the endpoint.”

                                                This had simply never occurred to me before. But then seeing the proof-of-concept, it immediately clicked.

                                                Thanks for sharing this.
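                                                The “smarts” can be surprisingly small: HTTP clients open with a method name, while SSH clients announce themselves with an “SSH-” banner (or, with some clients, wait for the server’s), so peeking at the first bytes is usually enough to route a connection; this is the trick tools like sslh use. A rough sketch of just the classification step, with a made-up helper name and method list:

```python
def classify_first_bytes(first: bytes) -> str:
    # HTTP requests start with a method name; route everything else to SSH.
    # A full demultiplexer would peek without consuming the bytes, e.g.
    # conn.recv(8, socket.MSG_PEEK), and time out on silent clients.
    http_starts = (b"GET ", b"POST", b"PUT ", b"HEAD", b"DELE", b"OPTI")
    return "http" if first[:4] in http_starts else "ssh"

print(classify_first_bytes(b"GET / HTTP/1.1"))   # http
print(classify_first_bytes(b"SSH-2.0-OpenSSH"))  # ssh
```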

                                                1. 3

                                                  Thanks for sharing! I personally find this stuff quite interesting.

                                                  I worked through the first several sections of Udacity’s Knowledge-Based AI course around this time last year. Then a lot of life happened, so I can’t say I finished, but I really enjoyed the overview.

                                                  In particular, I still want to do a deep dive on planning. There was some discussion of Patrick Winston’s work I still want to go back and take a closer look at.

                                                  1. 1

                                                    For planning, the older ones that were really cool were the Procedural Reasoning System and Firby’s Reactive Action Packages. You can also DuckDuckGo conferences on planning to find state-of-the-art research. Many forms just use constraint solvers, genetic algorithms, stochastic optimization, etc.

                                                  1. 4

                                                    I spent this weekend writing a Forth designed for music synthesis […]

                                                    I would love to hear more about this part specifically!

                                                    I wonder how the author’s audio-focused Forth compares to, say, Sporth.

                                                    1. 3

                                                      Developers are obsessed with the notion of “best practice”. It implies that there is one correct way of doing things, and that all other solutions are either imperfect or, at worst, “anti-patterns”. But the definition of best practice changes every time a new technology arises, rendering the previous solution worthless garbage (even though it still gets the job done).

                                                      Developers should be concerned with best practices. We are constantly learning better ways to do the jobs that we are doing. Some of the things that we used to do are no longer done, because there are better ways to do them; some of the new ways to do things are quite complex. The definitions change, because our understanding, or our underlying technology, changes. Similarly, we updated things like “our model of the atom” when we had a better understanding of how it should actually work. Complexity isn’t necessarily bad, just as simplicity isn’t necessarily good.

                                                      It’s much more important to think for yourself than it is to strive for simplicity because some guy on this blog told you to try to be simple. Sometimes it’s fine and great to have a static site with no database, but pretty often it turns out that databases are important and do things that you kind of need. Sometimes it’s great to go for a simple tech stack and remove moving parts, but sometimes it turns out that a more complex tech stack is actually important for doing the kinds of things that you need to do. The thing that’s actually important is not simplicity; it’s understanding why you should approach something a particular way.

                                                      Don’t get me wrong; there are certainly developers who find answers and then go looking for problems to solve with them. Sometimes developers make things much more complicated than they need to be. However, there’s a counterjerk to this happening where people make things much simpler than they should be, while questioning the utility of doing more. I think the simplicity counterjerk is just as destructive as thoughtless complexity.

                                                      1. 11

                                                        The issue with obsessing over “best practice” is that we end up building a certain way because it’s “best practice”, not because we understand why it’s appropriate to our situation.

                                                        1. 0

                                                          This is a great point. I think sometimes we fail to consider the full context for any given “best practice”, especially while in a development-intensive mode.

                                                          I think it’s important to have a touch of skepticism about best practices, especially in environments that change rapidly.

                                                          Ultimately, I think the intention comes from a good place. It seems to follow that if one is interested in following best practices at all, they want to do so out of a sense that it is the correct thing to do. This applies technically as well as socially. On its face, this may not seem problematic. However, issues arise once correctness (as a whole) and “best practices” become muddled.

                                                          Are best practices good because they are correct, or are they correct because they are best practices? This is starting to look a lot like the Euthyphro Dilemma.

                                                        2. 1

                                                          People should be concerned with which practices are the best, but not with ‘best practice’. The problem with the concept is that it’s used as a justification for doing something. ‘We’re doing X, Y and Z because it’s best practice’ is nonsensical. There’s someone who thinks that A, B and C are best practice as well. Justify why it’s the best thing for you to do based on its actual merits. When new, superior approaches arise, they might become YOUR best practice immediately, but for a lot of people ‘best practice’ is synonymous with ‘the way we’ve always done it’.

                                                        1. 0

                                                          Can anyone here provide recs as to SQL books/resources they’ve gotten value out of?

                                                          I recently cracked open a copy of Stephane Faroult’s The Art of SQL, though it’s too early at this point for me to say anything more substantive about it.

                                                          1. 17

                                                            https://www.python.org/dev/peps/pep-0572/#syntax-and-semantics

                                                            # Handle a matched regex
                                                            if (match := pattern.search(data)) is not None:
                                                                # Do something with match
                                                            
                                                            # A loop that can't be trivially rewritten using 2-arg iter()
                                                            while chunk := file.read(8192):
                                                                process(chunk)
                                                            
                                                            # Reuse a value that's expensive to compute
                                                            [y := f(x), y**2, y**3]
                                                            
                                                            # Share a subexpression between a comprehension filter clause and its output
                                                            filtered_data = [y for x in data if (y := f(x)) is not None]
                                                            
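                                                            Roughly what the first and last examples look like without the walrus operator (pre-3.8 rewrites of my own; pattern, data, and f below are placeholder definitions, not from the PEP):

```python
import re

pattern = re.compile(r"\d+")
data = "abc 123"

# Handle a matched regex: the binding needs its own statement
match = pattern.search(data)
if match is not None:
    print(match.group())  # 123

# Sharing a subexpression between a comprehension's filter clause and
# its output needs an intermediate generator stage instead of :=
def f(x):
    return x if x % 2 else None

filtered_data = [y for y in (f(x) for x in [1, 2, 3, 4]) if y is not None]
print(filtered_data)  # [1, 3]
```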
                                                            1. 9

                                                              I missed this whole debate over the summer, so thanks for posting the summary.

                                                              I’ve hit #1 and #4 more times than I can count. I didn’t consider #2 a problem, but there’s a lot of code I’ve written that would be affected by that.

                                                              #1 and #2 look good to me. #4 makes me skip a beat. Maybe I could get used to it. But I sorta don’t like how #3 and #4 mix “declarative vs. imperative”. They are mixing models. It would be nicer for a language to have dataflow or lazy evaluation / caching rather than having the programmer force evaluation and binding. (Although I realize Python is not this language for several reasons.)

                                                              Over time I’ve found myself using list comprehensions only for the simplest of cases. A plain “for loop” in Python is general, powerful, and easy to write :) It stands up better to editing and debugging, e.g. to add a print statement.

                                                              Anyway, it’s an interesting issue in language design.

                                                              1. 4

                                                                Sometimes list comprehensions are more elegant-looking. I use them a lot because I heard a rumor that they are faster than extending a list with a for loop. However, this may just be wrong gossip.

                                                                In response to @modusjonens

                                                                s1 = """y = []
                                                                for x in range(1000):
                                                                    y += [x**2]"""
                                                                
                                                                s2 = """y = [x**2 for x in range(1000)]"""
                                                                
                                                                import timeit
                                                                s1_times = [timeit.timeit(s1, number=1000) for n in range(50)]
                                                                s2_times = [timeit.timeit(s2, number=1000) for n in range(50)]
                                                                
                                                                import numpy as np
                                                                
                                                                np.mean(s1_times)
                                                                # -> 0.4136016632785322
                                                                
                                                                np.mean(s1_times) - np.mean(s2_times)
                                                                #-> 0.059135758097691005
                                                                
                                                                import scipy.stats as ss
                                                                
                                                                ss.ttest_ind(s1_times, s2_times)
                                                                # -> Ttest_indResult(statistic=7.744078660754616, pvalue=8.882270186123814e-12)
                                                                

                                                                ever so slightly better in the simplest case …

                                                                1. 1

                                                                  Could be a fun exercise to craft some examples using both methods, profile the code, and present the results!

                                                              2. 3

                                                                I added these examples to the article as well, they do add some good context.

                                                              1. 2

                                                                I wouldn’t even know how to go about formally proving code is correct.

                                                                For instance (for a personal project)—given a string of ASCII characters (32 to 126 inclusive) break it into a series of lines no longer than N characters wide (for display purposes). Each line is to be as long as possible, but no longer than N characters. The original string can be split at space (character 32) or dash (character 45). If broken at a space, the space character is not included in either half; if broken at a dash, the dash is to remain at the end of the (and here, I want to say left hand side, but I haven’t formally defined that yet, but I will anyway—see how hard this is getting?) left hand string. If there is no break point in N characters, then just break at N characters.

                                                                So how would I 1) formally specify that, and 2) formally prove my code works?

                                                                And this problem is the simplified version, as I’m dealing with UTF-8, which is another level of madness.
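                                                                For reference, here’s a greedy sketch of the ASCII version of that spec (my own unverified code, not a proof of anything: break at the rightmost space or dash within N, drop the space, keep the dash, hard-break at N otherwise):

```python
def wrap(text, n):
    # Greedy line breaker: each line as long as possible but <= n chars.
    lines = []
    while len(text) > n:
        space = text.rfind(" ", 0, n + 1)  # space at index <= n: dropped
        dash = text.rfind("-", 0, n)       # dash at index <= n-1: kept on the left
        if space == -1 and dash == -1:
            lines.append(text[:n])         # no break point: hard break at n
            text = text[n:]
        elif space >= dash + 1:            # space yields the longer line
            lines.append(text[:space])
            text = text[space + 1:]
        else:
            lines.append(text[:dash + 1])  # dash stays at the end of the line
            text = text[dash + 1:]
    lines.append(text)
    return lines

print(wrap("one two three four", 10))  # ['one two', 'three four']
```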

                                                                1. 5

                                                                  I totally recognize that problem (it’s the problem of “What’s the right spec?”) and think it’s a deep issue in FM and program design, but just for fun I wanted to try writing some pseudo specifications in weird Dafnyish-pseudospecs

                                                                  def line_splitter(str: string) returns (out: seq<string>)
                                                                  requires
                                                                    ∀ char ∈ str: 0 ≤ ASCII(char) ≤ 126
                                                                  ensures
                                                                    concat(out) == str
                                                                    ∀ line ∈ out: len(line) < N //max size
                                                                    ∀ i  ∈ 0..len(out)-1: //break point
                                                                       either 
                                                                         1) out[i].last = "-"
                                                                         2) space_exists_at_boundary(out, i)
                                                                         3) let extra := N - len(out[i]) in //there was no better point to break at
                                                                           ∀ char ∈ 0..extra: 
                                                                             either 
                                                                               1) out[i+1][char] ≠ " ", "-"
                                                                               2) Len(out) < extra
                                                                  

                                                                  “As long as possible” is a weird one, because that potentially implies global optimums. If you’re doing greedy matching you could move the postcondition about extra to also hold even if the last character is - or =. Note that this would be impossible to verify as you can’t ensure the user won’t pass in aaaaaaaaaaaaaaaaaaaaaaaa, which breaks the length invariants.

                                                                  (the above is super janky and I just whipped it up improvised, there’s probably a bunch of holes in that spec)

                                                                  1. 2

                                                                    One bug—requires should be:

                                                                    ∀ char ∈ str: 32 ≤ ASCII(char) ≤ 126
                                                                    

                                                                    The “as long as possible” is to avoid breaking too soon, like at the first possible break point. So that with an input of “one two three four” and N=10, you don’t get

                                                                    one
                                                                    two
                                                                    three
                                                                    four
                                                                    

Your last case of “aaaaaaaaaaaaaaaaaaaaaaaa” I did hint at with “[i]f there is no break point in N characters, then just break at N characters.” So if we have N = 5, we end up with

                                                                    aaaaa
                                                                    aaaaa
                                                                    aaaaa
                                                                    aaaaa
                                                                    aaaa
                                                                    

                                                                    But I was able to follow the logic. Thank you.
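For concreteness, a greedy wrapper with this behavior might be sketched in Python like so (the function name and the exact break-point handling are my own assumptions, not part of the spec above):

```python
def wrap(s, n):
    """Greedy wrap: break at the last space or hyphen that fits within
    n characters; if there is no break point, hard-break at n."""
    out = []
    while len(s) > n:
        space = s.rfind(" ", 0, n + 1)  # a space at index <= n gets dropped
        hyph = s.rfind("-", 0, n)      # a hyphen at index <= n-1 stays on the line
        if space != -1 and space >= hyph:
            out.append(s[:space])
            s = s[space + 1:]
        elif hyph != -1:
            out.append(s[:hyph + 1])
            s = s[hyph + 1:]
        else:
            out.append(s[:n])          # no break point: just break at n
            s = s[n:]
    out.append(s)
    return out
```

With N=10, `wrap("one two three four", 10)` breaks as late as possible rather than at the first break point, and the all-a’s input degrades to hard breaks at N.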

                                                                    1. 2

That would be something like len(out) == N => "-" not in out and " " not in out

where I don’t have fancy quantifier symbols on my keyboard; => is the implication operator.
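For what it’s worth, p => q can be encoded in plain Python as not p or q, so a formula like this checks mechanically over each output line (taking it at face value; whether it is exactly the right invariant is a separate question, and the name below is my own):

```python
def no_soft_break_on_full_lines(out, n):
    # p => q is encoded as (not p) or q:
    # if a line is exactly n characters long, the formula requires
    # that it contain no break character at all.
    return all(
        len(line) != n or ("-" not in line and " " not in line)
        for line in out
    )
```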

                                                                    2. 2

This pseudo-specification that you’ve whipped up is exactly the sort of thing I’d like a programming language to let me define inline, right next to the code it specifies, and check at compile time that the code I wrote matches the spec. If there were tooling that let me do this for some programming language, that would be ideal, even if this pseudo-specification is off-the-cuff and “wrong” in some way.

                                                                      1. 4

                                                                        How about VeriFast?

                                                                        1. 1

                                                                          … a programming language to let me define inline, right next to the code it specifies, and check at compile-time that the code I wrote matches the spec.

                                                                          Absolutely. I recall watching an @hwayne talk about this sort of thing. I believe it was this talk.

                                                                          There’s also this post on contract programming which shows some demo contract code in a few languages. The D and Eiffel examples look pretty interesting.

                                                                          1. 1

SPARK Ada and Frama-C are the ones with the biggest industrial adoption that mix specs and code. Check them out. Anything you can’t prove might turn into a runtime check.

                                                                          2. 2

Wow, this thing, as well as the Dafny thing, really looks reasonably approachable! The Let’s Prove Leftpad repo of yours, which you linked in the article, is also super interesting. That said, based on the repo, I find Dafny the friendliest (surprisingly so), followed by Why3. TLA+ is already stretching it, and the remaining ones look super alien and waaaay over my head. At first glance, Dafny looks like something I could try using. Which leads me to a question: is there some catch? Is Dafny notably worse than the others in some way? Or is it equivalent, and just a recent development where its authors tried hard to make it friendly to a “common programmer”?

                                                                            Also, can I just go and use Dafny now, when writing my regular day-to-day CRUD/CLI/GUI/… apps?

                                                                            Also, is there something like Dafny for some other languages than C#?

                                                                            1. 1

I’ll also note that Why3 is a backend targeted by SPARK Ada, Frama-C, and many projects in CompSci. It feeds multiple solvers with different strengths and weaknesses. You get that, plus possible porting of features from other languages, if you pick Why3.

                                                                          3. 3

Formal proof is most useful when the implementation is significantly more complex than the specification. A classic example is sorting: merge sort (especially an optimized one) is far more complex than the statement of what sorting is. A more involved example is register allocation: you specify “live ranges should not conflict” and prove the graph-coloring implementation.
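The sorting case makes the spec/implementation gap concrete: the specification (output is ordered and a permutation of the input) fits in two lines, while even a plain merge sort is noticeably more code. A Python sketch with the spec expressed as a runtime check:

```python
from collections import Counter

def merge_sort(xs):
    """Implementation: noticeably more code than the spec below."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

def meets_sorting_spec(inp, out):
    """Specification: ordered, and a permutation of the input."""
    ordered = all(a <= b for a, b in zip(out, out[1:]))
    permutation = Counter(out) == Counter(inp)
    return ordered and permutation
```

A prover’s job is showing that the first function satisfies the second for all inputs, not just the ones you happened to test.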

                                                                          1. 1

                                                                            Congrats! Always makes me smile to see an awesome project by a Philly dev.

                                                                            1. 1

                                                                              Thanks!

                                                                            1. 1

                                                                              I’ve been able to spend some of my free time on writing/recording music at home. I run Ubuntu, so I’ve been learning more about the audio ecosystem there.

                                                                              I’ve been learning how to create drum beats with Hydrogen (http://hydrogen-music.org/). I’ve found it intuitive so far. Nice pattern-based UI lets you visualize the structure of the song. A lot of fun to use.

I’ve been trying to wrap my head around Ardour (https://ardour.org/) for recording. I find DAWs pretty intimidating in terms of the sheer number of options/modes, and this one’s no different in that regard. But there are good docs and an active user community, so that’s a boon.

                                                                              1. 3

                                                                                Still working my Exercism problem. My mentor on there crafted a positively gorgeous hint response that laid out the thought process behind solving the problem, including the use of truth tables and peeling apart the problem and solution with abstract logic.

I look forward to digging in and applying that to my wetware, in hopes of not just solving the problem but growing the necessary machinery to get better at solving such problems in the future, which is my goal.

                                                                                1. 1

                                                                                  including the use of truth tables and peeling apart the problem and solution with abstract logic.

                                                                                  That sounds very satisfying! What’s the problem in question?

                                                                                  1. 2

                                                                                    You’re gonna be totally disappointed :) It’s a leap year calculator. Yes I am very much a slow learner for this crowd :)

                                                                                    I’d implemented my solution in the naive way using 3 conditionals, and my mentor wants me to implement it as a single conditional using nothing but boolean logic.

I’m having trouble figuring out how to do that, so he picked the problem apart and expressed it in non-syntactic terms using logic, with a supporting truth table.
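For reference, the standard leap-year rule does collapse into a single boolean expression in Python (divisible by 4, except century years, except every fourth century):

```python
def is_leap(year):
    # Divisible by 4, and either not a century year or divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
```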

                                                                                    1. 3

                                                                                      I’d implemented my solution in the naive way using 3 conditionals, and my mentor wants me to implement it as a single conditional using nothing but boolean logic.

Yeah, this is a thing I’ve been trying to pay more and more attention to in my own code.

                                                                                      I had the fortune of studying formal logic before really getting into programming. It’s nice to be able to fall back on those skills, esp. when refactoring complex conditionals, but sometimes it’s easy to just crank out the first thing that works.

                                                                                1. 1

                                                                                  I feel compelled to share that this is one of the most elegantly-written pieces I’ve seen posted here. I really enjoy these philosophical posts.

                                                                                  This section, in particular, struck a chord:

                                                                                  Any potential addition to the codebase as a whole confers a particular anxiety, as each addition threatens to disturb the aesthetic harmony. In considering features and additions to a particular codebase, it is possible to apply this principle to the whole of the system in which that codebase will live, to ask what is the ideal harmonization of all constituent parts. It therefore becomes the job of the programmer to balance the aesthetic motivation, on one hand, with the utilitarian and technologically contingent nature of the activity, on the other.

                                                                                  I feel this anxiety frequently. The classic C.A.R. Hoare quote comes to mind: “Inside every large program, there is a small program trying to get out.”

                                                                                  1. 2

                                                                                    This sort of “mini interpreter in an object/module” stuff is so cool. Really interested to see the next installment.

                                                                                    One thing I found myself curious about was the change from the

                                                                                    return lambda value: op(value, nodes)
                                                                                    

                                                                                    in the original implementation of the build_evaluator method to the explicit declaration (and return) of the _op function in the updated code. These two methods achieve the same goal, correct?
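A minimal sketch of why the two forms are interchangeable (the `op`/`nodes` names are stand-ins for whatever build_evaluator actually closes over):

```python
def build_evaluator_lambda(op, nodes):
    # Original style: an anonymous closure over op and nodes.
    return lambda value: op(value, nodes)

def build_evaluator_def(op, nodes):
    # Updated style: a named inner function, returned explicitly.
    def _op(value):
        return op(value, nodes)
    return _op

# Both return closures with identical behavior.
add_all = build_evaluator_lambda(lambda v, ns: v + sum(ns), [1, 2, 3])
add_all2 = build_evaluator_def(lambda v, ns: v + sum(ns), [1, 2, 3])
```

The def form just gives the closure a name and allows multiple statements in its body.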

                                                                                    1. 2

They achieve the same goal; I imagine it might just be the change of form from a lambda to a def that confuses you. Other than that, the idea is indeed the same.