One of my least productive traits is my propensity for getting caught up in YouTube binges. Every once in a while, a scene from Moneyball will pop up in my recommendations, and I have no choice but to tumble down the rabbit hole. For the uninitiated, the movie is based on the book Moneyball: The Art of Winning an Unfair Game and chronicles the Oakland Athletics baseball team’s journey to build a competitive roster using sabermetrics. There’s just something about this movie that makes its scenes endlessly rewatchable. I have only watched Moneyball from start to finish once, but the amount of time I have spent watching snippets online would make one think it was my favourite movie of all time¹. Ironically, I don’t even understand the rules of baseball and have no particular interest in the sport.
I do have an interest in player analysis, though, and watching the scenes in Moneyball, I can’t help but long for a similar revolution in the software industry. Imagine how much more effective hiring would be if candidates could be boiled down to a couple of metrics that correlate strongly with performance. I am not sure such metrics can even be defined. Unlike in sports, where there is a single clear objective, software engineering is considerably more amorphous. The goal of most companies is ultimately to turn a profit, but engineers are often so far removed from that goal that it is difficult to gauge their contribution. Say engineer A sped up a central CI/CD pipeline while engineer B improved the reliability of a service. Which contribution benefited the company’s bottom line more?
In my experience, the ability to communicate well - especially in writing - is a strong indicator of engineering expertise; I have yet to meet an excellent engineer who is a poor communicator. Granted, I personally enjoy writing, and this no doubt influences my opinion. Although I try, I find it difficult not to overemphasise skills that I possess myself or treasure highly when evaluating candidates.
The advent of AI has hardly made hiring easier. Choosing the right candidates is tricky even when one knows exactly what qualities to look for, but I feel that AI has already shifted the attributes that make a good engineer, and that this shift will continue as the technology develops. For example, I pride myself on knowing the ins and outs of idiomatic Go, but as AI turns code into a commodity, I’m not sure that this kind of skill will remain valuable. I suspect many engineers - myself included - are due for some introspection about how their current skills apply in the age of AI, or whether they apply at all.
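To make that concrete, here is the kind of detail I mean - a small, invented sketch of Go’s error-wrapping conventions (the loadConfig function and config file name are made up for illustration):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// loadConfig is a hypothetical helper. The idiomatic detail: annotate
// errors with context via fmt.Errorf and the %w verb, so the original
// error stays inspectable further up the call stack.
func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("loading config %q: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadConfig("missing.toml"); err != nil {
		// errors.Is walks the wrapped chain to find the underlying cause.
		if errors.Is(err, fs.ErrNotExist) {
			fmt.Println("config file does not exist:", err)
			return
		}
		fmt.Println("unexpected error:", err)
	}
}
```

Nothing deep, but knowing when to wrap with %w rather than swallow or restring an error is exactly the sort of accumulated fluency that AI assistants now reproduce effortlessly.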
There’s also the question of whether candidates should be allowed to use AI tools during interviews. I discussed this with my colleagues and was surprised to hear that most of them were against it. The principal argument was that AI tools can make candidates seem more skilled than they are, letting them solve problems that they could not solve unaided. I don’t buy this argument. If an organisation offers its employees access to AI tools², then candidates should also be able to use them during interviews.
Those concerned about candidates using AI should consider whether the real problem is that their hiring process probes for skills that correlate poorly with what the role actually requires. For example, I might be disappointed that a candidate who can solve a coding challenge with AI is not also able to do great code reviews. In that case, the problem is with me: if I want to hire great reviewers, then the process should probe for that. I realise that I am being a bit idealistic and that many skills are too difficult or time-consuming to measure in an interview setting. Still, I think the concern about candidates using AI is overblown.