Conducting Better AI-proof Programming Interviews
June 01, 2025
The Broken State of Programming Interviews
Programming interviews have long focused on trivia and common tutorial-style questions that reward candidates who are good at memorization and interview practice, but not necessarily at the job itself. These questions do a poor job of assessing a candidate’s actual abilities because they don’t evaluate realistic work scenarios.
Rote memorization is not the most important skill you need on the job. Most development revolves around learning and problem solving. There’s too much to know and too many options to choose from to keep it all in your head. Interviews should reflect that and focus on how candidates learn and adapt rather than on how well they implement algorithms that reward memorization and practice.
AI Further Breaks Interviews
AI excels at memorization problems, which will make the situation worse. Cheating is becoming a bigger problem and it can be hard to detect with standard questions. There are now companies that offer real-time AI interview assistants that can produce live instructions for the candidate to follow to pass an interview, telling them what to type and even what to say.
To combat cheating, companies are trending toward draconian and dehumanizing measures like eye tracking, spyware, invasive room tours, and even disallowing bathroom breaks. Anti-plagiarism tech is also very inconsistent and can lead to false accusations against legitimate candidates. These invasive measures get the interview off on the wrong foot and start the relationship with the company on a sour note.
A return to in-person interviewing is one solution, but it still doesn’t help with the screening phase, and relying on resume content rewards those who use AI to tailor their resumes to each position’s requirements. Flying out a candidate who faked their way through the screening process is an incredibly expensive mistake to make.
The solution isn’t an arms race between cheating and anti-cheating tech. That will further dehumanize candidates and create an even more negative interview experience. We need interviews that can break AI, and the way you break AI is the same way you create better interview questions: make them more reflective of actual work skills.
The interview process was already broken and AI is just exacerbating the problems. We shouldn’t be putting so much effort into protecting something that’s already broken.
Test Learning, Not Memorization
The most effective technique I’ve found for finding high quality candidates is to focus on learning-style questions. If a question requires the candidate to learn new information before they can solve the problem, they won’t be able to prompt an AI properly. AI can solve problems for you, but it can’t learn for you.
I’ve found the best way to test for this is to give the candidate a somewhat obscure and interesting API that requires them to read documentation and understand an unfamiliar programming construct. Unlike recalling the answer to a puzzle question, it’s hard to fake that you’re learning. It’s also harder to provide an AI with API documentation context than to copy-paste a puzzle description, which is basically already a perfectly structured AI prompt.
I’ll point candidates to specific methods they’ll need to use. Even if the AI is already up to date with a lesser-known API, it will still have a hard time looking like it’s learning. If the solution includes methods you didn’t link to and the candidate never looked up, that’s an obvious red flag.
I’ve found the Canvas API to work particularly well for interviewing. It uses a less intuitive stack pattern for building paths, and the current path must be cleared to avoid overdrawing the same lines again. So far I have yet to encounter a candidate who could recall that off the top of their head. For these questions I make it totally open book and encourage candidates to ask me any reference questions they have. The more they are encouraged to ask questions, the better.
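As a rough sketch of the pitfall (not the actual exercise I give): if you forget to reset the current path, typically with beginPath(), every later stroke() re-draws all the segments added before it.

```typescript
// Minimal sketch of the Canvas overdraw pitfall, assuming a plain 2D context.
const canvas = document.querySelector("canvas") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

function drawSegment(x1: number, y1: number, x2: number, y2: number): void {
  ctx.beginPath(); // easy to forget: resets the current path
  ctx.moveTo(x1, y1);
  ctx.lineTo(x2, y2);
  ctx.stroke(); // without beginPath() above, every earlier segment
                // would be stroked again on each call
}

drawSegment(0, 0, 100, 100);
drawSegment(100, 100, 200, 50);
```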
To help compare candidate skill levels, I’ll progressively add requirements as prior requirements are completed. This establishes a measurable metric (how many requirements were completed in the given time frame), tests the candidate’s ability to refactor their code and adapt to changing specifications, and further probes their comprehension of the APIs they’ve just been introduced to.
Proprietary Interview APIs for Large Companies
If you’re a big company that needs to deal with question leaks, you can take this another step further and create your own internal API used only for interviewing. That API won’t be in an AI’s training data, making it much harder to get the context needed to feed into the AI. And even if a candidate managed to obtain your internal API documentation, showing up with specific knowledge of a proprietary internal API is itself a huge red flag.
By using your own API you gain control over the process, which helps you guard against leaks. You can dedicate some resources to gradually changing the API so that even if it does leak, the leaked version goes out of date. In a large organization, the relative investment is small and the impact is large. You could even make it obvious when the API is being analyzed by an assistant by adding honeypot methods that you never point candidates to during the interview.
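As a rough sketch of what that could look like (every name here is invented for illustration, not a real internal API):

```typescript
// Hypothetical interview-only API surface. The documented methods are the
// ones shared with the candidate; the honeypot is deliberately left out of
// the docs, so its appearance in a solution is a strong signal of outside help.
interface InterviewGrid {
  // Documented: shared with the candidate during the interview.
  setCell(x: number, y: number, color: string): void;
  clearRegion(x: number, y: number, width: number, height: number): void;
  onCellClick(handler: (x: number, y: number) => void): void;

  // Honeypot: functional, but never mentioned to candidates.
  fastRender(flags: number): void;
}
```

Rotating method names or signatures between interview cycles keeps any leaked copy stale without much ongoing effort.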
Focus on Problem Solving
Learning quickly is incredibly important, but you also need to be able to apply what you’ve learned and find elegant ways to integrate your research into solutions. This requires strong problem solving skills to deal with unique problems and technology combinations.
AI has near-encyclopedic recall of documented knowledge, but it’s still poor at architecting solutions and designing elegant systems on its own without proper guidance and prompting. Tightly encapsulated algorithm questions don’t expose these skills.
Real World Problems Over Puzzles
Leetcode-style problems will certainly already be documented in AI training data. Instead of reusing common problems, start keeping tabs on the real world problems you encounter at work and look for ones that are encapsulated enough to fit in an interview. Solutions that were hard to find will demand stronger problem solving skills from candidates, and AI will be less likely to solve them easily.
Your utility functions are probably a good place to look for examples of real programming problems your team had to solve without a readily available library, because the problem was either too obscure or too use-case specific.
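As a hypothetical illustration (not one of my actual questions): a small function that splits a sorted event log into sessions separated by idle gaps. It’s too use-case specific for a stock library call, but small enough to write and discuss in an interview.

```typescript
// Hypothetical utility-style interview problem: group sorted timestamps into
// sessions, starting a new session whenever the gap exceeds maxGapMs.
function splitIntoSessions(timestamps: number[], maxGapMs: number): number[][] {
  const sessions: number[][] = [];
  for (const t of timestamps) {
    const current = sessions[sessions.length - 1];
    if (current && t - current[current.length - 1] <= maxGapMs) {
      current.push(t); // close enough to the previous event: same session
    } else {
      sessions.push([t]); // first event, or the idle gap was exceeded
    }
  }
  return sessions;
}

// Example: two sessions separated by a 10-minute idle gap (times in ms).
console.log(splitIntoSessions([0, 30_000, 60_000, 660_000, 690_000], 120_000));
// => [[0, 30000, 60000], [660000, 690000]]
```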
Debugging Style Questions
Debugging-style questions are also effective at testing real problem-solving skills. Take a working program, introduce unique bugs, and ask the candidate to fix it so it complies with the behavior requirements you provide. By default, AI is not hooked up to the output of your programs, which makes it near impossible for it to solve this kind of question if you design the bugs so that finding them requires actually observing the program’s behavior. Without iterating on the program’s output or using a debugger, an AI will have a hard time narrowing down the bugs. Even if AI gets advanced enough to take live screen-grab input and debug a running program, the way it would do so would be extremely unnatural. If a candidate seems to magically solve multiple bugs without iteration, that’s a huge red flag.
As an example, I’ve used a simple whack-a-mole React app with common React mistakes introduced into the components, hooks, effects, and game loop. While an AI might be able to spot those issues, it would not be able to do so naturally, by iteratively testing and interacting with the game, without an elaborate setup. A candidate solving these problems will do so by identifying specific issues and then analyzing the program state with a debugger.
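One hypothetical example of the kind of mistake that fits (simplified well below a full game): a stale closure in an interval-driven effect, which freezes the score until the candidate watches it misbehave and traces why.

```tsx
import { useEffect, useState } from "react";

// Hypothetical planted bug: the interval callback closes over the `score`
// from the first render, so it always computes 0 + 1 and the display
// sticks at 1 instead of counting up.
function ScoreTicker() {
  const [score, setScore] = useState(0);

  useEffect(() => {
    const id = setInterval(() => {
      setScore(score + 1); // bug: stale `score`, frozen at 0 in this closure
    }, 1000);
    return () => clearInterval(id);
  }, []); // empty deps: the effect runs once and never sees new state

  return <p>Score: {score}</p>;
}

export default ScoreTicker;
```

The fix is the functional update form, setScore(s => s + 1); what matters in the interview is watching the candidate observe the stuck score and reason backward from it.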
Go Deep on Past Work
Learning about a candidate’s prior accomplishments is also a great way to find out what they’re capable of solving, but unless you conduct this kind of conversation properly, it ends up turning into uninformative buzzword soup. The central failing is focusing on quantity over quality. When learning about someone’s prior work, it’s tempting to try to cover a bunch of projects they’ve worked on and their results, but going broad makes it harder to tell whether they have legitimate skills.
The best information I’ve been able to extract from these questions has been from doing a deep dive into a specific difficult problem they’ve solved. I’ll ask them to explain a specific project that they are particularly proud of or found especially challenging. It takes a lot more focus to dive into a tough technical challenge and properly understand their solution, especially when it’s something done in a different domain than you’re used to. It’s tempting to accept the high level explanation and move on, but it’s important to dig deep and properly understand their thought process and solution.
Understanding what they did will expose a lot of very important aspects about the candidate. Can they take on complex problems? How deep can they dig into hard technical spaces? Are they capable of applying those solutions in practice? Do they do a good job at explaining their work to other people?
It’s important to follow your genuine curiosity instead of using scripted questions. If you hear buzzwords or concepts you’re not familiar with, ask what they mean and keep asking until it’s clear. If a candidate can teach you something complex, that’s revealing of their expertise. When someone really understands their work, they can explain it clearly and go deep on the projects they’ve worked on. When the explanation stays high level, it’s hard to tell what they actually did themselves versus what was part of a larger project or handled by the rest of the team. Asking for lower-level details will expose how much they truly know and understand.
Conversational, context-heavy questions, with follow-ups guided by your natural curiosity, will expose the true knowledge of the candidates you’re interviewing, and that’s something AI can’t effectively replicate.
Focus on Process, Not Results
With all of these approaches, it’s important to focus on the process, not the result. Encourage discussion, ask about their thought process, and have them narrate what they’re thinking. Don’t be a blank slate and just observe. Get involved, encourage them to ask questions, and make the format more collaborative with open-book question and answer sessions. In addition to being harder to game, these approaches put developers on a more level playing field because they aren’t standard puzzle questions that reward whatever a candidate happens to have memorized.
Developer Strengths are AI Weaknesses
AI is getting more and more capable, but it is still not a replacement for highly skilled developers with strong problem solving, software design, and debugging skills. If it gets there, all office jobs will be gone, not just programming. Testing against AI weaknesses not only prevents cheating, it also finds you a better team. You’re not looking for someone who copy-pastes requirements into an LLM; you want someone who can do things the LLM cannot.
We should also consider embracing AI’s role and adapting the interview process to suit it. AI is getting used more often on the job, so having AI coding skills isn’t a bad thing. Interviews could focus more on explaining code and discussing what it does and how it could be improved. Talking about code naturally is very hard to do if you’re being fed information instead of speaking from your own understanding. And if you invite candidates to work with AI during the interview, it becomes harder to abuse it to gain an advantage over other candidates.
Learning to solve unique problems and thoughtfully iterating on them to find more elegant implementations is a vital skill on the job, not easy to fake, and something AI is not good at. Dump the leetcode questions, encourage discussion, and focus on real world skills.