Stop Quizzing Senior Engineers — Start Talking to Them
I have failed interviews. Multiple times. Not because I did not know my craft — I have been shipping production frontend for well over a decade. I failed because I could not recite the exact behavior of Promise.allSettled under pressure, or because I blanked on a tree traversal algorithm I had not hand-written since university. Meanwhile, the things I actually do every day — making architecture decisions, untangling legacy code, mentoring developers, designing systems that survive contact with real users — none of that was being tested.
This article is about what a good senior interview looks like. I know because I experienced one, and it changed everything.
Context: The Interview Problem Nobody Talks About
There is an uncomfortable truth in our industry. Many experienced engineers — people who have built and maintained complex production systems for years — routinely fail technical interviews.
Not because they are bad engineers. Because the interview tests a different skill than the job requires.
I work primarily with TypeScript and React. Promises? I use them every day — through async/await, through React Query, through framework abstractions. But when someone puts me in front of a whiteboard and says "chain three promises, handle partial failures, and explain the microtask queue," I stumble. Not because I do not understand the concept. Because I do not carry that specific API surface in my active memory. I have not needed to.
Tree traversals and graph algorithms? Honestly — I do not enjoy them and I rarely encounter them in my daily work. When I need something like that, I reach for a library or write a straightforward recursive approach. I do not sit and think about BFS vs DFS complexity tradeoffs while building a dashboard for an enterprise client.
Logical puzzles? Same story. I can solve them given time and a calm environment. Under interview pressure, with someone watching me and a timer ticking? I freeze. And I know I am not alone.
Here is what bothers me most: I have seen engineers who drill LeetCode for months pass these interviews with flying colors, then struggle to make a single meaningful architecture decision in their first sprint. The interview selected for the wrong thing. It tested preparation discipline, not engineering capability.
I am not saying algorithmic knowledge is useless. I am saying that for a senior role, it should not be the primary filter. And yet — at most companies — it still is.
The Interview with John
Let me tell you about the best technical interview I have ever experienced.
It was at a large Canadian company. Well-known products — automotive industry. The kind of place where the frontend work is complex, the user base is massive, and the decisions you make on Monday are still affecting the product two years later.
My interviewer was John. I remember his name because what happened in that interview stuck with me permanently. I still feel a kind of gratitude toward him years later — not because he went easy on me, but because he showed me what a respectful, effective technical interview looks like.
We were scheduled for ninety minutes. We went over two hours. I did not notice the time passing. Not once did I feel like I was being tested. It felt like I was having a deep technical conversation with a colleague I respected.
John did not ask me what a closure is. He did not ask me to reverse a linked list. He did not ask me to explain the event loop.
Instead, he asked things like:
- "You join a team and inherit a React app with 200+ components and no design system. What is your first week like?"
- "The product team wants to support three different brands from the same codebase. How would you approach the frontend?"
- "You notice the CI pipeline takes 40 minutes and developers are complaining. Where do you start?"
- "Two teams depend on the same component library but ship on different schedules. How would you handle versioning?"
- "A junior on your team writes overly abstract code — creates a generic wrapper for everything. How do you handle that conversation?"
Every answer I gave led somewhere. John would nod and then say, "Okay, but what if the team pushes back on that approach?" or "What if the deadline is in two weeks?" or "What happens when the third team joins and they use a different state management solution?" He was not looking for a specific answer. He was mapping how I think.
I got the job. I stayed for about three years. The team was strong, the work was meaningful, and as far as I could tell, the company did not regret the hire. That interview — with no algorithms, no trivia, no trick questions — predicted my actual job performance far better than any whiteboard session ever could.
That experience left me with a conviction: you learn more about a senior engineer in thirty minutes of genuine conversation than in two hours of trivia.
Why Trivia-Based Senior Interviews Are Fundamentally Broken
Let me be specific about the failure modes.
The wrong skill is being tested. When you ask a senior engineer to write a debounce function from memory, you are testing recall. But a senior's actual value lies in knowing when to debounce, where it matters for user experience, and whether the framework already provides a better abstraction. The API signature is a Google search away. The judgment is not.
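To make the point concrete, here is roughly how small the recall part is. A minimal debounce sketch in TypeScript, the kind of thing that is one search away:

```typescript
// Minimal debounce: delay calling `fn` until `waitMs` ms have passed
// without another call. Each new call cancels the pending one.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  waitMs: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}
```

None of the hard part lives in this code. The hard part is deciding that the search box should debounce while the autosave should throttle, picking a delay that feels responsive, and knowing whether the framework you already use ships a hook for this.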
False negatives destroy your pipeline. The most experienced candidates — the ones who have been building real systems instead of doing interview prep — are the ones most likely to fail trivia questions. They have been too busy shipping code to memorize it. You are systematically filtering out the people you want most.
False positives are expensive. A candidate who spent three months on an interview prep platform will ace your quiz. They might know every closure edge case, every prototype chain gotcha, every this binding trap. But put them in front of a real architecture decision — "should we build a micro-frontend or keep the monolith?" — and they might have nothing to say. That mismatch costs you months of onboarding and potentially a bad hire.
Stress is not signal. There is a pervasive belief that "if they can perform under pressure, they will be good in production." This is wrong. Interview stress and production pressure are fundamentally different experiences. In production, you have context, tools, colleagues, and time to think. In an interview, you have a stranger staring at you while you try to remember if Array.prototype.flat takes a depth argument. These are not comparable situations. You are measuring anxiety tolerance, not engineering skill.
It does not match how we actually work. In my daily work, I look things up constantly. I read documentation. I discuss approaches in Slack. I use AI to scaffold boilerplate. I do code reviews where I catch things I would not have remembered in an interview. The isolated, no-tools, no-references interview environment tests a scenario that literally never occurs in the actual job.
The Conversational Interview: What It Looks Like in Practice
The alternative is not "just chatting." A conversational interview has structure, evaluation criteria, and clear goals. But it evaluates what matters: how the candidate thinks about problems, makes decisions, and handles complexity.
Let me walk through two complete examples. These are realistic — the kind of conversations you could have with a senior frontend candidate next week.
Example 1: The Innocent Question That Reveals Everything
Imagine you start the interview with this:
"Hey, so — your team has just been told that you need to support dark mode across the entire product. How do you approach this?"
This sounds almost too simple. Dark mode. Every junior knows about prefers-color-scheme. But this question is a depth charge. Watch what happens when you follow the thread.
A junior will talk about CSS variables and a toggle button. A mid-level will mention design tokens, maybe theming with styled-components or Tailwind.
A senior? A senior will start asking questions back:
- "How big is the product? How many pages, how many components?"
- "Is there a design system, or are styles scattered?"
- "Does the design team have dark mode designs ready, or are we deriving them?"
- "What is the CSS architecture — utility-first, CSS modules, CSS-in-JS?"
- "Are there third-party components that do not support theming?"
- "Is this a gradual rollout or a big-bang release?"
Now you are in a real conversation. Follow up with:
- "The design team says they will provide dark mode tokens, but only for the core components. The rest — about 60% of the UI — you have to figure out." This tests whether they can handle ambiguity and make pragmatic decisions about imperfect situations.
- "One of the senior developers on the team insists on doing a full CSS refactor before implementing dark mode. He says the current styles are too messy. What do you do?" This is no longer a technical question — it is a leadership and collaboration question. How do they handle disagreement? Do they see the tradeoff between ideal and practical?
- "Product says they want to A/B test dark mode with 10% of users first." Now you are in feature flags, runtime theming, and state management territory. Do they think about persistence? Server-side rendering? Flash of unstyled content?
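That last follow-up has a concrete core a strong candidate might sketch out loud. As a minimal illustration — the resolution order and the `resolveTheme` helper are hypothetical, not a prescribed answer — an explicit user choice could win over the experiment flag, which wins over the OS preference:

```typescript
type Theme = "light" | "dark";
// What we persisted for this user, if anything.
type StoredPref = Theme | "system" | null;

// Hypothetical resolution order: explicit user choice first,
// then the A/B experiment gate, then the OS-level preference.
function resolveTheme(
  stored: StoredPref,
  systemPrefersDark: boolean,
  inDarkModeExperiment: boolean
): Theme {
  if (stored === "dark" || stored === "light") return stored;
  // The 90% control group never sees dark mode, regardless of OS setting.
  if (!inDarkModeExperiment) return "light";
  return systemPrefersDark ? "dark" : "light";
}
```

Whether the candidate then mentions running this before first paint — for example via a small inline script that sets a `data-theme` attribute on `<html>` — is exactly the flash-of-unstyled-content awareness the follow-up is probing for.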
One question. Fifteen minutes. You have learned more about this person's seniority than any algorithm challenge could reveal. You have seen how they decompose problems, how they handle incomplete information, how they think about people and process alongside code.
Example 2: The Deceptively Simple Scenario
Here is another one:
"A product manager comes to you and says: users are complaining that the app feels slow. No specific page, just... slow. What do you do?"
Again — sounds easy. But a senior engineer's answer to this question reveals an enormous amount about their experience.
The junior answer: "I would use Lighthouse and fix the scores."
The mid-level answer: "I would profile the app, check bundle size, look at network requests."
The senior answer starts with: "Before I touch any code, I need to understand what 'slow' means to the users."
- "Is it initial load time? Navigation between pages? Interaction responsiveness? All of the above?"
- "Do we have any metrics — Core Web Vitals, custom performance tracking, session recordings?"
- "Is it slow for all users or specific segments — mobile, specific regions, specific browsers?"
- "When did it start feeling slow? Was there a recent deployment that correlates?"
Follow up:
- "Turns out it is mainly mobile users in regions with slower networks. Desktop users are fine." This pushes toward real-world constraints — code splitting, server-side rendering, image optimization, CDN strategy. Do they think about the network layer, or only about JavaScript?
- "The VP of Engineering wants a fix in two weeks because there is a board meeting." Now you are testing prioritization under pressure. Do they panic and promise everything? Do they push back with data? Do they suggest a phased approach — quick wins first, structural changes later?
- "The backend team says the APIs are fine and the problem is purely on the frontend." How do they handle cross-team accountability? Do they accept that at face value, or do they know how to prove or disprove it with data?
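"Prove or disprove it with data" has a concrete shape. As an illustrative sketch — the segment names and sample values are invented — a senior might pull field samples of Largest Contentful Paint and compare the 75th percentile per segment, since p75 is the percentile Core Web Vitals reporting conventionally uses:

```typescript
interface Sample {
  segment: string; // e.g. "mobile" or "desktop" — illustrative labels
  lcpMs: number;   // Largest Contentful Paint for one page view, in ms
}

// Group field samples by segment and report the 75th percentile of each,
// using the nearest-rank method on the sorted values.
function p75BySegment(samples: Sample[]): Map<string, number> {
  const bySeg = new Map<string, number[]>();
  for (const s of samples) {
    const arr = bySeg.get(s.segment) ?? [];
    arr.push(s.lcpMs);
    bySeg.set(s.segment, arr);
  }
  const out = new Map<string, number>();
  bySeg.forEach((values, seg) => {
    values.sort((a, b) => a - b);
    const idx = Math.ceil(values.length * 0.75) - 1;
    out.set(seg, values[idx]);
  });
  return out;
}
```

If mobile p75 sits far above desktop, the conversation with the backend team starts from numbers instead of blame — and the same breakdown by region or browser tells you whether the fix is code splitting, CDN strategy, or something else entirely.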
Two questions. Two conversations. Thirty minutes total. You now have a detailed picture of how this person operates in the real world. You know their technical depth, their communication style, their leadership instincts, and their ability to work with ambiguity. No whiteboard required.
What Makes These Questions Work
Notice what both examples have in common:
- The initial question is approachable. It does not trigger "I need to remember the right answer" anxiety. It triggers "let me think about this problem."
- Depth comes from follow-ups, not from the question itself. The interviewer's skill matters. You need to know where to push.
- There is no single correct answer. Multiple good approaches exist. What you evaluate is reasoning, awareness of tradeoffs, and the ability to adapt when the situation changes.
- They test real-world skills. Ambiguity handling, stakeholder management, prioritization, team dynamics — these are what senior engineers actually deal with every day.
The AI Question Every Senior Interview Needs in 2026
Let me say something that might be controversial: if you are interviewing a senior engineer in 2026 and you do not ask about AI, you are missing a critical data point.
Here is the reality. I know engineers — good ones, experienced ones — who write maybe ten lines of code by hand per week. The rest is orchestrating AI agents. They prompt, they review, they refine, they iterate. And they are productive. They ship features, maintain quality, and make good architectural decisions. They are still seniors — arguably more effective seniors than before.
I also know engineers who barely use AI. They write everything by hand, they are meticulous, and they produce excellent work. Also valid.
The interesting question is not "do you use AI?" It is everything that comes after.
But here is the problem: candidates will not talk about this honestly unless you make it safe. There is still a stigma. Engineers worry that admitting heavy AI usage will make them look lazy or less skilled. So they downplay it. And you miss the real picture.
How to ask:
- "Walk me through how you built the last feature you shipped. Tools, process, everything." This is open-ended enough that they can naturally mention AI without feeling judged.
- "When you use AI-generated code, what is your review process? What do you look for?" This tells you whether they are using AI thoughtfully or blindly.
- "Have you seen AI-generated code cause a problem in production? What happened?" A senior who uses AI regularly will have a story. And how they handled it tells you about their quality standards.
- "A developer on your team submits a PR that is entirely AI-generated. They understand it, but they did not write any of it by hand. How do you feel about that as a reviewer?" This tests their philosophy about code ownership, maintainability, and team standards.
What you are really evaluating is judgment. Does this person use AI as a power tool while maintaining engineering responsibility? Or are they outsourcing their thinking? There is a big difference, and a conversational interview is the only format that can surface it.
Where Conversational Interviews Break Down
I am not going to pretend this approach is perfect. It has real limitations.
It requires skilled interviewers. Running a conversational interview well is hard. You need someone who knows the domain deeply enough to improvise follow-ups, challenge weak answers, and recognize strong ones even when they differ from what they expected. A trivia quiz can be run by anyone with an answer sheet. A conversational interview needs a senior engineer who is also a good communicator. Not every team has that.
Consistency is genuinely difficult. When different interviewers ask different questions in different ways, comparing candidates becomes harder. Two candidates might both be strong, but the scores look different because one interviewer went deep on architecture while another focused on team dynamics. You need a shared rubric with explicit evaluation dimensions to mitigate this.
It takes more time. A meaningful conversational interview needs 60 to 90 minutes. You cannot rush it to 30. For companies doing high-volume hiring, that is a real constraint.
Charisma bias is a real risk. Open-ended conversations naturally favor people who are articulate, confident, and personable. Brilliant engineers who are introverted, or who communicate better in writing than in speech, or who think slowly and carefully, can get undervalued. Your rubric needs to account for this — evaluate the substance of answers, not the delivery.
Some candidates genuinely prefer structured tests. Not everyone thrives in ambiguity. Some great engineers feel more comfortable with a clear problem, a clear solution, and a clear evaluation. Dismissing their preference entirely is its own form of bias.
Common Mistakes When Running Conversational Interviews
Asking scenario questions but having a "correct" answer in your head. If you already decided that the right approach to the dark mode question is "use CSS custom properties with a ThemeProvider," you are just running trivia with extra steps. The goal is to evaluate reasoning, not to check if they chose the same solution you would.
Being too casual. "Let's just chat" produces zero usable signal. You need a plan: which scenarios, which follow-ups, what evaluation criteria. The conversation should feel informal. The evaluation should be rigorous.
Not going deep enough. The first answer to any question is the surface layer. The signal is in the second and third follow-ups. "Why that approach?" "What could go wrong?" "What would you do differently with more time?" If you accept the first answer and move on, you are wasting the format.
Failing to calibrate. Three interviewers running three different conversational interviews without shared criteria will produce three incomparable evaluations and an argument in the debrief meeting. Calibration sessions — where interviewers evaluate the same recorded interview and compare notes — are not optional.
Testing outside the candidate's experience. If someone has spent their career in React and you ask them to design a distributed backend system, you will get a weak answer that tells you nothing. Tailor your scenarios to the role and the candidate's actual background.
Treating AI usage as a negative signal. This one deserves its own bullet point because it is still happening everywhere. If a candidate openly says "I use AI for 80% of my initial code generation," the correct follow-up is "how do you ensure quality and maintainability?" Not a raised eyebrow.
When NOT To Use Conversational Interviews
This format works best for senior, staff, and principal-level roles where judgment, systems thinking, and leadership matter more than implementation speed.
For junior roles, you need to verify fundamental skills. A coding exercise — even a simple one — is reasonable. You need to know they can write a function, debug an error, use basic data structures. A conversational interview would not produce enough signal at this level because junior candidates may not yet have enough real-world experience to draw on.
For mid-level roles, a hybrid works well. A short practical task — maybe a small feature implementation or a debugging exercise — followed by a conversational segment about their approach and decisions.
For high-volume pipelines processing hundreds of applicants per week, conversational interviews do not work for initial screening. Use them in final rounds where signal quality matters most and the candidate pool is already filtered.
And if you honestly cannot train your interviewers to run these well — if they will default to "just chatting" without structure or evaluation criteria — then a structured technical assessment with a clear rubric will give you more consistent, if less insightful, results. A mediocre conversational interview is worse than a well-run structured one.
How I Apply This
When I interview senior candidates now, the first thing I say is: "This is not a test. I want to understand how you think about problems." People visibly relax. The dynamic shifts from "exam" to "conversation." And that shift is where the good signal lives.
I pick two or three scenarios from projects I have actually worked on — problems where I already know the trade-offs, the failure modes, the "it depends" factors. This lets me push on follow-ups authentically instead of working from a script.
I pay close attention to how candidates handle what they do not know. Do they acknowledge the gap and think through it? Do they ask clarifying questions? Do they try to bluff? That single behavioral signal — comfort with uncertainty — correlates more strongly with real-world senior performance than any technical test I have seen.
I always ask about AI. Always. The answer tells me whether they are adapting to how our craft is evolving or clinging to how it used to be. Both types of engineers can be effective, but I want to know which one I am hiring.
The best senior hires I have been part of all came through conversational interviews. The worst mismatch I ever witnessed was someone who aced every technical quiz but could not have a productive discussion about trade-offs in their first sprint. They knew all the answers. They just could not apply them to real problems.
Practical Recommendations
If you are an interviewer or hiring manager:
- Build a library of 5–8 scenario questions drawn from real challenges your team has faced. Rotate them across interviews.
- Write a rubric with clear dimensions: problem decomposition, trade-off awareness, communication clarity, technical depth, collaboration instincts, leadership signal. Score each dimension independently.
- Run calibration sessions at least quarterly. Have two or three interviewers evaluate the same mock candidate (or a recorded session) and compare scores. Discuss disagreements.
- Let interviews run long if the conversation is productive. Cutting off a great discussion at exactly 60 minutes because the calendar says so is counterproductive.
- Ask about AI openly. Normalize it. The candidates who are honest about their AI workflow are giving you more useful information than those who claim they write everything by hand.
If you are a candidate:
- When you do not remember an exact API or detail — say so, explain the concept, and describe how you would find it. This is what you would do at work. A good interviewer will not hold it against you. A bad one will, and that tells you something about the company.
- Prepare stories, not answers. Think about real projects: problems you encountered, decisions you made, trade-offs you accepted, things you would do differently. Stories are memorable and specific. Rehearsed definitions are not.
- Be honest about your AI workflow. If you use Copilot, Cursor, Claude, or any other tool heavily — own it. Explain how you review, validate, and maintain what gets generated. Hiding your actual workflow means the company is evaluating a version of you that does not exist.
- Pay attention to how you are being interviewed. If it feels like a university exam — question after question, right or wrong, no discussion — that is a signal about the engineering culture. Use that information when you decide whether to accept or decline.
- Ask your own questions aggressively. What is the team's architecture like? How are decisions made? What was the last big technical bet that did not work out? Your questions reveal your seniority more than your answers do.
Summary
The best technical interview I ever had involved zero algorithms, zero API trivia, and zero whiteboard coding. It was a two-hour conversation with a guy named John at a large Canadian automotive company, where he asked me real questions about real problems and followed every answer with "what if?" I learned more about how interviews should work from that single experience than from dozens of standard ones on both sides of the table.
Senior interviews should test judgment. They should surface how a candidate thinks about ambiguity, trade-offs, team dynamics, and the realities of production systems. Conversational formats — scenario-based discussions, simple-sounding questions with deep follow-ups, collaborative problem-solving — do this better than any quiz.
They require better interviewers. They require shared rubrics and calibration. They take more time. But the trade-off is clear: you hire people who can actually do the job, not just people who can pass the test.
In an era where AI writes a significant chunk of our code, the thing that makes a senior engineer valuable is not what they can recite from memory. It is the quality of the decisions they make. Test for that.
If you have feedback, alternative approaches, or just want to discuss this topic — feel free to reach out.