Open-ended Interview Questions

One of my friends linked to Alex Yumashev’s essay about Jitbit’s SQL interview questions.

The thrust of the essay is certainly true: asking "trivia" questions in interviews isn't a great practice, despite its wide adoption. The set of questions asking for target queries against a small data model seems quite good for hands-on candidates. (That is, candidates who are actually writing code rather than designing things.)

The essay suggests that asking "open-ended questions" is the way to go for non-query questions — questions that aren't actually about writing a query — in the context of interviewing a database professional. While I agree that asking open questions is a good idea, I think the advice given in the essay is rather shallow.

Trivia questions aren't so bad in a screening situation — where we want some filter for candidates. Ask a bunch of them, see how many the candidate gets, and decide whether they should proceed to a more intensive loop. Oh-for-six? Probably not. Four or five good answers? All six? Then, if the questions are sensible, it's likely the candidate will do well in the next interview phases.

When I interviewed at Google, the screener asked me a question about Linux signals. The question ended up actually being about the signal numbers, not even their names. Maybe there are people who know that SIGINT is 2 and SIGKILL is 9 and SIGTERM is 15, but I can’t remember that. And I can barely imagine a scenario where not knowing those numbers is an indicator that I won’t be successful at a particular role.
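For what it's worth, those numbers are trivially discoverable at a keyboard, which is part of why memorizing them is a poor proxy for competence. A quick sketch in Python (assuming a Linux machine, where these values hold) shows them:

```python
# Signal numbers on Linux -- the kind of trivia the screener asked about.
# (The numeric values are platform-dependent; these are the Linux ones.)
import signal

for sig in (signal.SIGINT, signal.SIGKILL, signal.SIGTERM):
    print(f"{sig.name} = {sig.value}")
# On Linux: SIGINT = 2, SIGKILL = 9, SIGTERM = 15
```

Anyone who needs the numbers in practice runs `kill -l` or looks in `signal.h`; recalling them cold tells you almost nothing about the candidate.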

(Of course, the context here is database work — and the signals question is a lot more about systems programming. Maybe a similar question in the database genre is about exactly how many bytes are on each page in a certain DBMS, or what the default precision and scale of a particular datatype might be.)

On the other hand, if we ask about informal definitions (“describe normalization in your own words”) or enumerations (“tell me all the join operators you know”) then we can get information that’s appropriate for screening a candidate. Those results are easily evaluated, and indicate a level of knowledge or experience that is relevant to the actual role.

In more involved interviews, trivia questions aren't so useful because they probe just a tiny area of the candidate's knowledge. On the other hand, the interviewer's responsibility is a lot more than just "shut up and listen", and a good interview question is a lot more than just an open-ended conversation starter.

It's possible to ask a question that's simply too open-ended: too unspecific, with too much room for interpretation. Such a question is going to start a conversation — about nothing in particular. The candidate might have a hard time deciding which story to tell, how to frame that story, or what details the story should or should not contain. (We've also got to worry about bias against people who aren't extroverts, or who simply aren't great storytellers.)

The interviewer, meanwhile, has to interpret the results of the question and use that information to make a hiring decision. (That is really the interviewer's job: to collect that information and interpret it. Repeatably, and without bias.) Since the question is unstructured, the answer is going to be unstructured. What, specifically, is the interviewer looking for in the answer?

Even with a tighter but still open-ended question, the interviewer needs to have some framework for evaluating the answer. They have to think about when they might step in to guide the conversation, what they’ll try to direct the candidate to do or say, or how to hint them if they get a bit stuck.

The interviewer can't simply "shut up", then. They have to participate and interact. Say the candidate claims they invented some technology, or were pivotal in its development. An interviewer almost assuredly should not accept that at face value, and should instead push a bit. Did they really invent it? Or were they just in the room when someone else did? Did they really make decisions critical to the success of the implementation, or were they just following orders?

There is a lot of talent in framing questions and in evaluating the answers. Doing it right involves soft skills, like making the candidate comfortable and open to talking. It also involves technical skills in evaluating the answer, particularly in the context of the candidate’s experience. And it involves true people skills in eliminating bias.

Repeatability is also important. Say we interview two candidates, and both are asked what they did to solve a database performance problem. How do we compare their answers? With a closed question, evaluating the answers is usually pretty simple. Evaluating an essay or story or explanation is much more difficult and nuanced, and therefore hard to compare objectively. Which of the two candidates gave the better answer when the answers are amorphous and not normalized?

Coming up with good open-ended questions isn’t so hard … until you also consider how to evaluate and structure them. Most teams I’ve worked with are pretty bad about evaluating answers, and are doing a disservice to themselves.





