A blind, automated evaluation such as a structured AI interview is a strong first step to reduce human bias in initial screening. Then follow with a structured evaluation process that helps interviewers stay objective.
A solid candidate evaluation form is one of the simplest ways to improve hiring quality. It gives interviewers a shared structure, helps you compare candidates on evidence rather than instinct, and protects the candidate experience by keeping outcomes timely and transparent.
A candidate evaluation sheet is a single source of truth that captures how interviewers assess skills, behaviours, and values against a role profile. It usually includes a rubric, anchored questions, numeric ratings, and space for short notes that quote what the candidate said or did. Consistent use reduces unconscious bias, improves the quality of discussion, and speeds up offers without sacrificing fairness.
Before you jump to templates, it helps to align on what good looks like.
You can lift any of the templates straight into your ATS or into Excel or Google Sheets. Use one form per interviewer, then combine into a single view for the panel.
Use for phone screens or first interviews.
Candidate:
Role:
Interviewer:
Stage: First in-person or phone interview, Task review
Criteria and rubric (1–5)
Behavioural prompts
Ratings
Evidence notes
Red flags or concerns
Accommodation provided
Overall recommendation: Hire, Hold, No hire
One-sentence rationale:
Use this to standardise second-round or panel interviews.
Stage: Panel interview
Weighting: Problem solving 30 per cent, Collaboration 25 per cent, Role knowledge 25 per cent, Values 20 per cent
| Criterion | Anchor for 1 | Anchor for 3 | Anchor for 5 | Score |
| --- | --- | --- | --- | --- |
| Problem solving | Jumps to solution without clarifying | Breaks problem into parts, tests assumptions | Builds options, quantifies impact, selects best route | |
| Collaboration | Talks about “I” only | Shares credit, uses feedback | Shows conflict resolution, proactive support across functions | |
| Role knowledge | Vague, general terms | Working knowledge, some gaps | Confident detail, up to date on tools and trends | |
| Values and motivation | Misaligned drivers | Basic alignment | Clear alignment to mission and ways of working | |
Task evidence
Panel notes
Decision and rationale
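The weightings in the rubric above combine the four criterion ratings into one overall score. A minimal sketch in Python, using the stated weights and illustrative example ratings:

```python
# Weighted panel score: each criterion is rated 1-5 and multiplied by
# the weights from the rubric above (30/25/25/20 per cent).
# The criterion keys and example ratings below are illustrative.
WEIGHTS = {
    "problem_solving": 0.30,
    "collaboration": 0.25,
    "role_knowledge": 0.25,
    "values_motivation": 0.20,
}

def weighted_score(ratings: dict) -> float:
    """Combine 1-5 ratings into a single weighted score on the same 1-5 scale."""
    if set(ratings) != set(WEIGHTS):
        raise ValueError("ratings must cover every weighted criterion")
    if not all(1 <= r <= 5 for r in ratings.values()):
        raise ValueError("each rating must be between 1 and 5")
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

ratings = {
    "problem_solving": 4,
    "collaboration": 3,
    "role_knowledge": 5,
    "values_motivation": 4,
}
print(round(weighted_score(ratings), 2))  # 4.0
```

Because the weights sum to 1, the result stays on the familiar 1 to 5 scale, so panels can compare it directly against individual criterion anchors.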
Use post-interview to measure the candidate experience.
Send it shortly after the interview, giving candidates time and space to respond. Keep feedback anonymous to protect candidate privacy and encourage honesty.
Ask 3 quick questions on a 1–5 scale:
Add one open question:
Clear rubrics are the difference between opinion and evidence. Keep them short and behaviour-based.
Include two or three sample notes to coach interviewers on good evidence capture: quote a sentence the candidate used, or list the concrete steps they took, rather than summarising with adjectives.
Role-specific forms help interviewers probe the right work. Adjust prompts and tasks to fit.
Top criteria: Service recovery, pace, reliability, teamwork.
Task: Prioritise five stockroom tasks for the last 30 minutes of shift and explain the order.
Prompt: A queue forms and stock is low. What is your first move and why?
Top criteria: Problem solving, code quality, collaboration, learning mindset.
Task: Short refactor or debugging exercise with unit tests.
Prompt: Walk me through a recent system you redesigned. What trade-offs did you consider?
Top criteria: Empathy, written clarity, resilience, product learning.
Task: Draft a short response to a delayed order with two policy constraints.
Prompt: Tell me about a time you turned a frustrated customer into a promoter.
These examples double as a candidate evaluation sample set that teams can iterate on.
A little process discipline goes a long way.
Structured forms help reduce unconscious bias and affinity bias by keeping assessors focused on job-related evidence. Use consistent questions, a shared rubric, and documented decision criteria. For inclusive hiring, publish your adjustments process on the careers page, and provide alternatives where appropriate.
You can run a candidate interview evaluation form in Excel or Google Sheets with data validation for 1–5 scores, drop-downs for stage and recommendation, and basic conditional formatting to flag outliers. For panels that handle multiple requisitions, store templates centrally in your applicant tracking system and pre-fill role criteria to save time.
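The conditional-formatting rule described above, flagging ratings that sit far from the panel average, can also be expressed in a few lines of Python. The 1.5-point threshold and the interviewer names are illustrative assumptions, not a standard:

```python
from statistics import mean

def flag_outliers(scores: dict, threshold: float = 1.5) -> list:
    """Return interviewers whose 1-5 rating diverges from the panel mean
    by more than `threshold` points (1.5 is an illustrative default)."""
    avg = mean(scores.values())
    return [name for name, s in scores.items() if abs(s - avg) > threshold]

# Example panel: two aligned ratings and one that warrants a discussion.
panel = {"Interviewer A": 4, "Interviewer B": 4, "Interviewer C": 1}
print(flag_outliers(panel))  # ['Interviewer C']
```

An outlier is not a wrong answer; it is a prompt for the panel to compare evidence notes before averaging scores.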
For larger teams and high-volume roles, consider using AI tools for bulk candidate evaluation at the first mile. Sapia.ai’s structured, mobile AI interview produces explainable scores against your rubric and integrates with interview scheduling, which keeps candidates engaged and the hiring team focused on decisions rather than administration.
Keep the scoreboard small and weekly.
These metrics help you develop better prompts, empower interviewers, and align hiring with real outcomes.
Paste this into your ATS or document editor and customise the criteria to your context.
Header
Candidate, Role, Interviewer, Stage, Date
Criteria and weightings
Criterion 1 [weight]
Criterion 2 [weight]
Criterion 3 [weight]
Criterion 4 [weight]
Criterion 5 [weight]
Prompts and evidence
Q1, Rating [1–5], Evidence
Q2, Rating [1–5], Evidence
Q3, Rating [1–5], Evidence
Task, Rating [1–5], Evidence
Red flags
Accommodation provided
Overall recommendation: Hire, Hold, No hire
Rationale: one sentence that references evidence
A clear candidate evaluation form makes hiring decisions faster, fairer, and easier to defend. Start with the outcomes that define success in the role, turn them into structured prompts and a simple rubric, and train interviewers to capture short, verbatim evidence. Use the same form at each stage, then review a small set of metrics to learn and improve.
If you want to see how a structured, mobile-first first mile can plug directly into your forms, book a Sapia.ai demo. You will keep people in charge of decisions, while candidates get a clear, consistent process from first interview to offer.
It standardises how interviewers score skills, behaviours, and values. You get consistent data that speeds the final decision and improves fairness.
Keep one core template but tweak prompts and weightings for CV screen, interview, and task review. Use a resume evaluation form for the screen, then a behavioural and task form for interviews.
Four to six. Go deeper with better prompts and a task rather than adding more checkboxes.
Use 1–5 for behaviours and 1–4 for skills; the even scale removes the neutral midpoint and reduces fence-sitting. Always include behavioural anchors.
Sapia.ai can run the structured first interview, generate explainable scores aligned to your rubric, and handle interview scheduling. You still review the evidence and make the decision.