It should not be surprising then that language is also the basis of most traditional forms of personality testing.
This lexical hypothesis, the idea that the most important personality traits become encoded in everyday language, is a thesis current primarily in early personality psychology and subsequently subsumed by many later efforts in that subfield. Despite some variation in its definition and application, the hypothesis is generally defined by two postulates.
The lexical hypothesis is a major foundation of the Big Five personality traits, the HEXACO model of personality structure, and the 16PF Questionnaire, and it has been used to study the structure of personality traits in a number of cultural and linguistic settings.
Noam Chomsky summed up the power of language nicely:
“Language is a mirror of mind in a deep and significant sense. It is a product of human intelligence … By studying the properties of natural languages, their structure, organization, and use, we may hope to learn something about human nature; something significant …”
While chatbots can be programmed to answer basic questions in real time, so that your people don't need to, these are canned answers delivered through text. They lack the smarts to truly discover what your text responses say about you. The engagement between the chatbot and the individual is purely transactional.
Conversational AI is more about a relationship built through understanding, using natural language to make human-to-machine conversations more like human-to-human ones. It offers a more sophisticated and more personalized solution to engage candidates through multiple forms of communication. Ultimately, this kind of artificial intelligence gets smarter through use and connects people in a more meaningful way.
Put simply, Conversational Ai is intelligent and hyper-personalised Ai, and in the case of Sapia Labs, it is underpinned by provable and explainable science. We have already published the peer-reviewed scientific research which underpins our personality science.
The scientific paper may not make it to your reading table – although you can download it here (“Predicting job-hopping likelihood using answers to open-ended interview questions”) – but the business implications cannot be ignored.
According to one report, voluntary turnover is estimated to cost U.S. companies more than $600 billion a year, with one in four employees projected to quit and take a different job. If your turnover is even a few basis points above your industry average, leveraging conversational Ai will save your business money.
Our research used the free-text responses of 45,899 candidates who had used Sapia’s conversational Ai. Candidates had originally been asked five to seven open-ended questions about past experience and situations. They also responded to self-rating questions based on the job-hopping motive scale, a validated set of rating questions that measures one’s motive to job-hop.
We found a statistically significant positive correlation between the text-based answers and the self-rated job-hopping motive scale measure. The language-inferred job-hopping likelihood score also correlated with other attributes, such as the personality trait “openness to experience”.
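For the curious, the core of that validation step is simple to picture: correlate the model’s language-inferred score with the candidates’ self-rated scale scores. Here is a minimal sketch in Python with toy data and hypothetical column names – an illustration of the statistic, not Sapia’s actual pipeline:

```python
# A minimal sketch of a convergent-validity check: correlate a language-
# inferred job-hopping score with a self-rated scale score. The data and
# column names below are hypothetical, for illustration only.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "inferred_score": [0.42, 0.71, 0.33, 0.58, 0.65, 0.29],  # from free-text answers
    "self_rated_score": [2.1, 3.8, 1.9, 3.0, 3.4, 2.2],      # job-hopping motive scale
})

r, p = pearsonr(df["inferred_score"], df["self_rated_score"])
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
# A significant positive r is the kind of result the paper reports.
```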
This is Ai as the bridge between HR and the business. It is this kind of quantifiable business ROI that distinguishes Ai models from traditional testing.
To keep up to date on all things “Hiring with Ai” subscribe to our blog!
Finally, you can try out Sapia’s SmartInterviewer right now, or leave us your details here to get a personalised demo.
The following is an excerpt from our Talent Acquisition Transformation Guide, a comprehensive playbook to help you audit and improve your recruitment strategy.
Winning more talent means getting your team shipshape. In many organizations, the Talent Acquisition function operates in an isolated camp – no one sees or hears from you unless you have good or bad news about a particular candidate or role vacancy.
Efficiency in recruitment requires absolute alignment. Your people leaders and your executive team must be in alignment with your new strategy, because they are equally responsible for executing it. Gone are the days when, for example, marketing managers could pass a job description for a copywriter to a Talent Acquisition specialist and wash their hands of the prospecting dirty work. Now, more than ever, the hiring manager and the specialist must form a partnership, sharing the duties of advertising, promoting, vetting, interviewing and assessing. After all, candidates for said copywriter role will expect it.
To get cooperation and buy-in from your people leaders, you need to form a visible, purposeful A-team.
Your crack recruitment task force should comprise:
Once your team is formed, you need to complete a basic audit to see where your recruitment pipeline is at – and the roadblocks stopping you from securing the talent you need.
This step sounds obvious on the face of it, but it actually requires some speculation and problem-solving. Consider this simple matrix, filled in with examples – it’s a good starting point for getting alignment with the A-team on your hiring needs.
| Role | Critical skills | Priority | Existing org. strength | Applicants/candidates declined | Advertised salary | Market salary | Notes/suggestions |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Head of marketing | | Very high | Low (no marketing leadership) | 40/38 | $150k p/a | $190k p/a | |
| Software engineer | | Low | High (replacing a team of 20) | 10/10 | $120k p/a | $130k p/a | |
| Office manager | | High | Low (no office manager for ~3 months) | 0/0 | $100k p/a | $100k p/a | |
Once you’ve filled out your Talent Requirements Matrix, the next step is effective triage. Almost everyone in the A-team will already be aware of your highest hiring priorities, but by filling out this matrix, you can focus talent acquisition efforts on coming up with weird and wonderful ideas for attracting the right candidates. Times like these require outside-the-box thinking!
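To make the triage step concrete, here is a minimal sketch in Python of how you might encode the matrix and sort it. The field names mirror the table columns, while the priority ranking and decline-rate heuristic are our own illustrative assumptions, not a prescription from the guide:

```python
# A minimal sketch of triaging a hypothetical Talent Requirements Matrix.
from dataclasses import dataclass

@dataclass
class RoleRequirement:
    role: str
    priority: str      # "Very high" / "High" / "Low"
    org_strength: str  # existing organisational strength
    applicants: int
    declined: int

PRIORITY_RANK = {"Very high": 0, "High": 1, "Low": 2}

matrix = [
    RoleRequirement("Head of marketing", "Very high", "Low (no marketing leadership)", 40, 38),
    RoleRequirement("Software engineer", "Low", "High (replacing a team of 20)", 10, 10),
    RoleRequirement("Office manager", "High", "Low (no office manager for ~3 months)", 0, 0),
]

def decline_rate(r: RoleRequirement) -> float:
    # Treat roles with no applicants at all as maximally at risk.
    return r.declined / r.applicants if r.applicants else 1.0

# Triage: highest priority first; within a tier, surface the roles losing
# the largest share of candidates so the A-team can dig into why.
for req in sorted(matrix, key=lambda r: (PRIORITY_RANK[r.priority], -decline_rate(r))):
    print(f"{req.role}: priority={req.priority}, decline rate={decline_rate(req):.0%}")
```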
The Workforce Science team are on the road again!
This time, we are heading to Sydney to host a session at APS’s 12th Industrial and Organisational Psychology Conference (IOP).
IOP is Australia’s premier conference for us organisational psychologists, so it has a permanent spot in our calendars. And this year, we got extra excited when the conference was announced.
Why?
The theme of the conference is:
‘From Ideas to Implementation: Embracing the Challenges of Tomorrow’.
With a theme this relevant to our day-to-day work, we couldn’t stop ourselves from hosting a professional practice forum. The forum’s theme is what Elliot and I spend most of our time thinking about: the robots that are coming for our jobs!
It is crystal clear that there is a real need to discuss how our roles will change in the (not so distant) future.
Leading researchers from Oxford University and Deloitte estimate that machines could replace up to 35% of all job types within the next 20 years. So, we will need to find ways to coexist and work with the machines. But how?
In the forum, we will discuss our view on how the role of organisational psychologists will evolve. We will also present our thoughts on how this shift will impact us, covering both the negative and the positive aspects.
If you are attending IOP, feel free to come along and add to the discussion!
Our presentation – “The robots are coming (to help us with hiring) for our jobs” – is scheduled for Thursday 14th July at 3.30pm.
We would love to hear your thoughts on the opportunities and challenges we face, as implementation of AI gets more widely adopted.
If you’re not attending the conference, but still would like to discuss this, don’t hesitate to drop us a line on LinkedIn (Elliot Wood/Kristina Dorniak-Wall). Elliot and I are always keen to chat about it!
Hope to see you at IOP!
This came up in my feed last week, prompting me to share my own 2 cents on why machines are better at hiring decisions than humans.
Did you know that the Wikipedia list of cognitive biases contains 185 entries? This somewhat exhausting article lays out in excruciating detail biases I didn’t know could exist, and arrives at the conclusion that they are mostly unalterable and fixed, regardless of how much unconscious bias training you attend in your lifetime.
I get asked A LOT about how I can work for a company that sells technology that relies on ‘machines’ to make people decisions.
I will keep it simple … 2 reasons:
First: as per above, our biases are so embedded and invisible that mostly we just can’t check ourselves in the moment to manage them. (I would rather hire women, ideally mums, who like the same podcast series as me, and send them straight through to offer stage if they like Larry David humour.)
Second: machines can be ‘trained’ … humans can’t, at least not as easily or efficiently.
But the myriad, ever-present news articles about ‘algorithmic bias’ have lumped all machine learning into one massive alphabet soup of ‘don’t trust the machine!’
Really? Are we also biased against machines now? I saw Terminator 2 as well and worry about machines taking over the world … but that’s a massive leap from the practice of bringing objective data into the most critical decision you will make as a people leader: who to hire. The divorce rate is, for me, the proof point that humans suck at making critical people decisions.
I’ve been in the People space for a while. I was lucky enough to work with 2 organisations, BCG and the REA Group, that value their people above all else. They also value making money, and having your engineers and consultants sucked up in recruiting days and campaigns is a massive investment of your scarce and valuable capacity. I have found most companies don’t even know how much it costs to hire one person, because no one is tracking the time investment.
We are all time-poor, so we often default to hiring based on ‘pedigree’. Someone has GE on their CV? They must be great, as GE only hires great people. That’s a pretty loose, random data point for making a hiring decision.
So here is a non-data-scientist’s view of why you should trust machine learning to find the right people, and when you shouldn’t.
First, credit to this post, which helped me put this into non-tech speak:
https://medium.com/mit-media-lab/the-algorithms-arent-biased-we-are-a691f5f6f6f2
Why use machine learning at all for decision-making? Because it underwrites repeatable, objectively valid (i.e. data-based) decisions at scale.
Value to the organisation:
• Use fewer resources to hire
• Every applicant gets a fair go at the role
• Every applicant is interviewed
• Hire the person who will succeed vs someone your gut tells you will succeed
How do you ensure there is no (or limited) bias in the machine learning?
Take a look at:
– what data is being used to build the model
– what you are doing to that data to build the model
If you build models off the profile of your own talent, and that talent is homogeneous and monochromatic, then so will be the data model, and you are back to self-reinforcing hiring.
If you are using data which looks at age, gender, ethnicity and all those visible markers of bias, then sure enough, you will amplify that bias in your machine learning.
Relying on internal performance data to make people decisions is like layering bias upon bias – the same as building a sentencing algorithm with sentencing data from the US court system, which is already biased against black men.
The reality is that machine learning is, by its very definition, aiming to bias decisions; removing unwanted bias comes down to which training data you feed the machine. This means you can make sure the data you train with has no bias.
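As a minimal sketch of that first hygiene step, here is what keeping bio data out of the training set can look like in Python. The column list is hypothetical, and note that dropping columns alone does not remove proxy bias – which is why the outcome testing described next matters too:

```python
# A minimal sketch: strip protected attributes before any model training.
# Column names are hypothetical, for illustration only.
import pandas as pd

PROTECTED = ["age", "gender", "ethnicity", "name", "postcode"]  # visible markers of bias

def strip_bio_data(candidates: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the candidate data with protected attributes removed."""
    return candidates.drop(columns=[c for c in PROTECTED if c in candidates.columns])

raw = pd.DataFrame({
    "age": [29, 41],
    "gender": ["F", "M"],
    "answer_text": ["I led a team through...", "My approach to conflict..."],
})
train_ready = strip_bio_data(raw)
print(list(train_ready.columns))  # ['answer_text'] - only the text goes to the model
```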
Machine learning outcomes are testable and corrective measures remain consistent, unlike in humans. The ability to test both training data and outcome data, continuously, allows you to detect and correct the slightest bias if it ever occurs.
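One common way to run that continuous outcome test is the four-fifths (80%) adverse-impact rule from US employment-selection guidelines. A minimal sketch, with toy data and an illustrative grouping:

```python
# A minimal sketch of a continuous outcome test using the four-fifths rule.
import pandas as pd

# Toy outcome data: one row per applicant, with the model's recommendation.
outcomes = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "recommended": [1,   1,   1,   0,   1,   0,   0,   1],
})

rates = outcomes.groupby("group")["recommended"].mean()
impact_ratio = rates.min() / rates.max()
print(f"Selection rates by group:\n{rates}")
print(f"Impact ratio: {impact_ratio:.2f}")

# Under the four-fifths rule, a ratio below 0.8 flags possible adverse impact.
if impact_ratio < 0.8:
    print("Adverse impact flagged - revisit the training data and retrain")
```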
Tick to objective data which has no bio data (that means a big NO to CV and social media scraping).
Tick to using multiple machine learning models that continuously triangulate the outcome, rather than relying on one version of the truth.
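To picture what triangulating with multiple models can mean in practice, here is a minimal sketch; the models, scores, and disagreement threshold are hypothetical:

```python
# A minimal sketch of triangulation: score with several independent models
# and flag candidates where the models disagree, instead of trusting one.
from statistics import mean, stdev

model_scores = {  # hypothetical scores for one candidate from three models
    "model_a": 0.81,
    "model_b": 0.78,
    "model_c": 0.52,
}

scores = list(model_scores.values())
consensus = mean(scores)
disagreement = stdev(scores)

# High disagreement means the models are not telling one consistent story:
# route the candidate to a human reviewer instead of auto-scoring.
if disagreement > 0.1:
    print(f"Models disagree (sd={disagreement:.2f}) - send to human review")
else:
    print(f"Consensus score: {consensus:.2f}")
```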
So instead of lumping all AI and ML into one big bucket of ‘bias’, look beneath the surface to really understand what’s going into the machine, as that’s where the amplification risk looms large.
Oh and the reason why I hate Simon Sinek …
I don’t actually hate him at all, but if a candidate said that to me in an interview I’d probably hire them for it, because I would make some superficial extrapolation about their personality based on it:
• first, it would tell me they watch TED talks, and that reeks of cleverness and learning appetite
• second, it would tell me they are confident enough to be contrarian, and that would make me believe they are better leaders
• third, I would infer they are not sucked into the vortex of thinking that culture is the panacea for every people problem.
See how easy it is to make an unbiased hiring decision?
Soon (maybe already), you will be putting your life, and your loved ones’ lives, in the hands of algorithms when you ride in that self-driving car. Algorithms are extensions of our cognitive ability, helping us make better decisions, faster and more consistently, based on data. Even in hiring.