World’s leading BPM service provider deploys Aspiring Minds assessments across 7 countries and 70 delivery centers!
The client is a business process management (BPM) provider headquartered in Bangalore with over 42,000 employees across 73 office centers in 7 countries. They are consistently ranked among the best outsourcing companies in the world by the International Association of Outsourcing Professionals (IAOP®).*
The company focuses on back-office processing, contact centers and HRO solutions. They combine technology-powered services in automation, digital and analytics to deliver transformational impact to their clients.
As their business model is primarily people-centric, the client came to Aspiring Minds with twin objectives: to find the best talent for voice, non-voice and technical positions across their global offices and to establish a standardized and automated evaluation system for the hiring process.
The client wished to expedite the candidate evaluation process for service personnel — voice, non-voice and technical roles in particular. They needed a platform that could standardize the process of assessing jobseekers across their business centers in the UK, US, Canada, Jamaica, Philippines, India and Colombia, without compromising quality.
Their traditional non-standardized method posed the usual challenges of running redundant assessments manually, resulting in long hiring cycles. This was further aggravated by:
- The absence of assessments to measure competency, which resulted in numerous bad hires.
- Assessment designs that lacked a scientific basis, leading to longer time-to-hire as well as high attrition rates.
- The absence of any applicant tracking system (ATS) integration.
WHY ASPIRING MINDS
Aspiring Minds’ 100% cloud-based solutions make it possible to completely automate the assessment and hiring process. Our proven AMCAT® assessment platform, AI-powered written (WriteX) and spoken (SVAR) English language assessments, and our engaging coding simulation platform with powerful auto-proctoring technology allow companies to assess job candidates accurately across multiple dimensions. The platform can auto-scale to deliver 50,000-70,000 assessments per day with ease, while administering 25,000 assessments concurrently.
Aspiring Minds’ AMCAT platform allowed the client to automate their entire candidate assessment process. We tailored the platform to focus on role-based hiring for voice, non-voice chat, email and transactional processing profiles. AMCAT’s analytical benchmarking tools correlate success profiles for role types across geographies. We simultaneously integrated the assessment inputs with the client's applicant tracking system to build an open and scalable recruitment infrastructure to support growth going forward.
We established three successive levels of competency testing, as defined by the client's human resource teams. Auto-proctoring and anti-plagiarism technology was used to ensure test integrity and quality control:
- LEVEL 1: Candidates were evaluated on basic competencies such as cognitive and behavioral skills, information gathering & analysis, computer literacy, call simulation, customer-centricity and process documentation.
- LEVEL 2: Successful candidates from level 1 were then evaluated on SVAR, the speech-based English language assessment, and WriteX, the written English evaluation that tests email and case-report writing skills.
- LEVEL 3: Shortlisted candidates advanced to role-based assessments. Here, candidates for specific roles such as customer service and sales were evaluated with situational judgment tests that gauge their performance in typical real-life scenarios.
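The three-level structure above is effectively a staged filter: each level is a pass/fail gate, and only candidates who clear a level advance to the next. A minimal sketch of that funnel logic (hypothetical, not the actual AMCAT implementation — the function and field names are illustrative assumptions):

```python
# Hypothetical sketch of a multi-level screening funnel: each level is a
# named pass/fail predicate, and only candidates who pass a level are
# carried forward to the next one.

def screening_funnel(candidates, levels):
    """candidates: list of candidate records.
    levels: ordered list of (name, predicate) pairs, where
    predicate(candidate) returns True if the candidate passes."""
    results = {}
    remaining = list(candidates)
    for name, passes in levels:
        # Keep only candidates who clear this level.
        remaining = [c for c in remaining if passes(c)]
        results[name] = list(remaining)
    return results


# Illustrative usage with made-up score thresholds:
cands = [{"id": i, "score": i * 10} for i in range(1, 6)]
levels = [
    ("level1_competency", lambda c: c["score"] >= 20),
    ("level2_language", lambda c: c["score"] >= 40),
]
shortlist = screening_funnel(cands, levels)
```

Each level's output here is the shortlist handed to the next stage, mirroring how level 1 competency results feed the SVAR/WriteX evaluations and then the role-based assessments.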
Scalability: Our automated assessment process allowed the client to evaluate 75,000 candidates across 7 geographic areas over the course of 8 months.
Reduced hiring-turnaround time: Our standardized assessment model helped the client reduce testing duration and interview time while simultaneously ensuring test fairness.
Improved quality of hires & candidate experience: Candidates reported very high satisfaction with the selection process. Our assessment tools significantly shortened the application process. The common framework and multi-dimensional approach of the evaluation gave confidence to jobseekers that the process was fair. Candidates were further encouraged by the mobile-ready nature of the assessments as well as the highly engaging content.
Cost Effective: The use of an automated platform to screen and evaluate candidates significantly reduced costs compared to the client’s traditional manual evaluations.
SVAR® is an automated AI-powered assessment of spoken English proficiency. SVAR auto-evaluates and scores candidates across six key parameters: pronunciation, fluency, active listening, grammar, vocabulary and spoken English comprehension. It features advanced voice recognition technology that can score candidates with native accents accurately, ensuring that scores correlate highly with those of a human evaluator. The scores are reliable and benchmarked to internationally accepted CEFR standards, and include recommendations for “hire” and “train.” SVAR typically takes less than 15 minutes to complete and can be accessed by smartphone, personal computer or IVR phone line.
ABOUT CUSTOMER SERVICE - SITUATIONAL JUDGMENT TEST (CS-SJT)
CS-SJT is a gamified assessment that simulates the real-life workplace scenarios that customer service agents typically face. CS-SJT measures skills such as customer-centricity, process adherence, customer expectation management, problem solving and self-management — all the skills that are required to do the job well.
ABOUT AMCAT LOGICAL ABILITY
The AMCAT Logical Ability test assesses an individual’s deductive, inductive and abductive reasoning. It evaluates an individual’s capacity to make objective interpretations; to make generalizations by perceiving and interpreting trends; and to analyze the assumptions behind an argument or statement.
ABOUT AMCAT ENGLISH
AMCAT English is a multiple-choice, computer-adaptive evaluation based on Item Response Theory (IRT) that tests a candidate’s written English and comprehension skills. A candidate’s response to a given question determines the type of question that will be presented next. When the candidate answers a question correctly, the level of difficulty increases. If the answer is incorrect, the level of difficulty decreases.
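The adaptive behavior described above — difficulty steps up after a correct answer and down after an incorrect one — can be sketched as follows. This is a simplified illustration under assumed names (item bank levels, fixed test length), not AMCAT's actual IRT item-selection algorithm, which scores ability from response patterns rather than simple level steps:

```python
# Simplified sketch of computer-adaptive item selection: the next item's
# difficulty depends on whether the previous answer was correct.

def run_adaptive_test(item_bank, answer_fn, start_level=3, length=10):
    """item_bank: dict mapping difficulty level -> list of questions.
    answer_fn(question): returns True if the candidate answers correctly.
    Returns the (level, correct) history of the session."""
    levels = sorted(item_bank)
    level = start_level
    history = []
    for _ in range(length):  # fixed-length test for illustration
        question = item_bank[level].pop(0)
        correct = answer_fn(question)
        history.append((level, correct))
        idx = levels.index(level)
        # Correct answer -> harder item; incorrect -> easier item,
        # clamped to the easiest/hardest level in the bank.
        if correct and idx < len(levels) - 1:
            level = levels[idx + 1]
        elif not correct and idx > 0:
            level = levels[idx - 1]
    return history
```

A candidate who answers everything correctly climbs to the hardest level and stays there; one who keeps missing drifts to the easiest level, which is the convergence behavior the paragraph describes.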