Archive for the ‘Ability, Aptitude & Intelligence’ Category
Tuesday, August 10th, 2010
Psychologist Vincent Wong carried out a review of psychometric tests used in Hong Kong, Singapore, Malaysia and throughout Asia. More than 40 tests from no fewer than 20 test developers were reviewed. The analysis had several focal points: practical information about the tests (such as price and practical design issues), test constructs, report design, technical details and training requirements.
There is a wide pricing range among tests from different developers. At the lower end of the continuum, one provider offers its entire product range for free, producing only a section of the chargeable report. Obviously, users have to pay for the full report to obtain useful information, so this is clearly a marketing strategy. From a psychometric perspective, however, the practice seriously harms the integrity of the tests, as anybody can access them an unlimited number of times. These can therefore only be seen as tests for people interested in trying tests out, rather than as usable in organizational settings. For better protected tests, prices range from USD$10 to more than USD$120, with some providers charging per usage while others also charge a subscription fee (usually paid annually).
Several design dimensions of the tests were considered in this analysis: the split between ipsative and normative measures, the type of scales employed, and other practical issues such as the medium of test administration.
The majority of the personality assessment tools (over 80%) employ normative measures (psychometric tools that compare the respondent with a group of similar others, or norm group), while the remainder employ an ipsative style (tools that determine the relative preference among different personality traits within the respondent). Two exceptional cases were identified that employed a mixed style, i.e. normative plus ipsative. The popularity of the normative style may come down to the fact that, for tests designed for selection purposes, the normative style is the better choice as it actually compares respondents with one another. Ipsative measures, on the other hand, provide better knowledge about the preferences or strengths within a respondent. In line with this, we found that most of the ipsative tests were preference or value tests designed for coaching or counselling purposes, although some ipsative measures designed for selection were also identified. The two tests that incorporated both normative and ipsative styles exploited the underlying connotation of the difference between the two scale types, using it to represent the discrepancy between the respondent’s real and ideal self.
The type of scale used is largely a function of whether a test is ipsative or normative. For normative tests the most popular scale type was the 5-point Likert scale (a Likert scale asks respondents to choose, among several options, the one that best represents their view). 7-point scales were also quite common, and there were a few occurrences of 3-point and 9-point scales. A few normative tests employed true-or-false scales instead of Likert scales. Ipsative tests employed forced-choice scales. One of the more popular versions asked respondents to pick both the option that describes them best (usually termed ‘most like me’) and the option that describes them worst (usually termed ‘least like me’). Another version asked respondents to rank the available options in order, although this version was very uncommon.
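The contrast between normative and ipsative scoring can be made concrete with a minimal sketch. This is purely illustrative (not any vendor's actual scoring algorithm): Likert ratings are summed per trait and remain comparable across people, while forced-choice "most/least like me" scoring sums to zero within a respondent, so it can only rank preferences within one person. The trait names are invented for the example.

```python
def score_normative(responses):
    """Sum 5-point Likert ratings (1-5) per trait; totals are
    comparable across respondents against a norm group."""
    totals = {}
    for trait, rating in responses:
        totals[trait] = totals.get(trait, 0) + rating
    return totals

def score_ipsative(blocks):
    """Each block: respondent picks 'most like me' and 'least like me'
    among several trait statements. +1 for most, -1 for least; the
    scores sum to zero, so they rank traits *within* the respondent."""
    totals = {}
    for most, least, options in blocks:
        for trait in options:
            totals.setdefault(trait, 0)
        totals[most] += 1
        totals[least] -= 1
    return totals

normative = score_normative([("warmth", 4), ("warmth", 5), ("drive", 2)])
ipsative = score_ipsative([
    ("drive", "warmth", ["drive", "warmth", "order"]),
    ("order", "drive", ["drive", "warmth", "order"]),
])
print(normative)  # {'warmth': 9, 'drive': 2}
print(ipsative)   # {'drive': 0, 'warmth': -1, 'order': 1}
```

Note how the ipsative totals always sum to zero: a respondent cannot be "high on everything", which is exactly why ipsative scores suit within-person preference profiling rather than between-person selection comparisons.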
Most of the surveyed tests, if not all, were designed to be completed in a computerized environment. While some tests can be administered online in an unsupervised manner, quite a few required supervised administration. A few tests provided different versions for supervised and unsupervised administration; having more than one version allowed the result to be verified in a supervised setting after the candidate had passed the unsupervised session. Paper-and-pencil versions were usually available at a price similar to the computerized version, although a few tests did not provide one.
Although none of the surveyed tests was designed to be completed within a set time, a timer was identified in one test, where it served to check against random or thoughtless responses.
Among the different attributes, personality was the most popular one measured. The majority of the personality measures were built on the Big Five model of personality identified by Costa and McCrae (1985). While some retained the original five factors, about half of the surveyed tests restructured the factor composition based on factor-analytic results or other theoretical support; for example, one test split the factor of conscientiousness into ‘Industriousness’ and ‘Methodicalness’, while another developer combined the five-factor model with behavioural tendencies to produce a seven-factor model. Another common observation was that, under each of the five factors, the primary factors (3 to 5 per factor, also known as facets) were also measured, and these were actually used more often by test developers in report generation and interpretation, probably because the primary factors offer more detailed information and thus greater flexibility. Besides the Big Five model, another very popular foundation was Jung’s (1920) typology of personality. Two of the tests used this theory as their entire theoretical foundation, but one employed the original categorical model while the other developed a continuum model. Rather than building on a single theory, many tests extract personality factors from multiple personality theories, and some measured as many as 34 personality dimensions. Examples of measured dimensions include ambition, initiative, concern for others, flexibility, and energy. Most of the surveyed personality tests served multiple functions, including selection, training/development needs analysis, counselling and related applications such as personal development, conflict management and team building.
Test developers further extended the applicability of personality tests to different situations by providing multiple versions of reports alongside a general personality profile.
Value, Motive and Preference
Other popular attributes measured were values, motives and preferences. Although these are three distinct attributes, we found it common for test publishers to combine two or all three of them into one test. These tests were less commonly employed for selection and more widely used in counselling and developmental scenarios, although some were designed for selection as well. Among tests measuring values and motives, normative measures were more common, whereas ipsative measures predominated among preference tests. Another related attribute was interest, and interest tests were mainly designed as career development tools.
Other measured attributes included leadership style, team role, behavioural tendency, emotional intelligence, self-efficacy, work ethic, interpersonal communication, sales orientation, customer service orientation, learning style and even work effectiveness tendency.
Nearly all of the surveyed tests offer multiple reports, almost all in narrative form alongside a graphic representation (usually bar charts) of the measured characteristics. One test, however, did not employ a narrative style in its reports at all; instead, it used graphical representations with a one-sentence description for each factor. Two-dimensional typology graphs and score matrices were also employed in some types of reports. Some reports used different colours to represent the different dimensions measured, while others used colour to indicate extreme scores (for example, green for high scores and red for low scores). Colour was also frequently employed for matching test scores against a standard or an established profile, with green meaning a good match and red a poor match.
Generic Personality Profile
All the surveyed tests provided at least some form of generic personality profile in the report, whether as narrative writing, a matrix of scores, two-dimensional typology graphs, bar charts or broken-line graphs. Most commonly, the personality profile consisted of a graphical representation of the test scores on different dimensions with a brief descriptive narrative alongside it. In this generic profile the test scores were presented, usually in the form of sten scores or percentiles; raw scores were also found in some reports. About half of the surveyed tests also presented the variation of the test score in the report, and a few explained its meaning. In all cases the primary dimensions measured by the test were reported in this section; secondary or higher-level composite dimensions were also frequently reported.
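The sten scores and percentiles mentioned above are both norm-referenced transformations of a raw score. As a minimal sketch (assuming a normally distributed norm group, and using the standard definition sten = 5.5 + 2z clamped to 1-10; the norm mean and SD here are invented):

```python
import math

def sten(raw, norm_mean, norm_sd):
    """Convert a raw score to a sten ('standard ten') against a norm
    group: sten = 5.5 + 2*z, rounded and clamped to the range 1..10."""
    z = (raw - norm_mean) / norm_sd
    return max(1, min(10, round(5.5 + 2 * z)))

def percentile(raw, norm_mean, norm_sd):
    """Approximate percentile rank, assuming the norm group's scores
    are normally distributed (normal CDF via the error function)."""
    z = (raw - norm_mean) / norm_sd
    return round(100 * 0.5 * (1 + math.erf(z / math.sqrt(2))))

# A raw score one SD above the norm mean:
print(sten(30, norm_mean=25, norm_sd=5))        # 8
print(percentile(30, norm_mean=25, norm_sd=5))  # 84
```

A sten of 8 and a percentile of 84 describe the same standing (one standard deviation above the norm mean); reports typically prefer stens because the coarse 1-10 banding discourages over-interpreting small score differences.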
Strengths and Limitations
Strengths and limitations were other very popular qualities reported, although we identified a few tests that do not report them. In reporting strengths and limitations, some tests expressed them in very specific behavioural terms, while others simply labelled high or low scores on particular dimensions as strengths or limitations. A few tests incorporated contextual factors into the reporting of strengths and limitations; these were more common in purpose-specific reports (for example, reports designed for leadership development or team building). Overall, tests tended to present information about the strengths and limitations of candidates.
Leadership, teamwork, interpersonal skills or orientation, and problem-solving orientation were found to be the most popular competencies tapped. Other competencies tackled by the surveyed tests included achievement orientation, customer service orientation, management style, decision making, planning and organization, influence and negotiation, delivery, creativity, analytic orientation, coping style and thinking style. Rather than being measured directly, these competencies were often generated from several primary personality dimensions. They tended to be written in a work context, with behavioural terms employed heavily to aid the comprehensibility of the report. Competency-based reports were also identified, with leadership-related reports appearing most often; competency-based reports for sales and managerial positions were also popular.
Interview prompts were found in some reports. These included general instructions on how to use the report to enhance the effectiveness of a follow-up interview, as well as specific interview questions suggested for a particular candidate. The number of interview prompts varied from three to more than ten suggested questions, and some reports even included the expected answers from the candidate. These interview prompts also served as a check on, or backup for, the validity of the tests.
Training (Development) Needs
Several tests with a separate training needs or developmental report were identified. For tests without a designated training needs report, it was surprising to find that a section outlining training was absent from the majority of the surveyed tests, given that most of them were designed to be used in training needs analysis. When present, the training needs outlined (referred to by some tests as ‘action plans’) were usually generated from the misfit aspects identified or from areas not up to the normative standard. A simple description of the needs themselves was common, and only a few reports provided concrete training suggestions.
Cultural fit information was identified in a few test reports. This information could cover the candidate’s fit with the organizational culture, the nature of the task, and co-workers, and it existed in several forms. The more popular way to compute it was to compare the candidate’s scores with the norm or an ideal profile. One test generated this information by comparing the candidate with the best performers. Yet another test presented the information from the candidate’s own perspective, stating what culture or environment would best fit the candidate.
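Comparing a candidate's scores with an ideal profile, as described above, is often just a distance calculation over the profile dimensions. A minimal sketch (the rescaling to a 0-100 "fit" figure and the dimension names are invented for illustration; real products use their own proprietary matching rules):

```python
import math

def profile_fit(candidate, ideal):
    """Euclidean distance between a candidate's sten profile and an
    ideal profile, rescaled to a rough 0-100 'fit' figure: 100 means
    identical profiles, 0 means maximal mismatch on a 1-10 sten scale."""
    dims = sorted(ideal)
    dist = math.sqrt(sum((candidate[d] - ideal[d]) ** 2 for d in dims))
    max_dist = math.sqrt(len(dims) * 9 ** 2)  # every dimension off by 9 stens
    return round(100 * (1 - dist / max_dist))

ideal = {"teamwork": 7, "drive": 8, "order": 6}
candidate = {"teamwork": 6, "drive": 8, "order": 5}
print(profile_fit(candidate, ideal))  # 91
```

Reports would then colour-code the result, e.g. green above some cut-off and red below it. Comparing against the scores of proven best performers, as one surveyed test did, simply substitutes an empirically derived profile for the hand-built ideal.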
Technical information about a test includes normative data, reliability and validity data, and the test’s development procedure. This is the most important information to make readily accessible to the public, but unfortunately some of it was virtually absent for some of the surveyed tests. Normative data were the most frequently reported, followed by reliability data. Evidence for validity and the development procedure, however, were absent for some tests despite the claim of being ‘scientifically validated’ in their marketing materials. For tests that provided none of the above information, their integrity was seriously in doubt.
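Of the reliability figures a test manual should report, the most common is internal consistency, usually Cronbach's alpha. As a self-contained sketch of how it is computed (the response data here are invented):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.
    item_scores: one row per respondent, one column per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    k = len(item_scores[0])           # number of items
    def var(xs):                      # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in item_scores]) for i in range(k)]
    total_var = var([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Four respondents answering a 3-item scale on a 1-5 Likert scale:
data = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 2],
]
print(round(cronbach_alpha(data), 2))  # 0.89
```

Manuals typically quote alpha per scale; values around 0.7 or above are conventionally taken as adequate for scales used in decision-making, which is why a test reporting no reliability data at all is a red flag.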
Training requirements varied from no training at all in one extreme case (the free online test) to BPS Level B plus additional training (approximately 7 days of training in total). For most tests, 2-3 days of test-specific training was common, but this type of training would not be recognized by a different test provider. The BPS (British Psychological Society) Competence in Occupational Testing qualification was found to be the most widely accepted by test providers. Most tests could be administered by a BPS Level B qualified user, but some required conversion training (1-2 days long) to become a qualified user.
Tuesday, March 23rd, 2010
PsyAsia International offers Free Psychometric Testing Course in Hong Kong & Singapore
Introduction to Psychometric Testing Course: Hong Kong, 4 May 2010; Singapore, 11 May 2010
PsyAsia International is Asia’s independent leader in psychometric test products and training. We choose to distribute only the world’s best, most validated psychometric assessments and offer locally relevant, world-class training in psychometrics. The Introduction to Psychometrics Workshop extends PsyAsia’s expertise in psychometric training in Asia by offering a course geared to those with very little experience or understanding of psychometrics. Many first-time clients do not understand why they need to be careful in their choice and use of psychometrics, or why training is a necessity for competent test use.
This one-day course aims to provide experience-based training in an accessible and economical way. The course is easy to understand, yet covers many of the important issues to be aware of when choosing and using psychometric tests. Given our passion for Asia and for the competent use of psychometric tests in Asia, PsyAsia makes no profit on this course. We charge delegates a small fee that reflects the cost of the hotel venue (including buffet lunch and refreshments) where the training is held, as well as the materials we provide to delegates. What’s more, if you later decide to attend one of our accreditation courses in psychometrics, we will issue you a discount code that reduces your course fee by the amount you paid for this course!
The history of psychometric testing
Comparison of psychometric tests with other modes of employee testing and assessment
The benefit of using psychometric tests in recruitment/selection, development and coaching
Reliability in psychometric testing
Validity in psychometric testing
Error in psychometric testing
Review of different aptitude, personality and values tests on the market
Questions to ask your test publisher or distributor
What next?
Note: During the workshop, delegates will create quasi-psychometric tests in groups to enable a hands-on exploration of issues such as reliability, error and validity in psychometric tests.
To view full course details and to register, please click here.
Thursday, March 18th, 2010
PsyAsia International is pleased to announce that until the end of March we will be offering free daily webinars to showcase our product range. There will be no set agenda; the agenda will be set by attendees. Please note, however, that product knowledge may differ depending on which of our consultants is running the webinar. Come along and chat with our consultants, see the Saville Consulting Wave, Identity Personality Assessment and the Apollo Profile in action, and ask questions about training and consulting options!
For times and to register, please click here…
Friday, January 15th, 2010
Types of Bias in Psychometric Test Translation
With the demand and need for psychological tests increasing across different cultures and countries, there is now much greater awareness of the issues associated with developing or adapting tests for contexts different from those for which they were originally developed. This article focuses on one of the key aspects of translating tests: the types of bias that can occur.
Using a test with a new cultural group is not as simple as directly translating it, administering it and then comparing the results for validity. A number of issues need to be considered, such as whether the area assessed by the test applies to the new culture, whether the test may be biased towards that group, and whether what is assessed by the test has similar behavioural indicators. These are just some of the areas where bias can enter the translation of tests and affect the validity of the test in the new context.
Van de Vijver and Hambleton (1996) differentiate between three distinct types of bias that may affect the validity of tests adapted for different cultural contexts: construct bias, method bias and item bias.
Construct bias occurs when the construct (e.g. personality) measured by the test displays significant differences between the original culture for which the test was developed and the new culture where it will be used. These differences can occur in how the construct was formulated and developed as well as in the relevant behaviours associated with it. It is critical to examine whether the underlying theory of the test is subject to construct bias; this can be done through studies examining the construct and its associated behaviours in the context where the test will be used. Significant differences found in such studies may indicate construct bias, and major revisions may be required to overcome it; otherwise, the validity of the test will be affected.
Method bias refers to factors related to the administration of the test that may affect its validity. Examples of areas where method bias can occur include social desirability, acquiescent response styles, the conditions under which the test was conducted, and the motivation of the respondents. Across cultures there can be differences in these areas that affect how respondents answer the test items. This may lead to differences being found that are erroneously attributed to cultural differences when, in fact, they result from differences in the administration procedures. Method bias is therefore a threat to the validity of tests adapted for use in new cultures. Test developers need to focus not only on the adaptation of the test itself but also on issues around implementing the test in a new context.
Item bias refers to biases that occur with the individual items in a test, usually as a result of poor translation choices or culturally inappropriate translations. For example, the phrase “kick the bucket” refers to passing away in the Western context and is commonly understood by most people in that culture; however, the phrase would have no meaning for people from cultures without prior exposure to it. A literal translation of that phrase would therefore be a poor one, as it does not convey the correct meaning of the item. Items need to be culturally equivalent: their meaning must be correctly translated so as to maintain the validity of the test in the new cultural context.
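Item bias is usually detected empirically as well as by translation review. A very rough first-pass screen (not a full differential-item-functioning analysis such as Mantel-Haenszel; the endorsement rates below are invented) is to flag items whose group difference in endorsement rate deviates markedly from the average difference across all items:

```python
def flag_biased_items(group_a, group_b, threshold=0.15):
    """group_a / group_b: per-item proportions endorsing (or answering
    correctly) in the original-language and translated-language samples.
    An item is flagged when its group gap deviates from the average gap
    by more than `threshold` - a crude stand-in for DIF screening."""
    diffs = [a - b for a, b in zip(group_a, group_b)]
    mean_diff = sum(diffs) / len(diffs)
    return [i for i, d in enumerate(diffs) if abs(d - mean_diff) > threshold]

# Item 2's translated wording behaves very differently in group B,
# while the other items shift only slightly:
orig = [0.80, 0.75, 0.70, 0.65]
trans = [0.78, 0.74, 0.40, 0.66]
print(flag_biased_items(orig, trans))  # [2]
```

A flagged item like the "kick the bucket" example would then go back to translators for a meaning-equivalent (rather than literal) rendering before the adapted test is validated.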
These are some of the biases that may occur during the translation of tests. Test developers will need to be aware of the sources of bias and take the appropriate measures to avoid these biases.
Van de Vijver, F., & Hambleton, R. K. (1996). Translating tests: Some practical guidelines. European Psychologist, 1, 89-99.
Psychometric Training in Singapore, Hong Kong, Malaysia, and China
If you are serious about using psychometric tests properly, we recommend joining PsyAsia International’s Psychometric Assessment at Work Course, which leads to a certificate of competence in Occupational Testing Level A and Level B from the British Psychological Society. The course is run publicly in Singapore and Hong Kong, or in-house anywhere.
More details about BPS Level A and B in Singapore and Hong Kong
Online Psychometric Training – Worldwide
Alternatively, you might be interested in introductory Online Psychometric Test Training presented live by a registered psychologist. PsyAsia is offering a special fee of just US$12 for anybody who registers for the February online psychometric training course!
More details about online psychometric test training
Thursday, August 6th, 2009
1. ISIR has instituted a new program to help defray travel costs for post-doctoral students and faculty members without travel support or with only partial support. We will be able to support ten or more persons. First priority will go to those presenting papers who are post-doctoral students or junior faculty without travel support. Travel will be funded for up to $1,500 per person. This program is being funded by ISIR.
2. The second program supports student travel and is fully explained in the conference announcement found at http://www.isironline.org/meeting/. This program is supported by the Templeton Foundation grant that we recently received. To apply for either program, send an email by September 25th to email@example.com with an attachment stating 1) Name, 2) Affiliation, 3) Current Status (e.g., 3rd year graduate student, postdoctoral student, faculty member, etc.), 4) Travel support currently available to you, 5) Indicate if you have submitted a paper to be presented at the conference and if you will be the presenter, 6) Any other information that may be relevant to your application. Note that, because of the new program, the deadline for application to both programs has been extended to September 25, 2009.
Wednesday, December 10th, 2008
The nature versus nurture debate about intelligence has been going on for many years. The debate revolves around how intelligence is formed: either from a person’s genes and physiological attributes (nature), or from personal experiences, learnt through education and exposure to the world (nurture).
The nurture side holds that all humans are born as a ‘blank slate’ and that the amount of intelligence we have depends on our experiences; on this view, all humans have the capacity to learn and the same potential abilities when it comes to IQ. Research has shown that family factors have an effect on a child’s IQ up to adolescence, but after a certain point nature appears to play the larger part. Twin studies have been used to investigate this debate further, and they show that genetics can play a substantial role in influencing a person’s IQ: twins raised in different environments have more similar IQs than fraternal siblings raised together, suggesting that nature plays the more important part in IQ.
Previous research has shown that both our genes and the environment play a role in intelligence, and that we are all born with different levels of capability which can be developed over the years. Of note here is the work of American psychologist Robert Plomin, who has demonstrated that genetic factors can mediate the link between the environment and personal outcomes such as intelligence. Research remains somewhat divided in this area: some researchers suggest a 40%/60% split between nature and nurture, others the exact opposite, while some go with 50%/50%. What is clear is that both nature and nurture are responsible for how we are today. The nature versus nurture debate can also be applied to other areas of psychology, such as the development of language, identity or personality.
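The percentage splits quoted above come from heritability estimates. The simplest classical approach is Falconer's formula, which doubles the difference between identical-twin and fraternal-twin correlations; the correlations below are illustrative round numbers, not figures from a specific study:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's rough estimate of heritability from twin data:
    h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the IQ
    correlations for identical and fraternal twin pairs."""
    return 2 * (r_mz - r_dz)

# Illustrative correlations (hypothetical, for the arithmetic only):
h2 = falconer_heritability(r_mz=0.85, r_dz=0.60)
print(round(h2, 2))  # 0.5 -> about half the IQ variance attributed to genes
```

With these inputs the estimate lands on the 50%/50% split mentioned above; plugging in different published twin correlations is exactly how studies arrive at the diverging 40%/60% figures.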
Petrill, S. A., & Wilkerson, B. (2004). Intelligence and achievement: A behavioural genetic perspective. Educational Psychology Review, 12(2), 185-199.
Plomin, R., Loehlin, J. C., & DeFries, J. C. (1985). Genetic and environmental components of “environmental” influences. Developmental Psychology, 21, 391-402.
Sunday, September 28th, 2008
Over the years, PsyAsia International has placed tremendous effort into putting together a knowledgebase at our website as well as a knowledge blog at assessmentcentral.com, and we continue to add to and develop these further. We’re now offering you the opportunity to ask questions pertinent to you and have our psychologists answer them at our blog. Your question should relate to human resource management or business psychology; it can fall under the categories listed at this blog, or you may request a new category. We may also open some questions and answers up for discussion and general comment where we feel it is relevant. Please keep in mind that your question should be answerable within a blog post (i.e., don’t ask anything that might require a very complex or long response!). Our psychologists will aim to respond to one question per day, spending a target of 10-15 minutes on each response. Answers will be posted at assessmentcentral.com and the PsyAsia knowledgebase.
Please feel free to submit your questions now by emailing our ONLINE LEARNING SECTION. You must complete all details in the form accurately. We will not answer questions from those who enter FREE email addresses such as yahoo/google/msn etc or where a company name is not provided. However, we will not mention your personal details in the response in order to protect your privacy. Thank you for your participation!
Monday, July 14th, 2008
Wondering what organisational psychology is and how it relates to HRM? Come along to our seminar on 24 July in Causeway Bay. The seminar will be run in Cantonese and English (2 sessions). Registration is managed by the Hong Kong Institute for HRM. Click below for more details and registration:
Monday, May 12th, 2008
Saville Consulting Wave® Outperforms Major Personality Assessments
A free seminar hosted by PsyAsia International and presented by the MD of Saville Consulting Asia Pacific, Scott Ruhfus
Registration: Click here More information on the seminar: click here
Over the past four years, Saville Consulting has developed revolutionary assessment tools designed to address the 21st-century workplace. Concerns over unsupervised internet testing have now been largely overcome by Saville Consulting’s Swift aptitude assessment portfolio. Additionally, the Saville Consulting Wave®, translated into over 20 languages, takes one quarter of the time of the OPQ® with superior validity and is rapidly becoming the definitive personality assessment tool worldwide. The Saville Consulting Wave® has demonstrated outstanding ability to predict performance at work, and groundbreaking research by Professor Peter Saville and his team has shown that the tool outperforms the OPQ32, NEO, 16PF, MBTI, DISC and HPI in predicting job performance.
At this PsyAsia seminar in Hong Kong, Scott Ruhfus will present some of these findings and introduce the Wave tool along with its practical applications.
About the presenter
Scott Ruhfus, Managing Director Saville Consulting Asia Pacific
BA Hons (Syd), MAPS, MAICD, Registered Psychologist
Scott Ruhfus is an organizational psychologist with 30 years’ experience in human resources, consulting and management. He is part of the management team that looks after Saville Consulting clients in the Asia Pacific region.
Scott is a passionate advocate for the role of assessment as an aid to organizational effectiveness and individual satisfaction. He has assisted many organizations in the region to do just that and he has the practical experience of seeing both sides of the fence, first as a user, then as an adviser and trainer.
Scott’s testing expertise is in the development of decision support systems and reporting formats which address the practical need to identify, develop and retain talent, and to do it better than the competition. He has led projects as diverse as pilot selection for airlines and entry level screening for retail banks.
With his management experience and behavioural background he now also coaches and counsels senior executives in a number of public and private companies as well as the professional services sector.
Until 2002, he was President, Asia Pacific, of SHL, having joined the fledgling Australian office in 1988. Early on, he helped to introduce modern concepts in testing to the Australian and other regional markets, and later, was involved in product development and strategy. He has headed the Australian HR function of 3 multinationals, and was a military psychologist before that.
Scott has a degree in psychology from Sydney University, is a Registered Psychologist and a member of the Australian Institute of Company Directors. He has enjoyed a close link to the profession, speaking at many seminars on advances in testing and assessment, taking an active interest in the postgraduate training of business psychologists, and serving on a number of professional committees. He is a past Chair of the Australian Psychological Society’s College of Organisational Psychologists in Sydney.
More information on the seminar: click here
Friday, March 28th, 2008
In response to continuous requests from our clients for quick advice, PsyAsia International proudly presents the expressCONSULT™ service. Our clients very often need professional advice from our psychologists that is brief enough to be delivered in one email or telephone call. As experts in the area, we are always keen to help, but we are often very busy, and responding to 20 or more requests for quick support each day means cutting into work for our full-paying clients or leaving work very late! To overcome this dilemma, PsyAsia has created the expressCONSULT™ solution of “purchasing time from our psychologists”. With this service, you can purchase our psychologists’ time for professional advice without needing a formal consultancy engagement. It is quick and easy, and your problem can be solved in as little as 15 minutes!
What do we offer?
With expressCONSULT™, we offer a wide range of advisory and consultancy services that take no longer than 2 working hours (for projects estimated to take longer than 2 hours, you would need to opt for our regular consulting services).
Examples of expressCONSULT™ services include, but are not limited to: checking interview questions, assessment centre exercises and training materials; advice on selection procedures, training design and performance appraisal; and any other advisory services that call on our expertise.
More information at http://www.psychologicalconsultancy.com