Psychometrics? Distractor? CBT Technical Jargon Explained

With the CAT and NMAT exams taking a new form when computer-based testing was launched in 2009, the horizon of MBA aspirants became dotted with unfamiliar terms such as 'Psychometrics', 'Testing window' and 'Formative assessment'. This technical jargon is new to the test takers of computer-based MBA exams. Here is what the most common terms mean.

1) CBT – CBT is an acronym for Computer Based Testing. The most high-profile examples of Computer Based Tests in India are the NMAT and CAT.

2) Accommodations - Extra provisions made for a test taker for a particular reason, most often a disability of some kind. Examples of accommodations include a reader to read the test on the test taker’s behalf, or extra time added to the test length to account for factors such as dyslexia.

3) Constructed response - A question type that requires the test taker to create or produce an answer, such as writing an essay or filling in missing words or phrases, as opposed to selected response.

4) Distractor - One of the incorrect choices on a multiple choice question. Distractors are designed to incorporate common test taker errors.  The more plausible a distractor is, the more difficult the question may be.

5) E-Assessment - Another term for Computer Based Testing, often used in the academic fraternity.

6) Equating - The process of ensuring that scores on different forms of a test (test forms) share the same statistical characteristics. For example, a test taker can re-sit a test a number of times, and although the test questions will be different each time, the difficulty of the test will be statistically comparable. (A small worked sketch of one equating method appears after this list.)

7) Feedback - A report on how the test taker performed on a test, which can often be provided immediately after testing. Test takers may be provided with their pass/fail status, a numeric test score, and/or more in-depth analysis based on how they performed on the various content domains within the test.

8) Formative assessment - As with diagnostic assessment, a type of testing that runs alongside the learning program, so that the testing process becomes integrated into, and part of, the test takers’ learning outcomes.

9) Item type - A specific kind of item or question of which there are many types ranging from multiple choice or essay questions to more high-tech on-screen item types such as simulations, 3D modeling tasks or drag-and-drop tasks.

10) Key - The correct answer to an item on a test.

11) Psychometrics - The field of study concerned with the theory and practice of educational and psychological measurement.  Psychometricians are experts in the design and analysis of tests that measure knowledge, abilities, attitudes, and personality traits.

12) Score report - A feature of CBT: the report the test taker receives on completing a test, which indicates his or her performance on the test. Sometimes this is a final result, sometimes it is a partial report of a section of a test with a final overall result to follow.

13) Test form - One version of a test. Multiple test forms (containing different sets of items and whose scores are equated) may be operational at the same time to eliminate the over-exposure of individual items when there are large numbers of test takers. Multiple test forms also ensure that test takers who sit the test on more than one occasion receive a different set of items at each administration.

14) Testing window - The time slot within which a test can be taken. Paper-based exams are typically administered in short testing windows (often just a single day) because of the operational issues involved in securely handling all the paper test forms. Computer Based Testing enables tests to be available in longer testing windows, even continuously, permitting test takers to sit the test on demand, and eliminating the need for managing large volumes of paper forms.

15) Validity - The extent to which the scores on a test are useful for the purpose for which the test was designed. For example, if a test is designed to select candidates for graduate school, the scores on the test should separate the more able test takers from the less able test takers. (A short sketch of how validity can be checked against later performance appears after this list.)
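
For the statistically curious, here is a minimal sketch of equating in Python. It uses mean-sigma linear equating, which is just one simple equating method; the scores below are invented purely for illustration, and actual exam bodies may use more sophisticated approaches.

```python
# A minimal sketch of mean-sigma linear equating (one simple method among many).
# The score distributions below are made-up illustrative numbers, not real exam data.
from statistics import mean, stdev

# Hypothetical raw scores from two test forms taken by comparable groups.
form_a_scores = [52, 61, 58, 70, 66, 49, 73, 64]
form_b_scores = [47, 55, 53, 65, 60, 45, 68, 58]  # Form B ran slightly harder

mu_a, sd_a = mean(form_a_scores), stdev(form_a_scores)
mu_b, sd_b = mean(form_b_scores), stdev(form_b_scores)

def equate_b_to_a(raw_b: float) -> float:
    """Map a Form B raw score onto the Form A scale."""
    return mu_a + (sd_a / sd_b) * (raw_b - mu_b)

# A raw 60 on the harder Form B reports as a higher equated score on Form A's scale.
print(f"Form B raw 60 -> Form A equivalent {equate_b_to_a(60):.1f}")
```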
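
Similarly, a rough way to picture validity is to check whether test scores line up with the outcome the test is meant to predict. The sketch below uses invented admission scores and first-year GPA figures purely for illustration.

```python
# A minimal sketch of checking predictive validity: do higher test scores go with
# better later outcomes? The numbers are invented purely for illustration.
from statistics import correlation  # available in Python 3.10+

admission_test_scores = [540, 600, 480, 650, 700, 520, 580, 630]
first_year_gpa        = [2.9, 3.2, 2.6, 3.5, 3.7, 2.8, 3.1, 3.4]

# A correlation near +1 suggests the test separates stronger from weaker candidates;
# a value near 0 would call the test's validity for this purpose into question.
r = correlation(admission_test_scores, first_year_gpa)
print(f"Score-GPA correlation: {r:.2f}")
```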

You should now be able to understand the meaning of the terms above. You can read more such highly relevant articles for CAT 2010 on siliconindia.com.
