Validez de contenido de un protocolo de Buenas Prácticas en la evaluación del desarrollo psicomotor
Content validity of a good-practices protocol for the assessment of psychomotor development
Resumen
Abstract
References
1. Ministerio de Salud. Norma técnica de salud para el control de crecimiento y desarrollo de la niña y el niño menor de 5 años. Lima: Ministerio de Salud; 2010.
2. Angulo-Ramos M, Merino-Soto C. Características del uso de pruebas de desarrollo psicomotor en el Perú. IV Congreso Internacional Ibero Americano de Enfermería; 10-12 May 2017; Cancún, México; 2017.
3. Council on Children With Disabilities, Section on Developmental Behavioral Pediatrics, Bright Futures Steering Committee, Medical Home Initiatives for Children With Special Needs Project Advisory Committee. Identifying infants and young children with developmental disorders in the medical home: an algorithm for developmental surveillance and screening. Pediatrics. 2006; 118(4):405-20.
4. Lynch BA, Weaver AL, Starr SR, Ytterberg KL, Rostad PV, Hall DJ, et al. Developmental screening and follow-up by nurses. Am J Matern Child Nurs. 2015; 40(6):388–93.
5. Monge AA, Meneses MM. Instrumentos de evaluación del desarrollo psicomotor. Rev Educ. 2002; 26(1):155–68.
6. Haeussler IM, Marchant T. TEPSI: Test de Desarrollo Psicomotor. Santiago de Chile: Ediciones Universidad Católica de Chile; 2003.
7. Rodríguez S, Arancibia V, Undurraga C. Escala de Evaluación de Desarrollo Psicomotor para niños entre 0 y 2 años. Santiago de Chile: Editorial Galdoc; 1987.
8. Frankenburg WK, Dodds JB, Archer P, Bresnick B, et al. Denver II: training manual. Denver, Colorado: Denver Developmental Materials Inc; 1992.
9. Tristán-López A. Modificación al modelo de Lawshe para el dictamen cuantitativo de la validez de contenido de un instrumento objetivo. Avances en Medición. 2008;6(1):37–48.
10. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Psychological Association; 2014.
11. Canadian Psychological Association. Guidelines for Educational and Psychological Testing. Ottawa, Ontario: Canadian Psychological Association; 1987.
12. Bartram D. The development of standards for the use of psychological tests in occupational settings: the competence approach. The Psychologist. 1995;1:219–23.
13. International Test Commission. International guidelines for test use. Int J Test. 2001;1(2):93-114.
14. Rukundo A, Magambo J. Effective test administration in schools: principles & good practices for test administrators in Uganda. Afr J Teach Educ. 2010;1(1):166–73.
15. Alfonso V, Johnson A, Patinella L, Rader D. Common WISC-III examiner errors: Evidence from graduate students in training. Psychol Sch. 1998;35(2):119–25.
16. Erdodi L, Richard D, Hopwood C. The importance of relying on the manual: Scoring error variance in the WISC-IV vocabulary subtest. J Psychoeduc Assess. 2009; 27(5):374–85.
17. Lee D, Reynolds CR, Willson VL. Standardized test administration: why bother? Journal of Forensic Neuropsychology. 2003;3:55–81.
18. Rider B, Linden C. Comparison of standardized and non-standardized administration of the Jebsen Hand Function test. J Hand Ther. 1988;1(3):121–3.
19. Groenewald T. A phenomenological research design. Int J Qual Methods. 2004;3(1):1–26.
20. Morse JM. Designing funded qualitative research. In: Denzin NK, Lincoln YS, editors. Handbook of qualitative research. Thousand Oaks, CA: Sage; 1994. pp. 493–503.
21. Yun J, Ulrich DA. Estimating measurement validity: a tutorial. Adapt Phys Activ Q. 2002; 19(1): 32–47.
22. Skjong R, Wentworth B. Expert judgement and risk perception. Det Norske Veritas. 2000:1–8.
23. Aravamudhan NR, Krishnaveni R. Establishing and reporting content validity evidence of new Training and Development Capacity Building Scale (TDCBS). Manag. 2015;20(1):131–58.
24. Penfield RD, Miller JM. Improving content validation studies using an asymmetric confidence interval for the mean of expert ratings. Appl Meas Educ. 2004;17(4):359–70.
25. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30(4):459–67.
26. Wilson HS. Research in nursing. 2nd ed. California: Addison-Wesley Publishing Company; 1989.
27. Sireci S, Faulkner-Bond M. Validity evidence based on test content. Psicothema. 2014;26:100–7.
28. Delgado-Rico E, Carretero-Dios H, Ruch W. Content validity evidences in test development: an applied perspective. Int J Clin Health Psychol. 2012;12(3):449–60.
29. O'Neil T, Sireci SG. Evaluating the consistency of test content across two successive administrations of a State-Mandated Science and Technology Assessment (Report No.: Center for Educational Assessment MCAS Validity Report No. 2., CEA-454). Amherst, MA: School of Education, University of Massachusetts; 2002.
30. Fowell SL, Fewtrell R, McLaughlin PJ. Estimating the minimum number of judges required for test-centred standard setting on written assessments. Do discussion and iteration have an influence? Adv Health Sci Educ Theory Pract. 2008;13(1):11–24.
31. Wilson FR, Pan W, Schumsky DA. Recalculation of the critical values for Lawshe's content validity ratio. Meas Eval Couns Dev. 2012;45(3):197–210.
32. Urcola-Pardo F, Ruiz de Viñaspre R, Orkaizagirre-Gomara A, Jiménez-Navascués L, Anguas-Gracia A, Germán-Bes C. La escala CIBISA: herramienta para la autoevaluación del aprendizaje práctico de estudiantes de enfermería. Index de Enfermería. 2017;26(3):226–30.
33. Ayre C, Scally AJ. Critical values for Lawshe's Content Validity Ratio: revisiting the original methods of calculation. Meas Eval Couns Dev. 2014;47(1):79–86.
34. Bartram D. Test qualifications and test use in the UK: The Competence Approach. Eur J Psychol Assess. 1996;12(1):62–71.
35. Aiken LR. Three coefficients for analyzing the reliability and validity of ratings. Educ Psychol Meas. 1985;45(1):131–42.
36. Aiken LR. Content validity and reliability of single items or questionnaires. Educ Psychol Meas. 1980;40(4):955–9.
37. Penfield RD, Giacobbi JPR. Applying a score confidence interval to Aiken's item content-relevance index. Meas Phys Educ Exerc Sci. 2004;8(4):213–25.
38. Merino C, Livia C. Intervalos de confianza asimétricos para el índice de validez de contenido: Un programa Visual Basic para la V de Aiken. An Psicol. 2009;25(1):169–71.
39. Moscoso MS, Merino-Soto C. Construcción y validez de contenido de un Inventario de Mindfulness: una perspectiva iberoamericana. Mindfulness and Compassion. 2017;2(1):9–16.
40. Bachman RD, Paternoster R. Statistics for criminology and criminal justice. Los Angeles: Sage; 2017.
41. Ward JT. vratio: Stata module to calculate variation ratio and proportion of maximum heterogeneity for categorical variables. Statistical Software Components S458267, Boston College Department of Economics.
42. Gómez del Pulgar García-Madrid M, Pacheco Del Cerro E, González Jurado MA, Fernández Fernández MP, Beneit Montesinos JV. Diseño y validación de contenido de la escala "ECOEnf". Index de Enfermería. 2017;26(4):265–9.
43. Liu M, Senaratana W, Tonmukayakul O, Eriksen L, et al. Development of competency inventory for registered nurses in the People's Republic of China: scale development. Int J Nurs Stud. 2007;44(5):805–13.