A Description of Missing Data in Single-Case Experimental Designs Studies and an Evaluation of Single Imputation Methods


Aydın O.

BEHAVIOR MODIFICATION, vol.0, no.0, pp.1-48, 2024 (SSCI)

  • Publication Type: Article / Full Article
  • Volume: 0 Issue: 0
  • Publication Date: 2024
  • DOI Number: 10.1177/01454455241226879
  • Journal Name: BEHAVIOR MODIFICATION
  • Journal Indexes: Social Sciences Citation Index (SSCI), Scopus, Academic Search Premier, Periodicals Index Online, CINAHL, EBSCO Education Source, Education Abstracts, EMBASE, ERIC (Education Resources Information Center), Linguistics & Language Behavior Abstracts, Psycinfo, Social Services Abstracts, Sociological Abstracts, Violence & Abuse Abstracts
  • Page Numbers: pp.1-48
  • Affiliated with Erzincan Binali Yıldırım Üniversitesi: Yes

Abstract

Missing data are inevitable in single-case experimental designs (SCEDs) studies because measurements are repeated over a period of time. Despite this fact, SCEDs implementers such as researchers, teachers, clinicians, and school psychologists usually ignore missing data in their studies. Performing analyses without addressing missing data, whether in an intervention study using SCEDs or in a meta-analysis of SCEDs studies on a topic, can lead to biased results and threaten the validity of individual or overall findings. In addition, missingness can undermine the generalizability of SCEDs studies. Considering these drawbacks, this study aims to provide descriptive and advisory information to SCEDs practitioners and researchers about missing data in single-case data. To accomplish this task, the study presents information about missing data mechanisms, item-level and unit-level missing data, planned missing data designs, the drawbacks of ignoring missing data in SCEDs, and missing data handling methods. Because single imputation methods, among the available missing data handling methods, do not require complicated statistical knowledge, are easy to use, and are therefore more likely to be adopted by practitioners and researchers, the present study evaluates single imputation methods in terms of intervention effect sizes and missing data rates using real and hypothetical data samples. This study encourages SCEDs implementers, as well as meta-analysts, to use some of the single imputation methods to increase the generalizability and validity of study results when they encounter missing data.
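To make the idea of single imputation in single-case data concrete, the following is a minimal sketch, not the authors' code or data: it applies two simple single imputation strategies (phase-mean imputation and last observation carried forward) to a hypothetical AB-phase series with missing sessions, then computes a basic nonoverlap effect size (PND) on the imputed series. All function names and data values are illustrative assumptions.

```python
# Minimal sketch (hypothetical data; not the article's dataset or analysis code).
# Two single imputation strategies for an AB-phase single-case series with
# missing sessions coded as None, followed by a simple nonoverlap effect
# size (percentage of nonoverlapping data, PND).

def impute_phase_mean(series):
    """Replace None values with the mean of the observed values in the same phase."""
    observed = [x for x in series if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in series]

def impute_locf(series):
    """Replace None values with the most recent observed value (LOCF)."""
    filled, last = [], None
    for x in series:
        if x is not None:
            last = x
        filled.append(last)
    return filled

def pnd(baseline, intervention):
    """Percentage of intervention points exceeding the baseline maximum."""
    ceiling = max(baseline)
    exceeding = sum(1 for x in intervention if x > ceiling)
    return 100.0 * exceeding / len(intervention)

# Hypothetical AB data; None marks a missed observation session.
baseline = [2, 3, None, 2, 3]
intervention = [5, None, 7, 8, None, 9]

b_mean, i_mean = impute_phase_mean(baseline), impute_phase_mean(intervention)
b_locf, i_locf = impute_locf(baseline), impute_locf(intervention)

print("PND after phase-mean imputation:", pnd(b_mean, i_mean))
print("PND after LOCF imputation:      ", pnd(b_locf, i_locf))
```

Comparing the effect size across imputation strategies and missing data rates, as this toy example does on a very small scale, mirrors the kind of evaluation the abstract describes, although the article's actual methods and data differ.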