Policy Evaluation and Mixed Methods
Matta, Corrado. Linnaeus University, Faculty of Social Sciences, Department of Pedagogy and Learning (Läroplansteori och didaktik (SITE)). ORCID iD: 0000-0003-2282-8071
2018 (English). Conference paper, oral presentation with published abstract (Refereed)
Abstract [en]

In this paper, I assess the claim that mixed methods are especially appropriate for evaluating policy. The aim of my discussion is to clarify in what way mixing different methods can be instrumentally beneficial when evaluating policy initiatives.

Social researchers concerned with the issue of evaluation have claimed on many occasions that mixed methods are especially appropriate for the aim of building program theories (Chen 2006; White 2008, 2009). A program theory is a model of a policy intervention that represents the intervention, its context and its outcome as a causal chain. The primacy of mixed methods as a specifically appropriate methodological approach for program theory has been justified contrastively: evaluation based on statistical studies, it is claimed, can only account for the effect of an intervention on a particular phenomenon (White 2008), whereas program theory requires understanding how the intervention generated the measured effect (Chen 2006). For this reason, it has been claimed that effect-size studies must be complemented with studies that provide an account of the causal path or mechanism leading from the intervention to the outcome in its original context. It is worth noting that several philosophers have shared this concern about certain statistical methods (Cartwright 2007; Grüne-Yanoff 2016; Runhardt 2015).
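
As a schematic illustration (the notation is mine, not the abstract's): an effect-size study estimates only the magnitude of the intervention's effect, for instance an average treatment effect, while a program theory additionally specifies the causal chain through which the intervention acts in its context. Writing T for the intervention, M for a hypothetical mediating mechanism variable, Y for the outcome and C for the context, the contrast can be sketched in LaTeX notation as

    \mathrm{ATE} = \mathbb{E}[Y \mid \mathrm{do}(T=1)] - \mathbb{E}[Y \mid \mathrm{do}(T=0)]

    T \rightarrow M \rightarrow Y \qquad (\text{within context } C)

The first expression is silent about M; supplying M, and the conditions under which the arrows hold, is what a program theory is meant to do.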

However, the literature on mixed methods shows that the very term “mixed methods” is vague, covering a plethora of different techniques for the collection and analysis of empirical data. It is therefore implausible that the mere use of some mixed method is a sufficient condition for avoiding the limitations of statistical methods and for providing a good program theory. This poses a problem for assessing the claim that mixed methods are superior for program evaluation.

To circumvent this problem, I categorize mixed methods into groups according to their characteristic integration strategy. The literature on mixed methods typically distinguishes three such strategies: (i) method integration, (ii) data integration, and (iii) model integration.

The next step in my argument is to define a criterion for the explanatory aims of program theory. As claimed in the literature on program theory, an appropriate model of a program ought to describe some mechanism connecting the intervention to its outcome. Following Cartwright and Stegenga (2012), I define a mechanism as an answer to a how-question that accounts for the INUS conditions (Insufficient but Necessary parts of Unnecessary but Sufficient conditions) that influence the outcome of the intervention. The capability of the integration strategies to account for mechanisms can therefore be assessed by comparing how well they account for INUS conditions. Using examples from the evaluation of a large-scale professional development program for mathematics teachers recently introduced in Sweden, I proceed with the comparison of integration strategies.
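
To make the INUS idea concrete (the schema below follows Mackie's classic formulation and is my illustration, not part of the abstract): a factor A is an INUS condition for an effect E when A is a non-redundant conjunct of a conjunction that is sufficient, but not necessary, for E. In LaTeX notation,

    (A \wedge B) \vee C \;\Rightarrow\; E

where A \wedge B is one sufficient route to E, C stands for alternative sufficient conditions (which is why A \wedge B is unnecessary), A on its own is insufficient, and dropping A from A \wedge B destroys the sufficiency of that conjunct. On this reading, an answer to a how-question identifies the background conditions B that, together with the intervention A, make up a sufficient route to the outcome.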

References 

Cartwright, Nancy. 2007. “Are RCTs the Gold Standard?” BioSocieties 2 (1): 11–20. 

Cartwright, Nancy, and Jacob Stegenga. 2012. A Theory of Evidence for Evidence-Based Policy. Oxford University Press/The British Academy. 

Chen, Huey T. 2006. “A Theory-Driven Evaluation Perspective on Mixed Methods Research.” Research in the Schools: 75–83.

Grüne-Yanoff, Till. 2016. “Why Behavioral Policy Needs Mechanistic Evidence.” Economics & Philosophy 32 (3): 463–83.

Runhardt, Rosa W. 2015. “Evidence for Causal Mechanisms in Social Science: Recommendations from Woodward’s Manipulability Theory of Causation.” Philosophy of Science 82 (5): 1296–1307. 

White, Howard. 2008. “Of Probits and Participation: The Use of Mixed Methods in Quantitative Impact Evaluation.” IDS Bulletin 39 (1): 98–109. 

———. 2009. “Theory-Based Impact Evaluation: Principles and Practice.” Journal of Development Effectiveness 1 (3): 271–284.

Place, publisher, year, edition, pages
2018.
Keywords [en]
Policy evaluation; Mixed methods; Mechanistic models; Integration
National Category
Philosophy; Pedagogy
Research subject
Social Sciences, Practical Philosophy; Pedagogics and Educational Sciences, Education
Identifiers
URN: urn:nbn:se:lnu:diva-79436; OAI: oai:DiVA.org:lnu-79436; DiVA id: diva2:1277790
Conference
The European Network for the Philosophy of the Social Sciences (ENPOSS) and the Philosophy of Social Science Roundtable (POSS-RT) 2018 conference. Institute of Philosophy, Leibniz Universität Hannover.
Available from: 2019-01-11. Created: 2019-01-11. Last updated: 2019-02-22. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
