Is it important to consider participants’ work-life balance when we design programming?

If you are like me and find yourself drawn to applied research or field-based inquiry, your scholarship may focus on, or at least depend upon, the way practitioners (parents, teachers, social workers, health care professionals, school leaders, etc.) implement programming. In my quest to understand implementation and the barriers to effective implementation, I’ve found myself exploring outside the boundaries of my “field,” which, if pushed, I would identify as early childhood development and education. Recently, I read an article by Versey (2015) that discussed the strategies employed adults use to manage the sometimes conflicting demands of work and family obligations. Versey introduces work-life balance as a developmental process that unfolds over the life course. It seems like such an obvious way to think about balancing work and family: an incremental and recursive process of discovering our own limits and strengths in the shifting landscape of our obligations and aspirations. Yet many of us fail to consider the way our adult participants continue to learn and develop across the life course. As a result, we understand little about the strategies adults actually use, and about whether there are explicit supports we should be providing in training sessions that are meant to change professional practice but might, as an unintended effect, also put pressure on how participants were balancing work and life obligations prior to the intervention.

Versey (2015) focuses on the role of perceived control, what others might call agency, as an important coping mechanism. Lazarus (2006; as well as Lazarus & Folkman, 1984) spent much of his career examining the role that stress plays in a person’s appraisal of situations and the coping mechanisms the person subsequently employs. In fact, Versey found that participants who were able to appraise instances of conflict between work and family roles positively experienced better psychological health and reported less transfer of negativity (i.e., anger, frustration) from work to family settings and vice versa. Interestingly, this relationship was larger for women than for men. The study is definitely worth a read for prevention scientists or applied researchers whose work is based in schools, where the majority of teachers and school leaders are women. Moving forward, it would also be worthwhile to conduct a review of whether interventions specifically target these capacities and, if so, how.

Replication: An important but often overlooked aspect of scientific exploration

As Benedict Carey wrote this week in the NYT (http://nyti.ms/1U8G10F), few scientific studies are ever replicated. An effort recently undertaken by the Reproducibility Project at the Center for Open Science highlights why replication is important, particularly for studies that may influence policy and practice. The group recently published a report summarizing its replication of 100 psychology studies and found that many of these peer-reviewed, published papers reported effects that could not be substantiated. In fact, some of the original studies reported effect sizes larger than what was found upon replication in samples drawn from the same populations to which the studies were designed to generalize.

A few things to remember about social science studies:

(1) Most studies recruit participants who are, as a group, meant to be representative of some larger population.

(2) Analysis of the data using statistical models is meant to reveal population trends, but the findings are only as good as (a) the measures used, (b) the degree to which the sample represents the population of interest, and (c) the fit among the questions being asked, the data collected, and the analytic techniques selected to address those questions.

(3) Effect sizes translate the population estimates generated by those statistical models into a more meaningful metric. Generally speaking, the larger the effect size, the more successful the (experimental) manipulation was in generating a difference between one group (which experienced one thing) and another group (which experienced something else).
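
To make the effect-size idea concrete, here is a minimal Python sketch of Cohen’s d, one common standardized effect size for comparing two group means. The scores below are hypothetical and are meant only to illustrate the calculation, not any particular study.

import numpy as np

def cohens_d(group_a, group_b):
    # Standardized mean difference: difference in group means divided by the pooled SD.
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n_a, n_b = len(a), len(b)
    pooled_sd = np.sqrt(((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical outcome scores for an intervention group and a comparison group.
intervention = [12.1, 14.3, 13.8, 15.0, 12.9, 14.7]
comparison = [11.0, 12.5, 11.8, 13.1, 12.0, 12.2]
print(round(cohens_d(intervention, comparison), 2))

A larger absolute value of d means the two groups are further apart relative to the spread of the scores, which is the sense in which a larger effect size signals a more successful manipulation, and it is this kind of standardized metric that replication efforts compare across original and repeated studies.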

What is the real-life significance of this report? First, it highlights how different the acceptable time frames for action are for policymakers and practitioners, on the one hand, and researchers, on the other. Policymakers and practitioners work on a compressed time frame, responding to the needs of the specific people served by policies and programs. Researchers may be motivated by similar concerns; yet they are beholden to a scientific process that yields evidence more relevant to the “average” person in a population.

Furthermore, the process of science is lengthy and complex. Social and psychological phenomena, in particular, are often influenced by a complex array of interdependent characteristics of people and the settings in which they are embedded. When a study seems to identify a silver bullet for a persistent and puzzling social problem, everyone ought to call for more evidence. The scientific process, after all, is best deployed to falsify hypotheses and generate testable theories rather than to reveal universally valid truths.

But what constitutes adequate replication? Are there other safeguards that would increase the replicability of studies? Or standards that would help researchers decide how to report the generalizability and real-world significance of statistically significant findings? The IES’s What Works Clearinghouse takes on some of these questions by setting evidence standards for research conducted in educational settings.

In future posts, I hope to explore these topics more. Join me!