It starts with:
A much-anticipated report from the largest and longest-running study of American child care has found that keeping a preschooler in a day care center for a year or more increased the likelihood that the child would become disruptive in class -- and that the effect persisted through the sixth grade.

I give the reporter credit, though, for putting this caveat in the second paragraph. Usually such things are buried in the second-to-last paragraph:
The effect was slight, and well within the normal range for healthy children, the researchers found. And as expected, parents' guidance and their genes had by far the strongest influence on how children behaved.

This caveat, plus the fact that the article gives no numbers for relative risk or any other risk factor, suggests that the "link" they found was small at best, and possibly not even statistically significant. In other words: don't worry, your kids in day care are not likely to grow up to be thugs. Good news, I suppose. But if the finding is within natural variability, then we're not even talking about an anomaly here, so why publish the study and run the headline as if the "link" to rowdiness were significant?
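To make the "slight effect, well within the normal range" point concrete, here is a minimal sketch with made-up numbers (not the study's data): if the day-care group's average behavior score is shifted by a tiny fraction of the normal child-to-child variation, nearly every child in either group still falls inside the ordinary range. The score scale, group sizes, and the 0.1-standard-deviation shift below are all assumptions for illustration.

```python
# Hypothetical illustration, NOT the study's data: a group-level shift
# that is tiny compared to natural child-to-child variation.
import random
import statistics

random.seed(0)

# Assume behavior scores are roughly normal: mean 50, SD 10 across children.
POP_MEAN, POP_SD = 50.0, 10.0
SHIFT = 1.0  # assumed day-care effect: 0.1 SD, i.e. a "slight" effect

home = [random.gauss(POP_MEAN, POP_SD) for _ in range(10_000)]
daycare = [random.gauss(POP_MEAN + SHIFT, POP_SD) for _ in range(10_000)]

diff = statistics.mean(daycare) - statistics.mean(home)
spread = statistics.stdev(home)

print(f"gap between group averages: {diff:.1f} points")
print(f"typical child-to-child spread within a group: {spread:.1f} points")
# The between-group gap (~1 point) is dwarfed by the within-group
# spread (~10 points), so the groups overlap almost completely.
```

Note the flip side, which is why the article's missing numbers matter: with a large enough sample, even a shift this small can be "statistically significant" while remaining practically meaningless, so significance and effect size have to be reported separately.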
Well, I'm just guessing here, of course, but I can see how it might happen: if a study were to find "no problem here," that could, maybe, possibly mean an end to grant money. On the other hand, if a problem could be found, even an insignificant one, there just might be a plausible argument to be made for more grant money.
If what Ayn Rand said is true ("Government encouragement does not require that men believe the false is true, it merely makes them indifferent to the issue of truth or falsehood"), then wouldn't it also make them indifferent to concepts related to truth and falsehood, like "statistical significance"? I would think so.
Update: fixed a typo.