On the subject of the validity of using thousands of rounds on one course versus rounds spread out over hundreds of courses: I agree that it is probably not statistically valid, but it is probably close enough for the purpose the study was used for (that is, to give a general idea of the impact).

By design, course slopes and ratings are set up to estimate the difficulty that golfers of differing abilities will have on a given course. It is essentially a Y = MX + B line (Y = your expected score on the course, M = the course slope divided by the average slope of 113, X = your handicap index, and B = the course rating) that plots estimated scores on a given course based on the skill of the golfer. Raters use course design features such as the number of hazards, length, tightness of fairways, green difficulty, and so on to determine the rating and slope. By definition, that rating should capture the impact of those design features across a broad spectrum of players to derive that MX + B formula. However, I seriously doubt that every course is rated perfectly or that every course and every player falls exactly on an MX + B line. It's like an economics theory that relies on "all else being equal" or "with perfect access to information" - that would be great if it weren't for the fact that neither condition ever exists in reality.
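Just to make the Y = MX + B framing concrete, here's a minimal sketch of the estimate as described above - the function name and the example numbers are mine for illustration, not anything from the study:

```python
# Sketch of the Y = MX + B estimate described above (illustrative only).
# Y = expected score on the course, X = handicap index,
# M = course slope / 113 (the average slope), B = course rating.

STANDARD_SLOPE = 113  # reference slope for a course of average difficulty

def expected_score(handicap_index: float, course_slope: float, course_rating: float) -> float:
    """Estimate a player's score on a course from its slope and rating."""
    return (course_slope / STANDARD_SLOPE) * handicap_index + course_rating

# Example: a 10.0-index player on a course rated 72.1 with a slope of 130
print(expected_score(10.0, 130, 72.1))  # roughly 83.6
```

So a steeper slope tilts the line more for higher-index players, while the course rating just shifts the whole line up or down.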

I think that if you ran that same test on several different courses you'd come up with a slightly different answer, but I don't think it would be off by as much as a stroke unless the rating and slope of that particular course had been botched in the first place (that is a pretty big assumption, which is the major reason in my mind why the validity is a bit in question - if the course was rated incorrectly, then the whole thing is irrelevant; more course samples would average those issues out, as in the sketch below).
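To illustrate the "averaging out" point, here's a quick sketch with purely made-up numbers: if each course's rating carries some random error, one mis-rated course bakes its error into every round, while sampling many courses lets those errors mostly cancel.

```python
import random

# Hypothetical: assume each course's rating is off by a random error (in strokes).
random.seed(1)

def rating_error() -> float:
    return random.gauss(0, 0.5)  # assume a half-stroke typical rating error

single_course_bias = rating_error()                                # same error in every round
many_course_bias = sum(rating_error() for _ in range(100)) / 100   # errors average toward zero

print(f"one course:  {single_course_bias:+.2f} strokes")
print(f"100 courses: {many_course_bias:+.2f} strokes")
```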