2) The number of people used. 101 people is not nearly enough to produce clear and accurate results. I see this with a lot of "studies", and the general standard for this is ridiculous.
I've had some stats lectures that said you generally want 100, and some that said 200. They do tend to qualify it by saying it depends on what you're measuring. Then I've had profs who allowed or even encouraged students to produce results with smaller numbers. Some even encourage people to test-run qualitative data from quite small samples (say, 30-50) through quantitative methods and see if anything appears worth noting. The key is more that you have to specify and qualify whatever you do, in statistician's language. Smaller samples just need especially careful qualification and explanation of the theory and purpose behind the data and formulas selected.
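To make the "it depends what you're measuring" point concrete: the standard normal-approximation formula for comparing two group means shows how the required sample size hinges on the effect size you expect to detect. This is a generic sketch using conventional defaults (two-sided alpha of 0.05, 80% power), not a calculation tied to the particular study being discussed.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sample comparison of
    means (normal approximation; effect_size is Cohen's d)."""
    z = NormalDist()  # standard normal
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # value needed for desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) needs far fewer people than a "small" one (d = 0.2):
print(n_per_group(0.5))  # 63 per group
print(n_per_group(0.2))  # 393 per group
```

So whether ~100 people is "enough" genuinely depends on how large an effect you are trying to detect, which is exactly why blanket sample-size rules are misleading.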
It's all too easy for people who treat stats as some cut-and-dried tool (rather than the range of flexible, conditional presumptions it is) to dismiss so many studies on the basis of sample size without looking at their very real merits. There are rather few circumstances in which one can actually get a relatively random, controllable, large-scale study of private matters: mostly areas where medicine (and/or disease and drugs), big money, or pre-existing federal policy leanings are involved. If you require very large samples before saying anything about most subjects, then very soon you can claim nobody has a plausible guess at what might be going on anywhere in social reality. Which I would find exaggerated.
However, on the type of sample population and the definition of orientation in this particular case, I quite agree with you that the concept looks shoddy.