
Research Methods for the Common Denominator: Part 2

I have no doubt that part 1 of this (very) mini-series was one of the most exciting blog posts you’ve read to date. In fact, you probably walked away from your computer feeling 1.0563 ounces smarter (yes, intelligence is obviously measured in ounces). As I sit here writing part 2 of my research methods informational post in my new reading glasses, I know I already feel just a tad bit more brilliant (yes, the reading glasses definitely help).

In this post, I’m going to impart upon you some facts about the statistics that you’re likely to find in research papers. To start off, there’s an important distinction to make between descriptive statistics and inferential statistics.

Descriptive Stats: describe or summarize data about your sample/group of subjects.

Inferential Stats: use what you now know about your sample (since you just performed research on it) to make inferences about the population that your sample represents. So, if you were testing a new treatment method on a subject pool of 30 children with Down syndrome, you would likely be inferring something about that treatment method for all children who are similar to your sample (i.e., kids with Down syndrome).

Within descriptive statistics, you will want to consider a handful of specific measurements, starting with distribution. A frequency distribution generates a curve that shows you the frequency of responses/scores at different levels (e.g., different ages, different severities, etc.).
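If it helps to see this concretely, here's a tiny sketch in Python (with completely made-up scores, so don't read anything clinical into them) of how you might tally a frequency distribution before plotting it as a curve:

```python
from collections import Counter

# Completely made-up scores from a hypothetical sample of 21 children
scores = [3, 4, 4, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 9, 9, 10]

# Count how often each score occurs
frequency = Counter(scores)

# Print a crude text version of the frequency "curve"
for score in sorted(frequency):
    count = frequency[score]
    print(f"score {score:2d}: {'*' * count} ({count})")
```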

[Figure: frequency distribution curves, from http://www.sciencedirect.com]

The curve might come out looking like a normal curve, which is symmetrical, has a mean, median, and mode of the same value, and has about 68.2% of scores falling within 1 standard deviation of the mean.

On the other hand, your curve might end up skewed (positively or negatively) or show unusual kurtosis (leptokurtic or platykurtic). Regardless of how your frequency distribution curve turns out, it’s important to understand that different curves imply different things about the effect of the independent variable on the dependent variable. In addition to considering the frequency distribution, you may also have information about central tendency: the mean (average), the median (the value with half the scores above it and half below), and the mode (the value that occurs most frequently).
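Here's the same kind of sketch for central tendency, using Python's built-in statistics module on the same made-up scores as above:

```python
import statistics

# The same made-up scores as in the frequency sketch above
scores = [3, 4, 4, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 7, 7, 8, 8, 8, 9, 9, 10]

print("mean:  ", statistics.mean(scores))    # the average
print("median:", statistics.median(scores))  # half the scores fall above, half below
print("mode:  ", statistics.mode(scores))    # the score that occurs most often
```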

Variability is another critical piece of descriptive statistics. The standard deviation tells us, on average, how far scores deviate from the mean. A small standard deviation indicates that most scores were similar to one another, so the mean can be confidently counted upon; if the scores are all over the place (a large standard deviation), the mean might not be very representative of the actual range of scores received by the sample.
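To see why that matters, here's another little made-up example: two groups with the exact same mean (50) but very different spread. The mean describes the first group far better than the second:

```python
import statistics

# Two made-up groups of scores with the same mean (50) but very different spread
tight_scores  = [48, 49, 50, 50, 51, 52]
spread_scores = [20, 35, 50, 50, 65, 80]

print("tight  group: mean =", statistics.mean(tight_scores),
      " sd =", round(statistics.stdev(tight_scores), 1))
print("spread group: mean =", statistics.mean(spread_scores),
      " sd =", round(statistics.stdev(spread_scores), 1))
```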

Inferential statistics begins with testing a hypothesis. The alternative hypothesis predicts some kind of effect of the independent variable on the dependent variable (e.g., X treatment will benefit Y population). Often this is what the researchers hope to find at the end of their study. The alternative hypothesis cannot actually be proven by statistical tests (although it can be supported); rather, statistical tests evaluate the null hypothesis (which says the independent variable will have no effect on the dependent variable), and rejecting the null is what provides support for the alternative.

In order to reject the null hypothesis, researchers use a cut-off value, or significance level (alpha), to decide the point at which a result counts as statistically significant. Typically, the significance level is set at .05: if the observed p-value is below .05, the probability of getting a result at least this extreme when the null hypothesis is true is less than 5 in 100, so the result is attributed to a real effect of the independent variable rather than to chance. Although .05 is the most common significance level, it’s really just a conventional, somewhat arbitrary cut-off and, at times, may lead to Type I errors (rejecting a null hypothesis that is actually true, a false positive) or Type II errors (failing to reject a null hypothesis that is actually false, a false negative).
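To make the p-value idea concrete, here's a minimal sketch using SciPy's independent-samples t-test (scipy.stats.ttest_ind) on made-up treatment and control scores. The data, group sizes, and choice of test are purely illustrative, not a recipe:

```python
from scipy import stats

# Made-up post-treatment scores for a treatment group and a control group
treatment = [78, 82, 85, 88, 90, 91, 93, 95]
control   = [70, 72, 75, 77, 80, 81, 83, 85]

# Independent-samples t-test: could a mean difference this big plausibly be chance?
t_stat, p_value = stats.ttest_ind(treatment, control)

alpha = 0.05  # the conventional significance level discussed above
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```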

Various characteristics can affect whether the results of a study reach significance (i.e., p < .05). A bigger effect size (a larger difference between the treatment group and the control group) makes significance more likely. Less variability (i.e., a smaller standard deviation) also supports statistically significant results. Finally, a larger sample size is more likely to yield statistical significance.
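One common way to quantify effect size is Cohen's d: the difference between the group means divided by their pooled standard deviation. Here's a rough sketch using the same made-up scores as in the t-test example above:

```python
import statistics

def cohens_d(group_a, group_b):
    """Difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# The same made-up treatment and control scores as in the t-test sketch
treatment = [78, 82, 85, 88, 90, 91, 93, 95]
control   = [70, 72, 75, 77, 80, 81, 83, 85]

print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```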

With all this being said, statistical significance is just one piece of the inferential statistics puzzle. Other statistical outcomes, such as tests of differences between groups and correlations, must also be considered. However, I’d like to think your brain has worked hard enough for one day, so I’ll leave those explanations for another time and another place!
