
Tuesday, January 27, 2009

Multiple Regression

The term "multiple regression" was first used by Pearson in 1908. Multiple regression is a statistical technique used to evaluate and establish a quantitative relationship between a dependent variable and multiple independent variables. In simple regression, a single dependent variable is regressed on a single independent variable; in multiple regression, however, a number of variables, both metric and non-metric, can be involved. Multiple regression, like other statistical techniques, requires that certain assumptions hold in order for the analysis to be valid. These assumptions are:

1. The independent variable(s) should be fixed across repeated sampling. This implies that, while the dependent variable can change as a treatment is applied, the independent variable(s) should be held constant.

2. The variance of the error terms or residuals should be constant across observations (the assumption of homoscedasticity).

3. There should be no autocorrelation between the error terms. The existence of autocorrelation can be tested with the runs test or the Durbin-Watson test. The two tests detect autocorrelation differently but are generally equally acceptable, with some scholars preferring the latter.

4. The number of observations must be greater than the number of parameters to be estimated.

5. There should not be a perfectly linear relationship among the explanatory or independent variables. If there is, the confidence intervals become wider, increasing the chance that a hypothesis that should be rejected is instead accepted. This issue is called multicollinearity, and it concerns the independence of the independent variables. Its existence can be tested with the VIF (Variance Inflation Factor), which for a given variable is 1 divided by 1 minus the squared multiple correlation of that variable with the other predictors (see the sketch after this list). Where multicollinearity exists, the problem has to be addressed. There are a number of ways to do so: a common method is to drop the offending variable altogether or add cases; another effective method is to use factor scores based on factor analysis, which combines the correlated variables to produce a more valid result.

6. The error term should have a mean value of zero. A non-zero mean in the residuals indicates that the regression has systematically over- or under-predicted the relationship between the variables in question and can be substantially improved.
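
As a rough sketch, assuming Python with the statsmodels library (my choice of tooling, not something prescribed here) and simulated data with hypothetical predictors, checks for assumptions 3, 5, and 6 might look like the following:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                     # three hypothetical predictors
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=100)

X_const = sm.add_constant(X)                      # add the intercept column
model = sm.OLS(y, X_const).fit()

# Assumption 3: no autocorrelation (a Durbin-Watson value near 2 suggests none)
print("Durbin-Watson:", durbin_watson(model.resid))

# Assumption 5: multicollinearity, VIF_j = 1 / (1 - R_j^2)
for j in range(1, X_const.shape[1]):              # skip the intercept column
    print(f"VIF for predictor {j}:", variance_inflation_factor(X_const, j))

# Assumption 6: residuals should average approximately zero
print("Mean residual:", model.resid.mean())
```

VIF values near 1 indicate little multicollinearity, while values of 10 or more are usually taken as a warning sign.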


Using multiple regression involves estimating coefficients for the various variables in question. There are two key estimation methods, chosen according to whether the multiple regression is linear or non-linear, although the second method listed below can be used in either case:

  1. Ordinary least squares (OLS): This method was propounded by the German mathematician Carl Friedrich Gauss. It is a point estimation technique, meaning that the dependent variable is estimated at a particular point rather than within an interval. It cannot be used for non-linear multiple regression unless the data are transformed to become linear. OLS is based on the principle of minimizing the sum of squared error terms, as opposed to the maximum likelihood method, which is based on probability.

  2. Maximum likelihood method: This too is a point estimation method, but it does not require that the data have a linear relationship, and the error term does not need to be normally distributed. This technique relies on probability as the measure of how well the model fits the data. It is more mathematical in nature, so before computers became widespread most researchers preferred the OLS technique; today, software makes the maximum likelihood method easy to use. A brief comparison appears below.
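
To make the contrast concrete, here is a minimal sketch, assuming Python with NumPy and SciPy and simulated data: OLS solved in closed form via the normal equations, and the same coefficients recovered by numerically maximizing the likelihood. Under normally distributed errors the two estimates coincide.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=50)])  # intercept + one predictor
y = X @ np.array([2.0, 3.0]) + rng.normal(size=50)

# OLS: closed-form solution of the normal equations, beta = (X'X)^-1 X'y
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Maximum likelihood: numerically minimize the negative log-likelihood
def neg_log_lik(params):
    beta, log_sigma = params[:-1], params[-1]
    resid = y - X @ beta
    sigma2 = np.exp(2 * log_sigma)
    return 0.5 * len(y) * np.log(2 * np.pi * sigma2) + resid @ resid / (2 * sigma2)

beta_ml = minimize(neg_log_lik, x0=np.zeros(3)).x[:-1]
print(beta_ols, beta_ml)                          # the two estimates agree closely
```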

A key advantage of multiple regression, besides being able to use multiple variables, is the ability to use multiple types of variables. For instance, a metric or numerical variable can be regressed on a non-metric or string variable, and vice versa. In addition, combinations of metric and non-metric variables can be regressed on metric and non-metric variables. Depending on the specific kinds of variables in question, different techniques such as discriminant analysis, logistic regression, or SEM (Structural Equation Modeling) can be applied.

Click here for dissertation assistance!

Tuesday, January 20, 2009

T-test


A t-test is a statistical technique for comparing the means of two samples or populations. There are similar techniques for comparing means, the most popular alternative being the z-test. However, a z-test is typically used where the sample size is relatively large, while the t-test is the standard for samples where the size, or 'n', is 30 or smaller. Another key feature of the t-test is that it can compare no more than two samples; for more than two, ANOVA is the most appropriate alternative. The t-test was developed in the early twentieth century by an Englishman, W.S. Gosset. It is commonly known as Student's t-test because Gosset's employer, Guinness, considered the use of statistical analysis a trade secret, forcing him to publish under a pen name instead of his real name.

In conducting a t-test, certain key assumptions have to be valid, including the following:

  • Data have to be normally distributed, with no extreme outliers and with the mean, median, and mode roughly equal. If the data are not normal, they have to be normalized, for example by converting them to logarithmic form. The variances of the sample datasets should also be equal.
  • Sample(s) may be dependent or independent, depending on the hypothesis. Where the samples are dependent, repeated measures are typically used. An example of a dependent sample is where observations are taken before and after a treatment.
  • For help assessing the assumptions of a t-test, click here.

T-tests are widely used in hypothesis testing for comparison of sample means, to determine whether or not they are statistically different from each other. For instance, a t-test may be used to:

  • Determine whether a sample belongs to a certain population.
  • Determine whether two different samples belong to the same population or two different populations.
  • Determine whether the correlation between two samples or two different variables is statistically significant.
  • Determine whether, in case of dependent samples, the treatment has been statistically significant.

Conducting a t-test involves the following steps:

  • Set up a hypothesis for which the t-test is being conducted. The hypothesis is simply a statement of what we expect of the sample(s), and it determines how the result of the t-test will be interpreted.
  • Select the level of significance and the critical or 'alpha' region. Most often a 95% significance level is used in non-clinical applications, while clinical applications typically use 99% or higher. The balance is the alpha region, which defines the rejection zone for the hypothesis.
  • Calculation: the t statistic is obtained by subtracting the population mean from the sample mean and dividing the difference by the standard error, that is, the sample standard deviation divided by the square root of the number of observations (n): t = (x̄ − μ) / (s / √n).

  • Hypothesis testing: this step involves evaluating the hypothesis from step 1 using the obtained t statistic. The idea is to compare the p-value associated with the t statistic to our level of significance, or 'alpha'. For instance, if the test is conducted at 95% significance, the p-value should be lower than 5%, or .05, for the result to be significant. If it is, we reject the null hypothesis; if not, we fail to reject it. A minimal worked example follows below.
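
As that worked example, assuming Python with SciPy and hypothetical data, a one-sample t-test against a population mean of 5 might look like this:

```python
import numpy as np
from scipy import stats

sample = np.array([5.1, 4.9, 5.6, 5.2, 4.8, 5.4, 5.0, 5.3])  # hypothetical data
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)      # H0: mu = 5

alpha = 0.05                                  # 95% significance level
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value < alpha:
    print("Reject the null hypothesis: the sample mean differs from 5.")
else:
    print("Fail to reject the null hypothesis.")
```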

While a very useful tool in data analysis, the t-test is not without limitations. For one thing, it is designed for small samples of 30 observations or fewer; in large data analysis projects it is of limited use. In addition, the t-test is a parametric test, which means that it cannot be applied to a non-normal distribution without changes to the dataset. In reality, few datasets are normal without modification, so a non-parametric test can often be applied more effectively, such as the Mann-Whitney U test (for independent samples) or the sign test (binomial test) or Wilcoxon signed-rank test (for related or dependent samples).
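
For completeness, a sketch of the Mann-Whitney U alternative, again assuming SciPy and hypothetical data:

```python
from scipy import stats

group_a = [3, 5, 4, 6, 8, 5, 4]               # hypothetical independent samples
group_b = [7, 9, 6, 10, 8, 9, 7]
u_stat, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u_stat}, p = {p:.3f}")           # no normality assumption required
```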

Click here for assistance with conducting T-tests

Thursday, January 8, 2009

Linear Regression Analysis and Logistic Regression Analysis

In this blog I discuss linear regression analysis, aspects of multiple regression, and logistic regression analysis: their function, their differences, and how to interpret SPSS regression output. At Statistics Solutions we hope you glean a few ideas here.

Linear Regression Analysis in SPSS

Linear regression analysis is a statistical analysis technique that assesses the impact of a predictor variable (the independent variable) on a criterion variable (a dependent variable). Importantly, the independent variable must be continuous (interval-level or ratio-level) or dichotomous, and the dependent variable must be continuous (interval-level or ratio-level). Dissertation students often have research questions that are appropriate to this technique. For example, a dissertation research question may ask what the impact of smoking is on life expectancy. In this example, smoking is the predictor variable and life expectancy is the criterion variable. For Linear Regression Analysis help, CLICK HERE.

Linear Regression Analysis Assumptions

There are three primary assumptions associated with linear regression: no outliers, linearity, and constant variance. Linear regression analysis is very sensitive to outliers. The easiest way to identify outliers is to standardize the scores by asking SPSS for the z-scores. Any score with an absolute z-value greater than 3 is probably an outlier and should be considered for deletion. The assumptions of linearity and constant variance can be assessed in SPSS by requesting a plot of the residuals ("z-resid" on the y-axis) against the predicted values ("z-pred" on the x-axis). If the scatter plot is neither u-shaped (indicating non-linearity) nor cone-shaped (indicating non-constant variance), the assumptions are considered met. For Linear Regression Analysis Assumptions Help, CLICK HERE.
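
For readers working outside SPSS, a rough equivalent of the z-score outlier check (a sketch of my own, assuming Python with SciPy and simulated data) might be:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
scores = np.append(rng.normal(50, 10, size=200), 120)   # inject one extreme value
z = stats.zscore(scores)
print(scores[np.abs(z) > 3])      # flags values more than 3 SDs from the mean
```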

Multiple Linear Regression Analysis

Multiple linear regression is a statistical analysis similar to linear regression, with the exception that there can be more than one predictor variable. The assumptions of no outliers, linearity, and constant variance still need to be met. One additional assumption that needs to be examined is multicollinearity, the extent to which the predictor variables are related to each other. Multicollinearity can be assessed by asking SPSS for the Variance Inflation Factor (VIF). While different researchers have different criteria for what constitutes too high a VIF, a VIF of 10 or greater is certainly reason for pause. If the VIF is 10 or greater, consider collapsing the variables. For Multiple Linear Regression Analysis Multicollinearity Help, CLICK HERE.

Regression Analysis Interpretation

When I speak with dissertation students about their regression analysis, there are four aspects of the SPSS output that I want to interpret. First is the ANOVA. The ANOVA tells the researcher whether the model is statistically significant, that is, whether the F-value has an associated probability of .05 or less. The second thing to look for is the R-square value, also called the coefficient of determination. The coefficient of determination is a number between 0 and 1 that, multiplied by 100, indicates what percentage of the variability in the criterion variable can be accounted for by the predictor variable(s). The third aspect to interpret is whether the beta coefficient is statistically significant; the beta's significance can be found by examining the t-value and the associated significance level of the t-value for that particular predictor. Fourth, you should interpret the beta itself, whether positive or negative. For Linear Regression Analysis Interpretation Help, CLICK HERE.
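
The same four pieces of output can be produced outside SPSS; here is a minimal sketch (my own illustration, assuming Python's statsmodels and simulated data) in which each line maps to one of the four aspects above:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
X = sm.add_constant(rng.normal(size=(80, 2)))           # two hypothetical predictors
y = X @ np.array([1.0, 0.8, -0.5]) + rng.normal(size=80)
fit = sm.OLS(y, X).fit()

print(fit.fvalue, fit.f_pvalue)   # 1. overall model test (the ANOVA F)
print(fit.rsquared)               # 2. coefficient of determination (R-square)
print(fit.tvalues, fit.pvalues)   # 3. t-value and significance of each coefficient
print(fit.params)                 # 4. sign and size of each beta
```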

Logistic Regression Analysis in SPSS

Logistic regression, also called Binary Logistic Regression, is a statistical analysis technique that assesses the impact of a predictor variable (the independent variable) on a criterion variable (a dependent variable). As in a linear regression analysis, the independent variable must be continuous (interval-level or ratio-level) or dichotomous. The difference is that the dependent variable must be dichotomous (i.e., a binary variable). For example, a researcher may want to know whether age predicts the likelihood of going to a doctor (yes vs. no). For Logistic Regression Analysis Help, CLICK HERE.

Binary Logistic Regression Analysis Interpretation

While binary logistic regression and linear regression analyses differ in their criterion variables, there are other differences as well. In logistic regression, to assess whether the model is statistically significant, you look at the chi-square test and whether it is statistically significant; the chi-square in logistic regression analysis is analogous to the ANOVA F-test in linear regression. The next thing to examine is the Nagelkerke R-square statistic, which is somewhat analogous to the R-square value in linear regression analysis. Next, interpret whether the beta coefficient(s) is statistically significant. If so, look at the Exp(B): for a one-unit change in the predictor, the odds of the outcome are multiplied by Exp(B). For Binary Logistic Regression Analysis Interpretation Help, CLICK HERE.
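
As an illustration (not the SPSS procedure itself), a sketch of a binary logistic regression in Python's statsmodels with simulated data; note that statsmodels reports McFadden's pseudo R-square rather than Nagelkerke's:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
age = rng.uniform(20, 80, size=300)                   # hypothetical predictor
p = 1 / (1 + np.exp(-(-4 + 0.08 * age)))              # true model, for illustration
goes_to_doctor = rng.binomial(1, p)                   # dichotomous outcome (yes/no)

X = sm.add_constant(age)
fit = sm.Logit(goes_to_doctor, X).fit()

print(fit.llr, fit.llr_pvalue)    # model chi-square test and its p-value
print(fit.prsquared)              # McFadden pseudo R-square (SPSS shows Nagelkerke)
print(np.exp(fit.params))         # odds ratios, what SPSS labels Exp(B)
```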

Friday, January 2, 2009

Statistics for your Dissertation Proposal or Thesis Proposal

Tis the season for dissertation proposals!! I'm sure many of you are preparing to start another riveting semester of graduate work and another semester of edge-of-your-seat deadlines – the stuff epic motion pictures are made of!!!

We've all been there. You had plenty of time. You researched and you put off the hard stuff. Now you are facing crunch time. You know who you are… Now you have to hand in the proposal and need help. Maybe you have a couple weeks or maybe you have a couple days. What are you going to do? Read on my friend, read on. Today's post may just save you thousands of dollars and a few years of your life lost from stress.

Statistics for your Dissertation Proposal or Thesis Proposal

Among other things, I am betting you are most concerned about the appropriate statistics for your dissertation or thesis. I have covered this in another blog. Check it out here. In the meantime, I have some recommendations for the graduate student pursuing their thesis or dissertation and working on their proposal.

Know What you Need to Know

Different statistical tests measure different things, so it's important to know what you are trying to find. Are you looking for a relationship or are you looking for differences? Do you need to establish some predictability or are you just seeking to describe something? This will have a direct impact on the type of statistical tests you choose for your dissertation proposal or thesis proposal. There are words associated with certain statistical tests; e.g., "to find a relationship between X and Y" is associated with correlation language. Click here for help determining the type of statistical tests to use with your dissertation proposal or thesis proposal.

Know how the Statistics in your Dissertation are Supposed to be Used

This is similar to the one above, but I thought I would include it. A pretty good percentage of our clients have had their dissertation or thesis proposal approved and are now beginning to work on their results section. The problem is they aren't really sure how the tests they proposed are supposed to be used. You might think that, since the proposal has been approved by experts, they would have ensured that the statistical analysis you proposed for your dissertation or thesis is correct. Don't be fooled!

Many, many clients have sent us their approved proposal, listing the statistical analysis to be conducted and the variables to be tested, only to find out that the statistical test they proposed cannot be used with their type of variables. This is embarrassing and time-consuming, but can be avoided with a little due diligence. Click here for help determining how to use statistical tests with your dissertation proposal or thesis proposal.

Know the Types of Variables

There aren't very many types of variables. Take an evening if you have to and become familiar with the different types of variables used in statistical analysis. There are only a few and it will make all the difference in the world when you are choosing the statistical tests for your dissertation proposal or thesis proposal. Some statistical tests are only for continuous variables and some statistical tests are only for nominal variables. Some tests can use both if they are entered a particular way. It will pay to familiarize yourself with these types before you write your survey questions and propose your analysis. If you are keeping these variable types in mind as you are constructing the survey for your dissertation proposal or thesis proposal, it will make choosing the statistical analysis much easier later on. For help with the types of variables included in your graduate thesis or Ph.D., click here.

Know the Assumptions of the Statistical Tests

Each statistical test used in your dissertation proposal or thesis proposal comes complete with assumptions that make sure the test accurately measures what it is intended to measure. There's a pretty good chance that the assumptions of the statistical tests you choose for your dissertation proposal or thesis proposal won't be met unless you're gathering a lot of observations. While you won't know for sure whether the assumptions have been met until after you have the data, you can get a pretty good idea without having the data.

For instance, maybe you are proposing to look for differences in GPA between those receiving free/reduced lunch and those not receiving free/reduced lunch. If you are researching poor, inner-city schools, you know there is probably going to be a disproportionate number of free/reduced lunch recipients. It's also possible that there will be a disproportionate number of failing schools. For two of the tests that could be used to analyze this difference, the independent samples t-test and the analysis of variance (ANOVA), there is the assumption that the groups are approximately equal in their standard deviations. We know this probably isn't the case and may instead propose a non-parametric equivalent. Click here for help with the assumptions of the statistical analysis being used in your Master's thesis, Master's dissertation, Ph.D. thesis, or Ph.D. dissertation.
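
As a small illustration of checking that equal-variance assumption ahead of time, here is a sketch assuming Python with SciPy and made-up GPA figures; Levene's test compares the group variances:

```python
from scipy import stats

gpa_free_lunch = [2.1, 2.5, 2.3, 3.0, 2.8, 2.2, 1.9, 2.6]   # hypothetical GPAs
gpa_paid_lunch = [3.2, 3.8, 2.9, 3.5, 2.4]
stat, p = stats.levene(gpa_free_lunch, gpa_paid_lunch)
print(f"Levene W = {stat:.3f}, p = {p:.3f}")
# A small p-value suggests unequal variances, pointing toward a
# non-parametric test instead of the t-test or ANOVA.
```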

I hope this helps some. I invite you to click here and schedule an appointment to speak with us about helping you with your Master's thesis, Master's dissertation, Ph.D. thesis, or Ph.D. dissertation. I've helped thousands upon thousands of graduate students over the last 16 years and can help you.