Building a regression equation in Excel. Regression analysis in Excel

Regression analysis is useful in many fields of activity, including econometrics, where the work is usually done with software. Practical and laboratory exercises are performed almost entirely in Excel, which greatly simplifies the work and makes individual steps easy to explain. In particular, the analysis tool "Regression" fits a line to a set of observations using the least squares method. Let's look at what this tool is and how it benefits users. Below you will also find short but clear instructions for building a regression model.

Main tasks and types of regression

Regression describes the relationship between given variables and thereby makes it possible to predict their future behavior. The variables can be any periodically measured phenomena, including human behavior. In Excel, regression analysis is used to estimate the effect of one or several independent variables on a dependent variable. For example, sales in a store are influenced by several factors, including assortment, prices, and the store's location. Using regression in Excel, you can determine the degree of influence of each of these factors from existing sales results, and then apply the estimates to forecast sales for another month or for another store nearby.

Typically, regression is presented as a simple equation that reveals the relationships between two groups of variables and the strength of those relationships; one group is dependent (endogenous), the other independent (exogenous). Given a group of interrelated indicators, the dependent variable Y is chosen based on the logic of the problem, and the rest act as independent variables X.

The main tasks of building a regression model are as follows:

  1. Selection of significant independent variables (X1, X2, ..., Xk).
  2. Selecting the type of function.
  3. Constructing estimates for coefficients.
  4. Construction of confidence intervals and regression functions.
  5. Checking the significance of the calculated estimates and the constructed regression equation.

There are several types of regression analysis:

  • paired (one dependent and one independent variable);
  • multiple (several independent variables).

There are two types of regression equations:

  1. Linear, illustrating a strictly linear relationship between the variables.
  2. Nonlinear: equations that can include powers, fractions, and trigonometric functions.

Instructions for building a model

To build a regression model in Excel, follow these instructions.

Use the LINEST() function for the calculation, specifying the known y-values, the known x-values, and the Const and Stats arguments. After that, determine the set of points on the regression line with the TREND() function, which takes the known y-values, the known x-values, the new x-values, and Const. With these functions you can calculate the unknown coefficients from the given conditions of the problem.
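
For instance, here is a minimal sketch, assuming the y-values sit in B2:B21 and the x-values in A2:A21 (hypothetical ranges):

=LINEST(B2:B21,A2:A21,TRUE,TRUE) entered as an array formula returns the slope, the intercept, and the regression statistics

=TREND(B2:B21,A2:A21) returns the predicted y-values on the regression line

=TREND(B2:B21,A2:A21,A22) forecasts y for a new x-value placed in A22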

The variation in the resulting characteristic y is due to variation in the factor characteristic x. The share of the variance explained by the regression in the total variance of the resulting characteristic is measured by the coefficient of determination R². For a linear dependence, the coefficient of determination equals the square of the correlation coefficient:

R² = r²xy, where rxy is the correlation coefficient between x and y.

For example, a value of R² = 0.83 means that 83% of the variation in y is accounted for by variation in x. In other words, the regression equation fits the data well.

The coefficient of determination is calculated to assess the quality of fit of the regression equation. For acceptable models it is suggested that the coefficient of determination be greater than 50%. Models with a coefficient of determination above 80% can be considered quite good. A value of R² = 1 means a functional (deterministic) relationship between the variables.

For nonlinear regression and for multiple regression, the coefficient of determination is computed from the same sums of squares. In the general case, the coefficient of determination is found by the formula:

R² = SSregression / SStotal = 1 − SSresidual / SStotal

Rule for adding variances: SStotal = SSregression + SSresidual,

where SStotal = Σ(y − ȳ)² is the total sum of squared deviations; SSregression = Σ(ŷ − ȳ)² is the sum of squared deviations due to regression ("explained" or "factorial"); SSresidual = Σ(y − ŷ)² is the residual sum of squared deviations.
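
In Excel these sums of squares can be computed directly; a sketch, again assuming y-values in B2:B21 and x-values in A2:A21 (older Excel versions may require entering the TREND() part with Ctrl+Shift+Enter):

=DEVSQ(B2:B21) gives the total sum of squared deviations SStotal

=SUMXMY2(B2:B21,TREND(B2:B21,A2:A21)) gives the residual sum SSresidual

=1-SUMXMY2(B2:B21,TREND(B2:B21,A2:A21))/DEVSQ(B2:B21) gives R²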


Regression Analysis in Microsoft Excel is one of the most complete guides to using MS Excel for solving regression analysis problems in business analytics. Konrad Carlberg clearly explains the theoretical issues, knowledge of which will help you avoid many mistakes, both when conducting regression analysis yourself and when evaluating the results of analyses performed by others. All material, from simple correlations and t-tests to multiple analysis of covariance, is based on real examples and is accompanied by detailed descriptions of the corresponding step-by-step procedures.

The book discusses the features and controversies associated with Excel functions for working with regression, examines the implications of each option and each argument, and explains how to reliably apply regression methods in areas ranging from medical research to financial analysis.

Konrad Carlberg. Regression analysis in Microsoft Excel. – M.: Dialectics, 2017. – 400 p.


Chapter 1: Assessing Data Variability

Statisticians have many measures of variation at their disposal. One of them is the sum of squared deviations of individual values from the mean; in Excel it is computed with the DEVSQ() function. But variance is used more often. Variance is the average of the squared deviations. Variance is insensitive to the number of values in the data set under study (while the sum of squared deviations grows with the number of measurements).

Excel offers two functions that return variance: VAR.P() and VAR.S():

  • Use the VAR.P() function if the values to be processed form a population, that is, if the values contained in the range are the only values you are interested in.
  • Use the VAR.S() function if the values to be processed form a sample from a larger population; it is assumed that there are additional values whose variance you can also estimate.
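
A quick sketch of the difference, assuming the data sit in a hypothetical range A2:A21:

=VAR.P(A2:A21) divides the sum of squared deviations by n

=VAR.S(A2:A21) divides it by n − 1, the usual choice for sample data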

If a quantity such as a mean or a correlation coefficient is calculated from a population, it is called a parameter. A similar quantity calculated from a sample is called a statistic. If you measure deviations from the mean of the set itself, you will get a smaller sum of squared deviations than if you measured them from any other value. A similar statement is true for the variance.

The larger the sample size, the more accurate the calculated value of a statistic. But for no sample smaller than the entire population can you be confident that the value of the statistic matches the value of the parameter.

Suppose you have a set of 100 heights whose mean differs from the population mean, no matter how small the difference. Calculating the variance for the sample, you will get some value, say 4. This value is smaller than any value obtained by measuring the deviations of each of the 100 heights from any number other than the sample mean, including the true mean of the population. Therefore the variance calculated this way will differ from, and be smaller than, the variance you would get if you somehow knew and used the population mean instead of the sample mean.

Thus, the mean of the squared deviations determined for a sample understates the population variance. A variance calculated in this way is called a biased estimate. It turns out that to eliminate the bias and obtain an unbiased estimate, it is enough to divide the sum of squared deviations not by n, where n is the sample size, but by n − 1.

The quantity n − 1 is called the number of degrees of freedom. There are different ways of computing this quantity, although all of them involve either subtracting some number from the sample size or counting the number of categories into which the observations fall.
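
The correction is easy to verify on a worksheet; a sketch with the same hypothetical range:

=DEVSQ(A2:A21)/COUNT(A2:A21) reproduces =VAR.P(A2:A21)

=DEVSQ(A2:A21)/(COUNT(A2:A21)-1) reproduces =VAR.S(A2:A21)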

The essence of the difference between the VAR.P() and VAR.S() functions is as follows:

  • In the VAR.P() function, the sum of squares is divided by the number of observations; when the deviations are measured from a sample mean, it therefore gives a biased estimate of the population variance.
  • In the VAR.S() function, the sum of squares is divided by the number of observations minus 1, i.e., by the number of degrees of freedom, which gives a more accurate, unbiased estimate of the variance of the population from which the sample was drawn.

The standard deviation (SD) is the square root of the variance:

s = √( Σ(x − x̅)² / (n − 1) )

Squaring the deviations transforms the measurement scale into another metric, the square of the original one: meters become square meters, dollars become square dollars, and so on. The standard deviation is the square root of the variance and therefore takes us back to the original units of measurement, which is more convenient.

It is often necessary to calculate the standard deviation after the data have been subjected to some manipulation. Although in these cases the results are undoubtedly standard deviations, they are usually called standard errors. There are several kinds of standard errors, including the standard error of measurement, the standard error of a proportion, and the standard error of the mean.

Suppose you collected height data for 25 randomly selected adult men in each of the 50 states. Next, you calculate the average height of adult males in each state. The resulting 50 averages can in turn be considered observations. From them you could calculate their standard deviation, which is the standard error of the mean. Fig. 1 compares the distribution of the 1,250 raw individual values (heights of 25 men in each of the 50 states) with the distribution of the 50 state means. The formula for estimating the standard error of the mean (that is, the standard deviation of means rather than of individual observations) is:

sX̅ = s / √n,

where sX̅ is the standard error of the mean; s is the standard deviation of the original observations; n is the number of observations in the sample.
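
On a worksheet this is one line; a sketch assuming the raw observations occupy a hypothetical range A2:A26:

=STDEV.S(A2:A26)/SQRT(COUNT(A2:A26)) estimates the standard error of the mean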

Fig. 1. Variation in means from state to state is significantly less than variation in individual observations.

In statistics there is a convention regarding the use of Greek and Latin letters to denote statistical quantities: parameters of the population are denoted by Greek letters, and sample statistics by Latin letters. Therefore, when speaking of the population standard deviation we write σ; for the standard deviation of a sample we use s. The symbols for means do not follow this convention quite so well. The population mean is denoted by the Greek letter μ, but the sample mean is traditionally denoted X̅.

A z-score expresses the position of an observation in the distribution in standard deviation units. For example, z = 1.5 means that the observation lies 1.5 standard deviations above the mean. The term z-score is used for individual values, i.e., for measurements attached to individual sample elements. For a statistic such as a state mean, the corresponding term is z-value:

z = (X̅ − μ) / σX̅,

where X̅ is the sample mean, μ is the population mean, and σX̅ is the standard error of the mean of a set of samples:

σX̅ = σ / √n,

where σ is the standard deviation of the population (of the individual measurements) and n is the sample size.

Suppose you work as an instructor at a golf club. You have been able to measure the distance of your drives over a long period and know that the mean is 205 yards and the standard deviation 36 yards. You are offered a new club with the claim that it will increase your driving distance by 10 yards. You ask each of the next 81 club patrons to take a test shot with the new club and record the drive distance. The average distance with the new club turns out to be 215 yards. What is the probability that the 10-yard difference (215 − 205) is due solely to sampling error? Put another way: how likely is it that, in more extensive testing, the new club will show no increase in driving distance over the existing long-term average of 205 yards?

We can check this by computing a z-value. The standard error of the mean is

σX̅ = 36 / √81 = 4

Then the z-value is

z = (215 − 205) / 4 = 2.5

We need to find the probability that a sample mean lies 2.5σX̅ or more from the population mean. If the probability is small, the difference is due not to chance but to the quality of the new club. Excel has no ready-made function that returns a probability for a z-value, but you can use the formula =1-NORM.S.DIST(z,TRUE), where the NORM.S.DIST() function returns the area under the normal curve to the left of the z-value (Fig. 2).
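
For the value computed above, a sketch:

=1-NORM.S.DIST(2.5,TRUE) returns approximately 0.0062, i.e., about a 0.6% chance that the 10-yard gain is pure sampling error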

Fig. 2. The NORM.S.DIST() function returns the area under the curve to the left of the z-value

The second argument of the NORM.S.DIST() function can take two values: TRUE, in which case the function returns the area under the curve to the left of the point specified by the first argument, or FALSE, in which case it returns the height of the curve at that point.

If the population mean (μ) and standard deviation (σ) are not known, the t-value is used instead. The z-value and t-value structures differ in that the standard deviation s obtained from the sample is used to find the t-value, rather than the known value of the population parameter σ. The normal curve has a single shape, while the shape of the t-distribution varies with the number of degrees of freedom (df) of the sample it represents. The number of degrees of freedom of the sample is n − 1, where n is the sample size (Fig. 3).

Fig. 3. The shape of the t-distributions that arise when the parameter σ is unknown differs from the shape of the normal distribution

Excel has two functions for the t-distribution (also called Student's distribution): T.DIST() returns the area under the curve to the left of a given t-value, and T.DIST.RT() the area to the right.

Chapter 2. Correlation

Correlation is a measure of the dependence between the elements of a set of ordered pairs. Correlation is characterized by the Pearson correlation coefficient r, which can take values in the range from −1.0 to +1.0:

r = Sxy / (Sx · Sy),

where Sx and Sy are the standard deviations of the variables X and Y, and Sxy is their covariance:

Sxy = Σ(X − X̅)(Y − Y̅) / (n − 1)

In this formula the covariance is divided by the standard deviations of X and Y, which removes scale effects tied to the units of measurement. In Excel, the CORREL() function is used. Its name contains no qualifying suffix such as the .P and .S used in names like STDEV.P()/STDEV.S(), VAR.P()/VAR.S(), or COVARIANCE.P()/COVARIANCE.S(). Although the sample correlation coefficient gives a biased estimate, the reason for the bias is different than in the case of the variance or standard deviation.
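
You can reproduce CORREL() from its ingredients; a sketch assuming X in A2:A21 and Y in B2:B21 (hypothetical ranges):

=COVARIANCE.S(A2:A21,B2:B21)/(STDEV.S(A2:A21)*STDEV.S(B2:B21)) returns the same value as =CORREL(A2:A21,B2:B21)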

Depending on the magnitude of the population correlation coefficient (often denoted by the Greek letter ρ), the correlation coefficient r gives a biased estimate, with the effect of bias increasing as sample size decreases. However, we do not try to correct this bias the way we did when calculating the standard deviation, where we substituted the number of degrees of freedom rather than the number of observations into the formula. In fact, the number of observations used to calculate the covariance has no effect on the result, since it cancels against the same factor in the standard deviations.

The standard correlation coefficient is intended for variables that are linearly related. Nonlinearity and/or errors in the data (outliers) distort the correlation coefficient. To diagnose data problems, it is recommended to build scatter plots. This is the only chart type in Excel that treats both the horizontal and vertical axes as value axes. A line chart treats one of the columns as the category axis, which distorts the picture of the data (Fig. 4).

Fig. 4. The regression lines seem the same, but compare their equations with each other

The observations used to build a line chart are spaced equidistantly along the horizontal axis. The tick labels along this axis are merely labels, not numeric values.

Although correlation often suggests a cause-and-effect relationship, it cannot be used to prove one. Statistics are not used to demonstrate that a theory is true or false. To exclude competing explanations of observed results, designed experiments are conducted. Statistics are then used to summarize the information collected in such experiments and to quantify the likelihood that the decision being made may be incorrect given the available evidence.

Chapter 3: Simple Regression

If two variables are related, so that the correlation coefficient exceeds, say, 0.5, then it is possible to predict (with some accuracy) the unknown value of one variable from the known value of the other. Any of several possible methods can be used to obtain predicted prices from the data shown in Fig. 5, but you will almost certainly not use the one presented there. Still, it is worth examining, because no other method shows the connection between correlation and prediction so clearly. In Fig. 5, the range B2:C12 contains a random sample of ten houses, with the area of each house (in square feet) and its selling price.

Fig. 5. Predicted sale prices form a straight line

Find the means, standard deviations, and correlation coefficient (range A14:C18). Calculate the area z-scores (E2:E12); for example, cell E3 contains the formula =(B3-$B$14)/$B$15. Compute the z-scores of the predicted price (F2:F12); for example, cell F3 contains the formula =E3*$B$18. Convert the z-scores into dollar prices (H2:H12); in cell H3 the formula is =F3*$C$15+$C$14.

Note that the predicted value always tends to shift toward the mean z-score of 0. The closer the correlation coefficient is to zero, the closer to zero the predicted z-score is. In our example, the correlation between area and selling price is 0.67, so an area one standard deviation above the mean (z = 1.0) gives a predicted price z-score of 1.0 × 0.67 = 0.67, i.e., two-thirds of a standard deviation above the mean. If the correlation coefficient were 0.5, the predicted z-score would be 1.0 × 0.5 = 0.5, only half a standard deviation above the mean. Whenever the correlation coefficient differs from the ideal values, i.e., is greater than −1.0 and less than +1.0, the score of the predicted variable lies closer to its mean than the score of the predictor (independent) variable lies to its own. This phenomenon is called regression to the mean, or simply regression.

Excel has several functions for determining the coefficients of the regression line equation (in Excel it is called a trendline), y = kx + b. The slope k is found with the function

=SLOPE(known_y_values, known_x_values)

Here y is the predicted variable and x the independent variable; you must strictly follow this order of arguments. The slope of the regression line, the correlation coefficient, the standard deviations of the variables, and the covariance are closely related (Fig. 6). The INTERCEPT() function returns the value at which the regression line crosses the vertical axis:

=INTERCEPT(known_y_values, known_x_values)

Fig. 6. The ratio of standard deviations converts the covariance into the correlation coefficient and the slope of the regression line

Note that the number of x-values and y-values provided as arguments to the SLOPE() and INTERCEPT() functions must be the same.
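
Put together, a prediction for a single new observation can be sketched as follows, using the house data ranges above and a hypothetical new area in B15:

=SLOPE(C2:C12,B2:B12)*B15+INTERCEPT(C2:C12,B2:B12) predicts the price of a house whose area is in B15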

Regression analysis uses another important indicator, R² (R-squared), or the coefficient of determination. It measures what contribution the relationship between x and y makes to the overall variability of the data. In Excel it is computed by the RSQ() function, which takes exactly the same arguments as CORREL().

Two variables with a nonzero correlation coefficient are said to explain variance, or to have explained variance. Explained variance is usually expressed as a percentage; thus R² = 0.81 means that 81% of the variance (scatter) of the two variables is explained, with the remaining 19% due to random fluctuations.

Excel has a TREND() function that makes the calculations easier. The TREND() function:

  • accepts the known x-values and known y-values that you provide;
  • calculates the slope of the regression line and the constant (intercept);
  • returns the predicted y-values obtained by applying the regression equation to the known x-values (Fig. 7).

The TREND() function is an array function (if you have not encountered such functions before, it is worth reading up on array formulas first).

Fig. 7. Using the TREND() function speeds up and simplifies the calculations compared with using the SLOPE() and INTERCEPT() pair

To enter the TREND() function as an array formula in cells G3:G12, select the range G3:G12, type the formula =TREND(C3:C12,B3:B12), press and hold Ctrl+Shift, and only then press Enter. Note that the formula becomes enclosed in curly braces, { and }; this is how Excel indicates that the formula is being treated as an array formula. Don't enter the braces yourself: if you type them as part of the formula, Excel will treat your input as an ordinary text string.

The TREND() function has two more arguments: new_x_values and const. The first lets you make a forecast for new data, and the second can force the regression line to pass through the origin (TRUE tells Excel to use the calculated constant, FALSE tells it to use a constant of 0). Excel also lets you draw a regression line on a chart so that it passes through the origin: build a scatter plot, right-click one of the data series markers, and choose Add Trendline from the context menu; select the Linear option; if necessary, scroll down the panel and check Set Intercept, making sure the associated text box contains 0.0.
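
A forecasting sketch with the same hypothetical ranges, where B15 holds a new x-value:

=TREND(C3:C12,B3:B12,B15,TRUE) predicts y for the new observation using the fitted constant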

If you have three variables and want to determine the correlation between two of them while eliminating the influence of the third, you can use partial correlation. Suppose you are interested in the relationship between the percentage of a city's residents who have completed college and the number of books in the city's libraries. You collected data for 50 cities, but the problem is that both of these measures may depend on the wealth of a city's residents, and it is very difficult to find another 50 cities with exactly the same level of wealth.

By using statistical methods to control for the influence of wealth on both library funding and college affordability, you could obtain a more precise quantification of the strength of the relationship between the variables of interest, namely the number of books and the number of graduates. Such a conditional correlation between two variables, with the values of other variables held fixed, is called a partial correlation. One way to calculate it is to use the equation:

rCB.W = (rCB − rCW · rBW) / √((1 − r²CW)(1 − r²BW)),

where rCB.W is the correlation between the College and Books variables with the influence of the Wealth variable excluded (held fixed); rCB is the correlation between College and Books; rCW is the correlation between College and Wealth; rBW is the correlation between Books and Wealth.
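
In worksheet terms, with hypothetical columns for College (B2:B51), Books (C2:C51), and Wealth (D2:D51), each correlation comes from CORREL(); a sketch:

=(CORREL(B2:B51,C2:C51)-CORREL(B2:B51,D2:D51)*CORREL(C2:C51,D2:D51))/SQRT((1-CORREL(B2:B51,D2:D51)^2)*(1-CORREL(C2:C51,D2:D51)^2))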

Alternatively, partial correlation can be calculated from an analysis of residuals, i.e., the differences between predicted values and the associated actual observations (both methods are presented in Fig. 8).

Fig. 8. Partial correlation as a correlation of residuals

To simplify the calculation of the correlation matrix (B16:E19), use the Analysis ToolPak (menu Data –> Analysis –> Data Analysis). By default this add-in is not active in Excel. To install it, go to File –> Options –> Add-Ins. At the bottom of the Excel Options window, find the Manage field, select Excel Add-ins, and click Go. Check the box next to the Analysis ToolPak add-in and click OK. Then click Data Analysis, select the Correlation option, specify $B$2:$D$13 as the input range, check Labels in first row, and specify $B$16:$E$19 as the output range.

Another possibility is to determine the semipartial correlation. For example, suppose you are investigating the effects of height and age on weight. You thus have two predictor variables, height and age, and one predicted variable, weight. You want to exclude the influence of one predictor variable on the other, but not on the predicted variable:

rW(H.A) = (rWH − rWA · rHA) / √(1 − r²HA),

where H is Height, W is Weight, and A is Age; the subscript of the semipartial correlation coefficient uses parentheses to indicate which variable's influence is removed and from which variable it is removed. In this case the notation W(H.A) indicates that the effect of the Age variable is removed from Height but not from Weight.

It may seem that the issue under discussion is unimportant: after all, what matters most is how accurately the overall regression equation works, while the relative contributions of individual variables to the total explained variance seem secondary. However, this is not so. Once you start asking whether a variable is worth including in a multiple regression equation at all, the issue becomes important, because it bears on whether the right model was chosen for the analysis.

Chapter 4. LINEST() Function

The LINEST() function returns 10 regression statistics. LINEST() is an array function: to enter it, select a range of five rows and two columns, type the formula, and press Ctrl+Shift+Enter (Fig. 9):

=LINEST(B2:B21,A2:A21,TRUE,TRUE)

Fig. 9. The LINEST() function: a) select the range D2:E6, b) enter the formula as shown in the formula bar, c) press Ctrl+Shift+Enter

The LINEST() function returns:

  • the regression coefficient (or slope, cell D2);
  • the intercept (or constant, cell E2);
  • the standard errors of the regression coefficient and the constant (range D3:E3);
  • the coefficient of determination R² for the regression (cell D4);
  • the standard error of estimate (cell E4);
  • the F-test for the full regression (cell D5);
  • the number of degrees of freedom for the residual sum of squares (cell E5);
  • the regression sum of squares (cell D6);
  • the residual sum of squares (cell E6).
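
If you need only one of these statistics, you can pull it out with INDEX() instead of dedicating a 5×2 output range; a sketch:

=INDEX(LINEST(B2:B21,A2:A21,TRUE,TRUE),1,1) returns the slope (row 1, column 1 of the results)

=INDEX(LINEST(B2:B21,A2:A21,TRUE,TRUE),3,1) returns R² (row 3, column 1)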

Let's look at each of these statistics and how they interact.

The standard error in this case is the standard deviation arising from sampling error, i.e., from the fact that the population has one value of a statistic and a sample another. Dividing the regression coefficient by its standard error gives 2.092/0.818 = 2.559. In other words, the regression coefficient of 2.092 lies two and a half standard errors away from zero.

If the regression coefficient were zero, the best estimate of the predicted variable would be its mean. Two and a half standard errors is quite a lot, and you can safely assume that the regression coefficient in the population is nonzero.

You can determine the probability of obtaining a sample regression coefficient of 2.092, when its actual value in the population is 0.0, using the function

=T.DIST.RT(2.559,18), where 2.559 is the t-value and 18 is the number of degrees of freedom

In general, the number of degrees of freedom = n – k – 1, where n is the number of observations and k is the number of predictor variables.

This formula returns 0.00987, or, rounding, 1%. It tells us the following: if the regression coefficient in the population is 0.0, then the probability of drawing a sample of 20 people for which the calculated regression coefficient is 2.092 is a modest 1%.
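
With the LINEST() results laid out as in Fig. 9, the same test can be sketched directly on the worksheet:

=D2/D3 computes the t-value (the coefficient divided by its standard error)

=T.DIST.RT(D2/D3,E5) returns the one-tailed probability, using the residual degrees of freedom in E5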

The F-test (cell D5 in Fig. 9) performs the same role for the full regression as the t-test does for the coefficient of a simple pairwise regression. The F-test checks whether the coefficient of determination R² of the regression is large enough to reject the hypothesis that its population value is 0.0, i.e., that no variance is shared by the predictor and predicted variables. When there is only one predictor variable, the F-test is exactly equal to the square of the t-test.

So far we have looked at interval variables. If you have variables that take on a few values representing simple names, for example Man and Woman, or Reptile, Amphibian, and Fish, represent them with numeric codes. Such variables are called nominal.

The R² statistic quantifies the proportion of variance explained.

Standard error of estimate. Fig. 10 presents the predicted values of the Weight variable obtained from its relationship with the Height variable. The range E2:E21 contains the residuals for the Weight variable. Strictly speaking, these residuals are called errors, hence the term standard error of estimate.

Fig. 10. Both R² and the standard error of estimate express the accuracy of forecasts made with the regression

The smaller the standard error of estimate, the more accurate the regression equation, and the closer you expect any prediction produced by the equation to match the actual observation. The standard error of estimate provides a way to quantify these expectations: the weight of 95% of people of a given height will lie in the range

(height * 2.092 – 3.591) ± 2.092 * 21.118

The F-statistic is the ratio of between-group variance to within-group variance. The name was introduced by the statistician George Snedecor in honor of Sir Ronald Fisher, who developed analysis of variance (ANOVA, Analysis of Variance) in the early 20th century.

The coefficient of determination R² expresses the proportion of the total sum of squares associated with the regression. The value (1 − R²) expresses the proportion of the total sum of squares associated with the residuals, the forecasting errors. The F-test can be obtained from the LINEST() function (cell F5 in Fig. 11), from the sums of squares (range G10:J11), or from the proportions of variance (range G14:J15). The formulas can be studied in the accompanying Excel file.

Fig. 11. Calculating the F-test

When nominal variables are used, dummy coding is applied (Fig. 12). It is convenient to use the values 0 and 1 as codes. The probability of F is calculated with the function:

=F.DIST.RT(K2,I2,I3)

Here the F.DIST.RT() function returns the probability of obtaining, under the central F-distribution (Fig. 13), an F-value at least as large as the one in cell K2, for the numbers of degrees of freedom given in cells I2 and I3.

Fig. 12. Regression analysis using dummy variables

Fig. 13. The central F-distribution at λ = 0

Chapter 5. Multiple Regression

When you move from simple pairwise regression with one predictor variable to multiple regression, you add one or more predictor variables. Store the values of the predictor variables in adjacent columns, such as columns A and B for two predictors, or A, B, and C for three. Before entering a formula that includes the LINEST() function, select five rows and as many columns as there are predictor variables, plus one more for the constant. For a regression with two predictor variables the following structure can be used:

=LINEST(A2:A41,B2:C41,,TRUE)

Similarly, for three variables:

=LINEST(A2:A61,B2:D61,,TRUE)
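
A sketch of pulling individual results out of such an array, assuming the two-predictor layout above (as discussed below, the coefficients come out in reverse order of the variables):

=INDEX(LINEST(A2:A41,B2:C41,,TRUE),1,1) returns the coefficient of the predictor in column C

=INDEX(LINEST(A2:A41,B2:C41,,TRUE),1,3) returns the constant

=INDEX(LINEST(A2:A41,B2:C41,,TRUE),3,1) returns R²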

Suppose you want to study the possible effects of age and diet on LDL (low-density lipoprotein) levels, believed to be responsible for the formation of the atherosclerotic plaques that cause atherothrombosis (Fig. 14).

Fig. 14. Multiple regression

The R² of the multiple regression (shown in cell F13) is greater than the R² of either simple regression (E4, H4). Multiple regression uses several predictor variables simultaneously, and in this case R² almost always increases.

For any simple linear regression equation with one predictor variable, there is always a perfect correlation between the predicted values and the values of the predictor variable, because such an equation multiplies the predictor values by one constant and adds another constant to each product. This effect does not carry over to multiple regression.

Fig. 15 displays the results returned by the LINEST() function for the multiple regression. The regression coefficients appear in the output in reverse order of the variables (G–H–I corresponds to C–B–A).

Fig. 15. The coefficients and their standard errors appear in the reverse of the order in which the variables appear on the worksheet

The principles and procedures used in regression analysis with a single predictor variable adapt easily to multiple predictor variables. Much of this adaptation rests on eliminating the influence of the predictor variables on each other, which is tied to the partial and semipartial correlations (Fig. 16).

Fig. 16. Multiple regression can be expressed through pairwise regression of residuals (see the Excel file for the formulas)

Excel has functions that provide information about the t- and F-distributions. Functions whose names include DIST, such as T.DIST() and F.DIST(), take a t-value or F-value as an argument and return the probability of observing it. Functions whose names include INV, such as T.INV() and F.INV(), take a probability as an argument and return the criterion value corresponding to that probability.

Since we are looking for the critical values of the t-distribution that cut off its tail regions, we pass 5% as an argument to one of the T.INV() functions, which returns the value corresponding to that probability (Fig. 17, 18).

Fig. 17. Two-tailed t-test

Fig. 18. One-tailed t-test

By establishing a decision rule with a single-tailed alpha region, you increase the statistical power of the test. If, at the start of an experiment, you have every reason to expect a positive (or negative) regression coefficient, you should perform a one-tailed test. In that case, the probability of making the correct decision by rejecting the hypothesis of a zero population regression coefficient will be higher.

Statisticians prefer the term directional test to one-tailed test, and the term nondirectional test to two-tailed test. These terms are preferred because they emphasize the type of hypothesis rather than the nature of the tails of the distribution.

One approach to assessing the impact of predictors is based on comparing models. Fig. 19 presents the results of a regression analysis that tests the contribution of the Diet variable to the regression equation.

Fig. 19. Comparing two models by testing the differences in their results

The results of the LINEST() function in the range H2:K6 relate to what I call the full model, which regresses the LDL variable on the Diet, Age, and HDL variables. The range H9:J13 presents the calculations without the predictor variable Diet; I call this the restricted model. In the full model, 49.2% of the variance of the dependent variable LDL is explained by the predictor variables. In the restricted model, only 30.8% of the variance of LDL is explained by Age and HDL. The loss in R² from excluding the Diet variable is 0.183. The calculations in the range G15:L17 show that the probability that the effect of the Diet variable is due to chance is only 0.0288; with the remaining 97.1% probability, Diet does affect LDL.

Chapter 6: Assumptions and Cautions for Regression Analysis

The term "assumption" is not defined strictly enough, and the way it is used suggests that if the assumption is not met, then the results of the entire analysis are at the very least questionable or possibly invalid. This is not actually the case, although there are certainly cases where violating an assumption fundamentally changes the picture. Basic assumptions: a) the residuals of the Y variable are normally distributed at any point X along the regression line; b) Y values ​​are linearly dependent on X values; c) the dispersion of the residuals is approximately the same at each point X; d) there is no dependence between the residues.

If an assumption does not play a significant role, statisticians say the analysis is robust to its violation. In particular, when you use regression to test differences between group means, the assumption that the Y values, and hence the residuals, are normally distributed does not play a significant role: the tests are robust to violations of normality. It is important to examine the data in charts, for example those produced by the Regression tool included in the Data Analysis add-in.

If the data do not meet the assumptions of linear regression, other approaches are available, one of which is logistic regression (Fig. 20). Near the upper and lower limits of the predictor variable, linear regression produces unrealistic predictions.

Fig. 20. Logistic regression

Fig. 20 displays the results of two methods of data analysis aimed at examining the relationship between annual income and the likelihood of buying a home. Obviously, the likelihood of making a purchase increases with income. The charts make it easy to spot the differences between the probabilities of buying a home predicted by linear regression and the results obtained with the alternative approach.

In statisticians' parlance, rejecting the null hypothesis when it is in fact true is called a Type I error.

The Data Analysis add-in offers a handy tool for generating random numbers, allowing the user to specify the desired shape of the distribution (for example, Normal, Binomial, or Poisson), as well as the mean and standard deviation.

Differences between the functions of the T.DIST() family. Beginning with Excel 2010, three different forms of the function are available that return the proportion of the distribution to the left and/or right of a given t-value. The T.DIST() function returns the proportion of the area under the distribution curve to the left of the t-value you specify. Suppose you have 36 observations, so the number of degrees of freedom for the analysis is 34, and a t-value of 1.69. In this case the formula

=T.DIST(1.69,34,TRUE)

returns 0.95, or 95% (Fig. 21). The third argument of the T.DIST() function can be TRUE or FALSE. If TRUE, the function returns the cumulative area under the curve to the left of the given t-value, expressed as a fraction; if FALSE, it returns the relative height of the curve at the point corresponding to the t-value. The other versions of the function, T.DIST.RT() and T.DIST.2T(), take only the t-value and the number of degrees of freedom as arguments and do not require a third argument.

Fig. 21. The shaded area corresponds to the proportion of the area under the curve to the left of a large positive t-value

To determine the area to the right of a t-value, use one of the formulas:

=1-T.DIST(1.69,34,TRUE)

=T.DIST.RT(1.69,34)

The entire area under the curve is 100%, so subtracting from 1 the fraction of area to the left of the t-value gives the fraction of area to its right. You may prefer to obtain the fraction you are interested in directly with the T.DIST.RT() function, where RT denotes the right tail of the distribution (Fig. 22).
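
A sketch of how the three forms relate for the same t-value and degrees of freedom:

=T.DIST(1.69,34,TRUE) returns about 0.95 (the area to the left)

=T.DIST.RT(1.69,34) returns about 0.05 (the area to the right)

=T.DIST.2T(1.69,34) returns about 0.10 (both tails combined)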

Fig. 22. A 5% alpha region for a directional test

Using the T.DIST() or T.DIST.RT() functions implies that you have chosen a directional working hypothesis. A directional working hypothesis combined with an alpha of 5% means that you place all 5% in the right tail of the distribution. You will reject the null hypothesis only if the probability of the t-value you obtain is 5% or less. Directional hypotheses generally yield more sensitive statistical tests (this greater sensitivity is also called greater statistical power).

In a nondirectional test the alpha value stays at the same 5%, but the distribution of that probability changes: because you must allow for two outcomes, the probability of a false positive is split between the two tails of the distribution, customarily in equal parts (Fig. 23).

Using the same t-value and the same number of degrees of freedom as in the previous example, use the formula

=T.DIST.2T(1.69,34)

For no particular reason, the T.DIST.2T() function returns the #NUM! error if it is given a negative t-value as its first argument.

If the samples contain different numbers of observations, use the two-sample t-test assuming unequal variances, included in the Data Analysis add-in.

Chapter 7: Using Regression to Test Differences Between Group Means

Variables that previously appeared under the name predicted variables will be called outcome variables in this chapter, and the term factors will be used instead of predictor variables.

The simplest approach to coding a nominal variable is dummy coding (Fig. 24).

Fig. 24. Regression analysis based on dummy coding

When using dummy coding of any kind, the following rules should be followed:

  • The number of columns reserved for the new data must be equal to the number of factor levels minus 1.
  • Each vector represents one factor level.
  • Subjects in one of the levels, often the control group, are coded 0 in all vectors.

The formula in cells F2:H6, =LINEST(A2:A22,C2:D22,,TRUE), returns the regression statistics. For comparison, Fig. 24 also shows the results of a traditional analysis of variance returned by the Anova: Single Factor tool of the Data Analysis add-in.

Effects coding. In another type of coding, called effects coding, the mean of each group is compared with the mean of the group means. This aspect of effects coding stems from using −1 instead of 0 as the code for the group that receives the same code in all code vectors (Fig. 25).

Fig. 25. Effects coding

When dummy coding is used, the constant value returned by LINEST() is the mean of the group assigned zero codes in all vectors (usually the control group). With effects coding, the constant equals the grand mean (cell J2).

The general linear model is a useful way of conceptualizing the components of the value of the outcome variable:

Yij = μ + αj + εij

The use of Greek letters in this formula rather than Latin ones emphasizes that it refers to the population from which samples are drawn; it can be rewritten to refer to samples drawn from that population:

Yij = Y̅ + aj + eij

The idea is that each observation Yij can be viewed as the sum of three components: the grand mean; the effect of the jth treatment, aj; and the value eij, which represents the deviation of the individual value Yij from the combination of the grand mean and the jth treatment effect (Fig. 26). The goal of the regression equation is to minimize the sum of the squared residuals.
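
On a worksheet, the components can be sketched directly, assuming the outcome values sit in A2:A22 and each observation's group mean in B2:B22 (hypothetical ranges):

=AVERAGE($A$2:$A$22) gives the grand mean Y̅

=B2-AVERAGE($A$2:$A$22) gives the treatment effect aj for the first observation's group

=A2-B2 gives the residual e for the first observation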

Fig. 26. Observations decomposed into components of the general linear model

Factorial analysis. If the relationship between the outcome variable and two or more factors is studied simultaneously, we speak of a factorial design. Adding one or more factors to a one-way ANOVA can increase statistical power. In one-way analysis of variance, variance in the outcome variable that cannot be attributed to the factor is included in the residual mean square. But this variation may well be related to another factor; it can then be removed from the mean square error, and this reduction raises the F-test values and thus the statistical power of the test. The Data Analysis add-in includes a tool that handles two factors simultaneously (Fig. 27).

Fig. 27. The Anova: Two-Factor With Replication tool of the Analysis ToolPak

The ANOVA tool used in this figure is useful because it returns the mean, variance, and count of the outcome variable for each group in the design. The ANOVA table also displays two sources of variation that are absent from the output of the single-factor version of the tool: note the sources of variation Sample and Columns in rows 27 and 28. The source Columns refers to gender; the source Sample refers to any variable whose values occupy different rows. In Fig. 27, the values for the KursLech1 group are in rows 2-6, the KursLech2 group in rows 7-11, and the KursLech3 group in rows 12-16.

The main point is that both factors, Gender (labeled Columns, cell E28) and Treatment (labeled Sample, cell E27), are included in the ANOVA table as sources of variation. The means for men differ from the means for women, and this creates one source of variation. The means for the three treatments also differ, providing another source. There is also a third source, Interaction, which refers to the joint effect of the Gender and Treatment variables.

Chapter 8. Analysis of Covariance

Analysis of covariance, or ANCOVA (Analysis of Covariance), reduces bias and increases statistical power. Recall that one way to assess the reliability of a regression equation is the F-test:

F = MS Regression/MS Residual

where MS (Mean Square) is the mean square, and the Regression and Residual subscripts denote the regression and residual components, respectively. MS Residual is calculated by the formula:

MS Residual = SS Residual / df Residual

where SS (Sum of Squares) is the sum of squares and df is the number of degrees of freedom. When you add a covariate to the regression equation, some portion of the total sum of squares moves from SS Residual into SS Regression. This reduces SS Residual and hence MS Residual; the smaller the MS Residual, the larger the F-test and the more likely you are to reject the null hypothesis of no difference between the means. In effect, you redistribute the variability of the outcome variable. In ANOVA, where the covariate is ignored, that variability becomes error; in ANCOVA, part of the variability previously attributed to the error term is assigned to the covariate and becomes part of SS Regression.
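
With a LINEST() block laid out as in Fig. 9, the F-value can be reassembled from the sums of squares; a sketch for the one-predictor case (regression df = 1):

=D6/1 gives MS Regression (SS Regression divided by its degrees of freedom)

=E6/E5 gives MS Residual (SS Residual divided by the residual degrees of freedom)

=(D6/1)/(E6/E5) reproduces the F-value reported in cell D5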

Consider an example in which the same data set is analyzed first with ANOVA and then with ANCOVA (Fig. 28).

Fig. 28. The ANOVA analysis suggests that the results of the regression equation are unreliable

The study compares the relative effects of physical exercise, which develops muscle strength, and cognitive exercise (solving crossword puzzles), which stimulates brain activity. Subjects were randomly assigned to the two groups, so that at the start of the experiment both groups were in the same condition. After three months the subjects' cognitive performance was measured; the results are shown in column B.

The range A2:C21 contains the source data passed to the LINEST() function for an analysis based on effects coding. The results of the LINEST() function occupy the range E2:F6, where cell E2 holds the regression coefficient associated with the treatment vector. Cell E8 contains the t-value of 0.93, and cell E9 tests the reliability of this t-value. The value in E9 indicates that, if the group means were equal in the population, the probability of observing the difference between group means found in this experiment is 36%. Few would consider such a result statistically significant.

Fig. 29 shows what happens when a covariate is added to the analysis. In this case I added the age of each subject to the data set. The coefficient of determination R² for the regression equation that uses the covariate is 0.80 (cell F4). The R² value in the range F15:G19, where I replicated the ANOVA results obtained without the covariate, is only 0.05 (cell F17). Thus the regression equation that includes the covariate predicts the values of the Cognitive Score variable far more accurately than the treatment vector alone. For the ANCOVA, the probability of obtaining the F-value shown in cell F5 by chance is less than 0.01%.

Fig. 29. ANCOVA reveals a completely different picture

For statistical models, it is often necessary to determine the accuracy of the forecast. In Microsoft Excel this is done with special calculations involving the coefficient of determination, denoted R^2.

Statistical models can be divided into quality levels according to this coefficient: models from 0.8 to 1 are of good quality, models from 0.5 to 0.8 are of sufficient quality, and models from 0 to 0.5 are of poor quality.

Method for determining accuracy using the RSQ function

For a linear function, the coefficient of determination equals the square of the correlation coefficient. It can be calculated with a special function. First, create a table with the data.

Then select the cell where the calculation result should appear and click the Insert Function button.

A special window will open. Select the "Statistical" category and choose RSQ. This function returns the square of the Pearson correlation coefficient, which for a linear fit equals the coefficient of determination.

After confirming the action, a window appears with the fields "Known y's" and "Known x's". Click the "Known y's" field and select the Y column data in the worksheet; then do the same in the other field, selecting the X data of the table.

As a result, the value of the coefficient of determination appears in the cell previously selected to display the result.
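
The same result can also be typed directly; a sketch assuming hypothetical columns A (x-values) and B (y-values):

=RSQ(B2:B20,A2:A20) returns the coefficient of determination

=CORREL(B2:B20,A2:A20)^2 returns the identical value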

Determining the coefficient of determination if the function is nonlinear

If the function is nonlinear, Excel can also calculate the coefficient using the Regression tool from the Analysis ToolPak. First, however, you must activate this add-in: go to the "File" section and open "Options" in the list.

A new window will appear in which you need to select "Add-ins" from the menu; then, in the add-in management field, select "Excel Add-ins" and click Go.

A new window lists the add-ins available to the user. Check the box next to "Analysis ToolPak" and confirm the action.

The tool is then found in the "Data" section: click "Data Analysis" on the right side of the ribbon.

In the list that opens, select "Regression" and confirm the action.

A settings window will then appear. The input section lets you set the X and Y input ranges; simply select the corresponding cells for each argument. In the confidence level field you can set the required level. The output options specify where the result will be shown: if, for example, you choose output on the current sheet, first select the "Output range" item, then click the area of the worksheet where the result should appear, and the cell coordinates will be shown in the corresponding field. Finally, confirm the action.

The result appears in the worksheet. Since we are calculating the coefficient of determination, we look at the R-squared value in the output; in this example it indicates a model of the best quality.

Method for determining the coefficient of determination for a trend line

Having created a table with the corresponding values, build a chart. To add a trendline, click the chart, specifically the area where the line will be drawn. On the toolbar, open the "Layout" section and choose "Trendline"; in this example, select "Exponential approximation" from the list.

The trendline will be displayed on the chart as a black curve.

To display the coefficient of determination, right-click the black curve and select "Format Trendline" from the list.

A new window will appear. Check the box for displaying the R-squared value on the chart (shown in the screenshot); the coefficient will then be displayed on the chart. Close the window when done.

After closing the trendline format window, you can see the value of the coefficient of determination in the worksheet.

If the user needs a different type of trendline, it can be selected in the "Format Trendline" window, or earlier, when creating the trendline via the "Layout" section or the context menu. Also, don't forget to check the box for the R^2 value.

As a result, you can see both the changed trendline and its reliability value, R^2.

After viewing different variants of trendlines, the user can determine the most suitable one, since the reliability indicator changes depending on the choice of line. The maximum coefficient is one, meaning maximum reliability, but this value cannot always be achieved.

Thus, several methods of finding the coefficient of determination have been considered; the user can choose the one best suited to his purposes.