The Relationship Between an Adult Female's Attachment Style and Her Peripheral Oxytocin Levels
Results presentation
Outliers and Missing Values
Salivary oxytocin concentrations for twenty-nine participants fell below the 0.8 pg/mL detection level; these cases were flagged as outliers (marked with an asterisk in the Salimetrics laboratory's final salivary oxytocin outcome data) and were excluded from the study. A total of 58 participants were included in the analyses.
Included Analyses
1. Descriptive Statistics: Frequencies and percentages of participants’ Nominal demographic variables.
2. Descriptive Statistics: Frequencies of participants’ Ratio demographic variables.
3. Descriptive Statistics: Summary statistics for Anxiety, Avoidance, and Mean_pg_mL
4. Descriptive Statistics: Frequencies and percentages for Attachment style, Attachment Type, and Attachment Quality.
5. Linear Regression with Mean_pg_mL predicted by Attachment Style
6. Linear Regression with Mean_pg_mL predicted by Anxiety and Avoidance
7. Two-Tailed Independent Samples t-Test for Mean_pg_mL by Attachment Style
8. Hierarchical Linear Regression for Mean_pg_mL predicted by Stress, Anxiety, and Avoidance
9. Hierarchical Linear Regression for Mean_pg_mL predicted by Stress and Attachment Style
10. Linear Regression with Anxiety predicted by Mean_pg_mL
11. Linear Regression with Avoidance predicted by Mean_pg_mL
12. Binary Logistic Regression with Attachment Style predicted by Mean_pg_mL
13. Binary Logistic Regression with Attachment Type predicted by Mean_pg_mL
Results Presentation
1. Descriptive Statistics
Introduction
Frequencies and percentages were calculated for the following nominal demographic variables: Marital Status, Sexual Orientation, Ethnicity, Religion, Class, Employment Status, Trauma History, Disability, Maternal Mental Status History, Therapy, and Contraceptive.
Frequencies and Percentages
The most frequently observed category of Marital Status was Single (n = 41, 71%). The most frequently observed category of Sexual Orientation was Heterosexual (n = 53, 91%). The most frequently observed category of Ethnicity was Caucasian (n = 21, 36%). The most frequently observed category of Religion was No (n = 35, 60%). The most frequently observed category of Class was Middle (n = 38, 66%). The most frequently observed category of Employment Status was Part-Time (n = 41, 71%). The most frequently observed category of Trauma History was No (n = 38, 66%). The most frequently observed category of Maternal Mental Status History was Healthy (n = 40, 69%). The most frequently observed category of Disability was None (n = 47, 81%). The most frequently observed category of Therapy was No (n = 53, 91%). The most frequently observed category of Contraceptive was Pill (n = 40, 69%). Frequencies and percentages are presented in Table 1.
Table 1. Frequency Table for Nominal Demographic Variables
Variable n %
Marital Status
Single 41 70.69
Living with partner 10 17.24
In relationship 1 1.72
Married 3 5.17
Widowed 1 1.72
Divorced 2 3.45
Sexual Orientation
Hetero-Sexual 53 91.38
Bi-Sexual 5 8.62
Ethnicity
Caucasian 21 36.21
Afro-American 7 12.07
Latino 7 12.07
Other 19 32.76
Asian 4 6.90
Religion
No 35 60.34
Christian 20 34.48
Other 3 5.17
Class
Middle 38 65.52
Hardship 10 17.24
Working 8 13.79
Upper 2 3.45
Employment Status
Unemployed 7 12.07
Part-Time 41 70.69
Full-Time 10 17.24
Trauma History
No 38 65.52
Separation, Neglect, Sexual, Emotional Abuse 1 1.72
Emotional Abuse 2 3.45
Sexual Abuse 2 3.45
Separation, Physical, Emotional Abuse 2 3.45
Separation, Emotional abuse 1 1.72
Separation, Neglect, Physical Abuse 1 1.72
Sexual, Emotional Abuse 1 1.72
Physical, Emotional Abuse 2 3.45
Separation 2 3.45
Neglect 1 1.72
Neglect, Sexual, Emotional Abuse 1 1.72
Separation, Physical, Sexual, Emotional Abuse 1 1.72
Separation, Neglect, Physical, Emotional Abuse 1 1.72
Neglect, Physical, Sexual, Emotional Abuse 1 1.72
Physical Abuse 1 1.72
Maternal Mental Status History
Healthy 40 68.97
Ill 18 31.03
Disability
None 47 81.03
Physiological 1 1.72
Two psychological 3 5.17
Psychological 4 6.90
Two physiological and psychological 1 1.72
Physiological and psychological 1 1.72
Physiological and psychological 1 1.72
Therapy
No 53 91.38
Past 4 6.90
Past & Current 1 1.72
Contraceptive
Pill 40 68.97
Implant 9 15.52
IUD 5 8.62
Shot 4 6.90
Note. Due to rounding errors, percentages of each variable may not equal 100%.
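For reference, the counts and percentages in Table 1 can be reproduced with a short script. The following is a minimal sketch, assuming the cleaned survey data sit in a pandas DataFrame named df with one nominal column per demographic item; the column name marital_status is a hypothetical example.

```python
import pandas as pd

# Hypothetical sketch: df is assumed to hold the cleaned survey data,
# one row per participant, one nominal column per demographic item.
def frequency_table(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Return counts (n) and percentages (%) for one nominal variable."""
    counts = df[column].value_counts(dropna=False)
    percents = counts / counts.sum() * 100
    return pd.DataFrame({"n": counts, "%": percents.round(2)})

# Usage (assumed column name):
# print(frequency_table(df, "marital_status"))
```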
2. Summary Statistics for Demographic Ratio Variables
Summary statistics were calculated for the ratio variables Income and Age. The observations for Income had an average of 25887.02 (SD = 38796.64, Min = 0.00, Max = 180000.00). The observations for Age had an average of 21.26 (SD = 2.40, Min = 18.00, Max = 29.00). The summary statistics are presented in Table 2.
Table 2. Summary Statistics Table for the Ratio Variables Income and Age
Variable M SD n Min Max
Income 25887.02 38796.64 58 0.00 180000.00
Age 21.26 2.40 58 18.00 29.00
3. Summary Statistics
Introduction
Summary statistics were calculated for Anxiety, Avoidance, and Mean_pg_mL (the mean salivary oxytocin concentration in pg/mL).
The observations for Anxiety had an average of 3.28 (SD = 1.13, Min = 1.33, Max = 5.94). The observations for Avoidance had an average of 2.48 (SD = 0.86, Min = 1.06, Max = 4.78). The observations for Mean_pg_mL had an average of 15.81 (SD = 8.28, Min = 3.15, Max = 45.49). The summary statistics are presented in Table 3.
Table 3. Summary Statistics Table for Interval and Ratio Variables
Variable M SD n Min Max
Anxiety 3.28 1.13 58 1.33 5.94
Avoidance 2.48 0.86 58 1.06 4.78
Mean_pg_mL 15.81 8.28 58 3.15 45.49
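The summary statistics in Tables 2 and 3 can be generated in the same way. The sketch below is illustrative only and assumes df contains numeric columns named Income, Age, Anxiety, Avoidance, and Mean_pg_mL (the names are assumptions based on the variables reported).

```python
import pandas as pd

# Hypothetical sketch: compute M, SD, n, Min, and Max for the continuous
# variables reported in Tables 2 and 3 (column names are assumptions).
def summary_table(df: pd.DataFrame, columns) -> pd.DataFrame:
    stats = df[columns].agg(["mean", "std", "count", "min", "max"]).T
    stats.columns = ["M", "SD", "n", "Min", "Max"]
    return stats.round(2)

# Usage (assumed column names):
# print(summary_table(df, ["Income", "Age", "Anxiety", "Avoidance", "Mean_pg_mL"]))
```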
4. Descriptive Statistics
Introduction
Frequencies and percentages were calculated for Attachment Style, Attachment Type, and Attachment Quality.
Frequencies and Percentages
The most frequently observed category of Attachment Style was Secure (n = 39, 67%). The most frequently observed category of Attachment Type was Anxious (n = 17, 29%). The most frequently observed category of Attachment Quality was Preoccupied (n = 17, 29%). Frequencies and percentages are presented in Table 4.
Table 4. Frequency Table for Nominal Variables
Variable n %
Attachment style
Insecure 19 32.76
Secure 39 67.24
Attachment type
Anxious 17 29.31
Avoidant 3 5.17
Attachment Quality
Preoccupied 17 29.31
Fearful-Avoidant 2 3.45
Dismissive 1 1.72
Note. Due to rounding errors, percentages may not equal 100%.
5. Linear Regression Analysis
Introduction
Linear regression is a predictive analysis. Regression analysis is used to explain the relationship between one dependent variable and one or more independent variables. Three major uses for regression analysis are (1) determining the strength of predictors, (2) forecasting an effect, and (3) trend forecasting.
A linear regression analysis was conducted to assess whether Attachment Style significantly predicted Mean_pg_mL.
Assumptions
Normality. Normality refers to the distribution of the model residuals; the assumption is that the residuals approximately follow a bell-shaped (normal) curve. The assumption was assessed with a Q-Q scatterplot, which plots the quantiles of the model residuals against the theoretical quantiles of a normal distribution (DeCarlo, 1997). For the assumption to be met, the points should not deviate strongly from the reference line; strong deviations indicate that the parameter estimates may be unreliable. Figure 1 presents a Q-Q scatterplot of the model residuals.
Figure 1. Q-Q scatterplot for normality of the residuals for the regression model.
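As an illustration of how this check can be produced, here is a minimal sketch assuming a pandas DataFrame df with the outcome Mean_pg_mL and a 0/1 indicator attachment_secure for the Secure category (both column names are assumptions).

```python
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Hypothetical sketch: fit the simple regression and draw a Q-Q plot of the
# residuals against a normal distribution (column names are assumptions).
X = sm.add_constant(df["attachment_secure"])
model = sm.OLS(df["Mean_pg_mL"], X).fit()

sm.qqplot(model.resid, line="45", fit=True)  # points near the line support normality
plt.title("Q-Q plot of model residuals")
plt.show()
```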
Homoscedasticity. Homoscedasticity refers to the relationship between the residuals and the fitted values; the assumption is met when the residuals are randomly scattered around zero with no systematic pattern. The assumption was evaluated by plotting the residuals against the predicted values (Bates et al., 2014; Field, 2017; Osborne & Waters, 2002). Figure 2 presents a scatterplot of predicted values and model residuals.
Figure 2. Residuals scatterplot testing homoscedasticity
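Continuing the sketch above, the homoscedasticity check plots the same model's residuals against its fitted values; a random scatter around zero supports the assumption.

```python
import matplotlib.pyplot as plt

# Continuation of the hypothetical sketch: residuals vs. predicted values.
plt.scatter(model.fittedvalues, model.resid, s=20)
plt.axhline(0, linestyle="--")
plt.xlabel("Predicted Mean_pg_mL")
plt.ylabel("Residual")
plt.title("Residuals vs. predicted values")
plt.show()
```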
Multicollinearity. Multicollinearity refers to very high intercorrelations among a set of predictor variables. Because the model included only one predictor, multicollinearity does not apply and Variance Inflation Factors were not calculated.
Results
The results of the linear regression model were not significant, F(1, 56) = 1.23, p = .273, R2 = 0.02, indicating Attachment Style did not explain a significant proportion of the variation in Mean_pg_mL. Because the overall model was not significant, the individual predictors were not examined further. Table 5 summarizes the results of the regression model.
Table 5. Results for Linear Regression with Attachment Style Predicting Mean_pg_mL
Variable B SE 95% CI β t p
(Intercept) 14.08 1.90 [10.29, 17.88] 0.00 7.43 < .001
Attachment Style Secure 2.56 2.31 [-2.07, 7.19] 0.15 1.11 .273
Note. Results: F(1,56) = 1.23, p = .273, R2 = 0.02
Unstandardized Regression Equation: Mean_pg_mL = 14.08 + 2.56*(Attachment Style: Secure)
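For transparency, the model reported in Table 5 can be reproduced along the following lines. This is a sketch only, assuming df contains Mean_pg_mL and a categorical attachment_style column coded Insecure/Secure (names are assumptions); the analyses themselves were run in Intellectus Statistics.

```python
import statsmodels.formula.api as smf

# Hypothetical sketch: simple linear regression with Insecure as the reference level.
fit = smf.ols("Mean_pg_mL ~ C(attachment_style, Treatment('Insecure'))", data=df).fit()

print(fit.summary())                 # F statistic, R-squared, coefficients, t and p values
print(fit.conf_int(alpha=0.05))      # 95% confidence intervals for B
```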
6. Linear Regression Analysis
A linear regression analysis was conducted to assess whether Anxiety and Avoidance significantly predicted Mean_pg_mL.
Assumptions
Normality. The assumption of normality was assessed by plotting the quantiles of the model residuals against the theoretical quantiles of a normal distribution, also called a Q-Q scatterplot (DeCarlo, 1997). Figure 3 presents a Q-Q scatterplot of the model residuals.
Figure 3. Q-Q scatterplot for normality of the residuals for the regression model.
Homoscedasticity. Homoscedasticity was evaluated by plotting the residuals against the predicted values (Bates et al., 2014; Field, 2017; Osborne & Waters, 2002). The assumption of homoscedasticity is met if the points appear randomly distributed with a mean of zero and no apparent curvature. Figure 4 presents a scatterplot of predicted values and model residuals.
Figure 4. Residuals scatterplot testing homoscedasticity
Multicollinearity. Variance Inflation Factors (VIFs) were calculated to assess multicollinearity between Anxiety and Avoidance. Both VIFs were well below the conventional cutoff of 5, indicating that multicollinearity was not a concern. Table 6 presents the VIFs.
Table 6. Variance Inflation Factors for Anxiety and Avoidance
Variable VIF
Anxiety 1.47
Avoidance 1.47
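The VIFs in Table 6 can be computed as in the sketch below, again assuming Anxiety and Avoidance are numeric columns of df.

```python
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical sketch: VIFs for the two-predictor model (column names assumed).
X = sm.add_constant(df[["Anxiety", "Avoidance"]])
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(vifs)  # values near 1 indicate little multicollinearity
```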
Results
The results of the linear regression model were not significant, F(2, 55) = 0.14, p = .872, R2 = 0.00, indicating Anxiety and Avoidance did not explain a significant proportion of the variation in Mean_pg_mL. Because the overall model was not significant, the individual predictors were not examined further. Table 7 summarizes the results of the regression model.
Table 7. Results for Linear Regression with Anxiety and Avoidance Predicting Mean_pg_mL
Variable B SE 95% CI β t p
(Intercept) 17.00 3.80 [9.37, 24.62] 0.00 4.47 < .001
Anxiety -0.62 1.20 [-3.02, 1.79] -0.08 -0.51 .609
Avoidance 0.34 1.56 [-2.80, 3.47] 0.03 0.21 .831
Note. Results: F(2,55) = 0.14, p = .872, R2 = 0.00
Unstandardized Regression Equation: Mean_pg_mL = 17.00 - 0.62*Anxiety + 0.34*Avoidance
7. Two-Tailed Independent Samples t-Test
Introduction
A two-tailed independent samples t-test compares the mean of a normally distributed numerical outcome variable across two independent groups (The University of Texas at Austin, 2015). A two-tailed independent samples t-test was conducted to examine whether the mean of Mean_pg_mL differed significantly between the Insecure and Secure categories of Attachment Style.
Assumptions
Normality. Shapiro-Wilk tests were conducted to assess whether Mean_pg_mL was normally distributed within each category of Attachment Style (Razali & Wah, 2011); a significant result indicates that the normality assumption is violated. The Shapiro-Wilk test for the Insecure category was not significant based on an alpha value of 0.05, W = 0.96, p = .633, suggesting that a normal distribution cannot be ruled out for this group. The test for the Secure category was significant, W = 0.80, p < .001, suggesting that Mean_pg_mL in the Secure category is unlikely to have been produced by a normal distribution. The normality assumption was therefore violated for the Secure group.
Homogeneity of Variance. Levene's test was conducted to assess whether the variance of Mean_pg_mL was equal across the categories of Attachment Style; a significant result indicates that the groups differ in their spread of scores (Intellectus Statistics uses the median rather than the mean in this calculation, which is more robust to non-normality). The result was not significant based on an alpha value of 0.05, F(1, 56) = 0.53, p = .471, indicating the assumption of homogeneity of variance was met.
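These assumption checks correspond to standard Shapiro-Wilk and Levene tests. The following is a minimal sketch, assuming df holds Mean_pg_mL and an attachment_style column coded Insecure/Secure (column names are assumptions).

```python
from scipy import stats

# Hypothetical sketch of the assumption checks (group labels/columns assumed).
secure = df.loc[df["attachment_style"] == "Secure", "Mean_pg_mL"]
insecure = df.loc[df["attachment_style"] == "Insecure", "Mean_pg_mL"]

print(stats.shapiro(insecure))  # Shapiro-Wilk W and p for the Insecure group
print(stats.shapiro(secure))    # Shapiro-Wilk W and p for the Secure group
print(stats.levene(secure, insecure, center="median"))  # median-centered Levene's test
```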
Results
The result of the two-tailed independent samples t-test was not significant based on an alpha value of 0.05, t(56) = -1.11, p = .273. This finding indicates that mean salivary oxytocin concentration (Mean_pg_mL) did not differ significantly between the Insecure and Secure categories of Attachment Style. The results are presented in Table 10, and a bar plot of the group means is shown in Figure 5.
Table 10. Two-Tailed Independent Samples t-Test for Mean_pg_mL by Attachment Style
Insecure Secure
Variable M SD M SD t p d
Mean_pg_mL 14.08 6.31 16.64 9.04 -1.11 .273 0.33
Note. N = 58. Degrees of Freedom for the t-statistic = 56. d represents Cohen's d.
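The t statistic and Cohen's d reported in Table 10 follow directly from the same grouped data. The sketch below reuses the secure and insecure series defined in the previous sketch.

```python
import numpy as np
from scipy import stats

# Hypothetical sketch: independent samples t-test and Cohen's d,
# reusing the secure/insecure series defined above.
t, p = stats.ttest_ind(insecure, secure, equal_var=True)

# Cohen's d from the pooled standard deviation.
n1, n2 = len(insecure), len(secure)
pooled_sd = np.sqrt(((n1 - 1) * insecure.var(ddof=1) +
                     (n2 - 1) * secure.var(ddof=1)) / (n1 + n2 - 2))
d = (insecure.mean() - secure.mean()) / pooled_sd
print(t, p, d)
```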
Figure 5. The mean of Mean_pg_mL by levels of Attachment Style
8. Hierarchical Linear Regression
Introduction
Hierarchical linear regression (HLR) is used to analyze and compare sequential regression models in steps. Each successive step is a new regression with additional predictor variables entered into the previous regression model. HLR compares each step by using the F-test to determine if the change in explained variance is significant. HLR is commonly used by entering demographic variables in the first step and introducing the predictor variables in each subsequent step. This allows determining the predictive power of the predictor variables while controlling for the demographic variables.
A two-step hierarchical linear regression was conducted with Mean_pg_mL as the dependent variable. For Step 1, Stress was entered as a predictor variable into the null model. Anxiety and Avoidance were added as predictor variables into the model at Step 2.
Assumptions
Normality. Normality was evaluated for each model using a Q-Q scatterplot. The Q-Q scatterplots for normality are presented in Figure 6.
Figure 6. Q-Q scatterplot for normality for models predicting Mean_pg_mL
Homoscedasticity. Homoscedasticity was evaluated for each model by plotting the model residuals against the predicted model values (Osborne & Waters, 2002). The assumption is met if the points appear randomly distributed with a mean of zero and no apparent curvature. Figure 7 presents a scatterplot of predicted values and model residuals.
Figure 7. Residuals scatterplot for homoscedasticity for models predicting Mean_pg_mL
Multicollinearity. Variance Inflation Factors (VIFs) were calculated to detect the presence of multicollinearity between predictors for each regression model. Multicollinearity occurs when a predictor variable is highly correlated with one or more other predictor variables. If a variable exhibits multicollinearity, the regression coefficient for that variable can be unreliable and difficult to interpret, and the regression model loses statistical power (Yoo et al., 2014). High VIFs indicate increased effects of multicollinearity in the model: values greater than 5 are cause for concern, and 10 should be considered the maximum upper limit (Menard, 2009). For Step 2, all predictors in the regression model had VIFs well below 5, indicating that multicollinearity was not a concern in this study. Table 11 presents the VIF for each predictor in the model.
Table 11. Variance Inflation Factors for Each Step
Variable VIF
Step 1
Stress –
Step 2
Stress 1.15
Anxiety 1.58
Avoidance 1.48
Note. – indicates that VIFs were not calculated as there were less than two predictors for the model step.
Outliers. Studentized residuals were calculated to identify influential points, and the absolute values were plotted against the observation numbers. An observation with a Studentized residual greater than 3.24 in absolute value, the 0.999 quantile of a t distribution with 57 degrees of freedom, was considered to have a significant influence on the results of the model. Figure 8 presents a Studentized residuals plot of the observations; observation numbers are specified next to each point with a Studentized residual greater than 3.24.
Figure 8. Studentized residuals plot for outlier detection for models predicting Mean_pg_mL
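The studentized-residual screen described above can be reproduced as follows; the sketch assumes the Step 2 model and the column names Stress, Anxiety, and Avoidance in df.

```python
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import OLSInfluence

# Hypothetical sketch: flag observations whose externally studentized residual
# exceeds the 3.24 cutoff used in the text (column names are assumptions).
fit = smf.ols("Mean_pg_mL ~ Stress + Anxiety + Avoidance", data=df).fit()
studentized = OLSInfluence(fit).resid_studentized_external
flagged = studentized[abs(studentized) > 3.24]
print(flagged)  # index labels of potentially influential observations
```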
Results
The hierarchical regression analysis results consist of model comparisons and a model interpretation based on an alpha of 0.05. Each step in the hierarchical regression was compared to the previous step using F-tests. The coefficients of the model in the final step were interpreted.
Comparing Models. The F-test for Step 1 was not significant, F(1, 56) = 0.06, p = .802, ΔR2 = 0.00, indicating that adding Stress did not account for a significant amount of additional variation in Mean_pg_mL. The F-test for Step 2 was not significant, F(2, 54) = 0.11, p = .897, ΔR2 = 0.00, indicating that adding Anxiety and Avoidance did not account for a significant amount of additional variation in Mean_pg_mL. The results for the model comparisons are in Table 12.
Table 12. Model Comparisons for Variables predicting Mean_pg_mL
Model R2 dfmod dfres F p ΔR2
Step 1 0.00 1 56 0.06 .802 0.00
Step 2 0.01 2 54 0.11 .897 0.00
Note. Each Step was compared to the previous model in the hierarchical regression analysis.
Model Interpretation. Stress did not significantly predict Mean_pg_mL, B = -0.02, t(54) = -0.10, p = .917. Based on this sample, a one-unit increase in Stress does not have a significant effect on Mean_pg_mL. Anxiety did not significantly predict Mean_pg_mL, B = -0.58, t(54) = -0.47, p = .643. Based on this sample, a one-unit increase in Anxiety does not have a significant effect on Mean_pg_mL. Avoidance did not significantly predict Mean_pg_mL, B = 0.35, t(54) = 0.22, p = .827. Based on this sample, a one-unit increase in Avoidance does not have a significant effect on Mean_pg_mL. The results for each regression are shown in Table 13.
Table 13. Summary of Hierarchical Regression Analysis for Variables Predicting Mean_pg_mL
Variable B SE 95% CI β t p
Step 1
(Intercept) 16.59 3.31 [9.97, 23.22] 0.00 5.02 < .001
Stress -0.04 0.17 [-0.38, 0.29] -0.03 -0.25 .802
Step 2
(Intercept) 17.21 4.35 [8.49, 25.93] 0.00 3.96 < .001
Stress -0.02 0.18 [-0.38, 0.34] -0.02 -0.10 .917
Anxiety -0.58 1.25 [-3.10, 1.93] -0.08 -0.47 .643
Avoidance 0.35 1.58 [-2.83, 3.52] 0.04 0.22 .827
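The step-wise model comparison (the F-test for the change between steps and ΔR2) can be reproduced along these lines; this is a sketch only, with column names assumed.

```python
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical sketch of the two-step hierarchical regression (column names assumed).
step1 = smf.ols("Mean_pg_mL ~ Stress", data=df).fit()
step2 = smf.ols("Mean_pg_mL ~ Stress + Anxiety + Avoidance", data=df).fit()

print(anova_lm(step1, step2))           # F-test comparing Step 2 against Step 1
print(step2.rsquared - step1.rsquared)  # Delta R-squared between steps
```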
9. Hierarchical Linear Regression
Introduction
Hierarchical linear regression allows predictor variables to be added to or removed from the regression model in steps. It is used to evaluate the predictive power that a given variable adds to a model above and beyond other factors, and to control for a potential confounding variable.
The literature suggests that stress can act as a confound when measuring a person's baseline oxytocin level. To control for Stress, a two-step hierarchical linear regression was conducted with Mean_pg_mL as the dependent variable. For Step 1, Stress was entered as a predictor variable into the null model. Attachment Style was added as a predictor variable into the model at Step 2.
Assumptions
Normality. Normality was evaluated for each model using a Q-Q scatterplot. The Q-Q scatterplot compares the distribution of the residuals (the differences between observed and predicted values) with a normal distribution (a theoretical distribution that follows a bell curve). In the Q-Q scatterplot, the solid line represents the theoretical quantiles of a normal distribution. Thus, normality can be assumed if the points form a relatively straight line. The Q-Q scatterplots for normality are presented in Figure 9.
Figure 9. Q-Q scatterplot for normality for models predicting Mean_pg_mL
Homoscedasticity. Homoscedasticity was evaluated for each model by plotting the model residuals against the predicted model values (Osborne & Walters, 2002). The assumption is met if the points appear randomly distributed with a mean of zero and no apparent curvature. Figure 10 presents a scatterplot of predicted values and model residuals.
Figure 10. Residuals scatterplot for homoscedasticity for models predicting Mean_pg_mL
Multicollinearity. Variance Inflation Factors (VIFs) were calculated to detect the presence of multicollinearity between predictors for each regression model. Multicollinearity occurs when a predictor variable is highly correlated with one or more other predictor variables. If a variable exhibits multicollinearity, then the regression coefficient for that variable can be unreliable and difficult to interpret. Multicollinearity also causes the regression model to lose statistical power (Yoo et al., 2014). High VIFs indicate increased effects of multicollinearity in the model. Variance Inflation Factors greater than 5 are cause for concern, whereas VIFs of 10 should be considered the maximum upper limit (Menard, 2009). For Step 2, all predictors in the regression model have VIFs less than 10. Table 14 presents the VIF for each predictor in the model.
Table 14. Variance Inflation Factors for Each Step
Variable VIF
Step 1
Stress -
Step 2
Stress 1.03
Attachment Style 1.03
Note. - indicates that VIFs were not calculated as there were less than two predictors for the model step.
Outliers. Studentized residuals were calculated to identify influential points, and the absolute values were plotted against the observation numbers. An observation with a Studentized residual greater than 3.24 in absolute value, the 0.999 quantile of a t distribution with 57 degrees of freedom, was considered to have a significant influence on the results of the model. Figure 11 presents a Studentized residuals plot of the observations; observation numbers are specified next to each point with a Studentized residual greater than 3.24.
Figure 11. Studentized residuals plot for outlier detection for models predicting Mean_pg_mL
Results
The hierarchical regression analysis results consist of model comparisons and a model interpretation based on an alpha of 0.05. Each step in the hierarchical regression was compared to the previous step using F-tests. The coefficients of the model in the final step were interpreted.
Comparing Models. The F-test for Step 1 was not significant, F(1, 56) = 0.06, p = .802, ΔR2 = 0.00, indicating that adding Stress did not account for a significant amount of additional variation in Mean_pg_mL. The F-test for Step 2 was not significant, F(1, 55) = 1.15, p = .289, ΔR2 = 0.02, indicating that adding Attachment Style did not account for a significant amount of additional variation in Mean_pg_mL. The results for the model comparisons are in Table 15.
Table 15. Model Comparisons for Variables predicting Mean_pg_mL
Model R2 dfmod dfres F p ΔR2
Step 1 0.00 1 56 0.06 .802 0.00
Step 2 0.02 1 55 1.15 .289 0.02
Note. Each Step was compared to the previous model in the hierarchical regression analysis.
Model Interpretation. Stress did not significantly predict Mean_pg_mL, B = -0.01, t(55) = -0.07, p = .943. Based on this sample, a one-unit increase in Stress does not have a significant effect on Mean_pg_mL. The Secure category of Attachment Style did not significantly predict Mean_pg_mL, B = 2.53, t(55) = 1.07, p = .289. This sample suggests that moving from the Insecure to the Secure category of Attachment Style does not have a significant effect on the mean of Mean_pg_mL. The results for each regression are shown in Table 16.
Table 16. Summary of Hierarchical Regression Analysis for Variables Predicting Mean_pg_mL
Variable B SE 95% CI β t p
Step 1
(Intercept) 16.59 3.31 [9.97, 23.22] 0.00 5.02 < .001
Stress -0.04 0.17 [-0.38, 0.29] -0.03 -0.25 .802
Step 2
(Intercept) 14.33 3.92 [6.47, 22.19] 0.00 3.65 < .001
Stress -0.01 0.17 [-0.35, 0.33] -0.01 -0.07 .943
Attachment Style: Secure 2.53 2.37 [-2.21, 7.27] 0.14 1.07 .289
10. Linear Regression Analysis
Introduction
A linear regression analysis was conducted to assess whether Mean_pg_mL significantly predicted Anxiety.
Assumptions
Normality. The assumption of normality was assessed by plotting the quantiles of the model residuals against the theoretical quantiles of a normal distribution, also called a Q-Q scatterplot (DeCarlo, 1997). Figure 12 presents a Q-Q scatterplot of the model residuals.
Figure 12. Q-Q scatterplot for normality of the residuals for the regression model.
Homoscedasticity. Homoscedasticity was evaluated by plotting the residuals against the predicted values (Bates et al., 2014; Field, 2017; Osborne & Waters, 2002). The assumption of homoscedasticity is met if the points appear randomly distributed with a mean of zero and no apparent curvature. Figure 13 presents a scatterplot of predicted values and model residuals.
Figure 13. Residuals scatterplot testing homoscedasticity
Multicollinearity. Since there was only one predictor variable, multicollinearity does not apply, and Variance Inflation Factors were not calculated.
Results
The linear regression model results were not significant, F(1, 56) = 0.23, p = .632, R2 = 0.00, indicating Mean_pg_mL did not explain a significant proportion of the variation in Anxiety. Because the overall model was not significant, the individual predictors were not examined further. Table 17 summarizes the results of the regression model.
Table 17. Results for Linear Regression with Mean_pg_mL predicting Anxiety
Variable B SE 95% CI β t p
(Intercept) 3.42 0.32 [2.77, 4.06] 0.00 10.57 < .001
Mean_pg_mL -0.01 0.02 [-0.05, 0.03] -0.06 -0.48 .632
Note. Results: F(1,56) = 0.23, p = .632, R2 = 0.00
Unstandardized Regression Equation: Anxiety = 3.42 - 0.01*Mean_pg_mL
11. Linear Regression Analysis
Introduction
A linear regression analysis was conducted to assess whether Mean_pg_mL significantly predicted Avoidance.
Assumptions
Normality. The assumption of normality was assessed by plotting the quantiles of the model residuals against the theoretical quantiles of a normal distribution, also called a Q-Q scatterplot (DeCarlo, 1997). Figure 14 presents a Q-Q scatterplot of the model residuals.
Figure 14. Q-Q scatterplot for normality of the residuals for the regression model.
Homoscedasticity. Homoscedasticity was evaluated by plotting the residuals against the predicted values (Bates et al., 2014; Field, 2017; Osborne & Waters, 2002). The assumption of homoscedasticity is met if the points appear randomly distributed with a mean of zero and no apparent curvature. Figure 15 presents a scatterplot of predicted values and model residuals.
Figure 15. Residuals scatterplot testing homoscedasticity
Multicollinearity. Since there was only one predictor variable, multicollinearity does not apply, and Variance Inflation Factors were not calculated.
Results
The results of the linear regression model were not significant, F(1,56) = 0.01, p = .926, R2 = 0.00, indicating Mean_pg_mL did not explain a significant proportion of variation in Avoidance. Since the overall model was not significant, the individual predictors were not examined further. Table 18 summarizes the results of the regression model.
Table 18. Results for Linear Regression with Mean_pg_mL predicting Avoidance
Variable B SE 95% CI β t p
(Intercept) 2.50 0.25 [2.01, 3.00] 0.00 10.09 < .001
Mean_pg_mL -0.00 0.01 [-0.03, 0.03] -0.01 -0.09 .926
Note. Results: F(1,56) = 0.01, p = .926, R2 = 0.00
Unstandardized Regression Equation: Avoidance = 2.50 - 0.00*Mean_pg_mL
12. Binary Logistic Regression
Binary logistic regression is used to examine the relationship between one or more independent (predictor) variables and a single dichotomous dependent (outcome) variable. The independent variable is used to estimate the probability that a case is a member of one group versus the other (e.g., whether a participant is Secure or Insecure). Binary logistic regression creates a linear combination of the independent variables to predict the log-odds of the dependent variable; a significant overall model means that the independent variable significantly predicts the dependent variable. A binary logistic regression was conducted to examine whether Mean_pg_mL had a significant effect on the odds of observing the Secure category of Attachment Style. The reference category for Attachment Style was Insecure.
Results
The overall model was not significant based on an alpha of 0.05, χ2(1) = 1.36, p = .244, suggesting that Mean_pg_mL did not have a significant effect on the odds of observing the Secure category of Attachment Style. McFadden's R2 was calculated to examine model fit; it tends to be more conservative than the R2 used in linear regression, and values of .2 or greater indicate an excellent fit (Louviere et al., 2000). The McFadden R2 value for this model was 0.02. Because the overall model was not significant, the individual predictors were not examined further. Table 19 summarizes the results of the regression model.
Table 19. Logistic Regression Results with Mean_pg_mL Predicting Attachment Style
Variable B SE χ2 p OR 95% CI
(Intercept) 0.05 0.66 0.01 .942 - -
Mean_pg_mL 0.04 0.04 1.19 .275 1.05 [0.97, 1.13]
Note. χ2(1) = 1.36, p = .244, McFadden R2 = 0.02.
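The logistic model and McFadden's R2 can be reproduced as sketched below, assuming a 0/1 outcome column named secure (1 = Secure, 0 = Insecure reference) in df; the name is an assumption for illustration.

```python
import statsmodels.formula.api as smf

# Hypothetical sketch: logistic regression of attachment security on oxytocin.
logit_fit = smf.logit("secure ~ Mean_pg_mL", data=df).fit()
print(logit_fit.summary())  # coefficients, chi-square test, and p values

# McFadden's pseudo R-squared: 1 - (model log-likelihood / null log-likelihood).
mcfadden_r2 = 1 - logit_fit.llf / logit_fit.llnull
print(mcfadden_r2)
```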
13. Binary Logistic Regression
A binary logistic regression was conducted to examine whether Mean_pg_mL had a significant effect on the odds of observing the Avoidant category of Attachment Type. The reference category for Attachment Type was Anxious.
Results
The overall model was not significant based on an alpha of 0.05, χ2(1) = 1.46, p = .226, suggesting that Mean_pg_mL did not have a significant effect on the odds of observing the Avoidant category of Attachment Type. McFadden's R-squared was calculated to examine the model fit, where values greater than .2 are indicative of models with an excellent fit (Louviere et al., 2000). The McFadden R-squared value calculated for this model was 0.09. Since the overall model was not significant, the individual predictors were not examined further. Table 20 summarizes the results of the regression model.
Table 20. Logistic Regression Results with Mean_pg_mL Predicting Attachment Type.
Variable B SE χ2 p OR 95% CI
(Intercept) 0.12 1.65 0.01 .941 - -
Mean_pg_mL -0.15 0.14 1.19 .275 0.86 [0.66, 1.12]
Note. χ2(1) = 1.46, p = .226, McFadden R2 = 0.09.
References
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2014). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1-48. https://doi.org/10.18637/jss.v067.i01
DeCarlo, L. T. (1997). On the meaning and use of kurtosis. Psychological Methods, 2(3), 292-307. https://doi.org/10.1037/1082-989X.2.3.292
Field, A. (2017). Discovering statistics using IBM SPSS statistics: North American edition. Sage Publications.
Intellectus Statistics [Online computer software]. (2021). Intellectus Statistics. https://analyze.intellectusstatistics.com/
Louviere, J. J., Hensher, D. A., & Swait, J. D. (2000). Stated choice methods: Analysis and applications. Cambridge University Press. https://doi.org/10.1017/CBO9780511753831
Menard, S. (2009). Logistic regression: From introductory to advanced concepts and applications. Sage Publications. https://doi.org/10.4135/9781483348964
Osborne, J., & Waters, E. (2002). Four assumptions of multiple regression that researchers should always test. Practical Assessment, Research & Evaluation, 8(2), 1-9.
Razali, N. M., & Wah, Y. B. (2011). Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors, and Anderson-Darling tests. Journal of Statistical Modeling and Analytics, 2(1), 21-33.
Statistics Solutions. (2013). What is linear regression? https://www.statisticssolutions.com/academic-solutions/resources/directory-of-statistical-analyses/what-is-linear-regression/
The University of Texas at Austin. (2015). Retrieved from https://sites.utexas.edu/sos/
Yoo, W., Mayberry, R., Bae, S., Singh, K., He, Q. P., & Lillard, J. W., Jr. (2014). A study of effects of multicollinearity in multivariable analysis. International Journal of Applied Science and Technology, 4(5), 9.
Note
Several analyses were conducted as options for the discussion section and may not be included in the final results presentation.