# Multi-scale Dynamic System Reliability Analysis of Actively-controlled Structures under Random Stationary Ground Motions

Published March 16, 2023

To achieve an identified model, the original simplex model (Wiley & Wiley, 1970) assumes that error variances are equal across all waves (the stationary error variance assumption). It is also possible to obtain an identifiable model by equating the true score variances across waves and allowing the error variances to vary (the stationary true score variance assumption). Unfortunately, allowing both true score and error variances to vary by wave leads to a non-identified model (i.e., an insufficient number of degrees of freedom to obtain a unique solution to the SEM). In the present work, both assumptions (stationary true score variance and stationary error variance) are considered, although the stationary error variance assumption seems plausible for most practical situations.
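Concretely, under the classical test theory decomposition used here, the reliability of the scale score at wave w is

$$\rho(S_w) = \frac{\sigma^2_{T_w}}{\sigma^2_{T_w} + \sigma^2_{E_w}},$$

so the stationary error variance assumption fixes \(\sigma^2_{E_w} = \sigma^2_E\) for all waves, while the stationary true score variance assumption fixes \(\sigma^2_{T_w} = \sigma^2_T\); identification requires constraining at least one of the two families of parameters.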

SB8 was used to demonstrate how a poorly selected item on a summated scale can affect the resulting value of alpha. Note that factor analysis is not required to determine Cronbach's alpha. It is the objective of this book to provide a more in-depth understanding of vehicle-bridge interaction from the random vibration perspective. This book is suitable for adoption as a textbook or reference in an advanced structural reliability analysis course.


However, most experienced researchers would insist on running a reliability test for all the factors before using them in subsequent analyses. To compute internal consistency reliability, we will use the alpha function from the psych package (Revelle 2022). To get started, install and access the psych package using the install.packages and library functions, respectively (if you haven't already done so). This completes the reliability measures (Cronbach's alpha) for the scale items in the m-banking demo dataset.
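The text computes alpha with R's psych::alpha; as an illustration, the same quantity can be computed by hand in Python. The score matrix below is hypothetical, not the m-banking data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (rows = respondents, cols = items)
scores = np.array([[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2], [4, 4, 4]])
print(round(cronbach_alpha(scores), 3))  # → 0.897
```

This is the same variance-ratio formula that psych::alpha reports as "raw alpha".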

A research instrument comprising all of the refined construct items is created and administered to a pilot group of representative respondents from the target population. The collected data are tabulated and subjected to correlational analysis or exploratory factor analysis, using a software program such as SAS or SPSS, to assess convergent and discriminant validity. Items that do not meet the expected norms for factor loadings (same-factor loadings above 0.60 and cross-factor loadings below 0.30) should be dropped at this stage. The remaining scales are evaluated for reliability using a measure of internal consistency such as Cronbach's alpha. Scale dimensionality may also be verified at this stage, depending on whether the targeted constructs were conceptualized as unidimensional or multidimensional. Finally, the predictive ability of each construct is evaluated within a theoretically specified nomological network of constructs using regression analysis or structural equation modeling.
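As a sketch, the loading-based retention rule above (same-factor loadings above 0.60, cross-factor loadings below 0.30) could be applied programmatically as follows; the loading matrix and variable names are invented for illustration, not taken from any dataset in this article:

```python
import numpy as np

def retain_items(loadings: np.ndarray, primary: np.ndarray) -> np.ndarray:
    """Indices of items whose same-factor loading exceeds 0.60
    and whose largest cross-factor loading is below 0.30."""
    keep = []
    for i, f in enumerate(primary):
        own = abs(loadings[i, f])
        cross = np.max(np.abs(np.delete(loadings[i], f)))
        if own > 0.60 and cross < 0.30:
            keep.append(i)
    return np.array(keep)

# Hypothetical loading matrix: rows = items, cols = factors
L = np.array([[0.75, 0.10],
              [0.68, 0.25],
              [0.55, 0.05],    # weak same-factor loading → dropped
              [0.20, 0.72],
              [0.40, 0.65]])   # high cross-loading → dropped
primary = np.array([0, 0, 0, 1, 1])  # intended factor for each item
print(retain_items(L, primary))      # → [0 1 3]
```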

## Research Methods for the Social Sciences

On the other hand, if stationary true score and nonstationary measurement error variances are assumed, the opposite effect can occur. For example, if true score variances are actually decreasing and error variances are constant, ρ(Sw) decreases, but the simplex estimate of ρ(Sw) will show reliability to be increasing at each wave. A common method for assessing scale score reliability is Cronbach's α (Hogan, Benjamin, & Brezinsky, 2000), which is based upon the internal consistency of the items comprising the SSM. It can be shown that, under certain assumptions (specified below), the reliability of an SSM is proportional to the item consistency.
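This proportionality can be made explicit through the standardized form of Cronbach's α, which follows the Spearman–Brown relation:

$$\alpha_{\text{std}} = \frac{k\,\bar{r}}{1 + (k-1)\,\bar{r}},$$

where k is the number of items and \(\bar{r}\) is the mean inter-item correlation, so estimated reliability increases monotonically with item consistency for a fixed scale length.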

Reliability measures are one of the key elements of the scale evaluation process. In this article, I will discuss the process of measuring reliability (Cronbach's alpha) for scale items using SPSS and the m-Banking dataset. In this simulation study, illustrated by an analysis of real data, the behavior of six reliability estimators was evaluated when applied to latent multidimensional bifactor structures. When the strict unidimensionality assumption is violated, most estimators tend to produce biases that affect the correct estimation and interpretation of reliability (Raykov, 2001; ten Berge and Sočan, 2004; Green and Yang, 2015; Crutzen and Peters, 2017). We use multilevel confirmatory factor analysis (MCFA) to estimate the reliability of a psychological scale in a two-level framework.


Regarding the limitations of this article, the estimator bias results were presented without detailed evaluation of the conditions and their interactions because of the low dispersion of the Omega Limit and Omega Hierarchical estimators. The high dispersion shown by the unidimensional estimators, however, indicates that their values are sensitive to the specific conditions of application; before using them, additional simulations should be conducted to gauge their bias in the scenario at hand. Future research could expand on these findings by considering categorical variables in the simulation. As observed in Table 2, the Omega Hierarchical and Omega Limit coefficients share 84.5% and 78.7% of their variance, respectively, with the true reliability attributed to the general factor (the two estimators with the least bias and variability).

- Computation of alpha is based on the reliability of a test relative to other tests with the same number of items measuring the same construct of interest (Hatcher, 1994).
- If an adequate set of items is not achieved at this stage, new items may have to be created based on the conceptual definition of the intended construct.
- Let’s imagine our conceptual definition of turnover intentions is a person’s thoughts and intentions to leave an organization, and the three turnover intentions items follow.
- When items with very similar content are included in a test (i.e., content overlap), a positive correlation between errors is observed.
- This is achieved using an efficient finite element analysis (FEA)-based multi-scale reliability framework and sequential optimisation strategy.
- For additional information, I recommend that you refer to a good statistics book.
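A small simulation, under hypothetical parallel-items assumptions, illustrates the content-overlap point above: when two items share an error component, alpha overstates the reliability of the summed scale as a measure of the common true score. All quantities below are simulated, not drawn from any dataset in this article:

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha(X):
    """Cronbach's alpha from an (n, k) item-score matrix."""
    k = X.shape[1]
    return (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum()
                            / X.sum(axis=1).var(ddof=1))

n = 10_000
T = rng.normal(size=n)                         # common true score
clean = T[:, None] + rng.normal(size=(n, 4))   # four items, independent errors

overlap = clean.copy()
shared_err = rng.normal(size=n)                # error shared by items 3 and 4
overlap[:, 2] += shared_err                    # content overlap →
overlap[:, 3] += shared_err                    # positively correlated errors

def true_reliability(X):
    """Proportion of scale-score variance actually due to the true score T."""
    return np.var(X.shape[1] * T, ddof=1) / X.sum(axis=1).var(ddof=1)

print(alpha(clean) - true_reliability(clean))      # ≈ 0: alpha is accurate
print(alpha(overlap) - true_reliability(overlap))  # > 0: alpha overstates reliability
```

In the population, the clean design gives alpha equal to the true reliability (0.8 here), while the overlapping design pushes alpha above the true value because the shared error inflates inter-item correlations.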

Assessing such validity requires creating a “nomological network” showing how constructs are theoretically related to each other. The first statement invokes the PROC CORR procedure with the ALPHA option to perform Cronbach's alpha analysis on all observations with no missing values (dictated by the NOMISS option). Incidentally, the listed variables, except SB8, were the ones that loaded high (i.e., showed high positive correlations) in the factor analysis. Although labeling takes some effort, it makes for easy identification of which construct is running in which procedure. At this point, the named common factors can be used as independent or predictor variables.
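A rough Python analogue of this SAS step — Cronbach's alpha computed on complete cases only, mirroring the NOMISS option — might look like the sketch below; the SB item values are invented for illustration:

```python
import numpy as np
import pandas as pd

def alpha_listwise(df: pd.DataFrame) -> float:
    """Cronbach's alpha on complete cases only (analog of SAS's NOMISS option)."""
    X = df.dropna().to_numpy(dtype=float)   # listwise deletion of missing rows
    k = X.shape[1]
    return (k / (k - 1)) * (1 - X.var(axis=0, ddof=1).sum()
                            / X.sum(axis=1).var(ddof=1))

# Hypothetical items, with one missing response in SB3
df = pd.DataFrame({"SB1": [4, 2, 5, 3, 4],
                   "SB2": [5, 2, 4, 3, 4],
                   "SB3": [4, 3, 5, None, 4]})
print(round(alpha_listwise(df), 3))  # → 0.905 (computed from 4 complete cases)
```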

## 2.3 Initial Steps

The general norm for factor extraction is that each extracted factor should have an eigenvalue greater than 1.0. A more sophisticated technique for evaluating convergent and discriminant validity is the multi-trait multi-method (MTMM) approach. This technique requires measuring each construct (trait) using two or more different methods (e.g., survey and personal observation, or perhaps surveys of two different respondent groups such as teachers and parents for evaluating academic quality). This is an onerous and less popular approach, and is therefore not discussed here.
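The eigenvalue-greater-than-1.0 norm (the Kaiser criterion) can be sketched in Python; the two-factor data below are simulated purely for illustration:

```python
import numpy as np

def kaiser_retained(X: np.ndarray) -> int:
    """Number of factors retained under the eigenvalue-greater-than-1.0 rule."""
    R = np.corrcoef(X, rowvar=False)     # item correlation matrix
    eigvals = np.linalg.eigvalsh(R)      # eigenvalues of the correlation matrix
    return int((eigvals > 1.0).sum())

# Simulated data: six items, three loading on each of two latent factors
rng = np.random.default_rng(42)
n = 5000
F = rng.normal(size=(n, 2))              # two latent factors
E = rng.normal(size=(n, 6))              # independent item errors
X = np.column_stack([F[:, 0] + E[:, i] for i in range(3)] +
                    [F[:, 1] + E[:, i] for i in range(3, 6)])
print(kaiser_retained(X))                # → 2 for this two-factor structure
```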

In the next section, we attempt to answer this question for many practical applications. The data frame includes annual employee survey responses from 156 employees to three Job Satisfaction items (JobSat1, JobSat2, JobSat3), three Turnover Intentions items (TurnInt1, TurnInt2, TurnInt3), and four Engagement items (Engage1, Engage2, Engage3, Engage4). Employees responded to each item using a 5-point response format, ranging from Strongly Disagree (1) to Strongly Agree (5).
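The layout described above can be mocked up as follows; the responses are simulated uniformly at random and are not the actual survey data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
cols = ([f"JobSat{i}" for i in range(1, 4)]       # three Job Satisfaction items
        + [f"TurnInt{i}" for i in range(1, 4)]    # three Turnover Intentions items
        + [f"Engage{i}" for i in range(1, 5)])    # four Engagement items

# 156 employees, 5-point responses: Strongly Disagree (1) … Strongly Agree (5)
df = pd.DataFrame(rng.integers(1, 6, size=(156, 10)), columns=cols)
print(df.shape)  # → (156, 10)
```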


For these scales, the total variance in the SSMs appears to be decreasing, resulting in changes in reliability estimates when either true score variance or error variance is constrained to be constant over time. Thus, depending on which stationarity assumption is chosen, reliability can appear to either increase or decrease over time. Cronbach's alpha coefficient remains the main estimator of reliability despite its limitations (Green and Yang, 2009; Sijtsma, 2009a, 2012; Yang and Green, 2011; Cho and Kim, 2015; Trizano-Hermosilla and Alvarado, 2016; McNeish, 2018). Even when there are correlations between errors, the alpha coefficient delivers higher reliability values than would be obtained if the relevant corrections were made, provided the correlations between the errors of these items are positive (Raykov, 2001).

We are going to refer to level-1 as the within-level and to level-2 as the between-level. The methods are described in Geldhof, Preacher, and Zyphur (2014) and Shrout and Lane (2012). It is also possible to obtain an identified model by assuming the reliability ratio is constant over waves (i.e., stationary reliability). This produced reliability estimates that were constant across waves and approximately equal to the average reliability obtained under the alternative stationarity assumptions.