Abstract
We propose a statistical model for multicomponent stress-strength reliability under the inverted exponentiated Rayleigh distribution. The model is specifically designed for complex data structures where component strength is measured using block adaptive Type-II progressive hybrid censoring, while operational stress is captured as upper k-records with inter-k-record times. After formulating the reliability function for an s-out-of-k system, we develop both frequentist and Bayesian estimation procedures. Frequentist inference is based on the maximum likelihood estimator, from which we construct asymptotic and bootstrap confidence intervals. For the Bayesian analysis, we use squared error and linear exponential loss functions, obtaining estimates via the Tierney and Kadane approximation and a Metropolis-Hastings sampling algorithm. The performance of the estimators is evaluated through Monte Carlo simulations, which compare their bias and mean squared error. The results indicate that the Bayesian estimators are consistently more accurate than their frequentist counterparts. An analysis of two real datasets confirms the model’s practical utility for assessing system reliability in complex scenarios.
Introduction
Reliability theory offers essential statistical methods for predicting the lifespan and performance of systems, a critical task for preventing failures and ensuring safety in fields like manufacturing and healthcare (Breneman et al.1). A central idea in this field is stress-strength reliability (SSR), which calculates the probability (\(\zeta\)) that a component’s strength (X) can withstand the operational stress (Y) it encounters, defined as \(\zeta = P(X > Y)\) (Kotz et al.2). An accurate SSR assessment is fundamental for guaranteeing that components operate dependably under diverse conditions.
However, modern systems are rarely single components; they are complex assemblies of interconnected parts. This reality pushes the field toward multicomponent stress-strength reliability (MSSR), a more intricate and analytically demanding area (Khan and Jan3). Because the failure of one part can trigger a cascade across the entire system, precise MSSR analysis is indispensable for robust design and effective maintenance (Hassan et al.4). Researchers have extensively studied SSR and MSSR estimation using various distributions and data types (e.g., Baklizi5; Condino et al.6; Mahmoud et al.7; Nadar and Kizilaslan8; Tarvirdizade and Ahmadpour9; Wang and Ye10).
Modeling these phenomena correctly hinges on the right probability distribution. The inverted exponentiated Rayleigh (IER) distribution has become prominent for its impressive flexibility, capable of describing a wide range of hazard rate shapes–including increasing, decreasing, and bathtub curves–often seen in reliability testing (Fan and Gui11). This adaptability makes the IER distribution a strong candidate for modeling both stress and strength variables, providing a solid foundation for complex reliability analysis.
In practice, collecting complete lifetime data is often impossible due to high costs, time constraints, or the need to remove units from testing prematurely. This has led to the development of various censoring schemes (Balakrishnan12). Advanced methods like progressive censoring allow for the removal of surviving units at different stages, which enhances experimental flexibility (Balakrishnan and Cramer13). An even more sophisticated approach is the adaptive Type-II progressive hybrid censoring (AII-PHCS) scheme, which balances test duration and data yield by combining a fixed number of required failures with a time limit (Balakrishnan and Kundu14).
The AII-PHCS framework, introduced by Ng et al.15, allows an experiment to run beyond a preset time limit (T) to ensure a target number of failures (m) is observed. In a test with n units and a censoring plan \((R_1, \ldots , R_m)\), \(R_i\) units are removed at the time of the i-th failure, \(x_i\). The final removal group is \(R_m = n - m - \sum _{i=1}^{m-1}R_i\). The scheme’s brilliance lies in its adaptive nature. If the m-th failure (\(x_m\)) happens before time T, the experiment concludes as a standard progressive Type-II test. However, if an earlier failure \(x_J\) occurs before T but the next, \(x_{J+1}\), is projected to occur after T (i.e., \(x_J< T < x_{J+1}\)), the plan adapts: all scheduled removals from \(R_{J+1}\) to \(R_{m-1}\) are canceled. Instead, all survivors are removed once the target m-th failure is finally observed. This redefines the final removal group as \(R^*_{m} = n - m - \sum _{j=1}^{J} R_{j}\) and guarantees the desired number of failure observations. This practical flexibility has made adaptive censoring a subject of significant research interest (Dutta et al.16, Elshahhat et al.17, Nassar and Abo-Kasem18).
A further practical challenge is the inability to monitor all units simultaneously due to facility or equipment limitations. Block censoring addresses this by dividing the total N units into g smaller, independent groups (with \(n_i\) units each, where \(\sum n_i = N\)). These blocks are then tested in parallel, making experiments more manageable and efficient when resources are constrained (Zhu19).
For stress modeling, extreme events are often the trigger for failure. This is where record statistics become invaluable for analysis in fields from industry to finance (Arnold et al.20; Safariyan et al.21; Singh and Singh22). While classical records track new maximums or minimums, k-records generalize this to the k-th largest or smallest value. An innovative extension known as upper k-records given inter-k-record times (URIT) captures not only the extreme values but also the crucial time dimension between them, providing richer data for processes driven by extremes (Hassan et al.4). The value of including inter-record times for improving reliability estimates has been recently highlighted (Pak et al.23).
These advanced tools for data collection–complex censoring for strength and record values for stress–have largely evolved separately. A critical gap exists in the literature: no unified framework can handle the complex but common scenario where component strength is measured using block adaptive Type-II progressive hybrid censoring (BAIIPH) while system stress follows the patterns of extreme records (URIT), particularly under the flexible IER distribution. This study bridges that critical gap by developing a novel and comprehensive statistical inference framework.
Our work makes several key contributions. We introduce and formulate a novel MSSR model that integrates BAIIPH for strength data and URIT for stress data, a combination that to our knowledge has not been explored. Building on this model, we develop a complete inferential toolkit using both frequentist and Bayesian methods. For the frequentist approach, we derive maximum likelihood estimators (MLEs) and construct asymptotic (ACI) and bootstrap-based confidence intervals. For the Bayesian paradigm, we employ squared error (SE) and linear exponential (LINEX) loss functions, deriving estimates using both Tierney and Kadane’s (TK) approximation and the Metropolis-Hastings (MH) algorithm. We then test the performance of all proposed estimators through extensive Monte Carlo simulations, evaluating bias, mean squared error, and interval properties. Finally, we demonstrate the model’s practical value by applying it to a real-world engineering dataset, offering tangible insights into system reliability.
The structure of this paper follows the logical progression of our analysis. We begin by formulating the fundamental multicomponent stress-strength reliability model, which integrates the flexible inverted exponentiated Rayleigh distribution with our specialized data collection schemes (Sect. “Data description and model formulation”). Building upon this model, we then construct a complete inferential toolkit, developing both frequentist and Bayesian estimation procedures (Sect. “Estimation of multicomponent stress-strength reliability”). The finite-sample performance of these proposed estimators is rigorously assessed through extensive Monte Carlo simulations (Sect. “Simulation study”). Finally, we demonstrate the framework's practical utility by applying it to two real-world engineering datasets (Sect. “Empirical applications”), before summarizing our conclusions and their broader significance (Sect. “Concluding remarks”).
Data description and model formulation
Our analysis is built upon the inverted exponentiated Rayleigh distribution, a flexible lifetime model we use for both stress and strength variables. Since its introduction by Fatima and Ahmad24, the IER has become increasingly important in reliability analysis. Its key advantage is the ability to capture non-monotone hazard rate functions–including increasing, decreasing, and bathtub shapes–that appear frequently in real-world component failure data. This adaptability makes it a powerful alternative to more conventional distributions.
The IER distribution is part of the inverse exponentiated family and can be viewed as a special case of the inverted exponentiated Weibull distribution25. A random variable X following the IER distribution, denoted \(X \sim \text {IER}(\theta , \lambda )\), has a shape parameter \(\theta > 0\) and a scale parameter \(\lambda > 0\). Its probability density function (PDF) is given by:
$$f(x; \theta , \lambda ) = 2\theta \lambda x^{-3} e^{-\lambda /x^{2}}\left( 1-e^{-\lambda /x^{2}}\right) ^{\theta -1}, \quad x > 0. \quad (2.1)$$
The corresponding cumulative distribution function (CDF) is:
$$F(x; \theta , \lambda ) = 1-\left( 1-e^{-\lambda /x^{2}}\right) ^{\theta }, \quad x > 0. \quad (2.2)$$
From these, we can define the reliability function S(x) and the hazard rate function (HRF) h(x) as:
$$S(x) = \left( 1-e^{-\lambda /x^{2}}\right) ^{\theta }, \qquad h(x) = \frac{f(x)}{S(x)} = \frac{2\theta \lambda x^{-3} e^{-\lambda /x^{2}}}{1-e^{-\lambda /x^{2}}}, \quad x > 0.$$
We chose the IER distribution because it can model complex failure behaviors that simpler models, like the exponential or Rayleigh, cannot. Such models are limited to monotonic hazard rates and often fail to represent the nuanced life cycles of real components. The IER excels here, handling both decreasing and unimodal hazard functions. Its capacity to model a unimodal (or “upside-down bathtub”) hazard rate is particularly valuable, as this shape describes phenomena where the risk of failure first rises to a peak and then declines–a pattern seen in temporary stress events, like the operational load on a component during a specific mission.
Figure 1. Visual illustration of the IER distribution’s flexibility. The left panel shows the PDF’s ability to model various forms of skewness, while the right panel demonstrates the HRF’s capacity to capture non-monotonic failure rates, including the unimodal (upside-down bathtub) shape that is a key feature motivating its use in this study.
Figure 1 visually demonstrates this versatility. The PDF can be adjusted to fit various forms of skewness, and the HRF confirms the unimodal hazard shape for different parameter settings. Because stress and strength often originate from distinct physical processes, the IER’s robustness allows us to characterize their diverse behaviors within a single, unified framework. A significant analytical advantage also comes from its closed-form CDF (Eq. (2.2)), which greatly simplifies the derivation of both the stress-strength reliability and the likelihood functions needed for complex data structures. This combination of practical flexibility and analytical tractability, which has been leveraged in other studies11, makes the IER distribution a theoretically sound and highly relevant foundation for our work.
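For readers who wish to experiment with the distribution, the following minimal Python sketch implements the IER density, CDF, survival and hazard functions and an inverse-CDF sampler; it assumes the standard parameterization of Eqs. (2.1) and (2.2) and is an illustration rather than the authors' code.

```python
import numpy as np

def ier_pdf(x, theta, lam):
    """IER density f(x) = 2*theta*lam*x^{-3} e^{-lam/x^2} (1 - e^{-lam/x^2})^{theta-1}."""
    z = np.exp(-lam / x**2)
    return 2.0 * theta * lam * x**-3 * z * (1.0 - z)**(theta - 1.0)

def ier_cdf(x, theta, lam):
    """IER distribution function F(x) = 1 - (1 - e^{-lam/x^2})^theta."""
    return 1.0 - (1.0 - np.exp(-lam / x**2))**theta

def ier_sf(x, theta, lam):
    """Survival function S(x) = (1 - e^{-lam/x^2})^theta."""
    return (1.0 - np.exp(-lam / x**2))**theta

def ier_hazard(x, theta, lam):
    """Hazard rate h(x) = f(x)/S(x)."""
    return ier_pdf(x, theta, lam) / ier_sf(x, theta, lam)

def ier_ppf(u, theta, lam):
    """Inverse CDF: solve F(x) = u for x; useful for random sampling."""
    return np.sqrt(-lam / np.log(1.0 - (1.0 - u)**(1.0 / theta)))

# Quick check: the empirical CDF of an inverse-transform sample roughly matches F.
rng = np.random.default_rng(0)
sample = ier_ppf(rng.uniform(size=10_000), theta=1.5, lam=2.0)
print(np.mean(sample <= 1.2), ier_cdf(1.2, 1.5, 2.0))
```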
The multicomponent stress-strength model
The classic stress-strength model, defined as \(\zeta = P(X > Y)\), is a foundational concept in reliability analysis2. However, modern systems are rarely single components; instead, they are complex assemblies of multiple parts that must all withstand operational stresses. This reality has motivated the extension of the classic model to a multicomponent stress-strength framework, which is now a central focus of reliability theory4,3.
In this study, we consider a coherent system made up of \(\kappa\) independent and identically distributed (iid) strength components, \(X_1, X_2, \dots , X_\kappa\), each subjected to a common stress, Y. We adopt the “s-out-of-\(\kappa\):G” system configuration, a practical model for systems with built-in redundancy. Under this configuration, the system functions as long as at least s of its \(\kappa\) components have a strength greater than the applied stress (\(1 \le s \le \kappa\)). Examples include multi-engine aircraft that can remain airborne with a minimum number of operational engines, or data storage arrays that maintain integrity despite several disk failures.
To formalize this, we assume the strength variables \(\{X_i\}_{i=1}^\kappa\) are iid and follow an \(\text {IER}(\theta _1, \lambda _1)\) distribution, while the stress variable Y follows an \(\text {IER}(\theta _2, \lambda _2)\) distribution. All variables are assumed to be mutually independent. The reliability of this multicomponent system, which we denote by \(\zeta _{s,\kappa }\), is the probability that at least s components have a strength exceeding the stress. If we let N be the number of components for which \(X_i > Y\), then because the components are iid, the probability that any single component withstands the stress is a constant, \(\vartheta = P(X > Y)\). This implies that N follows a binomial distribution with parameters \(\kappa\) and \(\vartheta\). The system reliability is therefore given by the binomial cumulative probability:
$$\zeta _{s,\kappa } = P(N \ge s) = \sum _{i=s}^{\kappa } \binom{\kappa }{i}\, \vartheta ^{i} (1-\vartheta )^{\kappa -i}. \quad (2.5)$$
The core of this expression is the single-component reliability, \(\vartheta\), which is found by integrating the survival function of X against the probability density function of Y:
$$\vartheta = P(X > Y) = \int _{0}^{\infty } S_X(y;\theta _1,\lambda _1)\, f_Y(y;\theta _2,\lambda _2)\, dy = \int _{0}^{\infty } \left( 1-e^{-\lambda _1/y^{2}}\right) ^{\theta _1}\, 2\theta _2\lambda _2\, y^{-3} e^{-\lambda _2/y^{2}}\left( 1-e^{-\lambda _2/y^{2}}\right) ^{\theta _2-1} dy.$$
This integral typically does not have a closed-form solution. However, a closed-form expression can be obtained under the practical assumption that the scale parameters are equal, i.e., \(\lambda _1 = \lambda _2 = \lambda\). This condition is often justified in experiments where stress and strength are measured on the same scale or are subject to similar environmental factors. With this assumption, the substitution \(u = (1-e^{-\lambda /y^2})\) simplifies the integral to:
$$\vartheta = \theta _2 \int _{0}^{1} u^{\theta _1+\theta _2-1}\, du = \frac{\theta _2}{\theta _1+\theta _2}.$$
This result gives the probability of single-component success as \(\vartheta = \frac{\theta _2}{\theta _1 + \theta _2}\), and the corresponding probability of failure as \(1 - \vartheta = \frac{\theta _1}{\theta _1 + \theta _2}\). Substituting these probabilities into the binomial sum in Eq. (2.5) yields the final expression for the multicomponent reliability:
$$\zeta _{s,\kappa } = \sum _{i=s}^{\kappa } \binom{\kappa }{i} \left( \frac{\theta _2}{\theta _1+\theta _2}\right) ^{i} \left( \frac{\theta _1}{\theta _1+\theta _2}\right) ^{\kappa -i}. \quad (2.7)$$
This function, which depends only on the shape parameters \(\theta _1\) and \(\theta _2\), is the central quantity of interest in our study. The subsequent sections are dedicated to developing frequentist and Bayesian methods for its estimation from the complex data structures unique to this research.
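Numerically, \(\zeta _{s,\kappa }\) is just a binomial tail probability, as the short sketch below illustrates (the parameter values are arbitrary, not taken from the paper).

```python
from scipy.stats import binom

def zeta_sk(theta1, theta2, s, kappa):
    """Multicomponent reliability of Eq. (2.7): P(at least s of kappa components survive),
    with single-component success probability vartheta = theta2 / (theta1 + theta2)."""
    vartheta = theta2 / (theta1 + theta2)
    # sf(s - 1) = P(N > s - 1) = P(N >= s) for N ~ Binomial(kappa, vartheta)
    return binom.sf(s - 1, kappa, vartheta)

print(zeta_sk(theta1=1.0, theta2=2.0, s=3, kappa=5))
```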
Observed data structure
The theoretical models in reliability analysis are only as good as the data used to fit them. In practice, collecting complete lifetime data for every component is often infeasible due to high costs, long experimental durations, or the physical constraints of testing facilities. To address these challenges, modern reliability studies employ advanced data collection strategies, known as censoring and record-value schemes, to maximize information gain while maintaining experimental efficiency.
Strength data (X-sample): block adaptive type-II progressive hybrid censoring
For the strength component, we adopt a BAIIPH scheme. This is a highly flexible, multi-stage approach that synthesizes several state-of-the-art censoring techniques to create a realistic and efficient testing protocol26. The BAIIPH scheme can be understood by breaking it down into its core components:
1. Block censoring: To manage facility limitations, the total of N units is divided into g smaller, independent groups (or “blocks”), where the i-th block contains \(n_i\) units (\(\sum _{i=1}^g n_i = N\)). These blocks are tested in parallel, often in different facilities26,19.
2. Adaptive Type-II progressive hybrid censoring: Each block i is subjected to an adaptive hybrid censoring plan. This plan is designed to guarantee the observation of a predetermined number of failures, \(s_i,\) while offering flexibility in the experimental duration15. A pre-specified progressive censoring plan \(\textbf{R}_i = (R_{i,1}, R_{i,2}, \dots , R_{i,s_i})\) is set, where \(R_{i,j}\) units are scheduled for removal at the time of the j-th failure, \(x_{i,j:s_i}\). A maximum ideal test time, \(T_i\), is also specified for each block. The “adaptive” procedure for each block i unfolds in one of two ways, as detailed in26:
- Case I: Experiment completes early. If the \(s_i\)-th failure occurs before the time limit \(T_i\) (i.e., \(x_{i,s_i:s_i} < T_i\)), the experiment for that block is terminated. The censoring is carried out exactly as planned, and the remaining \(R_{i,s_i} = n_i - s_i - \sum _{j=1}^{s_i-1} R_{i,j}\) units are removed.
- Case II: Experiment time is extended. If the \(s_i\)-th failure has not occurred by time \(T_i\), let \(J_i\) be the number of failures observed up to that point. To ensure the target of \(s_i\) failures is met, the plan is adapted: the intermediate removals \(R_{i,J_i+1}, \dots , R_{i,s_i-1}\) are cancelled (set to 0). The experiment continues past \(T_i\) until the \(s_i\)-th failure is observed at time \(x_{i,s_i:s_i}\). At this point, all remaining \(R^*_{i,s_i} = n_i - s_i - \sum _{j=1}^{J_i} R_{i,j}\) units are removed.
Likelihood function for strength data: Given that the g blocks are independent, the total likelihood function for the strength data is the product of the likelihoods from each block. Let the PDF and survival function of the strength variable be \(f_X(\cdot ; \theta _1, \lambda _1)\) and \(S_X(\cdot ; \theta _1, \lambda _1)\), respectively. The data from block i is denoted by \(\textbf{x}_i = (x_{i,1:s_i}, \dots , x_{i,s_i:s_i})\). The likelihood function for the entire strength dataset \(\textbf{x}\) is:
$$L_X(\theta _1, \lambda _1 \,|\, \textbf{x}) = \prod _{i=1}^{g} C_i \prod _{j=1}^{s_i} f_X(x_{i,j:s_i}; \theta _1, \lambda _1)\, \left[ S_X(x_{i,j:s_i}; \theta _1, \lambda _1)\right] ^{R_{i,j}},$$
where the censoring plan for block i is \(\textbf{R}_i = (R_{i,1}, \dots , R_{i,s_i})\). The values of \(R_{i,j}\) are determined by the adaptive scheme described above (i.e., some may be set to 0 in Case II) and \(C_i=\prod _{j=1}^{s_i}\left( n_i -j+1-\sum _{l=1}^{\min (j-1,J_i)}R_{i,l} \right)\) is a normalization constant.
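As an illustration, the fragment below evaluates this block-censored strength log-likelihood numerically; it assumes the product form written above with the adapted removal numbers \(R_{i,j}\) already recorded, uses the IER parameterization of Eqs. (2.1) and (2.2), and omits the constant \(C_i\), which does not affect maximization.

```python
import numpy as np

def ier_logpdf(x, theta, lam):
    # log f(x) for the IER(theta, lam) density of Eq. (2.1)
    z = np.exp(-lam / x**2)
    return np.log(2 * theta * lam) - 3 * np.log(x) - lam / x**2 + (theta - 1) * np.log1p(-z)

def ier_logsf(x, theta, lam):
    # log S(x) = theta * log(1 - e^{-lam/x^2})
    return theta * np.log1p(-np.exp(-lam / x**2))

def strength_loglik(theta1, lam1, blocks):
    """blocks: list of (failure_times, removals) pairs, one per block, where
    removals[j] is the (possibly adapted) number R_{i,j} withdrawn at the j-th failure."""
    ll = 0.0
    for times, removals in blocks:
        t = np.asarray(times, dtype=float)
        r = np.asarray(removals, dtype=float)
        ll += np.sum(ier_logpdf(t, theta1, lam1))
        ll += np.sum(r * ier_logsf(t, theta1, lam1))
    return ll
```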
Stress data (Y-sample): upper k-records with inter-k-record times
For the stress variable, we use a data collection model based on upper k-records with inter-k-record times. Standard stress-strength analysis often assumes a random sample of stress values. However, in many applications, system failure is triggered by extreme stress events. Record value theory provides a natural framework for modeling such extremes20. A k-record is a value that ranks among the top k observations seen thus far, so collecting data as k-records focuses attention on the most severe stresses a system is likely to encounter. Our model goes a step further by incorporating the inter-k-record times, that is, the number of observations (or time) elapsing between successive k-records. This temporal dimension provides richer information than the record values alone, as it helps characterize the frequency and clustering of extreme stress events, a concept that has proven valuable in reliability estimation4,23. The observed stress data therefore consists of an increasing sequence of upper k-record values together with these inter-k-record times. As shown by MirMostafaee et al.27, this additional temporal information can be used to significantly improve the precision of nonparametric or conditional inference. While we acknowledge this important feature, for the purpose of parametric estimation of the underlying stress distribution, the likelihood function is based on the sequence of the record values themselves.
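To make the sampling scheme concrete, the short Python sketch below extracts upper k-records and inter-k-record times from a stream of observations; the callable `draw` is a generic stand-in for any stress model (for instance an IER sampler) and is not part of the article's formulation.

```python
import random

def upper_k_records(draw, k, n_records):
    """Simulate upper k-records and inter-k-record times.

    draw       -- callable returning one new stress observation
    k          -- record order (k = 1 gives ordinary upper records)
    n_records  -- number of k-records to collect
    Returns (records, deltas): the k-record values and the inter-k-record times
    delta_i = T_{i+1} - T_i between their occurrence indices.
    """
    top_k = sorted(draw() for _ in range(k))   # the k largest values seen so far
    records, record_times = [top_k[0]], [k]    # first k-record: k-th largest of the first k draws
    t = k
    while len(records) < n_records:
        t += 1
        u = draw()
        if u > top_k[0]:                       # new observation exceeds the current k-th largest
            top_k = sorted(top_k[1:] + [u])    # update the running top-k set
            records.append(top_k[0])           # new k-record = new k-th largest value
            record_times.append(t)
    deltas = [record_times[i + 1] - record_times[i] for i in range(len(record_times) - 1)]
    return records, deltas

# Toy usage with a generic positive random draw (replace with an IER sampler in practice).
random.seed(1)
y, delta = upper_k_records(lambda: random.expovariate(1.0), k=3, n_records=5)
print(y, delta)
```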
Likelihood function for stress data: Let \(U_{1},U_{2},\ldots ,U_{a}\) be iid continuous random variables from IER \((\theta _2, \lambda _2)\) distribution. The sequence of upper k-record values is defined as \(Y_{i}=U_{T_{i}-k+1:T_{i}}\), \(i=1,2,\ldots ,n\), \(a>n\) where \(T_{1}=k\) with probability 1, and for \(i\ge 2\),
\(T_{i}=\min \left\{ j:j> T_{i-1}, U_{j}> U_{T_{i-1}-k+1:T_{i-1}}\right\}\). For \(k=1\), the k-records simply become the usual records. Moreover, the corresponding upper inter-k-record times are defined as \(\delta _{i}=T_{i+1}-T_{i}\), \(i\ge 1\). Let the set of upper k-records be denoted by \(\textbf{y}= (y_{1},y_{2},\ldots ,y_{n})\) and the corresponding set of inter-k-record times by \(\mathbf {\delta }= (\delta _{1},\delta _{2},\ldots ,\delta _{n-1})\in \mathbb {N}^{n-1}\). This structure is designed to capture information about extreme stress events. As established by Arnold et al.20, the joint likelihood function for the observed records and inter-record times, given that the underlying stress variable Y follows an \(\text {IER}(\theta _2, \lambda _2)\) distribution, is:
This expression can be written more compactly in terms of the hazard rate function, \(h_Y(y_i) = f_Y(y_i)/S_Y(y_i)\):
Since the multicomponent failure strength and stress data are assumed to be collected independently, the total likelihood function for all parameters is the product of the individual likelihoods:
$$L(\theta _1, \theta _2, \lambda _1, \lambda _2 \,|\, \textbf{x}, \textbf{y}, \varvec{\delta }) = L_X(\theta _1, \lambda _1 \,|\, \textbf{x}) \times L_Y(\theta _2, \lambda _2 \,|\, \textbf{y}, \varvec{\delta }). \quad (2.11)$$
To summarize, the complete dataset for our analysis is composed of two distinct and independently gathered components, as illustrated in Scheme 1. The structure of the observed data is as follows:
- Strength data (\(\textbf{x}\)): The data on the left represent the failure times from n independent multicomponent life-testing experiments. For each experiment \(\tau \in \{1, 2, \dots , n\}\), the observed data consists of the ordered failure times \(\{x_{\tau , 11}, x_{\tau , 12}, \dots , x_{\tau , g_{\tau } s_{g\tau }}\}\) collected from \(g_\tau\) blocks of an \(s_{i}\)-out-of-\(\kappa\):G system under the BAIIPH censoring scheme.
- Stress data (\(\textbf{y}, \mathbf {\delta }\)): The data on the right represent the extreme stress values. This consists of a sequence of n upper k-record values, \(\{y_1, y_2, \dots , y_n\}\), and their corresponding inter-k-record times.
The strength variables \(\{x_{\tau ,ij}\}\) and stress variables \(\{y_i\}\) are assumed to be independent. This comprehensive data structure provides a rich foundation for the inferential procedures developed in the remainder of this paper.
Estimation of multicomponent stress-strength reliability
With the model’s theoretical components established, we turn to the primary goal of this study: statistical inference for the system reliability, \(\zeta _{s,\kappa }\). This section develops a comprehensive set of estimation procedures from both frequentist and Bayesian viewpoints. We begin with the frequentist approach by deriving the MLE for \(\zeta _{s,\kappa }\) and leveraging this result to construct asymptotic and bootstrap-based confidence intervals. We then transition to the Bayesian framework to derive posterior estimates for the parameters and the reliability function itself. This involves using Tierney and Kadane’s approximation for point estimation under different loss functions and a Metropolis-Hastings algorithm to generate samples for constructing credible intervals. Together, these methods provide a robust toolkit for assessing system reliability under the complex data conditions defined in this paper.
Maximum likelihood estimation of \(\zeta _{s,\kappa }\)
Maximum likelihood estimators are foundational in frequentist inference due to their desirable asymptotic properties of consistency and efficiency. Our objective is to derive the MLE for the multicomponent stress-strength reliability, \(\zeta _{s,\kappa }\), using the two independent datasets previously described: the strength data \(\textbf{x}\) from the BAIIPH scheme26, and the stress data \((\textbf{y}, \varvec{\delta })\) from the URIT scheme20. The model assumes that both strength (X) and stress (Y) variables follow IER distributions with a common scale parameter, such that \(X \sim \text {IER}(\theta _1, \lambda )\) and \(Y \sim \text {IER}(\theta _2, \lambda )\)24.
Since the strength and stress data are collected independently, the total likelihood function for the parameter vector \(\varvec{\Theta } = (\theta _1, \theta _2, \lambda )\) is the product of the individual likelihoods. From Eq. (2.11), the combined likelihood function is:
where \(C_{\text {const}} = \left( \prod _{\tau =1}^{n} \prod _{i=1}^{g_\tau } C_i \right) \cdot k^n \cdot 2^{n+\sum _{\tau =1}^{n} \sum _{i=1}^{g_\tau } s_i} \cdot \left( \prod _{\tau =1}^{n} \prod _{i=1}^{g_\tau } \prod _{j=1}^{s_i} x_{\tau , ij}^{-3} \right) \cdot \left( \prod _{i=1}^n y_i^{-3} \right)\).
For computational tractability, we work with the log-likelihood function, \(\ell (\varvec{\Theta }) = \log L(\varvec{\Theta })\), which is given by:
The MLEs of the parameters, denoted \(\hat{\varvec{\Theta }}=(\hat{\theta }_1, \hat{\theta }_2,\hat{\lambda })\), are the values that maximize this function. They are found by solving the system of non-linear equations obtained by setting the partial derivatives of \(\ell\) to zero:
Letting \(S_{\text {total}} = \sum _{\tau =1}^{n} \sum _{i=1}^{g_\tau } s_i\) be the total number of failures from the strength experiments, the resulting likelihood equations are as follows.
Differentiating \(\ell\) with respect to \(\theta _1\) gives:
Differentiating with respect to \(\theta _2\) gives:
Finally, defining \(\Psi (z) = \frac{z^{-2}e^{-\lambda /z^2}}{1-e^{-\lambda /z^2}}\) for clarity, the derivative with respect to \(\lambda\) is:
Due to the highly non-linear and intricate form of these partial derivatives, an analytical solution for \(\hat{\theta }_1, \hat{\theta }_2,\) and \(\hat{\lambda }\) is not available. We must therefore employ iterative numerical optimization algorithms, such as the Newton-Raphson or quasi-Newton methods, to find the MLEs numerically.
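To illustrate this numerical step, the short Python sketch below maximizes a log-likelihood with a bounded quasi-Newton routine (L-BFGS-B). For brevity it codes only the censored strength contribution in the form sketched earlier and uses made-up toy data; in practice the full log-likelihood of Eq. (3.2), including the stress term, would be supplied to the same optimizer.

```python
import numpy as np
from scipy.optimize import minimize

def ier_logpdf(x, theta, lam):
    z = np.exp(-lam / x**2)
    return np.log(2 * theta * lam) - 3 * np.log(x) - lam / x**2 + (theta - 1) * np.log1p(-z)

def ier_logsf(x, theta, lam):
    return theta * np.log1p(-np.exp(-lam / x**2))

def neg_log_lik(params, blocks):
    """Minus a (partial) log-likelihood: only the censored strength term is coded here;
    the stress term of Eq. (3.2) would be added in exactly the same way."""
    theta1, lam = params
    ll = 0.0
    for times, removals in blocks:
        t = np.asarray(times, float)
        r = np.asarray(removals, float)
        ll += np.sum(ier_logpdf(t, theta1, lam)) + np.sum(r * ier_logsf(t, theta1, lam))
    return -ll

# Toy censored data (illustrative values only): two blocks of
# (observed failure times, units removed at each failure).
blocks = [([0.8, 1.1, 1.6, 2.3], [1, 0, 0, 2]),
          ([0.9, 1.4, 2.0],      [0, 1, 3])]
res = minimize(neg_log_lik, x0=[1.0, 1.0], args=(blocks,),
               method="L-BFGS-B", bounds=[(1e-6, None), (1e-6, None)])
theta1_hat, lam_hat = res.x
print(theta1_hat, lam_hat, res.success)
```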
Once the parameter estimates \(\hat{\varvec{\Theta }}\) are obtained, the MLE of the multicomponent reliability \(\zeta _{s,\kappa }\) follows from the invariance property of MLEs. This property allows us to find the MLE of a function of parameters by simply evaluating the function at the parameter MLEs. Applying this to Eq. (2.7), the MLE of \(\zeta _{s,\kappa }\) is:
$$\hat{\zeta }_{s,\kappa ,\text {MLE}} = \sum _{i=s}^{\kappa } \binom{\kappa }{i} \left( \frac{\hat{\theta }_2}{\hat{\theta }_1+\hat{\theta }_2}\right) ^{i} \left( \frac{\hat{\theta }_1}{\hat{\theta }_1+\hat{\theta }_2}\right) ^{\kappa -i}. \quad (3.4)$$
This point estimate, \(\hat{\zeta }_{s,\kappa , \text {MLE}}\), is the foundation for constructing the confidence intervals discussed in the following sections and is consistent with methods used for other complex stress-strength models4.
Remark 1: Let the log-likelihood function be denoted by \(\ell (\varvec{\Theta } | \text {data})\), where \(\varvec{\Theta } = (\theta _1, \theta _2, \lambda )\). If a solution \(\hat{\varvec{\Theta }} = (\hat{\theta }_1, \hat{\theta }_2, \hat{\lambda })\) to the system of likelihood equations \(\nabla \ell (\varvec{\Theta }) = \textbf{0}\) exists within the parameter space \(\Omega = \{ (\theta _1, \theta _2, \lambda ): \theta _1> 0, \theta _2> 0, \lambda > 0 \}\), then this solution is the unique maximum likelihood estimator of \(\varvec{\Theta }\). The uniqueness of this solution is established by demonstrating that the log-likelihood function \(\ell (\varvec{\Theta })\) is strictly concave at \(\hat{\varvec{\Theta }}\). This requires showing that its Hessian matrix, \(\textbf{H}\), is negative definite when evaluated at \(\hat{\varvec{\Theta }}\). The Hessian matrix is given by:
$$\textbf{H} = \begin{pmatrix} \ell _{11} & \ell _{12} & \ell _{13} \\ \ell _{21} & \ell _{22} & \ell _{23} \\ \ell _{31} & \ell _{32} & \ell _{33} \end{pmatrix},$$
where \(\ell _{ij} = \frac{\partial ^2 \ell }{\partial \theta _i \partial \theta _j}\), with \((\theta _1, \theta _2, \theta _3)\) corresponding to \((\theta _1, \theta _2, \lambda )\). For \(\textbf{H}\) to be negative definite, its leading principal minors must satisfy the conditions \(H_1 < 0\), \(H_2 > 0\), and \(H_3 < 0\). The relevant second-order partial derivatives, evaluated at \(\hat{\varvec{\Theta }}\), are as follows:
The remaining derivatives are:
where \(\Psi (z, \lambda ) = \frac{z^{-2}e^{-\lambda /z^2}}{1-e^{-\lambda /z^2}}\) is as defined earlier and its derivative with respect to \(\lambda\), \(\Psi '(z, \lambda ) = \frac{-z^{-4}e^{-\lambda /z^2}}{(1-e^{-\lambda /z^2})^2}\), is always negative. Given these components, the negative definiteness of \(\textbf{H}\) at the solution \(\hat{\varvec{\Theta }}\) is verified by inspecting its leading principal minors. The first minor, \(H_1 = \ell _{11}\), is clearly negative. Since \(\ell _{12}=0\), the second minor is \(H_2 = \ell _{11}\ell _{22}\). Given that \(\ell _{22}\) is also strictly negative (as a sum of negative terms), it follows that \(H_2 > 0\). The third minor, \(H_3 = \det (\textbf{H})\), simplifies to \(H_3 = \ell _{11}\ell _{22}\ell _{33} - \ell _{11}\ell _{23}^2 - \ell _{22}\ell _{13}^2\). While its sign is not obvious from the algebraic form, it is determined by a fundamental property of likelihood theory. The observed information matrix, \(\mathcal {I}(\varvec{\Theta }) = -\textbf{H}\), is the sum of two positive definite matrices (from the independent strength and stress data) and is therefore itself positive definite. This requires its negative, the Hessian matrix \(\textbf{H}\), to be negative definite, which in turn requires that \(H_3 < 0\).
Since the principal minors alternate in sign as required (\(H_1< 0, H_2 > 0, H_3 < 0\)), the Hessian is negative definite. This confirms the concavity of the function and guarantees that the solution to the likelihood equations is a unique global maximum.
Asymptotic confidence interval for \(\zeta _{s,\kappa }\)
Building upon the point estimation of the model parameters, we now develop an interval estimator for the multicomponent reliability \(\zeta _{s,\kappa }\). Under certain regularity conditions, which our model satisfies, the MLEs are consistent and asymptotically normally distributed. Specifically, as the sample sizes increase, the distribution of the MLE vector \(\hat{\varvec{\Theta }} = (\hat{\theta }_1, \hat{\theta }_2, \hat{\lambda })^T\) approaches a multivariate normal distribution with a mean equal to the true parameter vector \(\varvec{\Theta }\) and a variance-covariance matrix given by the inverse of the Fisher information matrix, \(\mathcal {I}(\varvec{\Theta })^{-1}\)28. While the expected Fisher information matrix can be difficult to compute, a standard and reliable alternative is to use the observed Fisher information matrix, which is estimated by the negative of the Hessian matrix evaluated at the MLEs, \(\mathcal {I}(\hat{\varvec{\Theta }}) = -\textbf{H}(\hat{\varvec{\Theta }})\). The asymptotic variance-covariance matrix of the MLEs can therefore be estimated as:
where \(\hat{\sigma }_{ij}\) is the estimated asymptotic covariance between \(\hat{\theta }_i\) and \(\hat{\theta }_j\).
To find the asymptotic variance of our reliability estimator, \(\hat{\zeta }_{s,\kappa }\), we employ the delta method. This technique approximates the variance of a function of random variables using a first-order Taylor series expansion. The asymptotic variance of \(\hat{\zeta }_{s,\kappa }\) is given by:
where \(\nabla \zeta _{s,\kappa }\) is the gradient of the reliability function evaluated at the MLEs. Since \(\zeta _{s,\kappa }\) depends only on \(\theta _1\) and \(\theta _2\), the gradient vector is \(\nabla \zeta _{s,\kappa } = (\frac{\partial \zeta _{s,\kappa }}{\partial \theta _1}, \frac{\partial \zeta _{s,\kappa }}{\partial \theta _2}, 0)^T\). The partial derivatives are:
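For reference, one closed form of these derivatives, obtained by differentiating the binomial tail in Eq. (2.7) via the identity \(\frac{d}{d\vartheta }\sum _{i=s}^{\kappa }\binom{\kappa }{i}\vartheta ^{i}(1-\vartheta )^{\kappa -i} = s\binom{\kappa }{s}\vartheta ^{s-1}(1-\vartheta )^{\kappa -s}\) and the chain rule with \(\vartheta = \theta _2/(\theta _1+\theta _2)\), is given below; the article's own expressions may be written as equivalent finite sums.
$$\frac{\partial \zeta _{s,\kappa }}{\partial \theta _1} = -\, s\binom{\kappa }{s}\, \vartheta ^{s-1}(1-\vartheta )^{\kappa -s}\, \frac{\theta _2}{(\theta _1+\theta _2)^2}, \qquad \frac{\partial \zeta _{s,\kappa }}{\partial \theta _2} = s\binom{\kappa }{s}\, \vartheta ^{s-1}(1-\vartheta )^{\kappa -s}\, \frac{\theta _1}{(\theta _1+\theta _2)^2}.$$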
Let \(d_1 = \frac{\partial \zeta _{s,\kappa }}{\partial \theta _1}\Big |_{\hat{\varvec{\Theta }}}\) and \(d_2 = \frac{\partial \zeta _{s,\kappa }}{\partial \theta _2}\Big |_{\hat{\varvec{\Theta }}}\). The variance of \(\hat{\zeta }_{s,\kappa }\) is then estimated by:
$$\widehat{\text {Var}}(\hat{\zeta }_{s,\kappa }) = d_1^2\, \hat{\sigma }_{11} + d_2^2\, \hat{\sigma }_{22} + 2 d_1 d_2\, \hat{\sigma }_{12}. \quad (3.9)$$
A straightforward \(100(1-\alpha )\%\) ACI for \(\zeta _{s,\kappa }\) is then constructed as:
$$\left( \hat{\zeta }_{s,\kappa } - z_{\alpha /2} \sqrt{\widehat{\text {Var}}(\hat{\zeta }_{s,\kappa })}, \; \hat{\zeta }_{s,\kappa } + z_{\alpha /2} \sqrt{\widehat{\text {Var}}(\hat{\zeta }_{s,\kappa })}\right) ,$$
where \(z_{\alpha /2}\) is the upper \((\alpha /2)\)-th percentile of the standard normal distribution. However, this interval may produce bounds outside the valid range of [0, 1]. To address this, we apply a logarithmic transformation, a technique used by29 and others, to ensure the interval remains within the proper bounds. Let \(h(\zeta ) = \log (\frac{\zeta }{1-\zeta })\). The variance of \(h(\hat{\zeta }_{s,\kappa })\) is found via the delta method:
$$\widehat{\text {Var}}\left( h(\hat{\zeta }_{s,\kappa })\right) = \frac{\widehat{\text {Var}}(\hat{\zeta }_{s,\kappa })}{\left[ \hat{\zeta }_{s,\kappa }(1-\hat{\zeta }_{s,\kappa })\right] ^2}.$$
The \(100(1-\alpha )\%\) ACI for the transformed parameter \(h(\zeta _{s,\kappa })\) is:
$$\left( h(\hat{\zeta }_{s,\kappa }) - z_{\alpha /2} \sqrt{\widehat{\text {Var}}\left( h(\hat{\zeta }_{s,\kappa })\right) }, \; h(\hat{\zeta }_{s,\kappa }) + z_{\alpha /2} \sqrt{\widehat{\text {Var}}\left( h(\hat{\zeta }_{s,\kappa })\right) }\right) .$$
Let the lower and upper bounds of this interval be (L, U). Applying the inverse transformation (the logistic function), we obtain the final, more reliable \(100(1-\alpha )\%\) ACI for \(\zeta _{s,k}\): \(\left( \frac{e^L}{1+e^L}, \; \frac{e^U}{1+e^U}\right)\).
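The back-transformed interval is easy to compute once the MLE and its delta-method variance are available; the following sketch (with placeholder inputs) shows the logit construction.

```python
import numpy as np
from scipy.stats import norm

def logit_aci(zeta_hat, var_zeta_hat, alpha=0.05):
    """Asymptotic CI for zeta on the logit scale, mapped back to (0, 1)."""
    z = norm.ppf(1 - alpha / 2)
    h = np.log(zeta_hat / (1 - zeta_hat))                       # h(zeta) = log(zeta/(1 - zeta))
    se_h = np.sqrt(var_zeta_hat) / (zeta_hat * (1 - zeta_hat))  # delta-method standard error of h
    L, U = h - z * se_h, h + z * se_h
    return np.exp(L) / (1 + np.exp(L)), np.exp(U) / (1 + np.exp(U))

# Placeholder numbers purely for illustration.
print(logit_aci(zeta_hat=0.82, var_zeta_hat=0.0025))
```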
Bootstrap confidence intervals for \(\zeta _{s,\kappa }\)
While the asymptotic confidence interval developed in the previous section offers a computationally straightforward approach, its validity hinges on the central limit theorem, which presumes that the sampling distribution of the MLEs is approximately normal. This assumption may not hold reliably for moderate sample sizes, especially given the intricate nature of the data collected under BAIIPH and k-record schemes. Such complex data structures can lead to skewed sampling distributions for \(\hat{\zeta }_{s,\kappa }\). To construct more robust and accurate confidence intervals that do not depend on large-sample normality, we turn to resampling techniques, specifically parametric bootstrap methods30.
Percentile bootstrap (Boot-p) confidence interval for \(\zeta _{s,\kappa }\)
Due to the specific generative nature of our data collection schemes–BAIIPH for strength and URIT for stress–a standard non-parametric bootstrap (i.e., resampling observations with replacement) is unsuitable. Instead, a parametric bootstrap approach is required, as it simulates new datasets directly from the fitted model, thereby respecting the underlying data generation mechanisms. The procedure is outlined as follows:
1. Initial estimation: From the original observed data (strength data \(\textbf{x}\) and stress data \(\textbf{y}\) with inter-record times \(\varvec{\delta }\)), compute the MLEs of the parameters, \(\hat{\Theta } = (\hat{\theta }_1, \hat{\theta }_2, \hat{\lambda })\). Then, calculate the MLE of the system reliability, \(\hat{\zeta }_{s,\kappa ,MLE}\), using Eq. (3.4).
2. Bootstrap strength sample generation: Generate a bootstrap strength sample, \(\textbf{x}^*\), by simulating data from the IER(\(\hat{\theta }_1, \hat{\lambda }\)) distribution. This simulation must precisely replicate the original BAIIPH scheme, including the number of groups (\(g_\tau\)), target failures (\(s_i\)), predetermined test times (\(T_i\)), and censoring plans (\(R_{i,j}\)), to produce a valid bootstrap censored sample \(\textbf{x}^*\).
3. Bootstrap stress sample generation: Generate a bootstrap stress sample, \(\textbf{y}^*\), by simulating data from the IER(\(\hat{\theta }_2, \hat{\lambda }\)) distribution under the URIT scheme. This yields a new set of n upper k-records, \(\textbf{y}^*\), and their corresponding inter-k-record times, \(\varvec{\delta }^*\).
4. Bootstrap parameter estimation: Using the generated bootstrap samples \(\textbf{x}^*\) and \((\textbf{y}^*, \varvec{\delta }^*)\), compute the bootstrap MLEs of the parameters, denoted as \(\hat{\Theta }^* = (\hat{\theta }_1^*, \hat{\theta }_2^*, \hat{\lambda }^*)\).
5. Bootstrap reliability calculation: Calculate the bootstrap estimate of reliability, \(\hat{\zeta }_{s,\kappa }^*\), by substituting the bootstrap MLEs (\(\hat{\Theta }^*\)) into the reliability function given by Eq. (3.4).
6. Repetition: Repeat Steps 2 through 5 a large number of times, B (e.g., \(B=2000\)), to obtain a collection of bootstrap reliability estimates: \(\{\hat{\zeta }_{s,\kappa ,1}^*, \hat{\zeta }_{s,\kappa ,2}^*, \dots , \hat{\zeta }_{s,\kappa ,B}^*\}\).
This collection of B estimates serves as an empirical approximation of the sampling distribution of \(\hat{\zeta }_{s,\kappa }\). The percentile bootstrap (Boot-p) interval is constructed directly from the quantiles of this empirical distribution31. Let \(\hat{\zeta }_{s,\kappa ,(\alpha )}^*\) be the \(100\alpha\)-th percentile of the ordered bootstrap estimates. The \(100(1-\alpha )\%\) Boot-p confidence interval for \(\zeta _{s,\kappa }\) is given by:
$$\left( \hat{\zeta }_{s,\kappa ,(\alpha /2)}^{*}, \; \hat{\zeta }_{s,\kappa ,(1-\alpha /2)}^{*}\right) .$$
This interval is simple to compute and performs well, particularly when the sampling distribution of \(\hat{\zeta }_{s,\kappa }\) is symmetric and its estimator is median-unbiased.
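The resampling loop itself is straightforward to express; the sketch below shows only the Boot-p logic, with `simulate_dataset` and `fit_mle_zeta` as hypothetical placeholders for the BAIIPH/URIT data generator and the MLE routine described above.

```python
import numpy as np

def boot_p_interval(theta1_hat, theta2_hat, lam_hat, design,
                    simulate_dataset, fit_mle_zeta, B=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for zeta_{s,kappa} under the fitted IER model.

    simulate_dataset(theta1, theta2, lam, design, rng) -> one BAIIPH + URIT dataset
    fit_mle_zeta(dataset)                              -> MLE of zeta for that dataset
    Both callables are assumed to be supplied by the user."""
    rng = np.random.default_rng(seed)
    zeta_star = np.empty(B)
    for b in range(B):
        data_b = simulate_dataset(theta1_hat, theta2_hat, lam_hat, design, rng)
        zeta_star[b] = fit_mle_zeta(data_b)
    lower, upper = np.quantile(zeta_star, [alpha / 2, 1 - alpha / 2])
    return lower, upper
```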
Bootstrap-t (Boot-t) confidence interval for \(\zeta _{s,\kappa }\)
For potentially improved coverage accuracy, particularly in skewed distributions, the bootstrap-t (or studentized bootstrap) interval is an excellent alternative. It is known to be second-order accurate, a significant improvement over the first-order accuracy of the percentile and asymptotic intervals32. This method works by bootstrapping a studentized pivot quantity, which often has a more stable sampling distribution than the estimator itself. The process requires an estimate of the standard error for each bootstrap replicate:
1. Studentized statistic calculation: First, generate the collection of B bootstrap reliability estimates, \(\{\hat{\zeta }_{s,\kappa ,1}^*, \dots , \hat{\zeta }_{s,\kappa ,B}^*\}\), as described in the Boot-p procedure. For each bootstrap sample \(b=1, \dots , B\), compute the studentized statistic:
$$\begin{aligned} T_b^* = \frac{\hat{\zeta }_{s,\kappa ,b}^* - \hat{\zeta }_{s,\kappa ,MLE}}{\widehat{se}(\hat{\zeta }_{s,\kappa ,b}^*)}. \end{aligned}$$ (3.14)
Here, \(\hat{\zeta }_{s,\kappa ,MLE}\) is the original MLE of the reliability. The term \(\widehat{se}(\hat{\zeta }_{s,\kappa ,b}^*)\) is the standard error of the reliability estimate computed from the b-th bootstrap sample. This is obtained by applying the delta method (as detailed in the previous section) to the bootstrap data and the bootstrap parameter estimates \(\hat{\Theta }_b^*\).
2. Empirical distribution of T*: Obtain the empirical distribution of the studentized statistics from the B bootstrap samples: \(\{T_1^*, T_2^*, \dots , T_B^*\}\).
3. Quantile determination: Find the \((\alpha /2)\) and \((1-\alpha /2)\) percentiles from the empirical distribution of \(T^*\). Let these be denoted as \(t_{\alpha /2}^*\) and \(t_{1-\alpha /2}^*\), respectively.
4. Interval construction: The \(100(1-\alpha )\%\) bootstrap-t confidence interval for \(\zeta _{s,k}\) is then constructed as:
$$\begin{aligned} \left( \hat{\zeta }_{s,\kappa ,MLE} - t_{1-\alpha /2}^* \widehat{se}(\hat{\zeta }_{s,\kappa ,MLE}), \quad \hat{\zeta }_{s,\kappa ,MLE} - t_{\alpha /2}^* \widehat{se}(\hat{\zeta }_{s,\kappa ,MLE}) \right) , \end{aligned}$$ (3.15)
where \(\widehat{se}(\hat{\zeta }_{s,\kappa ,MLE})\) is the standard error of the original MLE, calculated from the observed-information matrix as the square root of the variance given in Eq. (3.9). A brief computational sketch of this construction follows.
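A minimal sketch of the studentized construction, assuming the arrays of bootstrap estimates and their delta-method standard errors (all variable names here are hypothetical) have already been collected:

```python
import numpy as np

def boot_t_interval(zeta_mle, se_mle, zeta_star, se_star, alpha=0.05):
    """Bootstrap-t CI: pivot on (zeta* - zeta_hat)/se*, then invert with the original se."""
    t_star = (np.asarray(zeta_star) - zeta_mle) / np.asarray(se_star)
    t_lo, t_hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    return zeta_mle - t_hi * se_mle, zeta_mle - t_lo * se_mle
```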
Bayes estimates for \(\zeta _{s,\kappa }\)
In contrast to the frequentist approach, which treats model parameters as fixed, unknown constants, the Bayesian paradigm considers them to be random variables. This allows us to incorporate prior knowledge about the parameters and update our beliefs in light of the observed data. The synthesis of prior information and data-driven evidence is encapsulated in the posterior distribution, which forms the basis for all subsequent inference.
To develop the Bayesian framework for our model, we must first specify prior distributions for the unknown parameters \(\theta _1\), \(\theta _2\), and \(\lambda\). Since these parameters are positive, the gamma distribution is a natural and flexible choice for modeling our prior beliefs. We assume that \(\theta _1\), \(\theta _2\), and \(\lambda\) are mutually independent and follow gamma distributions, defined as:
- \(\pi (\theta _1) \propto \theta _1^{a_1-1} e^{-b_1 \theta _1}\), for \(\theta _1 > 0,\)
- \(\pi (\theta _2) \propto \theta _2^{a_2-1} e^{-b_2 \theta _2}\), for \(\theta _2 > 0,\)
- \(\pi (\lambda ) \propto \lambda ^{a_3-1} e^{-b_3 \lambda }\), for \(\lambda > 0.\)
The hyperparameters \((a_1, b_1, a_2, b_2, a_3, b_3)\) are chosen to reflect existing knowledge from previous studies, expert opinion, or physical constraints. In cases where little prior information is available, one can specify non-informative priors by setting the hyperparameters to values close to zero (e.g., \(a_i, b_i \rightarrow 0\)), which allows the likelihood function to dominate the posterior inference33. The joint posterior distribution is obtained by applying Bayes’ theorem, which states that the posterior is proportional to the product of the likelihood function and the joint prior distribution. Substituting the likelihood function from Eq. (3.1) and the gamma priors, we derive the joint posterior density:
where \(\mathcal {W} = \sum _{\tau =1}^n \sum _{i=1}^{g_\tau } \sum _{j=1}^{s_{\tau i}} x_{\tau ,ij}^{-2} + \sum _{i=1}^n y_i^{-2}\). The complexity of this posterior distribution makes it immediately clear that analytical integration is not feasible. To obtain a single point estimate for the reliability function \(\zeta _{s,\kappa }\), we must evaluate the posterior distribution under a chosen loss function. The Bayes estimate is the value that minimizes the expected posterior loss.
Squared error loss function: The most common choice is the symmetric squared error loss function, \(L(\zeta , \hat{\zeta }) = (\zeta - \hat{\zeta })^2\). Let \(\varvec{\Theta } = (\theta _1, \theta _2, \lambda )\) be the vector of parameters, and let \(\Omega = (0, \infty )^3\) be the parameter space. The Bayes estimate under SE is then the posterior mean:
$$\hat{\zeta }_{s,\kappa ,\text {SE}} = E\left[ \zeta _{s,\kappa } \,|\, \text {data}\right] = \int _{\Omega } \zeta _{s,\kappa }(\theta _1, \theta _2)\, \pi (\varvec{\Theta } \,|\, \text {data})\, d\varvec{\Theta }.$$
Linear exponential loss function: In reliability analysis, underestimation of reliability is often less critical than overestimation. The asymmetric LINEX loss function, \(L(\zeta , \hat{\zeta }) = e^{c(\hat{\zeta } - \zeta )} - c(\hat{\zeta } - \zeta ) - 1\) for \(c \ne 0\), accounts for this by assigning different penalties. The sign and magnitude of the shape parameter c control the direction and degree of asymmetry. For \(c > 0\), overestimation is penalized more heavily. The Bayes estimate under LINEX loss is given by34:
$$\hat{\zeta }_{s,\kappa ,\text {LINEX}} = -\frac{1}{c} \ln \left( E\left[ e^{-c\, \zeta _{s,\kappa }} \,|\, \text {data}\right] \right) ,$$
where the posterior expectation is calculated as:
$$E\left[ e^{-c\, \zeta _{s,\kappa }} \,|\, \text {data}\right] = \int _{\Omega } e^{-c\, \zeta _{s,\kappa }(\theta _1, \theta _2)}\, \pi (\varvec{\Theta } \,|\, \text {data})\, d\varvec{\Theta }.$$
As is evident from the intricate form of the posterior distribution in Eq. (3.16), the three-dimensional integrals required to compute these Bayes estimates are analytically intractable. Therefore, we must rely on advanced numerical approximation techniques. The following subsections will detail two such methods: the Tierney and Kadane approximation for analytical tractability and a Metropolis-Hastings algorithm for posterior sampling.
Tierney and Kadane’s approximation for point estimates
As a powerful, deterministic alternative to computationally intensive simulation, we employ the Tierney and Kadane approximation35. This method is exceptionally effective for calculating the ratio of two integrals–the precise form of a posterior expectation. The core idea is to approximate the integrands with a Gaussian function, yielding a highly accurate result with minimal computational overhead.
The general TK framework: The posterior expectation of a function \(U(\varvec{\Theta })\) is given by the TK formula:
$$\hat{E}\left[ U(\varvec{\Theta }) \,|\, \text {data}\right] = \left( \frac{\det \varvec{\Sigma }^*}{\det \varvec{\Sigma }}\right) ^{1/2} \exp \left\{ \mathcal {L}^*(\varvec{\Theta }^*) - \mathcal {L}(\hat{\varvec{\Theta }})\right\} , \quad (3.20)$$
where \(\mathcal {L}(\varvec{\Theta }) = \ln (\pi (\varvec{\Theta } \,|\, \text {data}))\) is the log-posterior, and the auxiliary function is \(\mathcal {L}^*(\varvec{\Theta }) = \mathcal {L}(\varvec{\Theta }) + \ln (U(\varvec{\Theta }))\). Implementation requires finding the modes (\(\hat{\varvec{\Theta }}\), \(\varvec{\Theta }^*\)) and negative inverse Hessian (\(\varvec{\Sigma }\), \(\varvec{\Sigma }^*\)) of these functions. This, in turn, requires their first and second derivatives.
Derivatives of the log-posterior: The log-posterior function, \(\mathcal {L}(\varvec{\Theta })\), is the sum of the log-likelihood \(\ell (\varvec{\Theta })\) from Eq. (3.2) and the log-priors. We denote the first partial derivatives of the log-likelihood as \(\ell _i = \partial \ell / \partial \theta _i\) (where \(\varvec{\theta } = (\theta _1, \theta _2, \lambda )\)) and the second partials as \(\ell _{ij} = \partial ^2 \ell / \partial \theta _i \partial \theta _j\). The derivatives of the log-posterior are:
$$\mathcal {L}_{1} = \ell _{1} + \frac{a_1-1}{\theta _1} - b_1, \quad \mathcal {L}_{2} = \ell _{2} + \frac{a_2-1}{\theta _2} - b_2, \quad \mathcal {L}_{3} = \ell _{3} + \frac{a_3-1}{\lambda } - b_3,$$
$$\mathcal {L}_{11} = \ell _{11} - \frac{a_1-1}{\theta _1^2}, \quad \mathcal {L}_{22} = \ell _{22} - \frac{a_2-1}{\theta _2^2}, \quad \mathcal {L}_{33} = \ell _{33} - \frac{a_3-1}{\lambda ^2}.$$
Since the priors are independent, the mixed partial derivatives are unchanged: \(\mathcal {L}_{ij} = \ell _{ij}\) for \(i \ne j\).
Applying TK for SE loss: For the symmetric SE loss, the goal is to estimate the posterior mean by setting \(U(\varvec{\Theta }) = \zeta _{s,\kappa }(\theta _1, \theta _2)\). The auxiliary function to maximize is:
$$\mathcal {L}_{\text {SE}}^*(\varvec{\Theta }) = \mathcal {L}(\varvec{\Theta }) + \ln \left( \zeta _{s,\kappa }(\theta _1, \theta _2)\right) .$$
Let \(\zeta _{s,\kappa }\) be denoted simply as \(\zeta\), and let its partial derivatives be \(\zeta _i = \partial \zeta /\partial \theta _i\) and \(\zeta _{ij} = \partial ^2\zeta /\partial \theta _i\partial \theta _j\). The first partial derivatives of \(\mathcal {L}_{\text {SE}}^*\) are:
$$\mathcal {L}_{\text {SE},i}^* = \mathcal {L}_{i} + \frac{\zeta _i}{\zeta }, \quad i=1,2,3.$$
The elements of the Hessian matrix for \(\mathcal {L}_{\text {SE}}^*\) are:
$$\mathcal {L}_{\text {SE},ij}^* = \mathcal {L}_{ij} + \frac{\zeta _{ij}\, \zeta - \zeta _i \zeta _j}{\zeta ^2}, \quad i,j=1,2,3.$$
With these derivatives, we can numerically find the mode \(\varvec{\Theta }_{\text {SE}}^*\) and the Hessian \(\varvec{\Sigma }_{\text {SE}}^*\) needed for the TK formula. The TK estimate for the reliability is then:
$$\hat{\zeta }_{s,\kappa ,\text {SE}}^{\text {TK}} = \left( \frac{\det \varvec{\Sigma }_{\text {SE}}^*}{\det \varvec{\Sigma }}\right) ^{1/2} \exp \left\{ \mathcal {L}_{\text {SE}}^*(\varvec{\Theta }_{\text {SE}}^*) - \mathcal {L}(\hat{\varvec{\Theta }})\right\} .$$
Applying TK for LINEX loss: In reliability analysis, the cost of overestimating system performance (leading to unexpected failures) is often much higher than the cost of underestimating it (leading to conservative designs). The asymmetric LINEX loss function is ideal for such scenarios. The parameter c controls the direction and degree of asymmetry: a positive c penalizes overestimation more severely, while a negative c penalizes underestimation more. As \(c \rightarrow 0\), LINEX loss converges to the symmetric SE loss.
For LINEX loss, we need to estimate \(E[e^{-c\zeta _{s,\kappa }}]\), so we set \(U(\varvec{\Theta }) = e^{-c\zeta _{s,\kappa }(\theta _1, \theta _2)}\). This yields a simpler auxiliary function:
$$\mathcal {L}_{\text {L}}^*(\varvec{\Theta }) = \mathcal {L}(\varvec{\Theta }) - c\, \zeta _{s,\kappa }(\theta _1, \theta _2).$$
The first partial derivatives of \(\mathcal {L}_{\text {L}}^*\) are:
$$\mathcal {L}_{\text {L},i}^* = \mathcal {L}_{i} - c\, \zeta _i, \quad i=1,2,3.$$
The Hessian elements for \(\mathcal {L}_{\text {L}}^*\) are also straightforward:
$$\mathcal {L}_{\text {L},ij}^* = \mathcal {L}_{ij} - c\, \zeta _{ij}, \quad i,j=1,2,3.$$
After finding the mode \(\varvec{\Theta }_{\text {L}}^*\) and Hessian \(\varvec{\Sigma }_{\text {L}}^*\) using these derivatives, we compute \(\hat{E}[e^{-c\zeta _{s,\kappa }}]\) via Eq. (3.20). The final LINEX estimate is then:
$$\hat{\zeta }_{s,\kappa ,\text {LINEX}}^{\text {TK}} = -\frac{1}{c} \ln \left( \hat{E}\left[ e^{-c\zeta _{s,\kappa }}\right] \right) .$$
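The TK computation can be organized generically as follows; `log_post` stands for the log of the posterior in Eq. (3.16) (to be coded by the user), the Hessians are approximated by finite differences, and the commented usage lines are only indicative.

```python
import numpy as np
from scipy.optimize import minimize

def num_hessian(f, x, h=1e-4):
    """Central finite-difference Hessian of a scalar function f at x."""
    x = np.asarray(x, float)
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

def tk_expectation(log_post, log_u, x0):
    """Tierney-Kadane approximation of E[U | data], where U = exp(log_u)."""
    neg = lambda f: (lambda x: -f(x))
    mode = minimize(neg(log_post), x0, method="Nelder-Mead").x
    mode_star = minimize(neg(lambda x: log_post(x) + log_u(x)), x0, method="Nelder-Mead").x
    Sigma = np.linalg.inv(-num_hessian(log_post, mode))                       # negative inverse Hessian of L
    Sigma_star = np.linalg.inv(-num_hessian(lambda x: log_post(x) + log_u(x), mode_star))
    ratio = np.sqrt(np.linalg.det(Sigma_star) / np.linalg.det(Sigma))
    return ratio * np.exp(log_post(mode_star) + log_u(mode_star) - log_post(mode))

# Usage (schematic): SE-loss estimate of zeta, and LINEX estimate with shape c.
# zeta_se  = tk_expectation(log_post, lambda x: np.log(zeta_sk(x[0], x[1], s, kappa)), x0=mle)
# zeta_lnx = -np.log(tk_expectation(log_post, lambda x: -c * zeta_sk(x[0], x[1], s, kappa), x0=mle)) / c
```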
While the Tierney and Kadane approximation35 offers an efficient way to derive point estimates, it cannot fully characterize the posterior uncertainty needed for interval estimation. To achieve a more complete picture of the posterior distributions for the model parameters and the reliability function \(\zeta _{s,\kappa }\), we turn to simulation. The Metropolis-Hastings algorithm, a widely-used Markov Chain Monte Carlo (MCMC) method, is particularly well-suited for this problem, given the analytical intractability of the posterior distribution shown in Eq. (3.16)33.
MCMC algorithm for \(\zeta _{s,\kappa }\)
The MH algorithm works by constructing a Markov chain whose stationary distribution is the target posterior, \(\pi (\varvec{\Theta }|\text {data})\). After a sufficient number of iterations, the samples drawn from the chain can be treated as a sample from the posterior itself29. The posterior density plots in Figure 2 are unimodal and roughly symmetric, supporting our choice of a normal proposal distribution. The specific steps of our MH algorithm are as follows:
1. Initialization: The parameter vector \(\varvec{\Theta }^{(0)} = (\theta _1^{(0)}, \theta _2^{(0)}, \lambda ^{(0)})\) is initialized using the MLEs \((\hat{\theta }_1, \hat{\theta }_2, \hat{\lambda })\) from Sect. 3.1, which provides an effective starting point near the mode of the posterior.
2. Iterative sampling: For each iteration \(j=1, \dots , M\), a candidate parameter vector \(\varvec{\Theta }^*\) is generated from a multivariate normal proposal distribution centered at the chain’s current state:
$$\varvec{\Theta }^* \sim N\left( \varvec{\Theta }^{(j-1)}, \Sigma _{\text {prop}}\right) .$$
The efficiency of the sampler depends heavily on the proposal covariance matrix, \(\Sigma _{\text {prop}}\). We made an informed choice by using the inverse of the observed Fisher information matrix, calculated for the asymptotic confidence intervals. This aligns the proposal distribution with the curvature of the posterior near its mode, which typically improves the mixing of the chain4.
3. Acceptance step: The candidate \(\varvec{\Theta }^*\) is accepted or rejected based on the Metropolis-Hastings acceptance probability, \(\alpha\):
$$\alpha = \min \left( 1, \frac{\pi (\varvec{\Theta }^*|\text {data})}{\pi (\varvec{\Theta }^{(j-1)}|\text {data})}\right) .$$
Because our multivariate normal proposal is symmetric, the ratio of proposal densities cancels out. A random number u is drawn from a Uniform (0, 1) distribution; if \(u \le \alpha\), the candidate is accepted (\(\varvec{\Theta }^{(j)} = \varvec{\Theta }^*\)), otherwise it is rejected and the chain remains at its previous state (\(\varvec{\Theta }^{(j)} = \varvec{\Theta }^{(j-1)}\)).
4. Posterior inference: An initial portion of the sequence (the first B samples) is discarded as a “burn-in” to ensure the chain has reached its stationary distribution. The remaining \(M-B\) samples, \(\{\varvec{\Theta }^{(j)}\}_{j=B+1}^{M}\), form our sample from the joint posterior.
From this posterior sample of the parameters, we can directly generate a sample from the posterior distribution of the reliability function \(\zeta _{s,\kappa }\). For each sampled parameter vector \(\varvec{\Theta }^{(j)}\), the corresponding reliability value is calculated using Eq. (2.7):
$$\zeta _{s,\kappa }^{(j)} = \sum _{i=s}^{\kappa } \binom{\kappa }{i} \left( \frac{\theta _2^{(j)}}{\theta _1^{(j)}+\theta _2^{(j)}}\right) ^{i} \left( \frac{\theta _1^{(j)}}{\theta _1^{(j)}+\theta _2^{(j)}}\right) ^{\kappa -i}.$$
This procedure yields a set of \(M-B\) values, \(\{\zeta _{s,\kappa }^{(j)}\}_{j=B+1}^{M}\), which represents a direct sample from the posterior distribution of \(\zeta _{s,\kappa }\). This sample is then used to compute the final Bayes point estimates and to construct the credible intervals.
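A compact random-walk Metropolis-Hastings sampler corresponding to steps 1-4 is sketched below; `log_post` is again a placeholder for the log of the posterior in Eq. (3.16), while `mle` and `Sigma_prop` stand for the starting values and proposal covariance described above.

```python
import numpy as np

def mh_sampler(log_post, mle, Sigma_prop, M=20_000, burn=5_000, seed=0):
    """Random-walk MH with a multivariate normal proposal; returns post-burn-in draws."""
    rng = np.random.default_rng(seed)
    d = len(mle)
    chain = np.empty((M, d))
    current = np.asarray(mle, float)
    current_lp = log_post(current)
    for j in range(M):
        proposal = rng.multivariate_normal(current, Sigma_prop)
        if np.all(proposal > 0):                      # reject proposals outside the parameter space
            lp = log_post(proposal)
            if np.log(rng.uniform()) <= lp - current_lp:
                current, current_lp = proposal, lp
        chain[j] = current
    return chain[burn:]

# From the retained draws, the posterior sample of zeta follows Eq. (2.7), e.g.:
# zeta_draws = [zeta_sk(t1, t2, s, kappa) for t1, t2, _ in draws]
```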
Bayes estimators: The Bayes estimate of \(\zeta _{s,\kappa }\) under the symmetric SE loss function is the posterior mean, which can be approximated by the sample mean of the generated reliability values:
$$\hat{\zeta }_{s,\kappa ,\text {SE}} \approx \frac{1}{M-B} \sum _{j=B+1}^{M} \zeta _{s,\kappa }^{(j)}.$$
For cases where overestimation and underestimation have different costs, the asymmetric LINEX loss function is often more appropriate. The Bayes estimate under LINEX loss is given by:
$$\hat{\zeta }_{s,\kappa ,\text {LINEX}} = -\frac{1}{c} \ln \left( \hat{E}\left[ e^{-c\zeta _{s,\kappa }} \,|\, \text {data}\right] \right) ,$$
where the expectation is approximated using the posterior sample:
$$\hat{E}\left[ e^{-c\zeta _{s,\kappa }} \,|\, \text {data}\right] = \frac{1}{M-B} \sum _{j=B+1}^{M} e^{-c\, \zeta _{s,\kappa }^{(j)}}.$$
The sign and magnitude of the parameter c control the direction and degree of asymmetry in the loss function. A positive c penalizes overestimation more heavily, while a negative c penalizes underestimation more.
HPD credible interval: Furthermore, a \(100(1-\alpha )\%\) HPD credible interval can be constructed. This interval is the shortest possible interval containing \(100(1-\alpha )\%\) of the posterior probability and is particularly useful for skewed posterior distributions. To obtain this interval from the MCMC output \(\{\zeta _{s,\kappa }^{(j)}\}\), we first sort the samples to get \(\zeta _{s,\kappa ,(1)} \le \zeta _{s,\kappa ,(2)} \le \dots \le \zeta _{s,\kappa ,(M-B)}\). Then, we find the interval \([\zeta _{s,\kappa ,(j)}, \zeta _{s,\kappa ,(j+L)}]\) that minimizes the length \(\zeta _{s,\kappa ,(j+L)} - \zeta _{s,\kappa ,(j)}\) over all \(j = 1, 2, \dots , (M-B)-L\), where \(L = \lfloor (1-\alpha )(M-B) \rfloor\). This approach, as described in36, provides a robust interval estimate that fully captures the posterior uncertainty.
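Given the retained draws \(\{\zeta _{s,\kappa }^{(j)}\}\), the SE and LINEX point estimates and the HPD interval can be computed directly; the helper below implements the sorted-sample search just described.

```python
import numpy as np

def posterior_summaries(zeta_draws, c=1.5, alpha=0.05):
    """SE and LINEX point estimates plus the HPD interval from posterior draws of zeta."""
    zeta_draws = np.sort(np.asarray(zeta_draws))
    se_est = zeta_draws.mean()                                   # posterior mean (SE loss)
    linex_est = -np.log(np.mean(np.exp(-c * zeta_draws))) / c    # LINEX estimate
    # HPD: shortest window spanning floor((1 - alpha) * n) consecutive sorted draws.
    n = len(zeta_draws)
    L = int(np.floor((1 - alpha) * n))
    widths = zeta_draws[L:] - zeta_draws[:n - L]
    j = int(np.argmin(widths))
    return se_est, linex_est, (zeta_draws[j], zeta_draws[j + L])
```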
Simulation study
To rigorously assess the performance of the proposed frequentist and Bayesian inferential procedures, we conduct a detailed Monte Carlo simulation study. The objective is to evaluate the properties of the point and interval estimators for the multicomponent stress-strength reliability, \(\zeta _{s,\kappa }\), under a variety of realistic conditions. This section first details the data generation and estimation algorithms, then presents and discusses the comprehensive simulation results.
Data generation and estimation algorithms
The core of the simulation involves generating datasets that mimic the complex experimental schemes and then applying the various inferential methods.
For each of the main parameter sets in Table 1, the true parameters \((\theta _1, \theta _2, \lambda )\) were first generated from their respective gamma priors: \(\theta _1 \sim \text {Gamma}(a_1, b_1)\), \(\theta _2 \sim \text {Gamma}(a_2, b_2)\), and \(\lambda \sim \text {Gamma}(a_3, b_3)\), using non-informative hyperparameters (e.g., \(a_i=2, b_i=2\)). Once fixed, these true parameters were used for all Monte Carlo replications within that set of scenarios. Each individual dataset was then generated as follows:
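Although the paper's own generation algorithm is not reproduced here, the following Python sketch illustrates one way to generate a single block of adaptive Type-II progressively hybrid censored strength data from the IER distribution: each successive failure is drawn from the truncated distribution of the minimum among the surviving units, and the adaptive removal rule of Case II is applied. The stress sample can be produced with a k-record extractor like the one sketched in the stress-data subsection. All settings shown are illustrative.

```python
import numpy as np

def ier_sf_inv(v, theta, lam):
    # Inverse of the IER survival function S(x) = (1 - e^{-lam/x^2})^theta.
    return np.sqrt(-lam / np.log(1.0 - v**(1.0 / theta)))

def adaptive_prog_censored_sample(n_units, R, T, theta, lam, rng):
    """One block: n_units on test, planned removals R = (R_1,...,R_s), time limit T.

    Returns the s observed failure times and the removal numbers actually applied
    (removals scheduled after T are cancelled and all survivors are withdrawn at
    the s-th failure, as in Case II)."""
    s = len(R)
    times, applied = [], []
    remaining = n_units
    surv_prev = 1.0                        # S(previous failure time); S(0) = 1
    for j in range(s):
        # Next failure = minimum of `remaining` lifetimes truncated at the previous
        # failure time: S(x_j) = S(x_{j-1}) * (1 - U)^(1/remaining).
        u = rng.uniform()
        surv_prev *= (1.0 - u) ** (1.0 / remaining)
        x_j = ier_sf_inv(surv_prev, theta, lam)
        times.append(x_j)
        remaining -= 1                     # the failed unit leaves the test
        if j < s - 1:
            r_j = R[j] if x_j <= T else 0  # adaptive rule: cancel removals after T
        else:
            r_j = remaining                # final withdrawal: all survivors
        r_j = min(r_j, remaining)
        applied.append(r_j)
        remaining -= r_j
    return np.array(times), np.array(applied)

rng = np.random.default_rng(0)
x, R_used = adaptive_prog_censored_sample(n_units=20, R=[2, 2, 2, 2, 0], T=1.5,
                                          theta=1.5, lam=2.0, rng=rng)
print(x, R_used)
```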
For each simulated dataset, the following computational procedure was implemented to obtain point and interval estimates for \(\zeta _{s,\kappa }\).
Simulation results and discussion
The simulation results are organized into the following tables. To assess the performance of the various estimators, we employ four standard statistical metrics. For a given true reliability \(\zeta\) and \(M=5000\) Monte Carlo replications, the metrics are defined as follows:
- Bias: Measures the systematic tendency of an estimator, \(\hat{\zeta }\), to over- or underestimate the true value. An ideal estimator has a bias close to zero.
$$\text {Bias}(\hat{\zeta }) = \frac{1}{M} \sum _{j=1}^{M} (\hat{\zeta }^{(j)} - \zeta ).$$
- Mean squared error (MSE): A comprehensive measure of an estimator’s overall accuracy, combining both bias and variance. A smaller MSE indicates a more accurate estimator.
$$\text {MSE}(\hat{\zeta }) = \frac{1}{M} \sum _{j=1}^{M} (\hat{\zeta }^{(j)} - \zeta )^2.$$
- Average interval length (AIL): The average width of the confidence or credible intervals, \((L^{(j)}, U^{(j)})\). It measures the precision of an interval estimator; shorter intervals are preferred.
$$\text {AIL} = \frac{1}{M} \sum _{j=1}^{M} (U^{(j)} - L^{(j)}).$$
- Coverage probability (CP): The proportion of intervals that successfully capture the true parameter value. For a 95% nominal level, the CP should be close to 0.95.
$$\text {CP} = \frac{1}{M} \sum _{j=1}^{M} I(L^{(j)} \le \zeta \le U^{(j)}).$$
Table 1 details the twelve distinct scenarios. Table 2 presents the bias and MSE for the point estimators, while Table 3 shows the AIL and CP for the 95% interval estimators. The multicomponent system configuration is fixed at \((s, \kappa ) = (3, 5)\).
Analysis of simulation results
From the comprehensive results presented, we can draw several important conclusions.
Performance of point estimators (Table 2):
- Effect of simulation scenarios: The design of the simulation allows for a clear analysis of how different experimental factors influence estimation accuracy.
  - Sample size (Scenarios 1–3): Comparing Scenarios 1–3 reveals the predictable and significant impact of sample size. As the sample sizes for both stress (n) and strength (\(n_i\)) increase, the bias and MSE of all estimators consistently decrease, confirming the consistency of the proposed methods.
  - Block number (Scenarios 4–6): These scenarios investigate the effect of splitting a fixed total number of strength units into a varying number of blocks (g). As g increases from 2 to 6, both bias and MSE increase marginally. This suggests that, for a fixed experimental budget, fewer, larger blocks are statistically more efficient, since each experimental group then contributes more stable information.
  - Record parameter k (Scenarios 7–9): As k increases, the estimators improve slightly, showing marginally lower bias and MSE. This is likely because a larger k draws information from a denser, less extreme part of the distribution’s upper tail, leading to more stable parameter estimation for the stress variable.
  - Parameter robustness (Scenarios 10–12): Scenarios 10–12 confirm the robustness of these findings across different sets of true parameters. The established performance hierarchy, with Bayesian methods outperforming the MLE, holds regardless of the underlying reliability level, demonstrating that the conclusions are not an artifact of a specific parameter configuration.
- Comparison of Bayesian methods: The TK and MCMC-based Bayesian estimators produce very similar results in terms of both bias and MSE. This is an important finding, as it validates the computationally efficient TK method as a reliable alternative to the more intensive MCMC for obtaining point estimates. As expected, the MCMC estimates show a marginally lower MSE, attributable to full integration over the posterior rather than approximation at the mode.
- MLE vs. Bayesian: Both TK and MCMC-based Bayesian estimators consistently outperform the MLE by yielding a lower MSE. The Bayes-SE estimate (posterior mean), whether from TK or MCMC, is particularly effective. This highlights the inherent advantage of the Bayesian paradigm in reducing variance by averaging over the parameter space33.
- LINEX loss function: The LINEX estimators perform exactly as designed. A positive \(c=1.5\) penalizes overestimation, producing estimates with a smaller positive (or more negative) bias than the symmetric SE loss estimates, whereas a negative \(c=-1.5\) penalizes underestimation and yields larger estimates. This demonstrates the practical utility of asymmetric loss functions when the cost of estimation error is not symmetric34 (see the sketch after this list).
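To make the SE/LINEX contrast concrete, the following minimal sketch computes both point estimates from a set of posterior draws of \(\zeta\), using the standard LINEX Bayes estimator \(-c^{-1}\log E[e^{-c\zeta }]\) (Zellner34). The Beta-distributed draws are a purely illustrative stand-in for an actual MCMC output.

```python
import numpy as np

def bayes_point_estimates(zeta_samples, c=1.5):
    """SE (posterior mean) and LINEX point estimates from posterior draws of zeta."""
    zeta = np.asarray(zeta_samples, dtype=float)
    se_estimate = zeta.mean()                                  # Bayes estimate under SE loss
    linex_estimate = -np.log(np.mean(np.exp(-c * zeta))) / c   # -(1/c) * log E[exp(-c * zeta)]
    return se_estimate, linex_estimate

# Hypothetical posterior draws of zeta, only to illustrate the asymmetry.
rng = np.random.default_rng(2)
draws = rng.beta(45, 5, size=50_000)
print(bayes_point_estimates(draws, c=1.5))    # sits below the posterior mean
print(bayes_point_estimates(draws, c=-1.5))   # sits above the posterior mean
```

By Jensen’s inequality the \(c=1.5\) estimate always falls at or below the posterior mean and the \(c=-1.5\) estimate at or above it, matching the bias pattern noted above.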
Performance of interval estimators (Table 3):
- Effect of simulation scenarios: The simulation design reveals how different experimental factors influence the precision and accuracy of the interval estimators.
  - Comparing Scenarios 1–3, increasing the sample size has a predictably positive impact: the average interval lengths of all interval types become narrower, reflecting greater precision, while the coverage probabilities of all methods improve and move closer to the 95% nominal level.
  - Scenarios 4–6 demonstrate that, for a fixed total number of experimental units, splitting them into more, smaller blocks (increasing g) adversely affects interval performance. The AILs become wider and the CPs tend to degrade slightly, again indicating that fewer, larger experimental blocks are statistically more efficient.
  - The effect of the record parameter k is observed in Scenarios 7–9. As k increases, the AILs of all methods decrease marginally, suggesting that a larger k provides slightly more stable information from the stress data and hence more precise interval estimates.
  - Finally, Scenarios 10–12, which vary the true underlying parameters while holding the experimental design constant, confirm the robustness of these conclusions. The established performance hierarchy, with the HPD interval being the most reliable, is maintained across different levels of true reliability, indicating that the superiority of the Bayesian credible interval does not depend on a specific parameter configuration.
- Hierarchy of performance: The results reveal a clear performance hierarchy. The ACI is the least reliable, suffering from under-coverage; the Boot-p interval performs better; and the Boot-t interval is very accurate but computationally demanding30,32.
- HPD interval superiority: The Bayesian HPD credible interval, obtained via MCMC, stands out as the best overall method. It consistently achieves coverage probabilities at or very near the 95% nominal level while maintaining average interval lengths that are competitive with, and often shorter than, the best frequentist alternatives. This demonstrates its efficiency and reliability in capturing the true posterior uncertainty for this complex estimation problem36 (a sketch of the HPD construction from MCMC draws follows this list).
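The HPD interval referred to above can be computed directly from the MCMC output using the empirical approach of Chen and Shao36. The sketch below, with illustrative Beta draws standing in for the actual posterior samples, simply selects the shortest interval containing 95% of the sorted draws.

```python
import numpy as np

def hpd_interval(samples, cred=0.95):
    """Empirical HPD interval: among all intervals covering a fraction `cred`
    of the sorted MCMC draws, return the shortest one (Chen-Shao style)."""
    x = np.sort(np.asarray(samples, dtype=float))
    m = len(x)
    n_in = int(np.floor(cred * m))         # number of draws inside each candidate interval
    widths = x[n_in:] - x[: m - n_in]      # widths of all candidate intervals
    j = int(np.argmin(widths))             # index of the shortest candidate
    return x[j], x[j + n_in]

rng = np.random.default_rng(3)
post = rng.beta(45, 5, size=50_000)        # hypothetical posterior draws of zeta
print(hpd_interval(post, cred=0.95))
```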
In conclusion, the simulation study strongly validates the proposed inferential framework. For point estimation, Bayesian methods (both TK and MCMC) are superior to the MLE, with TK offering a highly efficient and accurate alternative to MCMC. For interval estimation, the MCMC-based HPD credible interval is the most recommended method, providing the best balance of accurate coverage and interval efficiency.
Empirical applications
To demonstrate the practical applicability of the proposed inferential framework, we analyze two distinct real-world datasets from different fields: materials science and hydrology. The primary goal of this section is to first validate the choice of the IER distribution as a suitable model for these datasets and then to apply the developed estimation techniques.
Dataset 1: breaking strength of carbon fibers
Dataset description
This dataset, adapted from studies by37, concerns the mechanical reliability of a carbon fiber composite used in aerospace applications. The system’s integrity depends on the ability of its constituent fibers to withstand operational stresses.
Strength data (\(X_{1}\)): Block adaptive censoring
The strength data represents the breaking strength (in GPa) of individual carbon fibers. A total of \(N=100\) fiber specimens were allocated for destructive testing and, owing to testing equipment constraints, divided into \(g=5\) blocks of \(n_i=20\) each. Each block was subjected to a BAIIPH scheme, with an experimental plan of \(s_i=16\) observed failures and a test termination threshold of \(T_i = 3.8\) GPa on the loading scale. The pre-specified censoring scheme was \(R_{i,j} = (0, \dots , 0, 4)\), \(j=1,\ldots ,s_i\), \(i=1,\ldots ,g\), meaning that the 4 surviving units in each block would be removed at the 16th failure. For all five blocks the 16th failure occurred before the threshold \(T_i\), so the experiment proceeded under Case I of the adaptive scheme. The combined 80 observed failure times (in GPa) are presented in Scheme 2.
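For readers unfamiliar with the adaptive mechanism, the following sketch illustrates, under the usual adaptive Type-II progressive hybrid logic, how the pre-fixed removal plan for one block is either applied as planned (Case I) or adapted after the threshold \(T_i\) (Case II). The function adaptive_removals and the simulated failure times are hypothetical, not part of the original analysis.

```python
import numpy as np

def adaptive_removals(failure_times, plan, T):
    """Illustrative Case I / Case II decision for one block of a BAIIPH-type test.

    failure_times : the s_i ordered observed failure points of the block
    plan          : the pre-fixed removal scheme (R_{i,1}, ..., R_{i,s_i})
    T             : the block threshold T_i
    """
    s_i = len(plan)
    n_i = s_i + sum(plan)                    # units placed on test in this block
    applied = list(plan)
    if failure_times[s_i - 1] <= T:          # Case I: s_i-th failure occurs before T_i
        return "I", applied                  # pre-fixed plan is used unchanged
    # Case II: no intermediate removals after T_i; withdraw the rest at the last failure.
    for j, t in enumerate(failure_times[:-1]):
        if t > T:
            applied[j] = 0
    applied[-1] = n_i - s_i - sum(applied[:-1])
    return "II", applied

# Block 1 of the carbon-fibre test: s_i = 16, plan (0,...,0,4), T_i = 3.8 GPa (Case I here).
plan = [0] * 15 + [4]
times = np.sort(np.random.default_rng(5).uniform(1.5, 3.5, size=16))
print(adaptive_removals(times, plan, T=3.8))
```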
Stress data (\(Y_{1}\)): Upper k-records
The stress data represents the maximum tensile stress (in GPa) recorded during in-flight operations, compiled from long-term monitoring logs. To focus on the extreme operational loads that threaten material integrity, the data was structured as a sequence of \(n=5\) upper \(k=3\) records (with \(a=35\)), and the inter-k-record times were recorded as well, as required by the likelihood function in Eq. (2.9). The observed record stress values are also listed in Scheme 2.
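As an illustration of how such a record sequence arises, the sketch below extracts upper k-record values and the gaps between successive record times from a raw stress series, under the usual convention that a new upper k-record occurs whenever a fresh observation exceeds the current k-th largest value (cf. Arnold et al.20). The simulated gamma stress trace and the exact bookkeeping of the inter-record gaps are only illustrative stand-ins for the monitoring log.

```python
import numpy as np

def upper_k_records(y, k):
    """Upper k-record values and inter-record gaps from a series y.

    The running k-record is the k-th largest observation seen so far; a new
    record occurs whenever an incoming observation exceeds it."""
    y = np.asarray(y, dtype=float)
    top_k = sorted(y[:k], reverse=True)       # k largest observations so far
    records = [top_k[-1]]                     # first k-record: k-th largest of y_1,...,y_k
    inter_times, gap = [], 0
    for obs in y[k:]:
        gap += 1
        if obs > top_k[-1]:                   # new observation beats current k-th largest
            top_k[-1] = obs
            top_k.sort(reverse=True)
            records.append(top_k[-1])         # updated k-th largest is the new k-record
            inter_times.append(gap)           # observations since the previous record time
            gap = 0
    return np.array(records), np.array(inter_times)

# Toy usage: extract k = 3 upper records from a simulated stress trace of length 35.
rng = np.random.default_rng(4)
stress = rng.gamma(shape=2.0, scale=1.0, size=35)
vals, gaps = upper_k_records(stress, k=3)
print(vals, gaps)
```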
Dataset 2: river flood discharge and levee capacity
Dataset description
This dataset, drawn from hydrological studies by38, assesses the flood risk for a river system. The reliability problem here is whether the levee system’s discharge capacity (strength) can withstand the annual peak flood discharge (stress).
Strength data (\(X_{2}\)): Block adaptive censoring
The strength data represents the maximum discharge capacity (in \(10^{3}\,\hbox {m}^{3}\hbox {/s}\)) of a river levee system, evaluated through simulation-based stress tests on different sections. A total of \(N=90\) levee section models were tested in \(g=3\) blocks of \(n_i=30\) each. Each block was tested under the BAIIPH scheme, with a plan of \(s_i=25\) observed failures and a termination threshold of \(T_i=25\) (\(10^{3}\,\hbox {m}^{3}\hbox {/s}\)). The censoring plan was \(R_{i,j} = (0, \dots , 0, 5)\). As with the first dataset, all tests concluded before the threshold \(T_i\), and the 75 observed failure points are shown in Scheme 3.
Stress data (\(Y_{2}\)): Upper k-records
The stress data consists of annual peak flood discharges (in \(10^{3}\,\hbox {m}^{3}\hbox {/s}\)) recorded over several decades. To model the occurrence of severe floods, the data was compiled as a sequence of \(n=5\), \(a=40\) upper \(k=2\) records. The observed flood record values are listed in Scheme 3.
Comparative goodness-of-fit analysis
Before applying our stress-strength model, we must confirm that the IER distribution provides an adequate fit for the data. To rigorously validate this choice, we compare its performance against three other flexible lifetime distributions: the inverted Weibull (IW), generalized inverted exponential (GIE), and inverted Nadarajah–Haghighi (INH) distributions.
Table 4 presents a comprehensive model comparison for all four data subsets. The table provides the MLEs of the parameters for each model, along with standard goodness-of-fit metrics: the Akaike information criterion (AIC), Bayesian information criterion (BIC), and the p-value from the Kolmogorov-Smirnov (K-S) test. For AIC and BIC, lower values indicate a better model fit, while for the K-S test, a higher p-value is preferred. The results clearly demonstrate the superiority of the IER distribution, as it consistently achieves the best metric values (highlighted in bold) across all scenarios. This provides strong empirical justification for its selection as the foundational distribution for our framework.
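The comparison in Table 4 can be reproduced with a few lines of code. The sketch below is a minimal illustration for the IER model only, assuming its cdf \(F(x)=1-(1-e^{-\lambda /x^{2}})^{\alpha }\); the helper names and the choice of SciPy’s Nelder-Mead optimiser are ours, not the paper’s, and the same pattern applies to the IW, GIE and INH competitors.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

def ier_cdf(x, alpha, lam):
    # IER cdf: F(x) = 1 - (1 - exp(-lam / x^2))^alpha, x > 0
    return 1.0 - (1.0 - np.exp(-lam / x**2)) ** alpha

def ier_neg_loglik(theta, x):
    alpha, lam = np.exp(theta)                 # optimise on the log scale to keep parameters positive
    z = np.exp(-lam / x**2)
    logf = (np.log(2 * alpha * lam) - 3 * np.log(x) - lam / x**2
            + (alpha - 1) * np.log1p(-z))
    return -np.sum(logf)

def fit_ier(x):
    """MLE of the IER parameters plus AIC, BIC and K-S p-value (sketch)."""
    x = np.asarray(x, dtype=float)
    res = minimize(ier_neg_loglik, x0=np.log([1.0, 1.0]), args=(x,), method="Nelder-Mead")
    alpha, lam = np.exp(res.x)
    k, n = 2, len(x)                           # number of parameters, sample size
    aic = 2 * k + 2 * res.fun                  # res.fun is the minimised negative log-likelihood
    bic = k * np.log(n) + 2 * res.fun
    ks = kstest(x, lambda t: ier_cdf(t, alpha, lam))
    return {"alpha": alpha, "lambda": lam, "AIC": aic, "BIC": bic, "KS_p": ks.pvalue}
```

Applying fit_ier, and analogous fitters for the competing models, to each of the four data subsets yields the AIC, BIC and K-S entries of the kind reported in Table 4.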
This conclusion is further supported by the visual goodness-of-fit plots in Figures 3 and 4. For each dataset, the fitted IER density closely follows the empirical histogram, and the Q-Q plots align closely with the diagonal reference line. This strong visual and statistical evidence validates the excellent fit of the IER distribution.
Estimation and results
Having confirmed the suitability of the IER distribution, we now apply the frequentist and Bayesian inferential methodologies developed in this paper to estimate the multicomponent stress-strength reliability for both datasets.
For the carbon fiber dataset, the system is a critical fiber bundle. We model it as a 3-out-of-5:G system, meaning the component remains functional if at least \(s=3\) of the \(\kappa =5\) fibers withstand the operational stress. The reliability of interest is thus \(\zeta _{3,5}\).
For the river flood dataset, the system represents a critical stretch of a levee system. We model its reliability as a 2-out-of-5:G system, where the area is protected if at least \(s=2\) of the \(\kappa =5\) key levee sections hold against the flood discharge. The reliability is \(\zeta _{2,5}\).
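For reference, the reliability of an s-out-of-\(\kappa\):G system can be evaluated numerically from the standard stress-strength representation \(\zeta _{s,\kappa }=\sum _{p=s}^{\kappa }\binom{\kappa }{p}\int _{0}^{\infty }[1-F_X(y)]^{p}[F_X(y)]^{\kappa -p}f_Y(y)\,dy\). The sketch below assumes independent IER strength and stress variables, each with its own \((\alpha ,\lambda )\) pair, and uses plain quadrature; the parameter values in the example are illustrative, not estimates from the data.

```python
import numpy as np
from math import comb
from scipy.integrate import quad

def ier_cdf(x, alpha, lam):
    return 1.0 - (1.0 - np.exp(-lam / x**2)) ** alpha

def ier_pdf(x, alpha, lam):
    z = np.exp(-lam / x**2)
    return 2.0 * alpha * lam * x**-3 * z * (1.0 - z) ** (alpha - 1)

def zeta_s_k(s, k, strength_pars, stress_pars):
    """s-out-of-k stress-strength reliability by numerical integration."""
    aX, lX = strength_pars
    aY, lY = stress_pars
    def integrand(y, p):
        Fx = ier_cdf(y, aX, lX)
        return (1.0 - Fx) ** p * Fx ** (k - p) * ier_pdf(y, aY, lY)
    total = 0.0
    for p in range(s, k + 1):
        val, _ = quad(integrand, 0.0, np.inf, args=(p,), limit=200)
        total += comb(k, p) * val
    return total

# Example: a 3-out-of-5 system with illustrative parameter values.
print(zeta_s_k(3, 5, strength_pars=(2.0, 1.5), stress_pars=(1.0, 1.0)))
```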
For the Bayesian analysis, non-informative gamma priors are used for the parameters. The asymmetric LINEX loss function is evaluated with \(c=1.5\) (to penalize overestimation more heavily) and \(c=-1.5\) (to penalize underestimation). For the frequentist bootstrap confidence intervals, \(B=2000\) bootstrap samples were generated to obtain the Boot-p and Boot-t estimates. The MCMC analysis was performed by generating 50,000 samples from the posterior distribution after a burn-in period of 10,000 iterations to ensure convergence. The point and 95% interval estimates for system reliability for both datasets are summarized in Tables 5 and 6.
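The MCMC output described above can be generated with a standard random-walk Metropolis-Hastings routine such as the minimal sketch below, which uses the stated 50,000 iterations and 10,000 burn-in. The log_post argument, the Gaussian step size, and the log-scale parameterisation are placeholders for the paper’s actual joint posterior of the IER parameters under the gamma priors.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_iter=50_000, burn_in=10_000,
                        step=0.05, seed=0):
    """Random-walk Metropolis-Hastings sketch: symmetric Gaussian proposals on the
    working (e.g. log-parameter) scale, with the first `burn_in` draws discarded."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    current = log_post(theta)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        proposal = theta + rng.normal(0.0, step, size=theta.size)
        candidate = log_post(proposal)
        if np.log(rng.uniform()) < candidate - current:   # accept with prob. min(1, ratio)
            theta, current = proposal, candidate
        chain[i] = theta
    return chain[burn_in:]                                # post burn-in draws
```

Each retained draw of the model parameters is then mapped to a draw of \(\zeta _{s,\kappa }\), for example with a routine like zeta_s_k above, from which the SE, LINEX and HPD summaries are computed.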
To further elucidate the nature of these simulation-based estimates, the distributions generated by the bootstrap and MCMC procedures are visualized in Figs. 5 and 6 for the carbon fiber and river flood datasets, respectively. Each figure comprises three panels: (a) the empirical sampling distribution of the bootstrap reliability estimates (\(\hat{\zeta }_{s,\kappa ,b}^*\)), from which the percentile interval is directly derived; (b) the distribution of the corresponding studentized bootstrap statistics (\(T_b^*\)), which is expected to be centered around zero; and (c) the posterior distribution of the reliability parameter (\(\zeta _{s,\kappa }^{(j)}\)) based on MCMC samples. These plots give a clear visual representation of the uncertainty associated with each estimate; the vertical dashed lines mark the boundaries of the respective 95% confidence or credible intervals, complementing the numerical results in the tables.
Figure 5. Empirical distributions of \(B=2000\) samples for Dataset 1 (breaking strength of carbon fibers). Left: bootstrap-p samples of \(\hat{\zeta }_{s,\kappa ,b}^*\). Middle: bootstrap-t statistics \(T_b^*\). Right: posterior MCMC samples of \(\zeta _{s,\kappa }^{(j)}\) based on 50,000 iterations with a burn-in of 10,000. Dashed vertical lines mark the 2.5th and 97.5th percentiles.
Figure 6. Empirical distributions of \(B = 2000\) samples for Dataset 2 (river flood discharge and levee capacity). Left: bootstrap-p samples of \(\hat{\zeta }_{s,\kappa ,b}^*\). Middle: bootstrap-t statistics \(T_b^*\). Right: posterior MCMC samples of \(\zeta _{s,\kappa }^{(j)}\) generated from 50,000 iterations with a burn-in of 10,000 to ensure convergence. Dashed vertical lines mark the 2.5th and 97.5th percentiles.
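The intervals visualised in the left and middle panels can be assembled from the bootstrap replicates in a few lines. The sketch below is a generic illustration of the percentile (Boot-p) and studentized (Boot-t) constructions; the argument names and the assumption that per-replicate standard errors are available are ours, not taken from the paper’s code.

```python
import numpy as np

def boot_intervals(boot_est, boot_se, est_hat, se_hat, alpha=0.05):
    """Percentile (Boot-p) and studentized (Boot-t) intervals from B bootstrap replicates.

    boot_est : bootstrap reliability estimates  zeta*_b
    boot_se  : their estimated standard errors  se*_b
    est_hat  : the original-sample estimate     zeta_hat
    se_hat   : its estimated standard error
    """
    boot_est, boot_se = np.asarray(boot_est), np.asarray(boot_se)
    # Boot-p: empirical percentiles of the bootstrap estimates themselves.
    p_lo, p_hi = np.quantile(boot_est, [alpha / 2, 1 - alpha / 2])
    # Boot-t: percentiles of the studentized statistics T*_b = (zeta*_b - zeta_hat) / se*_b.
    t_stats = (boot_est - est_hat) / boot_se
    t_lo, t_hi = np.quantile(t_stats, [alpha / 2, 1 - alpha / 2])
    boot_t = (est_hat - t_hi * se_hat, est_hat - t_lo * se_hat)
    return (p_lo, p_hi), boot_t
```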
Discussion of empirical findings
The analysis of the two real-world datasets provides compelling evidence for the practical utility of the proposed statistical framework and reinforces the conclusions from the simulation study.
Prior sensitivity analysis: A key component of a robust Bayesian analysis is to ensure that the findings are not unduly influenced by the choice of prior distributions, especially when using non-informative priors. Our primary analysis employed non-informative gamma priors with hyperparameters set to a small value (e.g., \(a_i = b_i = 0.001\)). To assess the sensitivity of our results to this choice, we conducted a sensitivity analysis by re-running the MCMC procedure for Dataset 1 using a different set of vague priors, specifically a gamma (0.1, 0.1) distribution for each parameter. Upon re-evaluation, the posterior estimates for the system reliability (\(\zeta _{3,5}\)) and its 95% HPD credible interval remained remarkably stable. The new point estimate and interval boundaries differed by less than 0.5% from those reported in Table 5. This stability demonstrates that the likelihood function, driven by the observed data, dominates the posterior distribution, and our conclusions are robust to reasonable variations in the specification of the non-informative priors.
Point estimation: For both the carbon fiber and river flood datasets, the Bayesian point estimates under SE loss are consistently close to the MLEs, but slightly higher, suggesting the incorporation of prior information leads to a modest adjustment. The TK and MCMC-based estimates are in very close agreement, validating the TK method as a computationally efficient and highly accurate alternative to full MCMC simulation for point estimation. The impact of the asymmetric LINEX loss function is clearly demonstrated34. For instance, in the carbon fiber analysis (Table 5), the reliability estimate under SE loss is 0.9641 (MCMC), whereas the LINEX estimate with \(c=1.5\) is a more conservative 0.9605, reflecting the higher penalty for overestimation. This feature is invaluable for engineers who must prioritize safety and avoid overstating system performance.
Interval estimation: The results for interval estimation clearly reveal a performance hierarchy that mirrors our simulation findings. The ACI is consistently the widest and thus the least precise for both datasets (0.0753 for Dataset 1, 0.1576 for Dataset 2). The bootstrap intervals offer a marked improvement in precision, with the Bootstrap-t interval being narrower than the Bootstrap-p. This underscores the benefit of resampling methods for complex data structures where large-sample normality may not hold perfectly. The Bayesian HPD credible interval stands out as the superior method for interval estimation. In both applications, it yielded the shortest interval length (0.0554 for Dataset 1 and 0.1166 for Dataset 2), providing the most precise range for the true reliability value. This superior performance is attributable to its construction, which efficiently captures the shape of the posterior distribution, providing the shortest possible interval for a given credibility level36.
Concluding remarks
This paper confronts a significant challenge in statistical inference: estimating multicomponent stress-strength reliability when data collection is complex. We were motivated by the practical reality that component strength and operational stress are often gathered using sophisticated but separate methods. Our work develops a unified framework to handle strength data from block adaptive Type-II progressive hybrid censoring alongside stress data structured as upper k-records with inter-k-record times, all modeled by the flexible inverted exponentiated Rayleigh distribution.
The result is a novel and exhaustive statistical framework that bridges a critical gap in the literature. For frequentist inference, we derived the MLE for system reliability and constructed both asymptotic and more robust bootstrap-p and bootstrap-t confidence intervals. In parallel, our Bayesian approach provides a complete pipeline, using the analytical efficiency of Tierney and Kadane’s approximation alongside the simulation-based power of the Metropolis-Hastings algorithm. This comprehensive approach allows for point estimation under both symmetric (SE) and asymmetric (LINEX) loss functions and the construction of highly accurate HPD credible intervals.
Our conclusions are supported by extensive Monte Carlo simulations and the analysis of two real-world engineering datasets from materials science and hydrology. These findings consistently highlight the strong performance of the Bayesian paradigm for this problem. Estimators derived from either TK or MCMC yielded consistently lower bias and mean squared error compared to the MLE. For interval estimation, the Bayesian HPD credible interval proved most effective, delivering the shortest intervals while maintaining excellent coverage probability. This efficiency in capturing posterior uncertainty makes it the most suitable method for practical applications where precise risk assessment is critical.
Ultimately, this research provides more than a theoretical exercise. It offers a robust toolkit for analyzing data from advanced censoring and record-value schemes, empowering risk analysts to make more informed decisions.
Data availability
The datasets analyzed during the current study and the complete computational code used to generate all numerical results are publicly available in the GitHub repository, which can be accessed at https://github.com/haidyali3/Simulation_Algorithm/blob/main/README.md?plain=1.
References
Breneman, J. E., Sahay, C. & Lewis, E. E. Introduction to reliability engineering (John Wiley & Sons, 2022).
Kotz, S., Lumelskii, Y. & Pensky, M. The stress-strength model and its generalizations: Theory and applications (Birkhäuser Boston, 2003).
Khan, A. H. & Jan, T. R. Estimation of multi component systems reliability in stress-strength models. J. Mod. Appl. Stat. Methods 13(2), 21 (2014).
Hassan, A. S., Elgarhy, M., Chesneau, C. & Nagy, H. F. Original research article Bayesian analysis of multi-component stress-strength reliability using improved record values. J. Auton. Intell. https://doi.org/10.32629/jai.v7i2.1017 (2024).
Baklizi, A. Inference on Pr(X < Y) in the two-parameter Weibull model based on records. Int. Sch. Res. Not. 2012(1), 263612 (2012).
Condino, F., Domma, F. & Latorre, G. Likelihood and Bayesian estimation of P(Y < X) using lower record values from a proportional reversed hazard family. Stat. Pap. 59, 467–485 (2018).
Mahmoud, M. A., El-Sagheer, R. M., Soliman, A. A. & Abd Ellah, A. H. Bayesian estimation of P[Y < X] based on record values from the Lomax distribution and MCMC technique. J. Mod. Appl. Stat. Methods 15(1), 25 (2016).
Nadar, M. & Kızılaslan, F. Classical and Bayesian estimation of P(X < Y) using upper record values from Kumaraswamy’s distribution. Stat. Pap. 55, 751–783 (2014).
Tarvirdizade, B. & Ahmadpour, M. Estimation of the stress–strength reliability for the two-parameter bathtub-shaped lifetime distribution based on upper record values. Stat. Methodol. 31, 58–72 (2016).
Wang, B. X. & Ye, Z. S. Inference on the Weibull distribution based on record values. Comput. Stat. & Data Anal. 83, 26–36 (2015).
Fan, J. & Gui, W. Statistical inference of inverted exponentiated Rayleigh distribution under joint progressively type-II censoring. Entropy 24(2), 171 (2022).
Balakrishnan, N. Progressive censoring methodology: An appraisal. Test 16(2), 211–259 (2007).
Balakrishnan, N. & Cramer, E. Art of progressive censoring (Birkhäuser, 2014).
Balakrishnan, N. & Kundu, D. Hybrid censoring: Models, inferential results and applications. Comput. Stat. & Data Anal. 57(1), 166–209 (2013).
Ng, H. K. T., Kundu, D. & Chan, P. S. Statistical analysis of exponential lifetimes under an adaptive Type-II progressive censoring scheme. Nav. Res. Logist. (NRL) 56(8), 687–698 (2009).
Dutta, S., Dey, S. & Kayal, S. Bayesian survival analysis of logistic exponential distribution for adaptive progressive Type-II censored data. Comput. Stat. 39(4), 2109–2155 (2024).
Elshahhat, A., Dutta, S., Abo-Kasem, O. E. & Mohammed, H. S. Statistical analysis of the Gompertz-Makeham model using adaptive progressively hybrid Type-II censoring and its applications in various sciences. J. Radiat. Res. Appl. Sci. 16(4), 100644 (2023).
Nassar, M. & Abo-Kasem, O. E. Estimation of the inverse Weibull parameters under adaptive type-II progressive hybrid censoring scheme. J. Comput. Appl. Math. 315, 228–239 (2017).
Zhu, T. Reliability estimation for two-parameter Weibull distribution under block censoring. Reliab. Eng. & Syst. Saf. 203, 107071 (2020).
Arnold, B. C., Balakrishnan, N. & Nagaraja, H. N. Records (John Wiley & Sons, 1998).
Safariyan, A., Arashi, M. & Arabi Belaghi, R. Improved point and interval estimation of the stress–strength reliability based on ranked set sampling. Statistics 53(1), 101–125 (2019).
Singh, P. K. & Singh, R. K. Record values and applications (CRC Press, 2018).
Pak, A., Parham, G. A. & Saraj, M. Inferences on the competing risk reliability problem for exponential distribution based on fuzzy data. IEEE Trans. Reliab. 63(1), 2–12 (2014).
Fatima, K. & Ahmad, S. P. The inverted exponentiated Rayleigh distribution: A new two-parameter lifetime distribution. J. Mod. Appl. Stat. Methods 16(1), 114–133 (2017).
Nassar, M., Al-Mezent, M., Al-Nozaili, R. & El-Sagheer, R. M. Estimation of the inverted exponentiated Weibull distribution parameters under adaptive type-II progressive hybrid censoring. J. Stat. Comput. Simul. 88(15), 3016–3034 (2018).
Singh, K., Tripathi, Y. M., Wang, L. & Wu, S.-J. Analysis of block adaptive Type-II progressive hybrid censoring with Weibull distribution. Mathematics 12(24), 4026 (2024).
MirMostafaee, S. M. T. K., Amini, M. & Balakrishnan, N. Exact nonparametric conditional inference based on k-records, given inter-k-record times. J. Korean Stat. Soc. 46(2), 308–320 (2017).
Rao, C. R. Linear statistical inference and its applications 2nd edn. (Wiley, 1973).
Pak, A., Raqab, M. Z., Mahmoudi, M. R., Band, S. S. & Mosavi, A. Estimation of stress-strength reliability R = P(X > Y) based on Weibull record data in the presence of inter-record times. Alex. Eng. J. 61(3), 2130–2144 (2022).
Efron, B. & Tibshirani, R. J. An introduction to the bootstrap (Chapman and Hall/CRC, 1994).
Davison, A. C. & Hinkley, D. V. Bootstrap methods and their application (Cambridge University Press, 1997).
Hall, P. The bootstrap and Edgeworth expansion (Springer Science & Business Media, 2013).
Gelman, A. et al. Bayesian data analysis 3rd edn. (Chapman and Hall/CRC, 2013).
Zellner, A. Bayesian estimation and prediction using asymmetric loss functions. J. Am. Stat. Assoc. 81(394), 446–451 (1986).
Tierney, L. & Kadane, J. B. Accurate approximations for posterior moments and marginal densities. J. Am. Stat. Assoc. 81(393), 82–86 (1986).
Chen, M. H. & Shao, Q. M. Monte Carlo estimation of Bayesian credible and HPD intervals. J. Comput. Graph. Stat. 8(1), 69–92 (1999).
Agarwal, B. D., Broutman, L. J. & Chandrashekhara, K. Analysis and performance of fiber composites (John Wiley & Sons, 2017).
Apel, H., Thieken, A. H., Merz, B. & Blöschl, G. Flood risk assessment and associated uncertainty. Nat. Hazards Earth Syst. Sci. 4(2), 295–308 (2004).
Funding
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB).
Author information
Authors and Affiliations
Contributions
Haidy A. Newer wrote the main manuscript text, prepared figures and reviewed the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Newer, H.A. Multicomponent stress-strength reliability analysis using the inverted exponentiated Rayleigh distribution under block adaptive type-II progressive hybrid censoring and k-records. Sci Rep 15, 43820 (2025). https://doi.org/10.1038/s41598-025-30570-9