Search results for: e2e reliability prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4099

3139 Predicting Relative Performance of Sector Exchange Traded Funds Using Machine Learning

Authors: Jun Wang, Ge Zhang

Abstract:

Machine learning is used in many areas today. It excels at reviewing large volumes of data and identifying patterns and trends that might not be apparent to a human. Given the huge potential benefit and the amount of data available in the financial market, it is not surprising to see machine learning applied to various financial products. While future prices of financial securities are extremely difficult to forecast, we study them from a different angle. Instead of trying to forecast future prices, we apply machine learning algorithms to predict the direction of future price movement, in particular, whether a sector Exchange Traded Fund (ETF) will outperform or underperform the market in the next week or the next month. We apply several machine learning algorithms for this prediction: Linear Discriminant Analysis (LDA), k-Nearest Neighbors (KNN), Decision Tree (DT), Gaussian Naive Bayes (GNB), and Neural Networks (NN). We show that these algorithms, most notably GNB and NN, have some predictive power in forecasting out-performance and under-performance out of sample. We also explore whether it is possible to use the predictions from these algorithms to beat the buy-and-hold strategy on the S&P 500 index. The trading strategy exploiting out-performance predictions does not perform very well, but the trading strategy exploiting under-performance predictions earns higher out-of-sample returns than simply holding the S&P 500 index.
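
The comparison described above maps naturally onto a few lines of scikit-learn. The sketch below is illustrative only: the feature matrix, labels, and hyperparameters are placeholders, not the authors' actual data pipeline, and a chronological (unshuffled) split stands in for their out-of-sample evaluation.

```python
# Hypothetical sketch: comparing the five classifiers named in the abstract on a
# binary out/under-performance label. X and y are synthetic placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))            # placeholder weekly features per ETF
y = (rng.random(500) > 0.5).astype(int)   # 1 = outperforms market next week

X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)  # out-of-sample split

models = {
    "LDA": LinearDiscriminantAnalysis(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(max_depth=5),
    "GNB": GaussianNB(),
    "NN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "out-of-sample accuracy:", model.score(X_te, y_te))
```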

Keywords: machine learning, ETF prediction, dynamic trading, asset allocation

Procedia PDF Downloads 98
3138 Fuzzy Availability Analysis of a Battery Production System

Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz

Abstract:

In today’s competitive market, there are many alternative products that can be used in a similar manner and for the same purpose. The utility of a product is therefore an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the capabilities of the production system. Reliability is an important system design criterion for manufacturers seeking high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate the company needs to realize its production plan. When only the corrective maintenance downtime of the system is considered, the mean time between failures (MTBF) and mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners seeking to improve system performance by adopting suitable maintenance strategies. Conventional availability analysis requires the failure and repair time probability distributions of each component in the system to be known. Generally, however, companies do not have statistics or quality control departments that store such a large amount of data, and real events or situations are described deterministically instead of stochastically. Fuzzy set theory is an alternative approach for analyzing the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach that computes system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR (15%, 20%, and 25%) were chosen to obtain the lower and upper limits of the fuzzy numbers. To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy for practitioners to apply to any repairable production system, and it allows reliability engineers, managers, and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Compared with classical methods, using fuzzy set theory to obtain intervals for these measures is very useful for system managers and practitioners analyzing system qualifications under their working conditions, as much more detailed information about system characteristics is obtained.
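
A minimal sketch of the core idea follows: MTBF and MTTR are represented as triangular fuzzy numbers built from a spread around the crisp value, and interval arithmetic bounds the availability A = MTBF / (MTBF + MTTR). The example MTBF and MTTR values are hypothetical, not taken from the battery line in the paper.

```python
# Triangular fuzzy number (m*(1-s), m, m*(1+s)) for spread s, and the resulting
# fuzzy availability interval for A = MTBF / (MTBF + MTTR).
def triangular(m, spread):
    return (m * (1 - spread), m, m * (1 + spread))

def fuzzy_availability(mtbf, mttr, spread):
    bl, bm, bu = triangular(mtbf, spread)
    rl, rm, ru = triangular(mttr, spread)
    # Availability increases with MTBF and decreases with MTTR, so the lower
    # bound pairs the smallest MTBF with the largest MTTR, and vice versa.
    low = bl / (bl + ru)
    mid = bm / (bm + rm)
    high = bu / (bu + rl)
    return (low, mid, high)

for s in (0.15, 0.20, 0.25):      # the three spreads used in the study
    print(s, fuzzy_availability(mtbf=120.0, mttr=4.0, spread=s))
```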

Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)

Procedia PDF Downloads 224
3137 Groundwater Flow Assessment Based on Numerical Simulation at Omdurman Area, Khartoum State, Sudan

Authors: Adil Balla Elkrail

Abstract:

The Visual MODFLOW computer code was selected to simulate the head distribution, calculate the groundwater budget of the area, evaluate the effect of external stresses on the groundwater head, and demonstrate how a groundwater model can be used as a comparative technique to optimize utilization of the groundwater resource. A conceptual model of the study area, aquifer parameters, and boundary and initial conditions were used to set up the flow model, which was calibrated by trial and error. The most important criteria used to check the calibrated model were the Root Mean Square error (RMS), Mean Absolute error (MAE), Normalized Root Mean Square error (NRMS), and mass balance. Maps of the simulated heads showed acceptable model calibration when compared with the observed-head map. A time length of eight years and the observed heads of the year 2004 were used for model prediction. The predictive simulation showed that continued pumping will cause relatively large changes in the head distribution and the components of the groundwater budget, whereas the small computed deficit (7122 m³/d) between inflows and outflows cannot create a significant drawdown of the potentiometric level. Hence, the area under consideration may represent a high-permeability, productive zone, and it is strongly recommended for further groundwater development.
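
For reference, the calibration statistics named above are straightforward to compute between observed and simulated heads; the sketch below uses illustrative head arrays, not the Omdurman data, and assumes NRMS is the RMS normalized by the observed head range, as is conventional in MODFLOW calibration reports.

```python
# Calibration error metrics between observed and simulated heads (values in m).
import numpy as np

observed = np.array([412.3, 408.9, 405.1, 399.7, 396.2])   # hypothetical heads
simulated = np.array([411.8, 409.6, 404.2, 400.5, 395.4])

residuals = simulated - observed
mae = np.mean(np.abs(residuals))                 # Mean Absolute error
rms = np.sqrt(np.mean(residuals ** 2))           # Root Mean Square error
nrms = rms / (observed.max() - observed.min())   # Normalized RMS (fraction of head range)

print(f"MAE = {mae:.3f} m, RMS = {rms:.3f} m, NRMS = {100 * nrms:.1f}%")
```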

Keywords: aquifers, model simulation, groundwater, calibrations, trial-and-error, prediction

Procedia PDF Downloads 242
3136 Evaluation of Coastal Erosion in the Jurisdiction of the Municipalities of Puerto Colombia and Tubará, Atlántico – Colombia in Google Earth Engine with Landsat and Sentinel 2 Images

Authors: Francisco Reyes, Hector Ramirez

Abstract:

Coastal zones are home to mangrove swamps, coral reefs, and seagrass ecosystems, which are among the most biodiverse and fragile on the planet. These areas support a great diversity of marine life; they are also extraordinarily important for humans in the provision of food, water, wood, and other associated goods and services, and they contribute to climate regulation. The lack of an automated model that generates information on the dynamics of coastline change and coastal erosion is identified as the central problem. Coastlines were determined from 1984 to 2020 on the Google Earth Engine platform from Landsat and Sentinel 2 images, using the Modified Normalized Difference Water Index (MNDWI) and the Digital Shoreline Analysis System (DSAS) v5.0. Starting from the 2020 coastline, the 10-year prediction (year 2031) indicated erosion of 238.32 hectares and accretion of 181.96 hectares, while the 20-year prediction (year 2041) indicated erosion of 544.04 hectares and accretion of 133.94 hectares. The erosion and accretion of Playa Muelle in the municipality of Puerto Colombia were established; it will register the highest value of erosion. The land cover that presented the greatest change was artificialized territories.
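
A hedged sketch of the MNDWI step in the Google Earth Engine Python API is shown below, assuming Sentinel-2 surface reflectance bands (B3 = green, B11 = SWIR1); the region geometry, dates, and cloud threshold are placeholders, not the study-area definition.

```python
# MNDWI = (Green - SWIR1) / (Green + SWIR1); water pixels have MNDWI > 0.
import ee
ee.Initialize()

region = ee.Geometry.Rectangle([-75.0, 10.7, -74.7, 11.1])  # hypothetical coastal AOI

composite = (ee.ImageCollection("COPERNICUS/S2_SR")
             .filterBounds(region)
             .filterDate("2020-01-01", "2020-12-31")
             .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
             .median())

mndwi = composite.normalizedDifference(["B3", "B11"]).rename("MNDWI")
water_mask = mndwi.gt(0)   # threshold used to delineate the land/water boundary
```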

Keywords: coastline, coastal erosion, MNDWI, Google Earth Engine, Colombia

Procedia PDF Downloads 120
3135 Blind Hybrid ARQ Retransmissions with Different Multiplexing between Time and Frequency for Ultra-Reliable Low-Latency Communications in 5G

Authors: Mohammad Tawhid Kawser, Ishrak Kabir, Sadia Sultana, Tanjim Ahmad

Abstract:

A promising service category of 5G, popularly known as Ultra-Reliable Low-Latency Communications (URLLC), is devoted to providing users with the staunchest fail-safe connections within fractions of a second. The reliability of data transfer offered by Hybrid ARQ (HARQ) should be employed, as URLLC applications are highly error-sensitive. However, the delay added by HARQ ACK/NACK and retransmissions can degrade performance, as URLLC applications are highly delay-sensitive too. To improve latency while maintaining reliability, this paper proposes the use of blind transmissions of redundancy versions that exploit the frequency diversity of the wide bandwidth of 5G. The blind HARQ retransmissions proposed so far consider narrow bandwidth cases, for example, dedicated short range communication (DSRC) and shared channels for device-to-device (D2D) communication, and thus do not gain much from frequency diversity. The proposal also combines blind and ACK/NACK-based retransmissions for different multiplexing options between time and frequency, depending on the current radio channel quality and the stringency of the latency requirements. The wide bandwidth of 5G ensures that the proposed blind retransmission, which does not wait for ACK/NACK, is not palpably extravagant. A simulation is performed to demonstrate the latency improvement of the proposed scheme.

Keywords: 5G, URLLC, HARQ, latency, frequency diversity

Procedia PDF Downloads 36
3134 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all of these variables across a complex and highly diverse dataset. The ability of AI/machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial random-weight artificial neural network model, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose the artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
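
One common reading of "neural network with random weights" is the extreme-learning-machine style network sketched below, in which hidden weights are drawn at random and fixed and only the output layer is fitted by least squares; the paper's MATLAB models may differ, and the feature and target arrays here are synthetic placeholders.

```python
# Single-hidden-layer random-weight network: random sigmoid feature map plus a
# least-squares output fit.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))   # placeholder geospatial/behavioral features
y = rng.normal(size=1000)        # placeholder radon concentrations

n_hidden = 64
W = rng.normal(size=(X.shape[1], n_hidden))   # random, untrained hidden weights
b = rng.normal(size=n_hidden)

H = sigmoid(X @ W + b)                          # random sigmoid feature map
beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # least-squares output weights
predictions = H @ beta
```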

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 96
3133 Evaluation of Intervention Effectiveness from the Client Perspective: Dimensions and Measurement of Wellbeing

Authors: Neşe Alkan

Abstract:

Purpose: The point that applied/clinical psychology, the practice and research discipline of the mental health field, has reached today can be summarized as the necessity of addressing people's psychological wellbeing from multiple perspectives, with the goal of moving it to a higher level. Clients' subjective assessment of their own condition and wellbeing is an integral part of evidence-based interventions. There is a need for tools through which clients can evaluate, in a valid and reliable manner, the effectiveness of the psychotherapy or intervention performed with them and its contribution to their wellbeing. The aim of this research is to meet this need by testing the reliability and validity of the index in Turkish and exploring its usability in the practice of both researchers and psychotherapists. Method: A total of 213 adults aged 18-54, of whom 69.5% were working and 29.5% were university students, were included in the study. Along with their demographic information, the participants were administered a set of scales measuring wellbeing, life satisfaction, spiritual satisfaction, shopping addiction, and loneliness via an online platform. The construct validity of the wellbeing scale was tested with exploratory and confirmatory factor analyses, convergent and discriminant validity were tested with two-way full and partial correlation analyses, and measurement invariance was tested with one-way analysis of variance. Results: Factor analyses showed that the scale consists of six dimensions, as in its original structure. The internal consistency of the scale was Cronbach's α = .82. Two-way correlation analyses revealed that the wellbeing scale total score was positively correlated with general life satisfaction (r = .62) and spiritual satisfaction (r = .29), as expected, and negatively correlated with loneliness (r = -.51) and shopping addiction (r = -.15). While the scale score did not vary by gender, previous illness, or nicotine addiction, the total wellbeing scores of participants who had used antidepressant medication during the past year were lower than those of participants who had not (F(1,204) = 7.713, p = .005). Conclusion: The 12-item wellbeing scale consisting of six dimensions can be used in research and health sciences practice as a valid and reliable measurement tool. Further research examining the reliability and validity of the scale in other widely used languages, such as Spanish and Chinese, is recommended.
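
The internal-consistency statistic reported above is computed as below; `scores` is a hypothetical respondents-by-items matrix standing in for the study's 213 responses to the 12 items, so it will not reproduce the reported α = .82.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / total variance).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]                           # number of items (12 in the study)
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
scores = rng.integers(1, 6, size=(213, 12))      # placeholder 213 x 12 responses
print(f"alpha = {cronbach_alpha(scores):.2f}")
```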

Keywords: wellbeing, intervention effectiveness, reliability and validity, effectiveness

Procedia PDF Downloads 179
3132 Development and Testing of an Instrument to Measure Beliefs about Cervical Cancer Screening among Women in Botswana

Authors: Ditsapelo M. McFarland

Abstract:

Background: Despite the availability of Pap smear services in urban areas in Botswana, most women in such areas do not seem to screen regularly for prevention of cervical cancer. The reasons for non-use of the available Pap smear services are not well understood, and beliefs about cancer may influence participation in cancer screening. The purpose of this study was to develop an instrument to measure beliefs about cervical cancer and Pap smear screening among Black women in Botswana and to evaluate its psychometric properties. Significance: Instruments designed to measure beliefs about cervical cancer and screening among Black women in Botswana, as well as in the surrounding region, are presently not available. Valid and reliable instruments are needed to explore these women's beliefs about cervical cancer. Conceptual Framework: The Health Belief Model (HBM) provided the conceptual framework for the study. Methodology: The study was done in four phases. Phase 1, item generation: 15 items were generated from the literature review and qualitative data for each of four conceptually defined HBM constructs: perceived susceptibility, severity, benefits, and barriers (Version 1). Phase 2, content validity: four experts, advanced practice nurses of African descent who were familiar with the content and the HBM, rated the items on a 4-point Likert scale ranging from 1 = not relevant to 4 = very relevant. Fifty-five items were retained for instrument development: perceived susceptibility 11, severity 14, benefits 15, and barriers 15, all measured on a 4-point Likert scale ranging from strongly disagree (1) to strongly agree (4) (Version 2). Phase 3, pilot testing: the instrument was pilot tested on a convenience sample of 30 women in Botswana and revised as needed. Phase 4, reliability: the revised instrument (Version 3) was administered to a larger sample of women in Botswana (n=300) for reliability testing. The sample included women who were Batswana by birth and descent, were aged 30 years and above, and could complete an English questionnaire. Data were collected with the assistance of trained research assistants. Major findings: Factor analysis of the 55 items found that a number of items did not load adequately in a four-factor solution. Items that exhibited reasonable reliability and had a low frequency of missing values (n=36) were retained: perceived barriers (14 items), perceived benefits (8 items), perceived severity (4 items), and perceived susceptibility (10 items). Factor analysis (principal components) for a four-factor solution using varimax rotation demonstrated that these four factors explained 43% of the variation in the 36 items. Conclusion: Reliability analysis using Cronbach's alpha gave generally satisfactory results, with values from 0.53 to 0.89.

Keywords: cervical cancer, factor analysis, psychometric evaluation, varimax rotation

Procedia PDF Downloads 126
3131 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The PUSPATI TRIGA Reactor (RTP) in Malaysia reached first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in RTP. It is important to ensure that the core power is always stable and follows the load demand within an acceptable steady-state error and with minimum settling time to reach steady-state power. At present, the system cannot be considered well-posed with respect to power tracking performance, and there is potential to improve it by developing a next-generation, novel nuclear core power control design. In this paper, the dual-mode prediction proposed in Optimal Model Predictive Control (OMPC) is presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core comprised neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC, covering transient and terminal modes, was based on the implementation of a Linear Quadratic Regulator (LQR) in the design of the core power control. The combination of dual-mode prediction and a Lyapunov approach, which handles the summation of the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows how OMPC deals with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and OMPC are compared through numerical simulations. In conclusion, the proposed OMPC shows significant load tracking and regulating performance for nuclear reactor core power, with guaranteed closed-loop stability.
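
A minimal sketch of the LQR design underlying the dual-mode terminal controller follows: inside the terminal set the input is u = -Kx, with K obtained from the discrete-time algebraic Riccati equation. The two-state reactor model here is a placeholder, not the paper's neutronic/thermal-hydraulic model.

```python
# Discrete-time LQR gain for the terminal mode of a dual-mode MPC scheme.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])    # hypothetical discrete-time core dynamics
B = np.array([[0.0],
              [0.1]])         # control rod input channel
Q = np.diag([10.0, 1.0])      # penalize power tracking error most
R = np.array([[1.0]])

P = solve_discrete_are(A, B, Q, R)                 # infinite-horizon cost matrix
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # LQR gain

x = np.array([[0.5], [0.0]])  # deviation from the power set-point
u = -K @ x                    # terminal-mode control action
```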

Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control

Procedia PDF Downloads 162
3130 Defect Identification in Partial Discharge Patterns of Gas Insulated Switchgear and Straight Cable Joint

Authors: Chien-Kuo Chang, Yu-Hsiang Lin, Yi-Yun Tang, Min-Chiu Wu

Abstract:

With the trend of technological advancement, the harm caused by power outages is substantial, and outages are mostly due to problems in the power grid. This highlights the necessity of further improving the reliability of the power system, in which gas-insulated switchgear (GIS) and power cables play a crucial role. Long-term operation under high voltage can cause the insulation materials in this equipment to crack, potentially leading to partial discharges. If these partial discharges (PD) can be analyzed, preventive maintenance and replacement of equipment can be carried out, thereby improving the reliability of the power grid. This research diagnoses defects by identifying three different defect types in GIS and three in straight cable joints, for a total of six defect types. The measured partial discharge data are converted through phase-resolved analysis diagrams and pulse sequence analysis. Discharge features are extracted using convolutional image processing, and three different deep learning models, CNN, ResNet18, and MobileNet, are used for training and evaluation. Class Activation Mapping is utilized to interpret the black-box behavior of the deep learning models, with each model achieving an accuracy rate of over 95%. Lastly, the overall performance is enhanced through an ensemble learning voting method.
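
The final voting step can be sketched as below, assuming soft voting (the abstract does not specify soft versus hard voting): each model's class probabilities over the six defect types are averaged and the argmax is taken. The probability arrays are random placeholders, not model outputs.

```python
# Soft-voting ensemble across three trained classifiers for six defect classes.
import numpy as np

rng = np.random.default_rng(0)
# per-model predicted probabilities, shape (n_samples, 6 defect classes)
probs_cnn = rng.dirichlet(np.ones(6), size=10)
probs_resnet = rng.dirichlet(np.ones(6), size=10)
probs_mobilenet = rng.dirichlet(np.ones(6), size=10)

ensemble_probs = (probs_cnn + probs_resnet + probs_mobilenet) / 3.0
predicted_defect = ensemble_probs.argmax(axis=1)   # 0..5 -> defect type index
```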

Keywords: partial discharge, gas-insulated switches, straight cable joint, defect identification, deep learning, ensemble learning

Procedia PDF Downloads 78
3129 Investigating Salience Theory’s Implications for Real-Life Decision Making: An Experimental Test for Whether the Allais Paradox Exists under Subjective Uncertainty

Authors: Christoph Ostermair

Abstract:

We deal with the effect of correlation between prospects on human decision making under uncertainty, as proposed by the comparatively new and promising model of "salience theory of choice under risk". In this regard, we show that the theory entails the prediction that the inconsistency of choices known as the Allais paradox should not be an issue in the context of real-life decision making, which typically corresponds to situations of subjective uncertainty. The Allais paradox, probably the best-known anomaly regarding expected utility theory, would then essentially have no practical relevance. If, however, empirical evidence contradicts this prediction, salience theory might suffer a serious setback, since the model's explanations of variable human choice behavior are mostly the result of a particular mechanism that does not come into play under perfect correlation. Hence, if it turns out that correlation between prospects, as typically found in real-world applications, does not influence human decision making in the expected way, this might to a large extent cost the theory its explanatory power. The empirical literature on the Allais paradox under subjective uncertainty is so far rather sparse. Beyond that, its results are hard to maintain as an argument, as the presentation formats commonly employed have presumably generated so-called event-splitting effects, thereby distorting subjects' choice behavior. In our own incentivized experimental study, we control for such effects by means of two different choice settings. We find significant event-splitting effects in both settings, supporting the suspicion that the existing empirical results on Allais paradoxes under subjective uncertainty may not be able to answer the question at hand. Nevertheless, we find that the basic tendency behind the Allais paradox, a particular switch of the preference relation due to a modified common consequence shared by two prospects, is still present under both an event-splitting and a coalesced presentation format. Yet the modal choice pattern is in line with the prediction of salience theory. As a consequence, the effect of correlation, as proposed by the model, might, if anything, only weaken the systematic choice pattern behind the Allais paradox.

Keywords: Allais paradox, common consequence effect, models of decision making under risk and uncertainty, salience theory

Procedia PDF Downloads 199
3128 Qualitative and Quantitative Research Methodology Theoretical Framework and Descriptive Theory: PhD Construction Management

Authors: Samuel Quashie

Abstract:

PhD researchers in Construction Management often design their methods based on those established in the social sciences, using theoretical models to collect, gather, and analyze data to answer research questions. The aim of this work is to apply qualitative and quantitative methods as the data analysis approach and, as part of the theoretical framework, descriptive theory, in order to improve the replicability of the research's contribution to knowledge. A practical triangulation approach is used, which covers interviews and observations, literature review and (archival) document studies, project-based case studies, questionnaire surveys, and a review of the integrated systems used in construction and construction-related industries. Clarification of the organisational context and the management delivery that influence organisational performance, product quality, and measures is achieved. The results illustrate the improved reliability of this research approach when interpreting real-world phenomena; cumulative results of the research can be applied with confidence in similar environments. The approach assisted the validity of the PhD research outcomes and strengthens the confidence to apply cumulative research results under similar conditions in Built Environment research, which has been criticised for a lack of reliability in its approaches to interpreting real-world phenomena.

Keywords: case studies, descriptive theory, theoretical framework, qualitative and quantitative research

Procedia PDF Downloads 386
3127 Influential Parameters in Estimating Soil Properties from Cone Penetrating Test: An Artificial Neural Network Study

Authors: Ahmed G. Mahgoub, Dahlia H. Hafez, Mostafa A. Abu Kiefa

Abstract:

The Cone Penetration Test (CPT) is a common in-situ test that generally investigates a much greater volume of soil, more quickly, than is possible with sampling and laboratory tests. It therefore has the potential to realize both cost savings and rapid, continuous assessment of soil properties. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and to recommend which should be included as inputs to improve the prediction. Neural networks discover relationships in the input data sets through iterative presentation of the data and the intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN), one of the most powerful neural network architectures, is utilized in this study. A large amount of field and experimental data, including CPT results, plate load tests, direct shear box tests, grain size distributions, and calculated overburden pressures, was obtained from a large project in the United Arab Emirates and used for training and validation of the neural network. A comparison was made between the results obtained from the ANN approach and from some common traditional correlations that predict Φ and E from CPT results, with respect to the actual measurements in the collected data. The results show that the ANN is a very powerful tool: very good agreement was obtained between the ANN estimates and the actual measured results, in comparison with other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models. It is shown that the use of the friction ratio in the estimation of Φ and the use of the fines content in the estimation of E considerably improve the prediction models.
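
At its core, a GRNN is a Gaussian kernel-weighted average of the training targets (the Nadaraya-Watson form), as in the minimal sketch below; the feature and target arrays are placeholders for the CPT inputs and the Φ or E outputs, and σ is the single smoothing parameter to be tuned.

```python
# General Regression Neural Network (GRNN) prediction as a kernel-weighted mean.
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # pairwise squared distances between query and training points
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))    # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)    # weighted average of targets

rng = np.random.default_rng(3)
X_train = rng.normal(size=(200, 4))   # e.g. tip resistance, friction ratio, depth...
y_train = rng.normal(size=200)        # e.g. angle of internal friction
X_query = rng.normal(size=(5, 4))
print(grnn_predict(X_train, y_train, X_query))
```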

Keywords: angle of internal friction, cone penetrating test, general regression neural network, soil modulus of elasticity

Procedia PDF Downloads 415
3126 Globalization and Civil Society Organization of Nigeria: The Business Community

Authors: Mary I. Marire

Abstract:

This seminar examined globalization and the civil society organizations of Nigeria, with a focus on the business community. The study examined the effect of globalization on the growth of civil society organizations in Nigeria and evaluated its effect on the development of the Nigerian business environment. The population consisted of 562 members of the Ohanaeze Ndigbo civil society organisation in Enugu State. The study used the survey approach: 290 copies of the questionnaire were administered to the sampled members of the group, of which 282 were returned and accurately filled in. The validity of the instrument was tested using content analysis, and the result was good. Reliability was tested using the Pearson correlation coefficient (r), which gave a reliability coefficient of 0.79, also good. The hypotheses were analyzed using the F-statistic (ANOVA). The findings indicated that globalization has a significant effect on the growth of civil society organizations in Nigeria and on the development of the Nigerian business environment. Based on the findings, the study recommends that efforts be directed at service delivery and the reduction of corruption to bring about sustainable socio-economic development in Nigeria. This will enable civil society groups to stand the test of time by organizing themselves in a manner that does not leave them dependent on the government. There is a dire need for government at all levels to demonstrate the political will and zeal to meet current global realities in their totality.

Keywords: globalization, business environment, civil society, business growth

Procedia PDF Downloads 106
3125 Using Audit Tools to Maintain Data Quality for ACC/NCDR PCI Registry Abstraction

Authors: Vikrum Malhotra, Manpreet Kaur, Ayesha Ghotto

Abstract:

Background: Cardiac registries such as the ACC Percutaneous Coronary Intervention (PCI) Registry require high-quality data to be abstracted, including data elements such as nuclear cardiology, diagnostic coronary angiography, and PCI. Introduction: The audit tool created is used by data abstractors to provide data audits and assess the accuracy and inter-rater reliability of abstraction performed by the abstractors for a health system. This audit tool solution has been developed across 13 registries, including the ACC/NCDR registries, PCI, STS, and Get with the Guidelines. Methodology: The data audit tool was used to audit internal registry abstraction for all data elements, including stress test performed, type of stress test, date of stress test, results of stress test, risk/extent of ischemia, diagnostic catheterization detail, and PCI data elements for the ACC/NCDR PCI registries. It is being used internally across 20 hospital systems, providing abstraction and audit services for them. Results: The data audit tool showed inter-rater reliability and accuracy greater than 95% for the PCI registry across 50 PCI registry cases in 2021. Conclusion: The tool is being used internally for surgical societies and across hospital systems. The audit tool enables the abstractor to be assessed by an external abstractor and includes all of the data dictionary fields for each registry.

Keywords: abstraction, cardiac registry, cardiovascular registry, registry, data

Procedia PDF Downloads 105
3124 Verification of Simulated Accumulated Precipitation

Authors: Nato Kutaladze, George Mikuchadze, Giorgi Sokhadze

Abstract:

Precipitation forecasting is one of the most demanding applications in numerical weather prediction (NWP). Georgia, like the whole Caucasus region, is characterized by very complex topography, and the country's territory is prone to flash floods and mudflows, so quantitative precipitation estimation (QPE) and quantitative precipitation forecasting (QPF) at any lead time are very important for Georgia. In this study, the Advanced Research WRF model's skill in QPF is investigated over Georgia's territory. We analyzed several combinations of convection parameterization and microphysical schemes for different rain episodes and heavy rainfall phenomena. We estimate the errors and biases in accumulated 6 h precipitation at different spatial resolutions by verifying model performance for 12-hour and 24-hour lead times against the corresponding rain gauge observations and satellite data. Various statistical parameters were calculated for the 8-month comparison period, and the skill of the model simulations was evaluated. Our focus is on the formation and organization of convective precipitation systems in a low-mountain region. Several QPF problems have been identified for mountain regions, including the overestimation and underestimation of precipitation on the windward and lee sides of the mountains, respectively, and a phase error in the diurnal cycle of precipitation that leads to the onset of convective precipitation in model forecasts several hours too early.
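
For reference, the contingency-table scores behind this kind of verification, including the extremal dependence index (EDI) named in the keywords, can be computed as below; the event counts are illustrative, not the study's results.

```python
# Dichotomous verification scores from a 2x2 contingency table.
import numpy as np

hits, misses, false_alarms, correct_negatives = 42, 18, 25, 915

H = hits / (hits + misses)                             # hit rate (POD)
F = false_alarms / (false_alarms + correct_negatives)  # false alarm rate
FAR = false_alarms / (hits + false_alarms)             # false alarm ratio

# Extremal dependence index: EDI = (ln F - ln H) / (ln F + ln H)
EDI = (np.log(F) - np.log(H)) / (np.log(F) + np.log(H))
print(f"POD={H:.2f}, F={F:.3f}, FAR={FAR:.2f}, EDI={EDI:.2f}")
```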

Keywords: extremal dependence index, false alarm, numerical weather prediction, quantitative precipitation forecasting

Procedia PDF Downloads 147
3123 Integrating Artificial Neural Network and Taguchi Method on Constructing the Real Estate Appraisal Model

Authors: Mu-Yen Chen, Min-Hsuan Fan, Chia-Chen Chen, Siang-Yu Jhong

Abstract:

In recent years, real estate prediction and valuation have been topics of discussion in many developed countries. Improper hype created by investors leads to fluctuating real estate prices, affecting many consumers who want to purchase their own homes. Therefore, scholars from various countries have conducted research on real estate valuation and prediction. Using the back-propagation neural network, which has been popular in recent years, and the orthogonal arrays of the Taguchi method, this study aimed to find the optimal parameter combination among the levels of the orthogonal array, so that the artificial neural network obtained the most accurate results. The experimental results demonstrated that the proposed method outperformed traditional machine learning. They also showed that the proposed model had the best predictive effect and could significantly reduce the time cost of the simulation operation: the best predictive results could be found more efficiently with a smaller number of experiments, allowing users to predict real estate transaction prices that are not far from actual current prices.

Keywords: artificial neural network, Taguchi method, real estate valuation model, investors

Procedia PDF Downloads 489
3122 Transforming Breast Density Measurement with Artificial Intelligence: Population-Level Insights from BreastScreen NSW

Authors: Douglas Dunn, Richard Walton, Matthew Warner-Smith, Chirag Mistry, Kan Ren, David Roder

Abstract:

Introduction: Breast density is a risk factor for breast cancer, both because increased fibroglandular tissue can harbor malignancy and because it can mask lesions on mammography. Evaluation of breast density measurement is therefore useful for risk stratification at the individual and population levels. This study investigates the performance of Lunit INSIGHT MMG for automated breast density measurement. We analyze the reliability of Lunit compared to breast radiologists, explore density variations across the BreastScreen NSW population, and examine the impact of breast implants on density measurements. Methods: 15,518 mammograms were utilized for a comparative analysis of intra- and inter-reader reliability between Lunit INSIGHT MMG and breast radiologists. Subsequently, Lunit was used to evaluate 624,113 mammograms to investigate density variations according to age and country of birth, providing insights into diverse population subgroups. Finally, we compared breast density in 4,047 clients with implants to clients without implants, controlling for age and country of birth. Results: The weighted kappa coefficient for inter-reader agreement between Lunit and breast radiologists was 0.72 (95% CI 0.71-0.73). The highest breast densities were seen in women with a North-East Asian background, while those of Aboriginal background had the lowest density. Across all backgrounds, density was shown to decrease with age, though at different rates according to country of birth. Clients with implants had higher density relative to the age-matched no-implant strata. Conclusion: Lunit INSIGHT MMG demonstrates reasonable inter- and intra-observer reliability for automated breast density measurement. This study is significantly larger than any previous study assessing breast density, owing to the ability to process large volumes of data using AI, and it therefore provides valuable insights into population-level density variations. Our findings highlight the influence of age, country of birth, and breast implants on density, emphasizing the need for personalized risk assessment and screening approaches. The large scale and diverse nature of this study enhance the generalisability of our results, offering valuable information for breast cancer screening programs internationally.
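
The agreement statistic reported above can be computed as below, assuming quadratic weights (the abstract does not state the weighting scheme) and BI-RADS-style density categories a-d encoded as 0-3; the label arrays are illustrative, not BreastScreen NSW data.

```python
# Weighted kappa between AI and radiologist density categories.
from sklearn.metrics import cohen_kappa_score

radiologist = [0, 1, 2, 3, 2, 1, 0, 2, 3, 1]
lunit       = [0, 1, 2, 2, 2, 1, 1, 2, 3, 1]

kappa = cohen_kappa_score(radiologist, lunit, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```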

Keywords: breast cancer, screening, breast density, artificial intelligence, mammography

Procedia PDF Downloads 3
3121 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units

Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz

Abstract:

Sepsis is a syndrome of physiological and biochemical abnormalities induced by severe infection, and it carries high mortality and morbidity; the severity of the patient's condition must therefore be interpreted quickly. After patient admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from the patient into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques and data from a population that shares a common characteristic can lead to customized mortality prediction scores with better performance. This study presents the development of a score for one-year mortality prediction for patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics, and clinical information from the first 24 hours after ICU admission were used to develop the mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (Stochastic Gradient Boosting) variable importance methodologies were used to select the set of variables that make up the score. Each of these variables was dichotomized at a cut-off point that divides the population into two groups with different mean mortalities; if the patient is in the group with higher mortality, a one is assigned to the particular variable, otherwise a zero. These binary variables are used in a logistic regression (LR) model, and its coefficients are rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by each binary variable and summed. The one-year mortality probability was then estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated on the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS), and Simplified Acute Physiology Score II (SAPS II) on the same validation subset. Observed and predicted mortality rates within deciles of estimated probability were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality; the number of events (deaths) indeed increases from the decile with the lowest probabilities to the decile with the highest. Sepsis is a syndrome that carries high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians quickly and accurately identify a worse prognosis are needed. This work demonstrates the importance of customizing mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
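
A worked sketch of the score construction described above follows: each selected variable is dichotomized at its cut-off, a logistic regression is fitted on the binary flags, and the rounded coefficients become the integer point values. The variable names, cut-offs, and synthetic outcome are hypothetical stand-ins for the LASSO/SGB-selected set, not MIMIC-III data.

```python
# Integer point-score construction from dichotomized predictors.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 3955                                             # ~70% of the 5650 admissions
raw = {
    "age":     rng.normal(65, 15, n),
    "lactate": rng.gamma(2.0, 1.5, n),
    "urea":    rng.gamma(3.0, 8.0, n),
}
cutoffs = {"age": 70, "lactate": 2.0, "urea": 30}    # hypothetical cut-points

# 1 if the patient falls in the higher-mortality group for that variable, else 0
X = np.column_stack([(raw[v] > c).astype(int) for v, c in cutoffs.items()])

logits = -2.0 + X @ np.array([1.0, 2.0, 1.0])        # synthetic true effects
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))   # placeholder one-year deaths

lr = LogisticRegression().fit(X, y)
points = np.rint(lr.coef_[0]).astype(int)            # integer points per variable
score = X @ points                                   # each patient's total score
# A second LR with `score` as its only covariate then maps scores to probabilities.
```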

Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting

Procedia PDF Downloads 222
3120 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
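
One of the six regressors (support vector machine regression) with the repeated random train/test split described above can be sketched as follows; the spectra matrix and the 20 mg/dl reference grid are synthetic placeholders, so the resulting R values are illustrative only.

```python
# SVR on NIR spectra with ten repeated random train/test splits.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
spectra = rng.normal(size=(300, 150))   # absorbance at 150 wavelengths (placeholder)
glucose = rng.choice(np.arange(20, 420, 20), size=300).astype(float)  # mg/dl refs

r_scores = []
for seed in range(10):                  # repeat the split ten times
    X_tr, X_te, y_tr, y_te = train_test_split(
        spectra, glucose, test_size=0.2, random_state=seed)
    model = SVR(kernel="rbf", C=10.0).fit(X_tr, y_tr)
    r = np.corrcoef(y_te, model.predict(X_te))[0, 1]   # correlation coefficient R
    r_scores.append(r)

print("mean R over 10 splits:", np.mean(r_scores))
```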

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 94
3119 Optimization of a High-Growth Investment Portfolio for the South African Market Using Predictive Analytics

Authors: Mia Françoise

Abstract:

This report aims to develop a strategy to help short-term investors benefit from the current economic climate in South Africa by utilizing technical analysis techniques and predictive analytics. As part of this research, value investing and technical analysis principles are combined to maximize returns for South African investors while optimizing volatility. As an emerging market, South Africa offers many opportunities for high growth in sectors where developed countries cannot grow at the same rate, and investing in South African companies with significant growth potential can be extremely rewarding. Although the risk involved is more significant in countries with less developed markets and infrastructure, there is more room for growth in these countries. According to recent research, the offshore market is expected to outperform the local market over the long term; however, short-term investments in the local market are likely to be more profitable, as the Johannesburg Stock Exchange (JSE) is predicted to outperform the S&P 500 over the short term. Instability in the economy contributes to increased market volatility, which can benefit investors if appropriately utilized. The methodology comprises two primary components: price prediction and portfolio optimization. Statistical and other predictive modeling techniques are used to predict the future performance of stocks listed on the JSE. Following the predictive data analysis, Modern Portfolio Theory, based on Markowitz's mean-variance framework, is applied to optimize the allocation of assets within an investment portfolio. By combining different assets, this optimization method produces a portfolio with an optimal ratio of expected risk to expected return. The methodology thus aims to provide short-term investors with a stock portfolio that offers the best risk-to-return profile for stocks listed on the JSE by combining price prediction and portfolio optimization.
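
A minimal sketch of the mean-variance step follows: minimize portfolio variance for a target expected return, long-only and fully invested. The expected returns and covariance matrix would come from the price-prediction stage; here they are placeholders for a handful of hypothetical JSE stocks.

```python
# Markowitz mean-variance optimization: min w'Cov w  s.t.  w'mu = target, sum(w)=1, w>=0.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.12, 0.09, 0.15, 0.07])      # predicted annual returns
cov = np.array([[0.10, 0.02, 0.04, 0.01],
                [0.02, 0.08, 0.02, 0.01],
                [0.04, 0.02, 0.12, 0.02],
                [0.01, 0.01, 0.02, 0.05]])   # return covariance matrix
target = 0.10                                # required expected return

n = len(mu)
res = minimize(
    lambda w: w @ cov @ w,                   # portfolio variance
    x0=np.full(n, 1.0 / n),
    bounds=[(0.0, 1.0)] * n,                 # long-only
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0},
                 {"type": "eq", "fun": lambda w: w @ mu - target}],
)
print("optimal weights:", res.x.round(3))
```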

Keywords: financial stocks, optimized asset allocation, prediction modelling, South Africa

Procedia PDF Downloads 97
3118 The Relationship between Celebrity Worship and Religiosity: A Study in Turkish Context

Authors: Saadet Taşyürek Demirel, Halide Sena Koçyiğit, Rümeysa Fatma Çetin

Abstract:

Celebrity worship, characterized by excessive admiration of and devotion towards public figures, often mirrors elements of religious fervor. This study delves into the connection between celebrity worship and religiosity within the Turkish cultural context, where Islamic values predominantly shape societal norms. The investigation involves the adaptation of the Celebrity Attitude Scale into Turkish and scrutinizes the interplay between young individuals' religiosity and their extreme adulation of celebrities. Additionally, the study explores potential moderating factors, such as age and gender, that might influence this relationship. A cohort of 197 young adults, aged 19 to 30, participated in this research, responding to self-administered questionnaires that assessed their attitudes towards celebrities using the adapted Celebrity Attitude Scale, along with their self-reported religiosity. The anticipated relationship between religiosity and celebrity worship is hypothesized to exhibit a non-linear pattern: religiosity is expected to positively predict celebrity worship tendencies among individuals with minimal to moderate religiosity levels, whereas a negative association is expected among participants exhibiting moderate to high levels of religiosity. The findings will contribute to the comprehension of the dynamics between celebrity worship and religiosity, offering insights specific to the Turkish cultural context. By shedding light on this relationship, the study aims to enhance our understanding of the multifaceted influences that shape individuals' perceptions of and behaviors towards both celebrities and religious inclinations. Methodology of the study: A quantitative study will be conducted, using factor analysis and correlational methods. The factor structure of the scale will be determined with exploratory and confirmatory factor analysis, and the reliability and internal consistency of the scale will be assessed. Objectives of the study: This study examines the relationship between religiosity and celebrity worship among young adults in the Turkish context. A further aim is to assess the Turkish validity and reliability of the Celebrity Attitude Scale and contribute it to the literature. Main contributions of the study: The study aims to introduce celebrity worship to the Turkish literature, assess the Celebrity Attitude Scale's reliability in a Turkish sample, explore manifestations of celebrity worship, and examine its link to religiosity. This research addresses the lack of Turkish sources on celebrity worship and extends understanding of the concept.

Keywords: celebrity, worship, religiosity, god

Procedia PDF Downloads 83
3117 Assessment Power and Oscillation Damping Using the POD Controller and Proposed FOD Controller

Authors: Tohid Rahimi, Yahya Naderi, Babak Yousefi, Seyed Hossein Hoseini

Abstract:

Today's modern interconnected power system is highly complex in nature, and reliability and security are among the most important requirements during its operation. Power and frequency oscillation damping mechanisms improve reliability. Because of the slow response of the power system stabilizer (PSS) to major faults such as three-phase short circuits, FACTS devices, which can control network conditions very quickly, are becoming popular. However, FACTS capability during a major fault can only be assessed when nonlinear models of the FACTS devices and the power system equipment are applied. To realize this aim, a model of a multi-machine power system with a FACTS controller is developed in MATLAB/SIMULINK using the Sim Power System (SPS) blockset. Among FACTS devices, the Static Synchronous Series Compensator (SSSC), which can rapidly change its reactance characteristic from inductive to capacitive, is an effective power flow controller. The controller parameters can be tuned using different methods; here, the tuning is performed with a Genetic Algorithm (GA), whose search capability makes it well suited to this task. In this paper, a POD controller is first used for power oscillation damping; in this configuration, however, the frequency oscillations are not properly damped. Therefore, a GA-tuned FOD controller is employed, which damps out the frequency oscillations properly while maintaining suitable power oscillation damping.

Keywords: power oscillation damping (POD), frequency oscillation damping (FOD), Static synchronous series compensator (SSSC), Genetic Algorithm (GA)

Procedia PDF Downloads 476
3116 Assessment of Music Performance Anxiety in Portuguese Children and Adolescents

Authors: Pedro Dias, Lurdes Verissimo, Maria Joao Baptista, Ana Pinheiro, Patricia Oliveira-Silva, Sofia Serra, Daniela Coimbra

Abstract:

To achieve a high standard of performance, a musician must be well in all aspects of health (physical, mental, and social). Performance anxiety is related to the high level of coordination and skill needed in performance, as well as to the public evaluation of the performer. It affects key elements of performance, such as concentration, memory, motor coordination, and relaxation. This work presents two studies focused on the adaptation and evaluation of the psychometric properties of the Music Performance Anxiety Inventory (MPAI-A) in young Portuguese music students. The first study was conducted with a sample of 161 adolescent music students, who responded to the Portuguese version of this instrument and to the State-Trait Anxiety Inventory for Children (STAIC-c2). Validity and reliability were examined, and the measure revealed robust psychometric properties in this sample. The second study aimed to adapt the MPAI to a younger population (one hundred 8-10 year-old music students); again, the MPAI and the STAIC-c2 were used. Exploratory factor analysis, correlations, and internal consistency were used to evaluate the final children's version of the instrument (MPAI-C), which presents a different factor structure from the adolescent version (10 items organized in 2 factors) and high levels of reliability and convergent validity.

Keywords: anxiety, assessment, children and adolescents, music performance

Procedia PDF Downloads 190
3115 A Second Spark Ignition Timing for the High Power Aircraft Radial Engine Using a CFD Transient Modeling

Authors: Tytus Tulwin, Adam Majczak

Abstract:

In aviation, the most important systems affecting flight safety are duplicated. The ASz-62IR aircraft radial engine has two spark plugs powered by two separate magnetos, and the relative difference in spark timing influences the combustion process. The retardation of the second spark relative to the first spark was analyzed. The CFD simulation was developed as a multicycle transient model in which two independent spark sources imitate two flame fronts after the ignition period. This shortens the combustion process, but only for a certain range of second-spark retardation. The model was validated by in-cylinder pressure comparison. Combustion parameters were analyzed for different values of second-spark retardation. It was found that the most advantageous ignition timing in terms of performance is simultaneous ignition. Nevertheless, for this engine the ignition time of the second spark plug is greatly retarded, eliminating this advantageous performance influence. The reason is to maintain high ignition certainty for all engine running conditions and over the whole operating rpm range: in aviation, engine reliability is more important than performance. Introducing an electronic ignition system could benefit from simultaneous ignition timing, increasing engine performance while still providing good reliability for all flight conditions. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.

Keywords: CFD, combustion, ignition, simulation, timing

Procedia PDF Downloads 383
3114 A Semantic and Concise Structure to Represent Human Actions

Authors: Tobias Strübing, Fatemeh Ziaeetabar

Abstract:

Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, a semantic framework is needed. For this purpose, the Semantic Event Chain (SEC) method was introduced, which encodes the touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information about static (e.g., top, bottom) and dynamic spatial relations (e.g., moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a large matrix with thirty rows encoding the spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used for manipulation actions, which involve at most two hands. Here, we extend this approach towards a whole-body action descriptor and a conjoint activity representation structure. For this purpose, we perform a statistical analysis to simplify the current eSEC while preserving its features, and introduce a new version called the Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve this, we computed the importance of each matrix row statistically, to see whether a particular row can be removed while all manipulations remain distinguishable from each other; a sketch of this check is given below. We also examined which semantic spatial relations can be merged without compromising the unity of the predefined manipulation actions. By performing these analyses, we obtained the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations, and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a considerable impact on the recognition and prediction of complex actions, as well as on interactions between humans and robots. It also creates a comprehensive platform for integration with body-limb descriptors and increases system performance, especially in complex real-time applications such as human-robot interaction prediction.
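
The row-reduction step can be phrased as a distinguishability check: a row is removable if every pair of action matrices still differs once that row is dropped. A minimal sketch of this check; the action names, matrix sizes, and relation codes are invented for illustration (the real analysis runs on recorded eSEC descriptors):

    import numpy as np
    from itertools import combinations

    def distinguishable(descriptors, keep_rows):
        # True if all pairs of action matrices differ on the kept rows
        for a, b in combinations(descriptors.values(), 2):
            if np.array_equal(a[keep_rows], b[keep_rows]):
                return False
        return True

    rng = np.random.default_rng(0)
    actions = {name: rng.integers(0, 5, size=(30, 12))
               for name in ("push", "stir", "cut", "pour")}

    rows = range(30)
    removable = [r for r in rows
                 if distinguishable(actions, [k for k in rows if k != r])]
    print(f"{len(removable)} of 30 rows are individually removable")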

Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis

Procedia PDF Downloads 126
3113 Stress Concentration and Strength Prediction of Carbon/Epoxy Composites

Authors: Emre Ozaslan, Bulent Acar, Mehmet Ali Guler

Abstract:

Unidirectional composites are popular structural materials in the aerospace, marine, energy, and automotive industries thanks to their superior material properties. However, the mechanical behavior of composite materials is more complicated than that of isotropic materials because of their anisotropic nature, and a stress concentration in the structure, such as a hole, complicates the problem further. Therefore, an enormous number of tests is required to understand the mechanical behavior and strength of composites containing stress concentrations. Accurate finite element analysis and analytical models make it possible to understand the mechanical behavior and predict the strength of composites without a large number of tests, which cost considerable time and money. In this study, unidirectional carbon/epoxy composite specimens with a central circular hole were investigated in terms of stress concentration factor and strength prediction. Specimens with different specimen width (W) to hole diameter (D) ratios were tested to investigate the effect of hole size on stress concentration and strength. Specimens with the same W/D ratio but different absolute sizes were also tested to investigate the size effect. Finite element analysis was performed to determine the stress concentration factor for all specimen configurations. For the quasi-isotropic laminate, the stress concentration factor increased by approximately 15% as the W/D ratio decreased from 6 to 3. The point stress criterion (PSC), the inherent flaw method, and progressive failure analysis were compared in terms of predicting specimen strength. All methods predicted the strength of the specimens with at most 8% error. PSC was better than the other methods for high W/D ratios, whereas the inherent flaw method was more successful for low W/D ratios. Increasing the W/D ratio by a factor of 4 raises the failure strength of the composite specimen by 62.4%. For specimens with a constant W/D ratio, all the strength prediction methods were more successful for smaller specimens than for larger ones. Doubling the specimen width and hole diameter together reduces the failure strength by 13.2%.
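
Of the methods compared, the point stress criterion has a compact closed form (Whitney-Nuismer): failure is predicted when the stress at a characteristic distance d0 ahead of the hole reaches the unnotched strength. A sketch for an infinite plate with a circular hole; the strength, radius, and d0 values are illustrative, and d0 is a fitted material parameter rather than a constant:

    def psc_notched_strength(sigma_0, radius, d0, kt_inf=3.0):
        # Whitney-Nuismer point stress criterion, infinite plate with
        # a circular hole; kt_inf = 3 is the quasi-isotropic case
        xi = radius / (radius + d0)
        denom = (2.0 + xi**2 + 3.0 * xi**4
                 - (kt_inf - 3.0) * (5.0 * xi**6 - 7.0 * xi**8))
        return 2.0 * sigma_0 / denom

    # Illustrative: unnotched strength 500 MPa, R = 3 mm, d0 = 1 mm
    print(f"{psc_notched_strength(500.0, 3.0, 1.0):.0f} MPa")  # ~285 MPa

Finite-width specimens additionally require a finite-width correction factor, which is where the W/D dependence reported above enters.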

Keywords: failure, strength, stress concentration, unidirectional composites

Procedia PDF Downloads 155
3112 Reliability Analysis of Glass Epoxy Composite Plate under Low Velocity Impact

Authors: Shivdayal Patel, Suhail Ahmad

Abstract:

Safety assurance and failure prediction for the composite components of an offshore structure subjected to low-velocity impact are essential for the associated risk assessment. It is important to incorporate the uncertainties associated with material properties and impact loading. The likelihood of this hazard triggering a chain of failure events plays an important role in risk assessment. The material properties of composites mostly exhibit scatter due to their inhomogeneity, anisotropic characteristics, the brittleness of the matrix and fibers, and manufacturing defects. The probability of occurrence of such a scenario is therefore driven by the large uncertainties arising in the system. Probabilistic finite element analysis of composite plates under low-velocity impact is carried out, considering uncertainties in material properties and initial impact velocity. Impact-induced damage of a composite plate is a probabilistic phenomenon owing to the wide range of uncertainties in material and loading behavior. A typical failure crack initiates and propagates into the interface, causing delamination between dissimilar plies. Since individual cracks in a ply are difficult to track, a progressive damage model is implemented in the FE code through a user-defined material subroutine (VUMAT). The limit state function g(x) is established accordingly, such that the lamina stresses satisfy g(x) > 0 in the safe state. The Gaussian process response surface method is adopted to determine the probability of failure. A comparative study is also carried out for different combinations of impactor masses and velocities. A sensitivity-based probabilistic design optimization procedure is investigated to achieve higher strength and lighter weight of composite structures. Chains of failure events due to different failure modes are considered to estimate the consequences of a failure scenario. The frequencies of occurrence of specific impact hazards yield the expected risk in terms of economic loss.
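
With the limit state written as g(x) = strength - stress, the probability of failure is P(g <= 0), which the paper evaluates via a Gaussian process response surface over FE results. The sketch below replaces that surrogate with a toy closed-form stress model and plain Monte Carlo sampling; all distributions and coefficients are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200_000

    # Assumed random inputs (means and scatter are illustrative only)
    strength = rng.normal(450.0, 45.0, n)   # lamina strength [MPa]
    velocity = rng.normal(5.0, 0.5, n)      # impact velocity [m/s]

    # Toy surrogate: peak impact stress grows linearly with velocity
    stress = 80.0 * velocity                # [MPa]

    g = strength - stress                   # safe when g > 0
    pf = np.mean(g <= 0.0)
    print(f"estimated probability of failure: {pf:.4f}")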

Keywords: composites, damage propagation, low velocity impact, probability of failure, uncertainty modeling

Procedia PDF Downloads 279
3111 Predicting Stack Overflow Accepted Answers Using Features and Models with Varying Degrees of Complexity

Authors: Osayande Pascal Omondiagbe, Sherlock A. Licorish

Abstract:

Stack Overflow is a popular community question-and-answer portal used by practitioners to solve technology-related challenges during software development. Previous studies have shown that this forum is becoming a substitute for official programming language documentation. While tools have aimed to aid developers by presenting interfaces for exploring Stack Overflow, developers often face challenges searching through the many possible answers to their questions, and this extends development time. To this end, researchers have proposed ways of predicting acceptable Stack Overflow answers using various modeling techniques. However, less attention has been dedicated to examining the performance and quality of the commonly used modeling methods, especially in relation to the complexity of the models and features. Such insights could be of practical significance to the many practitioners who use Stack Overflow. This study examines the performance and quality of various modeling methods used for predicting acceptable answers on Stack Overflow, drawn from data for 2014, 2015, and 2016. Our findings reveal significant differences in model performance and quality depending on the type of features and the complexity of the models used. Researchers examining classifier performance and quality and feature complexity may leverage these findings when selecting suitable techniques for developing prediction models.
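
As a flavour of the simpler end of that modelling spectrum, the sketch below trains a random forest on a few shallow per-answer features; the feature set and the synthetic data are assumptions for illustration, not the study's actual features or corpus:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(7)
    n = 5000

    # Hypothetical shallow features for each candidate answer
    X = np.column_stack([
        rng.integers(0, 50, n),      # vote score
        rng.integers(0, 5, n),       # number of code blocks
        rng.integers(20, 3000, n),   # answer length in characters
        rng.exponential(60.0, n),    # minutes from question to answer
    ])
    # Synthetic "accepted" label tied to score and latency, demo only
    y = ((X[:, 0] > 10) & (X[:, 3] < 120)).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"F1 on held-out answers: {f1_score(y_te, clf.predict(X_te)):.2f}")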

Keywords: feature selection, modeling and prediction, neural network, random forest, stack overflow

Procedia PDF Downloads 132
3110 Effects of Main Contractors’ Service Quality on Subcontractors’ Behaviours and Project Outcomes

Authors: Zhuoyuan Wang, Benson T. H. Lim, Imriyas Kamardeen

Abstract:

Effective service quality management has long been touted as a means of improving project and organisational performance. In construction projects in particular, main contractors are often seen as brokers between clients and subcontractors, and their service quality is thus associated with overall project affinity and outcomes. While a considerable amount of research has focused on the client-main contractor relationship, very little research has explored the effect of main contractors' service quality on subcontractors' behaviours and, in turn, project outcomes. In addressing this gap, this study surveyed 97 subcontractors in the Chinese construction industry, and the data were analysed using the Partial Least Squares (PLS) Structural Equation Modelling (SEM) technique. The overall findings reveal that subcontractors categorised main contractors' service quality into three dimensions: assurance; responsiveness; and reliability and empathy. Of these, main contractors' assurance and responsiveness positively influence subcontractors' intention to engage in contractual behaviours. The results further show that subcontractors' intention to engage in organisational citizenship behaviours is associated with how flexible and committed the main contractors are in terms of reliability and empathy. Collectively, both subcontractors' contractual and organisational citizenship behaviours positively influence overall project outcomes. In conclusion, the findings offer contractors different strategies for managing and gaining subcontractors' behavioural commitment in a socially connected, yet complex and uncertain, business environment.

Keywords: construction firms, organisational citizenship behaviour, service quality, social exchange theory

Procedia PDF Downloads 214