Search results for: software cumulative failure prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9051

6141 Development of Structural Deterioration Models for Flexible Pavement Using Traffic Speed Deflectometer Data

Authors: Sittampalam Manoharan, Gary Chai, Sanaul Chowdhury, Andrew Golding

Abstract:

The primary objective of this paper is to present a simplified approach to developing a structural deterioration model for flexible pavements using traffic speed deflectometer data. Maintaining assets to meet functional performance alone is neither economical nor sustainable in the long term, and it would end up requiring far greater investment from road agencies and extra costs for road users. Performance models must include both structural and functional predictive capabilities in order to assess needs and the time frame of those needs. As such, structural modelling plays a vital role in the prediction of pavement performance. Structural condition is important for predicting the remaining life and overall health of a road network and is also a major influence on the valuation of road pavement. The structural deterioration model is therefore a critical input into a pavement management system for accurately predicting pavement rehabilitation needs. The Traffic Speed Deflectometer (TSD) is a vehicle-mounted Doppler laser system that is capable of continuously measuring the structural bearing capacity of a pavement whilst moving at traffic speeds. The device's high accuracy, high speed, and continuous deflection profiles are useful for network-level applications such as predicting road rehabilitation needs and remaining structural service life. The methodology adopted here utilizes time-series TSD maximum deflection (D0) data in conjunction with rutting, rutting progression, pavement age, subgrade strength and equivalent standard axle (ESA) data. Regression analyses were then undertaken to establish a correlation equation for structural deterioration as a function of rutting, pavement age, seal age and ESA. This study developed a simple structural deterioration model which will enable available TSD structural data to be incorporated into pavement management systems for developing network-level pavement investment strategies. The available funding can therefore be used effectively to minimize the whole-of-life cost of the road asset and also improve pavement performance. This study will contribute to narrowing the knowledge gap in structural data usage in network-level investment analysis and provides a simple methodology for using structural data effectively in the investment decision-making process for road agencies managing aging road assets.
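The regression step described in the abstract can be made concrete with a short sketch. The following is a minimal illustration, not the authors' implementation: it fits structural deterioration as a linear function of the predictors the abstract names, using a fully synthetic dataset with hypothetical units and coefficients.

```python
import numpy as np

# Hypothetical example: fit structural deterioration (e.g., an annual change
# derived from the TSD D0 time series) as a linear function of the predictors
# named in the abstract. All values below are synthetic placeholders.
rng = np.random.default_rng(0)
n = 200
rutting = rng.uniform(2, 20, n)        # mm
pavement_age = rng.uniform(0, 40, n)   # years
seal_age = rng.uniform(0, 15, n)       # years
esa = rng.uniform(1e5, 5e6, n)         # equivalent standard axles per year

# Synthetic response, for illustration only
deterioration = (0.02 * rutting + 0.01 * pavement_age
                 + 0.015 * seal_age + 2e-8 * esa
                 + rng.normal(0, 0.05, n))

# Ordinary least squares with an intercept column in the design matrix
X = np.column_stack([np.ones(n), rutting, pavement_age, seal_age, esa])
coef, *_ = np.linalg.lstsq(X, deterioration, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((deterioration - pred) ** 2) / np.sum(
    (deterioration - deterioration.mean()) ** 2)

print("intercept and coefficients:", coef)
print("R^2:", round(r2, 3))
```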

Keywords: adjusted structural number (SNP), maximum deflection (D0), equivalent standard axle (ESA), traffic speed deflectometer (TSD)

Procedia PDF Downloads 138
6140 Predicting the Success of Bank Telemarketing Using Artificial Neural Network

Authors: Mokrane Selma

Abstract:

The shift towards decision making (DM) based on artificial intelligence (AI) techniques will change the way in which consumer markets and our societies function. Through AI, predictive analytics is being used by businesses to identify patterns and major trends with the objective of improving DM and influencing future business outcomes. This paper proposes an Artificial Neural Network (ANN) approach to predict the success of telemarketing calls for selling bank long-term deposits. To validate the proposed model, we use bank marketing data comprising 41,188 phone calls. The ANN attains 98.93% accuracy, which outperforms other conventional classifiers and confirms that it is a credible and valuable approach for telemarketing campaign managers.
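As a rough illustration of the general approach (not the authors' model or architecture), the sketch below trains a small feed-forward ANN on a synthetic stand-in for a bank-marketing table; the feature matrix, labels, and layer sizes are all assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the telemarketing table (real features would include
# age, job, contact month, call duration, etc., one-hot encoded).
rng = np.random.default_rng(42)
X = rng.normal(size=(41188, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 41188) > 1.2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Small feed-forward network; hidden layer sizes are illustrative
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)

print("accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
```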

Keywords: bank telemarketing, prediction, decision making, artificial intelligence, artificial neural network

Procedia PDF Downloads 136
6139 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

In recent years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, they are adopted less frequently in commercial banking than in other industries, especially for scoring purposes. This is because Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model was developed at Dun and Bradstreet that focuses on blending Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns that scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards from sparse cases, which cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The analysis shows that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concern over the difficulty of explaining the models for regulatory purposes.
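The WoE-matching idea can be sketched as follows. This is an illustrative reconstruction, not Dun and Bradstreet's implementation: instead of counting observed goods/bads per bin (which breaks down for sparse bins), each bin's mean model-predicted bad-rate is converted to log-odds relative to the overall log-odds, giving a WoE-style estimate for every bin.

```python
import numpy as np

def woe_from_scores(bin_ids, p_bad):
    """Estimate per-bin WoE by matching an ML model's score distribution.

    Illustrative reconstruction: a bin's WoE is taken as the log-odds of its
    mean predicted bad-rate relative to the overall log-odds, rather than
    counting observed goods/bads (which fails for sparse bins).
    """
    p_bad = np.asarray(p_bad, dtype=float)
    overall_odds = p_bad.mean() / (1 - p_bad.mean())
    woe = {}
    for b in np.unique(bin_ids):
        p = p_bad[bin_ids == b].mean()
        woe[b] = np.log((p / (1 - p)) / overall_odds)
    return woe

# Example: predicted bad probabilities from some ML model, bucketed into
# four scorecard bins (all values invented)
rng = np.random.default_rng(1)
bins = rng.integers(0, 4, 1000)
scores = np.clip(0.1 + 0.15 * bins + rng.normal(0, 0.03, 1000), 0.01, 0.99)
print(woe_from_scores(bins, scores))
```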

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 120
6138 Slip Limit Prediction of High-Strength Bolt Joints Based on Local Approach

Authors: Chang He, Hiroshi Tamura, Hiroshi Katsuchi, Jiaqi Wang

Abstract:

In this study, the aim is to infer the slip limit (static friction limit) of contact interfaces in bolt friction joints by analyzing other bolt friction joints with the same contact surface but a different shape. By using the Weibull distribution to treat the microelements of the contact surface statistically, the slip limit of a given type of bolt joint was predicted from other types of bolt joint with the same contact surface. As a result, this research succeeded in predicting the slip limit of bolt joints with different numbers of contact surfaces and different numbers of bolt rows.
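One common weakest-link reading of such a Weibull treatment can be sketched as follows; this formulation and its parameters are our assumption, since the abstract does not give the exact equations. If each contact surface's slip strength follows a two-parameter Weibull distribution, the characteristic slip limit of a joint with n nominally identical surfaces scales as n to the power of -1/m.

```python
# Weakest-link scaling under a two-parameter Weibull distribution
# (illustrative formulation; both parameters below are hypothetical).
m = 8.0      # Weibull modulus of the contact-surface slip strength
eta = 0.55   # characteristic slip coefficient of one reference surface

def characteristic_slip_limit(n_surfaces, eta_ref=eta, modulus=m):
    """Characteristic slip limit of a joint with n nominally identical
    contact surfaces in series: eta_n = eta_ref * n**(-1/m)."""
    return eta_ref * n_surfaces ** (-1.0 / modulus)

for n in (1, 2, 4):
    print(f"{n} contact surface(s): slip limit ~ {characteristic_slip_limit(n):.3f}")
```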

Keywords: bolt joints, slip coefficient, finite element method, Weibull distribution

Procedia PDF Downloads 151
6137 A Research on Tourism Market Forecast and Its Evaluation

Authors: Min Wei

Abstract:

Traditional methods of forecasting the tourism market pay more attention to the accuracy of the forecasts, ignoring the feasibility and operability of the forecasting results, which makes the results difficult to test scientifically. Applying a Linear Regression Model, this paper attempts to construct a scientific evaluation system for predictive value, both to ensure the accuracy and stability of the predicted values and to ensure the feasibility and operability of the forecasting results. The findings show that such a scientific evaluation system can support the scientific concept of development and the harmonious, coordinated development of man and nature.

Keywords: linear regression model, tourism market, forecast, tourism economics

Procedia PDF Downloads 314
6136 Linkage Disequilibrium and Haplotype Blocks Study from Two High-Density Panels and a Combined Panel in Nelore Beef Cattle

Authors: Priscila A. Bernardes, Marcos E. Buzanskas, Luciana C. A. Regitano, Ricardo V. Ventura, Danisio P. Munari

Abstract:

Genotype imputation has been used to reduce genomic selection costs. In order to increase haplotype detection accuracy in methods that consider linkage disequilibrium, another approach could be used, such as combining genotype data from different panels. Therefore, this study aimed to evaluate the linkage disequilibrium and haplotype blocks in two high-density panels before and after imputation to a combined panel in Nelore beef cattle. A total of 814 animals were genotyped with the Illumina BovineHD BeadChip (IHD), of which 93 animals (23 bulls and 70 progenies) were also genotyped with the Affymetrix Axiom Genome-Wide BOS 1 Array Plate (AHD). After quality control, 809 IHD animals (509,107 SNPs) and 93 AHD animals (427,875 SNPs) remained for analysis. The combined genotype panel (CP) was constructed by merging both panels after quality control, resulting in 880,336 SNPs. Imputation analysis was conducted using the software FImpute v.2.2b. The reference (CP) and target (IHD) populations consisted of 23 bulls and 786 animals, respectively. Linkage disequilibrium and haplotype block studies were carried out for IHD, AHD, and the imputed CP. Two linkage disequilibrium measures were considered: the correlation coefficient between alleles of two loci (r²) and |D'|. Both measures were calculated using the software PLINK. The haplotype blocks were estimated using the software Haploview. The r² measure presented a different decay compared to |D'|, wherein AHD and IHD had almost the same decay. For r², even with possible overestimation due to the sample size for AHD (93 animals), IHD presented higher values than AHD at shorter distances, but with increasing distance both panels presented similar values. The r² measure is influenced by the minor allele frequency of the pair of SNPs, which can cause the observed difference between the r² decay and the |D'| decay. As a sum of the combinations between the Illumina and Affymetrix panels, the CP presented a decay equivalent to the mean of these combinations. The haplotype blocks detected for IHD, AHD, and CP numbered 84,529, 63,967, and 140,336, respectively. IHD was composed of haplotype blocks with a mean of 137.70 ± 219.05 kb, AHD with a mean of 102.10 ± 155.47 kb, and CP with a mean of 107.10 ± 169.14 kb. The majority of the haplotype blocks in these three panels were composed of fewer than 10 SNPs, with only 3,882 (IHD), 193 (AHD) and 8,462 (CP) haplotype blocks composed of 10 SNPs or more. There was an increase in the number of chromosomes covered with long haplotypes when CP was used, as well as an increase in haplotype coverage for short chromosomes (23-29), which can contribute to studies that explore haplotype blocks. In general, using the CP could be an alternative to increase density and the number of haplotype blocks, increasing the probability of obtaining a marker close to a quantitative trait locus of interest.
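The two LD measures used here have standard closed forms that a short sketch can make concrete: with haplotype frequency p_AB and allele frequencies p_A and p_B, D = p_AB - p_A p_B, r² = D² / (p_A(1-p_A) p_B(1-p_B)), and |D'| = |D| / D_max. The frequencies below are made up; PLINK computes the same quantities pairwise across the panel.

```python
def ld_measures(p_ab, p_a, p_b):
    """r^2 and |D'| for a pair of biallelic loci from haplotype/allele freqs."""
    d = p_ab - p_a * p_b
    r2 = d ** 2 / (p_a * (1 - p_a) * p_b * (1 - p_b))
    if d >= 0:
        d_max = min(p_a * (1 - p_b), (1 - p_a) * p_b)
    else:
        d_max = min(p_a * p_b, (1 - p_a) * (1 - p_b))
    return r2, abs(d) / d_max

# Hypothetical frequencies for illustration
r2, d_prime = ld_measures(p_ab=0.42, p_a=0.6, p_b=0.55)
print(f"r^2 = {r2:.3f}, |D'| = {d_prime:.3f}")
```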

Keywords: Bos taurus indicus, decay, genotype imputation, single nucleotide polymorphism

Procedia PDF Downloads 259
6135 Empirical Study of Correlation between the Cost Performance Index Stability and the Project Cost Forecast Accuracy in Construction Projects

Authors: Amin AminiKhafri, James M. Dawson-Edwards, Ryan M. Simpson, Simaan M. AbouRizk

Abstract:

Earned value management (EVM) has been introduced as an integrated method combining schedule, budget, and the work breakdown structure (WBS). EVM provides various indices to demonstrate project performance, including the cost performance index (CPI). CPI is also used to forecast the final project cost at completion based on cost performance during project execution. Knowing the final project cost during execution can trigger corrective actions, which can enhance project outputs. CPI, however, is not constant during the project, and calculating the final project cost using a variable index is an inaccurate and challenging task for practitioners. Since CPI is based on cumulative progress values, and because of the learning curve effect, CPI variation dampens and stabilizes as the project progresses. Although various definitions of CPI stability have been proposed in the literature, many scholars have agreed on the definition that considers a project stable if the CPI at 20% completion varies by less than 0.1 from the final CPI. While the 20% completion point is recognized as the stability point for military development projects, the stability of construction projects has not been studied. In the current study, an empirical study was first conducted using construction project data to determine the stability point for construction projects. Early findings demonstrate that a majority of construction projects stabilize towards completion (i.e., after the 70% completion point). To investigate the effect of CPI stability on cost forecast accuracy, the correlation between CPI stability and the accuracy of the cost-at-completion forecast was also investigated. It was determined that as projects progress closer towards completion, the variation of the CPI decreases and the accuracy of the final project cost forecast increases. Most projects were found to have 90% accuracy in the final cost forecast at the 70% completion point, which is in line with the CPI stability findings. It can be concluded that early stabilization of the project CPI results in more accurate cost-at-completion forecasts.
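The quantities involved follow directly from standard EVM formulas, so a brief sketch may help: CPI = EV / AC, the cost forecast at completion is EAC = BAC / CPI, and, per the stability definition cited above, a project is stable once its CPI differs from the final CPI by less than 0.1. All numbers below are invented for illustration.

```python
def cpi(earned_value, actual_cost):
    """Cost performance index: CPI = EV / AC."""
    return earned_value / actual_cost

def eac(budget_at_completion, current_cpi):
    """Estimate at completion, assuming current cost efficiency persists."""
    return budget_at_completion / current_cpi

def is_stable(cpi_at_point, final_cpi, tolerance=0.1):
    """Stability definition cited in the abstract: |CPI - final CPI| < 0.1."""
    return abs(cpi_at_point - final_cpi) < tolerance

# Invented example: a project at 70% completion
bac = 10_000_000                 # budget at completion ($)
ev, ac = 7_000_000, 7_300_000    # earned value, actual cost ($)
c = cpi(ev, ac)
print(f"CPI = {c:.3f}, EAC = ${eac(bac, c):,.0f}")
print("stable vs. a final CPI of 0.97:", is_stable(c, 0.97))
```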

Keywords: cost performance index, earned value management, empirical study, final project cost

Procedia PDF Downloads 146
6134 A Programming Assessment Software Artefact Enhanced with the Help of Learners

Authors: Romeo A. Botes, Imelda Smit

Abstract:

The demands of an ever-changing and complex higher education environment, along with the profile of modern learners, challenge current approaches to assessment and feedback. More learners enter the education system every year. The younger generation expects immediate feedback. At the same time, feedback should be meaningful. The assessment of practical activities in programming poses a particular problem, since both lecturers and learners in the information and computer science discipline acknowledge that paper-based assessment for programming subjects lacks meaningful real-life testing. At the same time, feedback lacks promptness, consistency, comprehensiveness and individualisation. Most of these aspects may be addressed by modern, technology-assisted assessment. The focus of this paper is the continuous development of an artefact that is used to assist the lecturer in the assessment of, and feedback on, practical programming activities in a senior database programming class. The artefact was developed using three Design Science Research cycles. The first implementation allowed one programming activity submission per assessment intervention. This pilot provided valuable insight into the obstacles to implementing this type of assessment tool. A second implementation improved the initial version to allow multiple programming activity submissions per assessment. The focus of this version is on providing scaffolded feedback to the learner, allowing improvement with each subsequent submission. It also has a built-in capability to provide the lecturer with information regarding the key problem areas of each assessment intervention.

Keywords: programming, computer-aided assessment, technology-assisted assessment, programming assessment software, design science research, mixed-method

Procedia PDF Downloads 283
6133 Using Large Databases and Interviews to Explore the Temporal Phases of Technology-Based Entrepreneurial Ecosystems

Authors: Elsie L. Echeverri-Carroll

Abstract:

Entrepreneurial ecosystems have become an important concept to explain the birth and sustainability of technology-based entrepreneurship within regions. However, as a theoretical concept, the temporal evolution of entrepreneurial ecosystems remains underdeveloped, making it difficult to understand their dynamic contributions to entrepreneurs. This paper argues that successful technology-based ecosystems go through three cumulative spawning stages: corporate spawning, entrepreneurial spawning, and community spawning. The importance of corporate incubation in vibrant entrepreneurial ecosystems is well documented in the entrepreneurial literature. Similarly, entrepreneurial spawning processes for venture capital-backed startups are well documented in the financial literature. In contrast, there is little understanding of both the third stage of entrepreneurial spawning (when a community of entrepreneurs becomes a source of firm spawning) and the temporal sequence in which spawning effects occur in a region. We test this three-stage model of entrepreneurial spawning using data from two large databases on firm births, the Secretary of State (160,000 observations) and the National Establishment Time Series (NEST, 150,000 observations), together with information collected from 60 interviews of 1.5 hours each with startup founders and representatives of key entrepreneurial organizations. This temporal model is illustrated with a case study of Austin, Texas, ranked by the Kauffman Foundation as the number one entrepreneurial city in the United States in 2015 and 2016. The 1.5-year study funded by the Kauffman Foundation demonstrates the importance of taking into consideration the temporal contributions of both large and entrepreneurial firms in understanding the factors that contribute to the birth and growth of technology-based entrepreneurial regions. More importantly, these findings could offer an important road map for regions that seek to advance their entrepreneurial ecosystems.

Keywords: entrepreneurial ecosystems, entrepreneurial industrial clusters, high-technology, temporal changes

Procedia PDF Downloads 256
6132 Finite Element Modeling and Analysis of Reinforced Concrete Coupled Shear Walls Strengthened with Externally Bonded Carbon Fiber Reinforced Polymer Composites

Authors: Sara Honarparast, Omar Chaallal

Abstract:

Reinforced concrete (RC) coupled shear walls (CSWs) are very effective structural systems in resisting lateral loads due to winds and earthquakes and are particularly used in medium- to high-rise RC buildings. However, most existing old RC structures were designed for gravity loads or for lateral loads well below those specified in current modern international seismic codes. These structures may behave in a non-ductile manner due to poorly designed joints, insufficient shear reinforcement and inadequate anchorage length of the reinforcing bars. This has been the main impetus to investigate an appropriate strengthening method to address or attenuate the deficiencies of these structures. The objective of this paper is twofold: (i) to evaluate the seismic performance of existing reinforced concrete coupled shear walls under reversed cyclic loading; and (ii) to investigate the seismic performance of RC CSWs strengthened with externally bonded (EB) carbon fiber reinforced polymer (CFRP) sheets. To this end, two CSWs were considered as follows: (a) the first, representative of old CSWs, was designed according to the 1941 National Building Code of Canada (NBCC, 1941) with conventionally reinforced coupling beams; and (b) the second, representative of new CSWs, was designed according to modern NBCC 2015 and CSA/A23.3 2014 requirements with a diagonally reinforced coupling beam. Both CSWs were simulated using ANSYS software. The nonlinear behavior of concrete is modeled using multilinear isotropic hardening through a multilinear stress-strain curve. An elastic-perfectly plastic stress-strain curve is used to simulate the steel material. Bond stress-slip is modeled between concrete and steel reinforcement in the conventional coupling beam, rather than assuming a perfect bond, to better represent the slip of the steel bars observed in the coupling beams of these CSWs. The old-design CSW was strengthened using CFRP sheets bonded to the concrete substrate, and the interface was modeled using an adhesive layer. The behavior of the CFRP material is considered linear elastic up to failure. After applying the loading and boundary conditions, the specimens were analyzed under reversed cyclic loading. The comparison of results obtained for the two unstrengthened CSWs and the one retrofitted with EB CFRP sheets reveals that the strengthening method improves the seismic performance in terms of strength, ductility, and energy dissipation capacity.

Keywords: carbon fiber reinforced polymer, coupled shear wall, coupling beam, finite element analysis, modern code, old code, strengthening

Procedia PDF Downloads 184
6131 The Quantitative Analysis of the Influence of the Superficial Abrasion on the Lifetime of the Frog Rail

Authors: Dong Jiang

Abstract:

The turnout is essential railway equipment and also among the most heavily demanded railway infrastructure facilities, on account of increasingly serious frog rail failures. In cooperation with the German company DB Systemtechnik AG, our research team focuses on the quantitative analysis of frog rails to predict their lifetimes. Moreover, suggestions for timely and effective maintenance are made to improve the economy of the frog rails. The lifetime of the frog rail depends strongly on the internal damage of the running surface up to the point where breakages occur. On the basis of the Hertzian theory of contact mechanics, the dynamic loads on the running surface are calculated in the form of the contact pressures on the running surface and the equivalent tensile stress inside the running surface. According to material mechanics, the strength of the frog rail is determined quantitatively in the form of a stress-cycle (S-N) curve. Under the interaction between the dynamic loads and the strength, the internal damage of the running surface is calculated by means of the linear damage hypothesis of Miner's rule. The emergence of the first breakage on the running surface is defined as the failure criterion, at which the damage degree equals 1.0. From the microscopic perspective, the running surface of the frog rail is divided into numerous segments for detailed analysis. The internal damage of a segment grows slowly in the beginning and disproportionately quickly towards the end, until the emergence of the breakage. From the macroscopic perspective, the internal damage of the running surface develops essentially linearly over the lifetime. Given this linear growth of the internal damage, the lifetime of the frog rail can be predicted simply from the slope of this linear trend. However, the superficial abrasion plays an essential role in the internal damage results from both perspectives. The influence of the superficial abrasion on the lifetime is described in the form of the abrasion rate, which has two contradictory effects. On the one hand, an insufficient abrasion rate concentrates the damage accumulation at the same position below the running surface, accelerating rail failure. On the other hand, an excessive abrasion rate hastens the disappearance of the head-hardened surface of the frog rail, resulting in untimely breakage at the surface. Thus, the relationship between the abrasion rate and the lifetime divides into an initial phase of increasing lifetime and a subsequent phase of more rapidly decreasing lifetime as the abrasion rate continues to grow. By balancing these two effects, the critical abrasion rate that yields the optimal lifetime is discussed.
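The damage calculation described above rests on two standard ingredients: a Basquin-type S-N curve, N(S) = C / S^k, and Miner's linear accumulation, D = sum of n_i / N(S_i), with breakage when D reaches 1.0. The sketch below uses entirely hypothetical parameters and an invented loading spectrum; it accumulates damage per year and predicts the lifetime from the linear trend, mirroring the macroscopic view in the abstract.

```python
# Hypothetical S-N curve parameters for the rail steel (Basquin form)
C, k = 1.0e21, 5.0               # N(S) = C / S**k, with S in MPa

def cycles_to_failure(stress_mpa):
    return C / stress_mpa ** k

def miner_damage(stress_blocks):
    """Linear damage accumulation: D = sum(n_i / N_i); failure at D = 1.0."""
    return sum(n / cycles_to_failure(s) for s, n in stress_blocks)

# Invented yearly loading spectrum on one segment of the running surface:
# (equivalent tensile stress in MPa, cycles per year)
yearly_blocks = [(450.0, 1.0e6), (520.0, 3.0e5), (600.0, 5.0e4)]
d_per_year = miner_damage(yearly_blocks)

# Macroscopically linear damage growth => lifetime from the slope
print(f"damage per year: {d_per_year:.4f}")
print(f"predicted lifetime: {1.0 / d_per_year:.1f} years")
```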

Keywords: breakage, critical abrasion rate, frog rail, internal damage, optimal lifetime

Procedia PDF Downloads 195
6130 Nonlinear Homogenized Continuum Approach for Determining Peak Horizontal Floor Acceleration of Old Masonry Buildings

Authors: Andreas Rudisch, Ralf Lampert, Andreas Kolbitsch

Abstract:

It is a well-known fact among the engineering community that earthquakes with comparatively low magnitudes can cause serious damage to nonstructural components (NSCs) of buildings, even when the supporting structure performs relatively well. Past research works focused mainly on NSCs of nuclear power plants and industrial plants. Particular attention should also be given to architectural façade elements of old masonry buildings (e.g. ornamental figures, balustrades, vases), which are very vulnerable under seismic excitation. Large numbers of these historical nonstructural components (HiNSCs) can be found in highly frequented historical city centers, and in the event of failure they pose a significant danger to persons. In order to estimate the vulnerability of acceleration-sensitive HiNSCs, the peak horizontal floor acceleration (PHFA) is used. The PHFA depends on the dynamic characteristics of the building, the ground excitation, and induced nonlinearities. Consequently, the PHFA cannot be generalized as a simple function of height. In the present research work, an extensive case study was conducted to investigate the influence of induced nonlinearity on the PHFA for old masonry buildings. Probabilistic nonlinear FE time-history analyses considering three different hazard levels were performed. A set of eighteen synthetically generated ground motions was used as input to the structure models. An elastoplastic macro-model (multiPlas) for nonlinear homogenized continuum FE calculation was calibrated at multiple scales and applied, taking specific failure mechanisms of masonry into account. The macro-model was calibrated against the results of specific laboratory tests and cyclic in situ shear tests. The nonlinear macro-model is based on the concept of multi-surface rate-independent plasticity. Material damage or crack formation is detected by reducing the initial strength after failure due to shear or tensile stress. As a result, once cracking begins, shear forces can only be transmitted to a limited extent by friction, and the tensile strength is reduced to zero. The first goal of the calibration was consistency of the load-displacement curves between experiment and simulation; the calibrated macro-model matches well with regard to the initial stiffness and the maximum horizontal load. Another goal was the correct reproduction of the observed crack pattern and the plastic strain activity; again, the macro-model proved to work well in this case and shows very good correlation. The results of the case study show that there is significant scatter in the absolute distribution of the PHFA between the applied ground excitations. An absolute distribution along the normalized building height was determined in the framework of probability theory. It can be observed that the extent of nonlinear behavior varies across the three hazard levels. Due to the detailed scope of the present research work, a robust comparison with code recommendations and simplified PHFA distributions is possible. The chosen methodology offers a way to determine the distribution of PHFA along the building height of old masonry structures. This permits a proper hazard assessment of HiNSCs under seismic loads.

Keywords: nonlinear macro-model, nonstructural components, time-history analysis, unreinforced masonry

Procedia PDF Downloads 153
6129 Knowledge Management: Why Is It So Difficult? From “A Good Idea” to Organizational Contribution

Authors: Lisandro Blas, Héctor Tamanini

Abstract:

From the early 1990s to now, not many companies or organizations have been able to "really" implement a knowledge management (KM) system that works (not only as judged by a measurement model, but with continuity over time). What are the reasons for that? Some of the reasons may lie in how KM is demanded (usefulness, priority, experts, a definition of KM) versus the importance and resources that organizations actually commit (budget, a person responsible for a specific KM area, intangibility). Many organizations "claim" the importance of knowledge management, but these claims are not reflected in their subsequent actions. For other tools or management ideas, organizations put economic and human resources to work. Why does this not occur with KM? This paper tries to explain some of these reasons and to address these situations through a survey conducted in 2011 for an IAPG (Argentine Institute of Oil & Gas) Congress.

Keywords: knowledge management into organizations, new perspectives, failure in implementation, claim

Procedia PDF Downloads 408
6128 Mechanical Ventilation: Relationship between Body Mass Index and Selected Patients' Outcomes at a University Hospital in Cairo

Authors: Mohamed Mamdouh Al-Banna, Warda Youssef Mohamed Morsy, Hanaa Ali El-Feky, Ashraf Hussein Abdelmohsen

Abstract:

Background: Mechanically ventilated patients need special nursing care with continuous close observation. The patients' body mass index (BMI) may affect their prognosis or outcomes. Aim of the study: to investigate the relationship between BMI and selected outcomes of critically ill mechanically ventilated patients. Research design: a descriptive correlational research design was utilized. Research questions: a) What is the BMI profile of mechanically ventilated patients admitted to critical care units over a period of six months? b) What is the relationship between body mass index and the frequency of organ dysfunction, length of ICU stay, weaning from mechanical ventilation, and the mortality rate among adult critically ill mechanically ventilated patients? Setting: different intensive care units of Cairo University Hospitals. Sample: a convenience sample of 30 patients mechanically ventilated for at least 72 hours. Tools of data collection: three tools were utilized: tool 1: patients' sociodemographic and medical data sheet; tool 2: BURNS Wean Assessment Program (BWAP) checklist; tool 3: Sequential Organ Failure Assessment (SOFA score) sheet. Results: The majority of the studied sample (77%) were male; 26.7% were in the 18-28 year age group and 26.7% in the 40-50 year age group. Moreover, two thirds (66.7%) of the studied sample had a normal BMI. There was no statistically significant relationship between BMI category and ICU length of stay or the mortality rate (X² = 11.31, p = 0.79 and X² = 0.15, p = 0.928, respectively), no statistically significant relationship between BMI category and the weaning trials from mechanical ventilation (X² = 0.15, p = 0.928), and no statistically significant relationship between BMI category and the occurrence of organ dysfunction (X² = 2.54, p = 0.637). Conclusion: no relationship was found between BMI categories and the selected patients' outcomes (weaning from MV, length of ICU stay, occurrence of organ dysfunction, mortality rate). Recommendations: replication of this study on a larger sample from different geographical locations in the Arab Republic of Egypt, and further studies to assess the effect of the quality of nursing care on mechanically ventilated patients' outcomes.
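The relationships reported above are chi-square tests of independence between BMI category and each outcome. A minimal sketch, assuming SciPy and an invented contingency table (the study's raw counts are not given in the abstract):

```python
from scipy.stats import chi2_contingency

# Invented 3x2 contingency table: BMI category (under/normal/over) x mortality
table = [[2, 3],
         [8, 12],
         [2, 3]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"X^2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
```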

Keywords: mechanical ventilation, body mass index, outcomes of mechanically ventilated patient, organ failure

Procedia PDF Downloads 237
6127 Investigation of Zinc Corrosion in Tropical Soil Solution

Authors: M. Lebrini, L. Salhi, C. Deyrat, C. Roos, O. Nait-Rabah

Abstract:

The paper presents a large experimental study on the corrosion of zinc in tropical soil and in groundwater at various depths. In this study, corrosion rate prediction was done on the basis of two methods: the electrochemical method and the gravimetric method. The electrochemical results showed that the corrosion rate is higher at the 0 m to 0.5 m and 0.5 m to 1 m depth levels; beyond these depth levels, the corrosion rate is lower. The electrochemical results also indicated that a passive layer forms on the zinc surface. SEM and EDX micrographs showed that the surface is severely attacked and confirmed that a zinc oxide layer is present on the surface, whose thickness and relief increase as the contact with the soil increases.
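The gravimetric method converts coupon weight loss to a corrosion rate with the standard relation (as in ASTM G1), CR = K·W / (A·t·ρ). A small sketch with hypothetical coupon data, not values from this study:

```python
def corrosion_rate_mm_per_year(weight_loss_g, area_cm2, hours, density_g_cm3):
    """Gravimetric corrosion rate, CR = K*W/(A*t*rho); K = 8.76e4 gives mm/y
    when W is in g, A in cm^2, t in hours, and rho in g/cm^3."""
    return 8.76e4 * weight_loss_g / (area_cm2 * hours * density_g_cm3)

# Hypothetical zinc coupon buried at 0.5 m for 90 days
rate = corrosion_rate_mm_per_year(weight_loss_g=0.085, area_cm2=25.0,
                                  hours=90 * 24, density_g_cm3=7.14)
print(f"corrosion rate: {rate:.4f} mm/year")
```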

Keywords: soil corrosion, galvanized steel, electrochemical technique, SEM and EDX

Procedia PDF Downloads 109
6126 Reconstruction Spectral Reflectance Cube Based on Artificial Neural Network for Multispectral Imaging System

Authors: Iwan Cony Setiadi, Aulia M. T. Nasution

Abstract:

The multispectral imaging (MSI) technique has been used for skin analysis, especially for distant mapping of in-vivo skin chromophores by analyzing spectral data at each reflected image pixel. For ergonomic purposes, our multispectral imaging system is decomposed into two parts: a light source compartment based on LEDs with 11 different wavelengths and a monochromatic 8-bit CCD camera with a C-mount objective lens. Software with a MATLAB GUI to control the system was also developed. Our system provides 11 monoband images and is coupled with software that reconstructs hyperspectral cubes from these multispectral images. In this paper, we propose a new method to build a hyperspectral reflectance cube based on an artificial neural network algorithm. After preliminary corrections, a neural network is trained using the 32 natural colors from the X-Rite Color Checker Passport. The learning procedure involves acquisition of reference spectra by a spectrophotometer. This neural network is then used to retrieve a megapixel multispectral cube between 380 and 880 nm with a 5 nm resolution from a low-spectral-resolution multispectral acquisition. As hyperspectral cubes contain a spectrum for each pixel, comparison should be made between the theoretical values from the spectrophotometer and the reconstructed spectra. To evaluate the performance of the reconstruction, we used the Goodness of Fit Coefficient (GFC) and the Root Mean Squared Error (RMSE). To validate the reconstruction, the set of 8 colour patches reconstructed by our MSI system was compared with the one recorded by the spectrophotometer. The average GFC was 0.9990 (standard deviation = 0.0010) and the average RMSE was 0.2167 (standard deviation = 0.064).
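The two reconstruction metrics have simple definitions worth making explicit: GFC is the normalized inner product between the measured and reconstructed spectra (1.0 means identical shape), and RMSE is the root mean squared error. A minimal sketch with synthetic spectra on the 380-880 nm, 5 nm grid used in the paper:

```python
import numpy as np

def gfc(measured, reconstructed):
    """Goodness of Fit Coefficient: |<m, r>| / (||m|| * ||r||)."""
    m, r = np.asarray(measured), np.asarray(reconstructed)
    return abs(np.dot(m, r)) / (np.linalg.norm(m) * np.linalg.norm(r))

def rmse(measured, reconstructed):
    m, r = np.asarray(measured), np.asarray(reconstructed)
    return np.sqrt(np.mean((m - r) ** 2))

# Synthetic reflectance spectra, 380-880 nm at 5 nm resolution (101 samples)
wl = np.arange(380, 885, 5)
true = 0.4 + 0.3 * np.sin((wl - 380) / 160.0)
recon = true + np.random.default_rng(0).normal(0, 0.01, wl.size)

print(f"GFC = {gfc(true, recon):.4f}, RMSE = {rmse(true, recon):.4f}")
```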

Keywords: multispectral imaging, reflectance cube, spectral reconstruction, artificial neural network

Procedia PDF Downloads 310
6125 Non-Cytotoxic Naturally Sourced Inorganic Hydroxyapatite (HAp) Scaffold Facilitates Bone-Like Mechanical Support and Cell Proliferation

Authors: Sudip Mondal, Biswanath Mondal, Sudit S. Mukhopadhyay, Apurba Dey

Abstract:

Bioactive materials improve implant devices over a long lifespan but have mechanical limitations. Mechanical characterization is very important for evaluating the lifespan and functionality of a scaffold material. After implantation of a scaffold material, primary-stage rejection of the scaffold occurs due to non-biocompatibility with the host body system. The second major problem occurs due to mechanical failure. Both mechanical and biocompatibility failure of scaffold materials can be overcome by prior evaluation of the scaffold materials. In this study, chemically treated Labeo rohita scales are used for synthesizing hydroxyapatite (HAp) biomaterial. Thermo-gravimetric and differential thermal analysis (TG-DTA) is carried out to ensure thermal stability. The chemical composition and bond structures of the wet ball-milled, calcined HAp powder are characterized by Fourier Transform Infrared spectroscopy (FTIR), X-ray Diffraction (XRD), Field Emission Scanning Electron Microscopy (FE-SEM), Transmission Electron Microscopy (TEM), and Energy Dispersive X-ray (EDX) analysis. The fish scale-derived apatite material consists of nano-sized particles with a Ca/P ratio of 1.71. Biocompatibility was evaluated through cytotoxicity testing and the MTT assay in MG63 osteoblast cell lines. In the cell attachment study, the cells attached tightly to the HAp scaffolds developed in the laboratory. The results clearly suggest that the HAp material synthesized in this study does not have any cytotoxic effect and has a natural binding affinity for mammalian cell lines. The synthesized HAp powder was further used successfully to develop a porous scaffold material with suitable mechanical properties of ~0.8 GPa compressive strength, ~1.10 GPa hardness, and ~30-35% porosity, which is acceptable for implantation in the trauma region of an animal model. The histological analysis also supports the bio-affinity of the processed HAp biomaterials in a Wistar rat model, investigating the contact reaction and stability at the artificial or natural prosthesis interface for biomedical function. This study suggests that the naturally sourced, fish scale-derived HAp material could be used as a suitable alternative biomaterial for tissue engineering applications in the near future.

Keywords: biomaterials, hydroxyapatite, scaffold, mechanical property, tissue engineering

Procedia PDF Downloads 445
6124 The Role of Information Technology in Supply Chain Management

Authors: V. Jagadeesh, K. Venkata Subbaiah, P. Govinda Rao

Abstract:

This paper explains the significance of information technology tools and software packages in supply chain management (SCM) for managing the entire supply chain. Material, financial, and information flows are managed effectively and efficiently with the aid of information technology tools and packages in order to deliver the right quantity and quality of goods at the right time, using the right methods and technology. Information technology plays a vital role in streamlining sales forecasting, demand planning, inventory control, and transportation in supply networks, and finally in production planning and scheduling. It achieves these objectives by streamlining business processes and integrating them within the enterprise and its extended enterprise. SCM starts with the customer and involves a sequence of activities from customer to retailer, distributor, manufacturer, and supplier within the supply chain framework. It is the process of integrating demand planning, supply network planning, and production planning and control. Forecasting indicates the direction for planning raw materials in order to meet the production planning requirements. Inventory control and transportation planning allocate the optimal or economic order quantity, utilizing the shortest possible routes to deliver the goods to the customer. Production planning and control utilize the optimal resource mix in order to meet capacity requirements planning. The above operations can be achieved by using appropriate information technology tools and software packages for supply chain management.

Keywords: supply chain management, information technology, business process, extended enterprise

Procedia PDF Downloads 364
6123 People Who Live in Poverty Usually Do So Due to Circumstances Far Beyond Their Control: A Multiple Case Study on Poverty Simulation Events

Authors: Tracy Smith-Carrier

Abstract:

Burgeoning research extols the benefits of innovative experiential learning activities to increase participants' engagement, enhance their individual learning, and bridge the gap between theory and practice. This presentation discusses findings from a multiple case study on poverty simulation events conducted with two samples: undergraduate students and community participants. After exploring the nascent research on the benefits and limitations of poverty simulation activities, the study explores whether participating in a poverty simulation resulted in changes to participants' beliefs about the causes and effects of poverty, as well as shifts in their attitudes and actions toward people experiencing poverty. For the purposes of triangulation, quantitative and qualitative data from a variety of sources were analyzed: participant feedback surveys, qualitative responses, and pre-, post-, and follow-up questionnaires. Findings show statistically significant results (p<.05) from both samples on cumulative scores of the modified Attitudes Toward Poverty Scale, indicating an improvement in participants' attitudes toward poverty. However, although participants were generally positive about their experiences, the simulation did not appear to have prompted them to take specific actions to reduce poverty. Conclusions drawn from the research study suggest that poverty simulation planners should be wary of adopting scenarios that emphasize, or fail to adequately contextualize, behaviours or responses that might perpetuate individual explanations of poverty. Moreover, organizers must carefully consider how to ensure that participants in their audience currently experiencing low income do not become emotionally distressed, triggered, or further marginalized in the process. Moving beyond the goal of increasing participants' understanding of poverty, interventions that foster greater engagement in poverty issues over the long term are necessary.

Keywords: empathy, experiential learning, poverty awareness, poverty simulation

Procedia PDF Downloads 247
6122 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study

Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb

Abstract:

The purpose of this study is to reduce the radiation dose for chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor. Results were compared with those obtained with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a size conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with Signal to Noise Ratio (SNR) measurements in the phantom. SPSS software was used for data analysis. Results showed that, with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% according to DLP and CT-Expo, respectively. ED ranged between 7.01 mSv and 6.6 mSv for the standard protocol, and between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose was higher without TCM (16.25 mGy) and 48.8% lower with TCM. The calculated SNR values were significantly different (p=0.03<0.05); the highest was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique to reduce SSDE and ED while conserving image quality for thoracic CT examinations.
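Both dose quantities are simple products, so a short sketch may clarify: ED = DLP × k, where k is an anatomical-region conversion coefficient (about 0.014 mSv·mGy⁻¹·cm⁻¹ for an adult chest), and SSDE = CTDIvol × f, where f is the AAPM Report 204 factor for the phantom's effective diameter. All numeric inputs below are illustrative, not the study's measurements.

```python
def effective_dose_msv(dlp_mgy_cm, k_region=0.014):
    """ED = DLP * k; k ~ 0.014 mSv/(mGy*cm) for an adult chest."""
    return dlp_mgy_cm * k_region

def ssde_mgy(ctdi_vol_mgy, size_factor):
    """SSDE = CTDIvol * f; f from the AAPM 204 effective-diameter table."""
    return ctdi_vol_mgy * size_factor

# Illustrative values for the two protocols
for label, dlp, ctdi in [("standard", 470.0, 11.0), ("with TCM", 240.0, 5.6)]:
    print(f"{label}: ED = {effective_dose_msv(dlp):.2f} mSv, "
          f"SSDE = {ssde_mgy(ctdi, size_factor=1.45):.1f} mGy")
```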

Keywords: anthropomorphic phantom, computed tomography, CT-expo, radiation dose

Procedia PDF Downloads 204
6121 Development of a Matlab® Program for the Bi-Dimensional Truss Analysis Using the Stiffness Matrix Method

Authors: Angel G. De Leon Hernandez

Abstract:

A structure is defined as a physical system or, in certain cases, an arrangement of connected elements, capable of bearing certain loads. Structures are present in every part of daily life, e.g., in the design of buildings, vehicles and mechanisms. The main goal of a structure designer is to develop a secure, aesthetic and maintainable system, considering the constraints imposed in each case. With advances in technology during the last decades, the capability to solve engineering problems has increased enormously. Nowadays, computers play a critical role in structural analysis; unfortunately, for university students the vast majority of this software is inaccessible due to its complexity and cost, even when the manufacturers offer student versions. This is exactly the motivation for developing a more accessible and easy-to-use computing tool. The program is designed as a tool for university students enrolled in courses related to structural analysis and design, as a complementary instrument to achieve a better understanding of this area and to avoid tedious calculations. The program can also be useful for graduate engineers in the field of structural design and analysis. A graphical user interface is included to make the program simpler to operate and to clarify the information requested and the results obtained. The present document includes the theoretical basis on which the program solves the structural analysis, the logical path followed to develop the program, the theoretical results, a discussion of the results, and their validation.
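The core of the stiffness matrix method the program implements can be sketched compactly (here in Python rather than MATLAB, with a hypothetical two-bar truss): each bar contributes (EA/L)-scaled direction-cosine blocks to the global matrix, supports are applied by removing constrained degrees of freedom, and displacements follow from solving K·u = F.

```python
import numpy as np

def element_stiffness(xi, yi, xj, yj, E, A):
    """Global-axes stiffness (4x4) of a 2D truss bar between nodes i and j."""
    L = np.hypot(xj - xi, yj - yi)
    c, s = (xj - xi) / L, (yj - yi) / L
    b = np.array([[c * c, c * s], [c * s, s * s]])
    return E * A / L * np.block([[b, -b], [-b, b]])

# Hypothetical two-bar truss: nodes 0 (0,0), 1 (4,0), 2 (4,3) in meters;
# bars connect nodes 0-2 and 1-2.
nodes = np.array([[0, 0], [4, 0], [4, 3]], float)
bars = [(0, 2), (1, 2)]
E, A = 200e9, 1e-3                       # steel, 1000 mm^2 cross-section

K = np.zeros((6, 6))                     # 2 DOFs (x, y) per node
for i, j in bars:
    dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
    K[np.ix_(dofs, dofs)] += element_stiffness(*nodes[i], *nodes[j], E, A)

F = np.zeros(6)
F[4] = 10e3                              # 10 kN horizontal load at node 2
free = [4, 5]                            # nodes 0 and 1 fully pinned
u = np.linalg.solve(K[np.ix_(free, free)], F[free])
print("displacements at node 2 (m):", u)
```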

Keywords: stiffness matrix method, structural analysis, Matlab® applications, programming

Procedia PDF Downloads 106
6120 Spatial Variation of Nitrogen, Phosphorus and Potassium Contents of Tomato (Solanum lycopersicum L.) Plants Grown in Greenhouses (Springs) in Elmali-Antalya Region

Authors: Namik Kemal Sonmez, Sahriye Sonmez, Hasan Rasit Turkkan, Hatice Tuba Selcuk

Abstract:

In this study, the spatial variation of plant and soil nutrient contents of tomato plants grown in greenhouses was investigated in the Elmalı region of Antalya. For this purpose, a total of 19 sampling points was determined. The coordinates of each sampling point were recorded using a hand-held GPS device and transferred to satellite data in GIS. Soil samples were collected at two different depths, 0-20 and 20-40 cm, and leaf samples were taken from different tomato greenhouses. The soil and plant samples were analyzed for N, P and K. Attribute tables were then created with the analysis results in GIS. The data were analyzed, and semivariogram models and parameters (nugget, sill and range) of the variables were determined using GIS software. Kriged maps of the variables were created from the nugget, sill and range values with the geostatistical extension of the ArcGIS software. The kriged maps of the N, P and K contents of the plant and soil samples showed patchy or relatively smooth distributions across the study areas. As a result, the N content of plants was sufficient in approximately 66% of the tomato production areas. The P and K contents were determined to be sufficient in 70% and 80% of the areas, respectively. In addition, soil total K contents were generally adequate, and available N and P contents were found to be more than sufficient at both depths (0-20 and 20-40 cm) in 90% of the areas.
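The interpolation workflow (done in the study with the ArcGIS geostatistical extension) can be approximated in open-source form. A minimal sketch, assuming the pykrige package and made-up coordinates and leaf-N values at the 19 sampling points; the spherical model and its nugget/sill/range parameters play the same roles as in the paper.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Made-up coordinates (m) and leaf N contents (%) at 19 sampling points
rng = np.random.default_rng(7)
x, y = rng.uniform(0, 5000, 19), rng.uniform(0, 5000, 19)
n_content = 2.5 + 0.8 * np.sin(x / 1500) + rng.normal(0, 0.1, 19)

# Ordinary kriging with a spherical semivariogram
# (nugget, sill, and range are fitted internally)
ok = OrdinaryKriging(x, y, n_content, variogram_model="spherical")
gridx = np.linspace(0, 5000, 50)
gridy = np.linspace(0, 5000, 50)
z, ss = ok.execute("grid", gridx, gridy)   # kriged map and kriging variance
print("kriged N map shape:", z.shape)
```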

Keywords: Elmali, nutrients, springs greenhouses, spatial variation, tomato

Procedia PDF Downloads 229
6119 Performance and Availability Analysis of 2N Redundancy Models

Authors: Yutae Lee

Abstract:

In this paper, we consider the performance and availability of a redundancy model. The redundancy model is a form of resilience that ensures service availability in the event of component failure. This paper considers a 2N redundancy model, in which there are at most one active service unit and at most one standby service unit. The active unit provides the service, while the standby is prepared to take over the active role when the active unit fails. We design our analysis model using Stochastic Reward Nets, and then evaluate the performance and availability of the 2N redundancy model using the Stochastic Petri Net Package (SPNP).
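While the paper solves the model as a Stochastic Reward Net with SPNP, the steady-state availability of a simple 2N model can be illustrated directly as a continuous-time Markov chain standing in for that analysis. A minimal sketch, assuming exponential failure/repair rates, instantaneous failover, and a single repair crew (all assumptions on our part):

```python
import numpy as np

lam, mu = 1 / 1000.0, 1 / 8.0   # failure and repair rates (per hour), invented

# CTMC states: 0 = both units up, 1 = one up (standby took over), 2 = none up.
# Generator matrix Q (rows sum to zero).
Q = np.array([
    [-2 * lam, 2 * lam, 0.0],
    [mu, -(mu + lam), lam],
    [0.0, mu, -mu],
])

# Steady-state distribution: pi @ Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]    # service is up while at least one unit is up
print(f"steady-state availability: {availability:.6f}")
```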

Keywords: availability, performance, stochastic reward net, 2N redundancy

Procedia PDF Downloads 402
6118 Use of the Occupational Repetitive Action Method in Different Productive Sectors: A Literature Review 2007-2018

Authors: Aanh Eduardo Dimate-Garcia, Diana Carolina Rodriguez-Romero, Edna Yuliana Gonzalez Rincon, Diana Marcela Pardo Lopez, Yessica Garibello Cubillos

Abstract:

Musculoskeletal disorders (MD) are the new epidemic of chronic diseases; they are multifactorial and affect different productive sectors. Although there are multiple instruments to evaluate static and dynamic load, the Occupational Repetitive Action (OCRA) method seems to be an attractive option. Objective: to analyze the use of the OCRA method and the prevalence of MD in workers of various productive sectors according to the literature (2007-2018). Materials and methods: a literature review (following the PRISMA statement) of studies aimed at assessing the level of biomechanical risk (OCRA) and the prevalence of MD was carried out in the databases Scielo, Science Direct, Scopus, ProQuest, Gale, PubMed, Lilacs and Ebsco; 7 studies met the selection criteria, the majority quantitative (cross-sectional). Results: the review found (in gardening and flower-growing) that 79% of task-related conditions impose physical demands and involve repetitive movements. In addition, a high occurrence of MD in the upper and lower back and the upper and lower extremities was produced by the frequency of the activities carried out (footwear production). Likewise, there was evidence of 'very high risk' of developing MD (salmon industry) and a medium OCRA index for repetitive movements requiring special care (U-assembly line). Conclusions: the review showed the limited use of the OCRA method for the detection of MD in workers from different sectors; this method can be used for the detection of biomechanical risk and the appearance of MD.

Keywords: checklist, cumulative trauma disorders, musculoskeletal diseases, repetitive movements

Procedia PDF Downloads 161
6117 Optimizing Communications Overhead in Heterogeneous Distributed Data Streams

Authors: Rashi Bhalla, Russel Pears, M. Asif Naeem

Abstract:

In this 'information explosion era', analyzing data, a critical commodity, and mining knowledge from vertically distributed data streams incur huge communication costs. However, efforts to decrease communication in the distributed environment have an adverse influence on classification accuracy; therefore, a research challenge lies in maintaining a balance between transmission cost and accuracy. This paper proposes a method based on Bayesian inference to reduce the communication volume in a heterogeneous distributed environment while retaining prediction accuracy. Our experimental evaluation reveals that a significant reduction in communication can be achieved across a diverse range of dataset types.

Keywords: big data, bayesian inference, distributed data stream mining, heterogeneous-distributed data

Procedia PDF Downloads 144
6116 Navigating Construction Project Outcomes: Synergy Through the Evolution of Digital Innovation and Strategic Management

Authors: Derrick Mirindi, Frederic Mirindi, Oluwakemi Oshineye

Abstract:

The ongoing high rate of construction project failures worldwide is often blamed on the difficulties of managing stakeholders. This highlights the crucial role of strategic management (SM) in achieving project success. This study investigates how integrating digital tools into the SM framework can effectively address stakeholder-related challenges. The work specifically focuses on the impact of evolving digital tools, such as Project Management Software (PMS) (e.g., Basecamp and Wrike), Building Information Modeling (BIM) (e.g., Tekla BIMsight and Autodesk Navisworks), Virtual and Augmented Reality (VR/AR) (e.g., Microsoft HoloLens), drones and remote monitoring, and social media and web-based platforms, on improving stakeholder engagement and project outcomes. Through existing literature and examples of failed projects, the study highlights how evolving digital tools serve as facilitators within the strategic management process. These tools offer benefits such as real-time data access, enhanced visualization, and more efficient workflows to mitigate stakeholder challenges in construction projects. The findings indicate that integrating digital tools with SM principles effectively addresses stakeholder challenges, resulting in improved project outcomes and stakeholder satisfaction. The research advocates for a combined approach that embraces both strategic management and digital innovation to navigate the complex stakeholder landscape in construction projects.

Keywords: strategic management, digital tools, virtual and augmented reality, stakeholder management, building information modeling, project management software

Procedia PDF Downloads 55
6115 A Review on Design and Analysis of Structure Against Blast Forces

Authors: Akshay Satishrao Kawtikwar

Abstract:

The effect of blast loads on structures is an essential aspect that needs to be considered. This type of event can be devastating, which is why it must be taken into account during the design process. When designing a building, not only the wind and seismic loads but also the effects of a blast have to be taken into consideration. Blast load is the load applied to a structure by the blast wave that follows immediately after an explosion. A blast in or close to a building can cause catastrophic damage to the interior and exterior of the building, the internal structural framework, collapsing walls, and so on. The most important feature of blast-resistant construction is the ability to absorb blast energy without catastrophic failure of the structure as a whole. Construction materials in blast-protective structures must have ductility as well as strength.

Keywords: blast resistant design, blast load, explosion, ETABS

Procedia PDF Downloads 83
6114 A Survey on Various Technique of Modified TORA over MANET

Authors: Shreyansh Adesara, Sneha Pandiya

Abstract:

The mobile ad-hoc network (MANET) is an important and open research area for examining and determining performance evolution. The Temporally Ordered Routing Algorithm (TORA) is an adaptive, distributed MANET routing algorithm that is totally dependent on the Internet MANET Encapsulation Protocol (IMEP) for link detection and link sensing. If IMEP detects a false link failure, the network suffers from congestion and unnecessary route maintenance. Thus, improvements to TORA's link detection method have been introduced through various modifications to IMEP, from different perspectives and by different authors. Different reactive routing protocols such as AODV, TORA, and DSR have also been compared to understand the routing scenario under different parameters and using different models.

Keywords: IMEP, mobile ad-hoc network, protocol, TORA

Procedia PDF Downloads 431
6113 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue origin of these DNA fragments from plasma can result in more accurate and faster disease diagnosis and precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to a huge amount of cost and time. To overcome these limitations, the idea of predicting OCRs from WGS is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples in the ATAC-DB database. The percentage of overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As OCRs are mostly located at the transcription start sites (TSS) of genes, we also compared the concordance between the predicted OCRs and human gene TSS regions obtained from refTSS, which showed agreement of around 52.04% with all genes and ~78% with housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, which led to a tool named OCRDetector, with restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and the need to consider multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised learning manner, which results in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
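The shape of the pipeline can be conveyed in a compact sketch. This is an illustrative reconstruction, not the authors' code: synthetic binned read depth stands in for real cfDNA coverage, and spectral clustering stands in for the paper's LP-based graph cut; the steps of count normalization, DFT conversion, and correlation-graph construction follow the abstract.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Synthetic local sequencing depth over genomic windows (illustrative only):
# OCR-like windows are given a characteristic dip in coverage.
rng = np.random.default_rng(3)
n_windows, bins = 200, 64
depth = rng.poisson(30, size=(n_windows, bins)).astype(float)
depth[:80] *= 1 - 0.4 * np.sin(np.linspace(0, np.pi, bins))

# 1) count normalization per window
norm = depth / depth.sum(axis=1, keepdims=True)

# 2) DFT magnitude as the per-window feature (dropping the DC component)
spec = np.abs(np.fft.rfft(norm, axis=1))[:, 1:]

# 3) correlation graph between windows; 4) two-way clustering (OCR+ / OCR-),
# using spectral clustering in place of the paper's LP graph cut
affinity = (np.corrcoef(spec) + 1) / 2        # map correlations to [0, 1]
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print("windows per cluster:", np.bincount(labels))
```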

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 132
6112 Scientific Forecasting in International Relations

Authors: Djehich Mohamed Yousri

Abstract:

In this research paper, foresight of the future of international relations is held to have an important place at both the theoretical and applied levels, because policy makers around the world are in dire need of analyses that are useful for drawing up the foreign policies of their countries and protecting their national security from potential future threats. In this context, the topic has raised much scientific controversy and intellectual debate, especially regarding the effectiveness, accuracy, and ability of foresight methods to identify potential futures, a debate that centers on the scientific foundations of foreseeing international relations. It has become an arena for intellectual discussion between thinkers in international relations belonging to different theoretical schools, which confirms the conceptual and implicit development of prediction toward a scientific level.

Keywords: foresight, forecasting, international relations, international relations theory, concept of international relations

Procedia PDF Downloads 197