Search results for: prediction of publications
1766 Side Effects of Dental Whitening: Published Data from the Literature
Authors: Ilma Robo, Saimir Heta, Emela Dalloshi, Nevila Alliu, Vera Ostreni
Abstract:
The dental whitening process, beyond the fact that it is a minimally invasive dental treatment, has effects on the dental structure, or on the pulp of the tooth, where it is applied. The electronic search was performed using keywords to find articles published within the last 10 years about side effects, assessed as such, of minimally invasive dental bleaching treatment. Methodology: A further aim of the study was to evaluate, in the selected articles, the side effects of bleaching according to the percentage and type of solution used, classified by the base solution used for bleaching. Results: The side effects of bleaching are evaluated in the selected articles depending on the method of bleaching application, that is, whether it is carried out with recommended solutions or with mixtures of alternative solutions or substances based on Internet information. Short conclusion: The dental bleaching process has side effects which have not yet been definitively evaluated experimentally, in large samples of individuals or animals (mice or cattle), to arrive at accurate numerical conclusions. The trend of publications about this topic has been increasing in recent years, as is the trend for aesthetic facial treatments, including dental ones.
Keywords: teeth whitening, side effects, permanent teeth, formed dental apex
Procedia PDF Downloads 63
1765 Energy System Analysis Using Data-Driven Modelling and Bayesian Methods
Authors: Paul Rowley, Adam Thirkill, Nick Doylend, Philip Leicester, Becky Gough
Abstract:
The dynamic performance of all energy generation technologies is impacted to varying degrees by the stochastic properties of the wider system within which the generation technology is located. This stochasticity can include the varying nature of ambient renewable energy resources such as wind or solar radiation, or unpredicted changes in energy demand which impact upon the operational behaviour of thermal generation technologies. An understanding of these stochastic impacts is especially important in contexts such as highly distributed (or embedded) generation, where issues affecting the individual or aggregated performance of high numbers of relatively small generators matter, such as in ESCO projects. Probabilistic evaluation of monitored or simulated performance data is one technique which can provide an insight into the dynamic performance characteristics of generating systems, both in a prognostic sense (such as the prediction of future performance at the project's design stage) and in a diagnostic sense (such as in the real-time analysis of underperforming systems). In this work, we describe the development, application and outcomes of a new approach to the acquisition of datasets suitable for use in subsequent performance and impact analysis (including the use of Bayesian approaches) for a number of distributed generation technologies. The application of the approach is illustrated using a number of case studies involving domestic and small commercial scale photovoltaic, solar thermal and natural gas boiler installations, and the results presented show that the methodology offers significant advantages in terms of plant efficiency prediction or diagnosis, along with allied environmental and social impacts such as greenhouse gas emission reduction or fuel affordability.
Keywords: renewable energy, dynamic performance simulation, Bayesian analysis, distributed generation
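The Bayesian flavour of the probabilistic evaluation can be illustrated with a toy conjugate update; the prior and the monitored-day counts below are assumptions for illustration, not the paper's actual model:

```python
# A minimal sketch, assuming a Beta-Binomial model: update a prior belief
# about the probability that a PV system meets its design output as
# monitored daily data arrive.
from scipy.stats import beta

a, b = 2, 2                      # weakly informative prior
days_ok, days_under = 42, 18     # monitored days meeting / missing design yield

posterior = beta(a + days_ok, b + days_under)
print("P(system meets design) ~", round(posterior.mean(), 2))
print("95% credible interval:", [round(q, 2) for q in posterior.interval(0.95)])
```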
Procedia PDF Downloads 495
1764 Count Data Regression Modeling: An Application to Spontaneous Abortion in India
Authors: Prashant Verma, Prafulla K. Swain, K. K. Singh, Mukti Khetan
Abstract:
Objective: In India, around 20,000 women die every year due to abortion-related complications. In the modelling of count variables, there is sometimes a preponderance of zero counts. This article concerns the estimation of various count regression models to predict the average number of spontaneous abortions among women in the Punjab state of India. It also assesses the factors associated with the number of spontaneous abortions. Materials and methods: The study included data on 27,173 married women of Punjab obtained from the DLHS-4 survey (2012-13). Poisson regression (PR), negative binomial (NB) regression, zero hurdle negative binomial (ZHNB), and zero-inflated negative binomial (ZINB) models were employed to predict the average number of spontaneous abortions and to identify the determinants affecting the number of spontaneous abortions. Results: Statistical comparisons among the four estimation methods revealed that the ZINB model provides the best prediction for the number of spontaneous abortions. Antenatal care (ANC) place, place of residence, total children born to a woman, woman's education and economic status were found to be the most significant factors affecting the occurrence of spontaneous abortion. Conclusions: The study offers a practical demonstration of techniques designed to handle count variables. Statistical comparisons among the four estimation models revealed that the ZINB model provided the best prediction for the number of spontaneous abortions and is recommended for predicting the number of spontaneous abortions. The study suggests that women receive institutional antenatal care to attain limited parity. It also advocates promoting higher education among women in Punjab, India.
Keywords: count data, spontaneous abortion, Poisson model, negative binomial model, zero hurdle negative binomial, zero-inflated negative binomial, regression
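The model comparison described can be sketched with statsmodels; the synthetic covariates and zero-inflated outcome below are illustrative stand-ins for the DLHS-4 variables:

```python
# A hedged sketch of comparing count models by AIC, as the study does.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(0)
n = 2000
X = sm.add_constant(np.column_stack([
    rng.integers(0, 16, n),        # years of education (illustrative)
    rng.integers(1, 6, n),         # wealth quintile (illustrative)
    rng.integers(0, 2, n),         # urban residence (illustrative)
]))
# zero-inflated counts: most women report zero spontaneous abortions
y = rng.poisson(0.3, n) * rng.binomial(1, 0.4, n)

models = {
    "Poisson": sm.Poisson(y, X).fit(disp=False),
    "NegBin": sm.NegativeBinomial(y, X).fit(disp=False),
    "ZINB": ZeroInflatedNegativeBinomialP(y, X, exog_infl=X).fit(disp=False, maxiter=500),
}
for name, m in models.items():
    print(f"{name}: AIC = {m.aic:.1f}")     # lower AIC = better fit
```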
Procedia PDF Downloads 155
1763 Comparison of Feedforward Back Propagation and Self-Organizing Map for Prediction of Crop Water Stress Index of Rice
Authors: Aschalew Cherie Workneh, K. S. Hari Prasad, Chandra Shekhar Prasad Ojha
Abstract:
Due to the increase in water scarcity, the crop water stress index (CWSI) is receiving significant attention these days, especially in arid and semiarid regions, for quantifying water stress and effective irrigation scheduling. Nowadays, machine learning techniques such as neural networks are being widely used to determine CWSI. In the present study, the performance of two artificial neural networks, namely Self-Organizing Maps (SOM) and Feed Forward-Back Propagation Artificial Neural Networks (FF-BP-ANN), is compared in determining the CWSI of the rice crop. Irrigation field experiments with varying degrees of irrigation were conducted at the irrigation field laboratory of the Indian Institute of Technology, Roorkee, during the growing season of the rice crop. The CWSI of rice was computed empirically by measuring key meteorological variables (relative humidity, air temperature, wind speed, and canopy temperature) and crop parameters (crop height and root depth). The empirically computed CWSI was compared with the SOM- and FF-BP-ANN-predicted CWSI. The upper and lower CWSI baselines were computed using multiple regression analysis. The regression analysis showed that the lower CWSI baseline for rice is a function of crop height (h), air vapor pressure deficit (AVPD), and wind speed (u), whereas the upper CWSI baseline is a function of crop height (h) and wind speed (u). The performance of SOM and FF-BP-ANN was compared by computing the Nash-Sutcliffe efficiency (NSE), index of agreement (d), root mean squared error (RMSE), and coefficient of correlation (R²). It is found that FF-BP-ANN performs better than SOM in predicting the CWSI of rice crops.
Keywords: artificial neural networks, crop water stress index, canopy temperature, prediction capability
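The FF-BP-ANN side of the comparison can be sketched as follows; the synthetic measurements and the Idso-type CWSI formula are illustrative stand-ins for the field data and regression-derived baselines:

```python
# A hedged sketch: predict the empirically computed CWSI from measured
# inputs with a feed-forward back-propagation network, scored by RMSE and NSE.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
t_air = rng.uniform(25, 38, n)              # air temperature, deg C
t_canopy = t_air + rng.uniform(-2, 6, n)    # canopy temperature, deg C
rh = rng.uniform(30, 90, n)                 # relative humidity, %
wind = rng.uniform(0.5, 4.0, n)             # wind speed, m/s
dt_lower, dt_upper = -2.0, 6.0              # assumed baseline values

# Empirical CWSI: position of (Tc - Ta) between the lower and upper baselines
y = ((t_canopy - t_air) - dt_lower) / (dt_upper - dt_lower)
X = np.column_stack([rh, t_air, wind, t_canopy])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000).fit(X_tr, y_tr)
pred = ann.predict(X_te)

rmse = np.sqrt(np.mean((y_te - pred) ** 2))
nse = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"RMSE = {rmse:.3f}, NSE = {nse:.3f}")
```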
Procedia PDF Downloads 117
1762 Effect of Wettability Alteration on Production Performance in Unconventional Tight Oil Reservoirs
Authors: Rashid S. Mohammad, Shicheng Zhang, Xinzhe Zhao
Abstract:
In tight oil reservoirs, wettability alteration has generally been considered an effective way to remove fracturing fluid retention on the surface of the fracture and consequently improve oil production. However, there is a lack of a reliable productivity prediction model relating wettability to oil production in tight oil wells. In this paper, a new oil productivity prediction model of immiscible oil-water flow and miscible CO₂-oil flow accounting for wettability is developed. This mathematical model is established by considering two different length scales: the nanoporous network and propped fractures. CO₂ diffusion in the nanoporous network and high-velocity non-Darcy flow in propped fractures are considered by taking into account the effect of wettability alteration on capillary pressure and relative permeability. A laboratory experiment is also conducted here to validate this model. Laboratory experiments were designed to compare the water saturation profiles for different contact angles, revealing the fluid retention in rock pores that affects capillary force and relative permeability. Four brines with different concentrations were selected to create different contact angles. In water-wet porous media, as the system becomes more oil-wet, water saturation decreases. As a result, oil relative permeability increases. On the other hand, capillary pressure, which is the resistance to oil flow, increases as well. The oil production change due to wettability alteration is the result of the combined changes in oil relative permeability and capillary pressure. The results indicate that wettability is a key factor in fracturing fluid retention removal and oil enhancement in tight reservoirs. By incorporating laboratory tests into a mathematical model, this work shows that the relationship between wettability and oil production is not a simple linear pattern but a parabolic one. Additionally, it can be used for a better understanding of the optimization design of fracturing fluids.
Keywords: wettability, relative permeability, fluid retention, oil production, unconventional and tight reservoirs
Procedia PDF Downloads 236
1761 A Neural Network for the Prediction of Contraction after Burn Injuries
Authors: Ginger Egberts, Marianne Schaaphok, Fred Vermolen, Paul van Zuijlen
Abstract:
A few years ago, a promising morphoelastic model was developed for the simulation of contraction formation after burn injuries. Contraction can lead to a serious reduction in physical mobility, like a reduction in the range of motion of joints. If this is the case in a healing burn wound, then it is referred to as a contracture that needs medical intervention. The morphoelastic model consists of a set of partial differential equations describing both a chemical part and a mechanical part of dermal wound healing. These equations are solved with the numerical finite element method (FEM). In this method, many calculations are required on each of the chosen elements. In general, the more elements, the more accurate the solution. However, the number of elements increases rapidly if simulations are performed in 2D and 3D. In that case, not only does it take longer before a prediction is available, but the computation also becomes more expensive. It is therefore important to investigate alternative ways to generate the same results, based on the input parameters only. In this study, a surrogate neural network has been designed to mimic the results of the one-dimensional morphoelastic model. The neural network generates predictions quickly, is easy to implement, and there is freedom in the choice of input and output. Because a neural network requires extensive training and a data set, it is ideal that the one-dimensional FEM code generates output quickly. These feed-forward-type neural network results are very promising. Not only can the network give faster predictions, but it also has a performance of over 99%. It reports on the relative surface area of the wound/scar, the total strain energy density, and the evolutions of the densities of the chemicals and mechanics. It is, therefore, interesting to investigate the applicability of a neural network for the two- and three-dimensional morphoelastic models of contraction after burn injuries.
Keywords: biomechanics, burns, feasibility, feed-forward NN, morphoelasticity, neural network, relative surface area wound
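The surrogate idea can be sketched as below: learn the mapping from model parameters to FEM outputs offline, then query the network instead of the solver. The fem_solve() helper is a cheap hypothetical stand-in for the one-dimensional morphoelastic FEM code:

```python
# A minimal sketch of a feed-forward surrogate for an expensive simulator.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fem_solve(p):
    # hypothetical stand-in for the FEM run; returns e.g. relative wound area
    return 1.0 - 0.4 * p[0] + 0.2 * p[1] * p[2]

rng = np.random.default_rng(0)
params = rng.uniform(0.1, 1.0, size=(2000, 6))       # sampled input parameters
targets = np.array([fem_solve(p) for p in params])   # generated offline

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(params, targets)

# At prediction time the surrogate answers in milliseconds, whereas a full
# 2D/3D FEM simulation could take much longer.
new_case = rng.uniform(0.1, 1.0, size=(1, 6))
print(surrogate.predict(new_case))
```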
Procedia PDF Downloads 55
1760 Utilizing Extended Reality in Disaster Risk Reduction Education: A Scoping Review
Authors: Stefano Scippo, Damiana Luzzi, Stefano Cuomo, Maria Ranieri
Abstract:
Background: In response to the rise in natural disasters linked to climate change, numerous studies on Disaster Risk Reduction Education (DRRE) have emerged since the '90s, mainly using a didactic transmission-based approach. Effective DRRE should align with an interactive, experiential, and participatory educational model, which can be costly and risky. A potential solution is using simulations facilitated by eXtended Reality (XR). Research Question: This study aims to conduct a scoping review to explore educational methodologies that use XR to enhance knowledge among teachers, students, and citizens about environmental risks, natural disasters (including climate-related ones), and their management. Method: A search string of 66 keywords was formulated, spanning three domains: 1) education and target audience, 2) environment and natural hazards, and 3) technologies. On June 21st, 2023, the search string was used across five databases: EBSCOhost, IEEE Xplore, PubMed, Scopus, and Web of Science. After deduplication and removing papers without abstracts, 2,152 abstracts (published between 2013 and 2023) were analyzed and 2,062 papers were excluded, followed by the exclusion of 56 papers after full-text scrutiny. Excluded studies focused on unrelated technologies, non-environmental risks, and lacked educational outcomes or accessible texts. Main Results: The 34 reviewed papers were analyzed for context, risk type, research methodology, learning objectives, XR technology use, outcomes, and educational affordances of XR. Notably, since 2016, there has been a rise in scientific publications, focusing mainly on seismic events (12 studies) and floods (9), with a significant contribution from Asia (18 publications), particularly Japan (7 studies). Methodologically, the studies were categorized into empirical (26) and non-empirical (8). Empirical studies involved user or expert validation of XR tools, while non-empirical studies included systematic reviews and theoretical proposals without experimental validation. Empirical studies were further classified into quantitative, qualitative, or mixed-method approaches. Six qualitative studies involved small groups of users or experts, while 20 quantitative or mixed-method studies used seven different research designs, with most (17) employing a quasi-experimental, one-group post-test design, focusing on XR technology usability over educational effectiveness. Non-experimental studies had methodological limitations, making their results hypothetical and in need of further empirical validation. Educationally, the learning objectives centered on knowledge and skills for surviving natural disaster emergencies. All studies recommended XR technologies for simulations or serious games but did not develop comprehensive educational frameworks around these tools. XR-based tools showed potential superiority over traditional methods in teaching risk and emergency management skills. However, conclusions were more valid in studies with experimental designs; otherwise, they remained hypothetical without empirical evidence. The educational affordances of XR, mainly user engagement, were confirmed by the studies. Authors’ Conclusions: The analyzed literature lacks specific educational frameworks for XR in DRRE, focusing mainly on survival knowledge and skills. 
There is a need to expand educational approaches to include uncertainty education, developing competencies that encompass knowledge, skills, and attitudes like risk perception.
Keywords: disaster risk reduction education, educational technologies, scoping review, XR technologies
Procedia PDF Downloads 24
1759 The Hawza Al-'Ilmiyya and Its Role in Preserving the Shia Identity through Jurisprudence
Authors: Raied Khayou
Abstract:
The Hawza Al-'Ilmiyya is a network of religious seminaries in the Shia branch of Islam. This research mainly focuses on the oldest school, located in Najaf, Iraq, because its core curriculum and main characteristics have been unchanged since the fourth century of Islam. Relying on a thorough literature review of Arabic and English publications, and interviews with current and previous students of the seminary, the current research outlines the factors showing how this seminary was crucial in keeping the Shia religious identity intact despite sometimes brutal attempts at interference and persecution. Several factors helped the seminary preserve its central importance. First, rooted in their theology, Shia Muslims believe that the Hawza Al-'Ilmiyya and its graduates carry a sacred authority. Second, the financial independence of the seminary kept it free from governmental or political meddling. Third, its unique teaching method, its matchless openness to new students, and its flexible curriculum made it attractive to many students interested in learning more about Shia theology and jurisprudence. The Hawza Al-'Ilmiyya has the exclusive right to train clerics who hold the religious authority of Shia Islamic jurisprudence, and the seminary's success in staying independent throughout history has kept Shia Islamic theology independent as well.
Keywords: Hawza Al'Ilmiyya, religious seminary, Shia Muslim education, Islamic jurisprudence
Procedia PDF Downloads 101
1758 Prediction of Endotracheal Tube Size in Children by Predicting Subglottic Diameter Using Ultrasonographic Measurement versus Traditional Formulas
Authors: Parul Jindal, Shubhi Singh, Priya Ramakrishnan, Shailender Raghuvanshi
Abstract:
Background: Knowledge of the influence of the age of the child on laryngeal dimensions is essential for all practitioners dealing with the paediatric airway. Choosing the correct endotracheal tube (ETT) size is a crucial step in paediatric patients because a large tube may cause complications like post-extubation stridor and subglottic stenosis. On the other hand, with a smaller tube, there will be increased gas flow resistance, aspiration risk, poor ventilation and inaccurate monitoring of end-tidal gases, and reintubation may be required with a different size of tracheal tube. Recent advancements in ultrasonography (USG) techniques should now allow for an accurate and descriptive evaluation of the paediatric airway. Aims and objectives: This study was planned to determine the accuracy of ultrasonography (USG) in assessing the appropriate ETT size and to compare it with formulae based on physical indices. Methods: After obtaining approval from the institute's ethics and research committee, and parental written informed consent, the study was conducted on 100 subjects of either sex between 12-60 months of age, undergoing various elective surgeries under general anaesthesia requiring endotracheal intubation. The same experienced radiologist performed all ultrasonography. The transverse diameter was measured at the level of the cricoid cartilage by USG. After USG, general anaesthesia was administered using the standard techniques followed by the institute. An experienced anaesthesiologist, unaware of the ultrasonography findings, performed the endotracheal intubations with an uncuffed endotracheal tube (Portex Tracheal Tube, Smiths Medical India Pvt. Ltd.) with a Murphy's eye. The tracheal tube was considered the best fit if the air leak was satisfactory at 15-20 cm H₂O of airway pressure. The obtained values were compared with the endotracheal tube sizes calculated by ultrasonography, by various age-, height- and weight-based formulae, and from the diameters of the right and left little fingers. The correlation of the size of the endotracheal tube by the different modalities was assessed, and Pearson's correlation coefficient was obtained. The comparison of the mean size of the endotracheal tube by ultrasonography and by the traditional formulae was done with Friedman's test and the Wilcoxon signed-rank test. Results: The predicted tube size was equal to the best fit and best determined by ultrasonography (100%), followed by comparison to the left little finger (98%) and right little finger (97%), the age-based formula (95%), the multivariate formula (83%) and the body length formula (81%). According to Pearson's correlation, there was a moderate correlation of the best-fit endotracheal tube with the endotracheal tube size by the age-based formula (r=0.743), the body-length-based formula (r=0.683), the right-little-finger-based formula (r=0.587), the left-little-finger-based formula (r=0.587) and the multivariate formula (r=0.741). There was a strong correlation with ultrasonography (r=0.943). Ultrasonography was the most sensitive (100%) method of prediction, followed by comparison to the left (98%) and right (97%) little fingers and the age-based formula (95%); the multivariate formula had an even lower sensitivity (83%), whereas the body-length-based formula was the least sensitive, with a sensitivity of 78%. Conclusion: USG is a reliable method for estimation of the subglottic diameter and for prediction of ETT size in children.
Keywords: endotracheal intubation, pediatric airway, subglottic diameter, traditional formulas, ultrasonography
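The age-based comparison can be sketched in a few lines; the rule shown is the classic Cole formula for uncuffed tubes (internal diameter in mm = age in years / 4 + 4), and the sample data are illustrative, not the study's measurements:

```python
# A small sketch comparing an age-based ETT estimate with the best-fit size.
import numpy as np
from scipy.stats import pearsonr

age_years = np.array([1.5, 2.0, 3.0, 4.0, 5.0])     # illustrative subjects
best_fit_id = np.array([4.5, 4.5, 5.0, 5.0, 5.5])   # leak test at 15-20 cmH2O

age_based_id = age_years / 4 + 4                    # Cole formula (uncuffed)
r, p = pearsonr(age_based_id, best_fit_id)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```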
Procedia PDF Downloads 240
1757 A Hybrid Model of Structural Equation Modelling-Artificial Neural Networks: Prediction of Influential Factors on Eating Behaviors
Authors: Maryam Kheirollahpour, Mahmoud Danaee, Amir Faisal Merican, Asma Ahmad Shariff
Abstract:
Background: The presence of nonlinearity among the risk factors of eating behavior causes bias in prediction models. The importance of accurately estimating eating behavior risk factors in the primary prevention of obesity is well established. Objective: The aim of this study was to explore the potential of a hybrid model of structural equation modeling (SEM) and artificial neural networks (ANN) to predict eating behaviors. Methods: Partial Least Squares SEM (PLS-SEM) and a hybrid model (SEM-Artificial Neural Networks, SEM-ANN) were applied to evaluate the factors affecting eating behavior patterns among university students. 340 university students participated in this study. The PLS-SEM analysis was used to check the effect of the emotional eating scale (EES), body shape concern (BSC), and body appreciation scale (BAS) on different categories of eating behavior patterns (EBP). Then, the hybrid model was built using a multilayer perceptron (MLP) with a feedforward network topology. Levenberg-Marquardt, a supervised learning method, was applied for MLP training. The tangent sigmoid function was used for the input layer, while the linear function was applied for the output layer. The coefficient of determination (R²) and mean square error (MSE) were calculated. Results: The hybrid model proved superior to the PLS-SEM method. Using the hybrid model, the optimal network was an MLP with a 3-17-8 structure; the R² of the model increased by 27%, while the MSE decreased by 9.6%. Moreover, it was identified which of these factors significantly affected healthy and unhealthy eating behavior patterns. The p-value was reported to be less than 0.01 for most of the paths. Conclusion/Importance: A hybrid approach can thus be suggested as a significant methodological contribution from a statistical standpoint, and it can be implemented as software able to predict models with the highest accuracy.
Keywords: hybrid model, structural equation modeling, artificial neural networks, eating behavior patterns
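The ANN half of the hybrid can be sketched as an MLP with the 3-17-8 topology (3 latent predictors, 17 hidden units, 8 outputs) scored by R² and MSE. The data are synthetic placeholders, and scikit-learn does not offer Levenberg-Marquardt, so this sketch uses its default Adam solver instead:

```python
# A hedged sketch of the MLP stage of the SEM-ANN hybrid.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(340, 3))                 # EES, BSC, BAS latent scores
W = rng.normal(size=(3, 8))
Y = X @ W + 0.1 * rng.normal(size=(340, 8))   # 8 eating-behaviour patterns

mlp = MLPRegressor(hidden_layer_sizes=(17,), activation="tanh",
                   max_iter=5000).fit(X, Y)   # tanh hidden layer, linear output
pred = mlp.predict(X)
print("R2:", r2_score(Y, pred), "MSE:", mean_squared_error(Y, pred))
```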
Procedia PDF Downloads 155
1756 An Overview of Bioinformatics Methods to Detect Novel Riboswitches Highlighting the Importance of Structure Consideration
Authors: Danny Barash
Abstract:
Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is that many are found in prokaryotes, but only a small percentage of known riboswitches have been found in certain eukaryotic organisms. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods that include some slight structural considerations. These pattern-matching methods were the first to be applied for riboswitch detection, and they can be programmed very efficiently using a data structure called affix arrays, making them suitable for genome-wide searches of riboswitch patterns. However, they are limited in their ability to detect harder-to-find riboswitches that deviate from the known patterns. Several methods have been developed since then to tackle this problem. The one most commonly used by practitioners is Infernal, which relies on Hidden Markov Models (HMMs) and Covariance Models (CMs). Profile Hidden Markov Models were also implemented in the pHMM Riboswitch Scanner web application, independently of Infernal. Other computational approaches that have been developed include RMDetect, which uses 3D structural modules, and RNAbor, which utilizes the Boltzmann probability of structural neighbors. We have tried to incorporate more sophisticated secondary structure considerations based on RNA folding prediction using several strategies. The first idea was to utilize window-based methods in conjunction with folding predictions by energy minimization. The moving-window approach is heavily geared towards secondary structure considerations, with the sequence treated as a constraint. However, the method cannot be used genome-wide due to its high cost, because each folding prediction by energy minimization in the moving window is computationally expensive, so scanning is feasible only in the vicinity of genes of interest. The second idea was to remedy the inefficiency of the previous approach by constructing a pipeline that consists of inverse RNA folding considering RNA secondary structure, followed by a BLAST search that is sequence-based and highly efficient. This approach, which relies on inverse RNA folding in general and our own in-house fragment-based inverse RNA folding program called RNAfbinv in particular, shows the capability to find attractive candidates that are missed by Infernal and the other standard methods used for riboswitch detection. We demonstrate attractive candidates found by both the moving-window approach and the inverse RNA folding approach performed together with BLAST. We conclude that structure-based methods like the two strategies outlined above hold considerable promise for detecting riboswitches and other conserved RNAs of functional importance in a variety of organisms.
Keywords: riboswitches, RNA folding prediction, RNA structure, structure-based methods
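The moving-window strategy can be sketched with the ViennaRNA Python bindings: fold each window by energy minimization and flag windows with unusually stable structures for closer inspection. The window size, step and MFE cutoff below are illustrative choices, not the authors' settings:

```python
# A hedged sketch of a moving-window minimum-free-energy scan.
import random
import RNA  # ViennaRNA Python bindings

def scan(seq, window=120, step=10, mfe_cutoff=-30.0):
    hits = []
    for i in range(0, len(seq) - window + 1, step):
        structure, mfe = RNA.fold(seq[i:i + window])  # MFE structure prediction
        if mfe <= mfe_cutoff:                         # unusually stable fold
            hits.append((i, round(mfe, 1), structure))
    return hits

random.seed(0)
region = "".join(random.choice("ACGU") for _ in range(1000))  # toy sequence
for start, mfe, struct in scan(region):
    print(start, mfe, struct[:40])
```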
Procedia PDF Downloads 234
1755 A Convolution Neural Network Approach to Predict Pes-Planus Using Plantar Pressure Mapping Images
Authors: Adel Khorramrouz, Monireh Ahmadi Bani, Ehsan Norouzi, Morvarid Lalenoor
Abstract:
Background: Plantar pressure distribution measurement has long been used to assess foot disorders. Plantar pressure is an important component affecting foot and ankle function, and changes in plantar pressure distribution can indicate various foot and ankle disorders. Morphologic and mechanical properties of the foot may be important factors affecting the plantar pressure distribution. Accurate and early measurement may help to reduce the prevalence of pes planus. With recent developments in technology, new techniques such as machine learning have been used to assist clinicians in predicting patients with foot disorders. Significance of the study: This study proposes a neural-network-based flat foot classification methodology using static foot pressure distribution. Methodologies: Data were collected from 895 patients who were referred to a foot clinic due to foot disorders. Patients with pes planus were labeled by an experienced physician based on clinical examination. Then all subjects (with and without pes planus) were evaluated for static plantar pressure distribution. Patients who were diagnosed with flat foot in both feet were included in the study. In the next step, the leg length was normalized and the network was trained on the plantar pressure mapping images. Findings: From a total of 895 image data, 581 were labeled as pes planus. A convolutional neural network (CNN) was run to evaluate the performance of the proposed model. The prediction accuracy of the basic CNN-based model was evaluated and the prediction model was derived through the proposed methodology. In the basic CNN model, the training accuracy was 79.14%, and the test accuracy was 72.09%. Conclusion: This model can be easily and simply used by patients with pes planus and doctors to predict the classification of pes planus and to prescreen for possible musculoskeletal disorders related to this condition. However, more models need to be considered and compared to achieve higher accuracy.
Keywords: foot disorder, machine learning, neural network, pes planus
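The classifier described can be sketched as a small Keras CNN mapping static plantar pressure images to a binary pes planus label; the architecture and image size are illustrative, not the paper's exact model:

```python
# A minimal sketch of a CNN for plantar-pressure-image classification.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # one-channel pressure map
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability of pes planus
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.2, epochs=20)
```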
Procedia PDF Downloads 360
1754 The Use of Emergency Coronary Angiography in Patients Following Out-Of-Hospital Cardiac Arrest and Subsequent Cardio-Pulmonary Resuscitation
Authors: Scott Ashby, Emily Granger, Mark Connellan
Abstract:
Objectives: 1) To identify whether emergency coronary angiography improves outcomes in studies examining OHCA of assumed cardiac aetiology; 2) if so, whether it is indicated in all patients resuscitated following OHCA, and if not, in whom it is indicated; 3) how effective investigations are for screening for the appropriate patients. Background: Out-of-hospital cardiac arrest is one of the leading mechanisms of death, and the most common causative pathology is coronary artery disease. In-hospital treatment following resuscitation greatly affects outcomes, yet there is debate over the most effective protocol. Methods: A literature search was conducted over multiple databases to identify all relevant articles published from 2005. Inclusion criteria were applied to all publications retrieved, which were then sorted by type. Results: A total of 3 existing reviews and 29 clinical studies were analysed in this review. There were conflicting conclusions; however, increased use of angiography was shown to improve outcomes in the majority of studies, which cover a variety of settings and cohorts. Recommendations: Currently, emergency coronary angiography appears to improve outcomes in all or most cases of OHCA of assumed cardiac aetiology, regardless of ECG findings. Until a better screening tool is available to reduce unnecessary procedures, the benefits appear to outweigh the costs and risks.
Keywords: out of hospital cardiac arrest, coronary angiography, resuscitation, emergency medicine
Procedia PDF Downloads 299
1753 Fault Prognostic and Prediction Based on the Importance Degree of Test Point
Authors: Junfeng Yan, Wenkui Hou
Abstract:
Prognostics and Health Management (PHM) is a technology for monitoring equipment status and predicting impending faults. It is used to predict potential faults, provide fault information and track trends of system degradation by capturing characteristic signals, so how to detect characteristic signals is very important. The selection of test points plays a very important role in detecting characteristic signals. Traditionally, a dependency model is used to select the test points containing the most detection information. However, for large, complicated systems, the dependency model is sometimes not easily built, and the greater difficulty is how to calculate its matrix. On this premise, the paper provides a highly effective method to select test points without a dependency model. The signal flow model is a diagnosis model based on failure modes, which focuses on the system's failure modes and the dependency relationships between the test points and faults. In the signal flow model, fault information can flow from the beginning to the end. According to the signal flow model, we can find the location and structure information of every test point and module. We break the signal flow model up into serial and parallel parts to obtain the final relationship function between the system's testability or prediction metrics and the test points. Further, through partial derivative operations, we can obtain every test point's importance degree in determining the testability metrics, such as the undetected rate, false alarm rate and untrusted rate. This contributes to installing test points according to the real requirements and also provides a solid foundation for Prognostics and Health Management. Judging by its effect in practical engineering applications, the method is very efficient.
Keywords: false alarm rate, importance degree, signal flow model, undetected rate, untrusted rate
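The importance-degree idea can be illustrated numerically: express a testability metric as a function of per-test-point detection probabilities, then rank test points by the magnitude of the partial derivative. The composition rule below is a toy stand-in for the serial/parallel decomposition of a real signal flow model:

```python
# An illustrative sketch of ranking test points by partial derivatives.
import numpy as np

def undetected_rate(d):
    # serial chain: a fault escapes only if every test point misses it
    serial = np.prod(1 - d[:3])
    # parallel branch: a fault escapes if both redundant points miss it
    parallel = (1 - d[3]) * (1 - d[4])
    return serial + parallel

d0 = np.array([0.6, 0.7, 0.5, 0.8, 0.4])   # current detection probabilities
eps = 1e-6
importance = np.array([
    (undetected_rate(d0 + eps * np.eye(5)[j]) - undetected_rate(d0)) / eps
    for j in range(5)
])
print(np.argsort(importance))   # most negative derivative = most important point
```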
Procedia PDF Downloads 377
1752 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for soil nutrient mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. ANN has become a popular tool for prediction as it eliminates certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern reconstruction structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project in Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural-network-predicted values). For each macronutrient element, three types of maps were generated: with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element, a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding maps produced from the smaller numbers of real samples alone. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples, substituting ANN-predicted samples to achieve the specified level of accuracy.
Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
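The sample-reduction idea can be sketched as below: train an ANN on the measured points, predict "virtual" values at unsampled locations, then krige the combined set. The coordinates and nutrient values are synthetic stand-ins for the survey data:

```python
# A hedged sketch combining an ANN with ordinary kriging (via pykrige).
import numpy as np
from sklearn.neural_network import MLPRegressor
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(1)
xy_real = rng.uniform(0, 100, size=(118, 2))               # sampled locations
n_real = 20 + 0.1 * xy_real[:, 0] + rng.normal(0, 1, 118)  # measured nitrogen

ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000).fit(xy_real, n_real)

xy_virtual = rng.uniform(0, 100, size=(118, 2))            # unsampled locations
n_virtual = ann.predict(xy_virtual)                        # ANN "virtual" samples

xy_all = np.vstack([xy_real, xy_virtual])
n_all = np.concatenate([n_real, n_virtual])
ok = OrdinaryKriging(xy_all[:, 0], xy_all[:, 1], n_all, variogram_model="spherical")
grid = np.linspace(0, 100, 50)
n_map, _ = ok.execute("grid", grid, grid)                  # interpolated nutrient map
```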
Procedia PDF Downloads 70
1751 Bibliometric Analysis of Global Research Trends on Organization Culture, Strategic Leadership and Performance Using Scopus Database
Authors: Anyia Nduka, Aslan Bin Amad Senin
Abstract:
Taking a behavioral perspective on organization culture, strategic leadership and performance (OC, SLP), we examine the role of strategic leadership as a key mechanism linking OC and SLP to organizational capacities. Given the increasing dependence of modern businesses on the use and scientific discovery of relevant data, research efforts around the entire globe have accelerated. In today's corporate world, strategic leadership is still the most sustainable source of performance and competitive advantage. This is why it is critical to gain a deep understanding of the research area and to strengthen new collaborative networks in efforts to support the research transition towards these integrative efforts. This bibliometric analysis examines global trends in OC and SLP research based on publication output, author co-authorships, and co-occurrences of author keywords among authors and affiliated countries. 2,829 journal articles published between 1974 and 2021 were retrieved from the Scopus database. The findings show a significant increase in the number of publications, with strong global collaboration (e.g., USA and UK). We also found that, while most countries/territories without affiliations were centered in developing countries, the outstanding performance of Asian countries and the volume of their collaborations should be emulated.
Keywords: organizational culture, strategic leadership, organizational resilience, performance
Procedia PDF Downloads 85
1750 Modification of RK Equation of State for Liquid and Vapor of Ammonia by Genetic Algorithm
Authors: S. Mousavian, F. Mousavian, V. Nikkhah Rashidabad
Abstract:
Cubic equations of state like the Redlich–Kwong (RK) EOS have proved to be very reliable tools in the prediction of phase behavior. Despite their good performance in compositional calculations, they usually suffer from weaknesses in the prediction of saturated liquid density. In this research, the RK equation was modified. The results of this study show that the modified equation is in good agreement with experimental data.
Keywords: equation of state, modification, ammonia, genetic algorithm
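The approach can be sketched as tuning the Redlich–Kwong parameters a and b with a simple genetic algorithm so the EOS reproduces reference P-V-T data; the reference points and GA settings below are illustrative, not the authors' data or operators:

```python
# A hedged sketch: GA fit of RK parameters against illustrative P-V-T points.
import numpy as np

R = 8.314  # J/(mol K)

def rk_pressure(T, Vm, a, b):
    # Redlich-Kwong: P = RT/(Vm - b) - a / (sqrt(T) * Vm * (Vm + b))
    return R * T / (Vm - b) - a / (np.sqrt(T) * Vm * (Vm + b))

# Illustrative reference states (T in K, Vm in m^3/mol, P in Pa)
T_ref = np.array([300.0, 350.0, 400.0])
V_ref = np.array([2.0e-4, 2.5e-4, 3.0e-4])
P_ref = np.array([3.3e6, 6.3e6, 7.7e6])

def fitness(ind):
    a, b = ind
    return -np.mean((rk_pressure(T_ref, V_ref, a, b) - P_ref) ** 2)

rng = np.random.default_rng(0)
pop = np.column_stack([rng.uniform(5, 12, 50), rng.uniform(1e-5, 5e-5, 50)])
for _ in range(200):                                   # evolve: select, mutate
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the 10 best
    children = parents[rng.integers(0, 10, 40)]
    children = children * rng.normal(1, 0.05, children.shape)  # mutation
    children[:, 1] = np.clip(children[:, 1], 1e-6, 1e-4)       # keep b physical
    pop = np.vstack([parents, children])
print("best (a, b):", pop[np.argmax([fitness(p) for p in pop])])
```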
Procedia PDF Downloads 382
1749 Deep Learning Prediction of Residential Radon Health Risk in Canada and Sweden to Prevent Lung Cancer Among Non-Smokers
Authors: Selim M. Khan, Aaron A. Goodarzi, Joshua M. Taron, Tryggve Rönnqvist
Abstract:
Indoor air quality, a prime determinant of health, is strongly influenced by the presence of hazardous radon gas within the built environment. As a health issue, dangerously high indoor radon arose within the 20th century to become the second leading cause of lung cancer. While 21st-century building metrics and human behaviors have captured, contained, and concentrated radon to yet higher and more hazardous levels, the issue is rapidly worsening in Canada. It is established that Canadians in the Prairies are the second most radon-exposed population in the world, with 1 in 6 residences experiencing 0.2-6.5 millisieverts (mSv) of radiation per week, whereas the Canadian Nuclear Safety Commission sets a maximum 5-year occupational limit for atomic workplace exposure of only 20 mSv. The situation is also deteriorating over time, with newer housing stocks containing higher levels of radon. Deep machine learning (LSTM) algorithms were applied to analyze multiple quantitative and qualitative features, determine the most important contributory factors, and predict radon levels in the known past (1990-2020) and projected future (2021-2050). The findings showed a gradual downward pattern in Sweden, whereas levels in Canada would continue to rise over time. The contributory factors were found to be basement porosity, roof insulation depth, R-factor, and the air dynamics of the indoor environment related to human window-opening behaviour. Building codes must consider including these factors to ensure adequate indoor ventilation and healthy living, which can prevent lung cancer in non-smokers.
Keywords: radon, building metrics, deep learning, LSTM prediction model, lung cancer, Canada, Sweden
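The time-series component can be sketched as a Keras LSTM taking a window of past yearly observations plus building features and predicting the next value; the shapes, feature count and random data are illustrative:

```python
# A minimal sketch of an LSTM for radon-level projection.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# (samples, timesteps, features): 30 years of history, 4 features per year
X = np.random.rand(500, 30, 4).astype("float32")
y = np.random.rand(500).astype("float32")        # next-year radon level (Bq/m3)

model = models.Sequential([
    layers.Input(shape=(30, 4)),
    layers.LSTM(32),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1]))      # projected level for one residence
```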
Procedia PDF Downloads 112
1748 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder
Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa
Abstract:
Earthquakes are natural hazards that can trigger an even more dangerous hazard: the tsunami. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami that struck Sumatra, Bangladesh, India, Sri Lanka, Malaysia and Singapore. More than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but these hazards can be mitigated by earthquake forecasting. Early preparation is the key factor in reducing damage and consequences. We aim to investigate earthquake patterns quantitatively so that trends can be identified. We study the earthquakes that occurred around Andalas Island, Indonesia, over the last decade. Andalas is an island with high seismicity; more than a thousand events occur in a year, because Andalas Island lies in the tectonic subduction zone between the Indian Ocean plate and the Eurasian plate. Tsunami forecasting is needed for mitigation action; thus, a tsunami forecasting method is presented in this work. Neural networks have been used widely in much research to estimate earthquakes, and it has been shown that earthquakes can be predicted using the backpropagation method. At first, the ANN is trained to predict the tsunami of 26 December 2004 using earthquake data preceding it. Then, after obtaining the trained ANN, we apply it to predict the next earthquake. Not all earthquakes trigger tsunamis; there are certain characteristics of an earthquake that can cause a tsunami, and a wrong decision can cause further problems in society. Therefore, we need a method to reduce the possibility of wrong decisions. Fuzzy TOPSIS is a statistical method that is widely used as a decision seconder with reference to given parameters. The Fuzzy TOPSIS method can make the best decision on whether an earthquake causes a tsunami or not. This work combines earthquake prediction using the neural network method with Fuzzy TOPSIS to decide whether an earthquake triggers a tsunami wave or not. The neural network model is capable of capturing non-linear relationships, and Fuzzy TOPSIS is capable of determining the best decision better than other statistical methods in tsunami prediction.
Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami
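The decision step can be illustrated with a plain (non-fuzzy) TOPSIS ranking; the criteria (magnitude, depth, distance) and weights are assumptions for illustration, not the paper's parameters:

```python
# An illustrative TOPSIS sketch: score candidate earthquakes by closeness
# to the ideal "tsunamigenic" profile.
import numpy as np

# rows = earthquakes, cols = criteria: magnitude, depth (km), distance (km)
X = np.array([[9.1, 30.0, 50.0],
              [6.2, 80.0, 300.0],
              [7.8, 25.0, 120.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, False])    # higher magnitude raises risk;
                                            # greater depth/distance lowers it
V = weights * X / np.linalg.norm(X, axis=0) # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
worst = np.where(benefit, V.min(axis=0), V.max(axis=0))

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - worst, axis=1)
closeness = d_neg / (d_pos + d_neg)         # 1 = most tsunami-like
print(closeness)                            # rank candidates by risk
```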
Procedia PDF Downloads 493
1747 The Importance of Functioning and Disability Status Follow-Up in People with Multiple Sclerosis
Authors: Sanela Slavkovic, Congor Nad, Spela Golubovic
Abstract:
Background: The diagnosis of multiple sclerosis (MS) is a major life challenge and has repercussions on all aspects of the daily functioning of those affected by it: personal activities, social participation, and quality of life. Regular follow-up of the neurological status alone is not informative enough to provide data on the sort of support and rehabilitation that is required. Objective: The aim of this study was to establish the current level of functioning of persons affected by MS and the factors that influence it. Methods: The study was conducted in Serbia on a sample of 108 persons with the relapsing-remitting form of MS, aged 20 to 53 (mean 39.86 years; SD 8.20 years). All participants were fully ambulatory. Methods applied in the study include the Expanded Disability Status Scale (EDSS) and the World Health Organization Disability Assessment Schedule, WHODAS 2.0 (36-item version, self-administered). Results: Participants were found to experience the most problems in the domains of Participation, Mobility, Life activities and Cognition. The least difficulty was found in the domain of Self-care. Symptom duration was the only control variable with a significant partial contribution to the prediction of the WHODAS scale score (β=0.30, p < 0.05). The total EDSS score correlated with the total WHODAS 2.0 score (r=0.34, p=0.00). Statistically significant differences in the EDSS 0-5.5 domain were found between categories (0-1.5; 2-3.5; 4-5.5). The more pronounced a participant's EDSS score was, although not indicative of large changes in the neurological status, the more apparent the changes in the functional domain, i.e., in all areas covered by WHODAS 2.0. The Pyramidal (β=0.34, p < 0.05) and Bowel and bladder (β=0.24, p < 0.05) functional systems were found to have significant partial contributions to the prediction of the WHODAS score. Conclusion: Measuring functioning and disability is important in the follow-up of persons suffering from MS in order to plan rehabilitation and define areas in which additional support is needed.
Keywords: disability, functionality, multiple sclerosis, rehabilitation
Procedia PDF Downloads 120
1746 The Use of Building Energy Simulation Software in Case Studies: A Literature Review
Authors: Arman Ameen, Mathias Cehlin
Abstract:
The use of Building Energy Simulation (BES) software has increased in the last two decades, in parallel with the development of increased computing power and easy-to-use software applications. This type of software is primarily used to simulate the energy use and the indoor environment of a building. The rapid development of these types of software has raised their level of user-friendliness, improved parameter input options and increased the possibilities for analysis, both for a single building component and for an entire building. This, in turn, has led to many researchers utilizing BES software in their research to various degrees. The aim of this paper is to carry out a literature review concerning the use of the BES software IDA Indoor Climate and Energy (IDA ICE) in the scientific community. The focus of this paper is specifically the use of the software for whole-building energy simulation, the number and types of articles and their publication dates, the area of application, the types of parameters used, the location of the studied building, the type of building, the type of analysis and the solution methodology. Another aspect examined, which is of great interest, is the method of validation of the simulation results. The results show that there is an upward trend in the use of IDA ICE and that researchers use the software in their research to various degrees depending on the case and aim of their research. The level of validation of the simulations carried out in these articles varies depending on the type of article and type of analysis.
Keywords: building simulation, IDA ICE, literature review, validation
Procedia PDF Downloads 135
1745 Decision Making Communication in the Process of Technologies Commercialization: Archival Analysis of the Process Content
Authors: Vaida Zemlickiene
Abstract:
Scientists and practitioners around the world are working to identify the factors that influence the results of technology commercialization and to propose an ideal model for the technology commercialization process. In other words, all stakeholders in technology commercialization seek a formula or set of rules for commercializing technologies successfully in order to avoid unproductive investments. In this article, the process of technology commercialization is understood as the process of transforming inventions into marketable products, services, and processes, or the path from the idea of using an invention to a product, covering technology readiness levels (TRL) 1 through 9. There are many publications in the management literature aimed at managing the commercialization process. However, there is an apparent lack of research on communication in decision-making in the process of technology commercialization. An analysis of global research carried out over the last decade led to the unambiguous conclusion that the methodological framework is not mature enough to be of practical use in business. The process of technology commercialization and the decisions made in the process should therefore be explored in depth. An archival analysis is performed to find insights into decision-making communication in the process of technology commercialization: to determine the content of the technology commercialization process, namely its decision-making stages and participants; to analyze the internal factors of technology commercialization and perform their critical analysis; and to analyze the concept of successful/unsuccessful technology commercialization.
Keywords: the process of technology commercialization, communication in decision-making process, the content of technology commercialization process, successful/unsuccessful technology commercialization
Procedia PDF Downloads 153
1744 Improvement of Environment and Climate Change Canada's GEM-Hydro Streamflow Forecasting System
Authors: Etienne Gaborit, Dorothy Durnford, Daniel Deacu, Marco Carrera, Nathalie Gauthier, Camille Garnaud, Vincent Fortin
Abstract:
A new experimental streamflow forecasting system was recently implemented at Environment and Climate Change Canada's (ECCC) Canadian Centre for Meteorological and Environmental Prediction (CCMEP). It relies on CaLDAS (Canadian Land Data Assimilation System) for the assimilation of surface variables, and on a surface prediction system that feeds a routing component. The surface energy and water budgets are simulated with the SVS (Soil, Vegetation, and Snow) land-surface scheme (LSS) at 2.5-km grid spacing over Canada. The routing component is based on the Watroute routing scheme at 1-km grid spacing for the Great Lakes and Nelson River watersheds. The system is run in two distinct phases: an analysis part and a forecast part. During the analysis part, CaLDAS outputs are used to force the routing system, which performs streamflow assimilation. In forecast mode, the surface component is forced with the Canadian GEM atmospheric forecasts and is initialized with a CaLDAS analysis. The streamflow performance of this new system is presented for 2019. Performance is compared to that of ECCC's current operational streamflow forecasting system, which differs from the new experimental system in many respects. These new streamflow forecasts are also compared to persistence. Overall, the new streamflow forecasting system shows promising results, highlighting the need for an elaborate assimilation phase before performing the forecasts. However, the system is still experimental and is continuously being improved. Some major recent improvements are presented here and include, for example, the assimilation of snow cover data from remote sensing, a backward propagation of assimilated flow observations, a new numerical scheme for the routing component, and a new reservoir model.
Keywords: assimilation system, distributed physical model, offline hydro-meteorological chain, short-term streamflow forecasts
Procedia PDF Downloads 130
1743 The Impact of COVID-19 on Antibiotic Prescribing in Primary Care in England: Evaluation and Risk Prediction of the Appropriateness of Type and Repeat Prescribing
Authors: Xiaomin Zhong, Alexander Pate, Ya-Ting Yang, Ali Fahmi, Darren M. Ashcroft, Ben Goldacre, Brian Mackenna, Amir Mehrkar, Sebastian C. J. Bacon, Jon Massey, Louis Fisher, Peter Inglesby, Kieran Hand, Tjeerd van Staa, Victoria Palin
Abstract:
Background: This study aimed to predict the risks of potentially inappropriate antibiotic type and repeat prescribing and to assess changes during COVID-19. Methods: With the approval of NHS England, we used the OpenSAFELY platform to access the TPP SystmOne electronic health record (EHR) system and selected patients prescribed antibiotics from 2019 to 2021. Multinomial logistic regression models predicted the patient's probability of receiving an inappropriate antibiotic type or repeating the antibiotic course for each common infection. Findings: The population included 9.1 million patients with 29.2 million antibiotic prescriptions. 29.1% of prescriptions were identified as repeat prescribing. Those with a same-day incident infection coded in the EHR had considerably lower rates of repeat prescribing (18.0%), and 8.6% had a potentially inappropriate type. No major changes in the rates of repeat antibiotic prescribing during COVID-19 were found. In the ten risk prediction models, good levels of calibration and moderate levels of discrimination were found. Important predictors included age, prior antibiotic prescribing, and region. Patients varied in their predicted risks. For sore throat, the range from the 2.5th to the 97.5th percentile was 2.7 to 23.5% (inappropriate type) and 6.0 to 27.2% (repeat prescription). For otitis externa, these numbers were 25.9 to 63.9% and 8.5 to 37.1%, respectively. Interpretation: Our study found no evidence of changes in the level of inappropriate or repeat antibiotic prescribing after the start of COVID-19. Repeat antibiotic prescribing was frequent and varied according to regional and patient characteristics. There is a need for treatment guidelines to be developed around antibiotic failure and for clinicians to be provided with individualised patient information.
Keywords: antibiotics, infection, COVID-19 pandemic, antibiotic stewardship, primary care
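The modelling step can be sketched with a multinomial logistic regression over three outcome classes; the feature names and random data are illustrative, not the OpenSAFELY variables:

```python
# A hedged sketch of a multinomial risk model for prescribing outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(18, 90, 5000),     # age
                     rng.integers(0, 10, 5000),      # prior antibiotic courses
                     rng.integers(0, 9, 5000)])      # region code
y = rng.integers(0, 3, 5000)   # 0 appropriate, 1 inappropriate type, 2 repeat

model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)        # per-patient predicted risks per class
print(probs[:3].round(3))
```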
Procedia PDF Downloads 120
1742 Interpretable Deep Learning Models for Medical Condition Identification
Authors: Dongping Fang, Lian Duan, Xiaojing Yuan, Mike Xu, Allyn Klunder, Kevin Tan, Suiting Cao, Yeqing Ji
Abstract:
Accurate prediction of a medical condition with straightforward clinical evidence is a long-sought goal in the medical management and health insurance fields. Although great progress has been made with machine learning algorithms, the medical community is still, to a certain degree, suspicious about model accuracy and interpretability. This paper presents an innovative hierarchical attention deep learning model that achieves good prediction and clear interpretability that can be easily understood by medical professionals. This deep learning model uses a hierarchical attention structure that matches naturally with the medical history data structure and reflects the member's encounter (date of service) sequence. The model attention structure consists of 3 levels: (1) attention on the medical code types (diagnosis codes, procedure codes, lab test results, and prescription drugs), (2) attention on the sequential medical encounters within a type, (3) attention on the medical codes within an encounter and type. This model is applied to predict the occurrence of stage 3 chronic kidney disease (CKD3), using three years of medical history of Medicare Advantage (MA) members from a top health insurance company. The model takes members' medical events, both claims and electronic medical record (EMR) data, as input, makes a prediction of CKD3 and calculates the contribution from individual events to the predicted outcome. The model outcome can be easily explained with the clinical evidence identified by the model algorithm. Here are examples: Member A had 36 medical encounters in the past three years: multiple office visits, lab tests and medications. The model predicts member A has a high risk of CKD3 based on the following strongly contributing clinical events: multiple high 'Creatinine in Serum or Plasma' tests and multiple 'Glomerular filtration rate' tests indicating low kidney function. Among the abnormal lab tests, more recent results contributed more to the prediction. The model also indicates that regular office visits, no abnormal findings in medical examinations, and taking proper medications decreased the CKD3 risk. Member B had 104 medical encounters in the past 3 years and was predicted to have a low risk of CKD3, because the model didn't identify diagnoses, procedures, or medications related to kidney disease, and many lab test results, including 'Glomerular filtration rate', were within the normal range. The model accurately predicts members A and B and provides interpretable clinical evidence that is validated by clinicians. Without extra effort, the interpretation is generated directly from the model and presented together with the occurrence date. Our model uses the medical data in its most raw format without any further data aggregation, transformation, or mapping. This greatly simplifies the data preparation process, mitigates the chance for error and eliminates the post-modeling work needed for traditional model explanation. To our knowledge, this is the first paper on an interpretable deep-learning model using a 3-level attention structure, sourcing both EMR and claim data, including all 4 types of medical data, on the entire Medicare population of a big insurance company, and, more importantly, directly generating model interpretation to support user decisions. In the future, we plan to enrich the model input by adding patients' demographics and information from free-text physician notes.
Keywords: deep learning, interpretability, attention, big data, medical conditions
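One level of the hierarchy can be illustrated as attention pooling over the medical codes within a single encounter, where the learned weights double as per-code contributions; stacking three such layers (codes, encounters, code types) would mirror the structure described. The dimensions are arbitrary and the layer is a generic sketch, not the authors' exact architecture:

```python
# An illustrative attention-pooling layer in Keras.
import tensorflow as tf
from tensorflow.keras import layers

class AttentionPool(layers.Layer):
    def __init__(self, units=32):
        super().__init__()
        self.proj = layers.Dense(units, activation="tanh")
        self.score = layers.Dense(1)          # one attention score per item

    def call(self, x):                        # x: (batch, items, features)
        w = tf.nn.softmax(self.score(self.proj(x)), axis=1)
        return tf.reduce_sum(w * x, axis=1), w  # pooled vector + weights

codes = tf.random.normal((8, 20, 16))   # 8 patients, 20 codes, 16-dim embeddings
pooled, weights = AttentionPool()(codes)
print(pooled.shape, weights.shape)      # weights show each code's contribution
```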
Procedia PDF Downloads 911741 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities
Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun
Abstract:
With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, aiming to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities by integrating RFM analysis, K-means clustering, and LSTM networks. RFM analysis, traditionally used in customer value assessment, categorizes and analyzes electrical components based on their failure recency, frequency, and monetary impact. K-means clustering segments the electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks capture the temporal dependencies and patterns in the component failure data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach thus presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues, contributing to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities, and demonstrating the potential of advanced machine learning techniques to transform the landscape of electrical system management.Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning
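As an illustration of the RFM and K-means stages described above, the following Python sketch (using pandas and scikit-learn) derives recency, frequency, and monetary features from a per-component failure log and clusters the components; the file name, column names, and cluster count are assumptions for illustration, not details taken from the paper.

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Assumed failure log: one row per component failure, with a repair cost.
failures = pd.read_csv("failure_log.csv", parse_dates=["failure_date"])
now = failures["failure_date"].max()

rfm = failures.groupby("component_id").agg(
    recency=("failure_date", lambda d: (now - d.max()).days),  # days since last failure
    frequency=("failure_date", "count"),                       # number of failures
    monetary=("repair_cost", "sum"),                           # total repair cost
)

# Segment components into groups with similar failure patterns;
# the clusters can then feed the downstream LSTM stage.
rfm["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(rfm)
)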
Procedia PDF Downloads 561740 Heart Rate Variability Analysis for Early Stage Prediction of Sudden Cardiac Death
Authors: Reeta Devi, Hitender Kumar Tyagi, Dinesh Kumar
Abstract:
In the present scenario, cardiovascular problems are a growing challenge for researchers and physiologists. As heart disease has no geographic, gender, or socioeconomic boundaries, detecting cardiac irregularities at an early stage, followed by quick and correct treatment, is very important. The electrocardiogram is the finest tool for continuous monitoring of heart activity. Heart rate variability (HRV) is used to measure the naturally occurring oscillations between consecutive cardiac cycles. Analysis of this variability is carried out using time-domain, frequency-domain, and non-linear parameters. This paper presents an HRV analysis of online datasets for normal sinus rhythm (taken as healthy subjects) and sudden cardiac death (SCD subjects) using all three methods, computing values for parameters such as the standard deviation of normal-to-normal intervals (SDNN), the square root of the mean of the squared differences between adjacent RR intervals (RMSSD), and the mean of RR intervals (mean RR) in the time domain; very low frequency (VLF), low frequency (LF), high frequency (HF), and the ratio of low to high frequency (LF/HF ratio) in the frequency domain; and the Poincaré plot for non-linear analysis. To differentiate the HRV of healthy subjects from that of subjects who died of SCD, a k-nearest neighbor (k-NN) classifier has been used because of its high accuracy. Results show highly reduced values for all stated parameters in SCD subjects compared to healthy ones. As the dataset used for SCD patients is a recording of their ECG signal one hour prior to death, it is verified with an accuracy of 95% that the proposed algorithm can identify a patient's mortality risk one hour before death. The identification of a patient’s mortality risk at such an early stage may prevent sudden death if timely and correct treatment is given by the doctor.Keywords: early stage prediction, heart rate variability, linear and non-linear analysis, sudden cardiac death
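The time-domain feature extraction and k-NN classification described above can be sketched in a few lines of Python; the feature set here is limited to SDNN, RMSSD, and mean RR, and the RR-interval arrays are random placeholders standing in for the real ECG-derived recordings, so everything below is an illustrative assumption rather than the authors' code.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def time_domain_features(rr):
    # SDNN, RMSSD, and mean RR from a 1-D array of RR intervals in ms.
    diff = np.diff(rr)
    sdnn = np.std(rr, ddof=1)             # standard deviation of NN intervals
    rmssd = np.sqrt(np.mean(diff ** 2))   # root mean square of successive differences
    return np.array([sdnn, rmssd, np.mean(rr)])

# X: one feature vector per recording; y: 1 = SCD subject, 0 = healthy.
rr_records = [np.random.normal(800, 50, 300) for _ in range(20)]  # placeholder data
X = np.vstack([time_domain_features(rr) for rr in rr_records])
y = np.array([0, 1] * 10)

knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(knn.predict(X[:2]))   # classify the first two recordings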
Procedia PDF Downloads 3411739 The Classical and Hellenistic Architectural Elements of the Temple of Echmun in Sidon
Authors: Amal Alatar
Abstract:
The paper focuses on the exploration of the architectural characteristics and decorative elements of the temple of Echmun, emphasizing the socio-economic significance of Sidon during the Greek and Roman periods, in order to understand the implications of their spread and development in the Phoenician cities, as well as to reveal the symbolic and societal connotations that may have been connected with the buildings, thus allowing a well-founded examination of common characteristics. In general, studying Phoenician archaeology has posed some problems. The main problem is that most major Phoenician settlements lie beneath modern urban centers. This situation has often prevented or largely restricted full archaeological investigations; the publications are frequently not complete enough to determine the basic characteristics of the architectural elements. Another key problem is the political instability of the region, which has affected archaeological research in the Phoenician homeland for many years. Nevertheless, during the past decades, an ever-growing cache of data has been acquired from the archaeological surroundings of the Phoenician sites. The architectural elements from the Greek and Roman periods have never been studied as a group before. Surprisingly, they have been largely ignored, despite their apparent profusion throughout the cities. The Roman period of Sidon has generally been neglected in favor of earlier periods, where it is often difficult to distinguish between Roman, Bronze Age, Medieval, and Ottoman structures.Keywords: archaeology, classical, Hellenistic, Eshmun Temple, architecture, Sidon, Lebanon
Procedia PDF Downloads 1011738 The Concept of an Agile Enterprise Research Model
Authors: Maja Sajdak
Abstract:
The aim of this paper is to present the concept of an agile enterprise model and to initiate discussion of the research assumptions behind the model presented. The implementation of the research project "The agility of enterprises in the process of adapting to the environment and its changes" began in August 2014 and is planned to last three years. The article takes the form of a work-in-progress paper which aims to verify, and initiate a debate over, the proposed research model. In the literature, there are very few publications relating to research into agility; it can be concluded that the most controversial issue in this regard is the method of measuring agility. In previous studies, the operationalization of agility was often fragmentary, focusing only on selected areas of agility, for example manufacturing, or analysing only selected sectors. As a result, the measures created to date can only be treated as contributory to the development of precise measurement tools. This research project aims to fill a cognitive gap in the literature with regard to the conceptualization and operationalization of the agile company. Thus, the original contribution of the author of this project is the construction of a theoretical model that integrates manufacturing agility (consisting mainly of adaptation to the environment) and strategic agility (based on proactive measures). The author of this research project is primarily interested in the attributes of an agile enterprise which indicate that the company is able to adapt rapidly to changing circumstances and behave proactively.Keywords: agile company, acuity, entrepreneurship, flexibility, research model, strategic leadership
Procedia PDF Downloads 3431737 Implementation of Deep Neural Networks for Pavement Condition Index Prediction
Authors: M. Sirhan, S. Bekhor, A. Sidess
Abstract:
In-service pavements deteriorate over time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated using different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, owing to their efficiency in capturing non-linear relationships and dealing with large amounts of uncertain data. Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which has been found to be an appropriate tool for predicting different pavement performance indices from different factors. Accordingly, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model’s input consists of the percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, each at three severity levels (low, medium, and high). The developed model was trained on 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by The National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction
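A minimal sketch of such a network is shown below (in PyTorch), assuming 33 inputs (11 damage types x 3 severity levels, each a percentage of the pavement area) and a PCI target rescaled from the 0-100 range; the hidden-layer sizes, activation choices, and training loop are illustrative assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn

# 33 inputs: 11 distress types x 3 severity levels, each a % of pavement area.
model = nn.Sequential(
    nn.Linear(33, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),    # output in (0, 1), i.e. PCI / 100
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(x, pci):                 # x: (batch, 33); pci: (batch,) in [0, 100]
    optimizer.zero_grad()
    pred = model(x).squeeze(-1) * 100   # rescale back to the 0-100 PCI range
    loss = loss_fn(pred, pci)
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with random placeholder data standing in for the survey samples.
x = torch.rand(32, 33) * 100
pci = torch.rand(32) * 100
print(train_step(x, pci))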
Procedia PDF Downloads 137