Search results for: beta binomial posterior predictive (BBPP) distribution

5834 Stress Hyperglycemia: A Predictor of Major Adverse Cardiac Events in Non-Diabetic Patients With Acute Heart Failure

Authors: Fahad Raj Khan, Suleman Khan

Abstract:

There is a lack of consensus about the predictive value of raised blood glucose levels for major adverse cardiac events (MACEs) in non-diabetic patients admitted for acute decompensated heart failure (ADHF). The purpose of this research was to examine the long-term prognosis of ADHF in non-diabetic persons who had increased blood glucose levels, i.e., stress hyperglycemia (SHGL), at the time of their ADHF hospitalization. The research involved 650 non-diabetic patients, who were divided into two groups based on the presence or absence of SHGL at admission. The two groups' one-year outcomes for MACEs were compared, and key predictors of MACEs were identified. For statistical analysis, the two-tailed Mann-Whitney U test, Fisher's exact test, and binary logistic regression analysis were utilized. SHGL was found in 353 (54.3%) individuals and was more frequent in men than in women. About 27% of patients with SHGL had previously been admitted for ADHF, almost 62% were hypertensive, whereas 14% had chronic kidney disease (CKD). MACEs were significantly predicted by SHGL, hypertension (HTN), prior hospitalization for ADHF, CKD, and cardiogenic shock upon admission. SHGL at the time of ADHF admission, independent of diabetes (DM) status, may be a predictive indicator of MACEs.
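
As an illustration of the kind of binary logistic regression described above, the following is a minimal Python sketch; the predictor names and the simulated data are hypothetical placeholders, not the study data.

```python
# Hypothetical sketch: binary logistic regression of MACEs on admission predictors.
# All column names and data are illustrative assumptions, not the study dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 650
df = pd.DataFrame({
    "shgl": rng.integers(0, 2, n),        # stress hyperglycemia at admission
    "htn": rng.integers(0, 2, n),         # hypertension
    "prior_adhf": rng.integers(0, 2, n),  # prior ADHF hospitalization
    "ckd": rng.integers(0, 2, n),         # chronic kidney disease
    "shock": rng.integers(0, 2, n),       # cardiogenic shock on admission
})
# Simulated one-year MACE outcome, for illustration only
lin = -2.0 + 0.9*df.shgl + 0.5*df.htn + 0.6*df.prior_adhf + 0.7*df.ckd + 1.2*df.shock
df["mace"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

X = sm.add_constant(df[["shgl", "htn", "prior_adhf", "ckd", "shock"]])
fit = sm.Logit(df["mace"], X).fit(disp=False)
print(np.exp(fit.params))  # odds ratios for each admission predictor
```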

Keywords: stress hyperglycemia, acute heart failure, major adverse cardiac events, MACEs

Procedia PDF Downloads 94
5833 Development of an Atmospheric Radioxenon Detection System for Nuclear Explosion Monitoring

Authors: V. Thomas, O. Delaune, W. Hennig, S. Hoover

Abstract:

Measurement of radioactive isotopes of atmospheric xenon is used to detect, locate and identify any confined nuclear test as part of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). In this context, the French Alternative Energies and Atomic Energy Commission (CEA) has developed a fixed device, the SPALAX process, to continuously measure the concentration of these fission products. During its atmospheric transport, the radioactive xenon undergoes significant dilution between the source point and the measurement station. Given the distances between the fixed stations located all over the globe, the typical volume activities measured are near 1 mBq m⁻³. To avoid the constraints induced by atmospheric dilution, the development of a mobile detection system is in progress; this system will allow on-site measurements in order to confirm or refute a suspicious measurement detected by a fixed station. Furthermore, this system will use the beta/gamma coincidence measurement technique in order to drastically reduce the environmental background (which masks such activities). The detector prototype consists of a gas cell surrounded by two large silicon wafers, coupled with two square NaI(Tl) detectors. The gas cell has a sample volume of 30 cm³, and the silicon wafers are 500 µm thick with an active surface area of 3600 mm². In order to minimize leakage current, each wafer has been segmented into four independent silicon pixels. This cell is sandwiched between two low-background NaI(Tl) detectors (70x70x40 mm³ crystals). The expected Minimum Detectable Concentration (MDC) for each radioxenon is of the order of 1-10 mBq m⁻³. Three 4-channel digital acquisition modules (Pixie-NET) are used to process all the signals. Time synchronization is ensured by a dedicated PTP network, using the IEEE 1588 Precision Time Protocol. We present this system from its simulation to the laboratory tests.

Keywords: beta/gamma coincidence technique, low level measurement, radioxenon, silicon pixels

Procedia PDF Downloads 125
5832 Reliability and Probability Weighted Moment Estimation for Three Parameter Mukherjee-Islam Failure Model

Authors: Ariful Islam, Showkat Ahmad Lone

Abstract:

The Mukherjee-Islam model is commonly used as a simple lifetime distribution to assess system reliability. The model exhibits a better fit for failure data and provides more appropriate information about the hazard rate and other reliability measures, as shown by various authors. It is possible to introduce a location parameter (i.e., a time before which failure cannot occur), which makes it a more useful failure distribution than the existing ones. Even after shifting the location of the distribution, it can represent decreasing, constant and increasing failure rates. It has been shown to represent the appropriate lower tail of the distribution of random variables having a fixed lower bound. This study presents the reliability computations and probability weighted moment estimation of the three-parameter model. A comparative analysis is carried out between the three-parameter finite range model and some existing bathtub-shaped curve-fitting models. Since the probability weighted moment method is used, the results obtained can also be applied to small-sample cases. The maximum likelihood estimation method is also applied in this study.

Keywords: comparative analysis, maximum likelihood estimation, Mukherjee-Islam failure model, probability weighted moment estimation, reliability

Procedia PDF Downloads 273
5831 Grain Size Characteristics and Sediments Distribution in the Eastern Part of Lekki Lagoon

Authors: Mayowa Philips Ibitola, Abe Oluwaseun Banji, Olorunfemi Akinade-Solomon

Abstract:

A total of 20 bottom sediment samples were collected from the Lekki Lagoon during the wet and dry seasons. The study was carried out to determine the textural characteristics, sediment distribution pattern and energy of transportation within the lagoon system. The sediment grain sizes and the depth profile were analyzed using the dry sieving method, with a MATLAB algorithm used for processing. The granulometric analysis reveals fine-grained sand for both the wet and dry seasons, with average mean values of 2.03 ϕ and -2.88 ϕ, respectively. Sediments were moderately sorted, with average inclusive standard deviations of 0.77 ϕ and -0.82 ϕ. Skewness varied from strongly coarse-skewed to near-symmetrical (-0.34 ϕ and 0.09 ϕ). The average kurtosis values were 0.87 ϕ and -1.4 ϕ (platykurtic and leptokurtic). Overall, the bathymetry shows an average depth of 4.0 m; the deepest and shallowest areas have depths of 11.2 m and 0.5 m, respectively. A high concentration of fine sand was observed in the deep areas compared to the shallow areas during both the wet and dry seasons. The statistical parameter results show that the sediments are overall sorted and were deposited under a low-energy condition over a long distance. Sediment distribution and the sediment transport pattern of the Lekki Lagoon are controlled by a low-energy current, and the down-slope configuration of the bathymetry enhances the sorting and the deposition rate in the Lekki Lagoon.
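
For readers unfamiliar with the graphic grain-size parameters reported above (mean, sorting, skewness, kurtosis), the sketch below computes them in the usual Folk and Ward form from a cumulative grain-size curve in phi units; the sieve data are invented for demonstration and do not come from the study.

```python
# Illustrative Folk & Ward graphic parameters from a cumulative grain-size curve (phi units).
# The sieve sizes and cumulative percentages below are assumed example values.
import numpy as np

phi = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])           # sieve sizes (phi)
cum_pct = np.array([2.0, 10.0, 30.0, 60.0, 90.0, 99.0])    # cumulative weight percent

def phi_at(p):
    """Interpolate the phi value at cumulative percentile p."""
    return np.interp(p, cum_pct, phi)

p5, p16, p25, p50, p75, p84, p95 = (phi_at(p) for p in (5, 16, 25, 50, 75, 84, 95))

graphic_mean = (p16 + p50 + p84) / 3
sorting = (p84 - p16) / 4 + (p95 - p5) / 6.6               # inclusive graphic standard deviation
skewness = ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
            + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))     # inclusive graphic skewness
kurtosis = (p95 - p5) / (2.44 * (p75 - p25))               # graphic kurtosis
print(graphic_mean, sorting, skewness, kurtosis)
```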

Keywords: Lekki Lagoon, Marine sediment, bathymetry, grain size distribution

Procedia PDF Downloads 231
5830 Smart Disassembly of Waste Printed Circuit Boards: The Role of IoT and Edge Computing

Authors: Muhammad Mohsin, Fawad Ahmad, Fatima Batool, Muhammad Kaab Zarrar

Abstract:

The integration of the Internet of Things (IoT) and edge computing devices offers a transformative approach to electronic waste management, particularly in the dismantling of printed circuit boards (PCBs). This paper explores how these technologies optimize operational efficiency and improve environmental sustainability by addressing challenges such as data security, interoperability, scalability, and real-time data processing. Proposed solutions include advanced machine learning algorithms for predictive maintenance, robust encryption protocols, and scalable architectures that incorporate edge computing. Case studies from leading e-waste management facilities illustrate benefits such as improved material recovery efficiency, reduced environmental impact, improved worker safety, and optimized resource utilization. The findings highlight the potential of IoT and edge computing to revolutionize e-waste dismantling and make the case for a collaborative approach between policymakers, waste management professionals, and technology developers. This research provides important insights into the use of IoT and edge computing to make significant progress in the sustainable management of electronic waste.

Keywords: internet of Things, edge computing, waste PCB disassembly, electronic waste management, data security, interoperability, machine learning, predictive maintenance, sustainable development

Procedia PDF Downloads 30
5829 Mixtures of Length-Biased Weibull Distributions for Loss Severity Modelling

Authors: Taehan Bae

Abstract:

In this paper, a class of length-biased Weibull mixtures is presented to model loss severity data. The proposed model generalizes the Erlang mixtures with a common scale parameter, and it shares many important modelling features with the Erlang mixtures, such as the flexibility to fit various data distribution shapes and weak denseness in the class of positive continuous distributions. We show that the asymptotic tail estimate of the length-biased Weibull mixture is of Weibull type, which makes the model effective for fitting loss severity data with heavy-tailed observations. A method of statistical estimation is discussed with applications to real catastrophic loss data sets.
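
As a hedged illustration of the building block of such a model, the sketch below evaluates a length-biased Weibull density, f_LB(x) = x f(x) / E[X], and a simple two-component mixture with a common scale; the parameter values are arbitrary and chosen only for demonstration.

```python
# Minimal sketch of a length-biased Weibull density and a two-component mixture
# with a common scale parameter. Parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import weibull_min
from scipy.special import gamma

def length_biased_weibull_pdf(x, k, lam):
    """Length-biased (size-biased) Weibull density with shape k and scale lam."""
    mean = lam * gamma(1.0 + 1.0 / k)  # E[X] for Weibull(k, lam)
    return x * weibull_min.pdf(x, k, scale=lam) / mean

x = np.linspace(0.01, 15, 400)
pdf = 0.6 * length_biased_weibull_pdf(x, 1.5, 2.0) + 0.4 * length_biased_weibull_pdf(x, 3.0, 2.0)
print(np.trapz(pdf, x))  # integrates to approximately 1
```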

Keywords: Erlang mixture, length-biased distribution, transformed gamma distribution, asymptotic tail estimate, EM algorithm, expectation-maximization algorithm

Procedia PDF Downloads 224
5828 Patient Outcomes Following Out-of-Hospital Cardiac Arrest

Authors: Scott Ashby, Emily Granger, Mark Connellan

Abstract:

Background: In-hospital management of out-of-hospital cardiac arrest (OHCA) is complex, as the aetiologies are varied. Acute coronary angiography has been shown to improve outcomes for patients with coronary occlusion as the cause; however, these patients are difficult to identify. ECG results may help identify them, but the accuracy of this diagnostic test is under debate and requires further investigation. Methods: Arrest and hospital management information was collated retrospectively for OHCA patients who presented to a single clinical site between 2009 and 2013. Angiography results were then collected and checked for association with survival to discharge. The presence of a severe lesion (>70%) was then compared with categorised ECG findings, and the accuracy of the test was calculated. Results: 104 patients were included in this study: 44 survived to discharge, 52 died, and 8 were transferred to other clinical sites. Angiography appears to correlate significantly with survival to discharge. ECG showed 54.8% sensitivity for detecting the presence of a severe lesion within the group that received angiography. A combined criterion including any ECG pathology showed 100% sensitivity and negative predictive value, but low specificity and positive predictive value. Conclusion: In the cohort investigated, ST elevation on ECG is not a sensitive enough screening test to determine whether OHCA patients have coronary stenosis as the likely cause of their arrest. Further investigation is needed into whether patients should be screened with a combined ECG criterion or whether all patients should receive angiography routinely following OHCA.
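
The accuracy figures quoted above come from standard 2x2 diagnostic-test arithmetic; the sketch below shows that calculation with placeholder counts, not the study data.

```python
# Hedged sketch: sensitivity, specificity, PPV and NPV from a 2x2 table of
# ECG finding (index test) versus severe lesion on angiography (reference).
# The counts below are placeholders, not the study data.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_metrics(tp=17, fp=5, fn=14, tn=20)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}, PPV={ppv:.1%}, NPV={npv:.1%}")
```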

Keywords: out of hospital cardiac arrest, coronary angiography, resuscitation, emergency medicine

Procedia PDF Downloads 395
5827 Comparative Performance of Standing Whole Body Monitor and Shielded Chair Counter for In-vivo Measurements

Authors: M. Manohari, S. Priyadharshini, K. Bajeer Sulthan, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

The in-vivo monitoring facility at the Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam, caters to the monitoring of internal exposure of occupational radiation workers from the various radioactive facilities of IGCAR. Internal exposure measurement is done using NaI(Tl)-based scintillation detectors. Two types of whole-body counters, namely a Shielded Chair Counter (SC) and a Standing Whole-Body Monitor (SWBM), are being used. The shielded chair is based on a NaI detector 20.3 cm in diameter and 10.15 cm thick. The chair of the system is shielded using lead shots of 10 cm lead equivalent and the detector with 8 cm lead bricks. The counting geometry is a sitting geometry. Calibration is done using a 95th-percentile BOMAB phantom. The Minimum Detectable Activity (MDA) for 137Cs for a 60 s count is 1150 Bq. The Standing Whole-Body Monitor (SWBM) has two NaI(Tl) detectors of size 10.16 x 10.16 x 40.64 cm³ positioned serially, one over the other. It has a shielding thickness of 5 cm lead equivalent. Counting is done in a stand-up geometry. Calibration is done with the help of an Ortec phantom having a uniform distribution of mixed radionuclides for the thyroid, thorax and pelvis. The efficiency of the SWBM is 2.4 to 3.5 times higher than that of the shielded chair in the energy range of 279 to 1332 keV. An MDA of 250 Bq for 137Cs can be achieved with a counting time of 60 s. The MDA for 131I in the thyroid was estimated as 100 Bq from the whole-body MDA for one day post-intake. The standing whole-body monitor is better in terms of efficiency, MDA and ease of positioning. In emergency situations, the optimal MDAs for an in-vivo monitoring service are 1000 Bq for 137Cs and 100 Bq for 131I. Hence, the SWBM is more suitable for the rapid screening of workers as well as the public in the case of an emergency. When a person reports for counting, there is a potential for external contamination. In the SWBM, it is feasible to discriminate such contamination, as the subject can be counted in either anterior or posterior geometry, which is not possible in the SC.
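
For orientation, MDA figures of this kind are commonly obtained from the Currie formulation; the sketch below shows that calculation with assumed inputs (the efficiency, background and gamma yield are illustrative, not the facility's calibration values).

```python
# Illustrative Minimum Detectable Activity using the common Currie formulation,
# MDA = (2.71 + 4.65*sqrt(B)) / (eps * t * p_gamma). All inputs are assumptions.
import math

def mda_bq(background_counts, efficiency, count_time_s, gamma_yield):
    """Currie MDA in Bq for a gross-counting measurement."""
    ld = 2.71 + 4.65 * math.sqrt(background_counts)  # detection limit in counts
    return ld / (efficiency * count_time_s * gamma_yield)

# e.g. 137Cs (661.7 keV, gamma yield ~0.85) in a 60 s whole-body count
print(mda_bq(background_counts=400, efficiency=0.002, count_time_s=60, gamma_yield=0.85))
```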

Keywords: minimum detectable activity, shielded chair, shielding thickness, standing whole body monitor

Procedia PDF Downloads 46
5826 Biological Treatment of Bacterial Biofilms from Drinking Water Distribution System in Lebanon

Authors: A. Hamieh, Z. Olama, H. Holail

Abstract:

Drinking water distribution systems provide opportunities for microorganisms that enter the drinking water to develop into biofilms. Antimicrobial agents, mainly chlorine, are used to disinfect drinking water; however, there are as yet no standardized disinfection strategies with reliable efficacy, and the development of novel anti-biofilm strategies remains of major concern. In the present study, the ability of Lactobacillus acidophilus and Streptomyces sp. cell-free supernatants to inhibit bacterial biofilm formation in a drinking water distribution system in Lebanon was investigated. Treatment with cell-free supernatants of Lactobacillus acidophilus and Streptomyces sp. at 20% concentration resulted in average biofilm inhibition of 52.89% and 39.66%, respectively. A preliminary investigation of the mode of biofilm inhibition revealed that the cell-free supernatants showed no bacteriostatic or bactericidal activity against any of the tested isolates. Pre-coating wells with the supernatants revealed that the Lactobacillus acidophilus cell-free supernatant inhibited average biofilm formation (62.53%) by altering the adhesion of the bacterial isolates to the surface, preventing the initial attachment step, which is important for biofilm production.

Keywords: biofilm, cell free supernatant, distribution system, drinking water, lactobacillus acidophilus, streptomyces sp, adhesion

Procedia PDF Downloads 434
5825 Emotional Awareness and Working Memory as Predictive Factors for the Habitual Use of Cognitive Reappraisal among Adolescents

Authors: Yuri Kitahara

Abstract:

Background: Cognitive reappraisal refers to an emotion regulation strategy in which one changes the interpretation of emotion-eliciting events. Numerous studies show that cognitive reappraisal is associated with mental health and better social functioning. However, the examination of the predictive factors of adaptive emotion regulation remains an open issue. The present study examined the factors contributing to the habitual use of cognitive reappraisal, with a focus on emotional awareness and working memory. Methods: Data were collected from 30 junior high school students, using a Japanese version of the Emotion Regulation Questionnaire (ERQ), the Levels of Emotional Awareness Scale for Children (LEAS-C), and an N-back task. Results: A positive correlation between emotional awareness and cognitive reappraisal was observed in the high-working-memory group (r = .54, p < .05), whereas no significant relationship was found in the low-working-memory group. In addition, the results of the analysis of variance (ANOVA) showed a significant interaction between emotional awareness and working memory capacity (F(1, 26) = 7.74, p < .05). Subsequent analysis of simple main effects confirmed that high working memory capacity significantly increases the use of cognitive reappraisal for high-emotional-awareness subjects and significantly decreases the use of cognitive reappraisal for low-emotional-awareness subjects. Discussion: These results indicate that, under the condition where one has an adequate ability for simultaneous processing of information, explicit understanding of emotion would contribute to adaptive cognitive emotion regulation. The findings are discussed along with neuroscientific claims.

Keywords: cognitive reappraisal, emotional awareness, emotion regulation, working memory

Procedia PDF Downloads 231
5824 Temperature Dependent Interaction Energies among X (=Ru, Rh) Impurities in Pd-Rich PdX Alloys

Authors: M. Asato, C. Liu, N. Fujima, T. Hoshino, Y. Chen, T. Mohri

Abstract:

We study the temperature dependence of the interaction energies (IEs) of X (= Ru, Rh) impurities in Pd, due to the Fermi-Dirac (FD) distribution and the thermal vibration effect described by the Debye-Grüneisen model. The n-body (n = 2-4) IEs among X impurities in Pd, which are used to calculate the internal energies in the free energies of the Pd-rich PdX alloys, are determined uniquely and successively from lower to higher order by the full-potential Korringa-Kohn-Rostoker Green's function method (FPKKR), combined with the generalized gradient approximation in density functional theory. We found that the temperature dependence of the IEs due to the FD distribution, which is usually neglected, is very important for reproducing the X-concentration dependence of the observed solvus temperatures of the Pd-rich PdX (X = Ru, Rh) alloys.

Keywords: full-potential KKR-green’s function method, Fermi-Dirac distribution, GGA, phase diagram of Pd-rich PdX (X=Ru, Rh) alloys, thermal vibration effect

Procedia PDF Downloads 275
5823 On Consolidated Predictive Model of the Natural History of Breast Cancer Considering Primary Tumor and Primary Distant Metastases Growth

Authors: Ella Tyuryumina, Alexey Neznanov

Abstract:

Finding algorithms to predict the growth of tumors has piqued the interest of researchers ever since the early days of cancer research. A number of studies have been carried out in an attempt to obtain reliable data on the natural history of breast cancer growth. Mathematical modeling can play a very important role in the prognosis of the tumor process in breast cancer. However, existing mathematical models describe primary tumor growth and metastases growth separately. Consequently, we propose a mathematical growth model for the primary tumor and primary metastases, which may help to improve the predictive accuracy of breast cancer progression, using an original mathematical model referred to as CoM-IV and corresponding software. We are interested in: 1) modelling the whole natural history of the primary tumor and primary metastases; 2) developing an adequate and precise CoM-IV which reflects the relations between PT and MTS; 3) analyzing the scope of application of the CoM-IV; 4) implementing the model as a software tool. The CoM-IV is based on an exponential tumor growth model, consists of a system of determinate nonlinear and linear equations, and corresponds to the TNM classification. It allows the calculation of different growth periods of the primary tumor and primary metastases: 1) the 'non-visible period' for the primary tumor; 2) the 'non-visible period' for primary metastases; 3) the 'visible period' for primary metastases. The new predictive tool: 1) is a solid foundation for future studies of breast cancer models; 2) does not require any expensive diagnostic tests; 3) is the first predictor which makes a forecast using only current patient data, whereas the others are based on additional statistical data. Thus, the CoM-IV model and predictive software: a) detect different growth periods of the primary tumor and primary metastases; b) forecast the period of primary metastases appearance; c) have higher average prediction accuracy than the other tools; d) can improve forecasts of breast cancer survival and facilitate optimization of diagnostic tests. The following are calculated by the CoM-IV: the number of doublings for the 'non-visible' and 'visible' growth periods of primary metastases, and the tumor volume doubling time (days) for the 'non-visible' and 'visible' growth periods of primary metastases. The CoM-IV enables, for the first time, prediction of the whole natural history of primary tumor and primary metastases growth at each stage (pT1, pT2, pT3, pT4) relying only on primary tumor sizes. In summary: a) the CoM-IV correctly describes primary tumor and primary distant metastases growth of stage IV (T1-4N0-3M1) disease, with (N1-3) or without (N0) regional metastases in lymph nodes; b) it facilitates the understanding of the appearance period and manifestation of primary metastases.
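
As a minimal sketch of the exponential growth relations that a model of this kind builds on (not the CoM-IV equations themselves), the snippet below recovers a tumor volume doubling time from two measurements and counts volume doublings; all numbers are illustrative.

```python
# Minimal sketch of exponential tumor growth, V(t) = V0 * 2**(t/DT).
# Volumes and time intervals below are illustrative assumptions.
import math

def doubling_time(v0, v1, dt_days):
    """Tumor volume doubling time (days) from volumes v0, v1 measured dt_days apart."""
    return dt_days * math.log(2) / math.log(v1 / v0)

def n_doublings(v_start, v_end):
    """Number of volume doublings between two volumes."""
    return math.log2(v_end / v_start)

dt = doubling_time(v0=0.5, v1=2.0, dt_days=180)  # cm^3, measured 6 months apart
print(dt, n_doublings(0.5, 2.0))
```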

Keywords: breast cancer, exponential growth model, mathematical modelling, primary metastases, primary tumor, survival

Procedia PDF Downloads 334
5822 Numerical Modeling to Validate Theoretical Models of Toppling Failure in Rock Slopes

Authors: Hooman Dabirmanesh, Attila M. Zsaki

Abstract:

Traditionally, rock slope stability analysis for toppling failure is carried out using limit equilibrium methods. In these equilibrium methods, the internal forces exerted between columns are not clearly defined, and to the authors' best knowledge, there is no consensus in the literature with respect to the results of the analysis. A discrete element method-based numerical model was developed and applied to simulate the behavior of rock layers subjected to toppling failure. Based on this calibrated numerical model, a study of the location and distribution of the internal forces that result in equilibrium was carried out. To determine the inter-column force distribution, the sum of the side forces was applied at a point on each block that properly represents that force. In terms of the side force distribution coefficient, the results were compared to those obtained from laboratory centrifuge tests. The results of the simulation show suitable criteria for selecting the correct position of the internal force exerted between rock layers. In addition, the numerical method demonstrates how a theoretical method can be shown to be reliable by considering the interaction between the rock layers.

Keywords: contact bond, discrete element, force distribution, limit equilibrium, tensile stress

Procedia PDF Downloads 143
5821 Pattern of Stress Distribution in Different Ligature-Wire-Brackets Systems: A FE and Experimental Analysis

Authors: Afef Dridi, Salah Mezlini

Abstract:

Since experimental devices cannot measure the stress and deformation of complex structures, the finite element method (FEM) has been widely used in several fields of research. One of these fields is orthodontics. The advantage of using such a method is that it is accurate and non-invasive and provides sufficient data about the physiological reactions that can happen in soft tissues. Most research done in this field has focused on the stresses and deformations induced by orthodontic apparatus in soft (alveolar) tissues. Only a few studies have examined the distribution of stress and strain in the orthodontic brackets. Although these studies tried to be as close as possible to real conditions, their models did not reproduce clinical cases. For this reason, the model generated by our research is the closest one to reality. In this study, a numerical model was developed to explore the stress and strain distribution under the application of real conditions. A comparison between different material properties was also carried out.

Keywords: visco-hyperelasticity, FEM, orthodontic treatment, inverse method

Procedia PDF Downloads 259
5820 Design and Analysis of Adaptive Type-I Progressive Hybrid Censoring Plan under Step Stress Partially Accelerated Life Testing Using Competing Risk

Authors: Ariful Islam, Showkat Ahmad Lone

Abstract:

Statistical distributions have long been employed in the assessment of semiconductor devices and product reliability. The power function distribution is one of the most important distributions in modern reliability practice and can frequently be preferred over mathematically more complex distributions, such as the Weibull and the lognormal, because of its simplicity. Moreover, it may exhibit a better fit for failure data and provide more appropriate information about reliability and hazard rates in some circumstances. This study deals with estimating information about the failure times of items under step-stress partially accelerated life tests for competing risks, based on an adaptive type-I progressive hybrid censoring criterion. The life data of the units under test are assumed to follow the Mukherjee-Islam distribution. Point and interval maximum likelihood estimates are obtained for the distribution parameters and the tampering coefficient. The performance of the resulting estimators of the developed model parameters is evaluated and investigated by means of a simulation algorithm.

Keywords: adoptive progressive hybrid censoring, competing risk, mukherjee-islam distribution, partially accelerated life testing, simulation study

Procedia PDF Downloads 347
5819 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through predictive quality, the great potential for saving necessary quality control can be exploited by means of data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes. As a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is at least made more difficult by this data availability. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the costs of eliminating errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and classification for the inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
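
To make the regression-versus-classification comparison concrete, here is a hedged sketch in which the same synthetic "leakage volume flow" target is modelled once as a regression problem and once, after thresholding, as a pass/fail classification; the data, threshold and model choices are assumptions, not the Bosch Rexroth setup.

```python
# Hedged sketch: the same synthetic target modelled as regression and as classification.
# Features, threshold and models are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, accuracy_score

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))                            # process features
leakage = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=2000)
ok = (leakage < np.quantile(leakage, 0.9)).astype(int)    # pass/fail inspection decision

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(X, leakage, ok, random_state=0)

reg = GradientBoostingRegressor().fit(X_tr, y_tr)
clf = GradientBoostingClassifier().fit(X_tr, c_tr)
print("regression R^2:", r2_score(y_te, reg.predict(X_te)))
print("classification accuracy:", accuracy_score(c_te, clf.predict(X_te)))
```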

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 144
5818 Extreme Rainfall Frequency Analysis for Meteorological Sub-Division 4 of India Using L-Moments

Authors: Arti Devi, Parthasarthi Choudhury

Abstract:

Extreme rainfall frequency analysis for Meteorological Sub-Division 4 of India was carried out using the L-moments approach. Serial correlation and Mann-Kendall tests were conducted to check the serial independence and stationarity of the observations. The discordancy measure for the sites was computed to detect discordant sites. Regional homogeneity was tested by comparison with 500 generated homogeneous regions based on a four-parameter Kappa distribution. The best-fit distribution was selected based on the ZDIST statistic and the L-moment ratio diagram from the five extreme value distributions GPD, GLO, GEV, P3 and LP3. The LN3 distribution was selected, and a regional rainfall frequency relationship was established using the index-rainfall procedure. A regional mean rainfall relationship was developed using multiple linear regression with the latitude and longitude of the sites as variables.
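
For context, the sample L-moments used in such an analysis are obtained from probability weighted moments; the sketch below computes l1, l2, L-skewness and L-kurtosis for a synthetic annual-maximum rainfall series (the data are not from the sub-division).

```python
# Minimal sketch: sample L-moments from probability weighted moments (b0..b3).
# The annual-maximum rainfall series is synthetic.
import numpy as np

def sample_l_moments(data):
    """Return l1, l2, t3 (L-skewness) and t4 (L-kurtosis) of a sample."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((j - 1) * (j - 2) * (j - 3) / ((n - 1) * (n - 2) * (n - 3)) * x) / n
    l1, l2 = b0, 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2

annual_max = np.random.default_rng(2).gumbel(loc=120, scale=30, size=60)  # mm, synthetic
print(sample_l_moments(annual_max))
```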

Keywords: L-moments, ZDIST statistics, serial correlation, Mann Kendall test

Procedia PDF Downloads 441
5817 Fuzzy Neuro Approach for Integrated Water Management System

Authors: Stuti Modi, Aditi Kambli

Abstract:

This paper addresses the need for an intelligent water management and distribution system in smart cities to ensure optimal consumption and distribution of water for drinking and sanitation purposes. Water, being a limited resource in cities, requires an effective system for collection, storage and distribution. In this paper, applications of two of the most widely used types of data-driven models, namely artificial neural networks (ANN) and fuzzy logic-based models, to modelling in the water resources management field are considered. The objective of this paper is to review the principles of various types and architectures of neural networks and fuzzy adaptive systems and their applications to integrated water resources management. The final goal of the review is to outline promising directions for the applicability and further research of AI-related and data-driven techniques, and to demonstrate the applicability of neural networks, fuzzy systems and other machine learning techniques to the practical issues of regional water management. In addition, the paper deals with water storage, using ANNs to find the optimum reservoir level and to predict peak daily demands.

Keywords: artificial neural networks, fuzzy systems, peak daily demand prediction, water management and distribution

Procedia PDF Downloads 186
5816 Mathematical Model for Progressive Phase Distribution of Ku-band Reflectarray Antennas

Authors: M. Y. Ismail, M. Inam, A. F. M. Zain, N. Misran

Abstract:

Progressive phase distribution is an important consideration in reflectarray antenna design, as it is required to form a planar wavefront in front of the reflectarray aperture. This paper presents a detailed mathematical model for determining the required reflection phase values of the individual elements of a reflectarray designed in the Ku-band frequency range. The proposed technique for obtaining the reflection phase can be applied to any geometrical design of the elements and is independent of the number of array elements. Moreover, the model also deals with reflectarray antenna designs with both centre-fed and offset-fed configurations. The theoretical modeling has also been implemented for reflectarrays constructed on 0.508 mm thick layers of different dielectric substrates. The results show an increase in the slope of the phase curve from 4.61°/mm to 22.35°/mm as the material properties are varied.
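
As a hedged illustration of the standard relation behind such a model, the sketch below evaluates the required progressive phase phi_i = k0*(d_i - (x_i*cos(phi_b) + y_i*sin(phi_b))*sin(theta_b)) over a small element grid; the frequency, feed position, element spacing and beam direction are assumed values, not the paper's design.

```python
# Illustrative progressive-phase calculation for a reflectarray aperture.
# Frequency, feed position, grid and beam direction are assumed example values.
import numpy as np

f = 12e9                      # Ku-band design frequency (assumption)
lam = 3e8 / f
k0 = 2 * np.pi / lam

dx = dy = lam / 2
xi, yi = np.meshgrid(np.arange(-7, 8) * dx, np.arange(-7, 8) * dy)  # 15 x 15 element grid
feed = np.array([0.0, 0.0, 0.15])                   # centre-fed, 150 mm above the aperture
theta_b, phi_b = np.radians(20.0), np.radians(0.0)  # desired beam direction

d_i = np.sqrt((xi - feed[0])**2 + (yi - feed[1])**2 + feed[2]**2)   # feed-to-element path
phase = k0 * (d_i - (xi * np.cos(phi_b) + yi * np.sin(phi_b)) * np.sin(theta_b))
required_deg = np.degrees(phase) % 360              # required reflection phase per element
print(required_deg[:3, :3])
```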

Keywords: mathematical modeling, progressive phase distribution, reflect array antenna, reflection phase

Procedia PDF Downloads 383
5815 Bayesian Variable Selection in Quantile Regression with Application to the Health and Retirement Study

Authors: Priya Kedia, Kiranmoy Das

Abstract:

There is a rich literature on variable selection in the regression setting. However, most of these methods assume normality of the response variable for implementing the methodology and establishing the statistical properties of the estimates. In many real applications, the distribution of the response variable may be non-Gaussian, and one might be interested in finding the best subset of covariates at some predetermined quantile level. We develop a dynamic Bayesian approach for variable selection in the quantile regression framework. We use a zero-inflated mixture prior for the regression coefficients and consider the asymmetric Laplace distribution for the response variable for modeling different quantiles of its distribution. An efficient Gibbs sampler is developed for the computation. The proposed approach is assessed through extensive simulation studies, and a real application of the proposed approach is also illustrated. We consider data from the Health and Retirement Study conducted by the University of Michigan and select the important predictors when the outcome of interest is out-of-pocket medical cost, which is considered an important measure of financial risk. Our analysis finds important predictors at different quantiles of the outcome and thus enhances our understanding of the effects of different predictors on out-of-pocket medical cost.
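
As a minimal, hedged sketch of the link exploited above, maximising an asymmetric Laplace likelihood at quantile level tau is equivalent to minimising the check (pinball) loss; the toy example below fits a single-covariate 90th-percentile regression line. The data and quantile level are illustrative, and the full Gibbs sampler with the zero-inflated mixture prior is not reproduced here.

```python
# Toy sketch: quantile regression via the check (pinball) loss, the loss implied
# by an asymmetric Laplace likelihood. Data and quantile level are assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=n)   # heavy-tailed "cost" outcome
X = np.column_stack([np.ones(n), x])
tau = 0.9                                          # quantile level of interest

def check_loss(beta):
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))             # pinball loss rho_tau(u)

beta_hat = minimize(check_loss, x0=np.zeros(2), method="Nelder-Mead").x
print(beta_hat)  # intercept and slope of the 90th-percentile regression line
```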

Keywords: variable selection, quantile regression, Gibbs sampler, asymmetric Laplace distribution

Procedia PDF Downloads 156
5814 Story-Wise Distribution of Slit Dampers for Seismic Retrofit of RC Shear Wall Structures

Authors: Minjung Kim, Hyunkoo Kang, Jinkoo Kim

Abstract:

In this study, a seismic retrofit scheme for a reinforced concrete shear wall structure using steel slit dampers is presented. The stiffness and the strength of the slit damper used in the retrofit were verified by cyclic loading tests. A genetic algorithm was applied to find the optimum locations of the slit dampers. The effects of the slit dampers on the seismic retrofit of the model were compared with those of jacketing the shear walls. The seismic performance of the model structure with optimally positioned slit dampers was evaluated by nonlinear static and dynamic analyses. Based on the analysis results, the simple procedure of determining the required damping ratio using the capacity spectrum method, along with a damper distribution pattern proportional to the inter-story drifts, was validated. The analysis results showed that the seismic retrofit of the model structure using the slit dampers was more economical than jacketing the shear walls, and that the capacity spectrum method combined with the simple damper distribution pattern led to a satisfactory damper distribution compatible with the solution obtained from the genetic algorithm.

Keywords: seismic retrofit, slit dampers, genetic algorithm, jacketing, capacity spectrum method

Procedia PDF Downloads 274
5813 A Proposal for a Combustion Model Considering the Lewis Number and Its Evaluation

Authors: Fujio Akagi, Hiroaki Ito, Shin-Ichi Inage

Abstract:

The aim of this study is to develop a combustion model that can be applied uniformly to laminar and turbulent premixed flames while considering the effect of the Lewis number (Le). The model considers the effect of Le on the transport equations of the reaction progress, which varies with the chemical species and temperature. The distribution of the reaction progress variable is approximated by a hyperbolic tangent function, while the other distribution of the reaction progress variable is estimated using the approximated distribution and transport equation of the reaction progress variable considering the Le. The validity of the model was evaluated under the conditions of propane with Le > 1 and methane with Le = 1 (equivalence ratios of 0.5 and 1). The estimated results were found to be in good agreement with those of previous studies under all conditions. A method of introducing a turbulence model into this model is also described. It was confirmed that conventional turbulence models can be expressed as an approximate theory of this model in a unified manner.
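
To illustrate the hyperbolic-tangent approximation mentioned above, the sketch below evaluates a tanh profile of the reaction progress variable across a flame; the flame position and thickness are assumed values for demonstration.

```python
# Minimal sketch of a tanh approximation of the reaction progress variable profile.
# Flame position x0 and thickness delta are illustrative assumptions.
import numpy as np

def progress_variable(x, x0=0.0, delta=0.5):
    """Approximate c(x) across a premixed flame with a hyperbolic tangent profile."""
    return 0.5 * (1.0 + np.tanh(2.0 * (x - x0) / delta))

x = np.linspace(-2, 2, 9)
print(progress_variable(x))  # rises smoothly from ~0 (unburned) to ~1 (burned)
```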

Keywords: combustion model, laminar flame, Lewis number, turbulent flame

Procedia PDF Downloads 123
5812 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever

Authors: Sudha T., Naveen C.

Abstract:

Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are the two U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data are ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. During the third phase of the research, machine learning models such as the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the models' performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE). The goal is to select an optimization strategy with the fewest errors, lowest cost, highest productivity, or maximum potential results. Optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on harmony search and an integrated genetic algorithm is introduced for input feature selection, and it shows a significant improvement in the models' predictive accuracy. The predictive models built on the Huber Regressor perform best for both optimization and prediction.

Keywords: deep learning model, dengue fever, prediction, optimization

Procedia PDF Downloads 65
5811 Character Development Outcomes: A Predictive Model for Behaviour Analysis in Tertiary Institutions

Authors: Rhoda N. Kayongo

Abstract:

As behavior analysts in education continue to debate how higher education institutions can continue to benefit from their social and academic programs, higher education is facing challenges in the area of character development. This is manifested in college completion rates and in rates of teen pregnancy, drug abuse, sexual abuse, suicide, plagiarism, lack of academic integrity, and violence among students. Attending college is a perceived opportunity to positively influence the actions and behaviors of the next generation of society; thus, colleges and universities have to provide opportunities to develop students' values and behaviors. Prior studies were mainly conducted in private institutions and, more so, in developed countries. However, with the complexity of today's student body in a changing world, a multidimensional approach combining multiple factors that enhance character development outcomes is needed to suit the changing trends. The main purpose of this study was to identify opportunities in colleges and develop a model for predicting character development outcomes. A survey questionnaire composed of seven scales, with in-classroom interaction, out-of-classroom interaction, school climate, personal lifestyle, home environment, and peer influence as independent variables and character development outcomes as the dependent variable, was administered to a total of five hundred and one 3rd- and 4th-year students in selected public colleges and universities in the Philippines and Rwanda. Using structural equation modelling, a predictive model explained 57% of the variance in character development outcomes. The results of the analysis showed that in-classroom interactions have a substantial direct influence on the character development outcomes of the students (r = .75, p < .05). In addition, out-of-classroom interaction, school climate, and home environment contributed to students' character development outcomes, but in an indirect way. The study concluded that the classroom offers many opportunities for teachers to teach, model and integrate character development among their students. Thus, public colleges and universities are encouraged to deliberately implement experiences that cultivate character within the classroom. These may contribute tremendously to the students' character development outcomes and hence render effective models of behaviour analysis in higher education.

Keywords: character development, tertiary institutions, predictive model, behavior analysis

Procedia PDF Downloads 136
5810 Estimation of Location and Scale Parameters of Extended Exponential Distribution Based on Record Statistics

Authors: E. Krishna

Abstract:

An extended form of the exponential distribution using the Marshall and Olkin method is introduced. The location-scale family of these distributions is considered. For the location-scale-free family, exact expressions for single and product moments of upper record statistics are derived. The mean, variance and covariance of record values are computed for various values of the shape parameter. Using these, the BLUEs of the location and scale parameters are derived. The variances and covariance of the estimates are obtained. Through Monte Carlo simulation, confidence intervals for the location and scale parameters are constructed. The best linear unbiased predictor (BLUP) of future records is also discussed.

Keywords: BLUE, BLUP, con dence interval, Marshall-Olkin distribution, Monte Carlo simulation, prediction of future records, record statistics

Procedia PDF Downloads 417
5809 Distribution Routes Redesign through the Vehicle Routing Problem in the Havana Distribution Center

Authors: Sonia P. Marrero Duran, Lilian Noya Dominguez, Lisandra Quintana Alvarez, Evert Martinez Perez, Ana Julia Acevedo Urquiaga

Abstract:

Cuban business and economic policy is constantly being updated, while companies face ever more knowledgeable and demanding clients. For that reason, the optimization of processes and services becomes fundamental for companies' competitiveness. One of Cuba's pillars, which has been sustained since the triumph of the Cuban Revolution back in 1959, is free health service for all those who need it. This service is offered without any charge under the concept of preserving human life, but it implies costly management processes and logistics services in order to supply the necessary medicines to all the units that provide health services. One of the key actors in the medicine supply chain is the Havana Distribution Center (HDC), which is responsible for the delivery of medicines in the province, as well as the acquisition of medicines from national and international producers and their subsequent transport to health care units and pharmacies on time and with the required quality. This HDC also supplies all distribution centers in the country. Given the evident need for an actor in the supply chain that specializes in medicine supply, the possibility of centralizing this operation in a logistics service provider is analyzed. Based on this decision, pharmacies operate as clients of the logistics service center, whose main function is to centralize all logistics operations associated with the medicine supply chain. The HDC is precisely the logistics service provider in Havana, and it is the focus of this research. In 2017, the pharmacies suffered shortfalls in the availability of medicine due to deficiencies in the distribution routes. This is caused by the fact that the routes are not based on routing studies, in addition to the long distribution cycle. The distribution routes are fixed, serve only one type of customer, and respond to a territorial allocation by municipality. Taking the above-mentioned problem into consideration, the objective of this research is to optimize the route system of the Havana Distribution Center. To accomplish this objective, the techniques applied were document analysis, random sampling and statistical inference, together with tools such as the Ishikawa diagram and the software packages ArcGIS, OsmAnd and MapInfo. As a result, four distribution alternatives were analyzed: the current routes, routes by customer type, routes by municipality, and the combination of the last two. It was demonstrated that the territorial allocation alternative does not take full advantage of the transportation capacities or the trip distances, which leads to elevated costs, breaking with the current ways of distribution and the current characteristics of the clients. The principal finding of the investigation is that the optimal distribution route alternative is the fourth one, formed by hospitals together with the grouping of pharmacies, stomatology clinics, polyclinics, and maternal and elderly homes. This solution breaks with the territorial allocation by municipality and permits different distribution cycles depending on medicine consumption and transport availability.

Keywords: computerized geographic software, distribution, distribution routs, vehicle problem routing (VPR)

Procedia PDF Downloads 160
5808 The Predictive Power of Successful Scientific Theories: An Explanatory Study on Their Substantive Ontologies through Theoretical Change

Authors: Damian Islas

Abstract:

Debates on realism in science concern two different questions: (I) whether the unobservable entities posited by theories can be known; and (II) whether any knowledge we have of them is objective or not. Question (I) arises from the doubt that since observation is the basis of all our factual knowledge, unobservable entities cannot be known. Question (II) arises from the doubt that since scientific representations are inextricably laden with the subjective, idiosyncratic, and a priori features of human cognition and scientific practice, they cannot convey any reliable information on how their objects are in themselves. A way of understanding scientific realism (SR) is through three lines of inquiry: ontological, semantic, and epistemological. Ontologically, scientific realism asserts the existence of a world independent of human mind. Semantically, scientific realism assumes that theoretical claims about reality show truth values and, thus, should be construed literally. Epistemologically, scientific realism believes that theoretical claims offer us knowledge of the world. Nowadays, the literature on scientific realism has proceeded rather far beyond the realism versus antirealism debate. This stance represents a middle-ground position between the two according to which science can attain justified true beliefs concerning relational facts about the unobservable realm but cannot attain justified true beliefs concerning the intrinsic nature of any objects occupying that realm. That is, the structural content of scientific theories about the unobservable can be known, but facts about the intrinsic nature of the entities that figure as place-holders in those structures cannot be known. There are two possible versions of SR: Epistemological Structural Realism (ESR) and Ontic Structural Realism (OSR). On ESR, an agnostic stance is preserved with respect to the natures of unobservable entities, but the possibility of knowing the relations obtaining between those entities is affirmed. OSR includes the rather striking claim that when it comes to the unobservables theorized about within fundamental physics, relations exist, but objects do not. Focusing on ESR, questions arise concerning its ability to explain the empirical success of a theory. Empirical success certainly involves predictive success, and predictive success implies a theory’s power to make accurate predictions. But a theory’s power to make any predictions at all seems to derive precisely from its core axioms or laws concerning unobservable entities and mechanisms, and not simply the sort of structural relations often expressed in equations. The specific challenge to ESR concerns its ability to explain the explanatory and predictive power of successful theories without appealing to their substantive ontologies, which are often not preserved by their successors. The response to this challenge will depend on the various and subtle different versions of ESR and OSR stances, which show a sort of progression through eliminativist OSR to moderate OSR of gradual increase in the ontological status accorded to objects. Knowing the relations between unobserved entities is methodologically identical to assert that these relations between unobserved entities exist.

Keywords: eliminativist ontic structural realism, epistemological structuralism, moderate ontic structural realism, ontic structuralism

Procedia PDF Downloads 118
5807 Determination of Power and Sample Size for the Zero-Inflated Negative Binomial Age-Dependent Death Rate Model (ZINBD): Regression Analysis of Acquired Immune Deficiency Syndrome (AIDS) Mortality

Authors: Mohd Asrul Affendi Bin Abdullah

Abstract:

Sample size calculation is especially important for zero-inflated models because a large sample size is required to detect a significant effect with this type of model. This paper shows how to obtain a power approximation for categorical covariates and then extends it to zero-inflated models. The Wald test was chosen for determining the power and sample size for the AIDS death rate because it is frequently used, owing to its approachability, and because of several major recent contributions to sample size calculation for this test. Power calculation can be conducted when covariates are used in modeling the 'excess zero' data, including categorical covariates. An analysis of an AIDS death rate study is used in this paper. The aim of this study is to determine the power of the sample size (N = 945) for the categorical death rate based on the parameter estimates in the simulation of the study.
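
For orientation, a normal-approximation power calculation for a Wald test on a single regression coefficient can be sketched as below; the effect size and standard error are placeholders, not the estimates of the AIDS study.

```python
# Minimal sketch: normal approximation to the power of a two-sided Wald test,
# power ~= Phi(|beta|/SE - z_{1-alpha/2}) + Phi(-|beta|/SE - z_{1-alpha/2}).
# The inputs are illustrative placeholders.
from scipy.stats import norm

def wald_power(beta, se, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    return norm.cdf(abs(beta) / se - z) + norm.cdf(-abs(beta) / se - z)

# e.g. a covariate effect of 0.3 with standard error 0.1 (hypothetical values)
print(wald_power(beta=0.3, se=0.1))
```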

Keywords: power sample size, Wald test, standardize rate, ZINBDR

Procedia PDF Downloads 435
5806 On Flow Consolidation Modelling in Urban Congested Areas

Authors: Serban Stere, Stefan Burciu

Abstract:

The challenging and continuously growing competition in the urban freight transport market emphasizes the need for optimal planning of transportation processes, in terms of identifying solutions for consolidating traffic flows in congested urban areas. The aim of the present paper is to describe the mathematical framework and propose a methodology for combining urban traffic flows between the distribution centers located at the boundary of a congested urban area. The three scenarios regarding traffic flow between consolidation centers that are taken into consideration in the paper are based on the same characteristics of traffic flows. The scenarios differ in terms of the accessibility of the four consolidation centers given by the infrastructure, the connections between them, and the possibility of consolidating traffic flows for one or multiple destinations. Synthetic indicators allow us to compare the scenarios considered and to choose the one indicated for our distribution system.

Keywords: distribution system, single and multiple destinations, urban consolidation centers, traffic flow consolidation schemes

Procedia PDF Downloads 156
5805 An Efficient Tool for Mitigating Voltage Unbalance with Reactive Power Control of Distributed Grid-Connected Photovoltaic Systems

Authors: Malinwo Estone Ayikpa

Abstract:

With the rapid increase of grid-connected PV systems over the last decades, genuine challenges have arisen for engineers and professionals in the energy field in the planning and operation of existing distribution networks with the integration of new generation sources. However, the conventional distribution network was not designed to receive generation other than that from the main power supply. The tools generally used to analyze such networks become inefficient and cannot take into account all the constraints related to the operation of grid-connected PV systems. Some of these constraints are voltage control difficulty, reverse power flow, and especially voltage unbalance, which may be due to the uneven distribution of single-phase PV systems in the network. In order to analyze the impact of connecting small and large numbers of PV systems to distribution networks, this paper presents an efficient optimization tool that minimizes voltage unbalance in three-phase distribution networks with active and reactive power injections from the allocation of single-phase and three-phase PV plants. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. A good reduction of voltage unbalance can be achieved by reactive power control of the PV systems. The presented tool is based on the three-phase current injection method, and the PV systems are modeled via an equivalent circuit. The primal-dual interior point method is used to obtain the optimal operating points for the systems.
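
For readers unfamiliar with the quantity being minimized, a common definition of the voltage unbalance factor is the ratio of the negative- to the positive-sequence voltage obtained from the Fortescue transform; the sketch below computes it for made-up phase voltages and is an illustration of the metric, not the paper's optimization tool.

```python
# Illustrative voltage unbalance factor (VUF) from symmetrical components.
# The three phase voltages below are made-up example values.
import numpy as np

def vuf_percent(va, vb, vc):
    """VUF (%) = |negative-sequence voltage| / |positive-sequence voltage| * 100."""
    a = np.exp(1j * 2 * np.pi / 3)
    v1 = (va + a * vb + a**2 * vc) / 3   # positive sequence
    v2 = (va + a**2 * vb + a * vc) / 3   # negative sequence
    return 100 * abs(v2) / abs(v1)

va = 230 * np.exp(1j * np.radians(0))
vb = 225 * np.exp(1j * np.radians(-122))
vc = 232 * np.exp(1j * np.radians(121))
print(f"VUF = {vuf_percent(va, vb, vc):.2f} %")
```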

Keywords: Photovoltaic system, Primal-dual interior point method, Three-phase optimal power flow, Voltage unbalance

Procedia PDF Downloads 332