Search results for: TensorFlow probability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1229

929 A New Approach – A Numerical Assessment of Ground Strata Failure Potentials in Underground Mines

Authors: Omer Yeni

Abstract:

Ground strata failure, or fall-of-ground, is one of the most prominent catastrophic risks in underground mines. Mining companies use various methods and techniques to prevent and critically control the associated risks. Some of those are safety by design, excavation methods, ground support, training, and competency, all of which require quality control and assurance activities to confirm their efficiency and performance and to identify improvement opportunities through monitoring. However, many mining companies use quality control (QC) methods without quality assurance (QA) and habitually call the combination QA/QC. By simple definition, QC is a method of detecting defects, and QA is a method of preventing defects. Proper QA/QC application means testing every component before assembly as well as the final product once completed, not merely testing the final products at the end of the production line. Installed ground support elements are among the final products mining companies use to prevent ground strata failure. Testing the final product (i.e., rock bolt pull testing, shotcrete strength tests, etc.) with QC methods only, while those areas are already accessible, is not like testing an airplane full of passengers right after the production line or testing a car after the sale. Can QC methods alone be called QA/QC? Can QA/QC activities be numerically scored for each critical control implemented to assess ground strata failure potential? Can numerical scores be used to derive a Geotechnical Risk Rating (GRR) to determine the ground strata failure risk and its probability? This paper sets out to provide a specific QA/QC methodology to manage and confirm the efficiency and performance of the implemented critical controls, and a numerical approach through the Geotechnical Risk Rating (GRR) process to assess ground strata failure risk, determining the gaps where proactive action is required to evaluate the probability of ground strata failures in underground mines.

Keywords: fall of ground, ground strata failure, QA/QC, underground

Procedia PDF Downloads 62
928 A Study on Net Profit Associated with Queueing System Subject to Catastrophical Events

Authors: M. Reni Sagayaraj, S. Anand Gnana Selvam, R. Reynald Susainathan

Abstract:

In this paper, we study a queueing system in which catastrophic events arrive independently at the service facility according to a Poisson process with rate λ. The nature of a catastrophic event is that, upon its arrival at a service station, it destroys all the customers waiting there and in service. We derive the net profit associated with the queueing system and obtain the probability of its busy period.
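
A minimal Monte Carlo sketch of such a system is shown below, assuming an M/M/1 queue whose content is wiped out by Poisson catastrophes; the arrival, service and catastrophe rates and the profit/cost figures are illustrative placeholders, not values from the paper.

```python
import random

def simulate(lmbda_arrival=1.0, mu_service=1.5, kappa_catastrophe=0.2,
             profit_per_service=5.0, cost_per_catastrophe=20.0, horizon=100000.0):
    """Monte Carlo sketch of an M/M/1 queue hit by Poisson catastrophes.

    A catastrophe empties the whole system, ending any busy period in progress.
    All rates and profit/cost figures are assumed for illustration only.
    """
    t, n = 0.0, 0                      # current time, customers in system
    busy_time, completed, catastrophes = 0.0, 0, 0
    while t < horizon:
        rates = [lmbda_arrival, mu_service if n > 0 else 0.0, kappa_catastrophe]
        total = sum(rates)
        dt = random.expovariate(total)
        if n > 0:
            busy_time += dt
        t += dt
        u = random.uniform(0, total)
        if u < lmbda_arrival:
            n += 1                      # arrival
        elif u < lmbda_arrival + rates[1]:
            n -= 1                      # service completion
            completed += 1
        else:
            n = 0                       # catastrophe wipes out the queue
            catastrophes += 1
    profit = profit_per_service * completed - cost_per_catastrophe * catastrophes
    return busy_time / t, profit        # P(server busy), net profit over the horizon

if __name__ == "__main__":
    p_busy, profit = simulate()
    print(f"estimated P(server busy) = {p_busy:.3f}, net profit = {profit:.1f}")
```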

Keywords: queueing system, net-profit, busy period, catastrophical events

Procedia PDF Downloads 345
927 Adaptation of Requirement Engineering Practices in Pakistan

Authors: Waqas Ali, Nadeem Majeed

Abstract:

Requirement engineering is at the essence of the software development life cycle. The more time we spend on requirement engineering, the higher the probability of success. Effective requirement engineering ensures and predicts a successful software product. This paper presents the adaptation of requirement engineering practices in small and medium-sized companies of Pakistan. The study was conducted through questionnaires to show to what extent requirement engineering models and practices are followed in Pakistan.

Keywords: requirement engineering, Pakistan, models, practices, organizations

Procedia PDF Downloads 694
926 Transverse Momentum Dependent Factorization and Evolution for Spin Physics

Authors: Bipin Popat Sonawane

Abstract:

After the 1988 European Muon Collaboration (EMC) announcement of the measurement of the spin-dependent structure function, the need to understand the spin structure of the hadron became apparent. In the study of the three-dimensional spin structure of the proton, we need to understand the foundation of quantum field theory in terms of the electroweak and strong theories using rigorous mathematical theories and models. In the process of understanding the inner dynamical structure of the proton, we need to understand the mathematical formalism of perturbative quantum chromodynamics (pQCD). In QCD processes such as proton-proton collisions at high energy, we calculate cross sections using conventional collinear factorization schemes. In these calculations, parton distribution functions (PDFs) and fragmentation functions (FFs) are used, which provide the probability density of finding quarks and gluons (partons) inside the proton and the probability density of obtaining the final hadronic state from the initial partons. Transverse momentum dependent (TMD) PDFs and FFs, collectively called TMDs, take into account the intrinsic transverse motion of the partons. TMD factorization in the calculation of cross sections provides a scheme relating the hadronic and partonic states in a given QCD process. In this study, we review the transverse momentum dependent (TMD) factorization scheme using the Collins-Soper-Sterman (CSS) formalism. The CSS formalism considers the transverse momentum dependence of the partons; in this formalism, the cross section is written as a Fourier transform over a transverse position variable, which has a physical interpretation as an impact parameter. We also compare this formalism with the improved CSS formalism. In this work, we study the TMD evolution schemes and their comparison with other schemes. This would provide a description for the measurement of transverse single spin asymmetry (TSSA) in hadro-production and electro-production of the J/psi meson at RHIC, LHC and ILC energy scales, and would help us understand the J/psi production mechanism, which is an appropriate test of QCD.
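
For orientation, the CSS-type factorized cross section referred to above can be written schematically as follows; this is a textbook-level sketch of the structure only (the precise definitions of the b-space distribution, the Sudakov factor and the Y-term are fixed in the CSS literature, not taken from this paper):

```latex
\frac{d\sigma}{dQ^{2}\,dy\,d^{2}q_{T}}
  \;=\; \int \frac{d^{2}b}{(2\pi)^{2}}\;
        e^{\,i\,\mathbf{q}_{T}\cdot\mathbf{b}}\;
        \widetilde{W}(\mathbf{b},Q,x_{A},x_{B})
  \;+\; Y(q_{T},Q,x_{A},x_{B}),
\qquad
\widetilde{W}(\mathbf{b},Q,x_{A},x_{B}) \;\propto\; e^{-S(b,Q)} .
```

Here the Fourier transform runs over the transverse position (impact parameter) b conjugate to q_T, the Sudakov form factor S(b, Q) drives the TMD evolution in the hard scale Q, and the Y-term restores the fixed-order collinear result at large q_T.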

Keywords: QCD, PDF, TMD, CSS

Procedia PDF Downloads 46
925 Tracking the Effect of Ibutilide on Amplitude and Frequency of Fibrillatory Intracardiac Electrograms Using the Regression Analysis

Authors: H. Hajimolahoseini, J. Hashemi, D. Redfearn

Abstract:

Background: Catheter ablation is an effective therapy for symptomatic atrial fibrillation (AF). The intracardiac electrogram (IEGM) collected during this procedure contains precious information that has not been explored to its full capacity. Novel processing techniques allow looking at these recordings from different perspectives, which can lead to improved therapeutic approaches. In our previous study, we showed that variation in amplitude measured through Shannon entropy could be used as an AF recurrence risk stratification factor in patients who received Ibutilide before the electrograms were recorded. The aim of this study is to further investigate the effect of Ibutilide on the characteristics of the signals recorded from the left atrium (LA) of patients with persistent AF before and after administration of the drug. Methods: The IEGMs collected from different intra-atrial sites of 12 patients were studied and compared before and after Ibutilide administration. First, the before and after Ibutilide IEGMs that were recorded within a Euclidean distance of 3 mm in the LA were selected as pairs for comparison. For every selected pair of IEGMs, the Probability Distribution Function (PDF) of the amplitude in the time domain and of the magnitude in the frequency domain was estimated using regression analysis. The PDF represents the relative likelihood of a variable falling within a specific range of values. Results: Our observations showed that in the time domain, the PDF of amplitudes was fitted to a Gaussian distribution, while in the frequency domain, it was fitted to a Rayleigh distribution. Our observations also revealed that after Ibutilide administration, the IEGMs had significantly narrower short-tailed PDFs in both the time and frequency domains. Conclusion: This study shows that the PDFs of the IEGMs before and after administration of Ibutilide exhibit significantly different properties, both in the time and frequency domains. Hence, by fitting the PDF of IEGMs in the time domain to a Gaussian distribution or in the frequency domain to a Rayleigh distribution, the effect of Ibutilide can easily be tracked using the statistics of their PDF (e.g., standard deviation), while this is difficult from the IEGM waveforms themselves.
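
The core fitting step can be sketched as follows, assuming a one-dimensional electrogram array; the signals below are synthetic stand-ins and the sampling rate and parameters are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

def pdf_summary(iegm):
    """Sketch of the PDF-based tracking idea for a 1-D electrogram.

    Time domain: fit a Gaussian to the signal amplitudes.
    Frequency domain: fit a Rayleigh distribution to the FFT magnitudes.
    The fitted standard deviation / scale are the statistics used for tracking.
    """
    mu, sigma = stats.norm.fit(iegm)                     # Gaussian fit in time domain
    spectrum = np.abs(np.fft.rfft(iegm))                 # one-sided magnitude spectrum
    loc, scale = stats.rayleigh.fit(spectrum, floc=0)    # Rayleigh fit in frequency domain
    return {"time_mu": mu, "time_sigma": sigma, "freq_rayleigh_scale": scale}

# Example: compare summaries before and after drug administration (synthetic data)
before = np.random.default_rng(0).normal(0.0, 1.0, 4000)
after = np.random.default_rng(1).normal(0.0, 0.4, 4000)
print(pdf_summary(before))
print(pdf_summary(after))
```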

Keywords: atrial fibrillation, catheter ablation, probability distribution function, time-frequency characteristics

Procedia PDF Downloads 145
924 Implementation of Statistical Parameters to Form an Entropic Mathematical Models

Authors: Gurcharan Singh Buttar

Abstract:

It has been discovered that although these two areas, statistics and information theory, are independent in nature, they can be combined to create applications in multidisciplinary mathematics. In the field of statistics, statistical parameters (measures) play an essential role in describing the population (distribution) under investigation, while information measures are crucial in the study of the ambiguity, assortment, and unpredictability present in an array of phenomena. The following communication is a link between the two, and it is demonstrated that well-known conventional statistical measures can be used as measures of information.

Keywords: probability distribution, entropy, concavity, symmetry, variance, central tendency

Procedia PDF Downloads 140
923 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimate procedure is exposed to attacks known as pilot contamination, with the aim of having an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each of which belongs to a constellation, shifted from the original N-PSK symbols by certain degrees. In this paper, legitimate pilots’ offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attacks (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack but the large number of possible combinations for the shifted constellations makes such a type of attack difficult to successfully mount. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from the signals in other cells should be also taken into account. Therefore, the inter-cell interference impact on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases inversely to the signal-to-interference-plus-noise ratio.
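
As a small illustration of the training-phase idea (not the paper's exact construction), the sketch below builds two legitimate pilot constellations as N-PSK symbol sets rotated by chosen offsets and counts the brute-force search space an adversary would face; the constellation size, offsets and grid resolution are assumptions for illustration only.

```python
import numpy as np

def shifted_npsk(n_psk=8, offset_deg=15.0):
    """One shifted pilot constellation: standard N-PSK rotated by offset_deg degrees."""
    k = np.arange(n_psk)
    return np.exp(1j * (2 * np.pi * k / n_psk + np.deg2rad(offset_deg)))

# Two shifted constellations used in the training phase; what matters for the
# method is the relation between the two offsets rather than their absolute values.
pilots_1 = shifted_npsk(8, 15.0)
pilots_2 = shifted_npsk(8, 40.0)

# Size of a brute-force search for the offset pair on, e.g., a 1-degree grid
grid = np.arange(0.0, 360.0, 1.0)
print("offset pairs an attacker would have to try:", grid.size * grid.size)
```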

Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications

Procedia PDF Downloads 193
922 Comparison of Receiver Operating Characteristic Curve Smoothing Methods

Authors: D. Sigirli

Abstract:

The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal scale results, which aim to predict the presence or absence probability of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased and non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results. The ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and since it is a step function, there can be different false positive rates for a given true positive rate value and vice versa. Besides, since the estimated ROC curve is jagged while the true ROC curve is smooth, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored: using kernel estimates, using log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimator based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study to compare the performance of the different methods under different scenarios with 1000 repetitions. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
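
As a point of reference for the smoothing idea, the sketch below contrasts the empirical (step-function) ROC estimator with a plain Gaussian-kernel-smoothed version; it is illustrative only, does not implement the boundary-corrected kernel proposed in the paper, and the sample sizes and bandwidths are arbitrary.

```python
import numpy as np
from scipy.stats import norm

def empirical_roc(scores_pos, scores_neg, thresholds):
    """Empirical (step-function) ROC estimator."""
    tpr = np.array([(scores_pos >= t).mean() for t in thresholds])
    fpr = np.array([(scores_neg >= t).mean() for t in thresholds])
    return fpr, tpr

def kernel_roc(scores_pos, scores_neg, thresholds, h_pos, h_neg):
    """Kernel-smoothed ROC: replace the indicator with a Gaussian CDF.
    Bandwidths h_pos/h_neg are illustrative; no boundary correction here."""
    tpr = np.array([norm.cdf((scores_pos - t) / h_pos).mean() for t in thresholds])
    fpr = np.array([norm.cdf((scores_neg - t) / h_neg).mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(42)
diseased = rng.normal(1.0, 1.0, 50)       # small-sample scenario
healthy = rng.normal(0.0, 1.0, 50)
ts = np.linspace(-4, 5, 200)
fpr_e, tpr_e = empirical_roc(diseased, healthy, ts)
fpr_k, tpr_k = kernel_roc(diseased, healthy, ts, 0.4, 0.4)
```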

Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve

Procedia PDF Downloads 131
921 Determining Optimum Locations for Runoff Water Harvesting in W. Watir, South Sinai, Using RS, GIS, and WMS Techniques

Authors: H. H. Elewa, E. M. Ramadan, A. M. Nosair

Abstract:

Rainfall water harvesting is considered an important tool for overcoming water scarcity in arid and semi-arid regions. Wadi Watir, in the southeastern part of the Sinai Peninsula, is considered one of the main and active basins in the Gulf of Aqaba drainage system. It is characterized by steep hills consisting mainly of impermeable rocks, whereas the streambeds are covered by a highly permeable mixture of gravel and sand. A comprehensive approach involving the integration of geographic information systems, remote sensing and watershed modeling was followed to identify the RWH capability in this area. Eight thematic layers, viz. volume of annual flood, overland flow distance, maximum flow distance, rock or soil infiltration, drainage frequency density, basin area, basin slope and basin length, were used as a multi-parametric decision support system for conducting weighted spatial probability models (WSPMs) to determine the potential areas for RWH. The WSPM maps classified the area into five RWH potentiality classes ranging from very low to very high. The three performed WSPM scenarios for W. Watir produced identical results among their maps for the high and very high RWH potentiality classes, which are the most suitable ones for applying surface water harvesting techniques. There is also a reasonable match among the three scenarios with respect to the areas of moderate, low and very low runoff harvesting potentiality. WSPM results have shown that the high and very high classes, which are the most suitable for RWH, represent approximately 40.23% of the total area of the basin. Accordingly, several locations were selected for the establishment of water harvesting dams and cisterns to improve the water conditions and living environment in the study area.
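
A weighted spatial probability model of this kind can be sketched as a weighted overlay of thematic rasters, as below; the layer names follow the abstract, but the weights, raster values and class breaks are illustrative placeholders rather than the study's calibrated values.

```python
import numpy as np

# Each thematic layer is a raster of normalized scores (0-1); weights are assumed.
rng = np.random.default_rng(0)
layers = {
    "annual_flood_volume":        (0.20, rng.random((100, 100))),
    "overland_flow_distance":     (0.10, rng.random((100, 100))),
    "max_flow_distance":          (0.10, rng.random((100, 100))),
    "infiltration":               (0.15, rng.random((100, 100))),
    "drainage_frequency_density": (0.15, rng.random((100, 100))),
    "basin_area":                 (0.10, rng.random((100, 100))),
    "basin_slope":                (0.10, rng.random((100, 100))),
    "basin_length":               (0.10, rng.random((100, 100))),
}

# Weighted overlay -> RWH potentiality surface
rwh_potential = sum(w * raster for w, raster in layers.values())

# Classify into five RWH potentiality classes (very low ... very high)
bins = np.quantile(rwh_potential, [0.2, 0.4, 0.6, 0.8])
classes = np.digitize(rwh_potential, bins)      # 0 = very low, 4 = very high
share_high = (classes >= 3).mean()
print(f"share of area in high/very high classes: {share_high:.1%}")
```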

Keywords: Sinai, Wadi Watir, remote sensing, geographic information systems, watershed modeling, runoff water harvesting

Procedia PDF Downloads 339
920 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada

Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach

Abstract:

Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1-year (2019/2020) between those who had and did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero [OR-zero] cost) was compared; 95% confidence intervals (CI) were reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) were reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences were representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS). The probability of hospitalization, ED visit, and ambulatory care was higher in those with a following obesity-related comorbidity versus those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After fully adjusting for potential confounders, the total healthcare cost ratio was higher in those with a following obesity-related comorbidity versus those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific disease-related health conditions have a higher probability of healthcare use and incur greater costs than those without specific comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for. In a specific referent case, hypertension was costliest (44% had this condition with an additional annual cost of $715 [$678/$753]). 
If these findings hold for the Canadian population, hypertension in persons with obesity represents an estimated additional annual healthcare cost of $2.5 billion among adults living with obesity (based on an adult obesity rate of 26%). Results of this study can inform decision making on investment in interventions that are effective in treating obesity and its complications.
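
A compact sketch of the two modelling steps described above is given below, run on synthetic data; the variable names (cost, hypertension, age, sex) and all coefficients are placeholders, not the study's administrative-data fields or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({"hypertension": rng.integers(0, 2, n),
                   "age": rng.integers(18, 80, n),
                   "sex": rng.integers(0, 2, n)})
mu = np.exp(6.0 + 0.25 * df["hypertension"] + 0.01 * df["age"])
df["cost"] = rng.gamma(2.0, mu / 2.0)
df.loc[rng.random(n) < 0.15, "cost"] = 0.0       # some people use no services at all

# 1) Probability of any use: logistic regression on zero vs. non-zero cost
df["any_cost"] = (df["cost"] > 0).astype(int)
logit = smf.logit("any_cost ~ hypertension + age + sex", data=df).fit(disp=0)

# 2) Total cost among users: GLM with gamma family and log link, so that
#    exponentiated coefficients read as cost ratios
glm = smf.glm("cost ~ hypertension + age + sex", data=df[df["cost"] > 0],
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()

print("odds ratio (any use, hypertension):", float(np.exp(logit.params["hypertension"])))
print("cost ratio (hypertension):", float(np.exp(glm.params["hypertension"])))
```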

Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence

Procedia PDF Downloads 126
919 Internal Migration and Poverty Dynamic Analysis Using a Bayesian Approach: The Tunisian Case

Authors: Amal Jmaii, Damien Rousseliere, Besma Belhadj

Abstract:

We explore the relationship between internal migration and poverty in Tunisia. We present a methodology combining the potential outcomes approach with multiple imputation to highlight the effect of internal migration on poverty states. We find that the probability of being poor decreases when leaving the poorest regions (the western areas) for the richer regions (greater Tunis and the eastern regions).

Keywords: internal migration, potential outcomes approach, poverty dynamics, Tunisia

Procedia PDF Downloads 286
918 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With the growth of datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on peak plantar pressure distribution and center of pressure during walking. Methodologies: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, due to differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as input to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (yes/no classification). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach has higher accuracy for predictions, enabling applications for foot disorder diagnosis. The detection accuracy was 71% for the deep learning algorithm and 78% for the SVM algorithm. Moreover, when we worked with the peak plantar pressure distribution, it was more accurate than the center of pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
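
A comparison of this kind can be sketched with scikit-learn as below, assuming a feature matrix of normalized plantar-pressure variables and a yes/no label for one foot disorder; both are synthetic placeholders here and the model sizes are arbitrary, so this is a sketch of the workflow rather than the study's actual pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: normalized peak-pressure / center-of-pressure features; y: disorder yes/no.
rng = np.random.default_rng(0)
X = rng.normal(size=(2323, 20))
y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.0, size=2323) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

nn = make_pipeline(StandardScaler(),
                   MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))

for name, model in [("neural network", nn), ("SVM", svm)]:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```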

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 321
917 A Novel Approach of Secret Communication Using Douglas-Peucker Algorithm

Authors: R. Kiruthika, A. Kannan

Abstract:

Steganography is the problem of hiding secret messages in 'innocent-looking' public communication so that the presence of the secret message cannot be detected. This paper introduces steganographic security in terms of computational indistinguishability from a channel of probability distributions on cover messages. The method first splits the cover image into two separate blocks using the Douglas-Peucker algorithm. The text message and the image are then hidden in the least significant bits (LSB) of the cover image.
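
For reference, a generic LSB embed/extract pair is sketched below; it hides a byte string in a synthetic grayscale cover image and does not reproduce the Douglas-Peucker block splitting described in the abstract.

```python
import numpy as np

def embed_lsb(cover: np.ndarray, message: bytes) -> np.ndarray:
    """Hide `message` in the least significant bits of a grayscale cover image."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = cover.flatten()
    if bits.size > flat.size:
        raise ValueError("message too long for this cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(cover.shape)

def extract_lsb(stego: np.ndarray, n_bytes: int) -> bytes:
    bits = stego.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

cover = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"secret")
assert extract_lsb(stego, 6) == b"secret"
```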

Keywords: steganography, lsb, embedding, Douglas-Peucker algorithm

Procedia PDF Downloads 338
916 Water Access and Food Security: A Cross-Sectional Study of SSA Countries in 2017

Authors: Davod Ahmadi, Narges Ebadi, Ethan Wang, Hugo Melgar-Quiñonez

Abstract:

Compared to other Least Developed Countries (LDCs), major countries in sub-Saharan Africa (SSA) have limited access to clean water. People in this region, and more specifically females, suffer from acute water scarcity problems. They are compelled to spend too much of their time fetching water for domestic uses such as drinking and washing. Apart from domestic use, water contributes to the food security status of people in vulnerable regions like SSA through its effects on agriculture and livestock. Livestock needs water to grow, and agriculture requires enormous quantities of water for irrigation. The main objective of this study is to explore the association between access to water and individuals’ food security status. Data from the 2017 Gallup World Poll (GWP) for SSA were analyzed (n=35,000). The target population in the GWP is the entire civilian, non-institutionalized population aged 15 and older. All sample selection is probability-based and nationally representative. Gallup surveys an average of 1,000 individuals per country. Three questions related to water (i.e., water quality, availability of water for crops and availability of water for livestock) were used as the exposure variables. The Food Insecurity Experience Scale (FIES) was used as the outcome variable. FIES measures individuals’ food security status and is composed of eight questions with simple dichotomous responses (1=Yes and 0=No). Different statistical analyses, such as descriptive statistics, cross-tabulations and binary logistic regression, form the basis of this study. Results from the descriptive analyses showed that more than 50% of the respondents had no access to enough water for crops and livestock. More than 85% of respondents were categorized as “food insecure”. Findings from the cross-tabulation analyses showed that food security status was significantly associated with water quality (0.135; P=0.000), water for crops (0.106; P=0.000) and water for livestock (0.112; P=0.000). In the regression analyses, the probability of being food insecure increased among people who expressed no satisfaction with water quality (OR=1.884 (1.768-2.008)), not enough water for crops (OR=1.721 (1.616-1.834)) and not enough water for livestock (OR=1.706 (1.819)). In conclusion, it should be noted that water access affects food security status in SSA.

Keywords: water access, agriculture, livestock, FIES

Procedia PDF Downloads 128
915 Future Trends of Mechatronics Engineering in Pakistan

Authors: Aqeela Mir, Akhtar Nawaz Malik, Javaid Iqbal

Abstract:

The paper presents a survey-based approach to observe the level of awareness regarding mechatronics in Pakistani society and the factors affecting the future development trend of mechatronics in Pakistan. With the help of these surveys, a new direction for building a mathematical model of the future development trend of mechatronics in Pakistan is also suggested.

Keywords: mechatronics society survey, future development trend of mechatronics in pakistan, probability estimation, mathematical model

Procedia PDF Downloads 488
914 Artificial Neural Networks Application on Nusselt Number and Pressure Drop Prediction in Triangular Corrugated Plate Heat Exchanger

Authors: Hany Elsaid Fawaz Abdallah

Abstract:

This study presents a new artificial neural network (ANN) model to predict the Nusselt number and pressure drop for turbulent flow in a triangular corrugated plate heat exchanger for forced air and turbulent water flow. An experimental investigation was performed to create a new dataset for the Nusselt number and pressure drop values in the following range of dimensionless parameters: plate corrugation angles from 0° to 60°, Reynolds number from 10000 to 40000, pitch to height ratio from 1 to 4, and Prandtl number from 0.7 to 200. Based on the ANN performance graph, a three-layer structure with {12-8-6} hidden neurons was chosen. The training procedure includes back-propagation with bias and weight adjustment, evaluation of the loss function for the training and validation datasets, and feed-forward propagation of the input parameters. The linear function was used as the activation function at the output layer, while the rectified linear unit activation function was utilized for the hidden layers. To accelerate the ANN training, the loss function was minimized with the adaptive moment estimation algorithm (Adam). The ‘MinMax’ normalization approach was utilized to avoid an increase in training time due to drastic differences in the loss function gradients with respect to the values of the weights. Since the test dataset is not used for the ANN training, a cross-validation technique was applied to the ANN using the new data. This procedure was repeated until loss function convergence was achieved, or for 4000 epochs with a batch size of 200 points. The program code was written in Python 3 using open-source machine learning libraries such as scikit-learn, TensorFlow and Keras. Mean average percent errors of 9.4% for the Nusselt number and 8.2% for the pressure drop were achieved with the ANN model, a higher accuracy than that of the generalized correlations. The performance validation of the obtained model was based on a comparison of predicted data with the experimental results, yielding excellent accuracy.
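
A minimal Keras sketch of the architecture described above is shown below: four dimensionless inputs, a {12-8-6} ReLU hidden stack, a linear output for the two targets, MinMax scaling and the Adam optimizer. The synthetic data only stand in for the experimental dataset, and the epoch count is shortened for illustration.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Inputs: corrugation angle, Reynolds number, pitch-to-height ratio, Prandtl number
X = rng.uniform([0.0, 10000.0, 1.0, 0.7], [60.0, 40000.0, 4.0, 200.0], (500, 4))
y = rng.uniform(0.0, 1.0, (500, 2))        # placeholder targets: Nu and pressure drop

# 'MinMax' normalization of the inputs to [0, 1]
x_min, x_max = X.min(axis=0), X.max(axis=0)
X_scaled = (X - x_min) / (x_max - x_min)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(6, activation="relu"),
    tf.keras.layers.Dense(2, activation="linear"),   # Nusselt number and pressure drop
])
model.compile(optimizer="adam", loss="mse", metrics=["mape"])
model.fit(X_scaled, y, validation_split=0.2, epochs=50, batch_size=200, verbose=0)
```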

Keywords: artificial neural networks, corrugated channel, heat transfer enhancement, Nusselt number, pressure drop, generalized correlations

Procedia PDF Downloads 64
913 Optimum Dimensions of Hydraulic Structures Foundation and Protections Using Coupled Genetic Algorithm with Artificial Neural Network Model

Authors: Dheyaa W. Abbood, Rafa H. AL-Suhaili, May S. Saleh

Abstract:

A model using artificial neural networks and the genetic algorithm technique is developed for obtaining the optimum dimensions of the foundation length and protections of small hydraulic structures. The procedure involves optimizing an objective function comprising a weighted summation of the state variables. The decision variables considered in the optimization are the upstream and downstream cutoff lengths and their angles of inclination, the foundation length, and the length of the downstream soil protection. These were obtained for a given maximum difference in head, depth of the impervious layer and degree of anisotropy. The optimization was carried out subject to constraints that ensure a structure safe against the uplift pressure force and a sufficient protection length at the downstream side of the structure to overcome an excessive exit gradient. The Geo-Studio software was used to analyze 1200 different cases. For each case, the length of protection and the volume of structure required to satisfy the safety factors mentioned previously were estimated. An ANN model was developed and verified using these cases' input-output sets as its database. A MATLAB code was written to perform genetic algorithm optimization coupled with this ANN model using a formulated optimization model. A sensitivity analysis was done for selecting the crossover probability, the mutation probability and level, the population size, the position of the crossover and the weight distribution for all the terms of the objective function. Results indicate that the factor with the greatest effect on the optimum solution is the required population size. The minimum value of this parameter that gives a stable global optimum solution is 30,000, while the other variables have little effect on the optimum solution.
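
The coupled GA-over-surrogate loop can be sketched as below; the surrogate function stands in for the trained ANN mapping the decision variables to the objective (including constraint penalties), and the bounds, population size, crossover/mutation probabilities and penalty are illustrative assumptions, not the study's calibrated settings.

```python
import numpy as np

rng = np.random.default_rng(0)
bounds = np.array([[1.0, 10.0]] * 6)          # 6 decision variables, placeholder ranges

def surrogate_objective(x):
    # Stand-in for the ANN prediction plus penalties (uplift safety, exit gradient)
    return np.sum((x - 4.0) ** 2) + 10.0 * max(0.0, 5.0 - x[0])

def ga(pop_size=200, generations=100, p_cross=0.8, p_mut=0.1):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], (pop_size, len(bounds)))
    for _ in range(generations):
        fitness = np.array([surrogate_objective(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # keep the best half
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            if rng.random() < p_cross:
                child = np.where(rng.random(len(bounds)) < 0.5, a, b)  # uniform crossover
            else:
                child = a.copy()
            mut = rng.random(len(bounds)) < p_mut                      # per-gene mutation
            child[mut] = rng.uniform(bounds[mut, 0], bounds[mut, 1])
            children.append(child)
        pop = np.array(children)
    return pop[np.argmin([surrogate_objective(ind) for ind in pop])]

print("best decision variables:", np.round(ga(), 2))
```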

Keywords: inclined cutoff, optimization, genetic algorithm, artificial neural networks, geo-studio, uplift pressure, exit gradient, factor of safety

Procedia PDF Downloads 303
912 Heuristic to Generate Random X-Monotone Polygons

Authors: Kamaljit Pati, Manas Kumar Mohanty, Sanjib Sadhu

Abstract:

A heuristic has been designed to generate a random simple monotone polygon from a given set of 'n' points lying on a 2-dimensional plane. Our heuristic generates a random monotone polygon in O(n) time after O(n log n) preprocessing time, which improves on previous work in which a random monotone polygon is produced in the same O(n) time but with O(k) preprocessing time for n < k < n². However, our heuristic does not generate all possible random polygons with uniform probability. The space complexity of our proposed heuristic is O(n).
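
A baseline construction of a simple x-monotone polygon on a point set is sketched below: sort by x (the O(n log n) step), split the interior points into an upper and a lower chain by the line through the leftmost and rightmost points, and concatenate the chains (the O(n) step). This deterministic split always yields a simple monotone polygon but does not reproduce the randomized chain assignment of the paper's heuristic; here the randomness comes only from the input points.

```python
import numpy as np

def x_monotone_polygon(points):
    """Return the vertices of a simple x-monotone polygon on `points` (n x 2)."""
    pts = points[np.argsort(points[:, 0])]
    p0, p1 = pts[0], pts[-1]
    d = p1 - p0
    # signed area test: positive -> left of p0->p1 (upper side of the split line)
    cross = d[0] * (pts[1:-1, 1] - p0[1]) - d[1] * (pts[1:-1, 0] - p0[0])
    upper = pts[1:-1][cross >= 0]
    lower = pts[1:-1][cross < 0]
    # upper chain left-to-right, then lower chain right-to-left
    return np.vstack([p0[None], upper, p1[None], lower[::-1]])

rng = np.random.default_rng(7)
poly = x_monotone_polygon(rng.random((20, 2)))
print(poly.shape)    # (20, 2): every input point is a polygon vertex
```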

Keywords: sorting, monotone polygon, visibility, chain

Procedia PDF Downloads 406
911 Genotypic and Allelic Distribution of Polymorphic Variants of Gene SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) and Their Association to the Clinical Response to Metformin in Adult Pakistani T2DM Patients

Authors: Sadaf Moeez, Madiha Khalid, Zoya Khalid, Sania Shaheen, Sumbul Khalid

Abstract:

Background: Inter-individual variation in response to metformin, which is considered a first-line therapy for T2DM treatment, is considerable. The current study aimed to investigate the impact of two genetic variants, Leu125Phe (rs77474263) and Gly64Asp (rs77630697) in the gene SLC47A1, on the clinical efficacy of metformin in Pakistani T2DM patients. Methods: The study included 800 T2DM patients (400 metformin responders and 400 metformin non-responders) along with 400 ethnically matched healthy individuals. The genotypes were determined by allele-specific polymerase chain reaction. In-silico analysis was done to confirm the effect of the two SNPs on the structure of the genes. Association was statistically determined using SPSS software. Results: The minor allele frequencies for rs77474263 and rs77630697 were 0.13 and 0.12. For SLC47A1 rs77474263, carriers of one copy of the mutant allele 'T' (the CT genotype) were fewer among metformin responders than metformin non-responders (29.2% vs. 35.5%). Likewise, the efficacy was further reduced (7.2% vs. 4.0%) in carriers of two copies of the 'T' allele (TT). Remarkably, T2DM cases with two copies of allele 'C' (CC) were 2.11 times more likely to respond to metformin monotherapy. For SLC47A1 rs77630697, carriers of one copy of the mutant allele 'A' (the GA genotype) were fewer among metformin responders than metformin non-responders (33.5% vs. 43.0%). Likewise, the efficacy was further reduced (8.5% vs. 4.5%) in carriers of two copies of the 'A' allele (AA). Remarkably, T2DM cases with two copies of allele 'G' (GG) were 2.41 times more likely to respond to metformin monotherapy. In-silico analysis revealed that these two variants affect the structure and stability of their corresponding proteins. Conclusion: The present data suggest that the SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) polymorphisms are associated with the therapeutic response to metformin in T2DM patients of Pakistan.

Keywords: diabetes, T2DM, SLC47A1, Pakistan, polymorphism

Procedia PDF Downloads 134
910 Dissociation of CDS from CVA Valuation Under Notation Changes

Authors: R. Henry, J-B. Paulin, St. Fauchille, Ph. Delord, K. Benkirane, A. Brunel

Abstract:

In this paper, the CVA computation for an interest rate swap is presented based on its rating. The rating and probability of default given by Moody’s Investors Service are used to calculate the CVA for a specific swap with different maturities. With this computation, the influence of rating variation on the CVA can be shown. The approach is applied to the analysis of Greek CDS variation during the Greek crisis between 2008 and 2011. The main point is the determination of the correlation between the fluctuation of the Greek CDS cumulative value and the variation of the swap CVA due to the change of rating.
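
A compact sketch of a unilateral CVA calculation of this kind is shown below, assuming a piecewise-constant hazard rate implied by the counterparty rating; the expected-exposure profile, hazard rate, recovery rate and discount curve are illustrative placeholders, not figures from the paper.

```python
import numpy as np

times = np.arange(0.5, 10.5, 0.5)                      # semi-annual grid for a 10y swap
expected_exposure = 1e6 * np.sin(np.pi * times / 10)   # stylized swap EE profile
hazard = 0.03                                          # hazard rate implied by the rating
recovery = 0.4
discount = np.exp(-0.02 * times)                       # flat 2% discount curve

# Marginal default probability in each period under a constant hazard rate
survival = np.exp(-hazard * times)
pd_marginal = -np.diff(np.concatenate([[1.0], survival]))

# CVA = LGD * sum of discounted expected exposure weighted by default probability
cva = (1 - recovery) * np.sum(discount * expected_exposure * pd_marginal)
print(f"CVA ≈ {cva:,.0f}")
```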

Keywords: CDS, computation, CVA, Greek crisis, interest rate swap, maturity, rating, swap

Procedia PDF Downloads 284
909 Quantum Teleportation Using W-BELL and Bell-GHZ Channels

Authors: Abhinav Pandey

Abstract:

Teleportation is the transfer of information between two particles without them being in physical contact with each other. It is an established concept in quantum computation and has been used in theoretical physics. Using an entangled pair, teleportation can be achieved in up to 100% of the possible measurement outcomes. We introduce a 5-qubit general entanglement system using W-Bell and Bell-GHZ channel pairs and show its usefulness in teleportation. In this paper, we use these channels to achieve probabilistic teleportation through channels conventionally regarded as non-teleporting, which has not been achieved before. We compare the two channels and determine which is better in terms of the probabilistic results of teleporting single qubits using W-Bell and Bell-GHZ channels.

Keywords: entanglement, teleportation, no cloning theorem, quantum mechanics, probability

Procedia PDF Downloads 24
908 Evaluation of a Piecewise Linear Mixed-Effects Model in the Analysis of Randomized Cross-over Trial

Authors: Moses Mwangi, Geert Verbeke, Geert Molenberghs

Abstract:

Cross-over designs are commonly used in randomized clinical trials to estimate the efficacy of a new treatment with respect to a reference treatment (placebo or standard). The main advantage of using a cross-over design over the conventional parallel design is its flexibility: every subject becomes his or her own control, thereby reducing confounding effects. Jones & Kenward discuss in detail more recent developments in the analysis of cross-over trials. We revisit the simple piecewise linear mixed-effects model, proposed by Mwangi et al. (in press), for its first application to the analysis of cross-over trials. We compared the performance of the proposed piecewise linear mixed-effects model with two commonly cited statistical models, namely (1) the Grizzle model and (2) the Jones & Kenward model, used in the estimation of the treatment effect in the analysis of a randomized cross-over trial. We estimated two performance measures (mean square error (MSE) and coverage probability) for the three methods, using data simulated from the proposed piecewise linear mixed-effects model. The piecewise linear mixed-effects model yielded the lowest MSE estimates compared to the Grizzle and Jones & Kenward models for both small (Nobs=20) and large (Nobs=600) sample sizes. Its coverage probability was highest compared to the Grizzle and Jones & Kenward models for both small and large sample sizes. A piecewise linear mixed-effects model is therefore a better estimator of the treatment effect than its two competitors (the Grizzle and Jones & Kenward models) in the analysis of cross-over trials. The data-generating mechanism used in this paper captures two time periods for a simple 2-treatments x 2-periods cross-over design. Its application is extendible to more complex cross-over designs with multiple treatments and periods. In addition, it is important to note that, even for single-response models, adding more random effects increases the complexity of the model and thus may be difficult or impossible to fit in some cases.

Keywords: Evaluation, Grizzle model, Jones & Kenward model, Performance measures, Simulation

Procedia PDF Downloads 102
907 Characterization of Probability Distributions through Conditional Expectation of Pair of Generalized Order Statistics

Authors: Zubdahe Noor, Haseeb Athar

Abstract:

In this article, first a relation for conditional expectation is developed and then is used to characterize a general class of distributions F(x) = 1-e^(-ah(x)) through conditional expectation of difference of pair of generalized order statistics. Some results are reduced for particular cases. In the end, a list of distributions is presented in the form of table that are compatible with the given general class.

Keywords: generalized order statistics, order statistics, record values, conditional expectation, characterization

Procedia PDF Downloads 443
906 Fire Safety Assessment of At-Risk Groups

Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen

Abstract:

Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to safe places. A disability can negatively influence a person's escape time, and this becomes even more important when people from this target group live alone. This research deals with the fire safety of such people's buildings by means of probabilistic methods. For this purpose, fire safety is addressed by modeling the egress of our target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent floor plan has been chosen for the safety analysis, and a limit state function has been developed according to a timeline evacuation model, which is based on a two-zone smoke development model. An analytical computer model (B-Risk) is used to simulate smoke development. Since most of the parameters involved in the fire development model carry uncertainty, an appropriate probability distribution function has been assigned to each of the variables with indeterministic nature. To assess safety and reliability for the at-risk groups, the fire safety index method has been chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm has been used to compute the beta index. Sensitivity analysis has been done to identify the most important and effective parameters for the fire safety of the at-risk group. Results showed that the area of openings and the distances to egress exits are the more important parameters in buildings, and the safety of occupants improves with increasing dimensions of the occupant space (building). Fire growth is more critical than other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities, it is less important. The type of disability has a great effect on the safety level of people who live in the same home layout, and people with visual impairment encounter a higher risk of being caught by the fire compared to those with movement disabilities.
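
A simplified reliability sketch of the limit state idea is given below: failure occurs when the required egress time exceeds the time available before conditions become untenable, and the safety index follows from the failure probability. The distributions and their parameters are illustrative assumptions, not the inputs used with the B-Risk model in the study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200_000

aset = rng.lognormal(mean=np.log(180.0), sigma=0.25, size=n)     # available time (s)
pre_movement = rng.lognormal(mean=np.log(60.0), sigma=0.5, size=n)
travel = rng.normal(loc=45.0, scale=15.0, size=n).clip(min=5.0)
rset = pre_movement + travel                                      # required egress time (s)

g = aset - rset                      # limit state function: g < 0 means failure
p_failure = np.mean(g < 0)
beta = -norm.ppf(p_failure)          # safety (reliability) index
print(f"P(failure) = {p_failure:.4f}, beta = {beta:.2f}")
```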

Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty

Procedia PDF Downloads 81
905 Characteristic Function in Estimation of Probability Distribution Moments

Authors: Vladimir S. Timofeev

Abstract:

In this article, the problem of estimating distributional moments is considered. A new approach to moment estimation based on the characteristic function is proposed. Using a statistical simulation technique, the author shows that the new approach has some robustness properties. Numerical differentiation is used to calculate the derivatives of the characteristic function. The obtained results confirm that the author's idea has a certain working efficiency, and it can be recommended for statistical applications.
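
A toy numerical sketch of the idea: the k-th raw moment equals the k-th derivative of the characteristic function at zero divided by i^k, and the derivatives are taken by finite differences of the empirical characteristic function. The step size and test distribution below are arbitrary choices, not the author's settings.

```python
import numpy as np

def ecf(sample, t):
    """Empirical characteristic function evaluated at t."""
    return np.mean(np.exp(1j * t * sample))

def moments_from_ecf(sample, h=1e-3):
    """First two raw moments from finite-difference derivatives of the ECF at 0."""
    phi_p, phi_0, phi_m = ecf(sample, h), 1.0, ecf(sample, -h)
    m1 = ((phi_p - phi_m) / (2 * h) / 1j).real           # phi'(0) / i
    m2 = (-(phi_p - 2 * phi_0 + phi_m) / h**2).real      # -phi''(0)
    return m1, m2

rng = np.random.default_rng(3)
x = rng.normal(2.0, 1.5, 10_000)
m1, m2 = moments_from_ecf(x)
print(f"mean ≈ {m1:.3f}, variance ≈ {m2 - m1**2:.3f}")    # compare to 2.0 and 2.25
```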

Keywords: characteristic function, distributional moments, robustness, outlier, statistical estimation problem, statistical simulation

Procedia PDF Downloads 484
904 Organic Paddy Production as a Coping Strategy to the Adverse Impact of Climate Change

Authors: Thapa M., J.P. Dutta, K.R. Pandey, R.R. Kattel

Abstract:

Nepal is extremely vulnerable to the impact of climate change. To mitigate the effects of climate change on agricultural production and productivity, a range of adaptive strategies needs to be considered. The study was conducted to assess organic paddy production as a coping strategy to the adverse impact of climate change in Phulbari VDC of Chitwan district. Altogether, 120 respondents (60 adopters of organic farming and 60 non-adopters) were selected using the snowball sampling technique. A pre-tested interview schedule, direct observation, focus group discussions, key informant interviews, as well as secondary data were used to collect the required information. Factors determining the adoption of organic farming were found to be age, years of schooling, training, frequency of extension contact, perception about climate change, number of economically active members, and being poor. A unit increase in these factors, except being poor, would increase the probability of adoption by 4.1%, 7.5%, 7.8%, 43.1%, 41.8% and 7%, respectively; being poor, however, would decrease the probability of adoption of organic farming by 5.1%. The average organic matter content in the adopters' fields was higher (2.7%) than in the non-adopters' fields (2.5%). The regression results showed that type of farmer, price and area under rice cultivation had a positive and significant relationship with income, whereas the dependency ratio had a negative relationship. As the years since adoption of organic farming increase, rice production declines in the first two years and thereafter increases, while the cost of production decreases with the years since adoption. The respondents adapted to the changing climate through diversification of crops, use of resistant varieties and following good cropping patterns. Gradually growing consumer awareness about health and preference for quality food products are the strong points behind organic farming, whereas the lack of bio-fertilizers, the lack of effective extension services, and the absence of price differentiation between organic and inorganic products are the weak points. There is a need for more training and education to change the attitude of farmers and enhance their confidence in the role of organic farming in coping with climate change impacts.

Keywords: Organic farming, climate change, sustainable development

Procedia PDF Downloads 438
903 Risk Factors Affecting Construction Project Cost in Oman

Authors: Omar Amoudi, Latifa Al Brashdi

Abstract:

Construction projects are always subject to risks and uncertainties due to their unique and dynamic nature, outdoor work environment, the wide range of skills employed, and the various parties involved, in addition to the situation of the construction business environment at large. Altogether, these risks and uncertainties affect project objectives and lead to cost overruns, delay, and poor quality. Construction projects in Oman often experience cost overruns and delay. Managing these risks and reducing their impacts on construction cost requires first identifying these risks and then analyzing their severity on project cost to obtain a deep understanding of them. This, in turn, will assist construction managers in managing and tackling these risks. This paper aims to investigate the main risk factors that affect construction project cost in the Sultanate of Oman. In order to achieve the main aim, a literature review was carried out to identify the main risk factors affecting construction cost. Thirty-three risk factors were identified from the literature. Then, a questionnaire survey was designed and distributed among construction professionals (i.e., clients, contractors and consultants) to obtain their opinion on the probability of occurrence of each risk factor and its possible impact on construction project cost. The collected data were analyzed qualitatively and in several ways. The severity of each risk factor was obtained by multiplying the probability of occurrence of the risk factor by its impact. The findings of this study reveal that the most significant risk factors with a high severity impact on construction project cost are: Change of Oil Price, Delay of Materials and Equipment Delivery, Changes in Laws and Regulations, Improper Budgeting and Contingencies, Lack of Skilled Workforce and Personnel, Delays Caused by Contractor, Delays of Owner Payments, Delays Caused by Client, and Funding Risk. The results can be used as a basis for construction managers to make informed decisions and to produce risk response procedures and strategies to tackle these risks and reduce their negative impacts on construction project cost.

Keywords: construction cost, construction projects, Oman, risk factors, risk management

Procedia PDF Downloads 314
902 Factorization of Computations in Bayesian Networks: Interpretation of Factors

Authors: Linda Smail, Zineb Azouz

Abstract:

Given a Bayesian network over a set I of discrete random variables, we are interested in computing the probability distribution P(S), where S is a subset of I. The general idea is to write the expression of P(S) in the form of a product of factors, where each factor is easy to compute. More importantly, it is very useful to give an interpretation of each of the factors in terms of conditional probabilities. This paper considers a semantic interpretation of the factors involved in computing marginal probabilities in Bayesian networks. Establishing such semantic interpretations is indeed interesting and relevant in the case of large Bayesian networks.
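
A toy illustration of factorized marginalization on a three-node chain A → B → C is sketched below; the conditional probability tables are arbitrary, and the point is simply that each intermediate factor has a direct reading as a (conditional or marginal) probability distribution.

```python
import numpy as np

p_a = np.array([0.6, 0.4])                   # P(A)
p_b_given_a = np.array([[0.9, 0.1],          # P(B | A=0)
                        [0.3, 0.7]])         # P(B | A=1)
p_c_given_b = np.array([[0.8, 0.2],          # P(C | B=0)
                        [0.25, 0.75]])       # P(C | B=1)

# Sum out A: the resulting factor is exactly the marginal P(B)
p_b = p_a @ p_b_given_a
# Sum out B: the resulting factor is the marginal of interest, P(C)
p_c = p_b @ p_c_given_b

print("P(B) =", p_b, " P(C) =", p_c, " (each sums to 1)")
```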

Keywords: Bayesian networks, D-Separation, level two Bayesian networks, factorization of computation

Procedia PDF Downloads 503
901 Adaptive CFAR Analysis for Non-Gaussian Distribution

Authors: Bouchemha Amel, Chachoui Takieddine, H. Maalem

Abstract:

Automatic detection of targets in a modern radar system is based primarily on the concept of the adaptive CFAR detector. For effective detection, the influence of disturbances due to clutter must be minimized. The detection algorithm adapts the CFAR detection threshold, which is proportional to the average power of the clutter, thereby maintaining a constant probability of false alarm. In this article, we analyze the performance of two variants of adaptive algorithms, CA-CFAR and OS-CFAR, and compare the thresholds of these detectors in a marine (non-Gaussian) environment with a Weibull distribution.
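
For reference, a cell-averaging CFAR pass over a one-dimensional power profile is sketched below; the window sizes and false-alarm probability are illustrative, the scaling factor assumes exponentially distributed (Gaussian-noise) clutter power, and neither the OS-CFAR variant nor the Weibull clutter case of the paper is implemented here.

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=2, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D power profile (illustrative settings)."""
    n = len(power)
    n_half = num_train // 2
    # threshold scaling for exponentially distributed clutter power
    alpha = num_train * (pfa ** (-1.0 / num_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for i in range(n_half + num_guard, n - n_half - num_guard):
        lead = power[i - num_guard - n_half : i - num_guard]
        lag = power[i + num_guard + 1 : i + num_guard + 1 + n_half]
        noise_level = np.mean(np.concatenate([lead, lag]))   # local clutter estimate
        detections[i] = power[i] > alpha * noise_level
    return detections

rng = np.random.default_rng(0)
profile = rng.exponential(1.0, 500)      # clutter-only power samples
profile[[120, 300]] += 25.0              # two injected targets
print(np.flatnonzero(ca_cfar(profile)))  # indices flagged as detections
```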

Keywords: CFAR, threshold, clutter, distribution, Weibull, detection

Procedia PDF Downloads 561
900 Analytical Modeling of Globular Protein-Ferritin in α-Helical Conformation: A White Noise Functional Approach

Authors: Vernie C. Convicto, Henry P. Aringa, Wilson I. Barredo

Abstract:

This study presents a conformational model of the helical structures of a globular protein, particularly ferritin, in the framework of the white noise path integral formulation, using associated Legendre functions, Bessel functions, and convolutions of Bessel and trigonometric functions as modulating functions. The model incorporates the chirality features of proteins and their helix-turn-helix structural motif.

Keywords: globular protein, modulating function, white noise, winding probability

Procedia PDF Downloads 450