Search results for: Random Variable
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1364

974 Large Deviations for Lacunary Systems

Authors: Bainian Li, Kongsheng Zhang

Abstract:

Let {Xi} be a lacunary system. We establish a large deviations inequality for lacunary systems and, furthermore, obtain a Marcinkiewicz law of large numbers for sequences of dependent random variables.

Keywords: Lacunary system, large deviations, locally generalized Gaussian, strong law of large numbers.

973 A Methodology for the Synthesis of Multi-Processors

Authors: Hamid Yasinian

Abstract:

Random epistemologies and hash tables have garnered minimal interest from both security experts and experts in the last several years. In fact, few information theorists would disagree with the evaluation of expert systems. In our research, we discover how flip-flop gates can be applied to the study of superpages. Though such a hypothesis at first glance seems perverse, it is derived from known results.

Keywords: Synthesis, Multi-Processors, Interactive Model, Moore's Law.

972 Service Quality and Consumer Behavior on Metered Taxi Services

Authors: Nattapong Techarattanased

Abstract:

The purposes of this research are to compare behaviors on the use of metered taxi services classified by demographic factors, and to study the influence of perceived service quality on the usage behaviors of metered taxi consumers in the Bangkok Metropolitan Area. The sample consisted of 400 metered taxi service users in the Bangkok Metropolitan Area, and a questionnaire was used as the tool for collecting the data. The statistics used for analysis were the mean and multiple regression analysis. The results revealed that consumers rate the overall quality of services in each aspect, including the tangible aspects of the service, responsiveness to customers, and understanding and knowing of customers, at a moderate level, except for the assurance of confidence and trustworthiness, which is rated at a high level. The hypothesis test found that service quality in terms of the assurance given to customers affects the usage behavior of metered taxi services with respect to the frequency of use per month; this variable forecasts 1.9% of the variation. In addition, service quality in terms of assurance and of responsiveness to customers affects the usage behavior of metered taxi services with respect to the expenses on the use of services per month; these variables forecast 2.1% of the variation.

Keywords: Consumer behavior, metered taxi, satisfaction, service quality.

971 The Impact of Government Expenditure on Economic Growth: A Study of Asian Countries

Authors: K. P. K. S. Lahirushan, W. G. V. Gunasekara

Abstract:

The main purpose of this study is to identify the impact of government expenditure on economic growth in Asian countries. Consequently, the main objective is to analyze whether government expenditure causes economic growth in Asian countries, or vice versa, and then to scrutinize whether a long-run equilibrium relationship exists between them. The study is based entirely on secondary data. The methodology is quantitative and includes the econometric techniques of cointegration, a panel fixed effects model, and Granger causality, applied to panel data for nine Asian countries (Singapore, Malaysia, Thailand, South Korea, Japan, China, Sri Lanka, India, and Bhutan), with 44 observations per country from 1970 to 2013, totaling 396 observations. The model used is the random effects panel OLS model. The empirical findings, first, exhibit a significant positive impact of government expenditure on Gross Domestic Product in the Asian region. Secondly, government expenditure and economic growth show a long-run relationship in Asian countries. Finally, causality runs from economic growth to government expenditure and from government expenditure to economic growth in Asian countries. Hence, the study validates that the results are in line with both Keynesian theory and Wagner's law. Consequently, it can be concluded that government plays a vital role in the economic growth of Asian countries. However, if government expenditure is not aligned with the economy's needs, it may affect the economy negatively, so that society bears the costs.

Keywords: Asian Countries, Government Expenditure, Keynesian theory, Wagner’s theory, Random effects panel OLS model.

970 Enhanced Efficacy of Kinetic Power Transform for High-Speed Wind Field

Authors: Nan-Chyuan Tsai, Chao-Wen Chiang, Bai-Lu Wang

Abstract:

The three-time-scale plant model of a wind power generator, including a wind turbine, a flexible vertical shaft, a Variable Inertia Flywheel (VIF) module, an Active Magnetic Bearing (AMB) unit and the applied wind sequence, is constructed. So that the wind power generator can still operate when the spindle speed exceeds its rated value, the VIF is included to slow the spindle down appropriately once a stronger wind field is exerted. To prevent potential damage from the shaft colliding with conventional bearings, the AMB unit is proposed to regulate the shaft position deviation. By the singular perturbation order-reduction technique, a lower-order plant model can be established for the synthesis of the feedback controller. Two major system parameter uncertainties, an additive uncertainty and a multiplicative uncertainty, are constituted by the wind turbine and the VIF respectively. A Frequency Shaping Sliding Mode Control (FSSMC) loop is proposed to account for these uncertainties and suppress the unmodeled higher-order plant dynamics. Finally, the efficacy of the FSSMC is verified by intensive computer and experimental simulations for regulation of the shaft position deviation and counter-balancing of unpredictable wind disturbance.

Keywords: Sliding Mode Control, Singular Perturbation, Variable Inertia Flywheel.

969 Quantitative Assessment of Different Formulations of Antimalarials in Sentinel Sites of India

Authors: Taruna Katyal Arora, Geeta Kumari, Hari Shankar, Neelima Mishra

Abstract:

Substandard and counterfeit antimalarials are a major problem in malaria-endemic areas. The availability of counterfeit or substandard medicines not only decreases efficacy in patients but is also one of the contributing factors to the development of antimalarial drug resistance. Owing to this, a pilot study was conducted to survey the quality of drugs collected from different malaria-endemic areas of India. Artesunate+Sulphadoxine-Pyrimethamine (AS+SP), Artemether-Lumefantrine (AL) and Chloroquine (CQ) tablets were randomly picked from public health facilities in selected states of India. The quality of antimalarial drugs from these areas was assessed using the Global Pharma Health Fund Minilab test kit, which includes physical/visual inspection and a disintegration test. Thin-layer chromatography (TLC) was carried out for semi-quantitative assessment of the active pharmaceutical ingredients. A total of 45 brands, of which 21 were CQ, 14 were AL and 10 were AS+SP, were tested from the states of Uttar Pradesh (U.P.), Mizoram, Meghalaya and Gujarat. One out of the 45 samples showed a variable disintegration and retention factor, which could have been due to substandard quality or other factors, including storage. However, HPLC analysis confirmed a standard active pharmaceutical ingredient, so humid temperature and moisture during storage may account for the observed result.

Keywords: Antimalarial medicines, counterfeit, substandard, thin layer chromatography.

968 Modeling and Analysis of Adaptive Buffer Sharing Scheme for Consecutive Packet Loss Reduction in Broadband Networks

Authors: Sakshi Kausha, R. K. Sharma

Abstract:

High speed networks provide real-time variable bit rate service with diversified traffic flow characteristics and quality requirements. The variable bit rate traffic has stringent delay and packet loss requirements, and the burstiness of the correlated traffic makes dynamic buffer management highly desirable to satisfy the Quality of Service (QoS) requirements. This paper presents an algorithm for optimization of an adaptive buffer allocation scheme based on the loss of consecutive packets in the data stream and the buffer occupancy level. The buffer is designed to allow the input traffic to be partitioned into different priority classes, and it controls the threshold dynamically based on the input traffic behavior. The algorithm allows an input packet to enter the buffer if the occupancy level is less than the threshold value for that packet's priority; the threshold is varied dynamically at runtime based on the packet loss behavior, as in the sketch below. The simulation is run for two priority classes of the input traffic, a real-time and a non-real-time class. The simulation results show that Adaptive Partial Buffer Sharing (ADPBS) has better performance than Static Partial Buffer Sharing (SPBS) and a First In First Out (FIFO) queue under the same traffic conditions.
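A minimal sketch of the threshold-based admission rule described above: a packet is admitted only if the current occupancy is below the threshold of its priority class, and a class's threshold is nudged upward after consecutive losses. The class names, constants and adaptation step are illustrative assumptions, not the ADPBS parameters from the paper.

```python
# Hedged sketch of partial buffer sharing with a simple consecutive-loss adaptation rule.
BUFFER_SIZE = 100
thresholds = {"realtime": 90, "non_realtime": 60}   # per-class admission thresholds
consecutive_losses = {"realtime": 0, "non_realtime": 0}
occupancy = 0

def admit(packet_class):
    """Admit a packet if the buffer occupancy is below its class threshold."""
    global occupancy
    if occupancy < thresholds[packet_class]:
        occupancy += 1
        consecutive_losses[packet_class] = 0
        return True
    consecutive_losses[packet_class] += 1
    # Adapt: a class that keeps losing packets back-to-back gets a higher threshold,
    # capped at the physical buffer size (illustrative policy only).
    if consecutive_losses[packet_class] >= 3:
        thresholds[packet_class] = min(BUFFER_SIZE, thresholds[packet_class] + 5)
    return False

def depart():
    """A packet leaves the buffer after service."""
    global occupancy
    occupancy = max(0, occupancy - 1)
```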

Keywords: Buffer Management, Consecutive packet loss, Quality-of-Service, Priority based packet discarding, partial buffer sharing.

967 Accelerating Quantum Chemistry Calculations: Machine Learning for Efficient Evaluation of Electron-Repulsion Integrals

Authors: Nishant Rodrigues, Nicole Spanedda, Chilukuri K. Mohan, Arindam Chakraborty

Abstract:

A crucial objective in quantum chemistry is the computation of the energy levels of chemical systems. This task requires electron-repulsion integrals as inputs and the steep computational cost of evaluating these integrals poses a major numerical challenge in efficient implementation of quantum chemical software. This work presents a moment-based machine learning approach for the efficient evaluation of electron-repulsion integrals. These integrals were approximated using linear combinations of a small number of moments. Machine learning algorithms were applied to estimate the coefficients in the linear combination. A random forest approach was used to identify promising features using a recursive feature elimination approach, which performed best for learning the sign of each coefficient, but not the magnitude. A neural network with two hidden layers was then used to learn the coefficient magnitudes, along with an iterative feature masking approach to perform input vector compression, identifying a small subset of orbitals whose coefficients are sufficient for the quantum state energy computation. Finally, a small ensemble of neural networks (with a median rule for decision fusion) was shown to improve results when compared to a single network.
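A hedged sketch of the two-stage idea described above, using scikit-learn on synthetic stand-in data: recursive feature elimination with a random forest to select features and predict the coefficient sign, then a neural network with two hidden layers for the coefficient magnitude. The data, feature counts and layer sizes are placeholders, not the authors' setup.

```python
# Sketch only: synthetic data stands in for the moment-based integral features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 50))           # placeholder feature vectors (e.g. moments)
coeff = X[:, :5] @ rng.normal(size=5)     # placeholder target coefficients
sign, magnitude = np.sign(coeff), np.abs(coeff)

# Stage 1: recursive feature elimination with a random forest learns the sign.
rfe = RFE(RandomForestClassifier(n_estimators=200, random_state=0),
          n_features_to_select=10)
rfe.fit(X, sign)
X_sel = rfe.transform(X)

# Stage 2: a neural network with two hidden layers learns the magnitude
# on the reduced feature set.
nn = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
nn.fit(X_sel, magnitude)
pred = rfe.predict(X) * nn.predict(X_sel)  # recombine predicted sign and magnitude
```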

Keywords: Quantum energy calculations, atomic orbitals, electron-repulsion integrals, ensemble machine learning, random forests, neural networks, feature extraction.

966 Evaluation of the Impact of Dataset Characteristics for Classification Problems in Biological Applications

Authors: Kanthida Kusonmano, Michael Netzer, Bernhard Pfeifer, Christian Baumgartner, Klaus R. Liedl, Armin Graber

Abstract:

The availability of high dimensional biological datasets, such as those from gene expression, proteomic, and metabolic experiments, can be leveraged for the diagnosis and prognosis of diseases. Many classification methods in this area have been studied to predict disease states and separate predefined classes, such as patients with a particular disease versus healthy controls. However, most of the existing research focuses only on a specific dataset, and there is a lack of generic comparison between classifiers that might provide a guideline for biologists or bioinformaticians selecting the proper algorithm for new datasets. In this study, we compare the performance of popular classifiers, namely Support Vector Machine (SVM), Logistic Regression, k-Nearest Neighbor (k-NN), Naive Bayes, Decision Tree, and Random Forest, on mock datasets. We mimic common biological scenarios, simulating various proportions of real discriminating biomarkers and different effect sizes thereof. The results show that SVM performs quite stably and reaches a higher AUC than the other methods, which may be explained by the ability of SVM to minimize the probability of error. Moreover, Decision Tree, with its good applicability for diagnosis and prognosis, shows good performance in our experimental setup. Logistic Regression and Random Forest, however, strongly depend on the ratio of discriminators and perform better when a higher number of discriminators is present.
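The comparison described above can be reproduced in spirit with a short scikit-learn script; the mock high-dimensional dataset below (a few informative "biomarkers" among many noise features) and the AUC scoring are illustrative assumptions, not the authors' simulation design.

```python
# Hedged sketch: compare standard classifiers on a mock high-dimensional dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# 1000 features, only 10 of which discriminate between the two classes.
X, y = make_classification(n_samples=200, n_features=1000, n_informative=10,
                           n_redundant=0, random_state=0)

classifiers = {
    "SVM": SVC(kernel="linear"),
    "Logistic Regression": LogisticRegression(max_iter=5000),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name:20s} mean AUC = {auc:.3f}")
```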

Keywords: Classification, High dimensional data, Machine learning.

965 Stock Price Forecast by Using Neuro-Fuzzy Inference System

Authors: Ebrahim Abbasi, Amir Abouec

Abstract:

In this research, the researchers designed a model to investigate the trend of the stock price of the IRAN KHODRO Corporation at the Tehran Stock Exchange by utilizing an Adaptive Neuro-Fuzzy Inference System. For the long-term period, a neuro-fuzzy model with two triangular membership functions and four independent variables, namely trade volume, Dividend Per Share (DPS), Price to Earnings ratio (P/E) and closing price, with stock price fluctuation as the dependent variable, was selected as the optimal model. For the short-term period, a neuro-fuzzy model with two triangular membership functions for the first quarter of a year, two trapezoidal membership functions for the second quarter, two Gaussian combination membership functions for the third quarter, and two trapezoidal membership functions for the fourth quarter was selected as the optimal model for stock price forecasting. In this case, three independent variables, namely trade volume, price to earnings ratio and closing stock price, and a dependent variable of stock price fluctuation, were selected for the optimal model. The findings of the research demonstrate that the trend of the stock price can be forecasted with a lower level of error.

Keywords: Stock Price forecast, membership functions, Adaptive Neuro-Fuzzy Inference System, trade volume, P/E, DPS.

964 Evolutionary Approach for Automated Discovery of Censored Production Rules

Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh

Abstract:

In the recent past, there has been increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) in the form If P Then D. PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs proposed by Michalski and Winston, exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception conditions when the resources needed to establish their presence are tight or there is simply no information available as to whether they hold or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D, as illustrated in the sketch below. This paper presents a classification algorithm, based on an evolutionary approach, that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach has a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested, and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
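A tiny illustrative sketch (not from the paper) of how a censored production rule "If P Then D Unless C" is evaluated: the censor flips the polarity of the decision, exactly as described above. The dictionary encoding is an assumption for illustration; the paper's chromosome encoding is not reproduced here.

```python
# Illustrative evaluation of a Censored Production Rule: If P Then D Unless C.
def evaluate_cpr(rule, facts):
    """Return D if P holds and the censor C does not; return ~D if both P and C hold."""
    if not rule["P"](facts):
        return None                       # rule does not fire
    if rule["C"](facts):                  # censor holds: polarity of D flips
        return ("not", rule["D"])
    return rule["D"]

# Example: "If it is a bird Then it flies Unless it is a penguin".
rule = {"P": lambda f: f.get("bird", False),
        "D": "flies",
        "C": lambda f: f.get("penguin", False)}
print(evaluate_cpr(rule, {"bird": True}))                   # 'flies'
print(evaluate_cpr(rule, {"bird": True, "penguin": True}))  # ('not', 'flies')
```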

Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.

963 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, it is assumed that the data behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective, multi-product economic production quantity model. It is then solved with three multi-objective decision-making (MODM) methods, which are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using a statistical analysis method and multi-attribute decision-making (MADM). The results demonstrate that the augmented ε-constraint method performs better than the global criteria and goal programming methods in terms of the optimal values of the two objective functions and CPU time. Sensitivity analysis is performed to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.

962 Study of Storms on the Javits Center Green Roof

Authors: A. Cho, H. Sanyal, J. Cataldo

Abstract:

A quantitative analysis of the different variables on both the South and North green roofs of the Jacob K. Javits Convention Center was undertaken to find mathematical relationships between net radiation and evapotranspiration (ET), average outside temperature, and the lysimeter weight. Groups of datasets were analyzed, and the relationships were plotted on linear and semi-log graphs to find consistent relationships. Antecedent conditions for each rainstorm were also recorded and plotted against the volumetric water difference within the lysimeter. The first relation was an inverse parabolic relationship between the lysimeter weight and the net radiation and ET: the peaks and valleys of the lysimeter weight corresponded to valleys and peaks in the net radiation and ET respectively, with the 8/22/15 and 1/22/16 datasets showing this trend. The U-shaped and inverse U-shaped plots of the two variables coincided, indicating an inverse relationship between the two variables. Cross-variable relationships were examined through graphs with lysimeter weight as the dependent variable on the y-axis; 10 out of the 16 plots of lysimeter weight vs. outside temperature had R² values greater than 0.9. Antecedent conditions were also recorded for rainstorms, categorized by the amount of precipitation accumulating during the storm. Plotted against the change in the volumetric water weight difference within the lysimeter, a logarithmic regression was found with large R² values. The datasets were compared using the Mann-Whitney U test, at a significance level of 5%, to see whether they were statistically different; the resulting U test statistics did not support the hypothesis that the datasets were different.
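As a hedged illustration of the significance test mentioned above (on placeholder data, not the Javits roof measurements), a Mann-Whitney U comparison of two datasets at the 5% level can be run with SciPy:

```python
# Illustrative Mann-Whitney U test on placeholder data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
dataset_a = rng.normal(loc=10.0, scale=2.0, size=30)   # e.g. lysimeter weights, storm A
dataset_b = rng.normal(loc=10.5, scale=2.0, size=30)   # e.g. lysimeter weights, storm B

u_stat, p_value = mannwhitneyu(dataset_a, dataset_b, alternative="two-sided")
alpha = 0.05
different = p_value < alpha   # reject the null of identical distributions at the 5% level
print(f"U = {u_stat:.1f}, p = {p_value:.3f}, statistically different: {different}")
```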

Keywords: Green roof, green infrastructure, Javits Center, evapotranspiration, net radiation, lysimeter.

961 Non-Convex Multi Objective Economic Dispatch Using Ramp Rate Biogeography Based Optimization

Authors: Susanta Kumar Gachhayat, S. K. Dash

Abstract:

Multi-objective non-convex economic dispatch problems for thermal power plants are of grave concern in deciding the cost of generation and in reducing emission levels so as to diminish global warming and the greenhouse effect. This paper deals with ramp rate constraints, so as to handle the inequality constraints better and to incorporate valve point loading in the generation cost of a thermal power plant, through ramp rate biogeography based optimization involving mutation and migration. In 50 out of 100 trials, the cost and emission objective functions were found to outperform other classical methods, such as the lambda iteration method and quadratic programming, and many heuristic methods, such as particle swarm optimization, weight improved particle swarm optimization, constriction factor based particle swarm optimization, and moderate random particle swarm optimization. Ramp rate biogeography based optimization proves quite advantageous in solving non-convex multi-objective economic dispatch problems subject to nonlinear loads that pollute the source, giving rise to third harmonic distortions and other such disturbances.

Keywords: Economic load dispatch, Biogeography based optimization, Ramp rate biogeography based optimization, Valve point loading, Moderate random particle swarm optimization method, Weight improved particle swarm optimization method.

960 Formant Tracking Linear Prediction Model using HMMs for Noisy Speech Processing

Authors: Zaineb Ben Messaoud, Dorra Gargouri, Saida Zribi, Ahmed Ben Hamida

Abstract:

This paper presents a formant-tracking linear prediction (FTLP) model for speech processing in noise. The main focus of this work is the detection of formant trajectories based on Hidden Markov Models (HMM), for improved formant estimation in noise. The approach proposed in this paper provides a systematic framework for modelling and utilization of a time sequence of peaks which satisfies continuity constraints on the parameters, with the peaks modelled by the LP parameters. The formant-tracking LP model estimation is composed of three stages: (1) a pre-cleaning multi-band spectral subtraction stage to reduce the effect of residual noise on the formants; (2) an estimation stage where an initial estimate of the LP model of speech for each frame is obtained; (3) a formant classification stage using probability models of formants and Viterbi decoders. The evaluation results for the estimation of the formant-tracking LP model, tested against a Gaussian white noise background, demonstrate that the proposed combination of the initial noise reduction stage with formant tracking and variable-order LPC analysis results in a significant reduction in errors and distortions. The performance was evaluated with noisy natural vowels extracted from French and English vocabulary speech signals at an SNR of 10 dB. In each case, the estimated formants are compared to reference formants.

Keywords: Formant Estimation, HMM, Multi Band Spectral Subtraction, Variable order LPC coding, White Gaussian Noise.

959 Analysis of Seismic Waves Generated by Blasting Operations and their Response on Buildings

Authors: S. Ziaran, M. Musil, M. Cekan, O. Chlebo

Abstract:

The paper analyzes the response of buildings and industrial structures to seismic waves (low frequency mechanical vibration) generated by blasting operations. The principles of seismic analysis can be applied to different kinds of excitation such as: earthquakes, wind, explosions, random excitation from local transportation, periodic excitation from large rotating machines and/or machines with reciprocating motion, metal forming processes such as forging, shearing and stamping, chemical reactions, construction and earth moving work, and other strong deterministic and random energy sources caused by human activities. The article deals with the response of a residential home to seismic, low frequency mechanical vibrations generated by nearby blasting operations. The goal was to determine the fundamental natural frequencies of the measured structure; determining the resonant frequencies is important for designing suitable modal damping. The article also analyzes the package of seismic waves generated by blasting (Primary waves, P-waves, and Secondary waves, S-waves) and investigates the transfer regions. For the detection of seismic waves resulting from an explosion, the Fast Fourier Transform (FFT) and modal analysis are used in the frequency domain, and the signal was also acquired and analyzed in the time domain. In the conclusions, the measured results of seismic waves caused by blasting in a nearby quarry and their effect on a nearby structure (a house) are analyzed. The response of the house, including its fundamental natural frequency and possible fatigue damage, is also assessed.

Keywords: Building structure, seismic waves, spectral analysis, structural response.

958 Switching Behaviors of TiN/HfOx/Pt Based RRAM

Authors: B. B. Weng, Z. Fang, Z. X. Chen, X. P. Wang, G. Q. Lo, D. L. Kwong

Abstract:

Resistive Random Access Memory (RRAM) has received a great amount of attention from various research efforts in recent years, owing to its promising performance as a next-generation memory device. In this paper, samples based on a TiN/HfOx/Pt stack were prepared, and their electrical switching behaviors were characterized and discussed in brief.

Keywords: HfOx, resistive switching, RRAM.

957 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling: they describe crop growth in interaction with its environment as a dynamical system. However, the calibration of the dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach for yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale, provided by the United States Department of Agriculture (USDA), and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity (see the sketch below). The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method to calibrate the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
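A minimal sketch of the cross-validation protocol described above, on synthetic stand-in data rather than the USDA/climate dataset: 5-fold cross-validation of a random forest regressor against a linear baseline, scored with RMSEP and MAEP. The predictors, sizes and models shown are illustrative assumptions.

```python
# Hedged sketch of the 5-fold cross-validation comparison on placeholder data.
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(720, 12))                                        # placeholder climate predictors
y = 5.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=720)  # placeholder yield

scoring = {"rmsep": "neg_root_mean_squared_error", "maep": "neg_mean_absolute_error"}
for name, model in [("Ridge", Ridge()),
                    ("Random Forest", RandomForestRegressor(n_estimators=300, random_state=0))]:
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)
    print(f"{name:13s} RMSEP = {-cv['test_rmsep'].mean():.3f}  "
          f"MAEP = {-cv['test_maep'].mean():.3f}")
```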

Keywords: Crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest.

956 Improved Segmentation of Speckled Images Using an Arithmetic-to-Geometric Mean Ratio Kernel

Authors: J. Daba, J. Dubois

Abstract:

In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme was based on finding a threshold for the probability density function of a new kernel, defined as the arithmetic mean-to-geometric mean ratio field over a circular neighborhood set, and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative distribution function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed superior results to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method, with oversegmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic-based segmentation allowing an accurate quantification of the probability of false detection. Non-visual quantification of misclassification in medical ultrasound speckled images is relatively new and is of interest to clinicians.
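A hedged numerical sketch of the kernel described above: the ratio of the local arithmetic mean to the local geometric mean, computed here over a square window (the paper uses a circular neighborhood) and thresholded to flag edge-like pixels. Window size, threshold and test image are illustrative assumptions, not the paper's detector.

```python
# Sketch: arithmetic-to-geometric mean ratio field over a square window.
import numpy as np
from scipy.ndimage import uniform_filter

def am_gm_ratio(image, size=7, eps=1e-12):
    """Ratio of local arithmetic mean to local geometric mean (>= 1 everywhere)."""
    img = np.asarray(image, dtype=float) + eps                 # avoid log(0)
    arith = uniform_filter(img, size=size)                     # local arithmetic mean
    geom = np.exp(uniform_filter(np.log(img), size=size))      # local geometric mean
    return arith / geom

# Illustrative use: speckle-like background with a brighter square region;
# the ratio grows where the window straddles an intensity transition.
rng = np.random.default_rng(0)
img = rng.exponential(scale=1.0, size=(128, 128))
img[40:90, 40:90] *= 4.0
edges = am_gm_ratio(img) > 1.5                                  # illustrative threshold
print("edge pixels flagged:", int(edges.sum()))
```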

Keywords: Discriminant function, false alarm, segmentation, signal-to-noise ratio, skewness, speckle.

955 A Program Based on Artistic and Musical Activities to Acquire Educational Concepts for Children with Learning Difficulties

Authors: Ahmed Amin Mousa, Huda Mazeed, Eman Saad

Abstract:

The study aims to identify the effectiveness of an artistic formation program using some types of pastes in reducing the hyperactivity of kindergarten children with learning difficulties. The research sample included 120 children aged 5 to 6 years from five schools for special needs (learning disability section) in Cairo Governorate. The study used the quasi-experimental method, which depends on a one-group design with pre- and post-application measurements to validate both the hypothesis and the effectiveness of the program. The variables of the study were specified as follows: the artistic formation program using papier-mâché as the independent variable, and its effect on the skills of kindergarten children with learning disabilities as the dependent variable. The researchers applied an artistic formation program consisting of artistic and musical skills for kindergarten children with learning disabilities. The tools of the study, designed by the researchers, included an observation card used for recording paper pulp-molding skills of kindergarten children with learning difficulties while practicing the artistic formation activity, and a program utilizing artistic and musical activities for kindergarten children with learning disabilities to acquire educational concepts. The program was composed of 20 lessons for fine art activities and 20 lessons for musical activities, with the musical lesson given together with the art lesson in one session to impart certain educational concepts to the kindergarten children.

Keywords: Musical activities, developing skills, early childhood, educational concepts, learning difficulties.

954 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, which has made it more important to understand customers' needs in this strong market, especially the needs of those who are looking to switch service providers. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish this. Churn prediction has become a very important topic for machine learning classification in the telecommunications industry, and understanding the factors of customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify the factors of customers' churn based on their past service usage history. Towards this objective, the study makes use of feature selection, normalization, and feature engineering. This study then compared the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models has proven to produce better results. The results showed that Gradient Boosting with the feature selection technique outperformed the others in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
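A hedged sketch of the pipeline described above, on synthetic stand-in data rather than the Orange dataset: normalization, univariate feature selection, a Gradient Boosting classifier, and evaluation with F1 and ROC-AUC. The feature counts and class imbalance are illustrative assumptions.

```python
# Hedged sketch: normalization + feature selection + gradient boosting, scored with F1/AUC.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate

X, y = make_classification(n_samples=3000, n_features=40, n_informative=12,
                           weights=[0.85, 0.15], random_state=0)  # churners are the minority

pipe = Pipeline([
    ("scale", StandardScaler()),                 # normalization
    ("select", SelectKBest(f_classif, k=15)),    # feature selection
    ("model", GradientBoostingClassifier(random_state=0)),
])
cv = cross_validate(pipe, X, y, cv=5, scoring=["f1", "roc_auc"])
print(f"F1 = {cv['test_f1'].mean():.3f}, ROC-AUC = {cv['test_roc_auc'].mean():.3f}")
```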

Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.

953 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart is a sampling process of positions of a navigation environment through a tree-type graph. The algorithm consists of randomly expanding a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned, as illustrated in the simplified sketch below. The algorithm ensures the planning of the shortest path, considering the number of iterations tending to infinity. When a new node is inserted into the tree, each neighbor node of the new node is connected to it, if and only if the extension of the path between the root node and that neighbor node, with this new connection, is less than the current extension of the path between those two nodes. RRT*-Smart uses an intelligent sampling strategy to plan less extensive routes by spending a smaller number of iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the navigation environment obstacles. The planned paths are smoothed through the application of the method called quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean Theorem. Its advantage is that the obtained structure allows computation of the curve length in an exact way, without the need for quadrature techniques for the resolution of integrals.
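A heavily simplified sketch of the tree-expansion idea described above: a plain RRT in an empty 2D square, without obstacles, rewiring, the Smart sampling heuristic, or the Pythagorean-hodograph smoothing. It only illustrates random sampling, nearest-node extension and goal testing; all parameters are illustrative.

```python
# Simplified RRT sketch (no obstacles, no rewiring, no smart sampling).
import math, random

def rrt(start, goal, step=0.5, goal_tol=0.5, max_iter=5000, bounds=(0.0, 10.0)):
    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))  # nearest node
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:       # a branch reached the goal region
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]                     # root-to-goal order
    return None

print(rrt((1.0, 1.0), (9.0, 9.0)))
```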

Keywords: Path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart.

952 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis

Authors: Abeer Aljohani

Abstract:

The spread of the COVID-19 virus has been one of the most extreme pandemics across the globe. It is also referred to as coronavirus, a contagious disease that continuously mutates into numerous variants; currently, the B.1.1.529 variant, labeled Omicron, has been detected in South Africa. The huge spread of COVID-19 has affected several lives and has placed exceptional pressure on healthcare systems worldwide, and everyday life and the global economy have been at stake. Numerous COVID-19 cases have produced a huge burden on hospitals as well as health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. As machine learning is a widely accepted area and gives promising results for healthcare, this research presents an architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. This paper uses a standard University of California Irvine (UCI) dataset for predicting COVID-19 disease, comprising symptoms of 5434 patients, and compares several supervised ML techniques on the presented architecture. The architecture utilizes a 10-fold cross-validation process for generalization and the Principal Component Analysis (PCA) technique for feature reduction. Standard parameters are used to evaluate the proposed architecture, including the F1-score, precision, accuracy, recall, the Receiver Operating Characteristic (ROC) and the Area Under the Curve (AUC). The results show that the Decision Tree, Random Forest and neural network models outperform all other state-of-the-art ML techniques. This result can be used to effectively identify COVID-19 infection cases.
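A hedged sketch of the evaluation protocol described above on synthetic stand-in data: symptom-like binary features, PCA for dimensionality reduction, 10-fold cross-validation, and three of the compared classifiers. The data, number of components and model settings are illustrative assumptions, not the UCI dataset or the paper's configuration.

```python
# Hedged sketch: PCA-based feature reduction with 10-fold cross-validation.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_validate
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5434, 20)).astype(float)        # placeholder symptom indicators
y = (X[:, :4].sum(axis=1) + rng.normal(scale=0.5, size=5434) > 2).astype(int)

models = {"Decision Tree": DecisionTreeClassifier(random_state=0),
          "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
          "Neural Network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                                          random_state=0)}
for name, clf in models.items():
    pipe = Pipeline([("scale", StandardScaler()),
                     ("pca", PCA(n_components=10)),
                     ("clf", clf)])
    cv = cross_validate(pipe, X, y, cv=10, scoring=["accuracy", "f1", "roc_auc"])
    print(f"{name:14s} acc={cv['test_accuracy'].mean():.3f} "
          f"f1={cv['test_f1'].mean():.3f} auc={cv['test_roc_auc'].mean():.3f}")
```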

Keywords: Supervised machine learning, COVID-19 prediction, healthcare analytics, Random Forest, Neural Network.

951 ALD HfO2 Based RRAM with Ti Capping

Authors: B. B. Weng, Z. Fang, Z. X. Chen, X. P. Wang, G. Q. Lo, D. L. Kwong

Abstract:

HfOx-based Resistive Random Access Memory (RRAM) is one of the most widely studied material stacks due to its promising performance as an emerging memory technology. In this work, we systematically investigated the effect of the metal capping layer by preparing sample devices with varying thicknesses of the Ti cap and comparing their operating parameters with the help of an Agilent B1500A analyzer.

Keywords: HfOx, resistive switching, RRAM, metal capping.

950 Integrated Modeling of Transformation of Electricity and Transportation Sectors: A Case Study of Australia

Authors: T. Aboumahboub, R. Brecha, H. B. Shrestha, U. F. Hutfilter, A. Geiges, W. Hare, M. Schaeffer, L. Welder, M. Gidden

Abstract:

The proposed stringent mitigation targets require an immediate start to a drastic transformation of the whole energy system. The current Australian energy system is mainly centralized and fossil fuel-based in most states, with coal- and gas-fired plants dominating the total produced electricity over the recent past. On the other hand, the country is characterized by a huge, untapped renewable potential, where wind and solar energy could play a key role in the decarbonization of Australia's future energy system. However, integrating high shares of such variable renewable energy sources (VRES) challenges the power system considerably due to their temporal fluctuations and geographical dispersion. This raises concerns about a flexibility gap in the system needed to ensure security of supply with increasing shares of such intermittent sources. One main flexibility dimension to facilitate system integration of high shares of VRES is to increase cross-sectoral integration through coupling of electricity to other energy sectors, alongside the decarbonization of the power sector and reinforcement of the transmission grid. This paper applies a multi-sectoral energy system optimization model for Australia. We investigate the cost-optimal configuration of a renewable-based Australian energy system and its transformation pathway in line with the ambitious range of proposed climate change mitigation targets. We particularly analyse the implications of linking the electricity and transport sectors in a prospective, highly renewable Australian energy system.

Keywords: Decarbonization, energy system modeling, sector coupling, variable renewable energies.

949 Transformations between Bivariate Polynomial Bases

Authors: Dimitris Varsamis, Nicholas Karampetakis

Abstract:

It is well known that any interpolating polynomial p(x, y) in the vector space P_{n,m} of two-variable polynomials, with degree less than n in x and less than m in y, has various representations that depend on the basis of P_{n,m} that we select, i.e. the monomial, Newton, Lagrange basis, etc. The aim of this short note is twofold: a) to present transformations between the coordinates of the polynomial p(x, y) in the aforementioned bases, and b) to present transformations between these bases.
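As a short worked sketch (assuming tensor-product bases, which covers the monomial and Newton cases), the coordinate transformation reduces to two univariate change-of-basis matrices:

```latex
% Hedged sketch: coordinate change between tensor-product bases of P_{n,m}.
% Write p in the monomial basis via its coefficient matrix C = [c_{ij}]:
p(x,y) \;=\; \phi(x)^{\mathsf T}\, C \,\psi(y),
\qquad
\phi(x) = (1,\,x,\,\dots,\,x^{\,n-1})^{\mathsf T},
\quad
\psi(y) = (1,\,y,\,\dots,\,y^{\,m-1})^{\mathsf T}.
% If the monomial vectors are expressed in another (e.g. Newton) basis as
% \phi(x) = A\,n_x(x) and \psi(y) = B\,n_y(y), substitution gives
p(x,y) \;=\; n_x(x)^{\mathsf T}\,\bigl(A^{\mathsf T} C\, B\bigr)\, n_y(y),
\qquad
\widehat{C} \;=\; A^{\mathsf T} C\, B,
% so the bivariate transformation is assembled from the two univariate
% change-of-basis matrices A and B.
```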

Keywords: Bivariate interpolation polynomial, Polynomial basis, Transformations.

948 Accurate and Efficient Global Approximation using Adaptive Polynomial RSM for Complex Mechanical and Vehicular Performance Models

Authors: Y. Z. Wu, Z. Dong, S. K. You

Abstract:

Global approximation using a metamodel for a complex mathematical function or computer model over a large variable domain is often needed in sensitivity analysis, computer simulation, optimal control, and global design optimization of complex, multiphysics systems. To overcome the limitations of the existing response surface (RS), surrogate or metamodel modeling methods for complex models over large variable domains, a new adaptive and regressive RS modeling method using quadratic functions and local area model improvement schemes is introduced. The method applies an iterative, Latin hypercube sampling based RS update process, divides the entire domain of design variables into multiple cells, identifies rougher cells with large modeling error, and further divides these cells along the roughest dimension direction. A small number of additional sampling points from the original, expensive model are added over the small and isolated rough cells to improve the RS model locally until the model accuracy criteria are satisfied. The method then combines local RS cells to regenerate the global RS model with satisfactory accuracy. An effective RS cell sorting algorithm is also introduced to improve the efficiency of model evaluation. Benchmark tests are presented, and the use of the new metamodeling method to replace a complex hybrid electric vehicle powertrain performance model in vehicle design optimization and optimal control is discussed.

Keywords: Global approximation, polynomial response surface, domain decomposition, domain combination, multiphysics modeling, hybrid powertrain optimization.

947 Survivability of Verhulst-free Populations under Mutation Accumulation

Authors: Chrysline Margus N. Piñol, Jenifer DP. De Maligaya, Ahl G. Balitaon

Abstract:

Stable nonzero populations without random deaths caused by the Verhulst factor (Verhulst-free populations) are a rarity: the majority either grow without bounds or die out from excessive harmful mutations. To delay the accumulation of bad genes or diseases, a new environmental parameter Γ is introduced in the simulation. Current results demonstrate that stability may be achieved by setting Γ = 0.1. These steady states approach a maximum size that scales inversely with reproduction age.

Keywords: Aging, mutation accumulation, population dynamics.

946 Development of State Model Theory for External Exclusive NOR Type LFSR Structures

Authors: Afaq Ahmad

Abstract:

Using state space techniques and GF(2) theory, a simulation model for external exclusive-NOR type LFSR structures is developed. Through this tool, a systematic procedure is devised for computing pseudo-random binary sequences from such structures.
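A small hedged sketch of an external (Fibonacci-type) exclusive-NOR LFSR like the structures described above: the feedback bit is the XNOR of the tapped stages, which over GF(2) makes the next-state map affine, s_{k+1} = A s_k + b (mod 2), with A the companion-type state transition matrix. The register length and tap positions below are illustrative assumptions.

```python
# Hedged sketch: external XNOR-type LFSR producing a pseudo-random binary sequence.
def xnor_lfsr(state, taps, steps):
    """state: list of bits (index 0 is the input stage); taps: stage indices fed back."""
    out = []
    s = list(state)
    for _ in range(steps):
        out.append(s[-1])                 # output bit taken from the last stage
        xor = 0
        for t in taps:
            xor ^= s[t]
        feedback = 1 ^ xor                # XNOR = complement of the XOR of the taps
        s = [feedback] + s[:-1]           # shift the register, insert feedback at stage 0
    return out

# Illustrative 4-stage register with taps at stages 2 and 3 (positions assumed);
# note the all-zeros seed is valid for an XNOR LFSR (all-ones is the lock-up state).
seq = xnor_lfsr(state=[0, 0, 0, 0], taps=[2, 3], steps=20)
print(seq)
```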

Keywords: LFSR, external exclusive NOR type, recursive binary sequence, initial state - next state, state transition matrix.

945 On Bianchi Type Cosmological Models in Lyra’s Geometry

Authors: R. K. Dubey

Abstract:

Bianchi type cosmological models have been studied on the basis of Lyra's geometry. An exact solution has been obtained by considering a time-dependent displacement field for a constant deceleration parameter and a varying cosmological term of the universe. The physical behavior of the different models has been examined for different cases.

Keywords: Bianchi type-I cosmological model, variable gravitational coupling (G) and Cosmological Constant term (β).
