Search results for: random forest algorithm

4763 Optimization of Wavy Channel Using Genetic Algorithm

Authors: Yue-Tzu Yang, Peng-Jen Chen

Abstract:

The present study deals with the numerical optimization of a wavy channel with the help of a genetic algorithm (GA). Three design variables related to the wave amplitude (A), the wavelength (λ) and the channel aspect ratio (α) are chosen, and their ranges are decided through preliminary calculations of the three-dimensional Navier-Stokes and energy equations. A parametric study is also performed to show the effects of the different design variables on the overall performance of the wavy channel. An objective function related to heat transfer and pressure drop, the performance factor (PF), is formulated to analyze the performance of the wavy channel. The numerical results show that the wave amplitude and the channel aspect ratio have significant effects on the thermal performance. The performance of the wavy channels can be improved by increasing the wave amplitude or decreasing the channel aspect ratio, whereas increasing the wavelength has no significant effect on the heat transfer performance.
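
As a rough illustration of the optimization loop described above (not the authors' code), the sketch below runs a simple real-coded genetic algorithm over the three design variables; the variable bounds and the surrogate performance factor are placeholders, since the real PF would come from the 3-D Navier-Stokes/energy simulations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bounds for wave amplitude A, wavelength lam, aspect ratio alpha
bounds = np.array([[0.1, 1.0],   # A
                   [1.0, 5.0],   # lam
                   [0.5, 2.0]])  # alpha

def performance_factor(x):
    """Toy surrogate for the CFD-based performance factor (assumed trend only)."""
    A, lam, alpha = x
    return A / (alpha * (1.0 + 0.05 * lam))

def ga(pop_size=30, generations=50, mut_sigma=0.1):
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([performance_factor(ind) for ind in pop])
        # tournament selection of parents
        idx = [max(rng.choice(pop_size, 2, replace=False), key=lambda i: fitness[i])
               for _ in range(pop_size)]
        parents = pop[idx]
        # arithmetic crossover between pairs of parents
        w = rng.random((pop_size, 1))
        children = w * parents + (1 - w) * parents[::-1]
        # Gaussian mutation, clipped to the bounds
        children += rng.normal(0, mut_sigma, children.shape) * (bounds[:, 1] - bounds[:, 0])
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
    fitness = np.array([performance_factor(ind) for ind in pop])
    return pop[fitness.argmax()], fitness.max()

best_x, best_pf = ga()
print("best (A, lambda, alpha):", best_x, "PF:", best_pf)
```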

Keywords: wavy channel, genetic algorithm, optimization, numerical simulation

Procedia PDF Downloads 286
4762 Robust Fuzzy PID Stabilizer: Modified Shuffled Frog Leaping Algorithm

Authors: Oveis Abedinia, Noradin Ghadimi, Nasser Mikaeilvand, Roza Poursoleiman, Asghar Poorfaraj

Abstract:

In this paper, a robust fuzzy Proportional Integral Derivative (PID) controller is applied to a multi-machine power system based on the Modified Shuffled Frog Leaping (MSFL) algorithm. The newly proposed controller is more efficient because it copes with oscillations and different operating points. In this strategy, the gains of the PID controller are optimized using the proposed technique. The nonlinear problem is formulated as an optimization problem over wide ranges of operating conditions and solved using the MSFL algorithm. The simulation results demonstrate the effectiveness, good robustness and validity of the proposed method through performance indices such as ITAE and FD under wide ranges of operating conditions, in comparison with TS and GSA techniques. The single-machine infinite-bus system and the New England 10-unit 39-bus standard power system are employed to illustrate the performance of the proposed method.

Keywords: fuzzy PID, MSFL, multi-machine, low frequency oscillation

Procedia PDF Downloads 416
4761 Detecting Covid-19 Fake News Using Deep Learning Technique

Authors: Anjali A. Prasad

Abstract:

Nowadays, social media plays an important role in spreading misinformation or fake news. This study analyzes fake news related to the COVID-19 pandemic spread on social media. The paper aims at evaluating and comparing different approaches used to mitigate this issue, including popular deep learning approaches such as CNN, RNN, LSTM, and BERT, for classification. To evaluate the models' performance, we used accuracy, precision, recall, and F1-score as the evaluation metrics. Finally, we compare the four algorithms to determine which shows the better results.
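
A minimal sketch of the evaluation step mentioned above, using scikit-learn to compute accuracy, precision, recall, and F1-score; the labels and predictions here are hypothetical, not the paper's data.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical gold labels and predictions from one classifier (1 = fake, 0 = real)
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

scores = {
    "accuracy":  accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall":    recall_score(y_true, y_pred),
    "f1":        f1_score(y_true, y_pred),
}
for name, value in scores.items():
    print(f"{name}: {value:.2f}")
```

The same four numbers would be computed for each of the CNN, RNN, LSTM, and BERT models and compared side by side.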

Keywords: BERT, CNN, LSTM, RNN

Procedia PDF Downloads 194
4760 Segmentation of Piecewise Polynomial Regression Model by Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

The piecewise polynomial regression model is a very flexible model for modeling data. When a piecewise polynomial regression model is fitted to data, its parameters are generally unknown. This paper studies the parameter estimation problem of the piecewise polynomial regression model. The method used to estimate the parameters of the piecewise polynomial regression model is the Bayesian method. Unfortunately, the Bayes estimator cannot be found analytically. A reversible jump MCMC algorithm is proposed to solve this problem. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the piecewise polynomial regression model parameters as its limiting distribution. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise polynomial regression model.

Keywords: piecewise regression, Bayesian, reversible jump MCMC, segmentation

Procedia PDF Downloads 359
4759 Use of Personal Rhythm to Authenticate Encrypted Messages

Authors: Carlos Gonzalez

Abstract:

When communicating using private and secure keys, there is always doubt as to the identity of the message creator. We introduce an algorithm that uses the personal typing rhythm (keystroke dynamics) of the message originator to increase the recipient's trust in the authenticity of the message originator. The methodology proposes the use of a Rhythm Certificate Authority (RCA) to validate rhythm information. An illustrative example of the communication between Bob, Alice, and the RCA is included, and an algorithm for communicating with the RCA is presented. The RCA can be an independent authority or an enhanced Certificate Authority like the one used in public key infrastructure (PKI).
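
A minimal sketch of the keystroke-rhythm comparison idea, assuming a simple matching rule (mean absolute difference of inter-key intervals against a tolerance); the abstract does not specify the actual matching algorithm or threshold used with the RCA, so both are placeholders here.

```python
import numpy as np

def rhythm_features(key_times):
    """Inter-key intervals (flight times) from key-press timestamps in seconds."""
    return np.diff(np.asarray(key_times, dtype=float))

def rhythm_match(template_times, observed_times, tolerance=0.08):
    """Hypothetical rule: accept if the mean absolute difference of intervals
    is below a tolerance (seconds)."""
    t = rhythm_features(template_times)
    o = rhythm_features(observed_times)
    n = min(len(t), len(o))
    if n == 0:
        return False
    distance = float(np.mean(np.abs(t[:n] - o[:n])))
    return distance < tolerance

# Enrollment sample held by the Rhythm Certificate Authority (RCA) and a new message sample
enrolled = [0.00, 0.21, 0.43, 0.61, 0.84, 1.02]
observed = [0.00, 0.20, 0.45, 0.60, 0.86, 1.01]
print("claimed originator accepted:", rhythm_match(enrolled, observed))
```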

Keywords: authentication, digital signature, keystroke dynamics, personal rhythm, public-key encryption

Procedia PDF Downloads 289
4758 Efficient Antenna Array Beamforming with Robustness against Random Steering Mismatch

Authors: Ju-Hong Lee, Ching-Wei Liao, Kun-Che Lee

Abstract:

This paper deals with the problem of using antenna sensors for adaptive beamforming in the presence of random steering mismatch. We present an efficient adaptive array beamformer that is robust against the considered mismatch. The robustness of the proposed beamformer comes from an efficient designation of the steering vector. Using the received array data vector, we construct an appropriate correlation matrix associated with the array data and a correlation matrix associated with the signal sources. The eigenvector associated with the largest eigenvalue of the constructed signal correlation matrix is then designated as an appropriate estimate of the steering vector. Finally, the adaptive weight vector required for adaptive beamforming is obtained using the estimated steering vector and the constructed correlation matrix of the array data vector. Simulation results confirm the effectiveness of the proposed method.
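
A small numpy sketch following the recipe in the abstract: build the array data correlation matrix, estimate the steering vector as the principal eigenvector of a signal correlation matrix, and form minimum-variance (MVDR-type) weights from the estimate. The way the signal correlation matrix is constructed here (subtracting an estimated noise floor) and the simulation parameters are assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 8, 2000                       # sensors, snapshots
d = 0.5                              # element spacing in wavelengths

def steering(theta_deg):
    return np.exp(1j * 2 * np.pi * d * np.arange(M) * np.sin(np.radians(theta_deg)))

# Simulated snapshots: one desired signal at 10 deg with a small random steering error, plus noise
a_true = steering(10.0) * np.exp(1j * rng.normal(0, 0.05, M))
s = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2)
X = np.outer(a_true, s) + noise

R = X @ X.conj().T / N                         # correlation matrix of the array data
eigvals = np.linalg.eigvalsh(R)
noise_power = eigvals[:-1].mean()              # assumed noise-floor estimate (single source)
Rs = R - noise_power * np.eye(M)               # assumed "signal correlation matrix"

# Steering-vector estimate: principal eigenvector of the signal correlation matrix
evals, evecs = np.linalg.eigh(Rs)
a_hat = evecs[:, -1] * np.sqrt(M)              # scale so that ||a_hat||^2 = M

# MVDR-type adaptive weights built from the estimated steering vector and R
Rinv_a = np.linalg.solve(R, a_hat)
w = Rinv_a / (a_hat.conj() @ Rinv_a)
print("response toward the estimated steering vector:", abs(w.conj() @ a_hat))
print("response toward the true steering vector:", abs(w.conj() @ a_true))
```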

Keywords: adaptive beamforming, antenna array, linearly constrained minimum variance, robustness, steering vector

Procedia PDF Downloads 190
4757 An A-Star Approach for the Quickest Path Problem with Time Windows

Authors: Christofas Stergianos, Jason Atkin, Herve Morvan

Abstract:

As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. Ground movement of aircraft at an airport, i.e. allocating a path to each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW searches for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time that is needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
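
A compact sketch of A-star search where edge weights are traversal times and the heuristic is an optimistic estimate of the remaining time, as described above; the toy taxiway graph and times are invented, and the full QPPTW time-window and conflict handling is not reproduced.

```python
import heapq

def a_star_time(graph, h, start, goal):
    """graph: {node: [(neighbor, traversal_time_s), ...]}
    h: {node: lower bound on remaining time to goal (s)} -- must not overestimate."""
    open_heap = [(h[start], 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue                      # stale heap entry
        for nxt, dt in graph[node]:
            g2 = g + dt
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(open_heap, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None

# Toy taxiway graph (nodes A..E), edge weights = taxi times in seconds
graph = {
    "A": [("B", 30), ("C", 50)],
    "B": [("D", 40)],
    "C": [("D", 10), ("E", 80)],
    "D": [("E", 25)],
    "E": [],
}
h = {"A": 55, "B": 40, "C": 35, "D": 25, "E": 0}   # optimistic remaining-time estimates
print(a_star_time(graph, h, "A", "E"))              # -> (85.0, ['A', 'C', 'D', 'E'])
```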

Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling

Procedia PDF Downloads 219
4756 Tuning of Kalman Filter Using Genetic Algorithm

Authors: Hesham Abdin, Mohamed Zakaria, Talaat Abd-Elmonaem, Alaa El-Din Sayed Hafez

Abstract:

The Kalman filter is an estimator known as the workhorse of estimation. It has an important application in missile guidance, especially when accurate target data are lacking due to noise or uncertainty. In this paper, a Kalman filter is used as a tracking filter in a simulated target-interceptor scenario with noise. It estimates the position, velocity, and acceleration of the target in the presence of noise. These estimates are needed for both proportional navigation and differential geometry guidance laws. A Kalman filter performs well at low noise, but large noise causes considerable errors that lead to performance degradation. Therefore, a new technique is required to overcome this defect, using tuning factors that adapt the Kalman filter to increasing noise. The tuning factors take values between 0.8 and 1.2; they have one value for the first half of the range and a different value for the second half, and they are multiplied by the estimated values. These factors have their optimum values and are altered with the change of the target heading. A genetic algorithm updates these selections to increase the maximum effective range, which was previously reduced by noise. The results show that the selected factors have other benefits, such as decreasing the minimum effective range that was increased earlier due to noise. In addition, the selected factors decrease the miss distance for all ranges in this direction of the target and expand the effective range, which leads to an increased probability of kill.
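
A minimal sketch of the tracking filter described above: a 1-D constant-acceleration Kalman filter whose estimates are multiplied by a tuning factor (the 0.8-1.2 factors in the paper); the noise covariances, the factor schedule, and the GA search for optimum factors are placeholders here.

```python
import numpy as np

dt = 0.1
# Constant-acceleration state [position, velocity, acceleration]
F = np.array([[1, dt, 0.5 * dt**2],
              [0, 1,  dt],
              [0, 0,  1]])
H = np.array([[1.0, 0.0, 0.0]])          # only a noisy position is measured
Q = 1e-3 * np.eye(3)                      # process noise covariance (assumed)
R = np.array([[25.0]])                    # measurement noise variance (assumed)

def kalman_track(zs, tuning=1.0):
    """Standard predict/update cycle; `tuning` stands in for the 0.8-1.2 factor
    applied to the estimated values before they feed the guidance law."""
    x = np.zeros((3, 1))
    P = np.eye(3) * 100.0
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(3) - K @ H) @ P
        out.append(tuning * x.ravel())
    return np.array(out)

# Simulated target: constant acceleration 2 m/s^2, noisy position measurements
rng = np.random.default_rng(2)
t = np.arange(0, 20, dt)
true_pos = 0.5 * 2.0 * t**2
zs = true_pos + rng.normal(0, 5.0, t.size)
est = kalman_track(zs, tuning=1.0)
print("final estimate [pos, vel, acc]:", est[-1])
```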

Keywords: proportional navigation, differential geometry, Kalman filter, genetic algorithm

Procedia PDF Downloads 496
4755 An Algorithm to Find Fractional Edge Domination Number and Upper Fractional Edge Domination Number of an Intuitionistic Fuzzy Graph

Authors: Karunambigai Mevani Govindasamy, Sathishkumar Ayyappan

Abstract:

In this paper, we formulate an algorithm to find the dominating function parameters of Intuitionistic Fuzzy Graphs (IFGs). The methodology we adopt here is to convert a physical problem into an IFG, which is then transformed into an intuitionistic fuzzy matrix. Using the Linear Program Solver software (LiPS), we find the defined parameters for the given IFG. We obtained these parameters for path and cycle IFGs, and the study can be extended to other varieties of IFG. In particular, we give the definitions of the edge dominating function, the minimal edge dominating function, the fractional edge domination number (γ'_if), and the upper fractional edge domination number (Γ'_if) of an intuitionistic fuzzy graph. We also formulate an algorithm suitable for LiPS to find the fractional edge domination number and the upper fractional edge domination number of an IFG.
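
For intuition, the sketch below sets up the crisp fractional edge domination linear program (minimize the total edge weight subject to every edge's closed neighborhood summing to at least 1) with SciPy for a small path graph; the intuitionistic fuzzy membership/non-membership weighting used in the paper and the LiPS workflow are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

# Path graph P5: vertices 0..4, edges between consecutive vertices
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
m = len(edges)

def adjacent(e1, e2):
    return len(set(e1) & set(e2)) > 0     # share a vertex (includes e1 == e2)

# Constraint: for every edge e, sum of f over its closed neighborhood >= 1
A_ub = np.zeros((m, m))
for i, e in enumerate(edges):
    for j, e2 in enumerate(edges):
        if adjacent(e, e2):
            A_ub[i, j] = -1.0             # linprog uses A_ub @ x <= b_ub
b_ub = -np.ones(m)

# Minimize sum of f(e) with 0 <= f(e) <= 1
res = linprog(c=np.ones(m), A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * m, method="highs")
print("fractional edge domination number:", res.fun)
print("edge dominating function:", dict(zip(edges, np.round(res.x, 3))))
```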

Keywords: fractional edge domination number, intuitionistic fuzzy cycle, intuitionistic fuzzy graph, intuitionistic fuzzy path

Procedia PDF Downloads 156
4754 Optimal Design of Concrete Shells by Modified Particle Community Algorithm Using Spinless Curves

Authors: Reza Abbasi, Ahmad Hamidi Benam

Abstract:

Shell structures have many geometrical variables, and modifying some of these parameters can improve the mechanical behavior of the shell. On the other hand, the behavior of such structures depends on their geometry rather than on mass. Optimization techniques are useful in finding the geometrical shape of shell structures that improves mechanical behavior, especially to prevent or reduce bending anchors. The overall objective of this research is to optimize the shape of concrete shells using the thickness and height parameters along the reference curve and the overall shape of this curve. To implement the proposed scheme, the geometry of the structure was formulated using nonlinear curves. Shell optimization was performed under equivalent static loading conditions using the modified particle community algorithm. The results of this optimization show that, without disrupting the initial design and with slight changes in the shell geometry, the structural behavior is significantly improved.

Keywords: concrete shells, shape optimization, spinless curves, modified particle community algorithm

Procedia PDF Downloads 220
4753 A Clustering Algorithm for Massive Texts

Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen

Abstract:

Internet users face a massive amount of textual data every day. Organizing texts into categories can help users dig useful information out of large-scale text collections. Clustering is, in fact, one of the most promising tools for categorizing texts due to its unsupervised characteristic. Unfortunately, most traditional clustering algorithms lose their high quality on large-scale text collections. This situation is mainly attributable to the high-dimensional vectors generated from texts. To effectively and efficiently cluster large-scale text collections, this paper proposes a vector-reconstruction-based clustering algorithm. Only the features that can represent the cluster are preserved in the cluster's representative vector. The algorithm alternately repeats two sub-processes until it converges. One is the partial tuning sub-process, where feature weights are fine-tuned iteratively; to accelerate clustering, an intersection-based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, where features are reallocated among different clusters and the features that are useless for representing a cluster are removed from its representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections.

Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process

Procedia PDF Downloads 417
4752 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model

Authors: Nicolae Bold, Daniel Nijloveanu

Abstract:

The cropping-system concept is a method used by farmers. It is an environmentally friendly method that protects natural resources (soil, water, air, nutritive substances) and increases production at the same time, taking into account some crop particularities. Combining this powerful method with the concepts of genetic algorithms makes it possible to generate sequences of crops that form a rotation. This type of algorithm has been efficient in solving optimization problems, and its polynomial complexity allows it to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of crops. One of the expected results is to optimize the usage of resources, in order to minimize costs and maximize profit. In order to achieve these goals, a genetic algorithm was designed. This algorithm finds several optimized cropping-system possibilities that have the highest profit and thus minimize costs. The algorithm uses genetic-based methods (mutation, crossover) and structures (genes, chromosomes): a cropping-system possibility is considered a chromosome, and a crop within the rotation is a gene within a chromosome. Results about the efficiency of this method will be presented in a dedicated section. The implementation of this method would benefit farmers by giving them hints and helping them use resources efficiently.

Keywords: chromosomes, cropping, genetic algorithm, genes

Procedia PDF Downloads 417
4751 A Generalized Sparse Bayesian Learning Algorithm for Near-Field Synthetic Aperture Radar Imaging: By Exploiting Impropriety and Noncircularity

Authors: Pan Long, Bi Dongjie, Li Xifeng, Xie Yongle

Abstract:

Near-field synthetic aperture radar (SAR) imaging is an advanced nondestructive testing and evaluation (NDT&E) technique. This paper investigates the complex-valued signal processing related to the near-field SAR imaging system, where the measurement data turn out to be noncircular and improper, meaning that the complex-valued data are correlated with their complex conjugate. Furthermore, we discover that the degree of impropriety of the measurement data and that of the target image can be highly correlated in near-field SAR imaging. Based on these observations, a modified generalized sparse Bayesian learning algorithm is proposed, taking impropriety and noncircularity into account. Numerical results show that the proposed algorithm provides a performance gain with the help of the noncircular assumption on the signals.

Keywords: complex-valued signal processing, synthetic aperture radar, 2-D radar imaging, compressive sensing, sparse Bayesian learning

Procedia PDF Downloads 119
4750 Diversity and Phylogenetic Placement of Seven Inocybe (Inocybaceae, Fungi) from Benin

Authors: Hyppolite Aignon, Souleymane Yorou, Martin Ryberg, Anneli Svanholm

Abstract:

Climate change and human actions cause the extinction of wild mushrooms. In Benin, the diversity of fungi is large and may still contain species new to science, but the inventory effort remains low and focuses mainly on edible species (Russula, Lactarius, Lactifluus, and also Amanita). In addition, inventories have started only recently and some groups of fungi are not sufficiently sampled (Yorou and De Kesel, 2011); meanwhile, the degradation of fungal habitats continues to increase, and some species may disappear without ever being known. The overlooked genus Inocybe has a worldwide distribution and includes more than 700 species, with many undiscovered or poorly known species worldwide and particularly in tropical Africa. It is therefore important to direct the inventory toward other genera or important families such as Inocybe (Fungi, Agaricales) in order to highlight their diversity and also to determine their phylogenetic positions using a combined approach based on several gene regions. This study aims to evaluate the species richness and phylogenetic position of Inocybe species and affiliated taxa in West Africa. In northern Benin, we visited the Forest Reserve of Ouémé Supérieur, the Okpara forest and the Alibori Supérieur Forest Reserve; in the center, we targeted the Forest Reserve of Toui-Kilibo. The surveys were carried out during the rainy season in the study area, i.e., from June to October. A total of 24 taxa were collected, photographed and described. DNA was extracted, the polymerase chain reaction was carried out using the primers (ITS1-F, ITS4-B) for the internal transcribed spacer (ITS), (LROR, LWRB, LR7, LR5) for the nuclear ribosomal large subunit (LSU), and (RPB2-f5F, RPB2-b6F, RPB2-b6R2, RPB2-b7R) for the RNA polymerase II gene (RPB2), and the products were sequenced. The ITS sequences of the 24 collections of Inocybaceae were edited in Staden, and all the sequences were aligned and edited with Aliview v1.17. The sequences were examined by eye for sufficient similarity to be considered the same species; 13 different species were present in the collections. In addition, sequences similar to the ITS sequences of the thirteen final species were searched for using BLAST. The nLSU and RPB2 markers for these species were inserted into a complete alignment in which species from all major Inocybaceae clades, as well as from all continents except Antarctica, are present. Our new sequences for nLSU and RPB2 were manually aligned into this dataset. Phylogenetic analysis was performed using the maximum likelihood software RAxML v7.2.6. Bootstrap replication was set to 100, and no partitioning of the dataset was performed. The resulting tree was viewed and edited with FigTree v1.4.3. The preliminary maximum likelihood tree shows that these species from Benin are highly diversified and are distributed across four of the seven clades of Inocybaceae (Inosperma, Inocybe, Mallocybe and Pseudosperma); so far, the phylogenetic position of seven of these species is known. This study highlights the diversity of Inocybe in Benin; the investigations will continue, and a protection plan will be developed in the coming years.

Keywords: Benin, diversity, Inocybe, phylogenetic placement

Procedia PDF Downloads 134
4749 Optimum Design of Steel Space Frames by Hybrid Teaching-Learning Based Optimization and Harmony Search Algorithms

Authors: Alper Akin, Ibrahim Aydogdu

Abstract:

This study presents a hybrid metaheuristic algorithm to obtain optimum designs for steel space buildings. The optimum design problem of three-dimensional steel frames is mathematically formulated according to the provisions of LRFD-AISC (Load and Resistance Factor Design of the American Institute of Steel Construction). Design constraints such as the strength requirements of structural members, the displacement limitations, the inter-story drift and the other structural constraints are derived from the LRFD-AISC specification. In this study, a hybrid algorithm combining the teaching-learning based optimization (TLBO) and harmony search (HS) algorithms is employed to solve the stated optimum design problem. These algorithms are two of the recent additions to the metaheuristic techniques of numerical optimization and have been efficient tools for solving discrete programming problems. Using the two algorithms in collaboration creates a more powerful tool in which each mitigates the other's weaknesses. To demonstrate the powerful performance of the presented hybrid algorithm, the optimum design of a large-scale steel building is presented, and the results are compared to previously obtained results available in the literature.

Keywords: optimum structural design, hybrid techniques, teaching-learning based optimization, harmony search algorithm, minimum weight, steel space frame

Procedia PDF Downloads 531
4748 Analysis of Fault Tolerance on Grid Computing in Real Time Approach

Authors: Parampal Kaur, Deepak Aggarwal

Abstract:

In the computational grid, fault tolerance is an imperative issue to be considered during job scheduling. Due to the widespread use of resources, systems are highly prone to errors and failures. Hence, fault tolerance plays a key role in the grid to avoid the problem of unreliability. Scheduling tasks to the appropriate resources is a vital requirement in the computational grid. The fittest resource scheduling algorithm searches for the appropriate resource based on the job requirements, in contrast to general scheduling algorithms, where jobs are scheduled to the resources with the best performance factor. The proposed method improves the fault tolerance of the fittest resource scheduling algorithm by scheduling the job in coordination with job replication when the resource has low reliability. Based on the reliability index of a resource, the resource is identified as critical, and the tasks are scheduled based on the criticality of the resources. Results show that the execution time of the tasks is comparatively reduced with the proposed algorithm using a real-time approach rather than a simulator.

Keywords: computational grid, fault tolerance, task replication, job scheduling

Procedia PDF Downloads 426
4747 A Case Study for User Rating Prediction on Automobile Recommendation System Using MapReduce

Authors: Jiao Sun, Li Pan, Shijun Liu

Abstract:

Recommender systems have been widely used in contemporary industry, and plenty of work has been done in this field to help users identify items of interest. The Collaborative Filtering (CF) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems, despite the sharp increase in the number of automobiles, and computational speed is a major weakness of collaborative filtering technology. Therefore, using the MapReduce framework to optimize the CF algorithm is a vital solution to this performance problem. In this paper, based on real-world industrial datasets of user-automobile comments, we predict users' comments on industrial automobiles with various properties and provide recommendations for automobile providers, helping them predict users' comments on automobiles with newly introduced properties. First, we address the sparseness of the matrix using a previously constructed score matrix. Second, we solve the data normalization problem by removing dimensional effects from the raw automobile data, since different dimensions of automobile properties introduce large errors into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, and the computational speed is improved by a significant factor. The UV decomposition used in this paper is a commonly used matrix factorization technique in CF algorithms that does not require calculating the interpolation weights of neighbors, which is more convenient in industry.

Keywords: collaborative filtering, recommendation, data normalization, mapreduce

Procedia PDF Downloads 210
4746 Spatial Data Mining by Decision Trees

Authors: Sihem Oujdi, Hafida Belbachir

Abstract:

Existing data mining methods cannot be applied directly to spatial data because spatial specifics, such as spatial relationships, must be taken into account. This paper focuses on classification with decision trees, which are one of the data mining techniques. We propose an extension of the C4.5 algorithm for spatial data, based on two different approaches: join materialization and querying the different tables on the fly. Similar work has been done on these two main approaches: the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying the tables on the fly, favors memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships among the objects in the target table and those in the neighbor table. The proposed algorithms are applied to a spatial data pattern in the accidentology domain. A comparative study of our approach with other works on classification by spatial decision trees is detailed.

Keywords: C4.5 algorithm, decision trees, S-CART, spatial data mining

Procedia PDF Downloads 604
4745 Using Hyperspectral Sensor and Machine Learning to Predict Water Potentials of Wild Blueberries during Drought Treatment

Authors: Yongjiang Zhang, Kallol Barai, Umesh R. Hodeghatta, Trang Tran, Vikas Dhiman

Abstract:

Detecting water stress on crops early and accurately is crucial to minimizing its impact. This study aims to measure water stress in wild blueberry crops non-destructively by analyzing proximal hyperspectral data. The data collection took place in the summer growing season of 2022. A drought experiment was conducted on wild blueberries in a randomized block design in the greenhouse, incorporating various genotypes and irrigation treatments. Hyperspectral data (spectral range: 400-1000 nm), using a handheld spectroradiometer, and leaf water potential data, using a pressure chamber, were collected from wild blueberry plants. Machine learning techniques, including multiple regression analysis and random forest models, were employed to predict leaf water potential (MPa). We explored the optimal wavelength bands for simple differences (R_Y1 - R_Y2), simple ratios (R_Y1 / R_Y2), and normalized differences ((R_Y1 - R_Y2) / (R_Y1 + R_Y2)). NDWI ((R857 - R1241) / (R857 + R1241)), SD (R2188 - R2245), and SR (R1752 / R1756) emerged as the top predictors of leaf water potential, contributing significantly to the highest model performance. The base learner models achieved an R-squared value of approximately 0.81, indicating their capacity to explain 81% of the variance. Research is underway to develop a neural vegetation index (NVI) that automates the process of index development by searching for specific wavelengths in the space of ratios of linear functions of reflectance. The NVI framework could work across species and predict different physiological parameters.
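
A short sketch of the index-plus-random-forest workflow described above, on synthetic data: compute a normalized-difference index from two reflectance bands and fit a RandomForestRegressor to predict leaf water potential. The band choices follow the NDWI example in the abstract; the data-generating rule is invented.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)

# Synthetic stand-in for the field data: reflectance at the two NDWI bands
n = 200
r857 = rng.uniform(0.3, 0.6, n)
r1241 = rng.uniform(0.1, 0.4, n)
ndwi = (r857 - r1241) / (r857 + r1241)
# assumed relation: water potential (MPa) becomes more negative as NDWI drops, plus noise
psi = -2.0 + 2.5 * ndwi + rng.normal(0, 0.1, n)

X = np.column_stack([ndwi, r857, r1241])
X_train, X_test, y_train, y_test = train_test_split(X, psi, test_size=0.3, random_state=0)

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out samples:", round(r2_score(y_test, model.predict(X_test)), 3))
print("feature importances (ndwi, r857, r1241):", np.round(model.feature_importances_, 3))
```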

Keywords: hyperspectral reflectance, water potential, spectral indices, machine learning, wild blueberries, optimal bands

Procedia PDF Downloads 57
4744 Assessment of Dietary Intake of Pregnant Women

Authors: Tuleshova Gulnara, Abduldayeva Aigul

Abstract:

The goal is, based on a study of the prevalence of micronutrient deficiencies among children and women of reproductive age, to develop evidence-based recommendations aimed at improving the effectiveness of programs to prevent micronutrient deficiency. Subject: In our study we used a representative random sample, drawn with the cluster method in the precincts of the principal areas of medical care for children under 5 years old. If a site had at least 60 children under 5 years old, every second child was sampled, and if more than 60 children, every third child (the first child was selected by random sampling). The total number of investigated persons was 80-86 women of reproductive age and 80-92 children. Results: The studies found that the average prevalence of anemia among children aged 6-59 months was 35.2%, with infants aged 6-23 months being the most susceptible to iron deficiency anemia (53.3%). The prevalence of anemia was 39.0% among non-pregnant women and 43.8% among pregnant women. In children, the prevalence of folate deficiency was the highest (27.6%). Among non-pregnant women, the prevalence of folic acid deficiency was 37.0%. The prevalence of vitamin A deficiency was higher among children living in Astana (37.4%) compared with the republic-wide average (23.2%).

Keywords: nutrition, pregnant women, micronutrients, macronutrients

Procedia PDF Downloads 604
4743 Randomness in Cybertext: A Study on Computer-Generated Poetry from the Perspective of Semiotics

Authors: Hongliang Zhang

Abstract:

The use of chance procedures and randomizers in poetry writing can be traced back to surrealist works, which, by appealing to Sigmund Freud's theories, were still logocentric. In the 1960s, random permutation and combination were extensively used by the Oulipo, John Cage and Jackson Mac Low, which further deconstructed the metaphysical presence of writing. Today, randomly generated digital poetry has emerged as a genre of cybertext that is co-authored by readers. At the same time, the classical theories have been updated by cybernetics and media theories. N. Katherine Hayles reworked Jacques Lacan's concept of 'floating signifiers' into 'flickering signifiers', arguing that the technology per se has become a part of textual production. This paper makes a historical review of computer-generated poetry from the perspective of semiotics, emphasizing that randomly generated digital poetry, which hands over the dual tasks of interpretation and writing to the readers, demonstrates the intervention of media technology in literature. With the participation of computerized algorithms and programming languages, poems randomly generated by computers have not only blurred the boundary between encoder and decoder, but also raise the issue of the human-machine relationship. It is also a significant feature of cybertext that the productive process of the text is full of randomness.

Keywords: cybertext, digital poetry, poetry generator, semiotics

Procedia PDF Downloads 167
4742 Measuring the Unmeasurable: A Project of High Risk Families Prediction and Management

Authors: Peifang Hsieh

Abstract:

The prevention of child abuse has aroused serious concern in Taiwan because of the disparity between the increasing number of reported child abuse cases, which doubled over the past decade, and the scarcity of social workers. New Taipei City, the most populous city in Taiwan, where over 70% of its 4 million citizens belong to migrant families in which children's needs can easily be neglected due to insufficient support from relatives and communities, sees an urgent need for a social support system that preemptively identifies and reaches out to families at high risk of child abuse, so as to offer timely assistance and preventive measures to safeguard children's welfare. Big data analysis is the inspiration. As it was clear that families at high risk of child abuse have certain characteristics in common, New Taipei City decided to consolidate detailed background information from the departments of social affairs, education, labor, and health (for example, the parents' employment and health status, and whether they are imprisoned, fugitives, or substance abusers) and cross-reference it for accurate and prompt identification of the high-risk families in need. The Service Center for High-Risk Families (SCHF) was established to integrate data across departments. By utilizing the machine learning 'random forest' method to build a risk prediction model that can detect early the families most likely to experience child abuse, the SCHF marks high-risk families red, yellow, or green to indicate the urgency of intervention, so that the families concerned can be provided with timely services. The accuracy and recall rates of the above model were 80% and 65%. This prediction model can not only improve the child abuse prevention process by helping social workers differentiate the risk levels of newly reported cases, which may significantly reduce their heavy workload, but can also be referenced for future policy-making.
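
A schematic of the random forest risk model described above, on synthetic household records; the feature names, the data-generating rule, and the red/yellow/green probability thresholds are placeholders, not the city's actual variables or cut-offs.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(4)

# Synthetic household records: [parent_unemployed, parent_health_issue,
# parent_incarcerated, substance_abuse, prior_reports]  (hypothetical features)
n = 2000
X = rng.integers(0, 2, size=(n, 5)).astype(float)
# hypothetical generating rule: risk rises with the number of adverse factors
risk_score = X.sum(axis=1) + rng.normal(0, 0.8, n)
y = (risk_score >= 3).astype(int)                    # 1 = high-risk family

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("accuracy:", round(accuracy_score(y_test, pred), 2))
print("recall:  ", round(recall_score(y_test, pred), 2))

# Hypothetical red/yellow/green triage from predicted probabilities
proba = clf.predict_proba(X_test)[:, 1]
triage = np.where(proba >= 0.7, "red", np.where(proba >= 0.4, "yellow", "green"))
print("triage counts:", dict(zip(*np.unique(triage, return_counts=True))))
```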

Keywords: child abuse, high-risk families, big data analysis, risk prediction model

Procedia PDF Downloads 119
4741 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks

Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah

Abstract:

Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of control messages that are needed to discover the route. In this paper, we utilize the network nodes' positions to group the nodes into connected clusters. We use cluster heads only for forwarding the route discovery control messages. Our simulations show that the new algorithm decreases the overhead dramatically without affecting the delivery rate.

Keywords: ad-hoc network, MANET, ant colony routing, position based routing

Procedia PDF Downloads 411
4740 Landslide Vulnerability Assessment in Context with Indian Himalayan

Authors: Neha Gupta

Abstract:

Landslide vulnerability is considered a crucial parameter for the assessment of landslide risk. The term vulnerability is defined as the degree of damage to elements at risk across different dimensions, i.e., physical, social, economic, and environmental. The Himalaya region is very prone to multiple hazards such as floods, forest fires, earthquakes, and landslides. The increasing fatality rates and losses of infrastructure and economy due to landslides in the Himalaya region lead to the need for vulnerability assessment. In this study, a methodology is presented that measures the combination of vulnerability dimensions, i.e., social vulnerability, physical vulnerability, and environmental vulnerability, in one framework. A combined assessment of these vulnerabilities has rarely been carried out, and no such approach has been applied in the Indian scenario. The methodology was applied to an area of east Sikkim Himalaya, India. The physical vulnerability comprises a building footprint layer extracted from remote sensing data and Google Earth imagery. The social vulnerability was assessed using population density based on land use; the land use map was derived from a high-resolution satellite image. For environmental vulnerability assessment, NDVI, forest, agricultural land, and distance from the river were assessed from remote sensing data and a DEM. The classes of social, physical, and environmental vulnerability were normalized on a scale of 0 (no loss) to 1 (loss) to obtain a homogeneous dataset. Multi-Criteria Analysis (MCA) was then used to assign individual weights to each dimension and integrate them into one framework. The final vulnerability was further classified into four classes, from very low to very high.
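
A minimal sketch of the normalize-and-weight step described above: each vulnerability layer is rescaled to 0-1, combined with MCA weights, and classified into four classes. The layers, weights, and class breaks here are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy raster layers (5 x 5 cells) for the three vulnerability dimensions
physical = rng.uniform(0, 500, (5, 5))     # e.g. building footprint density
social = rng.uniform(0, 2000, (5, 5))      # e.g. population density from land use
environmental = rng.uniform(0, 1, (5, 5))  # e.g. 1 - NDVI

def normalize(layer):
    """Rescale to 0 (no loss) .. 1 (loss)."""
    return (layer - layer.min()) / (layer.max() - layer.min())

# Placeholder MCA weights for the physical, social, environmental dimensions
weights = {"physical": 0.4, "social": 0.35, "environmental": 0.25}
combined = (weights["physical"] * normalize(physical)
            + weights["social"] * normalize(social)
            + weights["environmental"] * normalize(environmental))

# Classify into four classes from very low to very high (placeholder breaks)
classes = np.digitize(combined, bins=[0.25, 0.5, 0.75])
labels = np.array(["very low", "low", "high", "very high"])
print(labels[classes])
```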

Keywords: landslide, multi-criteria analysis, MCA, physical vulnerability, social vulnerability

Procedia PDF Downloads 293
4739 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring

Authors: Daniel Fundi Murithi

Abstract:

Data from economic, social, clinical, and industrial studies are often in some way incomplete or incorrect due to censoring. Such data may have adverse effects if used in an estimation problem. We propose the use of Maximum Likelihood Estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) for the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and the Newton-Raphson (NR) algorithms. These algorithms are compared because they iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that, in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller root mean squared errors than those generated via the Newton-Raphson (NR) algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization (EM) algorithm performs better than the Newton-Raphson (NR) algorithm in all simulation cases under the progressive type-II censoring scheme.
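
For illustration, the sketch below runs the Newton-Raphson iteration on a simplified case: the scale parameter of a Rayleigh distribution with complete (uncensored) data and a known location. The progressively censored likelihood and the EM counterpart used in the paper add terms that are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Complete (uncensored) sample from a two-parameter Rayleigh with known location mu
mu, lam_true, n = 2.0, 1.5, 500
x = mu + lam_true * np.sqrt(-2.0 * np.log(rng.uniform(size=n)))   # inverse-CDF sampling

def newton_raphson_scale(x, mu, lam0=1.0, tol=1e-10, max_iter=100):
    """Newton-Raphson on the score equation for the scale parameter lambda
    (location mu assumed known, no censoring)."""
    d = x - mu
    s2 = np.sum(d**2)
    lam = lam0
    for _ in range(max_iter):
        score = -2 * x.size / lam + s2 / lam**3          # dl/d(lambda)
        hess = 2 * x.size / lam**2 - 3 * s2 / lam**4     # d2l/d(lambda)^2
        step = score / hess
        lam -= step
        if abs(step) < tol:
            break
    return lam

lam_hat = newton_raphson_scale(x, mu)
print("NR estimate:", lam_hat)
print("closed-form check:", np.sqrt(np.sum((x - mu)**2) / (2 * n)))
```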

Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring

Procedia PDF Downloads 152
4738 PID Sliding Mode Control with Sliding Surface Dynamics based Continuous Control Action for Robotic Systems

Authors: Wael M. Elawady, Mohamed F. Asar, Amany M. Sarhan

Abstract:

This paper adopts a continuous sliding mode control scheme for trajectory tracking control of robot manipulators with structured and unstructured uncertain dynamics and external disturbances. In this algorithm, the equivalent control in the conventional sliding mode control is replaced by a PID control action. Moreover, the discontinuous switching control signal is replaced by a continuous proportional-integral (PI) control term such that the implementation of the proposed control algorithm does not require the prior knowledge of the bounds of unknown uncertainties and external disturbances and completely eliminates the chattering phenomenon of the conventional sliding mode control approach. The closed-loop system with the adopted control algorithm has been proved to be globally stable by using Lyapunov stability theory. Numerical simulations using the dynamical model of robot manipulators with modeling uncertainties demonstrate the superiority and effectiveness of the proposed approach in high speed trajectory tracking problems.

Keywords: PID, robot, sliding mode control, uncertainties

Procedia PDF Downloads 487
4737 FlexPoints: Efficient Algorithm for Detection of Electrocardiogram Characteristic Points

Authors: Daniel Bulanda, Janusz A. Starzyk, Adrian Horzyk

Abstract:

The electrocardiogram (ECG) is one of the most commonly used medical tests, essential for correct diagnosis and treatment of the patient. While ECG devices generate a huge amount of data, only a small part of it carries valuable medical information. To deal with this problem, many compression algorithms and filters have been developed over the past years. However, the rapid development of new machine learning techniques poses new challenges. To address this class of problems, we created the FlexPoints algorithm, which searches for characteristic points on the ECG signal and ignores all other points that do not carry relevant medical information. The conducted experiments proved that the presented algorithm can significantly reduce the number of data points representing the ECG signal without losing valuable medical information. These sparse but essential characteristic points (flex points) can be a perfect input for some modern machine learning models, which work much better using flex points as input instead of raw data or data compressed by many popular algorithms.

Keywords: characteristic points, electrocardiogram, ECG, machine learning, signal compression

Procedia PDF Downloads 151
4736 Water Balance in the Forest Basins Essential for the Water Supply in Central America

Authors: Elena Listo Ubeda, Miguel Marchamalo Sacristan

Abstract:

The demand for water doubles every twenty years, a rate twice as fast as the world's population growth. Despite its great importance, water is one of the most degraded natural resources in the world, mainly because of the reduction of natural vegetation cover, population growth, contamination, and changes in soil use that reduce its capacity to collect water. This situation is especially serious in Central America, as reflected in the Human Development reports. The objective of this project is to assist in the improvement of water production and quality in Central America. To this end, two watersheds in Costa Rica were selected as experiments: that of the Virilla-Durazno River, located in the extreme northeast of the Central Valley, which has an Atlantic influence; and that of the Jabillo River, which flows directly into the Pacific. The Virilla River watershed lies over andisols and that of the Jabillo River over alfisols, and both are of great importance for the water supply to the Greater Metropolitan Area and to future tourist resorts, respectively, as well as for agriculture, livestock and hydroelectricity production. The hydrological response of different soil-cover complexes, varying from secondary forest to natural vegetation and degraded pasture, was analyzed through the evaluation of soil properties, infiltration and soil compaction, as well as the effects of the soil-cover complex on erosion, calculated with the C factor of the Revised Universal Soil Loss Equation (RUSLE). A water balance was defined for each watershed, in which the volumes of water entering and leaving were estimated, as well as evapotranspiration, runoff, and infiltration. Two future scenarios, representing the implementation of reforestation and deforestation plans, were proposed and analyzed for the effects of the soil-cover complex on the water balance in each case. The results obtained show an increase in groundwater recharge in the humid forest areas, and an extension of the study to the dry areas is proposed, since groundwater recharge there is diminishing. These results are of great significance for planning, for the design of Payment Schemes for Environmental Services, and for the improvement of existing water supply systems. In Central America, spatial planning is a priority, as are the watersheds, in order to assess the water resource socially and economically and to secure its availability for the future.

Keywords: Costa Rica, infiltration, soil, water

Procedia PDF Downloads 375
4735 Peeling Behavior of Thin Elastic Films Bonded to Rigid Substrate of Random Surface Topology

Authors: Ravinu Garg, Naresh V. Datla

Abstract:

We study the fracture mechanics of the peeling of thin films perfectly bonded to a rigid substrate of arbitrary random surface topology using an analytical formulation. A generalized theoretical model has been developed to determine the peel strength of thin elastic films. It is demonstrated that an improvement in the peel strength can be achieved by modifying the surface characteristics of the rigid substrate. A characterization study has been performed to analyze the effect of different parameters on the effective peel force from the rigid surface. Different surface profiles, such as circular and sinusoidal, have been considered to demonstrate the bonding characteristics of the film-substrate interface. The condition for instability in the debonding of the film is analyzed, where localized self-debonding arises depending upon the film and surface characteristics. This study is aimed at improving the adhesion strength of thin films to rigid substrates using different textured surfaces.

Keywords: debonding, fracture mechanics, peel test, thin film adhesion

Procedia PDF Downloads 437
4734 Pavement Maintenance and Rehabilitation Scheduling Using Genetic Algorithm Based Multi Objective Optimization Technique

Authors: Ashwini Gowda K. S, Archana M. R, Anjaneyappa V

Abstract:

This paper presents a pavement maintenance and management system (PMMS) to obtain optimum pavement maintenance and rehabilitation strategies and maintenance scheduling for a network using a multi-objective genetic algorithm (MOGA). The optimal pavement maintenance and rehabilitation strategy maximizes the pavement condition index of the road sections in the network with minimum maintenance and rehabilitation cost over the planning period. In this paper, NSGA-II is applied to perform maintenance optimization; this maintenance approach is expected to preserve and improve the existing condition of the highway network in a cost-effective way. The proposed PMMS is applied to a network whose pavement is assessed based on the pavement condition index (PCI). The minimum and maximum maintenance costs for a planning period of 20 years obtained from the non-dominated solutions were found to be 5.190x10¹⁰ ₹ and 4.81x10¹⁰ ₹, respectively.
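
The NSGA-II machinery itself is not reproduced here; the sketch below only illustrates the underlying bi-objective idea: score candidate maintenance schedules on total cost and final network PCI, then keep the non-dominated (Pareto) set. Treatment effects, costs, and deterioration rates are invented numbers.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical treatments per section per period: 0 = do nothing, 1 = maintenance, 2 = rehabilitation
COST = np.array([0.0, 1.0, 5.0])       # relative cost units
PCI_GAIN = np.array([0.0, 8.0, 30.0])  # PCI improvement per treatment
DETERIORATION = 4.0                    # PCI lost per period without treatment

def evaluate(plan, pci0=70.0):
    """plan: (n_sections, n_periods) matrix of treatment codes.
    Returns (total cost, mean final PCI); PCI is kept in [0, 100]."""
    pci = np.full(plan.shape[0], pci0)
    for t in range(plan.shape[1]):
        pci = np.clip(pci - DETERIORATION + PCI_GAIN[plan[:, t]], 0, 100)
    return COST[plan].sum(), pci.mean()

def pareto_front(points):
    """Keep solutions for which no other solution has lower cost AND higher PCI."""
    keep = []
    for i, (c_i, p_i) in enumerate(points):
        dominated = any(c_j <= c_i and p_j >= p_i and (c_j, p_j) != (c_i, p_i)
                        for j, (c_j, p_j) in enumerate(points) if j != i)
        if not dominated:
            keep.append(i)
    return keep

plans = [rng.integers(0, 3, size=(10, 20)) for _ in range(500)]   # 10 sections, 20 periods
scores = [evaluate(p) for p in plans]
front = pareto_front(scores)
for i in sorted(front, key=lambda k: scores[k][0])[:5]:
    print(f"cost = {scores[i][0]:6.1f}, mean PCI = {scores[i][1]:5.1f}")
```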

Keywords: genetic algorithm, maintenance and rehabilitation, optimization technique, pavement condition index

Procedia PDF Downloads 138