Search results for: incremental conductance algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3864

1614 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach

Authors: Jared Beard, Ali Baheri

Abstract:

As autonomous systems become more prominent in society, ensuring their safe operation becomes increasingly important. This is clearly illustrated by autonomous cars traveling through a crowded city or robots traversing a warehouse alongside heavy equipment. Human environments can be complex, with high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and in simulation-based approaches, searching the entirety of the problem space can be computationally intractable, ruling out formal methods. To overcome this, approximate methods may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system; its premise is that a learned model can help find new failure scenarios, making better use of simulations. Despite these strengths, AST fails to find particularly sparse failures and is inclined to rediscover solutions similar to those found previously. Multi-fidelity learning can be used to alleviate this overuse of information: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using “knows what it knows” (KWIK) reinforcement learners to minimize the number of samples drawn in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the bidirectional multi-fidelity AST framework described. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, the distinct failure modes found, and the relative effect of learning after a number of trials.
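
To give a concrete flavor of the adversarial setup described above, the following is a minimal sketch (not the authors' KWIK or multi-fidelity implementation): a tabular Q-learning adversary in a toy grid world rewarded for intercepting an agent, where interception plays the role of a "failure". Grid size, rewards, and the agent's fixed policy are invented for illustration.

```python
# Sketch: Q-learning adversary seeking failures (interceptions) in a grid world.
import random

SIZE = 5
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def step(pos, move):
    x, y = pos
    dx, dy = move
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def agent_policy(pos):
    # Fixed, greedy agent: always moves toward the goal.
    x, _ = pos
    return (1, 0) if x < GOAL[0] else (0, 1)

Q = {}  # adversary action values, keyed by ((agent_pos, adv_pos), action index)
alpha, gamma, eps = 0.2, 0.95, 0.1

for episode in range(5000):
    agent, adv = (0, 0), (4, 0)
    for t in range(30):
        s = (agent, adv)
        a = random.randrange(4) if random.random() < eps else \
            max(range(4), key=lambda i: Q.get((s, i), 0.0))
        agent = step(agent, agent_policy(agent))
        adv = step(adv, ACTIONS[a])
        # "Failure" from the system's viewpoint = interception by the adversary.
        r = 1.0 if adv == agent else -0.01
        s2 = (agent, adv)
        best_next = max(Q.get((s2, i), 0.0) for i in range(4))
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
        if adv == agent or agent == GOAL:
            break
```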

Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification

Procedia PDF Downloads 151
1613 Fast Algorithm to Determine Initial Tsunami Wave Shape at Source

Authors: Alexander P. Vazhenin, Mikhail M. Lavrentiev, Alexey A. Romanenko, Pavel V. Tatarintsev

Abstract:

One of the problems obstructing effective tsunami modelling is the lack of information about the initial wave shape at the source. The existing methods (geological surveys, sea radars, satellite images) carry a considerable degree of uncertainty. Therefore, direct measurements of tsunami waves obtained by deep-water bottom pressure recorders are also used. In this paper, we propose a new method to reconstruct the initial sea surface displacement at the tsunami source by approximating the measured signal (marigram) with a linear combination of synthetic marigrams from a selected set of unit sources, calculated in advance. This method has demonstrated good precision and very high performance. The mathematical model and the results of numerical tests are described here.
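
A minimal sketch of the reconstruction idea, assuming precomputed synthetic marigrams: the measured marigram is approximated by a least-squares linear combination of unit-source responses. All data below are synthetic placeholders, not the paper's.

```python
import numpy as np

n_samples, n_sources = 600, 20
rng = np.random.default_rng(0)

G = rng.standard_normal((n_samples, n_sources))   # columns: unit-source marigrams
true_coeffs = np.zeros(n_sources)
true_coeffs[[3, 7]] = [1.5, -0.8]                  # a sparse "true" source
observed = G @ true_coeffs + 0.05 * rng.standard_normal(n_samples)

coeffs, *_ = np.linalg.lstsq(G, observed, rcond=None)
# The initial sea-surface displacement is then the same linear combination
# of the unit-source displacement fields, weighted by `coeffs`.
```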

Keywords: numerical tests, orthogonal decomposition, tsunami initial sea surface displacement

Procedia PDF Downloads 465
1612 Protein Tertiary Structure Prediction by a Multiobjective Optimization and Neural Network Approach

Authors: Alexandre Barbosa de Almeida, Telma Woerle de Lima Soares

Abstract:

Protein structure prediction is a challenging task in the bioinformatics field. The biological function of a protein relies largely on the shape of its three-dimensional conformational structure, yet less than 1% of all known proteins have had their structures solved. This work proposes a deep learning model to address this problem, attempting to predict some aspects of protein conformations. Through a process of multiobjective dominance, a recurrent neural network was trained to abstract the particular bias of each individual multiobjective algorithm, generating a heuristic that could be useful for predicting some of the relevant aspects of the three-dimensional conformation formation process, known as protein folding.
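
For readers unfamiliar with the dominance relation underlying multiobjective optimization, here is a minimal sketch (objectives assumed to be minimized; the objective names in the example are placeholders, not the paper's):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(population):
    """Return the current Pareto front of a list of objective vectors."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q is not p)]

# e.g. objectives = (bonded energy, van der Waals energy) of candidate folds
front = nondominated([(3.2, 1.1), (2.8, 1.4), (3.0, 1.0), (4.1, 2.2)])
```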

Keywords: Ab initio heuristic modeling, multiobjective optimization, protein structure prediction, recurrent neural network

Procedia PDF Downloads 201
1611 Product Design and Development of Wearable Assistant Device

Authors: Hao-Jun Hong, Jung-Tang Huang

Abstract:

The world is gradually becoming an aging society, and with the shrinking labor force, this phenomenon is affecting national economic growth. Although nursing centers have been booming in recent years, the lack of medical resources has yet to be resolved, so creating an innovative wearable medical device could be a vital solution. This research focuses on the design and development of a wearable device that obtains a more precise heart failure measurement than products on the market. The method used by the device is based on sensor fusion and a big-data algorithm. The test results show that the modified structure of the wearable device can significantly decrease motion artifact (MA) and provide users with a more comfortable and accurate physiological monitoring experience.
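
The paper's sensor-fusion algorithm is not specified, so the sketch below uses a classic stand-in for motion-artifact reduction: LMS adaptive noise cancellation, in which an accelerometer channel serves as the motion reference to clean a PPG-type signal. Signal names and parameters are illustrative.

```python
import numpy as np

def lms_cancel(ppg, accel, n_taps=16, mu=0.01):
    """Subtract the motion-correlated component of `accel` from `ppg`."""
    w = np.zeros(n_taps)
    cleaned = np.zeros(len(ppg))
    for n in range(n_taps, len(ppg)):
        x = accel[n - n_taps:n][::-1]      # reference window (most recent first)
        est = w @ x                        # estimated motion artifact
        e = ppg[n] - est                   # error = artifact-reduced sample
        w += 2 * mu * e * x                # LMS weight update
        cleaned[n] = e
    return cleaned
```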

Keywords: big data, heart failure, motion artifact, sensor fusion, wearable medical device

Procedia PDF Downloads 348
1610 Programmed Speech to Text Summarization Using Graph-Based Algorithm

Authors: Hamsini Pulugurtha, P. V. S. L. Jagadamba

Abstract:

Programmed speech-to-text and text summarization using graph-based algorithms can be utilized in meetings to obtain a short description of the meeting for future reference. The system performs signature verification using a Siamese neural network to confirm the identity of the user, then converts the user-provided audio file, which is in English, into English text using a speech recognition package available in Python. Sometimes only the summary of the meeting is required; text summarization is the solution to this. Thus, the transcript is then summarized using natural language processing approaches, such as unsupervised extractive text summarization algorithms.
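
A minimal sketch of graph-based (TextRank-style) extractive summarization, assuming networkx and scikit-learn are available; the exact graph algorithm used by the authors is not specified, so this is illustrative only.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def summarize(sentences, n_keep=3):
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)               # sentence-similarity matrix
    graph = nx.from_numpy_array(sim)             # nodes = sentences, edges = similarity
    scores = nx.pagerank(graph)                  # centrality as salience score
    ranked = sorted(range(len(sentences)), key=scores.get, reverse=True)[:n_keep]
    return [sentences[i] for i in sorted(ranked)]  # keep original sentence order
```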

Keywords: Siamese neural network, English speech, English text, natural language processing, unsupervised extractive text summarization

Procedia PDF Downloads 211
1609 Quantifying Multivariate Spatiotemporal Dynamics of Malaria Risk Using Graph-Based Optimization in Southern Ethiopia

Authors: Yonas Shuke Kitawa

Abstract:

Background: Although malaria incidence has fallen sharply over the past few years, the rate of decline varies by district, time, and malaria type. Despite this decline, malaria remains a major public health threat in various districts of Ethiopia. Consequently, the present study aims to develop a predictive model that helps to identify the spatio-temporal variation in malaria risk by multiple Plasmodium species. Methods: We propose a multivariate spatio-temporal Bayesian model to obtain a more coherent picture of the temporally varying spatial variation in disease risk. The spatial autocorrelation in such a data set is typically modeled by a set of random effects assigned a conditional autoregressive (CAR) prior distribution. However, the autocorrelation considered in such cases depends on a binary neighborhood matrix specified through the border-sharing rule. Here, we propose a graph-based optimization algorithm for estimating the neighborhood matrix that better represents the spatial correlation, treating the areal units as the vertices of a graph and the neighbor relations as its set of edges. Furthermore, we used aggregated malaria counts in southern Ethiopia from August 2013 to May 2019. Results: We found that precipitation, temperature, and humidity are positively associated with malaria threat in the area, while the enhanced vegetation index, nighttime light (NTL), and distance from coastal areas are negatively associated. Moreover, nonlinear relationships were observed between malaria incidence and precipitation, temperature, and NTL. Additionally, lagged effects of temperature and humidity have a significant effect on malaria risk for either species. A more elevated risk of P. falciparum was observed following the rainy season, and unstable transmission of P. vivax was observed in the area. Finally, P. vivax risk is less sensitive to environmental factors than that of P. falciparum. Conclusion: Improved inference was gained by employing the proposed approach in comparison to the commonly used border-sharing rule. Additionally, different covariates were identified, including delayed effects, and elevated risks of both species were observed in districts located in the central and western regions. As malaria transmission operates in a spatially continuous manner, a spatially continuous model should be employed when it is computationally feasible.
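
A small sketch of the neighborhood structure behind a CAR prior: areal units as graph vertices, neighbor relations as edges, and a proper-CAR precision matrix Q = τ(D − ρW). The edge list is a hypothetical district graph; a graph-based optimization would search over edge sets like this instead of fixing them by the border-sharing rule.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # hypothetical district graph
n = 4
W = np.zeros((n, n))                                # neighborhood (adjacency) matrix
for i, j in edges:
    W[i, j] = W[j, i] = 1.0
D = np.diag(W.sum(axis=1))                          # number of neighbors per unit

tau, rho = 1.0, 0.9                                 # precision and autocorrelation
Q = tau * (D - rho * W)                             # proper-CAR precision matrix
```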

Keywords: disease mapping, MSTCAR, graph-based optimization algorithm, P. falciparum, P. vivax, weighting matrix

Procedia PDF Downloads 73
1608 Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data

Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora

Abstract:

Optimizing the drilling process for cost and efficiency requires optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour, and is the primary indicator of drilling efficiency. Maximizing the ROP can mean fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior to drilling, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase of the system, geological and historical drilling data are aggregated. Afterward, the top-rated wells, as a function of high ROP instances, are identified. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value; this phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before through offset wells. The data are then consolidated into a heat map as a function of ROP. A more optimal ROP performance is identified through the heat map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments resulted in improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. Those factors position the system to work for any newly drilled well in a developing field.
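
A minimal sketch of the IDW step named above: estimating the controllable parameters (WOB, RPM, GPM) at a new point as a distance-weighted mean of offset-well observations. The sample coordinates and values are placeholders.

```python
import numpy as np

def idw(query, points, values, power=2.0, eps=1e-9):
    """Distance-weighted mean of `values` observed at `points`."""
    d = np.linalg.norm(points - query, axis=1) + eps
    w = 1.0 / d**power
    return (w[:, None] * values).sum(axis=0) / w.sum()

# offset-well locations (x, y, depth) and their (WOB, RPM, GPM) at the best ROP
points = np.array([[0.0, 0.0, 1500.0], [400.0, 250.0, 1520.0], [120.0, 90.0, 1480.0]])
values = np.array([[22.0, 140.0, 900.0], [25.0, 150.0, 950.0], [21.0, 135.0, 880.0]])
wob, rpm, gpm = idw(np.array([100.0, 80.0, 1505.0]), points, values)
```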

Keywords: drilling optimization, geological formations, machine learning, rate of penetration

Procedia PDF Downloads 129
1607 Biologically Inspired Small Infrared Target Detection Using Local Contrast Mechanisms

Authors: Tian Xia, Yuan Yan Tang

Abstract:

In order to obtain higher small-target detection accuracy, this paper presents an effective algorithm inspired by the local contrast mechanism. The proposed method can enhance the target signal and suppress background clutter simultaneously. In the first stage, an enhanced image is obtained using the proposed Weighted Laplacian of Gaussian. In the second stage, an adaptive threshold is adopted to segment the target. Experimental results on two challenging image sequences show that the proposed method can detect bright and dark targets simultaneously and is not sensitive to the sea-sky line of the infrared image, making it well suited for small infrared target detection.
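
A rough sketch of the two-stage pipeline, using a standard (unweighted) Laplacian-of-Gaussian filter as a stand-in for the proposed Weighted LoG, followed by an adaptive mean-plus-k-sigma threshold; sigma and k are illustrative choices.

```python
import numpy as np
from scipy import ndimage

def detect_small_targets(image, sigma=1.5, k=4.0):
    # Stage 1: LoG enhancement; the sign flip makes bright blobs positive,
    # and the absolute value lets both bright and dark targets respond.
    enhanced = -ndimage.gaussian_laplace(image.astype(float), sigma=sigma)
    response = np.abs(enhanced)
    # Stage 2: adaptive threshold derived from the response statistics.
    thr = response.mean() + k * response.std()
    return response > thr        # binary target mask
```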

Keywords: small target detection, local contrast, human vision system, Laplacian of Gaussian

Procedia PDF Downloads 463
1606 Design of Reconfigurable Fixed-Point LMS Adaptive FIR Filter

Authors: S. Padmapriya, V. Lakshmi Prabha

Abstract:

In this paper, an efficient reconfigurable fixed-point Least Mean Square adaptive FIR filter is proposed. The proposed architecture has two modes of operation: one is an area-efficient design and the other is power-optimized. Pipelining of the adder blocks and the partial product generator is used to achieve low area, and reversible logic is used to obtain a low-power design. Depending upon the input samples and filter coefficients, one of the techniques is chosen. Least-mean-square adaptation is performed to update the weights. The architecture is coded in Verilog and synthesized in Cadence Encounter 0.18 μm technology. The synthesis results show that the area reduction ratio of the proposed design compared with the conventional technique is about 1.2%.
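
For reference, this is the LMS adaptation the hardware implements, written as a floating-point software model (useful for checking a fixed-point Verilog implementation against); signal and step size are illustrative.

```python
import numpy as np

def lms_fir(x, desired, n_taps=8, mu=0.005):
    """Adapt FIR weights so the filter output tracks `desired`."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        window = x[n - n_taps:n][::-1]     # most recent samples first
        y[n] = w @ window                  # filter output
        e = desired[n] - y[n]              # error signal
        w += 2 * mu * e * window           # LMS weight update
    return w, y
```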

Keywords: adaptive filter, carry select adder, least mean square algorithm, reversible logic

Procedia PDF Downloads 324
1605 Feasibility Study of Distributed Lightless Intersection Control with Level 1 Autonomous Vehicles

Authors: Bo Yang, Christopher Monterola

Abstract:

Urban intersection control without the use of traffic lights has the potential to vastly improve the efficiency of urban traffic flow. For most proposals in the literature, such lightless intersection control depends on the mass-market commercialization of highly intelligent autonomous vehicles (AVs), which limits the prospects of near-future implementation. We present an efficient lightless intersection traffic control scheme that only requires Level 1 AVs as defined by the NHTSA. The technological barriers to such lightless intersection control are thus very low. Our algorithm can also accommodate a mixture of AVs and conventional vehicles. We also carry out large-scale numerical analysis to illustrate the feasibility, safety, robustness, comfort level, and control efficiency of our intersection control scheme.

Keywords: intersection control, autonomous vehicles, traffic modelling, intelligent transport system

Procedia PDF Downloads 453
1604 A Multiobjective Damping Function for Coordinated Control of Power System Stabilizer and Power Oscillation Damping

Authors: Jose D. Herrera, Mario A. Rios

Abstract:

This paper deals with the coordinated tuning of the Power System Stabilizer (PSS) controller and the Power Oscillation Damping (POD) controller of a Flexible AC Transmission System (FACTS) in multi-machine power systems. The coordinated tuning is based on the critical eigenvalues of the power system and a model reduction technique in which the Hankel singular value method is applied. Through the linearized system model and a parameter-constrained nonlinear optimization algorithm, the parameters of both controllers can be computed. Moreover, the parameters are optimized simultaneously, obtaining the gains of both controllers. Then, a nonlinear simulation is performed to observe the time response of the controllers.
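
A small sketch of the model-reduction ingredient: Hankel singular values of a stable LTI system computed from its controllability and observability Gramians via Lyapunov equations. The state-space matrices below are illustrative, not a power-system model.

```python
import numpy as np
from scipy import linalg

A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Gramians solve A Wc + Wc A^T + B B^T = 0 and A^T Wo + Wo A + C^T C = 0.
Wc = linalg.solve_continuous_lyapunov(A, -B @ B.T)
Wo = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)

hsv = np.sqrt(np.linalg.eigvals(Wc @ Wo).real)   # Hankel singular values
print(np.sort(hsv)[::-1])  # small values mark states that reduction can drop
```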

Keywords: electromechanical oscillations, power system stabilizers, power oscillation damping, Hankel singular values

Procedia PDF Downloads 587
1603 Hand Motion Trajectory Analysis for Dynamic Hand Gestures Used in Indian Sign Language

Authors: Daleesha M. Viswanathan, Sumam Mary Idicula

Abstract:

Dynamic hand gestures are an intrinsic component of sign language communication. Extracting spatial-temporal features of the hand gesture trajectory plays an important role in a dynamic gesture recognition system. Finding a discrete feature descriptor for the motion trajectory based on orientation features is the main concern of this paper. A Kalman filter algorithm and Hidden Markov Models (HMMs) are incorporated into this recognition system for hand trajectory tracking and spatial-temporal classification, respectively.
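
A compact sketch of the feature step: quantizing trajectory orientation into a discrete codebook suitable for HMM observation symbols. In the full system the points would come from the Kalman tracker; here they are placeholders, and the bin count is an illustrative choice.

```python
import numpy as np

def orientation_codes(points, n_bins=12):
    """Map each trajectory segment to one of `n_bins` direction symbols."""
    diffs = np.diff(points, axis=0)
    angles = np.arctan2(diffs[:, 1], diffs[:, 0])        # in (-pi, pi]
    return ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

trajectory = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [3, -1]], dtype=float)
print(orientation_codes(trajectory))   # discrete feature vector for the HMM
```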

Keywords: orientation features, discrete feature vector, HMM, Indian sign language

Procedia PDF Downloads 364
1602 Tongue Image Retrieval Using Machine Learning

Authors: Ahmad Farooq, Xinfeng Zhang, Fahad Sabah, Raheem Sarwar

Abstract:

In Traditional Chinese Medicine (TCM), tongue diagnosis is a vital inspection tool. In this study, we explore the potential of machine learning in tongue diagnosis. It begins with the cataloguing of the various classifications and characteristics of the human tongue. We infer 24 kinds of tongues from the material and coating of the tongue, and we identify 21 attributes of the tongue. The next step is to apply machine learning methods to the tongue dataset. We use the Weka machine learning platform to conduct the experiment for performance analysis. The 457 instances of the tongue dataset are used to test the performance of five different machine learning methods, including SVM, Random Forests, Decision Trees, and Naive Bayes. Based on accuracy and Area Under the ROC Curve (AUC), the Support Vector Machine algorithm was shown to be the most effective for tongue diagnosis.
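
A scikit-learn analogue of the Weka experiment described above: train an SVM on tongue-attribute vectors and score it by accuracy and ROC AUC. The data here are randomly generated stand-ins for the 457-instance dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((457, 21))          # 21 tongue attributes per instance
y = rng.integers(0, 2, 457)                 # binary label for illustration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(probability=True).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```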

Keywords: medical imaging, image retrieval, machine learning, tongue

Procedia PDF Downloads 73
1601 EhfadHaya (SaveLife) / AateHayah (GiveLife) Blood Donor Website

Authors: Sameer Muhammad Aslam, Nura Said Mohsin Al-Saifi

Abstract:

This research describes the process of creating a blood donation website for Oman. Blood donation is a widespread, crucial, ongoing process, so it is important that this website be easy to use. Several automated blood management systems are available, but none provides an effective algorithm that takes into account variables such as frequency of donation, donation date, and gender. In Oman, the Ministry of Health maintains a blood bank and keeps donors informed about the need for blood through a website. It also informs donors and the wider public where and when the next blood donation event will take place. The website's main goals are to educate the community about the benefits of blood donation, to manage donor and recipient documentation, and to encourage voluntary blood donation by providing easy access to information about blood types and blood distribution in various hospitals in Oman, based on hospital needs.

Keywords: Oman, blood bank, blood donors, donor website

Procedia PDF Downloads 213
1600 The Impact of Professional Development on Teachers’ Instructional Practice

Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier

Abstract:

Although studies of teacher professional development (PD) are prevalent, surprisingly most have produced only incremental shifts in teachers' learning and their impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD and why we often do not see greater changes in learning and practice. This paper is based on a mixed-methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. The extent to which the materials produce a beneficial impact on teachers' mathematics knowledge, classroom practices, and their students' knowledge in the domain of geometry is considered through a group-randomized experimental design. In this study, we examine a small group of teachers to better understand their interpretations of the workshops and their classroom uptake. The participants included 103 secondary mathematics teachers serving grades 6-12 from two states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered to teachers, beginning with a one-week institute (Summer 2016) and four days of PD throughout the academic year. The same facilitator led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD-embedded assessments, pre-post-post videotaped classroom observations, and student assessments. Additional data were collected from the case study teachers, including additional videotaped classroom observations and interviews. Repeated-measures ANOVA analyses were used to detect patterns of change in the treatment teachers' content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found across the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, the LTG PD intervention and instruction that engaged students in mathematical practices both positively and significantly predicted greater student knowledge gains; teacher knowledge was not a significant predictor. Twelve treatment teachers self-selected to serve as case study teachers and provided additional videotapes in which they felt they were using something they had learned and experienced in the PD. Project staff analyzed the videos, compared them to previous videos, and interviewed the teachers regarding their uptake of the PD related to content knowledge, pedagogical knowledge, and resources used.

Keywords: teacher learning, professional development, pedagogical content knowledge, geometry

Procedia PDF Downloads 165
1599 Isogeometric Topology Optimization in Cracked Structures Design

Authors: Dongkyu Lee, Thanh Banh Thien, Soomi Shin

Abstract:

In the present study, isogeometric topology optimization is proposed for cracked structures, using Solid Isotropic Material with Penalization (SIMP) as the design model. Design density variables defined in the design space are used to approximate the element analysis density by bivariate B-spline basis functions. The mathematical formulation of the topology optimization problem, solving for minimum structural compliance, is an alternating active-phase algorithm with a Gauss-Seidel version as an optimization model of optimality criteria. Stiffness and adjoint sensitivity formulations linked to the strain energy of the cracked structure are proposed in terms of design density variables. Numerical examples demonstrate the interaction of topology optimization with the design of cracked structures.
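
A minimal sketch of the optimality-criteria density update common to SIMP compliance minimization (bisection on the Lagrange multiplier enforcing the volume constraint); the sensitivities are placeholders that a finite-element or isogeometric analysis would supply, and this is not the authors' active-phase algorithm itself.

```python
import numpy as np

def oc_update(x, dc, vol_frac, move=0.2):
    """One optimality-criteria step; `dc` = compliance sensitivities (<= 0)."""
    lo, hi = 1e-9, 1e9
    while hi - lo > 1e-6 * hi:               # bisection on the multiplier
        lam = 0.5 * (lo + hi)
        x_new = np.clip(x * np.sqrt(-dc / lam),
                        np.maximum(x - move, 1e-3),
                        np.minimum(x + move, 1.0))
        if x_new.mean() > vol_frac:           # too much material: raise multiplier
            lo = lam
        else:
            hi = lam
    return x_new

x = np.full(100, 0.5)                         # initial densities
dc = -np.linspace(1.0, 2.0, 100)              # stand-in sensitivities
x = oc_update(x, dc, vol_frac=0.5)
```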

Keywords: topology optimization, isogeometric, NURBS, design

Procedia PDF Downloads 488
1598 Applying Sequential Pattern Mining to Generate Block for Scheduling Problems

Authors: Meng-Hui Chen, Chen-Yu Kao, Chia-Yu Hsu, Pei-Chann Chang

Abstract:

The main idea in this paper is to use sequential pattern mining to find information that helps locate high-performance solutions. By combining this information, blocks are defined. Using the blocks to generate artificial chromosomes (ACs) can improve the structure of solutions. Estimation of Distribution Algorithms (EDAs) are adapted to solve the combinatorial problems. Although many such approaches are advantageous for this application, only some of them are used to enhance its efficiency. Generating ACs using patterns and EDAs can increase diversity. According to the experimental results, the proposed algorithm performs better on permutation flow-shop problems.
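
A toy sketch of the EDA idea: learn position frequencies of jobs from an elite set of permutations and sample a new artificial chromosome from that model. Mined "blocks" would bias these probabilities; the elite data here are invented.

```python
import numpy as np

elite = np.array([[0, 1, 2, 3], [0, 2, 1, 3], [1, 0, 2, 3]])  # good schedules
n_jobs = elite.shape[1]

# P[j, p]: estimated probability that job j occupies position p.
P = np.zeros((n_jobs, n_jobs))
for perm in elite:
    for pos, job in enumerate(perm):
        P[job, pos] += 1
P = (P + 0.1) / (P + 0.1).sum(axis=0, keepdims=True)   # smooth and normalize

rng = np.random.default_rng(0)
chromosome, free = [], set(range(n_jobs))
for pos in range(n_jobs):                    # sample one job per position
    probs = np.array([P[j, pos] if j in free else 0.0 for j in range(n_jobs)])
    job = rng.choice(n_jobs, p=probs / probs.sum())
    chromosome.append(int(job))
    free.remove(job)
print(chromosome)                            # a new artificial chromosome
```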

Keywords: combinatorial problems, sequential pattern mining, estimation of distribution algorithms, artificial chromosomes

Procedia PDF Downloads 598
1597 Optimal Injected Current Control for Shunt Active Power Filter Using Artificial Intelligence

Authors: Brahim Berbaoui

Abstract:

In this paper, a new particle swarm optimization (PSO) based method is proposed for the implementation of optimal harmonic power flow in power systems. In this approach, the proportional-integral controller for the reference compensating currents of the active power filter is tuned in order to minimize the total harmonic distortion (THD). The simulation results show that the new control method using the PSO approach is not only easy to implement but also very effective in reducing unwanted harmonics and compensating reactive power. The studies were carried out using the MATLAB Simulink Power System Toolbox.
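
A bare-bones PSO loop tuning PI gains (Kp, Ki); the objective here is a toy quadratic stand-in for the THD value a circuit simulation would return, and all bounds and coefficients are illustrative.

```python
import numpy as np

def thd_proxy(g):                      # placeholder for a Simulink/THD evaluation
    return (g[0] - 1.2) ** 2 + (g[1] - 40.0) ** 2 / 100.0

rng = np.random.default_rng(1)
n = 20
pos = rng.uniform([0, 0], [5, 100], size=(n, 2))     # (Kp, Ki) particles
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([thd_proxy(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for it in range(100):
    r1, r2 = rng.random((2, n, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, [0, 0], [5, 100])
    f = np.array([thd_proxy(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()
print("tuned (Kp, Ki):", gbest)
```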

Keywords: shunt active power filter, power quality, current control, proportional integral controller, particle swarm optimization

Procedia PDF Downloads 610
1596 Analysis of Nonlinear and Non-Stationary Signal to Extract the Features Using Hilbert Huang Transform

Authors: A. N. Paithane, D. S. Bormane, S. D. Shirbahadurkar

Abstract:

It has been seen that emotion recognition is an important research topic in the field of human-computer interaction. A novel technique for feature extraction (FE) is presented here; furthermore, a new method based on the HHT has been used for human emotion recognition. This method is feasible for analyzing nonlinear and non-stationary signals. Each signal is decomposed into intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). These functions are used to extract features using fission and fusion processes. The decomposition technique we adopt is a new technique for adaptively decomposing signals. In this perspective, we report here the potential usefulness of EMD-based techniques. We evaluated the algorithm on the Augsburg University database, a manually annotated database.
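
A short sketch of the HHT pipeline on a synthetic signal, assuming the third-party PyEMD package is installed for the EMD step; SciPy's Hilbert transform then yields instantaneous amplitude and frequency features per IMF. This illustrates the general HHT flow, not the paper's fission/fusion feature scheme.

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD          # assumption: the "EMD-signal" (PyEMD) package

fs = 250.0
t = np.arange(0, 4, 1 / fs)
signal = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 9.0 * t)

imfs = EMD().emd(signal)                       # empirical mode decomposition
for k, imf in enumerate(imfs):
    analytic = hilbert(imf)                    # Hilbert transform per IMF
    amp = np.abs(analytic)                     # instantaneous amplitude
    inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
    print(k, amp.mean(), inst_freq.mean())     # simple per-IMF features
```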

Keywords: intrinsic mode function (IMF), Hilbert-Huang transform (HHT), empirical mode decomposition (EMD), emotion detection, electrocardiogram (ECG)

Procedia PDF Downloads 575
1595 Impairments Correction of Six-Port Based Millimeter-Wave Radar

Authors: Dan Ohev Zion, Alon Cohen

Abstract:

In recent years, the presence of short-range millimeter-wave radar in civil applications has increased significantly. Autonomous driving, security, 3D imaging, and high-data-rate communication systems are a few examples. The next challenge is integration inside small form-factor devices, such as smartphones (e.g., for gesture recognition). The main challenge is the implementation of a truly low-power, low-complexity, high-resolution radar. The most popular approach is the Frequency Modulated Continuous Wave (FMCW) radar, with an analog multiplication front-end. In this paper, we present an approach for adaptive estimation and correction of the impairments of such a front-end, specifically one implemented using the Six-Port Device (SPD) as the multiplier element. The proposed algorithm was simulated and implemented on a 60 GHz radar lab prototype.
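
A simple blind IQ-imbalance correction sketch in the Gram-Schmidt style: remove DC offsets, orthogonalize Q against I, and equalize channel powers. This is a generic stand-in for front-end impairment correction; the paper's six-port-specific estimator is not reproduced here.

```python
import numpy as np

def correct_iq(i, q):
    i = i - i.mean()                              # remove DC offsets
    q = q - q.mean()
    q = q - (np.dot(i, q) / np.dot(i, i)) * i     # cancel phase imbalance
    q = q * np.sqrt(np.dot(i, i) / np.dot(q, q))  # equalize gain imbalance
    return i, q

# synthetic beat signal with 0.8 gain and 10-degree phase imbalance on Q
t = np.linspace(0, 1e-3, 4000)
phi = 2 * np.pi * 20e3 * t
i_ch, q_ch = np.cos(phi), 0.8 * np.sin(phi + np.deg2rad(10))
i_c, q_c = correct_iq(i_ch, q_ch)                 # image sidelobe is suppressed
```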

Keywords: radar, FMCW radar, IQ mismatch, six-port

Procedia PDF Downloads 148
1594 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database, two of which use the natural logarithm of CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with the gas-phase composition of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied for correlating the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons in the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, root mean squared error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with the predictions inferred from sixteen well-known gas geothermometers (previously developed), was statistically evaluated using an external database to avoid a bias problem. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (Mexico). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
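
A compact analogue of the ANN calibration described above, assuming scikit-learn: fit gas-phase log-compositions to bottomhole temperatures and score by RMSE. Note that scikit-learn's MLP does not offer Levenberg-Marquardt, so L-BFGS is used here as a stand-in optimizer, and all data are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))           # e.g. ln(CO2), ln(H2S) in mmol/mol
bht = 200 + 30 * X[:, 0] - 15 * X[:, 1] + rng.normal(0, 5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, bht, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=5000)
ann.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, ann.predict(X_te)) ** 0.5
print(f"RMSE: {rmse:.1f} °C")
```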

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 346
1593 Automatic Algorithm for Processing and Analysis of Images from the Comet Assay

Authors: Yeimy L. Quintana, Juan G. Zuluaga, Sandra S. Arango

Abstract:

The comet assay is an electrophoresis-based method used to measure DNA damage in cells; it has shown important results in identifying substances that pose a potential risk to the human population, among them innumerable physical, chemical, and biological agents. With this technique it is possible to obtain comet-like images, in which the tail corresponds to damaged fragments of DNA. One of the main problems is that the image has unequal luminosity caused by the fluorescence microscope and requires various processing steps to condition it, as well as to determine how many usable comets there are per sample, and finally to perform the measurements and determine the percentage of DNA damage. In this paper, we propose the design and implementation of software using the Image Processing Toolbox in MATLAB that automates the image processing. The software chooses the optimal comets and measures the parameters necessary to quantify the damage.
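
An illustrative Python counterpart to the MATLAB pipeline, assuming SciPy and scikit-image: flatten uneven illumination, segment comets with Otsu's threshold, and estimate tail DNA percentage by splitting at the intensity-weighted centroid. This is deliberately simplified; real software also needs per-comet validation and head/tail modeling.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def tail_dna_percent(img):
    background = ndimage.gaussian_filter(img.astype(float), sigma=50)
    flat = img - background                         # illumination correction
    mask = flat > threshold_otsu(flat)              # comet segmentation
    ys, xs = np.nonzero(mask)
    intensities = flat[ys, xs]
    head_x = np.average(xs, weights=intensities)    # intensity-weighted centroid
    tail = intensities[xs > head_x].sum()           # tail assumed to the right
    return 100.0 * tail / intensities.sum()
```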

Keywords: artificial vision, comet assay, DNA damage, image processing

Procedia PDF Downloads 307
1592 Automatic Moment-Based Texture Segmentation

Authors: Tudor Barbu

Abstract:

An automatic moment-based texture segmentation approach is proposed in this paper. First, we review related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors. For each image pixel, a texture feature vector is computed as a sequence of area moments. Second, an automatic pixel classification approach is proposed. The feature vectors are clustered using an unsupervised classification algorithm, with the optimal number of clusters determined using a measure based on validation indexes. From the resulting pixel classes, one easily determines the desired texture regions of the image.
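
A condensed sketch of the approach, assuming SciPy and scikit-learn: per-pixel local moments as texture features, k-means clustering, and a validation index (silhouette, here standing in for the paper's unspecified indexes) to pick the number of clusters. Window size and moment orders are illustrative choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def moment_features(img, size=9):
    """Local moments over a size x size window, one feature vector per pixel."""
    img = img.astype(float)
    m1 = uniform_filter(img, size)                  # local mean
    m2 = uniform_filter(img**2, size) - m1**2       # local variance
    m3 = uniform_filter((img - m1)**3, size)        # local third moment
    return np.stack([m1, m2, m3], axis=-1).reshape(-1, 3)

def segment(img, k_range=range(2, 6)):
    feats = moment_features(img)
    sample = feats[:: max(1, len(feats) // 2000)]   # subsample for the index
    best_k = max(k_range, key=lambda k: silhouette_score(
        sample, KMeans(k, n_init=10).fit_predict(sample)))
    labels = KMeans(best_k, n_init=10).fit_predict(feats)
    return labels.reshape(img.shape)
```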

Keywords: image segmentation, moment-based, texture analysis, automatic classification, validation indexes

Procedia PDF Downloads 413
1591 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections

Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette

Abstract:

A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not yield the typical performance characteristics of the intersection, such as the entry average delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on limiting the flow entering from each entrance according to the amount of flow circulating in front of the entrance itself; such models generally also lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method to obtain from the latter the single-entry capacity and ultimately the related performance indicators. Put simply, the main objective is to calculate the average delay of each roundabout entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction is given to the model combination technique with some practical instances. The next section summarizes the old TRRL rotary capacity model and the most recent HCM 7th edition modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, specially set up and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its respective level of service is estimated. In view of further experimental applications, at this research stage a collection of existing rotary intersections operating with the priority-to-circle rule has already started, both in the US and in Italy. The rotaries have been selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, general setting (urban or rural), and its main geometric patterns. Finally, concluding remarks are drawn, and a discussion of some further research developments is opened.
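
A hedged sketch of the combination idea, not the paper's algorithm: an exponential entry-capacity model of the HCM form (the constants below are illustrative, not the HCM 7th edition calibration) evaluated at given circulating flows, followed by an HCM-style control-delay formula to grade each entry.

```python
import math

def entry_capacity(circulating_veh_h, a=1380.0, b=1.02e-3):
    """Illustrative c = A * exp(-B * v_c) entry capacity, veh/h."""
    return a * math.exp(-b * circulating_veh_h)

def control_delay(v, c, T=0.25):
    """HCM-style average control delay (s/veh) for demand v and capacity c."""
    x = v / c
    return (3600.0 / c
            + 900.0 * T * ((x - 1) + math.sqrt((x - 1) ** 2
                                               + (3600.0 / c) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))

for v_c, v_e in [(300, 500), (600, 500), (900, 500)]:
    c = entry_capacity(v_c)
    print(f"v_c={v_c}: capacity={c:.0f} veh/h, delay={control_delay(v_e, c):.1f} s/veh")
```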

Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation

Procedia PDF Downloads 78
1590 MSG Image Encryption Based on AES and RSA Algorithms "MSG Image Security"

Authors: Boukhatem Mohammed Belkaid, Lahdir Mourad

Abstract:

In this paper, we propose a new encryption system to secure meteorological images from Meteosat Second Generation (MSG), which generates 12 images every 15 minutes. The hybrid encryption scheme is based on the AES and RSA algorithms to provide three security services: authentication, integrity, and confidentiality. Confidentiality is ensured by AES, and authenticity is ensured by the RSA algorithm. Integrity is ensured by a basic function of the correlation between adjacent pixels. Our system generates a unique password every 15 minutes that is used to encrypt each frame of the MSG meteorological stream, strengthening and ensuring its safety. Several metrics have been used for the various tests in our analysis. For the integrity test, we observed the effectiveness of our system and how the cryptographic fingerprint changes at reception if a change affects the image in the transmission channel.
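
A minimal hybrid-encryption sketch using the Python `cryptography` package: a fresh AES session key per image, wrapped with RSA-OAEP. This illustrates the AES+RSA pattern only; the paper's full protocol (integrity check, 15-minute password schedule) is not reproduced.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding as rsa_padding
from cryptography.hazmat.primitives import hashes, padding as sym_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

image_bytes = os.urandom(1024)            # stand-in for one MSG image frame

# AES-256-CBC encryption of the frame with a fresh session key
session_key, iv = os.urandom(32), os.urandom(16)
padder = sym_padding.PKCS7(128).padder()
padded = padder.update(image_bytes) + padder.finalize()
enc = Cipher(algorithms.AES(session_key), modes.CBC(iv)).encryptor()
ciphertext = enc.update(padded) + enc.finalize()

# RSA-OAEP wrapping of the session key
wrapped_key = public_key.encrypt(
    session_key,
    rsa_padding.OAEP(mgf=rsa_padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
# transmit (ciphertext, iv, wrapped_key); the receiver unwraps with the private key
```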

Keywords: AES, RSA, integrity, confidentiality, authentication, satellite MSG, encryption, decryption, key, correlation

Procedia PDF Downloads 378
1589 Multiple Relaxation Times in the Gibbs Ensemble Monte Carlo Simulation of Phase Separation

Authors: Bina Kumari, Subir K. Sarkar, Pradipta Bandyopadhyay

Abstract:

The autocorrelation function of the density fluctuation is studied in each of the two phases in a Gibbs Ensemble Monte Carlo (GEMC) simulation of phase separation for a square-well potential with various values of its range. We find that the normalized autocorrelation function is described very well by a linear combination of an exponential function with time scale τ₂ and a stretched exponential function with time scale τ₁ and exponent α. The dependence of (α, τ₁, τ₂) on the parameters of the GEMC algorithm and the range of the square-well potential is investigated and interpreted. We also analyse the issue of how to choose the parameters of the GEMC simulation optimally.
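
A short sketch of the fitting step: a least-squares fit of the normalized autocorrelation to A·exp(−t/τ₂) + (1−A)·exp(−(t/τ₁)^α). The data below are synthetic; in practice C(t) would come from the simulated density time series.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, A, tau2, tau1, alpha):
    return A * np.exp(-t / tau2) + (1 - A) * np.exp(-(t / tau1) ** alpha)

t = np.linspace(0.01, 50, 200)
C = model(t, 0.4, 2.0, 12.0, 0.6) + 0.01 * np.random.default_rng(0).normal(size=t.size)

p0 = (0.5, 1.0, 10.0, 0.7)                    # initial guesses
bounds = ([0, 1e-3, 1e-3, 0.1], [1, 1e3, 1e3, 1.0])
(A, tau2, tau1, alpha), _ = curve_fit(model, t, C, p0=p0, bounds=bounds)
print(f"A={A:.2f}, tau2={tau2:.2f}, tau1={tau1:.2f}, alpha={alpha:.2f}")
```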

Keywords: autocorrelation function, density fluctuation, GEMC, simulation

Procedia PDF Downloads 182
1588 Comparative Study between Classical P-Q Method and Modern Fuzzy Controller Method to Improve the Power Quality of an Electrical Network

Authors: A. Morsli, A. Tlemçani, N. Ould Cherchali, M. S. Boucherit

Abstract:

This article presents two methods for the compensation of harmonics generated by a nonlinear load. The first is the classic p-q method. The second is a controller based on a modern artificial intelligence method, specifically fuzzy logic. Both methods are applied to a shunt Active Power Filter (sAPF) based on a three-phase, five-level NPC voltage converter. For calculating the reference harmonic currents we use the p-q algorithm, and for pulse generation we use intersective PWM. For flexibility and dynamics, we use fuzzy logic. The results clearly show that the total harmonic distortion obtained with fuzzy logic is better than with the p-q method.
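
A compact sketch of the p-q (instantaneous power) computation: Clarke transform, instantaneous real and imaginary powers, and reference compensating currents from the oscillating part of p. Sign conventions follow one common formulation and may differ from the authors'.

```python
import numpy as np

def clarke(a, b, c):
    alpha = np.sqrt(2/3) * (a - 0.5*b - 0.5*c)
    beta = np.sqrt(2/3) * (np.sqrt(3)/2) * (b - c)
    return alpha, beta

def pq_reference(v_abc, i_abc, p_dc):
    """p_dc: low-pass-filtered (average) real power, supplied externally."""
    va, vb = clarke(*v_abc)
    ia, ib = clarke(*i_abc)
    p = va*ia + vb*ib                     # instantaneous real power
    q = vb*ia - va*ib                     # instantaneous imaginary power
    p_osc = p - p_dc                      # harmonic (oscillating) part of p
    den = va**2 + vb**2
    ic_a = (va*p_osc + vb*q) / den        # reference compensating currents
    ic_b = (vb*p_osc - va*q) / den        # (alpha-beta frame)
    return ic_a, ic_b
```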

Keywords: fuzzy logic controller, P-Q method, pulse width modulation (PWM), shunt active power filter (sAPF), total harmonic distortion (THD)

Procedia PDF Downloads 544
1587 Using Geographic Information Systems in the Desertification Risk’s Cartography: Case South of the Aurès Region, Algeria

Authors: Benmessaoud Hassen

Abstract:

The sensitivity-to-desertification map of the south of the Aurès region has been elaborated by crossing four thematic layers capable of having an impact on the desertification process. The approach is inspired by MEDALUS (Mediterranean Desertification and Land Use), which uses qualitative indices to define the environmental zones sensitive to desertification. The cartographic information on vegetation, climate, soil, and socioeconomic state, derived from cartographic data, was transformed into numerical data and then entered, structured, and managed by an algorithm dedicated to a geographic information system. For each type of information, each layer comprises three or four classes, and the geometric mean of the four layers leads to the sensitivity classes (ISD) of the different mapped environments.
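
A small sketch of the MEDALUS-style index computation: the sensitivity index as the geometric mean of the four quality layers, then binned into classes. The class breakpoints below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
# four raster layers scored 1.0 (best) to 2.0 (worst), as in MEDALUS
soil, climate, vegetation, socio = rng.uniform(1.0, 2.0, size=(4, 100, 100))

isd = (soil * climate * vegetation * socio) ** 0.25   # geometric mean

classes = np.digitize(isd, bins=[1.22, 1.37, 1.53])   # e.g. low/fragile/critical/very critical
```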

Keywords: information systems, thematic layers, the sensitivity to the desertification map, concept MEDALUS, South of Aurès

Procedia PDF Downloads 417
1586 Research on a Fuzz Testing Framework Based on Concolic Execution

Authors: Xiong Xie, Yuhang Chen

Abstract:

Vulnerability discovery is a significant current research field. In this paper, a fuzzing framework based on concolic execution is proposed. Fuzz testing and symbolic execution are widely used in vulnerability discovery, but each has its own advantages and disadvantages. During the path generation stage, a generation-based path traversal algorithm is used to obtain more accurate paths. During the constraint solving stage, dynamic concolic execution is used to avoid path explosion. If there is an external call, concolic execution based on function summaries is used. Experiments show that the framework can effectively improve the ability to trigger vulnerabilities and increase code coverage.
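
A toy concolic step, assuming the z3 solver's Python bindings: run the program concretely, record the branch predicate on the path, then negate it and ask z3 for an input that drives execution down the other branch. The target program and bug are invented for illustration.

```python
from z3 import Solver, Int, Not, sat

def program(x):
    # the branch a fuzzer struggles to hit by random mutation alone
    if x * 3 + 7 == 22:
        raise RuntimeError("bug reached")

x = Int("x")
path_constraint = Not(x * 3 + 7 == 22)   # constraint from a concrete run (e.g. x=0)

s = Solver()
s.add(Not(path_constraint))              # negate to flip the branch
if s.check() == sat:
    new_input = s.model()[x].as_long()   # here: 5
    try:
        program(new_input)
    except RuntimeError:
        print("vulnerability triggered with input", new_input)
```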

Keywords: concolic execution, constraint solving, fuzz testing, vulnerability discovery

Procedia PDF Downloads 227
1585 An Efficient Approach for Speeding up Non-Negative Matrix Factorization for High-Dimensional Data

Authors: Bharat Singh, Om Prakash Vyas

Abstract:

Nowadays, applications dealing with high-dimensional data are used tremendously in popular areas, and various approaches have been developed by researchers in the last few decades to tackle such data. One of the problems with NMF approaches is that their randomized initial values may not provide the absolute optimum within a limited number of iterations, yielding only a local optimum. Due to this, we have proposed a new approach that considers the initial values of the decomposition to tackle the issue of computational expense. We have devised an algorithm for initializing the values of the decomposed matrices based on PSO (Particle Swarm Optimization). Through the experimental results, we show that the proposed method converges much faster than other low-rank approximation techniques such as simple multiplicative NMF and ACLS.
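
A minimal NMF with multiplicative updates in which the factor matrices can be seeded externally (e.g., by a PSO search) instead of purely at random, which is the gist of the initialization idea above; the PSO search itself is not shown.

```python
import numpy as np

def nmf(V, rank, W0=None, H0=None, iters=200, eps=1e-9):
    n, m = V.shape
    rng = np.random.default_rng(0)
    W = W0 if W0 is not None else rng.random((n, rank))
    H = H0 if H0 is not None else rng.random((rank, m))
    for _ in range(iters):                 # Lee-Seung multiplicative updates
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    rmse = np.sqrt(np.mean((V - W @ H) ** 2))
    return W, H, rmse

V = np.abs(np.random.default_rng(1).random((100, 80)))
W, H, rmse = nmf(V, rank=10)               # swap in W0/H0 for PSO-chosen seeds
print("RMSE:", rmse)
```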

Keywords: ALS, NMF, high dimensional data, RMSE

Procedia PDF Downloads 339