Search results for: artificial intelligence and genetic algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5680

1660 Partially Knowing of Least Support Orthogonal Matching Pursuit (PKLS-OMP) for Recovering Signal

Authors: Israa Sh. Tawfic, Sema Koc Kayhan

Abstract:

Given a large sparse signal, the goal is to reconstruct it as precisely and accurately as possible from the least number of measurements. Although this seems possible in theory, the difficulty lies in building an algorithm that achieves accurate and efficient reconstruction. This paper proposes a new, proven method for reconstructing sparse signals that takes the Least Support Orthogonal Matching Pursuit (LS-OMP) method and merges it with the theory of Partially Known Support (PKS), giving a new method called Partially Knowing of Least Support Orthogonal Matching Pursuit (PKLS-OMP). The new methods rely on a greedy algorithm to compute the support, whose cost depends on the number of iterations; to make it faster, PKLS-OMP incorporates the idea of partially known support into the algorithm. The method recovers the original signal simply, efficiently, and accurately provided the sampling matrix satisfies the Restricted Isometry Property (RIP). Simulation results also show that it outperforms many algorithms, especially for compressible signals.
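
As a point of reference for the family of greedy recovery methods the abstract builds on, here is a minimal sketch of plain Orthogonal Matching Pursuit in Python, not the authors' PKLS-OMP; the sampling matrix, sparsity level, and test signal are assumed for illustration:

```python
import numpy as np

def omp(A, y, k):
    """Basic Orthogonal Matching Pursuit: recover a k-sparse x from y = A @ x.

    A greedy baseline; PKLS-OMP would additionally seed `support` with
    partially known indices before the loop, reducing the iterations needed.
    """
    m, n = A.shape
    residual = y.astype(float)
    support = []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)            # unit-norm columns
x = np.zeros(100)
x[[3, 27, 64]] = [1.5, -2.0, 0.7]         # 3-sparse ground truth
x_hat = omp(A, A @ x, k=3)
print(np.allclose(x, x_hat, atol=1e-6))
```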

Keywords: compressed sensing, least support orthogonal matching pursuit, partial knowing support, restricted isometry property, signal reconstruction

Procedia PDF Downloads 241
1659 Evaluation of Three Digital Graphical Methods of Baseflow Separation Techniques in the Tekeze Water Basin in Ethiopia

Authors: Alebachew Halefom, Navsal Kumar, Arunava Poddar

Abstract:

The purpose of this work is to specify the parameter values, compute the base flow index (BFI), and rank the methods that should be used for base flow separation. Three different digital graphical approaches were chosen and used in this study for the purpose of comparison. Daily time series discharge data were collected from the site for a period of 30 years (1986 to 2015) and were used to evaluate the algorithms. In order to separate the base flow from the surface runoff, the daily recorded streamflow (m³/s) data were used to calibrate the procedures and obtain parameter values for the basin. Additionally, the performance of the models was assessed using the standard error (SE), the coefficient of determination (R²), the flow duration curve (FDC), and baseflow indexes. The findings indicate that, in general, each technique can be used worldwide to separate base flow; however, the Sliding Interval Method (SIM) performs significantly better than the other two techniques in this basin. The average base flow index was calculated to be 0.72 using the local minimum method, 0.76 using the fixed interval method, and 0.78 using the sliding interval method.
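
The sliding interval idea singled out by the study can be sketched in a few lines; the window width and the synthetic hydrograph below are assumptions for illustration, not values from the Tekeze basin:

```python
import numpy as np

def sliding_interval_baseflow(q, width=11):
    """Sliding Interval Method (SIM) baseflow separation (simplified sketch).

    For each day, baseflow is taken as the minimum discharge within a window
    of `width` days centred on that day; in practice `width` is derived from
    the drainage area (the value 11 here is assumed).
    """
    q = np.asarray(q, dtype=float)
    half = width // 2
    base = np.empty_like(q)
    for i in range(len(q)):
        lo, hi = max(0, i - half), min(len(q), i + half + 1)
        base[i] = q[lo:hi].min()
    return base

# Synthetic daily flow: slow recession plus one storm peak.
days = np.arange(120)
q = 5 + 2 * np.exp(-days / 60) + 20 * np.exp(-0.5 * ((days - 40) / 2.0) ** 2)
base = sliding_interval_baseflow(q, width=11)
bfi = base.sum() / q.sum()    # base flow index, the quantity compared in the study
print(round(bfi, 2))
```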

Keywords: baseflow index, digital graphical methods, streamflow, Emba Madre Watershed

Procedia PDF Downloads 79
1658 Simulation of Flow through Dam Foundation by FEM and ANN Methods Case Study: Shahid Abbaspour Dam

Authors: Mehrdad Shahrbanozadeh, Gholam Abbas Barani, Saeed Shojaee

Abstract:

In this study, a finite element model (Seep3D) and an artificial neural network (ANN) model were developed to simulate flow through a dam foundation. The Seep3D model is capable of simulating three-dimensional flow through heterogeneous and anisotropic, saturated and unsaturated porous media. Flow through the Shahid Abbaspour dam foundation has been used as a case study. The FEM, with 24960 triangular elements and 28707 nodes, was applied to model flow through the foundation of this dam, with the mesh made denser in the neighborhood of the curtain screen. The ANN model developed for the Shahid Abbaspour dam is a four-layer feedforward network employing the sigmoid function as an activator and the back-propagation algorithm for network learning. The water level elevations upstream and downstream of the dam were used as input variables and the piezometric heads as the target outputs in the ANN model. The two models were calibrated and verified using the Shahid Abbaspour dam piezometric data. Results of the models were compared with the piezometer measurements and are in good agreement. The model results also revealed that the ANN model performed as well as, and in some cases better than, the FEM.

Keywords: seepage, dam foundation, finite element method, neural network, seep 3D model

Procedia PDF Downloads 474
1657 Multi-Objective Four-Dimensional Traveling Salesman Problem in an IoT-Based Transport System

Authors: Arindam Roy, Madhushree Das, Apurba Manna, Samir Maity

Abstract:

In this research paper, an algorithmic approach is developed to solve a novel multi-objective four-dimensional traveling salesman problem (MO4DTSP) where different paths with various numbers of conveyances are available to travel between two cities. NSGA-II and decomposition algorithms are modified to solve MO4DTSP in an IoT-based transport system. Such an IoT-based transport system can be widely observed, analyzed, and controlled through an extensive distribution of traffic networks consisting of various types of sensors and actuators. Due to urbanization, most cities are connected using an intelligent traffic management system. Practically, multiple routes and vehicles are available to a traveler between any two cities. Thus, the classical TSP is reformulated as multi-route and multi-vehicle, i.e., 4DTSP. The proposed MO4DTSP is designed with traveling cost, time, and customer satisfaction as objectives. In reality, customer satisfaction is an important parameter that depends on travel cost and time, and this dependence is reflected in the present model.
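
The multi-objective core of NSGA-II-style methods is Pareto dominance and non-dominated sorting; a minimal sketch follows, with made-up (cost, time, satisfaction) triples for candidate route/vehicle choices, satisfaction negated so that all objectives are minimised:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """First front of NSGA-II-style non-dominated sorting (illustrative sketch)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (cost, time, -satisfaction) triples for route/vehicle choices.
routes = [(10, 5, -0.9), (12, 4, -0.8), (11, 6, -0.7), (15, 7, -0.5)]
front = non_dominated_front(routes)
print(front)
```

NSGA-II repeats this sorting to rank the whole population, then uses crowding distance to keep the front well spread.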

Keywords: multi-objective four-dimensional traveling salesman problem (MO4DTSP), decomposition, NSGA-II, IoT-based transport system, customer satisfaction

Procedia PDF Downloads 110
1656 Rising of Single and Double Bubbles during Boiling and Effect of Electric Field in This Process

Authors: Masoud Gholam Ale Mohammad, Mojtaba Hafezi Birgani

Abstract:

An experimental study of saturated pool boiling on a single artificial nucleation site, without and with the application of an electric field on the boiling surface, has been conducted. N-pentane boiling on a copper surface is recorded with a high-speed camera providing high-quality pictures and movies. The accuracy of the visualization allowed establishing an experimental bubble growth law from a large number of experiments. This law shows that the evaporation rate decreases during bubble growth, and underlines the importance of the liquid motion induced by the preceding bubble. Bubble rise is therefore studied: once detached, bubbles accelerate vertically until reaching a maximum velocity in good agreement with a correlation from the literature. The bubbles then turn to another direction. The effect of applying an electric field on the boiling surface is finally studied. In addition to changes in the bubble shape, changes are also shown in the liquid plume and the convective structures above the surface. Lower maximum rising velocities were measured in the presence of electric fields, especially with a negative polarity.

Keywords: single and double bubbles, electric field, boiling, rising

Procedia PDF Downloads 226
1655 Design and Field Programmable Gate Array Implementation of Radio Frequency Identification for Boosting up Tag Data Processing

Authors: G. Rajeshwari, V. D. M. Jabez Daniel

Abstract:

Radio Frequency Identification (RFID) systems are used for automated identification in various applications such as automobiles, health care, and security; RFID is also called automated data collection technology. RFID readers are placed in an area to scan large numbers of tags over a wide distance. The placement of the RFID elements may result in several types of collisions, and a major challenge in RFID systems is collision avoidance. In previous works, collisions were avoided by using algorithms such as ALOHA and the tree algorithm. This work proposes collision reduction and increased throughput through a reading enhancement method with the tree algorithm. The reading enhancement is achieved by improving the interrogation procedure and increasing the data handling capacity of the RFID reader with parallel processing. The work is simulated using Xilinx ISE 14.5 and the Verilog language. By implementing this in the RFID system, we can achieve high throughput and avoid collisions in the reader at the same instant of time, increasing the overall system efficiency.
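
The tree algorithm this work builds on can be sketched as a recursive prefix walk; the tag IDs and bit width here are assumed for illustration:

```python
def query_tree(tags, prefix=""):
    """Binary tree anti-collision walk (illustrative sketch).

    The reader broadcasts a prefix; if more than one tag matches (a collision),
    it recurses on prefix+'0' and prefix+'1' until each tag answers alone.
    """
    matching = [t for t in tags if t.startswith(prefix)]
    if not matching:
        return []                 # idle slot: no tag responds
    if len(matching) == 1:
        return matching           # singleton slot: tag identified
    # Collision: split the ID space and query both halves.
    return query_tree(tags, prefix + "0") + query_tree(tags, prefix + "1")

tags = ["0010", "0111", "1100", "1101"]   # hypothetical 4-bit tag IDs
print(query_tree(tags))
```

The paper's parallel-processing enhancement amounts to evaluating the two recursive branches concurrently in hardware rather than sequentially.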

Keywords: antenna, anti-collision protocols, data management system, reader, reading enhancement, tag

Procedia PDF Downloads 306
1654 Open Source, Open Hardware Ground Truth for Visual Odometry and Simultaneous Localization and Mapping Applications

Authors: Janusz Bedkowski, Grzegorz Kisala, Michal Wlasiuk, Piotr Pokorski

Abstract:

Ground-truth data is essential for quantitative evaluation of VO (Visual Odometry) and SLAM (Simultaneous Localization and Mapping) using, e.g., ATE (Absolute Trajectory Error) and RPE (Relative Pose Error). Many open-access data sets provide raw and ground-truth data for benchmark purposes. The issue appears when one would like to validate Visual Odometry and/or SLAM approaches on data captured using the device for which the algorithm is targeted (for example, a mobile phone) and disseminate the data to other researchers. For this reason, we propose an open-source, open-hardware ground-truth system that provides an accurate and precise trajectory with a 3D point cloud. It is based on the LiDAR Livox Mid-360 with a non-repetitive scanning pattern, an on-board Raspberry Pi 4B computer, a battery, and software for off-line calculations (camera-to-LiDAR calibration, LiDAR odometry, SLAM, georeferencing). We show how this system can be used for the evaluation of various state-of-the-art algorithms (Stella SLAM, ORB SLAM3, DSO) in typical indoor monocular VO/SLAM.

Keywords: SLAM, ground truth, navigation, LiDAR, visual odometry, mapping

Procedia PDF Downloads 69
1653 Parallelizing the Hybrid Pseudo-Spectral Time Domain/Finite Difference Time Domain Algorithms for the Large-Scale Electromagnetic Simulations Using Message Passing Interface Library

Authors: Donggun Lee, Q-Han Park

Abstract:

Due to its coarse grid, the Pseudo-Spectral Time Domain (PSTD) method has advantages over the Finite Difference Time Domain (FDTD) method in terms of memory requirement and operation time. However, since its parallelization efficiency is much lower than that of FDTD, PSTD is not a useful method for large-scale electromagnetic simulation on a parallel platform. In this paper, we propose a parallelization technique for the hybrid PSTD-FDTD (HPF) method, which simultaneously possesses the efficient parallelizability of FDTD and the speed and low memory requirement of PSTD. The parallelization cost of the HPF method is exactly the same as that of parallel FDTD, but it still occupies much less memory and runs faster than parallel FDTD. Experiments on distributed memory systems have shown that the parallel HPF method saves up to 96% of the operation time and reduces the memory requirement by 84%. Also, by combining the OpenMP library with the MPI library, we further reduced the operation time of the parallel HPF method by 50%.

Keywords: FDTD, hybrid, MPI, OpenMP, PSTD, parallelization

Procedia PDF Downloads 148
1652 An Analysis of Uncoupled Designs in Chicken Egg

Authors: Pratap Sriram Sundar, Chandan Chowdhury, Sagar Kamarthi

Abstract:

Nature has perfected her designs over 3.5 billion years of evolution. Research fields such as biomimicry, biomimetics, bionics, bio-inspired computing, and nature-inspired design have explored nature-made artifacts and systems to understand nature's mechanisms and intelligence. Learning from nature, researchers have generated sustainable designs and innovation in a variety of fields such as energy, architecture, agriculture, transportation, communication, and medicine. Axiomatic design offers a method to judge whether a design is good. This paper analyzes design aspects of one of nature's amazing objects: the chicken egg. The functional requirements (FRs) of the components of the object are tabulated and mapped onto nature-chosen design parameters (DPs). The 'independence axiom' of the axiomatic design methodology is applied to analyze couplings and to evaluate whether the egg's design is good (i.e., uncoupled) or bad (i.e., coupled). The analysis revealed that the egg's design is a good, i.e., uncoupled, design. This approach can be applied to any of nature's artifacts to judge whether their designs are good or bad, which makes it valuable for biomimicry studies. It can also be a useful teaching aid for biology and bio-inspired innovation.
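
The independence-axiom test described above reduces to inspecting the FR-DP design matrix; a minimal sketch follows, where the example matrices are hypothetical, not the paper's actual egg FR/DP mapping:

```python
import numpy as np

def classify_design(dm):
    """Classify a design matrix per the independence axiom (sketch).

    dm[i][j] nonzero means FR_i is affected by DP_j. A diagonal matrix is
    uncoupled, a triangular one (for some fixed FR/DP ordering) is decoupled,
    anything else is coupled.
    """
    dm = np.asarray(dm)
    off_diagonal = dm - np.diag(np.diag(dm))
    if not off_diagonal.any():
        return "uncoupled"
    if not np.triu(dm, 1).any() or not np.tril(dm, -1).any():
        return "decoupled"
    return "coupled"

# Hypothetical FR/DP mapping for three egg components (values assumed).
print(classify_design([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))
```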

Keywords: uncoupled design, axiomatic design, nature design, design evaluation

Procedia PDF Downloads 173
1651 Bioflocculation Using the Purified Wild Strain of P. aeruginosa Culture in Wastewater Treatment

Authors: Mohammad Hajjartabar, Tahereh Kermani Ranjbar

Abstract:

P. aeruginosa EF2 was isolated and identified from human infection sources in our previous study. The present study was performed to determine the characteristics and the role of the bioflocculant produced by this bacterium in the flocculation of wastewater activated sludge. The bacterium was inoculated and grown in an orbital shaker at 250 rpm for 5 days at 35 °C in TSB and peptone water media. After the incubation period, culture broths of the bacterial strain were collected and washed, and the concentration of the bacteria was adjusted. For the extraction of the bacterial bioflocculant, the culture was centrifuged at 6000 rpm for 20 min at 4 °C to remove bacterial cells. The supernatant was decanted, and the pellet containing the bioflocculant was dried at 105 °C to a constant weight according to APHA, 2005. The chemical composition of the extracted bioflocculant was then analyzed. A wastewater activated sludge sample obtained from the aeration tank of one of the wastewater treatment plants in Tehran was first mixed thoroughly. After addition of the bioflocculant, improvements in floc density were observed with increasing bioflocculant dose. The results of this study strongly suggest that the extracted bioflocculant played a significant role in the flocculation of the wastewater sample. The use of wild bacteria and nutrient regulation techniques instead of genetic manipulation opens a wide area of investigation for improving wastewater treatment processes, and may point the way toward more effective bioflocculants obtained from purified microbial cultures.

Keywords: wastewater treatment, P. aeruginosa, sludge treatment

Procedia PDF Downloads 156
1650 Curative Effect of Blumea lacera Leaves on Experimental Haemorrhoids in Rats

Authors: Priyanka Sharma, Tarkewshwar Dubey, Hemalatha Siva

Abstract:

Hemorrhoids are one of the most common anorectal diseases around the world. Several factors are involved in causing hemorrhoids, including irregular bowel function (constipation, diarrhea), exercise, gravity, a low-fiber diet, pregnancy, obesity, high abdominal pressure, prolonged sitting, genetic factors, and aging. Pain, bleeding, itching, swelling, and anal discharge are the symptoms of the disease. Due to the limited modern pharmacotherapeutic options available for treatment, herbal medicines remain the therapy of choice. Blumea lacera (Burm. f.) DC., belonging to the Asteraceae family, is a common plain-land weed of Bangladesh. Traditionally it has been used for the treatment of hemorrhoids. Considering the above, the present study aimed to validate the ethnomedicinal use of B. lacera leaves on experimental hemorrhoids in rats. The anti-hemorrhoid activity was evaluated using croton oil-induced rat models. The parameters studied were TNF-α and IL-6 levels, Evans blue exudation, macroscopic severity score, recto-anal coefficient, and histomorphological scores. In vivo antioxidant parameters and histopathological studies were also assessed. All parameters exhibited significant anti-hemorrhoid activity. Moreover, the ethanolic extract of B. lacera (EBL) leaves at 400 mg/kg showed an ameliorative effect on croton oil-induced hemorrhoids. In conclusion, EBL exhibited a beneficial effect on croton oil-induced hemorrhoids, which validates its ethnomedicinal use in the treatment of piles.

Keywords: haemorrhoids, IL-6, piles, TNF-α

Procedia PDF Downloads 294
1649 Rheological Characteristics of Ice Slurries Based on Propylene- and Ethylene-Glycol at High Ice Fractions

Authors: Senda Trabelsi, Sébastien Poncet, Michel Poirier

Abstract:

Ice slurries are considered promising phase-changing secondary fluids for air-conditioning, packaging, and cooling industrial processes. An experimental study has been carried out to measure the rheological characteristics of ice slurries. Ice slurries consist of a solid phase (flake ice crystals) and a liquid phase. The latter is composed of a mixture of liquid water and an additive, here either (1) Propylene-Glycol (PG) or (2) Ethylene-Glycol (EG), used to lower the freezing point of water. Concentrations of 5%, 14%, and 24% of both additives are investigated with ice mass fractions ranging from 5% to 85%. The rheological measurements are carried out using a Discovery HR-2 vane-concentric cylinder with four full-length blades. The experimental results show that the behavior of ice slurries is generally non-Newtonian, with shear-thinning or shear-thickening behavior depending on the experimental conditions. In order to determine the consistency and the flow index, the Herschel-Bulkley model is used to describe the behavior of ice slurries. The present results are finally validated against an experimental database found in the literature and the predictions of an artificial neural network model.
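
A Herschel-Bulkley fit of the kind used in the study can be sketched as follows; the yield stress, consistency, and flow index below are assumed illustration values, not the measured ones:

```python
import numpy as np

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Herschel-Bulkley model: tau = tau0 + K * gamma_dot**n.

    tau0 is the yield stress, K the consistency, n the flow index
    (n < 1: shear-thinning, n > 1: shear-thickening).
    """
    return tau0 + K * np.power(gamma_dot, n)

# Synthetic shear-thinning data (parameters assumed for illustration).
gamma = np.linspace(0.1, 100, 200)
tau = herschel_bulkley(gamma, tau0=2.0, K=1.5, n=0.6)

# Recover K and n by linear regression on log(tau - tau0), a common shortcut
# when the yield stress tau0 is estimated separately.
y = np.log(tau - 2.0)
X = np.vstack([np.ones_like(gamma), np.log(gamma)]).T
b, *_ = np.linalg.lstsq(X, y, rcond=None)
K_fit, n_fit = np.exp(b[0]), b[1]
print(round(K_fit, 3), round(n_fit, 3))
```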

Keywords: ice slurry, propylene-glycol, ethylene-glycol, rheology

Procedia PDF Downloads 262
1648 Assessment of DNA Degradation Using Comet Assay: A Versatile Technique for Forensic Application

Authors: Ritesh K. Shukla

Abstract:

Degradation of biological samples at the level of macromolecules (DNA, RNA, and protein) is a major challenge in forensic investigation, as it can mislead result interpretation. Currently, there are no precise methods available to circumvent this problem; therefore, at the preliminary level, methods are urgently needed to address it. To this end, the Comet assay is one of the most versatile, rapid, and sensitive molecular biology techniques for assessing DNA degradation. This technique helps to assess DNA degradation even with very small amounts of sample. Moreover, conveniently, this method does not require any additional DNA extraction and isolation steps during the degradation assessment. Samples are directly embedded on an agarose pre-coated microscopic slide, and electrophoresis is performed on the same slide after the lysis step. After electrophoresis, the microscopic slide is stained with a DNA-binding dye and observed under a fluorescent microscope equipped with Komet software. With the help of this technique, the extent of DNA degradation can be assessed, which can help to screen samples before DNA fingerprinting to determine whether they are appropriate for DNA analysis. This technique not only helps to assess degradation of DNA; many other challenges in forensic investigation, such as estimating the time since deposition of biological fluids, repairing genetic material from degraded biological samples, and early estimation of time since death, could also be addressed. With this study, an attempt was made to explore the application of a well-known molecular biology technique, the Comet assay, in the field of forensic science. This assay will open avenues in forensic research and development.

Keywords: comet assay, DNA degradation, forensic, molecular biology

Procedia PDF Downloads 155
1647 2D Hexagonal Cellular Automata: The Complexity of Forms

Authors: Vural Erdogan

Abstract:

We created two-dimensional hexagonal cellular automata to obtain complexity using simple rules, similar to Conway's Game of Life. Considering the Game of Life rules, Wolfram's work on life-like structures, and John von Neumann's self-replication, self-maintenance, and self-reproduction problems, we developed 2-state and 3-state hexagonal growing algorithms that reach large populations from random initial states. Unlike the Game of Life, we used six-neighbour cellular automata instead of eight- or four-neighbour ones. First simulations examined whether we are able to obtain oscillators, blinkers, and gliders. Inspired by the complexity of Wolfram's 1D cellular automata and life-like structures, we simulated 2D synchronous, discrete, deterministic cellular automata to reach life-like forms with 2-state cells. We explain how the life-like formations and the oscillators contribute to initiating self-maintenance together with self-reproduction and self-replication. After comparing simulation results, we developed the algorithm a step further: appending a new state to the same algorithm that we used for reaching life-like structures led us to experiment with new branching and fractal forms. All these studies attempt to demonstrate that complex life forms might come from uncomplicated rules.
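
A single synchronous update of a 2-state hexagonal CA can be sketched with axial coordinates; the birth/survival thresholds below are assumptions for illustration, not the authors' exact rules:

```python
from collections import Counter

# Six axial-coordinate neighbour offsets on a hex grid (instead of the
# eight Moore neighbours of the Game of Life).
HEX_DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def step(alive, birth={2}, survive={3, 4}):
    """One synchronous step; `alive` is a set of (q, r) axial cells."""
    # Count, for each cell, how many alive cells list it as a neighbour.
    counts = Counter((q + dq, r + dr) for (q, r) in alive for dq, dr in HEX_DIRS)
    return {c for c, n in counts.items()
            if (c in alive and n in survive) or (c not in alive and n in birth)}

gen = {(0, 0), (1, 0), (0, 1)}   # a three-cell seed
for _ in range(3):
    gen = step(gen)
print(sorted(gen))
```

Under these particular thresholds, this three-cell seed turns out to be a period-2 oscillator, the kind of structure the first simulations looked for.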

Keywords: hexagonal cellular automata, self-replication, self-reproduction, self-maintenance

Procedia PDF Downloads 152
1646 A Polynomial Time Clustering Algorithm for Solving the Assignment Problem in the Vehicle Routing Problem

Authors: Lydia Wahid, Mona F. Ahmed, Nevin Darwish

Abstract:

The vehicle routing problem (VRP) consists of a group of customers that needs to be served. Each customer has a certain demand of goods. A central depot having a fleet of vehicles is responsible for supplying the customers with their demands. The problem is composed of two subproblems: The first subproblem is an assignment problem where the number of vehicles that will be used as well as the customers assigned to each vehicle are determined. The second subproblem is the routing problem in which for each vehicle having a number of customers assigned to it, the order of visits of the customers is determined. Optimal number of vehicles, as well as optimal total distance, should be achieved. In this paper, an approach for solving the first subproblem (the assignment problem) is presented. In the approach, a clustering algorithm is proposed for finding the optimal number of vehicles by grouping the customers into clusters where each cluster is visited by one vehicle. Finding the optimal number of clusters is NP-hard. This work presents a polynomial time clustering algorithm for finding the optimal number of clusters and solving the assignment problem.
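
A capacity-constrained agglomerative merge, in the spirit of the clustering step described above, can be sketched as follows; the coordinates, demands, and vehicle capacity are made up, and the paper's exact merge criterion may differ:

```python
import math

def cluster_customers(customers, demands, capacity):
    """Greedy capacity-constrained agglomerative clustering (sketch).

    Repeatedly merges the two closest clusters (by centroid distance) whose
    combined demand still fits one vehicle; the final cluster count is the
    number of vehicles used.
    """
    clusters = [[i] for i in range(len(customers))]

    def centroid(c):
        return (sum(customers[i][0] for i in c) / len(c),
                sum(customers[i][1] for i in c) / len(c))

    while True:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                if sum(demands[i] for i in clusters[a] + clusters[b]) > capacity:
                    continue   # merged cluster would overload one vehicle
                d = math.dist(centroid(clusters[a]), centroid(clusters[b]))
                if best is None or d < best[0]:
                    best = (d, a, b)
        if best is None:       # no feasible merge left
            return clusters
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]

customers = [(0, 0), (0, 1), (5, 5), (5, 6), (9, 9)]   # hypothetical positions
demands   = [2, 2, 3, 3, 4]
print(cluster_customers(customers, demands, capacity=6))
```

Each resulting cluster is then handed to the second subproblem, where the visiting order within the cluster is optimised.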

Keywords: vehicle routing problems, clustering algorithms, Clarke and Wright Saving Method, agglomerative hierarchical clustering

Procedia PDF Downloads 393
1645 Framework for Detecting External Plagiarism from Monolingual Documents: Use of Shallow NLP and N-Gram Frequency Comparison

Authors: Saugata Bose, Ritambhra Korpal

Abstract:

The internet has increased copy-paste scenarios among students as well as researchers, leading to documents plagiarized to different degrees. For this reason, much research is focused on detecting plagiarism automatically. In this paper, an initiative is discussed where Natural Language Processing (NLP) techniques and supervised machine learning algorithms are combined to detect plagiarized texts. The major emphasis is on constructing a framework which successfully detects external plagiarism in monolingual texts. To detect the plagiarism, an n-gram frequency comparison approach has been implemented in the model framework. The framework is based on 120 characteristics which were extracted while pre-processing the documents using an NLP approach. Afterwards, filter metrics were applied to select the most relevant characteristics, and then a supervised classification learning algorithm was used to classify the documents into four levels of plagiarism. A confusion matrix was built to estimate the false positives and false negatives. Our plagiarism framework achieved a very high accuracy score.
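
The n-gram frequency comparison at the heart of the framework can be sketched as a containment score; the trigram size and sample sentences are illustrative, and the full framework layers 120 NLP-derived features and a supervised classifier on top of scores like this:

```python
from collections import Counter

def ngram_profile(text, n=3):
    """Word n-gram frequency profile of a document."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def overlap_score(suspicious, source, n=3):
    """Fraction of suspicious-document n-grams also present in the source
    (a containment measure; higher means more likely plagiarised)."""
    ps, pq = ngram_profile(suspicious, n), ngram_profile(source, n)
    if not ps:
        return 0.0
    shared = sum((ps & pq).values())   # multiset intersection of n-gram counts
    return shared / sum(ps.values())

src = "the quick brown fox jumps over the lazy dog"
sus = "the quick brown fox sleeps all day long here"
print(round(overlap_score(sus, src), 2))
```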

Keywords: lexical matching, shallow NLP, supervised machine learning algorithm, word n-gram

Procedia PDF Downloads 357
1644 Predicting Dose Level and Length of Time for Radiation Exposure Using Gene Expression

Authors: Chao Sima, Shanaz Ghandhi, Sally A. Amundson, Michael L. Bittner, David J. Brenner

Abstract:

In a large-scale radiologic emergency, the potentially affected population needs to be triaged efficiently using various biomarkers, since personal dosimeters are not likely to be worn by individuals. It has long been established that radiation injury can be estimated effectively using panels of genetic biomarkers. Furthermore, the rate of radiation, in addition to the dose, plays a major role in determining biological responses. Therefore, a better and more accurate triage involves estimating both the dose level of the exposure and its duration. To that end, a large in vivo study was carried out on mice with the internal emitter caesium-137 (¹³⁷Cs). Four different injection doses of ¹³⁷Cs were used: 157.5 μCi, 191 μCi, 214.5 μCi, and 259 μCi. Cohorts of 6-7 mice from the control arm and each of the dose levels were sacrificed, and blood was collected 2, 3, 5, 7, and 14 days after injection for microarray RNA gene expression analysis. Using a generalized linear model with penalized maximum likelihood, a panel of 244 genes was established, and both the injected dose and the number of days after injection were accurately predicted for all 155 subjects using this panel. This proves that microarray gene expression can be used effectively in radiation biodosimetry to predict both the dose level and the length of exposure, which provides a more holistic view of radiation exposure and helps improve radiation damage assessment and treatment.

Keywords: caesium-137, gene expression microarray, multivariate responses prediction, radiation biodosimetry

Procedia PDF Downloads 198
1643 A C/T Polymorphism at the 5’ Untranslated Region of CD40 Gene in Patients Associated with Graves’ Disease in Kumaon Region

Authors: Sanjeev Kumar Shukla, Govind Singh, Prabhat Pant, Shahzad Ahmad

Abstract:

Background: Graves’ disease is an autoimmune disorder with a genetic predisposition, and CD40 plays a pathogenic role in various autoimmune diseases. A single nucleotide polymorphism (SNP) at position –1 of the Kozak sequence in the 5’ untranslated region of exon 1 of the CD40 gene has been reported to be associated with the development of Graves’ disease. Objective: The aim of the present study was to investigate whether CD40 gene polymorphism confers susceptibility to Graves’ disease in the Kumaon region. Material and Method: CD40 gene polymorphisms were studied in fifty Graves’ disease patients and fifty healthy control subjects without anti-thyroid autoantibodies or a family history of autoimmune disorders. All samples were collected from STG Hospital, Haldwani, Nainital. The C/T polymorphism at position –1 of the CD40 gene was genotyped using polymerase chain reaction-restriction fragment length polymorphism. Results: There was no significant difference in allele or genotype frequency of the CD40 SNP between Graves’ disease patients and control subjects. There was a significant decrease in TT genotype frequency in patients who developed Graves’ disease after 40 years of age compared with those under 40. These data suggest that the SNP of the CD40 gene is associated with susceptibility to later-onset Graves’ disease. Conclusion: The CD40 gene may act as a distinct susceptibility gene for Graves’ disease within certain families, because it was both linked to and associated with the disease.

Keywords: autoimmune diseases, pathogenesis, diagnosis, therapy

Procedia PDF Downloads 51
1642 A Construction Scheduling Model by Applying Pedestrian and Vehicle Simulation

Authors: Akhmad F. K. Khitam, Yi Tai, Hsin-Yun Lee

Abstract:

In modern construction management research, the goals of scheduling are not only to finish the project within the allotted duration but also to reduce the impact on people and the environment. In particular, for the impact on pedestrians and vehicles, the considerable social cost should be estimated as part of the total performance of a construction project. However, the site environment differs greatly between projects, and these interactions affect the requirements and goals of scheduling; it is difficult for schedule planners to quantify them. Therefore, this study uses 3D dynamic simulation technology to plan the schedules of construction engineering projects that affect current space users (i.e., pedestrians and vehicles). The proposed model can help the project manager find the optimal schedule that minimizes the inconvenience to space users. A roadwork project and a building renovation project were analyzed to capture the practical situation of engineering and operations. This study then integrates suitable optimization algorithms and computer technology to establish a decision support model that can generate a near-optimal schedule solution for project planners.

Keywords: scheduling, simulation, optimization, pedestrian and vehicle behavior

Procedia PDF Downloads 141
1641 Polymorphisms of STAT5A and DGAT1 Genes and Their Associations with Milk Trait in Egyptian Goats

Authors: Othman Elmahdy Othman

Abstract:

The objectives of this study were to identify polymorphisms in the STAT5A gene using Restriction Fragment Length Polymorphism and in the DGAT1 gene using Single-Strand Conformation Polymorphism among three Egyptian goat breeds (Barki, Zaraibi, and Damascus), as well as to investigate the effect of their genotypes on milk composition traits of Zaraibi goats. One hundred and fifty blood samples were collected for DNA extraction: 60 from Zaraibi, 40 from Damascus, and 50 from Barki breeds. Fat, protein, and lactose percentages were determined in Zaraibi goat milk using an automatic milk analyzer. Two genotypes, CC and CT (for STAT5A) and C-C- and C-C+ (for DGAT1), were identified in the three Egyptian goat breeds with different frequencies. The associations between these genotypes and milk fat, protein, and lactose were determined in the Zaraibi breed. The results showed that the STAT5A genotypes had significant effects on milk yield, protein, fat, and lactose, with the superiority of the CT genotype over CC. Regarding the DGAT1 polymorphism, the results showed an association only with milk fat, where animals with the C-C+ genotype had greater milk fat than animals with the C-C- genotype. The association of combined genotypes with milk traits indicated that does heterozygous for both genes are preferable to does with homozygous genotypes, as animals with the CTC-C+ genotype have higher milk yield, fat, and protein than those with the CCC-C- genotype. In conclusion, the results showed that the C/T and C-/C+ SNPs of the STAT5A and DGAT1 genes, respectively, may be useful markers for marker-assisted selection programs to improve goat milk composition.

Keywords: DGAT1, genetic polymorphism, milk trait, STAT5A

Procedia PDF Downloads 163
1640 An Accurate Computation of 2D Zernike Moments via Fast Fourier Transform

Authors: Mohammed S. Al-Rawi, J. Bastos, J. Rodriguez

Abstract:

Object detection and object recognition are essential components of every computer vision system. Despite their high computational complexity and other problems related to numerical stability and accuracy, Zernike moments of 2D images (ZMs) have shown resilience when used in object recognition and have been used in various image analysis applications. In this work, we propose a novel method for computing ZMs via the Fast Fourier Transform (FFT). Notably, this is the first algorithm that can accurately generate ZMs up to extremely high orders, e.g., orders of 1000 or even higher. The proposed method is also simpler and faster than other methods due to the availability of FFT software and hardware. The accuracy and numerical stability of ZMs computed via FFT have been confirmed using the orthogonality property. We also introduce normalization of ZMs with the Neumann factor when the image is embedded in a larger grid, and color image reconstruction based on RGB normalization of the reconstructed images. Strikingly, higher-order image reconstruction experiments show that the proposed methods are superior, both quantitatively and subjectively, to the q-recursive method.
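The paper's FFT-based computation is not reproduced here, but the Zernike radial polynomial it accelerates can be sketched from the standard factorial formula; the direct formula is exactly what becomes numerically unstable at the high orders that motivate the FFT approach.

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) of the Zernike basis.

    Valid for |m| <= n with n - |m| even; returns 0.0 otherwise.
    The alternating factorial sum is numerically unstable for large n,
    which is why high-order ZM computation needs another route (e.g., FFT).
    """
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s)
           * factorial((n + m) // 2 - s)
           * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )
```

For example, R_2^0(rho) = 2*rho**2 - 1, and every valid R_n^m equals 1 at rho = 1, which gives a quick sanity check.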

Keywords: Chebyshev polynomial, Fourier transform, fast algorithms, image recognition, pseudo Zernike moments, Zernike moments

Procedia PDF Downloads 265
1639 A Two-Stage Airport Ground Movement Speed Profile Design Methodology Using Particle Swarm Optimization

Authors: Zhang Tianci, Ding Meng, Zuo Hongfu, Zeng Lina, Sun Zejun

Abstract:

Automation of airport operations can greatly improve ground movement efficiency. In this paper, we study the speed profile design problem for advanced airport ground movement control and guidance. The problem is constrained by the surface four-dimensional trajectory generated in taxi planning. A decomposed, two-stage approach is presented to solve this problem efficiently. In the first stage, speeds are allocated at control points in a way that ensures smooth speed profiles can be found later. In the second stage, detailed speed profiles for each taxi interval are generated according to the allocated control-point speeds, with the objective of minimizing overall fuel consumption. We present a swarm intelligence based algorithm for the first-stage problem and a discrete-variable-driven enumeration method for the second-stage problem, since the latter has only a small set of discrete variables. Experimental results demonstrate that the presented methodology performs well on real-world speed profile design problems.
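A minimal particle swarm optimizer of the kind used for the first-stage control-point speed allocation might look as follows. The objective function, bounds, and parameters are illustrative stand-ins, not the paper's actual fuel-consumption model.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer: minimizes f over a box [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the control-point speed allocation objective:
# minimize squared deviation from a hypothetical target taxi speed of 8 m/s.
best, best_val = pso(lambda v: sum((x - 8.0) ** 2 for x in v),
                     dim=3, bounds=(0.0, 15.0))
```

The real first-stage objective would score smoothness and feasibility of the downstream profiles rather than a fixed target speed.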

Keywords: airport ground movement, fuel consumption, particle swarm optimization, smoothness, speed profile design

Procedia PDF Downloads 582
1638 Computing Continuous Skyline Queries without Discriminating between Static and Dynamic Attributes

Authors: Ibrahim Gomaa, Hoda M. O. Mokhtar

Abstract:

Although most existing skyline query algorithms have focused primarily on querying static points in static databases, the growing number of sensors, wireless communications, and mobile applications has increased the demand for continuous skyline queries. Unlike traditional skyline queries, which consider only static attributes, continuous skyline queries include dynamic attributes as well as static ones. However, since skyline computation is based on checking the domination of skyline points over all dimensions, both the static and dynamic attributes must be considered without separation. In this paper, we present an efficient algorithm for computing continuous skyline queries without discriminating between static and dynamic attributes. In brief, our algorithm proceeds as follows. First, it excludes the points that cannot be in the initial skyline result; this pruning phase reduces the required number of comparisons. Second, the association between the spatial positions of data points is examined; this phase indicates where changes in the result might occur and consequently enables us to efficiently update the skyline result (continuous update) rather than computing it from scratch. Finally, an experimental evaluation is provided that demonstrates the accuracy, performance, and efficiency of our algorithm over existing approaches.
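For reference, the dominance test and a naive (non-continuous) skyline computation can be sketched as follows, treating static and dynamic attributes uniformly as dimensions. This baseline omits the paper's pruning and continuous-update phases; it is the from-scratch computation the authors avoid.

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (all dimensions minimized)."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline(points):
    """Naive O(n^2) skyline: keep the points not dominated by any other.
    Each tuple mixes static and dynamic attributes without distinction."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# e.g. (price, current distance to the moving query point)
pts = [(1, 9), (3, 3), (5, 1), (4, 4), (6, 2)]
result = skyline(pts)   # (4, 4) and (6, 2) are dominated
```

As the dynamic attribute (here, distance) changes, rerunning this from scratch is wasteful, which is the motivation for the continuous-update scheme.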

Keywords: continuous query processing, dynamic database, moving object, skyline queries

Procedia PDF Downloads 210
1637 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we explored the ability of mean signals, extracted from ICA components corresponding to 15 well-known networks, to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsfMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR images. All rsfMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance or the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out of the time series. We applied independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with 21 components. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted in R on this dataset of 37 rows (subjects) and 15 features (mean signal in each network).
The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable in both cases was the sensorimotor network I. Indeed, with only this network, the RF and SVM classifiers both reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the network best discriminating between controls and early MS was the sensorimotor I. Similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
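The recursive feature elimination loop used for the SVM branch can be sketched generically as follows. The scorer here is a leave-one-out 1-nearest-neighbour accuracy standing in for the RBF-SVM (the study used R), and the toy data are purely illustrative.

```python
def loo_1nn_accuracy(X, y, feats):
    """Leave-one-out 1-NN accuracy using only the feature indices in feats.
    A cheap stand-in scorer for the RBF-SVM used in the study."""
    correct = 0
    for i in range(len(X)):
        best_j, best_d = None, float("inf")
        for j in range(len(X)):
            if j == i:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if d < best_d:
                best_j, best_d = j, d
        correct += y[best_j] == y[i]
    return correct / len(X)

def rfe(X, y, n_keep):
    """Recursive feature elimination: repeatedly drop the feature whose
    removal hurts the cross-validated accuracy the least."""
    feats = list(range(len(X[0])))
    while len(feats) > n_keep:
        worst = max(
            feats,
            key=lambda f: loo_1nn_accuracy(X, y, [g for g in feats if g != f]),
        )
        feats.remove(worst)
    return feats

# Toy data: feature 0 separates the classes, feature 1 is noise
X = [[0.1, 5.0], [0.2, 1.0], [0.9, 4.0], [1.0, 2.0]]
y = [0, 0, 1, 1]
selected = rfe(X, y, n_keep=1)   # keeps the informative feature
```

In the study, this kind of ranking singled out the sensorimotor network I as the most predictive feature for both classifiers.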

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 240
1636 Time-Course Lipid Accumulation and Transcript Analyses of Lipid Biosynthesis Gene of Chlorella sp.3 under Nitrogen Limited Condition

Authors: Jyoti Singh, Swati Dubey, Mukta Singh, R. P. Singh

Abstract:

The freshwater microalga Chlorella sp. is attracting considerable interest as a source for biofuel production due to its fast growth rate and high lipid content. Under nitrogen-limited conditions, it can accumulate significant amounts of lipids, so it is important to gain insight into the molecular mechanism of its lipid metabolism. In this study, growth characteristics, lipid accumulation, and gene expression analysis of key regulatory genes of the lipid biosynthetic pathway were carried out in the microalga Chlorella sp. 3 under nitrogen-limited conditions. Our results indicated that under nitrogen-limited conditions there is a significant increase in lipid content and lipid productivity, reaching 44.21±2.64% and 39.34±0.66 mg/l/d at the end of cultivation, respectively. Time-course transcript patterns of the lipid biosynthesis genes acetyl-CoA carboxylase (accD) and diacylglycerol acyltransferase (dgat) showed that during the late log phase of Chlorella sp. 3, both genes were significantly upregulated compared to the early log phase. Moreover, the transcript level of the dgat gene was two-fold higher than that of the accD gene. The results suggest that both genes responded sensitively to nitrogen limitation during the late log stage, indicating their close relevance to lipid biosynthesis. This transcriptome data will be useful for engineering microalgal species by targeting these genes for genetic modification to improve microalgal biofuel quality and production.
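Time-course transcript levels of this kind are commonly quantified by qPCR with the 2^-ΔΔCt method; a sketch with hypothetical Ct values follows (the abstract does not state the quantification method the authors used).

```python
# Sketch of relative transcript quantification by the 2^-ΔΔCt method, the
# usual way to express fold changes such as the accD/dgat upregulation
# between early and late log phase. All Ct values here are hypothetical.

def fold_change(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """Relative expression vs. a calibrator sample, normalized to a
    reference (housekeeping) gene."""
    d_ct = ct_target - ct_ref               # delta-Ct, sample of interest
    d_ct_cal = ct_target_cal - ct_ref_cal   # delta-Ct, calibrator sample
    return 2.0 ** -(d_ct - d_ct_cal)        # 2^-(delta-delta-Ct)

# Hypothetical: dgat in late log phase (Ct 24) vs. early log phase (Ct 26),
# reference gene at Ct 20 in both samples
fc = fold_change(24.0, 20.0, 26.0, 20.0)    # delta-delta-Ct = -2 -> 4-fold up
```

A lower Ct means earlier amplification, so a negative ΔΔCt corresponds to upregulation relative to the calibrator phase.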

Keywords: biofuel, gene, lipid, microalgae

Procedia PDF Downloads 307
1635 Stubble and Senesced Leaves Are the Primary Sites of Ice Nucleation Activity in Wheat

Authors: Amanuel Bekuma, Rebecca Swift, Sarah Jackson, Ben Biddulph

Abstract:

Economic losses from frost damage have increased over the past years in the Western Australian Wheatbelt, yet agronomic, genetic, and climatic studies have so far found only a weak correlation between temperature and frost damage. One possibility that has not been explored within the Australian cropping system is whether ice nucleation active bacteria (INB), either present in situ on crop residue or introduced by rainfall, could be responsible for the increased sensitivity of cereal plants to frost at different stages of development. This study investigated the upper and lower leaf canopy, stubble, and soil as potential sites of ice nucleation activity (INA) and tracked the changes in INA during plant development. We found that older leaves of wheat are the primary sites of ice nucleation (-4.7 to -6.3°C), followed by stubble (-5.7 to -6.7°C), which increases the risk of frost damage during heading and flowering (the most susceptible stages). However, healthy green upper-canopy leaves (flag and flag-2) and the soil have lower INA (< -11°C) during the frost-sensitive stage of wheat. We attribute the higher INA on stubble and older leaves to the presence of biologically active ice-nucleating bacteria, known to cause frost injury to sensitive plants at -5°C. Stubble retained or applied during the growing season further exacerbates frost risk by potentially increasing the INB load. The implications of these results for stubble and frost risk management in a frost-prone landscape will be discussed.

Keywords: frost, ice-nucleation-activity, stubble, wheat

Procedia PDF Downloads 135
1634 Bio-Mimetic Foot Design for Legged Locomotion over Unstructured Terrain

Authors: Hannah Kolano, Paul Nadan, Jeremy Ryan, Sophia Nielsen

Abstract:

The hooves of goats and other members of the family Ruminantia are uniquely structured to adapt to rough terrain: they possess a hard outer shell and a soft interior that allow them both to conform to uneven surfaces and to hook onto prominent features. In an effort to apply this mechanism in a robotics context, artificial feet for a hexapedal robot have been designed based on the hooves of ruminants to improve the robot’s ability to traverse unstructured environments, such as those found on a rocky planet or asteroid, as well as earth-based environments such as rubble, caves, and mountainous regions. The feet were manufactured using a combination of 3D printing and polyurethane casting techniques and attached to a commercially available hexapedal robot. The robot was programmed with a terrain-adaptive gait and proved capable of traversing a variety of uneven surfaces and inclines. This development of more adaptable robotic feet allows legged robots to operate in a wider range of environments and expands their possible applications.

Keywords: biomimicry, legged locomotion, robotic foot design, ruminant feet, unstructured terrain navigation

Procedia PDF Downloads 128
1633 Monitoring the Phenomenon of Black Sand in Hurghada’s Artificial Lakes from Sources of Groundwater and Removal Techniques

Authors: Ahmed M. Noureldin, Khaled M. Naguib

Abstract:

This experimental investigation seeks to identify the root cause of the black sand issue in one of the man-made lakes of a well-known Hurghada resort. The lake is fed by underground wells and empties continuously into the Red Sea. Chemical testing was performed on patches of foul-smelling black sand beneath the sandy lake surface. The findings on samples taken from several locations (wells, lake-bottom sand, and clean sand with the same specifications as the bottom sand) indicated the presence of organic sulfur bacteria, which are responsible for the black sand phenomenon. Approximately 39.139 mg/kg of sulfide in the form of hydrogen sulfide was present in the lake-bottom sand, compared with 1.145 mg/kg in the bare sand before use. The study also involved modeling, with the GPS-X program, of a bottom-sand cleaning scheme that uses hydrocyclones as a physical-mechanical treatment method. The modeling results indicated a Total Organic Carbon (TOC) removal effectiveness of 0.65%. The research recommends using hydrocyclones to routinely and mechanically clear the sand from the lake bottom.

Keywords: man-made lakes, organic sulfur bacteria, total organic carbon, hydro cyclone

Procedia PDF Downloads 72
1632 Brain Computer Interface Implementation for Affective Computing Sensing: Classifiers Comparison

Authors: Ramón Aparicio-García, Gustavo Juárez Gracia, Jesús Álvarez Cedillo

Abstract:

Brain-computer interfacing is a research line of computer science within the study of Human-Computer Interaction (HCI); it seeks to recognize and interpret user intent through the storage and subsequent analysis of the electrical signals of the brain, for use in the control of electronic devices. Affective computing research, in turn, applies human emotions to the HCI process, helping to reduce user frustration. This paper shows the results obtained during the hardware and software development of a Brain Computer Interface (BCI) capable of recognizing human emotions through the association of brain electrical activity patterns. The hardware involves the sensing stage and analog-to-digital conversion. The interface software involves algorithms for pre-processing of the signal, time- and frequency-domain analysis, and classification of the patterns associated with the electrical brain activity. The methods used for the analysis and classification of the signal were tested separately using a publicly accessible database, along with a comparison among classifiers to determine the best-performing one.
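One common frequency-domain feature of the kind such a pipeline extracts is band power. Below is a naive DFT-based sketch on a synthetic signal; the actual algorithms, electrodes, and frequency bands used in the paper are not specified in the abstract.

```python
import math, cmath

def band_power(signal, fs, f_lo, f_hi):
    """Mean squared DFT magnitude of the bins within [f_lo, f_hi] Hz.
    Naive O(N^2) DFT for clarity; a real pipeline would use an FFT."""
    n = len(signal)
    power, count = 0.0, 0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            x_k = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                      for t in range(n))
            power += abs(x_k) ** 2 / n
            count += 1
    return power / max(count, 1)

# Synthetic 10 Hz sinusoid sampled at 128 Hz for 1 s: its alpha-band
# (8-13 Hz) power should dwarf its beta-band (14-30 Hz) power.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
alpha = band_power(sig, fs, 8, 13)
beta = band_power(sig, fs, 14, 30)
```

A feature vector of such band powers per channel is a typical input to the classifiers being compared.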

Keywords: affective computing, interface, brain, intelligent interaction

Procedia PDF Downloads 388
1631 A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms

Authors: Feixiang Zhao, Shuangcheng Jia, Qian Li

Abstract:

High-precision measurement of a target’s position and size is one of the hotspots in the field of vision inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely Distance Area Number-New Minimize Projection Error (DAN-NMPE). Our algorithm contains two parts, DAN and NMPE: DAN is a picture-sequence algorithm, while NMPE is a projection-error minimization algorithm; together they greatly improve the measurement accuracy of the target’s position and size. Comprehensive experiments validate the effectiveness of the proposed method on a self-made traffic sign dataset. The results show that, with a laser point cloud as the ground truth, the size and position errors of the traffic signs measured by this method are ±5% and 0.48±0.3 m, respectively. We also compared it with the current mainstream method that uses a monocular camera to locate and measure traffic signs: DAN-NMPE attains significant improvements over existing state-of-the-art methods, improving the measurement accuracy of size and position by 50% and 15.8%, respectively.
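The projection-error objective that a method like NMPE minimizes can be sketched for an ideal pinhole camera: project a candidate 3D placement of the sign’s corners and measure the pixel error against the detected corners. The intrinsics and sign geometry below are hypothetical, not the paper’s calibration.

```python
def project(point, fx, fy, cx, cy):
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

def reprojection_error(points_3d, pixels, fx, fy, cx, cy):
    """Mean Euclidean pixel error between projected and observed points.
    This is the quantity an optimizer would drive toward zero by adjusting
    the candidate 3D position and size of the sign."""
    total = 0.0
    for p, (u, v) in zip(points_3d, pixels):
        pu, pv = project(p, fx, fy, cx, cy)
        total += ((pu - u) ** 2 + (pv - v) ** 2) ** 0.5
    return total / len(points_3d)

# Hypothetical intrinsics and a 0.6 m square sign 10 m ahead of the camera
fx = fy = 800.0
cx, cy = 320.0, 240.0
corners = [(-0.3, -0.3, 10.0), (0.3, -0.3, 10.0),
           (0.3, 0.3, 10.0), (-0.3, 0.3, 10.0)]
observed = [project(p, fx, fy, cx, cy) for p in corners]
err = reprojection_error(corners, observed, fx, fy, cx, cy)  # exact fit: 0.0
```

An optimizer would perturb the candidate corner positions (and hence position and size) to minimize this error against real detections.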

Keywords: monocular camera, GPS, positioning, measurement

Procedia PDF Downloads 144