Search results for: Lamarckian genetic algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4659

3069 Automatic LV Segmentation with K-means Clustering and Graph Searching on Cardiac MRI

Authors: Hae-Yeoun Lee

Abstract:

Quantification of cardiac function is performed by calculating blood volume and ejection fraction in routine clinical practice. However, this work has been performed by manual contouring, which is time-consuming and varies with the observer. In this paper, an automatic left ventricle segmentation algorithm for cardiac magnetic resonance images (MRI) is presented. Using knowledge of cardiac MRI, a K-means clustering technique is applied to segment the blood region on a coil-sensitivity-corrected image. Then, a graph searching technique is used to correct segmentation errors caused by coil distortion and noise. Finally, blood volume and ejection fraction are calculated. Using cardiac MRI from 15 subjects, the presented algorithm was tested and compared with manual contouring by experts, showing strong performance.
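
A minimal sketch of the clustering step described above, assuming a coil-sensitivity-corrected short-axis slice is available as a 2-D NumPy array; the array name, the cluster count and the choice of the brightest cluster as blood are illustrative assumptions, not the authors' exact implementation:

    import numpy as np
    from sklearn.cluster import KMeans

    def segment_blood_kmeans(slice_img, n_clusters=3):
        """Cluster pixel intensities and return a binary mask of the brightest cluster."""
        pixels = slice_img.reshape(-1, 1).astype(float)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
        labels = km.labels_.reshape(slice_img.shape)
        # Blood appears bright on coil-corrected cine MRI, so take the cluster
        # with the highest mean intensity as the candidate blood region.
        blood_cluster = int(np.argmax(km.cluster_centers_.ravel()))
        return labels == blood_cluster

    # Example with a synthetic image in place of a real MRI slice.
    demo = np.random.rand(64, 64)
    mask = segment_blood_kmeans(demo)
    print(mask.sum(), "pixels labelled as blood")

In the paper's pipeline, the graph searching step would then refine the boundary of this candidate mask before blood volume and ejection fraction are computed.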

Keywords: cardiac MRI, graph searching, left ventricle segmentation, K-means clustering

Procedia PDF Downloads 387
3068 Lateral Control of Electric Vehicle Based on Fuzzy Logic Control

Authors: Hartani Kada, Merah Abdelkader

Abstract:

To address the high nonlinearities and unmatched uncertainties of the dynamic system of intelligent electric vehicles, this paper presents a lateral motion control algorithm for intelligent electric vehicles with four in-wheel motors. A fuzzy logic procedure is presented and formulated to realize lateral control in lane changes. A vehicle dynamics model and a desired target tracking model are established. A fuzzy logic controller was designed for integrated active front steering (AFS) and direct yaw moment control (DYC) in order to improve vehicle handling performance and stability, and another fuzzy controller was designed for the automatic steering problem. The simulation results demonstrate the strong robustness and excellent tracking performance of the proposed control algorithm.

Keywords: fuzzy logic, lateral control, AFS, DYC, electric car technology, longitudinal control, lateral motion

Procedia PDF Downloads 591
3067 Efficacy of Preimplantation Genetic Screening in Women with a Spontaneous Abortion History with Euploid or Aneuploid Abortus

Authors: Jayeon Kim, Eunjung Yu, Taeki Yoon

Abstract:

Most spontaneous miscarriages are believed to be a consequence of embryo aneuploidy. Transferring euploid embryos selected by PGS is expected to decrease the miscarriage rate. Current PGS indications include advanced maternal age, recurrent pregnancy loss, and repeated implantation failure. Recently, the use of PGS for healthy women without the above indications, for the purpose of improving in vitro fertilization (IVF) outcomes, has been on the rise. However, the benefit of PGS in this population is still controversial, especially in women with a history of no more than two miscarriages or miscarriage of a euploid abortus. This study aimed to investigate whether the karyotype of the abortus is a good indicator for preimplantation genetic screening (PGS) in a subsequent IVF cycle in women with a history of spontaneous abortion. A single-center retrospective cohort study was performed. Women who had spontaneous abortions (fewer than three) with dilatation and evacuation, and a subsequent IVF cycle from January 2016 to November 2016, were included. Their medical information was extracted from the charts. Clinical pregnancy was defined as the presence of a gestational sac with fetal heartbeat detected on ultrasound at week 7. Statistical analysis was performed using SPSS software. In total, 234 women were included; 121 of 234 (51.7%) underwent karyotyping of the abortus, and 113 did not have the abortus karyotyped. Embryo biopsy was performed 3 or 5 days after oocyte retrieval, followed by embryo transfer (ET) in a fresh or frozen cycle. The biopsied material was subjected to microarray comparative genomic hybridization. The clinical pregnancy rate per ET was compared between the PGS and non-PGS groups in each study group. Patients were grouped by two criteria: the karyotype of the abortus from the previous miscarriage (unknown fetal karyotype (n=89, Group 1), euploid abortus (n=36, Group 2) or aneuploid abortus (n=67, Group 3)), and whether PGS was pursued in the subsequent IVF cycle (PGS group, n=105; non-PGS group, n=87). The PGS group was significantly older and had a higher number of retrieved oocytes and prior miscarriages than the non-PGS group. There were no differences in BMI and AMH level between the two groups. In the PGS group, the mean number of transferable (euploid) embryos was 1.3 ± 0.7, 1.5 ± 0.5 and 1.4 ± 0.5, respectively (p = 0.049). In 42 cases, ET was cancelled because all biopsied embryos turned out to be abnormal. In all three groups (Groups 1, 2, and 3), clinical pregnancy rates were not statistically different between the PGS and non-PGS groups (Group 1: 48.8% vs. 52.2% (p=0.858), Group 2: 70% vs. 73.1% (p=0.730), Group 3: 42.3% vs. 46.7% (p=0.640), for the PGS and non-PGS groups, respectively). In both the euploid-abortus and aneuploid-abortus groups, the clinical pregnancy rate did not differ between IVF cycles with and without PGS. When miscarriage and ongoing pregnancy rates were compared, there were no significant differences between the PGS and non-PGS groups in any of the three groups. Our results show that routine application of PGS in women who have had fewer than three miscarriages would not be beneficial, even when the previous miscarriage was caused by fetal aneuploidy.

Keywords: preimplantation genetic diagnosis, miscarriage, karyotyping, in vitro fertilization

Procedia PDF Downloads 163
3066 Bi-Criteria Vehicle Routing Problem for Possibility Environment

Authors: Bezhan Ghvaberidze

Abstract:

A multiple criteria optimization approach for the solution of the fuzzy vehicle routing problem (FVRP) is proposed. For the possibility environment, the levels of movement between customers are calculated by a constructed interactive simulation algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of the total fuzzy travel time on closed routes, is constructed for the FVRP. A new, second criterion, maximization of the feasibility of movement on the closed routes, is constructed using the Choquet finite averaging operator. The FVRP is reduced to a bi-criteria partitioning problem over so-called "promising" routes, which are selected from all admissible closed routes. A suitable selection of the "promising" routes allows the reduced problem to be solved in real time. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. An exact algorithm is implemented based on D. Knuth's Dancing Links technique (algorithm DLX). The main objective was to present a new approach for the FVRP when there are difficulties in moving on the roads; this approach is called FVRP for extreme conditions (FVRP-EC) on the roads. A further aim of this paper was to construct a solution model for the formulated FVRP. Results are illustrated on a numerical example in which all Pareto-optimal solutions are found. An approach for the more complex FVRP model with time windows was also developed, and a numerical example is presented in which optimal routes are constructed for extreme conditions on the roads.

Keywords: combinatorial optimization, fuzzy vehicle routing problem, multiple objective programming, possibility theory

Procedia PDF Downloads 465
3065 Event Extraction, Analysis, and Event Linking

Authors: Anam Alam, Rahim Jamaluddin Kanji

Abstract:

With the rapid growth of events everywhere, event extraction has become an important means of retrieving information from unstructured data. One of the challenging problems is extracting events from such data. An event is an observable occurrence of interaction among entities. This paper investigates the event extraction capabilities of three software tools: Wandora, Nitro and SPSS. We applied the standard text mining techniques of these tools to the data sets of (i) the Afghan War Diaries (AWD collection), (ii) MUC4 and (iii) WebKB. Information retrieval measures such as precision and recall are computed over an extensive set of event extraction experiments. The experimental study analyzes the differences between events extracted by the software and by humans. This approach helps to construct an algorithm that can be applied with different machine learning methods.
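
The precision and recall measures referred to above can be computed by comparing the machine-extracted event set with a human-annotated gold standard; the following is a generic sketch, and the example event identifiers are hypothetical rather than taken from the AWD, MUC4 or WebKB experiments:

    def precision_recall(extracted, gold):
        """Compare machine-extracted events against human-annotated events."""
        extracted, gold = set(extracted), set(gold)
        true_positives = len(extracted & gold)
        precision = true_positives / len(extracted) if extracted else 0.0
        recall = true_positives / len(gold) if gold else 0.0
        return precision, recall

    # Hypothetical example: three extracted events, two of which match the gold standard.
    p, r = precision_recall({"attack-01", "meeting-02", "attack-07"},
                            {"attack-01", "attack-07", "protest-03", "raid-11"})
    print(f"precision={p:.2f}, recall={r:.2f}")   # precision=0.67, recall=0.50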

Keywords: event extraction, Wandora, nitro, SPSS, event analysis, extraction method, AFG, Afghan War Diaries, MUC4, 4 universities, dataset, algorithm, precision, recall, evaluation

Procedia PDF Downloads 572
3064 Evolutionary Analysis of Influenza A (H1N1)pdm09 in the Post-Pandemic Period in Pakistan

Authors: Nazish Badar

Abstract:

In early 2009, the pandemic influenza A (H1N1) virus emerged globally. Since then, it has continued to circulate, causing considerable morbidity and mortality. The purpose of this study was to evaluate the evolutionary changes in influenza A (H1N1)pdm09 viruses from 2009 to 2015 and their relevance to the current vaccine viruses. Methods: Respiratory specimens were collected from patients with influenza-like illness and severe acute respiratory illness. Samples were processed according to the CDC protocol. Sequencing and phylogenetic analysis of the haemagglutinin (HA) and neuraminidase (NA) genes were carried out on representative Pakistani isolates. Results: Between January 2009 and February 2016, 1,870 of 14,086 samples (13.2%) were positive for influenza A. During the pandemic period (2009-10), influenza A(H1N1)pdm09 was the dominant strain, accounting for 366 (45%) of the total influenza positives. In the post-pandemic period (2011-2016), a total of 1,066 (59.6%) cases were positive for influenza A(H1N1)pdm09, with co-circulation of different influenza A subtypes. Overall, the Pakistani A(H1N1)pdm09 viruses grouped into two genetic clades. Influenza A(H1N1)pdm09 viruses belonged only to clade 7 during the pandemic period, whereas viruses belonged to clade 7 (2011) and clade 6B (2015) during the post-pandemic years. Amino acid analysis of the HA gene revealed mutations at positions S220T, I338V and P100S, specifically associated with outbreaks, in all the analyzed strains. Sequence analyses of post-pandemic A(H1N1)pdm09 viruses showed additional substitutions at antigenic sites, S179N and K180Q (SA), D185N and D239G (CA), and S202A (SB), and at receptor binding sites, A13T and S200P, when compared with the pandemic period. Substitutions at the genetic markers A273T (69%), S200P/T (15%) and D239G (7.6%), associated with severity, and E391K (69%), associated with virulence, were identified in viruses isolated during 2015. Analysis of the NA gene revealed the outbreak markers V106I (23%) among pandemic and N248D (100%) among post-pandemic Pakistani viruses. Additional N-glycosylation sites, HA S179N (23%), NA I23T (7.6%) and N44S (77%) in place of N386K (77%), were found only in post-pandemic viruses. All isolates showed histidine (H) at position 275 in NA, indicating sensitivity to neuraminidase inhibitors. Conclusion: This study shows that the influenza A(H1N1)pdm09 viruses from Pakistan clustered into two genetic clades, with co-circulation of some variants. Certain key substitutions in the receptor binding site and a few changes indicative of virulence were also detected in post-pandemic strains. It is therefore imperative to continue monitoring these viruses for early identification of potential variants of high virulence or the emergence of drug-resistant variants.

Keywords: Influenza A (H1N1) pdm09, evolutionary analysis, post pandemic period, Pakistan

Procedia PDF Downloads 187
3063 Optimization of Electrocoagulation Process Using Duelist Algorithm

Authors: Totok R. Biyanto, Arif T. Mardianto, M. Farid R. R., Luthfi Machmudi, Kandi Mulakasti

Abstract:

The main objective of this research is to optimize the electrocoagulation process design as a post-treatment for biologically treated vinasse effluent. A first-principles model is used, and the three independent variables that affect the energy consumption of the electrocoagulation process, i.e., current density, electrode distance, and treatment time, are chosen as the optimization variables. The process conditions were a vinasse pH, electrical conductivity, and temperature of about 6.5, 28.5 mS/cm, and 52 °C, respectively. Aluminum was chosen as the electrode material for the electrocoagulation process. The Duelist algorithm was used as the optimization technique due to its capability to reach a global optimum. The optimization results show that the optimal process is reached at a current density of 2.9976 A/m², an electrode distance of 1.5 cm and an electrolysis time of 119 min. The optimized energy consumption during the process is 34.02 Wh.

Keywords: optimization, vinasse effluent, electrocoagulation, energy consumption

Procedia PDF Downloads 456
3062 Minimization of Propagation Delay in Multi Unmanned Aerial Vehicle Network

Authors: Purva Joshi, Rohit Thanki, Omar Hanif

Abstract:

Unmanned aerial vehicles (UAVs) are becoming increasingly important in various industrial applications and sectors. Nowadays, multi UAV networks are used for specific types of communication (e.g., military) and monitoring purposes. Therefore, it is critical to reduce the propagation delay during communication between UAVs, which is essential in a multi UAV network. This paper presents how the propagation delay between the base station (BS) and the UAVs is reduced using a searching algorithm. Furthermore, an iterative k-nearest neighbor (k-NN) algorithm and a Travelling Salesman Problem (TSP) algorithm were utilized to optimize the distance between the BS and individual UAVs to overcome the problem of propagation delay in multi UAV networks. The simulation results show that the proposed method reduced complexity, improved reliability, and reduced propagation delay in multi UAV networks.
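
A minimal sketch of the distance-optimisation idea, combining a nearest-neighbour (k-NN style) choice with a TSP-like visiting order over UAV positions; the coordinates and the use of Euclidean distance as a proxy for propagation delay are illustrative assumptions, not the paper's exact formulation:

    import math

    def nearest_neighbour_route(base_station, uav_positions):
        """Order UAVs greedily by nearest-neighbour distance, starting from the base station."""
        remaining = list(uav_positions)
        route, current = [], base_station
        while remaining:
            nxt = min(remaining, key=lambda p: math.dist(current, p))
            route.append(nxt)
            remaining.remove(nxt)
            current = nxt
        return route

    def total_distance(base_station, route):
        points = [base_station] + route
        return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

    bs = (0.0, 0.0)
    uavs = [(3.0, 4.0), (1.0, 1.0), (6.0, 2.0)]
    route = nearest_neighbour_route(bs, uavs)
    # Propagation delay grows with distance, so a shorter route implies lower delay.
    print(route, total_distance(bs, route))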

Keywords: multi UAV network, optimal distance, propagation delay, K - nearest neighbor, traveling salesmen problem

Procedia PDF Downloads 180
3061 Analysis of Intra-Varietal Diversity for Some Lebanese Grapevine Cultivars

Authors: Stephanie Khater, Ali Chehade, Lamis Chalak

Abstract:

The progressive replacement of Lebanese autochthonous grapevine cultivars by imported foreign varieties during the last decade has nearly resulted in genetic erosion of the local germplasm and confusion over cultivar names. Hence, there is a need to characterize these local cultivars and to assess the variability that may exist at the cultivar level. This work was conducted to evaluate the intra-varietal diversity within the Lebanese traditional cultivars 'Aswad', 'Maghdoushe', 'Maryame', 'Merweh', 'Meksese' and 'Obeide'. A total of 50 accessions distributed over five main geographical areas in Lebanon were collected and submitted to both ampelographic description and ISSR DNA analysis. A set of 35 ampelographic descriptors previously established by the International Office of Vine and Wine, related to leaf, bunch, berry, and phenological stages, was examined. Variability was observed between accessions within cultivars for blade shape, density of prostrate and erect hairs, teeth shape, berry shape, size and color, cluster shape and size, and flesh juiciness. At the molecular level, nine ISSR (inter-simple sequence repeat) primers previously developed for grapevine were used. These primers generated a total of 35 bands, of which 30 (85.7%) were polymorphic. In total, 29 genetic profiles were differentiated: 9 within 'Obeide', 6 within 'Maghdoushe', 5 within 'Merweh', 4 within 'Maryame', 3 within 'Aswad' and 2 within 'Meksese'. The findings of this study indicate the existence of several genotypes that form the basis of the main indigenous cultivars grown in Lebanon and that should be further considered in the establishment of new vineyards and in selection programs.

Keywords: ampelography, autochthonous cultivars, ISSR markers, Lebanon, Vitis vinifera L.

Procedia PDF Downloads 128
3060 Amino Acid Responses of Wheat Cultivars under Glasshouse Drought Accurately Predict Yield-Based Drought Tolerance in the Field

Authors: Arun K. Yadav, Adam J. Carroll, Gonzalo M. Estavillo, Greg J. Rebetzke, Barry J. Pogson

Abstract:

Water limits crop productivity, so selecting for minimal yield-gap in drier environments is critical to mitigate against climate change and land-use pressures. To date, no markers measured in glasshouses have been reported to predict field-based drought tolerance. In the field, the best measure of drought tolerance is yield-gap; but this requires multisite trials that are an order of magnitude more resource intensive and can be impacted by weather variation. We investigated the responses of relative water content (RWC), stomatal conductance (gs), chlorophyll content and metabolites in flag leaves of commercial wheat (Triticum aestivum L.) cultivars to three drought treatments in the glasshouse and field environments. We observed strong genetic associations between glasshouse-based RWC, metabolites and Yield gap-based Drought Tolerance (YDT): the ratio of yield in water-limited versus well-watered conditions across 24 field environments spanning sites and seasons. Critically, RWC response to glasshouse drought was strongly associated with both YDT (r2 = 0.85, p < 8E-6) and RWC under field drought (r2 = 0.77, p < 0.05). Multiple regression analyses revealed that 98% of genetic YDT variance was explained by drought responses of four metabolites: serine, asparagine, methionine and lysine (R2 = 0.98; p < 0.01). Fitted coefficients suggested that, for given levels of serine and asparagine, stronger methionine and lysine accumulation was associated with higher YDT. Collectively, our results demonstrate that high-throughput, targeted metabolic phenotyping of glasshouse-grown plants may be an effective tool for the selection of wheat cultivars with high YDT in the field.
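
A hedged sketch of the multiple-regression step described above, fitting YDT against the drought responses of the four metabolites by ordinary least squares; the data array here is randomly generated for illustration and does not reproduce the reported R² = 0.98 fit:

    import numpy as np

    rng = np.random.default_rng(0)
    # Columns: drought responses of serine, asparagine, methionine, lysine (illustrative values).
    X = rng.normal(size=(24, 4))
    ydt = X @ np.array([-0.3, -0.2, 0.5, 0.4]) + rng.normal(scale=0.1, size=24)

    # Ordinary least squares with an intercept term.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, ydt, rcond=None)

    pred = A @ coef
    r2 = 1 - np.sum((ydt - pred) ** 2) / np.sum((ydt - ydt.mean()) ** 2)
    print("intercept and coefficients:", np.round(coef, 3))
    print("R^2 on the illustrative data:", round(r2, 3))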

Keywords: drought stress, grain yield, metabolomics, stomatal conductance, wheat

Procedia PDF Downloads 250
3059 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

The parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing processes can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy. Quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge is related to root causes and defect patterns. In the R&D stage, root causes are often unknown, and defect patterns are difficult to classify. Additionally, data collection is not easy. Even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools, such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for manufacturing processes in Taiwan's tech companies are presented to illustrate the proposed approach's effectiveness. Finally, a discussion of using traditional experimental design versus the proposed approach for process optimization is provided.

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 129
3058 Integrated Model for Enhancing Data Security Performance in Cloud Computing

Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali

Abstract:

Cloud computing has been an important and promising field in the recent decade. Cloud computing allows resources, services and information to be shared among people all over the world. Although the advantages of using clouds are great, there are many risks in a cloud. Data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users and save users' time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a username and password are used, and the password is protected with SHA-2 one-way hashing. The proposed system shows an improvement in the processing time of uploading and downloading files to and from the cloud in secure form.
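
A minimal sketch of the SHA-2 parts of such a model (a file-integrity digest and one-way password hashing) using Python's standard hashlib; this is illustrative only, and the Blowfish encryption of the payload itself, which would require a third-party cryptography library, is deliberately omitted:

    import hashlib
    import os

    def file_digest_sha256(data: bytes) -> str:
        """SHA-256 digest used to verify that a file was not altered in transit."""
        return hashlib.sha256(data).hexdigest()

    def hash_password(password: str, salt: bytes | None = None) -> tuple[bytes, str]:
        """One-way SHA-256 hash of a salted password for the authentication step."""
        salt = salt or os.urandom(16)
        return salt, hashlib.sha256(salt + password.encode()).hexdigest()

    payload = b"example file content to upload"
    print("integrity digest:", file_digest_sha256(payload))

    salt, stored = hash_password("user-secret")
    # Verification repeats the hash with the stored salt and compares digests.
    print("password check:", hash_password("user-secret", salt)[1] == stored)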

Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish

Procedia PDF Downloads 458
3057 Saliency Detection Using a Background Probability Model

Authors: Junling Li, Fang Meng, Yichun Zhang

Abstract:

Image saliency detection has long been studied, yet several challenging problems remain unsolved, such as inaccurate saliency detection in complex scenes and the suppression of salient objects near the image borders. In this paper, we propose a new saliency detection algorithm to solve these problems. We represent the image as a graph with superpixels as nodes. By considering the appearance similarity between the boundary and the background, the proposed method chooses non-salient boundary nodes as background priors to construct the background probability model. The probability that each node belongs to the model is computed, which measures its similarity with the background. Saliency can thus be calculated using the transformed probability as a metric. We compare our algorithm with ten state-of-the-art saliency detection methods on a public database. Experimental results show that our simple and effective approach can address challenging problems that have long been difficult in image saliency detection.

Keywords: visual saliency, background probability, boundary knowledge, background priors

Procedia PDF Downloads 413
3056 TMBCoI-SIOT: Trust Management System Based on the Community of Interest for the Social Internet of Things

Authors: Oumaima Ben Abderrahim, Mohamed Houcine Elhedhili, Leila Saidane

Abstract:

In this paper, we propose a trust management system based on a clustering architecture for the social internet of things, called TMBCoI-SIOT. The proposed model integrates numerous factors such as direct and indirect trust, a transaction factor, a precaution factor, and social modeling of trust. The novelty of our approach can be summed up in two aspects. The first aspect concerns the architecture based on communities of interest (CoI), where each community is headed by an administrator (admin). The second aspect is the trust management system, which tries to prevent on-off attacks and mitigates dishonest recommendations using the k-means algorithm and guarantor things. The effectiveness of the proposed system is demonstrated by simulation against malicious nodes.

Keywords: IoT, trust management system, attacks, trust, dishonest recommendations, K-means algorithm

Procedia PDF Downloads 198
3055 Size Optimization of Microfluidic Polymerase Chain Reaction Devices Using COMSOL

Authors: Foteini Zagklavara, Peter Jimack, Nikil Kapur, Ozz Querin, Harvey Thompson

Abstract:

The invention and development of polymerase chain reaction (PCR) technology have revolutionised molecular biology and molecular diagnostics. There is an urgent need to optimise the performance of these devices while reducing the total construction and operation costs. The present study proposes a CFD-enabled optimisation methodology for continuous flow (CF) PCR devices with a serpentine-channel structure, which enables the trade-offs between the competing objectives of DNA amplification efficiency and pressure drop to be explored. This is achieved by using a surrogate-enabled optimisation approach that accounts for the geometrical features of a CF μPCR device by performing a series of simulations at a relatively small number of Design of Experiments (DoE) points, with the use of COMSOL Multiphysics 5.4. The values of the objectives are extracted from the CFD solutions, and response surfaces are created using polyharmonic splines and neural networks. After creating the respective response surfaces, a genetic algorithm and a multi-level coordinate search optimisation function are used to locate the optimum design parameters. Both optimisation methods produced similar results for both the neural network and the polyharmonic spline response surfaces. The results indicate that it is possible to improve the DNA amplification efficiency by ∼2% in one PCR cycle when doubling the width of the microchannel to 400 μm while maintaining the height at the value of the original design (50 μm). Moreover, the increase in the width of the serpentine microchannel is combined with a decrease in its total length in order to obtain the same residence times in all the simulations, resulting in a smaller total substrate volume (a 32.94% decrease). A multi-objective optimisation is also performed with the use of a Pareto front plot. Such knowledge will enable designers to maximise the amount of DNA amplified or to minimise the time taken for thermal cycling in such devices.
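
A hedged sketch of the surrogate step: fitting a polyharmonic (thin-plate spline) response surface to a handful of DoE points and searching it with a global optimiser. The objective values, channel-width and height bounds are illustrative stand-ins for the CFD outputs, and SciPy's differential evolution is used here merely as a convenient substitute for the genetic algorithm and multi-level coordinate search of the study:

    import numpy as np
    from scipy.interpolate import RBFInterpolator
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(1)
    # DoE points: (channel width in um, channel height in um) -- illustrative ranges.
    doe = rng.uniform([100.0, 30.0], [400.0, 80.0], size=(20, 2))
    # Stand-in objective: a smooth function pretending to be "negative DNA efficiency".
    objective_samples = np.cos(doe[:, 0] / 150.0) + (doe[:, 1] - 50.0) ** 2 / 500.0

    # Polyharmonic (thin-plate spline) response surface fitted to the DoE data.
    surface = RBFInterpolator(doe, objective_samples, kernel="thin_plate_spline")

    result = differential_evolution(lambda x: float(surface(x.reshape(1, -1))[0]),
                                    bounds=[(100.0, 400.0), (30.0, 80.0)],
                                    seed=1)
    print("surrogate optimum (width, height):", np.round(result.x, 1))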

Keywords: PCR, optimisation, microfluidics, COMSOL

Procedia PDF Downloads 143
3054 Mixtures of Length-Biased Weibull Distributions for Loss Severity Modelling

Authors: Taehan Bae

Abstract:

In this paper, a class of length-biased Weibull mixtures is presented to model loss severity data. The proposed model generalizes the Erlang mixtures with a common scale parameter, and it shares many important modelling features with the Erlang mixtures, such as the flexibility to fit various data distribution shapes and weak-denseness in the class of positive continuous distributions. We show that the asymptotic tail estimate of the length-biased Weibull mixture is of Weibull type, which makes the model effective for fitting loss severity data with heavy-tailed observations. A method of statistical estimation is discussed, with applications to real catastrophic loss data sets.

Keywords: Erlang mixture, length-biased distribution, transformed gamma distribution, asymptotic tail estimate, expectation-maximization (EM) algorithm

Procedia PDF Downloads 209
3053 Agroecological and Socioeconomic Determinants of Conserving Diversity On-Farm: The Case of Wheat Genetic Resources in Ethiopia

Authors: Bedilu Tafesse

Abstract:

Conservation of crop genetic resources presents the challenge of identifying the specific determinants driving the maintenance of diversity at the farm and agroecosystem levels. The objectives of this study were to identify the socioeconomic, market and agroecological determinants of farmers' maintenance of wheat diversity at the household level and to derive implications for policies in designing on-farm conservation programs. We assess wheat diversity at the farm level using household survey data. A household decision-making model is conceptualized using microeconomic theory to assess and identify factors influencing on-farm wheat diversity. The model is then tested econometrically using various factors affecting farmers' variety choice and diversity decisions. The findings show that household-specific socioeconomic, agroecological and market factors are important in determining on-farm wheat diversity. The significant variables explaining the richness and evenness of wheat diversity include distance to the nearest market, subsistence ratio, modern variety sold, land types and adult labour working in agriculture. The statistical signs of the factors determining wheat diversity are consistent in explaining richness, dominance and evenness among wheat varieties. Finally, the study implies that the cost-effective means of promoting and sustaining on-farm conservation programmes is to target them at market-isolated geographic locations of high crop diversity, where farm households have more heterogeneous agroecological conditions and more active family adult labour working on-farm.
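
The richness, dominance and evenness indices referred to above can be computed from variety-level shares on a farm; a minimal sketch with hypothetical plot areas of wheat varieties for one household (the numbers are not the study's data):

    import math

    def diversity_indices(counts):
        """Richness, Simpson dominance and Shannon evenness for one household's varieties."""
        total = sum(counts)
        shares = [c / total for c in counts if c > 0]
        richness = len(shares)
        dominance = sum(p ** 2 for p in shares)              # Simpson (Herfindahl) index
        shannon = -sum(p * math.log(p) for p in shares)
        evenness = shannon / math.log(richness) if richness > 1 else 1.0
        return richness, dominance, evenness

    # Hypothetical plot areas (ha) planted to four wheat varieties by one household.
    r, d, e = diversity_indices([2.0, 1.0, 0.5, 0.5])
    print(f"richness={r}, dominance={d:.3f}, evenness={e:.3f}")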

Keywords: diversity indices, dominance, evenness, on-farm conservation, wheat diversity, richness

Procedia PDF Downloads 285
3052 Reconfigurable Efficient IIR Filter Design Using MAC Algorithm

Authors: Rajesh Mehra

Abstract:

In this paper, an IIR filter has been designed and simulated on an FPGA. The implementation is based on the MAC algorithm, which uses multiply-and-accumulate operations for the IIR filter implementation. A parallel pipelined structure is used to implement the proposed IIR filter, taking optimal advantage of the look-up tables of the FPGA device. The designed filter has been synthesized on a DSP-slice-based FPGA to perform the multiplier function of the MAC unit. The DSP slices are useful for enhancing speed performance. The developed IIR filter is designed and simulated with MATLAB, synthesized with the Xilinx Synthesis Tool (XST), and implemented on Virtex-5 and Spartan-3A DSP FPGA devices. The IIR filter implemented on the Virtex-5 FPGA can operate at an estimated frequency of 81.5 MHz, compared with 40.5 MHz on the Spartan-3A DSP FPGA. The Virtex-5-based implementation also consumes fewer slices and slice flip-flops of the target FPGA than the Spartan-3A DSP-based implementation, providing a cost-effective solution for signal processing applications.
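
The multiply-and-accumulate (MAC) operation at the heart of the filter can be illustrated in software; a hedged sketch of a direct-form I IIR difference equation evaluated with explicit MAC loops, where the coefficients are an arbitrary example rather than the synthesized Butterworth design:

    def iir_direct_form1(x, b, a):
        """y[n] = sum(b[k]*x[n-k]) - sum(a[k]*y[n-k]), with a[0] assumed to be 1."""
        y = [0.0] * len(x)
        for n in range(len(x)):
            acc = 0.0
            # Feed-forward MAC operations.
            for k, bk in enumerate(b):
                if n - k >= 0:
                    acc += bk * x[n - k]
            # Feedback MAC operations.
            for k, ak in enumerate(a[1:], start=1):
                if n - k >= 0:
                    acc -= ak * y[n - k]
            y[n] = acc
        return y

    # Example: arbitrary first-order low-pass-like coefficients applied to a step input.
    print(iir_direct_form1([1.0] * 8, b=[0.2, 0.2], a=[1.0, -0.6]))

On an FPGA, each accumulation term maps naturally onto a DSP-slice MAC unit, which is why the DSP-slice-based implementation improves speed.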

Keywords: Butterworth, DSP, IIR, MAC, FPGA

Procedia PDF Downloads 342
3051 Application of Neural Networks to Predict Changing the Diameters of Bubbles in Pool Boiling Distilled Water

Authors: V. Nikkhah Rashidabad, M. Manteghian, M. Masoumi, S. Mousavian, D. Ashouri

Abstract:

In this research, the capability of neural networks to model and learn complicated, nonlinear relations has been used to develop a model for predicting changes in the diameter of bubbles in pool boiling of distilled water. The input parameters used in the development of this network include element temperature, heat flux, and the retention time of bubbles. The test data were obtained from pool boiling experiments with distilled water and measurements of the bubbles formed on the cylindrical element. The model was developed using a back-propagation training algorithm. The correlation coefficient obtained from this model is 0.9633, which shows that the model can be trusted for simulating and modeling bubble size and the heat transfer of boiling.
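
A hedged sketch of the kind of back-propagation network described above, using scikit-learn's MLPRegressor on synthetic stand-in data with the same three inputs (element temperature, heat flux, bubble retention time); the data, network size and resulting correlation are illustrative and will not reproduce the reported 0.9633:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    # Synthetic inputs: element temperature (C), heat flux (kW/m^2), retention time (ms).
    X = rng.uniform([100, 10, 1], [130, 100, 50], size=(200, 3))
    # Synthetic target standing in for measured bubble diameter (mm).
    y = 0.01 * X[:, 0] + 0.005 * X[:, 1] + 0.02 * X[:, 2] + rng.normal(scale=0.05, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    corr = np.corrcoef(y_test, pred)[0, 1]
    print("correlation coefficient on held-out synthetic data:", round(corr, 3))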

Keywords: bubble diameter, heat flux, neural network, training algorithm

Procedia PDF Downloads 430
3050 Unsupervised Segmentation Technique for Acute Leukemia Cells Using Clustering Algorithms

Authors: N. H. Harun, A. S. Abdul Nasir, M. Y. Mashor, R. Hassan

Abstract:

Leukaemia is a blood cancer that contributes to the rising mortality rate in Malaysia each year. There are two main categories of leukaemia, acute and chronic. The production and development of acute leukaemia cells occur rapidly and uncontrollably. Therefore, if acute leukaemia cells could be identified quickly and effectively, proper treatment and medicine could be delivered. Given the requirement for prompt and accurate diagnosis of leukaemia, the current study proposes unsupervised pixel segmentation based on clustering algorithms in order to obtain fully segmented abnormal white blood cells (blasts) in acute leukaemia images. To obtain the segmented blasts, three clustering algorithms, k-means, fuzzy c-means and moving k-means, were applied to the saturation component image. Then, median filtering and seeded region growing area extraction algorithms were applied to smooth the region of the segmented blast and to remove large unwanted regions from the image, respectively. Comparisons among the three clustering algorithms were made in order to measure the performance of each clustering algorithm in segmenting the blast area. Based on the good sensitivity values obtained, the results indicate that the moving k-means clustering algorithm successfully produced fully segmented blast regions in acute leukaemia images. The resultant images could therefore be helpful to haematologists for further analysis of acute leukaemia.

Keywords: acute leukaemia images, clustering algorithms, image segmentation, moving k-means

Procedia PDF Downloads 278
3049 Effect of Punch Diameter on Optimal Loading Profiles in Hydromechanical Deep Drawing Process

Authors: Mehmet Halkaci, Ekrem Öztürk, Mevlüt Türköz, H. Selçuk Halkacı

Abstract:

The hydromechanical deep drawing (HMD) process is an advanced manufacturing process used to form deep parts with only one forming step. In this process, the sheet metal blank can be drawn deeper by means of fluid pressure acting on the sheet surface in the direction opposite to the punch movement. A high limiting drawing ratio, good surface quality, less springback and high dimensional accuracy are some of the advantages of this process. The performance of the HMD process is affected by various process parameters such as fluid pressure, blank holder force, punch and die radii, pre-bulging pressure and height, punch diameter, and the friction between sheet and die and between sheet and punch. The fluid pressure and blank holder force are the main loading parameters and significantly affect the formability of the HMD process. The punch diameter also influences the limiting drawing ratio (the ratio of initial sheet diameter to punch diameter) of the sheet metal blank. In this research, optimal loading (fluid pressure and blank holder force) profiles were determined for AA 5754-O sheet material through a fuzzy control algorithm developed in a previous study using LS-DYNA finite element analysis (FEA) software. In the preceding study, the fuzzy control algorithm was developed utilizing geometrical criteria such as thinning and wrinkling. In order to obtain the final desired part with the developed algorithm for the requested punch diameter, the effect of punch diameter, one of the process parameters, on the loading profiles was investigated separately using a blank thickness of 1 mm. Thus, the practicality of the previously developed fuzzy control algorithm with different punch diameters was clarified. Also, the thickness distributions of the sheet metal blank along a curvilinear distance were compared for the FE analyses in which different punch diameters were used. Consequently, it was found that the use of different punch diameters did not significantly affect the optimal loading profiles.

Keywords: Finite Element Analysis (FEA), fuzzy control, hydromechanical deep drawing, optimal loading profiles, punch diameter

Procedia PDF Downloads 409
3048 Impact of Drainage Defects on Railway Track Surface Deflections: A Numerical Investigation

Authors: Shadi Fathi, Moura Mehravar, Mujib Rahman

Abstract:

The railway transportation network in the UK is over 100 years old and is known as one of the oldest mass transit systems in the world. This aged track network requires frequent closures for maintenance. One of the main reasons for closure is inadequate drainage due to leakage in the buried drainage pipes. The leaking water can cause localised subgrade weakness, which can subsequently lead to major ground/substructure failure. Different condition assessment methods are available to assess the railway substructure. However, the existing condition assessment methods are not able to detect local ground weakness/damage or provide details of the damage (e.g., size and location). To tackle this issue, a hybrid back-analysis technique based on an artificial neural network (ANN) and a genetic algorithm (GA) has been developed to predict the substructure layers' moduli and identify any soil weaknesses. First, a finite element (FE) model of a railway track section under falling weight deflectometer (FWD) testing was developed and validated against a field trial. Then a drainage pipe and various scenarios of local defects/soil weaknesses around the buried pipe, with various geometries and physical properties, were modelled. The impact of the local soil weakness on the track surface deflection was also studied. The FE simulation results were used to generate a database for ANN training, and then a GA was employed as an optimisation tool to back-calculate the layers' moduli and the soil weakness moduli (the ANN's inputs). The hybrid ANN-GA back-analysis technique is a computationally efficient method with no dependency on seed modulus values. The model can estimate the substructure layer moduli and detect the presence of any localised foundation weakness.

Keywords: finite element (FE) model, drainage defect, falling weight deflectometer (FWD), hybrid ANN-GA

Procedia PDF Downloads 138
3047 Smartphone Video Source Identification Based on Sensor Pattern Noise

Authors: Raquel Ramos López, Anissa El-Khattabi, Ana Lucila Sandoval Orozco, Luis Javier García Villalba

Abstract:

An increasing number of mobile devices with integrated cameras means that most digital video now comes from these devices. These digital videos can be made anytime, anywhere and for different purposes. They can also be shared on the Internet in a short period of time and may sometimes contain recordings of illegal acts. The need to reliably trace their origin becomes evident when these videos are used for forensic purposes. This work proposes an algorithm to identify the brand and model of the mobile device that generated a video. The procedure is as follows: after obtaining the relevant video information, a classification algorithm based on sensor noise and the wavelet transform performs the identification process. We also present experimental results that support the validity of the techniques used and are promising.

Keywords: digital video, forensics analysis, key frame, mobile device, PRNU, sensor noise, source identification

Procedia PDF Downloads 412
3046 An Approach to Noise Variance Estimation in Very Low Signal-to-Noise Ratio Stochastic Signals

Authors: Miljan B. Petrović, Dušan B. Petrović, Goran S. Nikolić

Abstract:

This paper describes a method for AWGN (additive white Gaussian noise) variance estimation in noisy stochastic signals, referred to as Multiplicative-Noising Variance Estimation (MNVE). The aim was to develop an estimation algorithm with a minimal number of assumptions about the original signal structure. The MATLAB simulation and analysis of the results of the method applied to speech signals showed higher accuracy than the standard AR (autoregressive) modeling noise estimation technique. In addition, strong performance was observed at very low signal-to-noise ratios, which in general represent the worst-case scenario for signal denoising methods. High execution time appears to be the only disadvantage of MNVE. After close examination of all the observed features of the proposed algorithm, it was concluded that it is worth exploring and that, with some further adjustments and improvements, it can be notably powerful.

Keywords: noise, signal-to-noise ratio, stochastic signals, variance estimation

Procedia PDF Downloads 373
3045 Heuristic Algorithms for Time Based Weapon-Target Assignment Problem

Authors: Hyun Seop Uhm, Yong Ho Choi, Ji Eun Kim, Young Hoon Lee

Abstract:

Weapon-target assignment (WTA) is the problem of assigning available launchers to appropriate targets in order to defend assets. Various algorithms for WTA have been developed over the past years for both the static and the dynamic environment (denoted SWTA and DWTA, respectively). Because the problem must be solved within a relevant computational time, WTA has suffered from poor solution efficiency, and as a result, SWTA and DWTA problems have been solved only for limited battlefield situations. In this paper, the general continuous-time situation is considered through the time-based weapon-target assignment (TWTA) problem. TWTA is studied using a mixed integer programming model, and three heuristic algorithms are suggested: decomposed opt-opt, decomposed opt-greedy, and greedy. Although the TWTA optimization model works inefficiently for large problem sizes, the decomposed opt-opt algorithm, based on linearization and decomposition, extracted efficient solutions in a reasonable computation time. Because the computation time of the scheduling part is too long to solve with the optimization model, several greedy-based algorithms are proposed. These show lower performance values than the decomposed opt-opt algorithm, but they require very short computation times. Hence, this paper proposes an improved method by applying decomposition to TWTA, and more practical and effective methods can be developed for using TWTA on the battlefield.
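
A minimal sketch of the greedy assignment idea used in such heuristics: each available launcher is given, in turn, to the target where it adds the most expected destroyed value; the target values and kill probabilities below are hypothetical, and the time dimension of TWTA is ignored in this simplified static version:

    def greedy_wta(kill_prob, target_value):
        """kill_prob[w][t]: probability weapon w destroys target t; returns weapon -> target."""
        n_weapons, n_targets = len(kill_prob), len(target_value)
        survive = [1.0] * n_targets           # probability each target survives so far
        assignment = {}
        for w in range(n_weapons):
            # Marginal gain of adding weapon w to target t.
            gains = [target_value[t] * survive[t] * kill_prob[w][t] for t in range(n_targets)]
            best_t = max(range(n_targets), key=lambda t: gains[t])
            assignment[w] = best_t
            survive[best_t] *= 1.0 - kill_prob[w][best_t]
        return assignment

    # Hypothetical scenario: 3 launchers, 2 targets.
    kp = [[0.7, 0.3],
          [0.4, 0.6],
          [0.5, 0.5]]
    print(greedy_wta(kp, target_value=[10.0, 8.0]))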

Keywords: air and missile defense, weapon target assignment, mixed integer programming, piecewise linearization, decomposition algorithm, military operations research

Procedia PDF Downloads 323
3044 Analysis of CO₂ Two-Phase Ejector with Taguchi and ANOVA Optimization and Refrigerant Selection with Enviro-Economic Concerns by TOPSIS Analysis

Authors: Karima Megdouli, Bourhan Tachtouch

Abstract:

Ejector refrigeration cycles offer an alternative to conventional systems for producing cold from low-temperature heat. In this article, a thermodynamic model is presented. This model has the advantage of simplifying the calculation algorithm and describes the complex double-throttling mechanism that occurs in the ejector. The model assumptions and calculation algorithm are presented first, and the impact of each efficiency is evaluated. Validation is performed on several data sets. The ejector model is then used to simulate an RES (refrigeration ejector system) to validate its robustness and suitability for predicting thermodynamic cycle performance. A Taguchi and ANOVA optimization is carried out on the RES. TOPSIS analysis is applied to select the optimum refrigerants, taking into account cost, safety, environmental and enviro-economic concerns along with thermophysical properties.
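
A hedged sketch of the TOPSIS ranking step used for refrigerant selection; the decision matrix, criterion weights and benefit/cost directions below are illustrative placeholders, not the refrigerant data of the study:

    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives by closeness to the ideal solution (higher score is better)."""
        m = np.asarray(matrix, dtype=float)
        norm = m / np.linalg.norm(m, axis=0)           # vector-normalise each criterion
        v = norm * np.asarray(weights, dtype=float)    # apply criterion weights
        ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
        anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
        d_best = np.linalg.norm(v - ideal, axis=1)
        d_worst = np.linalg.norm(v - anti, axis=1)
        return d_worst / (d_best + d_worst)

    # Hypothetical refrigerants scored on COP (benefit), GWP (cost) and price (cost).
    scores = topsis(matrix=[[3.2, 1400, 12.0],
                            [2.9,    3, 25.0],
                            [3.0,  600, 15.0]],
                    weights=[0.5, 0.3, 0.2],
                    benefit=[True, False, False])
    print(np.round(scores, 3))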

Keywords: ejector, velocity distribution, shock circle, Taguchi and ANOVA optimization, TOPSIS analysis

Procedia PDF Downloads 66
3043 Research on Transmission Parameters Determination Method Based on Dynamic Characteristic Analysis

Authors: Baoshan Huang, Fanbiao Bao, Bing Li, Lianghua Zeng, Yi Zheng

Abstract:

A parameter control strategy based on statistical characteristics can be used to analyze the choice of transmission ratios for an automobile transmission. According to the differences between transmission gears, the number and spacing of the gears can be determined, and the transmission ratio distribution needs to satisfy a certain distribution law. According to the statistical characteristics of the driving parameters, the shift control strategy of the vehicle is analyzed. From the above analysis, a CVT shift schedule adjustment algorithm based on statistical characteristic parameters can, by adjusting the schedule according to a given algorithm, move the target operating point to a location between the best-efficiency curve and the best-dynamics curve and thereby alter the vehicle characteristics. Based on the dynamic characteristics and the practical application of the vehicle, this paper presents a setting scheme for the transmission ratio.

Keywords: vehicle dynamics, transmission ratio, transmission parameters, statistical characteristics

Procedia PDF Downloads 376
3042 Approximation of Geodesics on Meshes with Implementation in Rhinoceros Software

Authors: Marian Sagat, Mariana Remesikova

Abstract:

In civil engineering, there is the problem of how to industrially produce tensile membrane structures that are non-developable surfaces. Non-developable surfaces can only be developed with a certain error, and we want to minimize this error. To that end, the non-developable surfaces are cut into plates along geodesic curves. We propose a numerical algorithm for finding approximations of open geodesics on meshes and surfaces based on geodesic curvature flow. For practical reasons, it is important to automate the choice of the time step. We propose a method for automatically setting the time step based on the diagonal dominance criterion for the matrix of the linear system obtained by discretization of our partial differential equation model. Practical experiments show the reliability of this method. Because the model is approximated by a numerical method based on classical derivatives, it is necessary to overcome obstacles that occur for meshes with sharp corners. We solve this problem for a large family of meshes with sharp corners via special rotations, which can be seen as a partial unfolding of the mesh. In practical applications, it is required that the approximation of the geodesic has its vertices only on the edges of the mesh. This problem is solved by a specially designed point-tracking algorithm. We also partially solve the problem of finding geodesics on meshes with holes. We implemented the whole algorithm in Rhinoceros (commercial 3D computer graphics and computer-aided design software), using the C# language as a C# assembly library for Grasshopper, which is a plugin for Rhinoceros.

Keywords: geodesic, geodesic curvature flow, mesh, Rhinoceros software

Procedia PDF Downloads 133
3041 The GRIT Study: Getting Global Rare Disease Insights Through Technology Study

Authors: Aneal Khan, Elleine Allapitan, Desmond Koo, Katherine-Ann Piedalue, Shaneel Pathak, Utkarsh Subnis

Abstract:

Background: Disease management of metabolic, genetic disorders is long-term and can be cumbersome for patients and caregivers. Patient-Reported Outcome Measures (PROMs) have been a useful tool in capturing patient perspectives to help enhance treatment compliance and engagement with health care providers, reduce utilization of emergency services, and increase satisfaction with treatment choices. Currently, however, PROMs are collected during infrequent and decontextualized clinic visits, which makes translation of patient experiences over time challenging. The GRIT study aims to evaluate a digital health journal application called Zamplo that provides a personalized health diary to record self-reported health outcomes accurately and efficiently in patients with metabolic, genetic disorders. Methods: This is a randomized controlled trial (RCT) (1:1) that assesses the efficacy of Zamplo in increasing patient activation (primary outcome), improving healthcare satisfaction and confidence to manage medications (secondary outcomes), and reducing costs to the healthcare system (exploratory). Using standardized online surveys, assessments will be collected at baseline, 1 month, 3 months, 6 months, and 12 months. Outcomes will be compared between patients who were given access to the application and those with no access. Results: Seventy-seven patients were recruited as of November 30, 2021. Recruitment for the study commenced in November 2020 with a target of n=150 patients. The accrual rate was 50% of those eligible and invited for the study, with the majority of patients having Fabry disease (n=48) and the remainder having Pompe disease and mitochondrial disease. Real-time clinical responses, such as pain, are being measured and correlated with disease-modifying therapies, supportive treatments like pain medications, and lifestyle interventions. Engagement with the application, along with compliance metrics for surveys and journal entries, is being analyzed. An interim analysis of the engagement data, along with preliminary findings from this pilot RCT and qualitative patient feedback, will be presented. Conclusions: The digital self-care journal provides a unique approach to disease management, allowing patients direct access to their progress and enabling them to participate actively in their care. Findings from the study can help serve the virtual care needs of patients with metabolic, genetic disorders in North America and around the world.

Keywords: eHealth, mobile health, rare disease, patient outcomes, quality of life (QoL), pain, Fabry disease, Pompe disease

Procedia PDF Downloads 142
3040 An Alternative Approach for Assessing the Impact of Cutting Conditions on Surface Roughness Using Single Decision Tree

Authors: S. Ghorbani, N. I. Polushin

Abstract:

In this study, an approach to identifying the factors affecting surface roughness in a machining process is presented. The study is based on 81 surface roughness measurements over a wide range of cutting tools (conventional, cutting tool with holes, cutting tool with composite material), workpiece materials (AISI 1045 steel, AA2024 aluminum alloy, A48 class 30 gray cast iron), spindle speeds (630-1000 rpm), feed rates (0.05-0.075 mm/rev), depths of cut (0.05-0.15 mm) and tool overhangs (41-65 mm). A single decision tree (SDT) analysis was carried out to identify the factors for predicting a surface roughness model, and the CART algorithm was employed for building and evaluating the regression tree. The results show that a single decision tree performs better than traditional regression models, with higher forecast accuracy.
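
A hedged sketch of a CART-style regression tree for surface roughness, using scikit-learn's DecisionTreeRegressor (which implements an optimised CART algorithm); the sample data below are random stand-ins for the 81 experimental points, and the reported accuracy is not reproduced:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    # Columns: spindle speed (rpm), feed rate (mm/rev), depth of cut (mm), tool overhang (mm).
    X = rng.uniform([630, 0.05, 0.05, 41], [1000, 0.075, 0.15, 65], size=(81, 4))
    # Synthetic roughness Ra (um) standing in for the measured responses.
    y = 0.002 * X[:, 0] * X[:, 1] + 5.0 * X[:, 2] + 0.01 * X[:, 3] + rng.normal(scale=0.05, size=81)

    tree = DecisionTreeRegressor(max_depth=4, random_state=0)
    print("cross-validated R^2:", np.round(cross_val_score(tree, X, y, cv=5, scoring="r2"), 2))

    tree.fit(X, y)
    # Relative importance of each cutting condition in the fitted tree.
    print("feature importances:", np.round(tree.feature_importances_, 2))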

Keywords: cutting condition, surface roughness, decision tree, CART algorithm

Procedia PDF Downloads 358