Search results for: algorithm techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9584

7514 Self-Assembly of Monodisperse Oleic Acid-Capped Superparamagnetic Iron Oxide Nanoparticles

Authors: Huseyin Kavas

Abstract:

Oleic acid (OA)-capped superparamagnetic iron oxide nanoparticles (SPIONs) were synthesized by a thermal decomposition method. The composition of the nanoparticles was confirmed by X-ray powder diffraction, and the morphology of the particles was investigated by atomic force microscopy (AFM), scanning electron microscopy (SEM), and transmission electron microscopy (TEM). The crystallite and particle size distributions of the OA-capped SPIONs were investigated, with mean sizes of 6.99 nm and 8.9 nm, respectively. The SPIONs were found to be superparamagnetic, with a saturation magnetization of 64 emu/g. Thin films of self-assembled SPIONs were fabricated by spin coating and dip coating. SQUID-VSM magnetometry and FMR measurements were performed to evaluate the magnetic properties of the thin films, in particular the existence of magnetic anisotropy. Thin films with magnetic anisotropy were obtained from self-assembled monolayers of SPIONs.

Keywords: magnetic materials, nanostructures, self-assembly, FMR

Procedia PDF Downloads 90
7513 Preservation of Sensitive Biological Products: An Insight into Conventional and Upcoming Drying Techniques

Authors: Jannika Dombrowski, Sabine Ambros, Ulrich Kulozik

Abstract:

Several drying techniques are used to preserve sensitive substances such as probiotic lactic acid bacteria. With the aim of better understanding the differences between these processes, this work gives new insights into the structural variations resulting from different preservation methods and their impact on product quality and storage stability. Industrially established methods (freeze drying, spray drying) were compared to the upcoming vacuum, microwave-freeze, and microwave-vacuum drying. For freeze-dried and microwave-freeze-dried samples, survival and activity remained at 100%, whereas vacuum- and microwave-vacuum-dried cultures achieved 30-40% survival. Spray drying yielded the lowest viability. These results are directly related to the temperature and oxygen content during drying. Interestingly, the most storage-stable products resulted from vacuum and microwave-vacuum drying due to denser product structures, as determined by helium pycnometry and SEM images. Further, lower water adsorption velocities were responsible for lower inactivation rates. In conclusion, the resulting product structures as well as survival rates and storage stability depend mainly on the type of water removal rather than on the energy input. Microwave energy compared to conductive heating did not lead to significant differences regarding the examined factors. These correlations were confirmed for the three investigated microbial strains. The presentation will be completed by an overview of the energy efficiency of the presented methods.

Keywords: drying techniques, energy efficiency, lactic acid bacteria, probiotics, survival rates, structure characterization

Procedia PDF Downloads 225
7512 Intelligent Control of Doubly Fed Induction Generator Wind Turbine for Smart Grid

Authors: Amal A. Hassan, Faten H. Fahmy, Abd El-Shafy A. Nafeh, Hosam K. M. Youssef

Abstract:

Due to the growing penetration of wind energy into the power grid, it is very important to study its interactions with the power system and to provide a good control technique in order to deliver high-quality power. In this paper, an intelligent control methodology is proposed for optimizing the controller parameters of a doubly fed induction generator (DFIG) based wind turbine generation system (WTGS). The genetic algorithm (GA) and particle swarm optimization (PSO) are employed and compared for the adaptive tuning of the parameters of the multiple proportional-integral (PI) controllers of the back-to-back converters of the DFIG-based WTGS. For this purpose, the dynamic model of the WTGS with DFIG and its associated controllers is presented. Furthermore, the simulation of the system is performed using MATLAB/SIMULINK and the SimPowerSystems toolbox to illustrate the performance of the optimized controllers. Finally, this work is validated on a 33-bus radial test system to show the interaction between wind distributed generation (DG) systems and the distribution network.
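
As an illustration of the adaptive tuning loop described above, here is a minimal, hedged sketch of PSO searching for PI gains; the first-order plant, ITAE cost, gain bounds and swarm settings are stand-in assumptions, not the paper's DFIG/back-to-back converter models or MATLAB/SIMULINK setup.

```python
import numpy as np

def itae_cost(gains):
    """Integral of time-weighted absolute error for a PI loop on a toy
    first-order plant (a stand-in for the DFIG converter control loops)."""
    kp, ki = gains
    dt, t_end = 1e-3, 2.0
    y, integ, cost = 0.0, 0.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        e = 1.0 - y                      # unit-step reference tracking error
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (-y + u)               # plant: dy/dt = -y + u
        cost += t * abs(e) * dt
    return cost

def pso(cost, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = lo + np.random.rand(n_particles, len(bounds)) * (hi - lo)
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, *x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best_gains, best_cost = pso(itae_cost, bounds=[(0.0, 10.0), (0.0, 50.0)])
print("Kp, Ki =", best_gains, "ITAE =", best_cost)
```

In the paper's setting, the cost would instead be evaluated by running the SIMULINK model of the DFIG-based WTGS for each candidate gain set, with the same loop (or a GA) tuning all converter controllers together.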

Keywords: DFIG wind turbine, intelligent control, distributed generation, particle swarm optimization, genetic algorithm

Procedia PDF Downloads 253
7511 A Hybrid Feature Selection Algorithm with Neural Network for Software Fault Prediction

Authors: Khalaf Khatatneh, Nabeel Al-Milli, Amjad Hudaib, Monther Ali Tarawneh

Abstract:

Software fault prediction identifies potential faults in software modules during the development process. In this paper, we present a novel approach for software fault prediction by combining a feedforward neural network with particle swarm optimization (PSO). The PSO algorithm is employed as a feature selection technique to identify the most relevant metrics as inputs to the neural network, which enhances the quality of feature selection and subsequently improves the performance of the neural network model. Through comprehensive experiments on software fault prediction datasets, the proposed hybrid approach achieves better results, outperforming traditional classification methods. The integration of PSO-based feature selection with the neural network enables the identification of critical metrics that provide more accurate fault prediction. The results show the effectiveness of the proposed approach and its potential for reducing development costs and effort by detecting faults early in the software development lifecycle. Further research and validation on diverse datasets will help solidify the practical applicability of the new approach in real-world software engineering scenarios.
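
One common way to wire PSO-based feature selection around a feedforward network is a binary PSO whose fitness is the cross-validated accuracy of an MLP trained on the selected columns. The sketch below assumes a synthetic stand-in dataset, a small network and a sigmoid transfer function; the paper's datasets and exact encoding are not given here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a software-metrics fault dataset.
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy of an MLP trained on the selected metrics."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def binary_pso(n_features, n_particles=10, iters=15, w=0.7, c1=1.5, c2=1.5):
    pos = (np.random.rand(n_particles, n_features) > 0.5).astype(float)
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, n_features)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        prob = 1.0 / (1.0 + np.exp(-vel))        # sigmoid transfer function
        pos = (np.random.rand(*pos.shape) < prob).astype(float)
        f = np.array([fitness(p) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest, pbest_f.max()

mask, acc = binary_pso(X.shape[1])
print("selected metrics:", np.flatnonzero(mask), "CV accuracy:", round(acc, 3))
```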

Keywords: feature selection, neural network, particle swarm optimization, software fault prediction

Procedia PDF Downloads 72
7510 Adversary Emulation: Implementation of Automated Countermeasure in CALDERA Framework

Authors: Yinan Cao, Francine Herrmann

Abstract:

Adversary emulation is a very effective, concrete way to evaluate the defense of an information system or network. It involves building an emulator which, depending on the vulnerabilities of a target system, detects and executes a set of identified attacks. However, emulating an adversary is very costly in terms of time and resources. Verifying the information of each technique and building up the countermeasures in the middle of a test also has to be accomplished manually. In this article, a synthesis of previous MITRE research on the creation of the ATT&CK matrix will serve as the knowledge base of known techniques, and the well-designed adversary emulation software CALDERA, based on the ATT&CK matrix, will be used as our platform. Inspired and guided by the previous study, a plugin for CALDERA called Tinker will be implemented, which aims to help the tester obtain more information, as well as the mitigations, for each technique used in the previous operation. Furthermore, optional countermeasures for some techniques are also implemented and preset in Tinker in order to facilitate and speed up the process of improving the defense of the tested system.

Keywords: automation, adversary emulation, CALDERA, countermeasures, MITRE ATT&CK

Procedia PDF Downloads 185
7509 Extraction of Natural Colorant from the Flowers of Flame of Forest Using Ultrasound

Authors: Sunny Arora, Meghal A. Desai

Abstract:

With the impetus towards green consumerism and the implementation of sustainable techniques, the consumption of natural products and the use of environmentally friendly techniques have gained accelerated acceptance. Butein, a natural colorant, has many medicinal properties apart from its use in the dyeing industry. Extraction of butein from the flowers of flame of forest was carried out using an ultrasonication bath. Solid loading (2-6 g), extraction time (30-50 min), volume of solvent (30-50 mL) and type of solvent (methanol, ethanol and water) were studied to maximize the yield of butein using the Taguchi method. The highest yield of butein, 4.67% (w/w), was obtained using 4 g of plant material, 40 min of extraction time and 30 mL of methanol as the solvent. The present method provided a significant reduction in extraction time compared to the conventional extraction method. Hence, the outcome of the present investigation could further be utilized to develop the method at a larger scale.
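
For readers unfamiliar with the Taguchi analysis behind this screening, the sketch below runs a larger-is-better signal-to-noise analysis on a standard L9 orthogonal array covering the four reported factors; the yield column is invented for illustration and is not the paper's data.

```python
import numpy as np

# Factor levels as reported in the abstract; yields below are made up.
levels = {
    "loading_g": [2, 4, 6],
    "time_min":  [30, 40, 50],
    "volume_mL": [30, 40, 50],
    "solvent":   ["methanol", "ethanol", "water"],
}
L9 = np.array([  # standard L9(3^4) array, level indices 0..2 per factor
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
yields = np.array([2.1, 2.8, 1.9, 4.1, 3.3, 2.5, 3.0, 2.2, 1.7])  # % w/w, illustrative

# Larger-is-better signal-to-noise ratio for each single-replicate run.
sn = -10 * np.log10(1.0 / yields**2)

for j, name in enumerate(levels):
    means = [sn[L9[:, j] == lvl].mean() for lvl in range(3)]
    best = int(np.argmax(means))
    print(f"{name:10s} best level: {levels[name][best]} "
          f"(mean S/N per level: {[round(m, 2) for m in means]})")
```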

Keywords: butein, flowers of Flame of the Forest, Taguchi method, ultrasonic bath

Procedia PDF Downloads 460
7508 Simulation, Optimization, and Analysis Approach of Microgrid Systems

Authors: Saqib Ali

Abstract:

Energy sources are classified into two types depending upon whether they can be replenished. Sources which cannot be restored to their original form once they are consumed are considered non-renewable energy resources, e.g., coal and fuel. Those energy resources which are replenished to their original condition even after being consumed are known as renewable energy resources (RES), e.g., wind, solar and hydel. Renewable energy is a cost-effective way to generate clean and green electrical energy. Nowadays, the majority of countries are paying heed to energy generation from RES. Pakistan mostly relies on conventional energy resources, which are largely non-renewable in nature; coal and fuel are among the major resources, and their prices are increasing with time. On the other hand, RES have great potential in the country, and with the deployment of RES, greater reliability and a more effective power system can be obtained. In this thesis, a similar concept is used, and a hybrid power system is proposed which is composed of a mix of renewable and non-renewable sources. The source side is composed of solar, wind and fuel cells, which are used in an optimal manner to serve the load. The goal is to provide an economical, reliable, uninterruptible power supply. This is achieved by optimal controllers (PI, PD, PID, FOPID). Optimization techniques are applied to the controllers to achieve the desired results, and advanced algorithms (particle swarm optimization, flower pollination algorithm) are used to extract the desired output from the controllers. A detailed comparison in the form of tables and results is provided, which highlights the efficiency of the proposed system.

Keywords: distributed generation, demand-side management, hybrid power system, micro grid, renewable energy resources, supply-side management

Procedia PDF Downloads 85
7507 A Study on the Assessment of Prosthetic Infection after Total Knee Replacement Surgery

Authors: Chun-Lang Chang, Chun-Kai Liu

Abstract:

In this study, patients who had undergone total knee replacement surgery, drawn from the 2010 National Health Insurance database, were adopted as the study participants. The important factors were screened and selected through literature collection and interviews with physicians. Through the Cross Entropy Method (CE), Genetic Algorithm Logistic Regression (GALR), and Particle Swarm Optimization (PSO), the weights of the factors were obtained. In addition, the weights from the respective algorithms, coupled with Excel VBA, were used to construct the Case Based Reasoning (CBR) system. The results of statistical tests show that GALR and PSO produced no significant differences, and the accuracies of both models were above 97%. Moreover, the areas under the ROC curve for these two models also exceeded 0.87. This study shall serve as a reference for medical staff to assist in the clinical assessment of infections in order to effectively enhance medical service quality and efficiency, avoid unnecessary medical waste, and substantially contribute to resource allocation in medical institutions.

Keywords: Case Based Reasoning, Cross Entropy Method, Genetic Algorithm Logistic Regression, Particle Swarm Optimization, Total Knee Replacement Surgery

Procedia PDF Downloads 310
7506 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to yielding overly conservative results, numerical methods that require extensive computational effort, and those requiring copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the utilization of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value approach, but requires the use of a critical volume in which the crack exists. In order to understand the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting functions were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the "exponential" weighting function was superior to the "absolute" weighting function. An error band of +/- 10% was met for cases ranging from a steep stress gradient in a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has been shown to be a worthwhile consideration.
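
The core idea, evaluating K as an integral of the local stress field against a weighting function, can be sketched as follows; the decaying stress profile and the "exponential" weighting used here are illustrative assumptions only, not the paper's actual weighting functions or notch stress fields.

```python
import numpy as np

def stress_profile(x, s_max=300e6, grad=2000.0):
    """Assumed stress ahead of the notch root [Pa], decaying with distance."""
    return s_max * np.exp(-grad * x)

def exponential_weight(x, a):
    """Assumed exponential weighting, normalised against the crack length a."""
    return np.sqrt(2.0 / (np.pi * a)) * np.exp(-x / a)

def sif_estimate(a, n=2000):
    """K ~ integral of sigma(x) * h(x, a) over the crack length."""
    x = np.linspace(0.0, a, n)
    return np.trapz(stress_profile(x) * exponential_weight(x, a), x)

for a in (0.5e-3, 1e-3, 2e-3):          # crack lengths in metres
    print(f"a = {a*1e3:.1f} mm  ->  K ~ {sif_estimate(a)/1e6:.2f} MPa*sqrt(m)")
```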

Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient

Procedia PDF Downloads 123
7505 Multi-Stage Classification for Lung Lesion Detection on CT Scan Images Applying Medical Image Processing Technique

Authors: Behnaz Sohani, Sahand Shahalinezhad, Amir Rahmani, Aliyu Aliyu

Abstract:

Recently, medical imaging, and specifically medical image processing, has become one of the most dynamically developing areas of medical science. It has led to the emergence of new approaches to the prevention, diagnosis, and treatment of various diseases. In the diagnosis of lung cancer, medical professionals rely on computed tomography (CT) scans, in which failure to correctly identify masses can lead to incorrect diagnosis or sampling of lung tissue. Identification and demarcation of masses, in terms of detecting cancer within lung tissue, are critical challenges in diagnosis. In this work, a segmentation system based on image processing techniques has been applied for detection purposes. In particular, the use and validation of a novel lung cancer detection algorithm have been presented through simulation, employing CT images and multilevel thresholding. The proposed technique consists of segmentation, feature extraction, and feature selection and classification. In more detail, the features with useful information are selected after feature extraction. Eventually, the output image of lung cancer is obtained with 96.3% accuracy and 87.25%. The purpose of feature extraction in the proposed approach is to transform the raw data into a more usable form for subsequent statistical processing. Future steps will involve employing the current feature extraction method to achieve more accurate resulting images, including further details available to machine vision systems to recognise objects in lung CT scan images.
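
The multilevel-thresholding step at the heart of the segmentation can be sketched with scikit-image's multi-Otsu implementation; a generic sample image stands in for a lung CT slice, and the subsequent feature extraction, selection and classification stages are not reproduced.

```python
import numpy as np
from skimage import data
from skimage.filters import threshold_multiotsu

image = data.camera()                                  # stand-in for a CT slice

thresholds = threshold_multiotsu(image, classes=3)     # two Otsu thresholds
regions = np.digitize(image, bins=thresholds)          # label pixels 0, 1, 2

print("thresholds:", thresholds)
for label in range(3):
    frac = (regions == label).mean()
    print(f"class {label}: {frac:.1%} of pixels")
```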

Keywords: lung cancer detection, image segmentation, lung computed tomography (CT) images, medical image processing

Procedia PDF Downloads 77
7504 Mindfulness and Employability: A Course on the Control of Stress during the Search for Work

Authors: O. Lasaga

Abstract:

Defining professional objectives and the search for work are some of the greatest stress factors for final-year university students and recent graduates. To correctly manage the stress brought about by the uncertainty, confusion and frustration this process often generates, a course on stress control based on mindfulness has been designed and taught. This course provides tools based on relaxation, mindfulness and meditation that enable students to address personal and professional challenges in the transition to the job market, eliminating or easing the anxiety involved. The course is extremely practical and experiential, combining theory classes with practical classes of relaxation, meditation and mindfulness, group dynamics, reflection, application protocols and session integration. The evaluation of the courses highlighted, on the one hand, the high degree of satisfaction and, on the other, their usefulness in helping students become aware of stressful situations, understand how these affect them, and learn new coping techniques that enable them to reach their goals more easily and with greater satisfaction and well-being.

Keywords: employability, meditation, mindfulness, relaxation techniques, stress

Procedia PDF Downloads 370
7503 Jitter Based Reconstruction of Transmission Line Pulse Using On-Chip Sensor

Authors: Bhuvnesh Narayanan, Bernhard Weiss, Tvrtko Mandic, Adrijan Baric

Abstract:

This paper discusses a method to reconstruct internal high-frequency signals in an IC through subsampling techniques using an on-chip sensor. Though there are existing methods to internally probe and reconstruct high-frequency signals through subsampling techniques, these methods have been applicable mainly to synchronized systems. This paper demonstrates a method for making such non-intrusive on-chip reconstructions possible also in non-synchronized systems. A transmission line pulse (TLP) is used for the experimental validation of the concept. The on-chip sensor measures the voltage at an internal node. The jitter in the input pulse causes a varying pulse delay with respect to the on-chip sampling command. By measuring this pulse delay and correlating it with the measured on-chip voltage, time-domain waveforms can be reconstructed, and the influence of the pulse on the internal nodes can be better understood.
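
A minimal numerical sketch of the reconstruction idea: each trigger of the repetitive pulse yields one on-chip sample at a fixed command time, the measured jitter-induced delay assigns that sample an equivalent time, and sorting by delay rebuilds the waveform. The pulse shape, jitter spread and noise level are assumed values, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pulse(t):
    """Assumed internal-node step response (stand-in for the real node voltage)."""
    return np.where(t > 0, 1.0 - np.exp(-t / 5e-9), 0.0)

n_triggers = 2000
t_sample = 10e-9                                # fixed on-chip sampling instant
jitter = rng.uniform(-8e-9, 8e-9, n_triggers)   # measured pulse delay per trigger
samples = pulse(t_sample - jitter) + rng.normal(0, 0.01, n_triggers)

t_equiv = t_sample - jitter                     # equivalent time of each sample
order = np.argsort(t_equiv)                     # correlate delay with voltage
t_axis, waveform = t_equiv[order], samples[order]

# crude check of the reconstruction against the assumed pulse shape
err = np.max(np.abs(waveform - pulse(t_axis)))
print(f"reconstructed {n_triggers} points, max deviation ~ {err:.3f}")
```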

Keywords: on-chip sensor, jitter, transmission line pulse, subsampling

Procedia PDF Downloads 128
7502 An Overview of Corroded Pipe Repair Techniques Using Composite Materials

Authors: Lim Kar Sing, Siti Nur Afifah Azraai, Norhazilan Md Noor, Nordin Yahaya

Abstract:

Polymeric composites are being increasingly used as repair materials for critical infrastructure such as buildings, bridges, pressure vessels, piping and pipelines. The technique used to repair damaged pipes is one of the major concerns of pipeline owners. Considerable research has been carried out on the repair of corroded pipes using composite materials. This article attempts a short review of the subject matter to provide insight into the various techniques used in repairing corroded pipes, focusing on a wide range of composite repair systems. These systems include pre-cured layered, flexible wet lay-up, pre-impregnated, split composite sleeve and flexible tape systems. Both the advantages and limitations of these repair systems were highlighted. Critical technical aspects have been discussed through the current standards and practices. Research gaps and future study scopes towards achieving a more effective design philosophy are also presented.

Keywords: composite materials, pipeline, repair technique, polymers

Procedia PDF Downloads 500
7501 An Agile, Intelligent and Scalable Framework for Global Software Development

Authors: Raja Asad Zaheer, Aisha Tanveer, Hafza Mehreen Fatima

Abstract:

Global Software Development (GSD) is becoming a common norm in the software industry, despite the fact that the global distribution of teams presents special issues for their effective communication and coordination. Trends are now changing, and project management for distributed teams is no longer in limbo. GSD can be effectively established using agile methods, and project managers can use different agile techniques/tools to solve the problems associated with distributed teams. Agile methodologies like Scrum and XP have been successfully used with distributed teams. We have employed an exploratory research method to analyze recent studies related to the challenges of GSD and their proposed solutions. In our study, we gained deep insight into six commonly faced challenges: communication and coordination, temporal differences, cultural differences, knowledge sharing/group awareness, speed, and communication tools. We have established that none of these challenges can be neglected for distributed teams of any kind; they are interlinked and, as an aggregated whole, can cause the failure of projects. In this paper, we have focused on creating a scalable framework for detecting and overcoming these commonly faced challenges. In the proposed solution, our objective is to suggest agile techniques/tools relevant to a particular problem faced by organizations related to the management of distributed teams. We focused mainly on Scrum and XP techniques/tools because they are widely accepted and used in the industry. Our solution identifies the problem and suggests an appropriate technique/tool to help solve it based on a globally shared knowledgebase. A cause-and-effect relationship can be established using a fishbone diagram based on the inputs provided for issues commonly faced by organizations; based on the identified cause, our framework suggests a suitable tool. Hence, the proposed scalable, extensible, self-learning, intelligent framework will help implement and assess GSD so as to get the maximum out of it. The globally shared knowledgebase will help new organizations to easily adopt best practices set forth by the practicing organizations.

Keywords: agile project management, agile tools/techniques, distributed teams, global software development

Procedia PDF Downloads 284
7500 Kriging-Based Global Optimization Method for Bluff Body Drag Reduction

Authors: Bingxi Huang, Yiqing Li, Marek Morzynski, Bernd R. Noack

Abstract:

We propose a Kriging-based global optimization method for active flow control with multiple actuation parameters. This method is designed to converge quickly and avoid getting trapped in local minima. We follow the model-free explorative gradient method (EGM) to alternate between explorative and exploitive steps. This facilitates a convergence similar to a gradient-based method and the parallel exploration of potentially better minima. In contrast to EGM, both kinds of steps are performed with a Kriging surrogate model built from the available data. The explorative step maximizes the expected improvement, i.e., favors regions of large uncertainty. The exploitive step identifies the best location of the cost function from the Kriging surrogate model for a subsequent weight-biased linear-gradient descent search method. To verify the effectiveness and robustness of the improved Kriging-based optimization method, we have examined several comparative test problems of varying dimensions with limited evaluation budgets. The results show that the proposed algorithm significantly outperforms model-free optimization algorithms like the genetic algorithm and the differential evolution algorithm, with quicker convergence for a given budget. We have also performed direct numerical simulations of the fluidic pinball (N. Deng et al. 2020 J. Fluid Mech.), i.e., three circular cylinders in an equilateral-triangular arrangement immersed in an incoming flow at Re=100. The optimal cylinder rotations lead to 44.0% net drag power saving with 85.8% drag reduction and 41.8% actuation power. The optimal actuation for this configuration achieves a boat-tailing mechanism by employing Coanda forcing, and wake stabilization by delaying separation and minimizing the wake region.
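
A minimal sketch of the explore/exploit alternation on a toy 1-D cost (standing in for the drag-power objective, which in the paper requires a CFD run per evaluation): the explorative step maximizes expected improvement from the Kriging surrogate, and the exploitive step samples the surrogate's predicted minimum. The kernel choice, budget and toy cost are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def cost(x):
    return np.sin(3 * x) + 0.3 * (x - 1.5) ** 2        # toy multimodal cost

rng = np.random.default_rng(1)
X = rng.uniform(0, 4, (4, 1))                          # initial design
y = cost(X).ravel()
grid = np.linspace(0, 4, 400).reshape(-1, 1)

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True)
    gp.fit(X, y)
    mu, sigma = gp.predict(grid, return_std=True)
    best = y.min()
    # explorative step: maximize expected improvement (favors large uncertainty)
    z = (best - mu) / np.maximum(sigma, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_explore = grid[np.argmax(ei)]
    # exploitive step: sample at the surrogate's predicted minimum
    x_exploit = grid[np.argmin(mu)]
    for x_new in (x_explore, x_exploit):
        X = np.vstack([X, x_new.reshape(1, 1)])
        y = np.append(y, cost(x_new).item())

print("best cost found:", round(y.min(), 4), "at x =", round(X[np.argmin(y)].item(), 3))
```

In the paper, the exploitive step uses a weight-biased linear-gradient descent on the surrogate rather than a grid minimum, and the parameters are multiple cylinder rotation rates rather than a scalar.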

Keywords: direct numerical simulations, flow control, kriging, stochastic optimization, wake stabilization

Procedia PDF Downloads 93
7499 Machine Learning Prediction of Compressive Damage and Energy Absorption in Carbon Fiber-Reinforced Polymer Tubular Structures

Authors: Milad Abbasi

Abstract:

Carbon fiber-reinforced polymer (CFRP) composite structures are increasingly being utilized in the automotive industry due to their light weight and specific energy absorption capabilities. Although it is impossible to predict composite mechanical properties directly using theoretical methods, various studies have been conducted in the literature on the accurate simulation of CFRP structures' energy-absorbing behavior. In this research, axial compression experiments were carried out on hand lay-up unidirectional CFRP composite tubes. The fabrication method allowed the authors to extract the material properties of the CFRPs using the ASTM D3039, D3410, and D3518 standards. A neural network machine learning algorithm was then utilized to build a robust prediction model to forecast the axial compressive properties of CFRP tubes while reducing high-cost experimental efforts. The predicted results were compared with the experimental outcomes in terms of load-carrying capacity and energy absorption capability. The results showed high accuracy and precision in the prediction of the energy-absorption capacity of the CFRP tubes. This research also demonstrates the effectiveness and challenges of machine learning techniques in the robust simulation of composites' energy-absorption behavior. Interestingly, the proposed method considerably reduced the numerical and experimental effort required in the simulation and calibration of CFRP composite tubes subjected to compressive loading.

Keywords: CFRP composite tubes, energy absorption, crushing behavior, machine learning, neural network

Procedia PDF Downloads 132
7498 Suppressing Vibration in a Three-axis Flexible Satellite: An Approach with Composite Control

Authors: Jalal Eddine Benmansour, Khouane Boulanoir, Nacera Bekhadda, Elhassen Benfriha

Abstract:

This paper introduces a novel composite control approach that addresses the challenge of stabilizing the three-axis attitude of a flexible satellite in the presence of vibrations caused by flexible appendages. The key contribution of this research lies in the development of a disturbance observer, which effectively observes and estimates the unwanted torques induced by the vibrations. By utilizing the estimated disturbance, the proposed approach enables efficient compensation for the detrimental effects of vibrations on the satellite system. To govern the attitude angles of the spacecraft, a proportional-derivative (PD) controller is specifically designed and proposed. The PD controller ensures precise control over all attitude angles, facilitating stable and accurate spacecraft maneuvering. In order to demonstrate the global stability of the system, the Lyapunov method, a well-established technique in control theory, is employed. Through rigorous analysis, the Lyapunov method verifies the convergence of system dynamics, providing strong evidence of system stability. To evaluate the performance and efficacy of the proposed control algorithm, extensive simulations are conducted. The simulation results validate the effectiveness of the combined approach, showcasing significant improvements in the stabilization and control of the satellite's attitude, even in the presence of disruptive vibrations from flexible appendages. The novel composite control approach presented in this paper contributes to the advancement of satellite attitude control techniques, offering a promising solution for achieving enhanced stability and precision in challenging operational environments.
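
A single-axis toy sketch of the composite idea follows: a PD attitude law plus a simple disturbance observer that estimates the appendage-induced torque and feeds it forward. The inertia, gains, disturbance profile, and the direct use of the angular acceleration in the observer are illustrative simplifications, not the paper's satellite model or observer design.

```python
import numpy as np

J = 20.0                       # single-axis inertia [kg*m^2] (assumed)
kp, kd = 8.0, 40.0             # PD gains (assumed)
L_obs = 5.0                    # disturbance-observer gain (assumed)
dt, t_end = 0.01, 60.0

theta, omega = np.deg2rad(10.0), 0.0    # initial attitude error and rate
d_hat = 0.0                             # estimated disturbance torque

for k in range(int(t_end / dt)):
    t = k * dt
    d = 0.5 * np.sin(2.0 * t)           # assumed vibration-induced torque [N*m]
    u = -kp * theta - kd * omega - d_hat         # PD law + disturbance feedforward
    omega_dot = (u + d) / J
    # observer: drive the estimate toward the torque residual J*omega_dot - u
    # (in practice omega_dot would be estimated from rate measurements)
    d_hat += L_obs * (J * omega_dot - u - d_hat) * dt
    omega += omega_dot * dt
    theta += omega * dt

print(f"final attitude error: {np.rad2deg(theta):.4f} deg")
```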

Keywords: attitude control, flexible satellite, vibration control, disturbance observer

Procedia PDF Downloads 68
7497 Learning Object Interface Adapted to the Learner's Learning Style

Authors: Zenaide Carvalho da Silva, Leandro Rodrigues Ferreira, Andrey Ricardo Pimentel

Abstract:

Learning styles (LS) refer to the ways and forms in which the student prefers to learn during the teaching and learning process. Each student has their own way of receiving and processing information throughout the learning process. Therefore, knowing their LS is important to better understand their individual learning preferences, and also to understand why the use of some teaching methods and techniques gives better results with some students, while with others it does not. We believe that knowledge of these styles enables propositions to be made for teaching, thus reorganizing teaching methods and techniques in order to allow learning that is adapted to the individual needs of the student. Adapting learning would be possible through the creation of online educational resources adapted to the style of the student. In this context, this article presents the structure of a learning object interface adaptation based on the LS. The structure created should enable the creation of a learning object adapted to the student's LS and contribute to increasing the student's motivation in the use of a learning object as an educational resource.

Keywords: adaptation, interface, learning object, learning style

Procedia PDF Downloads 391
7496 Precise Identification of Clustered Regularly Interspaced Short Palindromic Repeats-Induced Mutations via Hidden Markov Model-Based Sequence Alignment

Authors: Jingyuan Hu, Zhandong Liu

Abstract:

CRISPR genome editing technology has transformed molecular biology by accurately targeting and altering an organism's DNA. Despite the state-of-the-art precision of CRISPR genome editing, imprecise mutation outcomes and off-target effects present considerable risks, potentially leading to unintended genetic changes. Targeted deep sequencing, combined with bioinformatics sequence alignment, can detect such unwanted mutations. Nevertheless, the classical Needleman-Wunsch (NW) algorithm may produce false alignment outcomes, resulting in inaccurate mutation identification. The key to precisely identifying CRISPR-induced mutations lies in determining optimal parameters for the sequence alignment algorithm. Hidden Markov models (HMMs) are ideally suited for this task, offering flexibility across CRISPR systems by leveraging forward-backward algorithms for parameter estimation. In this study, we introduce CRISPR-HMM, a statistical software package to precisely call CRISPR-induced mutations. We demonstrate that the software significantly improves precision in identifying CRISPR-induced mutations compared to NW-based alignment, thereby enhancing the overall understanding of the CRISPR gene-editing process.

Keywords: CRISPR, HMM, sequence alignment, gene editing

Procedia PDF Downloads 33
7495 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms

Authors: Abdul Rehman, Bo Liu

Abstract:

Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses. These losses contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques for reducing the secondary flow loss. In this paper, the construction and optimization of non-axisymmetric profiles for the stator endwalls are presented to improve the efficiency of a high-pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations were conducted using steady RANS with Spalart-Allmaras as the turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves, with each cut, having multiple control points, created along virtual streamlines in the blade channel. For the design of experiments, each sample was arbitrarily generated based on values automatically chosen for the control points defined during parameterization. The optimization was achieved using two algorithms, i.e., a stochastic algorithm and a gradient-based algorithm. For the stochastic algorithm, a genetic algorithm based on an artificial neural network was used as the optimization method in order to reach the global optimum; the evaluation of the successive design iterations was performed using the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant, and the performance was quantified using a multi-objective function. Apart from these two classes of optimization methods, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and a fourth case in which the shroud endwall was optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization resulted in an increase in efficiency, with the total pressure loss and entropy reduced. The combination of hub and shroud did not show the overwhelming results achieved for the individual cases of the hub and the shroud, which may be caused by the fact that there were too many control variables. The fourth optimization case showed the best result, because the optimized hub was used as the initial geometry to optimize the shroud; the efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.

Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization

Procedia PDF Downloads 216
7494 Ambiguity Resolution for Ground-based Pulse Doppler Radars Using Multiple Medium Pulse Repetition Frequency

Authors: Khue Nguyen Dinh, Loi Nguyen Van, Thanh Nguyen Nhu

Abstract:

In this paper, we propose an adaptive method to resolve ambiguities and a ghost target removal process to extract targets detected by a ground-based pulse-Doppler radar using medium pulse repetition frequency (PRF) waveforms. The ambiguity resolution method is an adaptive implementation of the coincidence algorithm, which is implemented on a two-dimensional (2D) range-velocity matrix to resolve range and velocity ambiguities simultaneously, with a proposed clustering filter to enhance the anti-error ability of the system. Here we consider the scenario of multiple target environments. The ghost target removal process, which is based on the power after Doppler processing, is proposed to mitigate ghosting detections to enhance the performance of ground-based radars using a short PRF schedule in multiple target environments. Simulation results on a ground-based pulsed Doppler radar model will be presented to show the effectiveness of the proposed approach.
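
The coincidence idea can be sketched in one dimension (range only): each medium-PRF waveform reports a range folded by its own unambiguous range, and the true range is the candidate on which the PRFs agree. The PRF values, tolerance and target range below are illustrative assumptions; the paper's 2D range-velocity implementation, clustering filter and ghost removal are not reproduced.

```python
import numpy as np

C = 3e8
prfs = [9e3, 11e3, 13e3]                        # assumed medium PRFs [Hz]
r_unamb = [C / (2 * prf) for prf in prfs]       # unambiguous range per PRF
r_max = 100e3                                   # maximum instrumented range [m]
tol = 60.0                                      # coincidence tolerance [m]

true_range = 73.4e3
measured = [true_range % ru for ru in r_unamb]  # folded (ambiguous) measurements

# unfold each measurement into all candidate ranges, then vote on a range grid
grid = np.arange(0.0, r_max, tol)
votes = np.zeros_like(grid)
for meas, ru in zip(measured, r_unamb):
    candidates = meas + ru * np.arange(int(r_max / ru) + 1)
    for c in candidates[candidates < r_max]:
        votes[np.abs(grid - c) <= tol] += 1

resolved = grid[np.argmax(votes)]
print(f"true range {true_range/1e3:.2f} km, resolved ~ {resolved/1e3:.2f} km")
```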

Keywords: ambiguity resolution, coincidence algorithm, medium PRF, ghosting removal

Procedia PDF Downloads 133
7493 Modification Encryption Time and Permutation in Advanced Encryption Standard Algorithm

Authors: Dalal N. Hammod, Ekhlas K. Gbashi

Abstract:

Today, cryptography is used in many applications to achieve high security in data transmission and in real-time communications. AES has long gained global acceptance and is used for securing sensitive data in various industries, but it suffers from slow processing and takes a long time to transfer data. This paper suggests a method to enhance the Advanced Encryption Standard (AES) algorithm based on time and permutation. The suggested method (MAES) is based on modifying the SubBytes and ShiftRows steps in the encryption part and the InvSubBytes and InvShiftRows steps in the decryption part. After implementing the proposal and testing the results, the modified AES achieved good results, accomplishing the communication with high performance criteria in terms of randomness, encryption time, storage space, and avalanche effect. The proposed method produces ciphertext with good randomness, since it passed the NIST statistical tests against attacks; MAES also reduced the encryption time by 10% compared with the original AES, so the modified AES is faster than the original AES. The proposed method also showed good results in memory utilization, where the value is 54.36 for MAES versus 66.23 for the original AES. Finally, the avalanche effect, used to quantify the diffusion property, is 52.08% for the modified AES and 51.82% for the original AES.
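
The avalanche-effect figure quoted above is typically measured by flipping one plaintext bit and counting the changed ciphertext bits, as in the sketch below for the standard AES (single block, ECB, via pycryptodome); the MAES modifications to SubBytes/ShiftRows are not public here and are not reproduced.

```python
import os
from Crypto.Cipher import AES   # pycryptodome

def avalanche(n_trials=1000):
    """Average percentage of ciphertext bits that flip when one plaintext bit flips."""
    changed, total = 0, 0
    for _ in range(n_trials):
        key = os.urandom(16)
        pt = bytearray(os.urandom(16))
        cipher = AES.new(bytes(key), AES.MODE_ECB)
        ct1 = cipher.encrypt(bytes(pt))
        pt[0] ^= 0x01                       # flip a single plaintext bit
        ct2 = cipher.encrypt(bytes(pt))
        diff = int.from_bytes(ct1, "big") ^ int.from_bytes(ct2, "big")
        changed += bin(diff).count("1")
        total += 128
    return 100.0 * changed / total

print(f"avalanche effect ~ {avalanche():.2f}%")   # ideally close to 50%
```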

Keywords: modified AES, randomness test, encryption time, avalanche effects

Procedia PDF Downloads 228
7492 Enhancing Throughput for Wireless Multihop Networks

Authors: K. Kalaiarasan, B. Pandeeswari, A. Arockia John Francis

Abstract:

Wireless multi-hop networks consist of one or more intermediate nodes along the path that receive and forward packets via wireless links. The backpressure algorithm provides throughput-optimal routing and scheduling decisions for multi-hop networks with dynamic traffic. Xpress, a cross-layer backpressure architecture, was designed to reach the capacity of wireless multi-hop networks; it provides good coordination between the network layers by turning a mesh network into a wireless switch. Transmission over the network is scheduled using a throughput-optimal backpressure algorithm. However, this architecture operates well below capacity due to out-of-order packet delivery and variable packet sizes. In this paper, we present Xpress-T, a throughput-optimal backpressure architecture with TCP support designed to reach the maximum throughput of wireless multi-hop networks. Xpress-T operates at the IP layer, and therefore any transport protocol, including TCP, can run on top of it. The proposed design not only avoids bottlenecks but also handles out-of-order packet delivery and variable packet sizes and, when needed, optimally load-balances traffic, improving fairness among competing flows. Our simulation results show that Xpress-T gives 65% more throughput than Xpress.
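
The backpressure rule itself is compact: per slot, weight each link by its queue differential and activate the feasible link set with the largest total weight. The toy sketch below runs it on a two-link line network with a single destination; the topology, capacities, arrivals and one-link-per-slot interference model are assumptions, not the Xpress/Xpress-T implementation.

```python
# toy line network A -> B -> C carrying one flow destined to C
links = [("A", "B"), ("B", "C")]
capacity = {("A", "B"): 2, ("B", "C"): 3}      # packets per slot (assumed)
queue = {"A": 0, "B": 0, "C": 0}               # per-node backlog destined to C

for slot in range(50):
    queue["A"] += 1                            # one packet arrives at the source
    # backpressure weight = backlog differential across each link
    weight = {l: max(queue[l[0]] - queue[l[1]], 0) for l in links}
    # interference model: only one link may transmit per slot -> pick max weight*rate
    best = max(links, key=lambda l: weight[l] * capacity[l])
    if weight[best] > 0:
        moved = min(capacity[best], queue[best[0]])
        queue[best[0]] -= moved
        queue[best[1]] += moved
    queue["C"] = 0                             # packets at the destination leave

print("final backlogs:", queue)
```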

Keywords: backpressure scheduling and routing, TCP, congestion control, wireless multihop network

Procedia PDF Downloads 507
7491 Automatic Adjustment of Thresholds via Closed-Loop Feedback Mechanism for Solder Paste Inspection

Authors: Chia-Chen Wei, Pack Hsieh, Jeffrey Chen

Abstract:

Surface Mount Technology (SMT) is widely used in electronic assembly, in which electronic components are mounted on the surface of the printed circuit board (PCB). Most of the defects in the SMT process are mainly related to the quality of solder paste printing. These defects lead to considerable manufacturing costs in the electronics assembly industry. Therefore, the solder paste inspection (SPI) machine for controlling and monitoring the amount of solder paste printed has become an important part of the production process. So far, SPI thresholds have been set based on statistical analysis and experts' experience. Because the production data are not normally distributed and there are various variations in the production processes, defects related to solder paste printing still occur. In order to solve this problem, this paper proposes an online machine learning algorithm, called the automatic threshold adjustment (ATA) algorithm, and a closed-loop architecture in the SMT process to determine the best threshold settings. Simulation experiments show that our proposed threshold settings improve the accuracy from 99.85% to 100%.

Keywords: big data analytics, Industry 4.0, SPI threshold setting, surface mount technology

Procedia PDF Downloads 103
7490 Development of a Geomechanical Risk Assessment Model for Underground Openings

Authors: Ali Mortazavi

Abstract:

The main objective of this research project is to delve into the multitude of geomechanical risks associated with the various mining methods employed within the underground mining industry. Controlling the geotechnical design parameters and operational factors affecting the selection of suitable mining techniques for a given underground mining condition will be considered from a risk assessment point of view. Important geomechanical challenges will be investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated nature of in-situ rock masses, the complicated boundary conditions and the operational complexities associated with various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is always a threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major design component of all underground mines and basically dominates the safety of an underground mine. With regard to the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the in-situ rock mass conditions. The focus of this research is on developing a methodology which enables a geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., in-situ rock properties, design method, governing boundary conditions such as in-situ stress and groundwater, etc.).

Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering

Procedia PDF Downloads 128
7489 A Hybrid Data Mining Algorithm Based System for Intelligent Defence Mission Readiness and Maintenance Scheduling

Authors: Shivam Dwivedi, Sumit Prakash Gupta, Durga Toshniwal

Abstract:

It is a challenging task today to keep defence forces in the highest state of combat readiness under budgetary constraints. A huge amount of time and money is squandered on unnecessary and expensive traditional maintenance activities. To overcome this limitation, a Defence Intelligent Mission Readiness and Maintenance Scheduling System has been proposed, which ameliorates the maintenance system by diagnosing the condition and predicting the maintenance requirements. Based on new data mining algorithms, this system intelligently optimises mission readiness for imminent operations and maintenance scheduling in repair echelons. With modified data mining algorithms such as the Weighted Feature Ranking Genetic Algorithm and an SVM-Random Forest Linear ensemble, it improves reliability, availability and safety, alongside reducing maintenance cost and Equipment Out of Action (EOA) time. The results clearly show that the introduced algorithms have an edge over the conventional data mining algorithms. The system, utilizing the intelligent condition-based maintenance approach, improves the operational and maintenance decision strategy of the defence force.

Keywords: condition based maintenance, data mining, defence maintenance, ensemble, genetic algorithms, maintenance scheduling, mission capability

Procedia PDF Downloads 278
7488 VR-GIS and AR-GIS in Education: A Case Study

Authors: Ilario Gabriele Gerloni, Vincenza Carchiolo, Alessandro Longheu, Ugo Becciani, Eva Sciacca, Fabio Vitello

Abstract:

ICT tools and platforms increasingly support the educational process. Many models and techniques exist for educating and training people in specific topics and skills, such as classroom lectures with textbooks, computers, handheld devices and others. The extent to which ICT is applied within learning contexts is related to personal access to technologies as well as to the surrounding infrastructure. Among recent techniques, the adoption of Virtual Reality (VR) and Augmented Reality (AR) provides a significant impulse towards fully engaging users' senses. In this paper, an application of AR/VR within the context of Geographic Information Systems (GIS) is presented. It aims to provide immersive environment experiences for educational and training purposes (e.g., for civil protection personnel), useful especially for situations where real scenarios are not easily accessible by humans. First findings are promising for building an effective tool that helps with the training of civil protection personnel for risk reduction.

Keywords: education, virtual reality, augmented reality, GIS, civil protection

Procedia PDF Downloads 163
7487 A Comparative Assessment Method For Map Alignment Techniques

Authors: Rema Daher, Theodor Chakhachiro, Daniel Asmar

Abstract:

In the era of autonomous robot mapping, assessing the goodness of the generated maps is important and is usually performed by aligning them to ground truth. Map alignment is difficult for two reasons: first, the query maps can be significantly distorted from ground truth, and second, establishing what constitutes ground truth for different settings is challenging. Most map alignment techniques to date have addressed the first problem, while paying too little attention to the second. In this paper, we propose a benchmark dataset, which consists of synthetically transformed maps with their corresponding displacement fields. Furthermore, we propose a new system for comparison, in which the displacement field of any map alignment technique can be computed and compared to the ground truth using statistical measures. The local information in the displacement fields renders the evaluation system applicable to any alignment technique, whether linear or not. In our experiments, the proposed method was applied to different alignment methods from the literature, allowing for a comparative assessment among them all.

Keywords: assessment methods, benchmark, image deformation, map alignment, robot mapping, robot motion

Procedia PDF Downloads 103
7486 Mammographic Multi-View Cancer Identification Using Siamese Neural Networks

Authors: Alisher Ibragimov, Sofya Senotrusova, Aleksandra Beliaeva, Egor Ushakov, Yuri Markin

Abstract:

Mammography plays a critical role in screening for breast cancer in women, and artificial intelligence has enabled the automatic detection of diseases in medical images. Many of the current techniques used for mammogram analysis focus on a single view (mediolateral or craniocaudal), while in clinical practice, radiologists consider multiple views of mammograms from both breasts to make a correct decision. Consequently, computer-aided diagnosis (CAD) systems could benefit from incorporating information gathered from multiple views. In this study, we introduce a method based on a Siamese neural network (SNN) model that simultaneously analyzes mammographic images in a tri-view setting: bilateral and ipsilateral. In this way, when a decision is made on a single image of one breast, attention is also paid to two other images: a view of the same breast in a different projection and an image of the other breast. Consequently, the algorithm closely mimics the radiologist's practice of paying attention to the entire examination of a patient rather than to a single image. Additionally, to the best of our knowledge, this research represents the first experiments conducted using the recently released Vietnamese dataset of digital mammography (VinDr-Mammo). On an independent test set of images from this dataset, the best model achieved an AUC of 0.87 per image. Therefore, the method could provide a valuable automated second opinion in the interpretation of mammograms and breast cancer diagnosis, which in the future may help alleviate the burden on radiologists and serve as an additional layer of verification.
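
A minimal PyTorch sketch of the tri-view idea: one weight-shared encoder embeds the main view, the ipsilateral view (same breast, other projection) and the bilateral view (other breast), and the concatenated embeddings feed a malignancy head. The layer sizes and input resolution are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Small CNN applied identically to every view (weight sharing)."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, emb_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

class TriViewSiamese(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.encoder = SharedEncoder(emb_dim)          # shared by all three views
        self.head = nn.Linear(3 * emb_dim, 1)

    def forward(self, main, ipsilateral, bilateral):
        z = torch.cat([self.encoder(v) for v in (main, ipsilateral, bilateral)], dim=1)
        return torch.sigmoid(self.head(z))             # malignancy probability

model = TriViewSiamese()
views = [torch.randn(4, 1, 256, 256) for _ in range(3)]   # dummy batch of 4 cases
print(model(*views).shape)                                 # -> torch.Size([4, 1])
```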

Keywords: breast cancer, computer-aided diagnosis, deep learning, multi-view mammogram, siamese neural network

Procedia PDF Downloads 119
7485 Rapid Classification of Soft Rot Enterobacteriaceae Phyto-Pathogens Pectobacterium and Dickeya Spp. Using Infrared Spectroscopy and Machine Learning

Authors: George Abu-Aqil, Leah Tsror, Elad Shufan, Shaul Mordechai, Mahmoud Huleihel, Ahmad Salman

Abstract:

Pectobacterium and Dickeya spp., which negatively affect a wide range of crops, are the main causes of aggressive diseases of agricultural crops. These aggressive diseases are responsible for huge economic losses in agriculture, including a severe decrease in the quality of stored vegetables and fruits. Therefore, it is important to detect these pathogenic bacteria at the early stages of infection to control their spread and consequently reduce the economic losses. In addition, early detection is vital for producing non-infected propagative material for future generations. The molecular techniques currently used for the identification of these bacteria at the strain level are expensive and laborious, and other techniques require a long time, ~48 h, for detection. Thus, there is a clear need for rapid, inexpensive, accurate and reliable techniques for the early detection of these bacteria. In this study, infrared spectroscopy, a well-known technique, was used for the rapid detection of Pectobacterium and Dickeya spp. at the strain level. The bacteria were isolated from potato plants and tubers with soft rot symptoms and measured by infrared spectroscopy. The obtained spectra were analyzed using different machine learning algorithms, and the performance of our approach for taxonomic classification among the bacterial samples was evaluated in terms of success rates. The success rates for correct classification at the genus, species and strain levels were ~100%, 95.2% and 92.6%, respectively.
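
The classification stage can be sketched as a standard spectra pipeline: standardization, PCA compression and a classifier scored by cross-validation. The random arrays below merely stand in for FTIR absorbance spectra, and the specific machine learning algorithms compared in the paper are not reproduced.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 60, 450           # assumed spectral grid size
classes = ["Pectobacterium", "Dickeya"]

# random stand-ins shaped like absorbance spectra, one block per genus
X = np.vstack([rng.normal(loc=i * 0.05, scale=1.0, size=(n_per_class, n_wavenumbers))
               for i, _ in enumerate(classes)])
y = np.repeat(np.arange(len(classes)), n_per_class)

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated success rate: {scores.mean():.1%} +/- {scores.std():.1%}")
```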

Keywords: soft rot enterobacteriaceae (SRE), pectobacterium, dickeya, plant infections, potato, solanum tuberosum, infrared spectroscopy, machine learning

Procedia PDF Downloads 86