Search results for: evolution algorithm
Paper Count: 5286

3396 Towards the Definition of New Instruments of Design and Evaluation of Environmental Impacts in Built Environment

Authors: Bernarette Soust Verdaguer

Abstract:

Sustainability applied to the built environment has been understood in practice as a strategy to improve efficiency. Its evolution toward visions closer to ecology is intensifying. The paradigm of regeneration is thus presented as a complementary alternative to sustainability, emphasizing association with nature and favoring adaptation, recovery, and resilience. New tools for the design and evaluation of built spaces that incorporate this strategy are necessary. In this sense, how could the concept of regeneration contribute to built environment design and to environmental impact assessment tools? This paper explores and analyzes some of these keys.

Keywords: sustainability, regeneration, environmental impacts assessment, built environment

Procedia PDF Downloads 404
3395 Fill Rate Window as a Criterion for Spares Allocation

Authors: Michael Dreyfuss, Yahel Giat

Abstract:

Limited battery range and long recharging times are the greatest obstacles to the successful adoption of electric cars. One of the suggestions to overcome these problems is that carmakers retain ownership of batteries and provide a battery swapping service so that customers exchange their depleted batteries for recharged batteries. Motivated by this example, we consider the problem of optimal spares allocation in an exchangeable-item, multi-location repair system. We generalize the standard service measures of fill rate and average waiting time to reflect the fact that customers penalize the service provider only if they have to wait more than a ‘tolerable’ time window. These measures are denoted as the window fill rate and the truncated waiting time, respectively. We find that the truncated waiting time is convex and therefore a greedy algorithm solves the spares allocation problem efficiently. We show that the window fill rate is generally S-shaped, describe an efficient algorithm to find a near-optimal solution, and detail a priori and a posteriori upper bounds on the distance from the optimum. The theory is complemented with a large-scale numerical example demonstrating spare battery allocation in battery swapping stations.
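
Because the truncated waiting time is convex in the number of spares held at each location, a marginal-benefit greedy allocation is optimal. The sketch below illustrates the idea under an assumed, purely hypothetical per-location waiting-time function; it is not the authors' M/G/∞ model.

```python
import heapq

def greedy_spares_allocation(wait, locations, budget):
    """Allocate `budget` spares across `locations`.

    wait[j](s) returns the truncated waiting time at location j with s spares;
    convexity in s guarantees that always taking the largest marginal reduction
    yields an optimal allocation.
    """
    alloc = {j: 0 for j in locations}
    # max-heap of marginal improvements (negated for Python's min-heap)
    heap = [(-(wait[j](0) - wait[j](1)), j) for j in locations]
    heapq.heapify(heap)
    for _ in range(budget):
        gain, j = heapq.heappop(heap)
        if -gain <= 0:          # no location benefits from another spare
            break
        alloc[j] += 1
        s = alloc[j]
        heapq.heappush(heap, (-(wait[j](s) - wait[j](s + 1)), j))
    return alloc

# Hypothetical example: exponentially decaying waiting time per location
example_wait = {j: (lambda s, r=r: r * 0.5 ** s) for j, r in [("A", 8.0), ("B", 5.0), ("C", 2.0)]}
print(greedy_spares_allocation(example_wait, ["A", "B", "C"], budget=6))
```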

Keywords: convex-concave optimization, exchangeable item, M/G/infinity, multiple location, repair system, spares allocation, window fill rate

Procedia PDF Downloads 481
3394 Machine Learning Approach for Automating Electronic Component Error Classification and Detection

Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski

Abstract:

Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight into and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs are transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study aims to use a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and predicts image behavior to classify and detect electronic components. The automated error classification and detection system detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using HoloLens 2 and stored in a database. The collected image dataset will then be used to train a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be used to train the dataset for object recognition and classification. The convolutional layers extract image features, which are then classified using the SVM. By adequately labeling the training data and classifying it, the model will predict, categorize, and assess students in placing components correctly. The data acquired through the HoloLens thus include images of students assembling electronic components. The system constantly checks whether students position components appropriately on the breadboard and connect them so that the circuit functions. When students misplace any components, the HoloLens predicts the error before the user places the components in the incorrect position and prompts students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time. The study determines the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
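
A minimal sketch of the hybrid idea (CNN as feature extractor, SVM as classifier) is given below. It assumes a pretrained MobileNetV2 as the convolutional front end and random placeholder arrays in place of the HoloLens image dataset, since neither the authors' network nor their data are available here.

```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder data: (N, 224, 224, 3) breadboard photos and component/placement labels (hypothetical)
X_images = np.random.rand(40, 224, 224, 3).astype("float32")
y_labels = np.random.randint(0, 4, size=40)

# Convolutional layers act as the feature extractor (global-average-pooled embeddings);
# ImageNet weights are downloaded on first use.
cnn = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")
features = cnn.predict(preprocess_input(X_images * 255.0), verbose=0)

# The SVM classifies the CNN features into component/placement categories
X_tr, X_te, y_tr, y_te = train_test_split(features, y_labels, test_size=0.25, random_state=0)
svm = SVC(kernel="rbf", C=10.0)
svm.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```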

Keywords: augmented reality, machine learning, object recognition, virtual laboratories

Procedia PDF Downloads 121
3393 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features

Authors: Rabab M. Ramadan, Elaraby A. Elgallad

Abstract:

With extensive application, the performance of unimodal biometric systems has to face a diversity of problems such as signal and background noise, distortion, and environmental differences. Therefore, multimodal biometric systems are proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology used to identify people by their palm features. The iris is a strong competitor, together with the face and fingerprints, for inclusion in multimodal recognition systems. In this research, we introduce an algorithm for combining palm- and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, these features are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm is applied to the Indian Institute of Technology Delhi (IITD) database, and its performance is compared with various iris recognition algorithms found in the literature.
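
The sketch below illustrates the pipeline shape only: SIFT descriptors aggregated per modality, concatenated into one vector, then a binary-PSO-style search over a feature mask. It uses synthetic noise images in place of the IITD samples, and the fitness is a placeholder (in practice a classifier's cross-validated accuracy would be used), so it is an assumption-laden illustration rather than the authors' algorithm.

```python
import cv2
import numpy as np

def sift_signature(gray_img, sift):
    """Aggregate an image's SIFT descriptors into one fixed-length vector (mean of the 128-D descriptors)."""
    _, desc = sift.detectAndCompute(gray_img, None)
    return desc.mean(axis=0) if desc is not None else np.zeros(128, dtype=np.float32)

sift = cv2.SIFT_create()
# Synthetic stand-ins for palm-print and iris images (grayscale uint8)
palm = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
iris = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

# Non-homogeneous modality features are concatenated into a single 256-D vector
fused = np.concatenate([sift_signature(palm, sift), sift_signature(iris, sift)])

def fitness(mask):
    # Placeholder fitness over the selected dimensions; replace with classifier accuracy in practice
    return fused[mask > 0.5].std() if (mask > 0.5).any() else 0.0

# Binary PSO over a feature-selection mask (sigmoid velocity update)
rng = np.random.default_rng(0)
n_particles, dim, iters = 10, fused.size, 30
pos = rng.random((n_particles, dim))
vel = rng.normal(0, 0.1, (n_particles, dim))
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
for _ in range(iters):
    vel = 0.7 * vel + 1.5 * rng.random(dim) * (pbest - pos) + 1.5 * rng.random(dim) * (gbest - pos)
    pos = (rng.random((n_particles, dim)) < 1 / (1 + np.exp(-vel))).astype(float)
    fits = np.array([fitness(p) for p in pos])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[pbest_fit.argmax()].copy()
print("selected features:", int((gbest > 0.5).sum()), "of", dim)
```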

Keywords: iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, the Scale Invariant Feature Transform (SIFT)

Procedia PDF Downloads 224
3392 A Hybrid Algorithm Based on Greedy Randomized Adaptive Search Procedure and Chemical Reaction Optimization for the Vehicle Routing Problem with Hard Time Windows

Authors: Imen Boudali, Marwa Ragmoun

Abstract:

The Vehicle Routing Problem with Hard Time Windows (VRPHTW) is a basic distribution management problem that models many real-world problems. The objective is to serve a set of customers with known demands on minimum-cost vehicle routes while satisfying vehicle capacity and hard customer time windows. In this paper, we propose to deal with this optimization problem using a new hybrid stochastic algorithm based on two metaheuristics: Chemical Reaction Optimization (CRO) and the Greedy Randomized Adaptive Search Procedure (GRASP). The first method is inspired by the natural process of chemical reactions, which transforms unstable substances with excessive energy into stable ones. During this process, the molecules interact with each other through a series of elementary reactions to reach the minimum energy for their existence. This property is embedded in CRO to solve the VRPHTW. In order to enhance population diversity throughout the search process, we integrate GRASP into our method. Simulation results on Solomon's benchmark instances show the very satisfactory performance of the proposed approach.
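
A toy sketch of the GRASP construction phase for a VRP with hard time windows is shown below (the CRO improvement phase described in the abstract is omitted). The customer format and restricted-candidate-list rule are assumptions for illustration only.

```python
import random

def grasp_construct(customers, depot, capacity, alpha=0.3, seed=0):
    """Greedy randomized construction for a VRP with hard time windows.

    customers: dict id -> (x, y, demand, ready, due, service_time).
    """
    random.seed(seed)
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    unrouted, routes = set(customers), []
    while unrouted:
        route, load, time, pos = [], 0.0, 0.0, depot
        while True:
            feasible = []
            for c in unrouted:
                x, y, demand, ready, due, service = customers[c]
                arrival = max(time + dist(pos, (x, y)), ready)
                if load + demand <= capacity and arrival <= due:      # hard time window check
                    feasible.append((arrival, c))
            if not feasible:
                break
            feasible.sort()                                           # earliest feasible arrival first
            rcl = feasible[: max(1, int(alpha * len(feasible)))]      # restricted candidate list
            arrival, chosen = random.choice(rcl)
            x, y, demand, _, _, service = customers[chosen]
            route.append(chosen)
            load, time, pos = load + demand, arrival + service, (x, y)
            unrouted.remove(chosen)
        if not route:            # remaining customers cannot be served from the depot
            break
        routes.append(route)
    return routes

toy = {1: (2, 3, 4, 0, 20, 1), 2: (5, 1, 3, 5, 30, 1), 3: (1, 6, 5, 0, 15, 1)}
print(grasp_construct(toy, depot=(0, 0), capacity=8))
```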

Keywords: benchmark problems, combinatorial optimization, vehicle routing problem with hard time windows, metaheuristics, hybridization, GRASP, CRO

Procedia PDF Downloads 400
3391 A Fishery Regulation Model: Bargaining over Fishing Pressure

Authors: Duplan Yves Jamont Junior

Abstract:

The Diamond-Mortensen-Pissarides model, widely used in labor economics, is adapted to the fishery setting. In this way, a fishing function is defined to depict the fishing technology, and Bellman equations are established to describe the behaviors of fishermen and conservationists. On this basis, a negotiation takes place as Nash bargaining over the upper limit of the fishing pressure between the political representative groups of fishermen and conservationists. The existence and uniqueness conditions of the Nash-bargained fishing pressure are established. Given the biomass evolution equation, the dynamics of the model variables (fishing pressure, biomass, fish need) are studied.
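
A generic Nash-bargaining formulation over the fishing-pressure cap, written here as a hedged sketch (the value functions and disagreement points come from the paper's Bellman equations, which are not reproduced in the abstract):

```latex
\[
\bar{p}^{*} \;=\; \arg\max_{\bar{p}}\;
\bigl(V_{f}(\bar{p}) - d_{f}\bigr)^{\beta}\,
\bigl(V_{c}(\bar{p}) - d_{c}\bigr)^{1-\beta},
\qquad 0 \le \beta \le 1,
\]
```

where V_f and V_c denote the fishermen's and conservationists' value functions, d_f and d_c their disagreement payoffs, and β the fishermen's bargaining power.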

Keywords: conservation, fishery, fishing, Nash bargaining

Procedia PDF Downloads 251
3390 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform

Authors: David Jurado, Carlos Ávila

Abstract:

Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define an adequate treatment that increases the survival probability of the patient. Computer-Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing the false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML- and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in its early stages. Along with image processing, automatic segmentation of high-contrast objects is performed using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design, which extracts statistical features of the regions of interest. The results of this study prove the potential of this tool for further diagnostics and classification of mammographic images, due to its low sensitivity to noisy images and low-contrast mammograms.
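
The sketch below illustrates the circle-Hough step on a synthetic stand-in for a mammography region of interest; the parameter values and the synthetic image are assumptions for illustration, not the authors' settings.

```python
import cv2
import numpy as np

# Synthetic stand-in for a mammography region of interest (grayscale uint8);
# a real mammogram would be loaded with cv2.imread instead.
roi = np.full((256, 256), 40, dtype=np.uint8)
cv2.circle(roi, (80, 100), 6, 220, -1)          # bright, roughly circular spots
cv2.circle(roi, (170, 60), 5, 200, -1)

# Smoothing, then the circle Hough transform (which internally uses edge extraction)
blurred = cv2.GaussianBlur(roi, (5, 5), 1.5)
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1, 15,
                           param1=100, param2=12, minRadius=3, maxRadius=12)

mask = np.zeros_like(roi)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(mask, (x, y), r, 255, -1)     # automatic mask over each candidate
    candidates = roi[mask > 0]
    print(len(circles[0]), "candidates; mean / std intensity:",
          float(candidates.mean()), float(candidates.std()))
```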

Keywords: breast cancer, segmentation, X-ray imaging, Hough transform, image analysis

Procedia PDF Downloads 66
3389 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed for customers to find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock proper movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral, and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik-Chervonenkis (VC) dimensions in the machine learning algorithm to find and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers' preferences with a small data set and design prediction tools for these enterprises.
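
A minimal sketch of the model comparison follows. Synthetic customer records stand in for the paper's data set (which is not public), and cross-validation stands in for the out-of-sample error estimate.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical customer records: demographic/behavioral/social features and a binary "prefers genre X" label
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))
y = (X[:, 0] + 0.5 * X[:, 3] - 0.3 * X[:, 7] + rng.normal(0, 0.5, 500) > 0).astype(int)

models = {
    "Gaussian-kernel SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale")),
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    # cross-validation gives an out-of-sample accuracy estimate and helps detect overfitting
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f} accuracy")
```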

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 248
3388 A Comparative Study of Photo and Electro-Fenton Reactions Efficiency in Degradation of Cationic Dyes Mixture

Authors: S. Bouafia Chergui, Nihal Oturan, Hussein Khalaf, Mehmet A. Oturan

Abstract:

The aim of this work was to compare the degradation of a mixture of three cationic dyes by advanced oxidation processes (electro-Fenton, photo-Fenton) in aqueous solution. These processes are based on the in situ production of the hydroxyl radical, a very strong oxidant, which allows the degradation of organic pollutants until their mineralization into CO2 and H2O. Under optimal operating conditions, the evolution of total organic carbon (TOC) and the electrical energy efficiency have been investigated for the two processes.

Keywords: photo-fenton, electro-fenton, energy efficiency, water treatment

Procedia PDF Downloads 491
3387 The Design of a Mixed Matrix Model for Activity Levels Extraction and Sub Processes Classification of a Work Project (Case: Great Tehran Electrical Distribution Company)

Authors: Elham Allahmoradi, Bahman Allahmoradi, Ali Bonyadi Naeini

Abstract:

Complex systems have many aspects. A variety of methods have been developed to analyze these systems. The most efficient of these methods should not only be simple but also provide useful and comprehensive information about many aspects of the system. Matrix methods are among the methods most commonly used to analyze and design systems. Each matrix method can examine a particular aspect of the system. If these methods are combined, managers can access more comprehensive and broader information about the system. This study was conducted in four steps. In the first step, a process model of a real project was extracted through IDEF3. In the second step, activity levels were obtained by writing the process model in the form of a design structure matrix (DSM) and sorting it with the triangulation algorithm (TA). In the third step, sub-processes were obtained by writing the process model in the form of an interface structure matrix (ISM) and clustering it with the cluster identification algorithm (CIA). In the fourth step, a mixed model was developed to provide a unified picture of the project structure through the simultaneous presentation of activities and sub-processes. Finally, the paper is completed with a conclusion.
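
The sketch below illustrates the DSM sequencing (triangulation) step on a small hypothetical binary DSM; the paper's actual IDEF3-derived matrices are not reproduced, and coupled activities are simply grouped into one level here.

```python
import numpy as np

def triangulate_dsm(dsm, names):
    """Sequence a binary DSM so that dependencies appear below the diagonal where possible.

    dsm[i][j] = 1 means activity i needs input from activity j. Activities whose inputs are
    already scheduled form the next activity level; mutually coupled activities that remain
    are grouped together as one level (a block/sub-process).
    """
    remaining = set(range(len(names)))
    levels = []
    while remaining:
        level = [i for i in remaining if all(dsm[i][j] == 0 for j in remaining if j != i)]
        if not level:            # a coupled block: schedule the rest together
            level = sorted(remaining)
        levels.append([names[i] for i in level])
        remaining -= set(level)
    return levels

names = ["A", "B", "C", "D"]
dsm = np.array([[0, 0, 0, 0],    # A depends on nothing
                [1, 0, 0, 0],    # B depends on A
                [1, 1, 0, 1],    # C depends on A, B, D (C and D are coupled)
                [0, 0, 1, 0]])   # D depends on C
print(triangulate_dsm(dsm, names))   # [['A'], ['B'], ['C', 'D']]
```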

Keywords: integrated definition for process description capture (IDEF3) method, design structure matrix (DSM), interface structure matrix (ISM), mixed matrix model, activity level, sub-process

Procedia PDF Downloads 483
3386 Influence of Error Correction Codes on the Quality of Optical Broadband Connections

Authors: Mouna Hemdi, Jamel bel Hadj Tahar

Abstract:

The increasing development of multimedia applications requiring the simultaneous transport of several different services contributes to the growing need for very high-speed networks. In this paper, we propose an effective solution to achieve very high speed while retaining the elements of the optical transmission channel. Our study therefore focuses on error-correcting codes that aim to improve the quality of service. We present a comparison of the quality of service for single channels and for channels integrating BCH, RS, and LDPC codes, in order to find the best code under different transmission conditions.

Keywords: code error correction, high speed broadband, optical transmission, information systems security

Procedia PDF Downloads 378
3385 Algorithm for Quantification of Pulmonary Fibrosis in Chest X-Ray Exams

Authors: Marcela de Oliveira, Guilherme Giacomini, Allan Felipe Fattori Alves, Ana Luiza Menegatti Pavan, Maria Eugenia Dela Rosa, Fernando Antonio Bacchim Neto, Diana Rodrigues de Pina

Abstract:

It is estimated that, each year, one death every 10 seconds (about 2 million deaths) in the world is attributed to tuberculosis (TB). Even after effective treatment, TB leaves sequelae such as pulmonary fibrosis, compromising the quality of life of patients. Evaluations of the aforementioned sequelae are usually performed subjectively by radiology specialists, and subjective evaluation may show inter- and intra-observer variation. X-ray examination is the diagnostic imaging method most commonly performed in the monitoring of patients diagnosed with TB and the least costly to the institution. The application of computational algorithms is of utmost importance for a more objective quantification of pulmonary impairment in individuals with tuberculosis. The purpose of this research is to use computer algorithms to quantify pulmonary impairment pre- and post-treatment in patients with pulmonary TB. The X-ray images of 10 patients with a TB diagnosis confirmed by sputum smear examination were studied. Initially, segmentation of the total lung area was performed (posteroanterior and lateral views), then targeted to the region compromised by the pulmonary sequela. Through morphological operators and the application of a signal-to-noise tool, it was possible to determine the compromised lung volume. The largest difference found pre- and post-treatment was 85.85% and the smallest was 54.08%.
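
A minimal sketch of threshold-plus-morphology segmentation on a synthetic stand-in image is shown below; the thresholding rule, structuring-element sizes, and the hypothetical pixel pitch are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from skimage import filters, morphology, measure

# Synthetic stand-in for a chest X-ray (float image in [0, 1]); a real exam would be
# loaded with skimage.io.imread and rescaled.
rng = np.random.default_rng(2)
xray = rng.normal(0.35, 0.05, (512, 512))
xray[100:400, 60:220] += 0.3      # brighter region mimicking an opacified (fibrotic) area
xray = np.clip(xray, 0, 1)

# Global threshold, then morphological clean-up
thresh = filters.threshold_otsu(xray)
mask = xray > thresh
mask = morphology.binary_opening(mask, morphology.disk(3))       # remove speckle noise
mask = morphology.remove_small_objects(mask, min_size=500)       # keep substantial regions only
mask = morphology.binary_closing(mask, morphology.disk(5))       # fill small gaps

labels = measure.label(mask)
areas = [r.area for r in measure.regionprops(labels)]
pixel_area_mm2 = 0.143 ** 2                                      # hypothetical detector pixel pitch
print("compromised area (mm^2):", sum(areas) * pixel_area_mm2)
```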

Keywords: algorithm, radiology, tuberculosis, x-rays exam

Procedia PDF Downloads 404
3384 A Multi-Objective Programming Model to Supplier Selection and Order Allocation Problem in Stochastic Environment

Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh

Abstract:

This paper aims at developing a multi-objective model for the supplier selection and order allocation problem in a stochastic environment, where the purchasing cost, the percentage of items delivered with delay, and the percentage of rejected items provided by each supplier are supposed to be stochastic parameters following any arbitrary probability distribution. In this regard, dependent chance programming is used, which maximizes the probability of the event that the total purchasing cost, total items delivered with delay, and total rejected items are less than or equal to pre-determined values given by the decision maker. The above-mentioned stochastic multi-objective programming problem is then transformed into a stochastic single-objective programming problem using the minimum deviation method. In the next step, the resulting problem is solved by applying a genetic algorithm, which performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, the impact of the stochastic parameters on the obtained solution is examined via a sensitivity analysis exploiting the coefficient of variation. The results show that the greater the coefficient of variation of the stochastic parameters, the more the value of the objective function in the stochastic single-objective programming problem deteriorates.

Keywords: supplier selection, order allocation, dependent chance programming, genetic algorithm

Procedia PDF Downloads 300
3383 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction

Authors: Bastien Batardière, Joon Kwon

Abstract:

For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance and have contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
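
The sketch below illustrates the combination on a least-squares finite sum: an L-SVRG (loopless) variance-reduced gradient estimator plugged into a coordinate-wise AdaGrad step. It is an assumption-based illustration of the idea, not the authors' AdaLVR code or analysis.

```python
import numpy as np

def adaptive_lsvrg(A, b, steps=2000, eta=1.0, p=None, seed=0):
    """L-SVRG estimator with an AdaGrad step on f(x) = (1/2n) * sum_i (a_i^T x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    p = p if p is not None else 1.0 / n            # probability of refreshing the anchor point
    x = np.zeros(d)
    w = x.copy()                                   # anchor (reference) point
    full_grad = A.T @ (A @ w - b) / n              # full gradient at the anchor
    G = np.zeros(d)                                # accumulated squared gradients (AdaGrad)
    for _ in range(steps):
        i = rng.integers(n)
        gi_x = A[i] * (A[i] @ x - b[i])            # stochastic gradient at x
        gi_w = A[i] * (A[i] @ w - b[i])            # stochastic gradient at the anchor
        g = gi_x - gi_w + full_grad                # loopless variance-reduced estimator
        G += g * g
        x -= eta * g / (np.sqrt(G) + 1e-10)        # AdaGrad coordinate-wise step
        if rng.random() < p:                       # loopless: refresh the anchor with probability p
            w = x.copy()
            full_grad = A.T @ (A @ w - b) / n
    return x

rng = np.random.default_rng(3)
A, x_true = rng.normal(size=(200, 10)), rng.normal(size=10)
b = A @ x_true
x_hat = adaptive_lsvrg(A, b)
print("distance to the least-squares solution:", np.linalg.norm(x_hat - x_true))
```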

Keywords: convex optimization, variance reduction, adaptive algorithms, loopless

Procedia PDF Downloads 56
3382 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant

Authors: John K. Avor, Choong-Koo Chang

Abstract:

The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT) and vice versa is through a fast bus transfer scheme. Fast bus transfer is a time-critical application where the transfer process depends on various parameters; thus, transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical class 1E electrical loads. Bus transfers must, therefore, be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have malfunctioned due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm to be selected based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems to accurately interpret key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of a high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.

Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability

Procedia PDF Downloads 157
3381 Blockchain Security in MANETs

Authors: Nada Mouchfiq, Ahmed Habbani, Chaimae Benjbara

Abstract:

The security aspect of the IoT is of great importance, especially given the recent evolution of this field, because it must take into account new transformations and applications. Blockchain is a new technology dedicated to data sharing. However, it does not work the same way in different systems with different operating principles. This article discusses network security using the blockchain to facilitate the sending of messages and information, enabling the use of new processes and autonomous coordination of devices. To do this, we discuss solutions proposed in the work of other researchers to ensure a high level of security in these networks. Finally, our article proposes a security method better adapted to our needs as a team working on ad hoc networks; this method is based on the principle of the blockchain and is named ”MPR Blockchain”.

Keywords: ad hoc networks, blockchain, MPR, security

Procedia PDF Downloads 162
3380 Vision-Based Daily Routine Recognition for Healthcare with Transfer Learning

Authors: Bruce X. B. Yu, Yan Liu, Keith C. C. Chan

Abstract:

We propose to record Activities of Daily Living (ADLs) of elderly people using a vision-based system so as to provide better assistive and personalization technologies. Current ADL-related research is based on data collected with help from non-elderly subjects in laboratory environments, where the activities performed are predetermined for the sole purpose of data collection. To obtain more realistic datasets for the application, we recorded ADLs for the elderly with data collected from a real-world environment involving real elderly subjects. Motivated by the need to collect data for more effective research related to elderly care, we chose to collect data in the room of an elderly person. Specifically, we installed Kinect, a vision-based sensor, on the ceiling to capture the activities that the elderly subject performs in the morning every day. Based on the data, we identified 12 morning activities that the elderly person performs daily. To recognize these activities, we created the HARELCARE framework to investigate the effectiveness of existing Human Activity Recognition (HAR) algorithms and propose the use of a transfer learning algorithm for HAR. We compared the performance, in terms of accuracy, and the training progress. Although the collected dataset is relatively small, the proposed algorithm has good potential to be applied to all daily routine activities for healthcare purposes such as evidence-based diagnosis and treatment.

Keywords: daily activity recognition, healthcare, IoT sensors, transfer learning

Procedia PDF Downloads 121
3379 Triassic and Liassic Paleoenvironments during the Central Atlantic Magmatic Province (CAMP) Effusion in the Moroccan Coastal Meseta: The Mohammedia-Benslimane-El Gara-Berrechid Basin

Authors: Rachid Essamoud, Abdelkrim Afenzar, Ahmed Belqadi

Abstract:

During the Early Mesozoic, the northwestern part of the African continent was affected by initial fracturing associated with the early stages of the opening of the Central Atlantic (Atlantic rift). During this rifting phase, the Moroccan Meseta experienced an extensional tectonic regime. This extension favored the formation of a set of rift-type basins, including the Mohammedia-Benslimane-ElGara-Berrechid basin. It is therefore essential to know the nature of the deposits in this basin and their evolution over time, as well as their relationship with the basaltic effusion of the Central Atlantic Magmatic Province (CAMP). These deposits are subdivided into two large series: the lower clay-salt series, attributed to the Triassic, and the upper clay-salt series, attributed to the Liassic. The two series are separated by the Upper Triassic-Lower Liassic basaltic complex. The detailed sedimentological analysis made it possible to characterize four mega-sequences, fifteen facies types, and eight architectural elements and facies associations in the Triassic series. A progressive decrease in paleo-slope over time led the paleoenvironment to evolve from a proximal alluvial-fan system to a braided fluvial style, then to an anastomosed system. These environments eventually evolved into an alluvial plain associated with a coastal plain where playa lakes, mudflats, and lagoons developed. The pure and massive halitic facies at the top of the series probably indicate an evolution of the depositional environment towards a shallow subtidal environment. The presence of these evaporites indicates a climate that favored their precipitation, in this case a fairly hot and humid climate. The sedimentological analysis of the supra-basaltic part shows that, during the Lower Liassic, the paleo-slope after the basaltic effusion remained weak, with distal environments. The faciological analysis revealed the presence of four major lithofacies (sandstone, silty, clayey, and evaporitic) organized in two mega-sequences: the sedimentation of the first rock-salt mega-sequence took place in a free brine depression system, followed by saline mudflats under continental influences. The upper clay mega-sequence displays facies documenting sea-level fluctuations related to the final transgression of the Tethys or the opening Atlantic. Saliferous sedimentation was therefore favored from the Upper Triassic onward but experienced a sudden rupture with the emission of basaltic flows, which are interstratified within the azoic salt clays of very shallow seas. This basaltic emission, which belongs to the CAMP, would come from fissural volcanism probably fed through transfer faults located in the NW and SE of the basin. Its emplacement was probably subaquatic to subaerial. From a chronological and paleogeographic point of view, this main volcanism, dated between the Upper Triassic and the Lower Liassic (180-200 Ma), is linked to the fragmentation of Pangea, governed by a progressive expansion triggered in the west in close relation with the initial phases of Central Atlantic rifting, and seems to coincide with the major mass extinction at the Triassic-Jurassic boundary.

Keywords: Basalt, CAMP, Liassic, sedimentology, Triassic, Morocco

Procedia PDF Downloads 58
3378 Numerical Simulation of Unsteady Cases of Fluid Flow Using Modified Dynamic Boundary Condition (mDBC) in Smoothed Particle Hydrodynamics Models

Authors: Exa Heydemans, Jessica Sjah, Dwinanti Rika Marthanty

Abstract:

This paper presents numerical simulations using an open boundary algorithm with a modified dynamic boundary condition (mDBC) for weakly compressible smoothed particle hydrodynamics models from the particle-based code DualSPHysics. The algorithm is studied with a view to problems of piping erosion in dams and dikes. A 2D case of unsteady fluid flow past a fixed cylinder is simulated, where various Reynolds numbers (Re = 40, 60, 80, and 100) and different model resolutions are considered. A constant inflow velocity with different viscosity values is used to generate the various Reynolds numbers, and different numbers of particles over the cylinder are used for the resolutions. The interaction between the solid particles of the cylinder and the fluid particles is of particular concern. The cylinder is affected by the hydrodynamic force caused by the flow of fluid particles. The solid particles of the cylinder are the observation points used to obtain the force and pressure due to the hydrodynamic forces. As results of the simulation, which demonstrates the capability to model 2D unsteady flow at various Reynolds numbers, the pressure coefficient, drag coefficient, lift coefficient, and Strouhal number are compared to previous work from the literature.

Keywords: hydrodynamics, internal erosion, dualsphysics, viscous fluid flow

Procedia PDF Downloads 147
3377 Assessment Power and Oscillation Damping Using the POD Controller and Proposed FOD Controller

Authors: Tohid Rahimi, Yahya Naderi, Babak Yousefi, Seyed Hossein Hoseini

Abstract:

Today’s modern interconnected power system is highly complex in nature. One of the most important requirements during the operation of the electric power system is reliability and security. Power and frequency oscillation damping mechanisms improve reliability. Because of the slow response of the power system stabilizer (PSS) to major faults such as three-phase short circuits, FACTS devices that can control the network condition very quickly are becoming popular. However, FACTS capability can only be assessed during a major fault when nonlinear models of the FACTS devices and power system equipment are applied. To realize this aim, the model of a multi-machine power system with a FACTS controller is developed in MATLAB/SIMULINK using the SimPowerSystems (SPS) blockset. Among the FACTS devices, the static synchronous series compensator (SSSC), which can rapidly change its reactance characteristic from inductive to capacitive, is an effective power flow controller. The tuning of the controller parameters can be performed using different methods; however, the capability of the Genetic Algorithm (GA) makes it attractive for controller parameter tuning. In this paper, a POD controller is first used for power oscillation damping. At this stage, however, the frequency oscillation is not properly damped. Therefore, a FOD controller tuned using GA is employed, which damps out the frequency oscillation properly and also provides suitable power oscillation damping.

Keywords: power oscillation damping (POD), frequency oscillation damping (FOD), Static synchronous series compensator (SSSC), Genetic Algorithm (GA)

Procedia PDF Downloads 465
3376 The Estimation Method of Inter-Story Drift for Buildings Based on Evolutionary Learning

Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park

Abstract:

The seismic-response-based structural health monitoring system has been used to reduce seismic damage. The inter-story drift ratio, which is the major index of seismic capacity assessment, is employed for estimating the seismic damage of buildings. Meanwhile, seismic response analysis to estimate the structural responses of buildings demands significantly high computational cost due to the increasing number of high-rise and large buildings. To estimate the inter-story drift ratio of buildings under an earthquake efficiently, this paper suggests an estimation method for inter-story drift using an artificial neural network (ANN). In the method, the radial basis function neural network (RBFNN) is integrated with an optimization algorithm to optimize its variables through evolutionary learning, which is referred to as the evolutionary radial basis function neural network (ERBFNN). The estimation method estimates the inter-story drift without seismic response analysis when new earthquakes are applied to buildings. The effectiveness of the estimation method is verified through a simulation using a multi-degree-of-freedom system.
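
The sketch below illustrates the ERBFNN idea on synthetic data: a Gaussian RBF network whose centers and width are evolved by a simple mutation-and-selection loop, with the output weights solved by least squares. The data, features, and evolutionary operators are assumptions for illustration, not the paper's model.

```python
import numpy as np

def rbf_design(X, centers, width):
    """Gaussian RBF design matrix."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def fit_predict(X_tr, y_tr, X_te, centers, width):
    """Output-layer weights by least squares, then predict."""
    Phi = rbf_design(X_tr, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y_tr, rcond=None)
    return rbf_design(X_te, centers, width) @ w

# Hypothetical data: ground-motion / building features -> peak inter-story drift ratio
rng = np.random.default_rng(4)
X = rng.normal(size=(300, 6))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.1 * rng.normal(size=300)
X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

# Evolutionary loop: a population of (centers, width) candidates evolves by selection and mutation
n_centers, pop_size, gens = 15, 12, 30
pop = [(rng.normal(size=(n_centers, X.shape[1])), abs(rng.normal(1.0, 0.3))) for _ in range(pop_size)]
for _ in range(gens):
    scored = sorted(pop, key=lambda cw: np.mean((fit_predict(X_tr, y_tr, X_tr, *cw) - y_tr) ** 2))
    parents = scored[: pop_size // 2]                        # selection (elitism)
    children = [(c + rng.normal(0, 0.1, c.shape), abs(w + rng.normal(0, 0.05))) for c, w in parents]
    pop = parents + children                                 # parents + mutated offspring
best = min(pop, key=lambda cw: np.mean((fit_predict(X_tr, y_tr, X_tr, *cw) - y_tr) ** 2))
pred = fit_predict(X_tr, y_tr, X_te, *best)
print("test RMSE of the evolved RBF network:", float(np.sqrt(np.mean((pred - y_te) ** 2))))
```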

Keywords: structural health monitoring, inter-story drift ratio, artificial neural network, radial basis function neural network, genetic algorithm

Procedia PDF Downloads 318
3375 A Review of Travel Data Collection Methods

Authors: Muhammad Awais Shafique, Eiji Hato

Abstract:

Household trip data is of crucial importance for managing the present transportation infrastructure as well as for planning and designing future facilities. It also provides the basis for new policies implemented under Transportation Demand Management. The methods used for household trip data collection have changed with the passage of time, starting with conventional face-to-face or paper-and-pencil interviews and reaching the recent approach of employing smartphones. This study summarizes the step-wise evolution of travel data collection methods. It provides a comprehensive review of the topic for readers interested in the changing trends in the data collection field.

Keywords: computer, smartphone, telephone, travel survey

Procedia PDF Downloads 300
3374 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils

Authors: Muqdad Al-Juboori, Bithin Datta

Abstract:

Hydraulic structures such as gravity dams are classified as essential structures and have a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) the building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity of the relationships in the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which involves the difficult conformal mapping procedure, can be applied to simple and symmetrical problems only. Therefore, the objectives of this paper are to: 1) develop a surrogate model based on data numerically simulated with SEEP/W software to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model based on the developed surrogate model to describe the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The result shows that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
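
The sketch below illustrates the linked surrogate-optimization idea: an ANN surrogate trained on "simulated" samples, then an evolutionary search that minimizes cost subject to a surrogate-predicted safety constraint. The exit-gradient stand-in function, cost coefficients, and variable names are hypothetical; SEEP/W outputs are not available here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import differential_evolution

# Hypothetical design variables: cutoff depth d and floor length b of the dam.
# An analytic stand-in replaces the SEEP/W numerical model to generate "simulated" exit gradients.
rng = np.random.default_rng(5)
def simulated_exit_gradient(d, b):
    return 1.2 * np.exp(-0.15 * d) * np.exp(-0.02 * b) + 0.02 * rng.normal()

X = rng.uniform([2.0, 10.0], [15.0, 60.0], size=(300, 2))        # sampled designs
y = np.array([simulated_exit_gradient(d, b) for d, b in X])

# ANN surrogate trained on the simulated data
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0).fit(X, y)

# Linked simulation-optimization: minimize cost subject to a safe exit gradient, via the surrogate
def objective(design):
    d, b = design
    cost = 4.0 * d + 1.0 * b                                         # hypothetical cost coefficients
    penalty = 1e3 * max(0.0, surrogate.predict([[d, b]])[0] - 0.25)  # hypothetical safety limit
    return cost + penalty

result = differential_evolution(objective, bounds=[(2.0, 15.0), (10.0, 60.0)], seed=0, tol=1e-6)
print("optimal cutoff depth and floor length:", result.x, "cost:", result.fun)
```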

Keywords: artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis

Procedia PDF Downloads 212
3373 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems

Authors: Sagir M. Yusuf, Chris Baber

Abstract:

In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., Situation Awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness, and utility nodes. We describe how the uncertainties can be represented in the BBN to make an effective prediction using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allow the estimation of missing variables. Experimental results proved that the BBN produces more compelling predictions with samples containing uncertainties than with perfect samples. That is, Bayesian inference can help in handling the uncertainties and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agents' data. The whole framework was tested on a multi-UAV forest fire search mission. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.

Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence

Procedia PDF Downloads 107
3372 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique

Authors: C. Manjula, Lilly Florence

Abstract:

Software technology is developing rapidly, which leads to the growth of various industries. Nowadays, software-based applications have been widely adopted for business purposes. For any software industry, the development of reliable software is becoming a challenging task because a faulty software module may be harmful to the growth of the industry and business. Hence there is a need to develop techniques for early prediction of software defects. Due to the complexities of manual prediction, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding the defects in the current version. They have attracted researchers due to their significant impact on industrial growth by identifying bugs in software. Based on this, several studies have been carried out, but achieving the desired defect prediction performance is still a challenging task. To address this issue, here we present a machine-learning-based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented, in which an improved fitness function is used for better optimization of features in the data sets. Later, these features are processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented in which results from the proposed GA-DT-based hybrid approach are compared with those from the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
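
A compact sketch of the GA-DT idea follows: a genetic algorithm searches over binary feature masks, with a fitness that rewards decision-tree accuracy and lightly penalizes feature count. The synthetic data set and the specific penalty weight are assumptions, not the authors' improved fitness function.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a software-metrics defect data set (metrics vs. defective label)
X, y = make_classification(n_samples=600, n_features=20, n_informative=6, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Reward cross-validated DT accuracy, lightly penalize the number of selected features."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(DecisionTreeClassifier(random_state=0), X[:, mask], y, cv=3).mean()
    return acc - 0.002 * mask.sum()

# Simple GA over binary feature masks: tournament selection, uniform crossover, bit-flip mutation
pop = rng.integers(0, 2, (20, X.shape[1])).astype(bool)
for _ in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[[max(rng.integers(0, len(pop), 2), key=lambda i: scores[i]) for _ in range(len(pop))]]
    cross = rng.random(parents.shape) < 0.5
    children = np.where(cross, parents, np.roll(parents, 1, axis=0))       # uniform crossover
    children ^= rng.random(children.shape) < 0.02                          # bit-flip mutation
    pop = children
best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best), "fitness:", round(float(fitness(best)), 3))
```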

Keywords: decision tree, genetic algorithm, machine learning, software defect prediction

Procedia PDF Downloads 321
3371 Grain Structure Evolution during Friction-Stir Welding of 6061-T6 Aluminum Alloy

Authors: Aleksandr Kalinenko, Igor Vysotskiy, Sergey Malopheyev, Sergey Mironov, Rustam Kaibyshev

Abstract:

From a thermo-mechanical standpoint, friction-stir welding (FSW) represents a unique combination of very large strains, high temperature, and relatively high strain rate. Material behavior under such extreme deformation conditions is not well studied, and thus microstructural examinations of friction-stir welded materials are of essential academic interest. Moreover, a clear understanding of the microstructural mechanisms operating during FSW should improve our understanding of the microstructure-properties relationship in FSWed materials and thus enable us to optimize their service characteristics. Despite extensive research in this field, the microstructural behavior of some important structural materials remains not completely clear. In order to contribute to this important work, the present study was undertaken to examine the grain structure evolution during the FSW of 6061-T6 aluminum alloy. To provide an in-depth insight into this process, the electron backscatter diffraction (EBSD) technique was employed. Microstructural observations were conducted using an FEI Quanta 450 Nova field-emission-gun scanning electron microscope equipped with TSL OIMTM software. A suitable surface finish for EBSD was obtained by electro-polishing in a solution of 25% nitric acid in methanol. A 15° criterion was employed to differentiate low-angle boundaries (LABs) from high-angle boundaries (HABs). In the entire range of the studied FSW regimes, the grain structure evolved in the stir zone was found to be dominated by nearly equiaxed grains with a relatively high fraction of low-angle boundaries and the moderate-strength B/-B {112}<110> simple-shear texture. In all cases, the grain-structure development was found to be dictated by an extensive formation of deformation-induced boundaries and their gradual transformation into high-angle grain boundaries. Accordingly, grain subdivision was concluded to be the key microstructural mechanism. Remarkably, a gradual suppression of this mechanism was observed at relatively high welding temperatures. This surprising result was attributed to the reduction of dislocation density due to annihilation phenomena.

Keywords: electron backscatter diffraction, friction-stir welding, heat-treatable aluminum alloys, microstructure

Procedia PDF Downloads 225
3370 Bifurcation and Chaos of the Memristor Circuit

Authors: Wang Zhulin, Min Fuhong, Peng Guangya, Wang Yaoda, Cao Yi

Abstract:

In this paper, a magnetron memristor model based on the hyperbolic sine function is presented, and its correctness is proved by studying the trajectory of its voltage and current phase; a memristor chaotic system built on this memristor model is then presented. The phase trajectories, bifurcation diagrams, and Lyapunov exponent spectrum of the magnetron memristor system are plotted by numerical simulation, and the chaotic evolution as the parameters of the system change is also given. The paper includes numerical simulations and a mathematical model, confirming that the system has a wealth of dynamic behavior.

Keywords: memristor, chaotic circuit, dynamical behavior, chaotic system

Procedia PDF Downloads 487
3369 Advancing Microstructure Evolution in Tungsten Through Rolling in Laser Powder Bed Fusion

Authors: Narges Shayesteh Moghaddam

Abstract:

Tungsten (W), a refractory metal known for its remarkably high melting temperature, offers tremendous potential for use in challenging environments prevalent in sectors such as space exploration, defense, and nuclear industries. Additive manufacturing, especially the Laser Powder-Bed Fusion (LPBF) technique, emerges as a beneficial method for fabricating tungsten parts. This technique enables the production of intricate components while simultaneously reducing production lead times and associated costs. However, the inherent brittleness of tungsten and its tendency to crack under high-temperature conditions pose significant challenges to the manufacturing process. Our research primarily focuses on the process of rolling tungsten parts in a layer-by-layer manner in LPBF and the subsequent changes in microstructure. Our objective is not only to identify the alterations in the microstructure but also to assess their implications on the physical properties and performance of the fabricated tungsten parts. To examine these aspects, we conducted an extensive series of experiments that included the fabrication of tungsten samples through LPBF and subsequent characterization using advanced materials analysis techniques. These investigations allowed us to scrutinize shifts in various microstructural features, including, but not limited to, grain size and grain boundaries occurring during the rolling process. The results of our study provide crucial insights into how specific factors, such as plastic deformation occurring during the rolling process, influence the microstructural characteristics of the fabricated parts. This information is vital as it provides a foundation for understanding how the parameters of the layer-by-layer rolling process affect the final tungsten parts. Our research significantly broadens the current understanding of microstructural evolution in tungsten parts produced via the layer-by-layer rolling process in LPBF. The insights obtained will play a pivotal role in refining and optimizing manufacturing parameters, thus improving the mechanical properties of tungsten parts and, therefore, enhancing their performance. Furthermore, these findings will contribute to the advancement of manufacturing techniques, facilitating the wider application of tungsten parts in various high-demand sectors. Through these advancements, this research represents a significant step towards harnessing the full potential of tungsten in high-temperature and high-stress applications.

Keywords: additive manufacturing, rolling, tungsten, refractory materials

Procedia PDF Downloads 79
3368 Optimal Design of Linear Generator to Recharge the Smartphone Battery

Authors: Jin Ho Kim, Yujeong Shin, Seong-Jin Cho, Dong-Jin Kim, U-Syn Ha

Abstract:

Due to the development of the information industry and its technologies, cellular phones must not only provide communication functions but also functions such as the Internet, e-banking, and entertainment. Such phones are called smartphones. The performance of smartphones has improved because of their various functions, and the capacity of the battery has gradually been increased. Recently, linear generators have been embedded in smartphones in order to recharge the smartphone's battery. In this study, optimization is performed and an array change of the permanent magnets is examined in order to increase efficiency. We propose an optimal design using design of experiments (DOE) to maximize the generated induced voltage. The thicknesses of the poleshoe and the permanent magnet (PM), the heights of the poleshoe and PM, and the thickness of the coil are chosen as design variables. We generated 25 sampling points using an orthogonal array according to four design variables. We performed electromagnetic finite element analysis to predict the generated induced voltage using the commercial electromagnetic analysis software ANSYS Maxwell. Then, we built an approximate model using the Kriging algorithm and derived optimal values of the design variables using an evolutionary algorithm. The commercial optimization software PIAnO (Process Integration, Automation, and Optimization) was used with these algorithms. The result of the optimization shows that the generated induced voltage is improved.
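
The sketch below illustrates the surrogate-plus-evolutionary-search pattern: a Kriging (Gaussian-process) model fitted to 25 sampled designs, then an evolutionary optimizer maximizing the surrogate-predicted voltage. The stand-in "FEM" response, bounds, and reduction to four generic design variables are assumptions; the actual ANSYS Maxwell results and PIAnO workflow are not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from scipy.optimize import differential_evolution

# Hypothetical stand-in for the 25 orthogonal-array FEM samples over four design variables
rng = np.random.default_rng(6)
X = rng.uniform(0.5, 5.0, size=(25, 4))
def fem_voltage(x):                      # placeholder for the electromagnetic FEA result
    return 2.0 * np.sqrt(x[0]) + 1.5 * x[1] - 0.3 * (x[2] - 2.5) ** 2 + 0.8 * np.log(x[3])
y = np.array([fem_voltage(x) for x in X])

# Kriging (Gaussian-process) approximate model fitted to the sampled designs
kriging = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=np.ones(4)),
                                   normalize_y=True).fit(X, y)

# Evolutionary search over the design space to maximize the surrogate-predicted voltage
result = differential_evolution(lambda x: -kriging.predict(x.reshape(1, -1))[0],
                                bounds=[(0.5, 5.0)] * 4, seed=0)
print("optimal design variables:", np.round(result.x, 3), "predicted voltage:", -result.fun)
```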

Keywords: smartphone, linear generator, design of experiment, approximate model, optimal design

Procedia PDF Downloads 335
3367 Artificial Intelligence for Generative Modelling

Authors: Shryas Bhurat, Aryan Vashistha, Sampreet Dinakar Nayak, Ayush Gupta

Abstract:

As technology advances towards greater computational resources, there is a paradigm shift in the usage of these resources to optimize the design process. This paper discusses the use of generative design with artificial intelligence to build better models that adapt operations like selection, mutation, and crossover to generate results. The human mind thinks of the simplest approach while designing an object, but the intelligence learns from the past and designs complex, optimized CAD models. Generative design takes the boundary conditions and produces multiple solutions, iterating to arrive at a sturdy design with the most optimal given parameters and saving huge amounts of time and resources. The new production techniques at our disposal allow us to use additive manufacturing, 3D printing, and other innovative manufacturing techniques to save resources and design artistically engineered CAD models. This paper also discusses the genetic algorithm and the non-domination technique for choosing the right results, using biomimicry that has evolved over millions of years. The computer uses parametric models to generate newer models with an iterative approach and uses cloud computing to store these iterative designs. The later part of the paper compares topology optimization, which was previously used to generate CAD models, with generative design. Finally, this paper shows the performance of the algorithms and how they help in designing resource-efficient models.

Keywords: genetic algorithm, bio mimicry, generative modeling, non-dominant techniques

Procedia PDF Downloads 134