Search results for: data transfer optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 29587

28507 The Optimization Process of Aortic Heart Valve Stent Geometry

Authors: Arkadiusz Mezyk, Wojciech Klein, Mariusz Pawlak, Jacek Gnilka

Abstract:

Aortic heart valve stents should fulfill many criteria, and these criteria have a strong impact on the geometrical shape of the stent. Usually, the final construction of a stent is the result of many years of experience and knowledge. Depending on patent claims, different stent shapes are produced by different companies, which causes difficulties for biomechanics engineers by narrowing the domain of feasible solutions. The paper presents an optimization method for stent geometry defined by a specific analytical equation based on various mathematical functions. This formula was implemented as an APDL script in the ANSYS finite element environment. For the purpose of simulation tests, a few parameters were separated from the developed equation. The application of genetic algorithms allows finding the best solution with respect to the selected objective function. The obtained solution takes into account parameters such as radial force, compression ratio, and the coefficient of expansion along the transverse axis.
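
The shape equation and APDL implementation are not reproduced in the abstract; as a sketch of the optimization loop alone, the following hand-rolled genetic algorithm searches three hypothetical shape parameters against a fabricated objective combining radial-force and compression proxies. Parameter names, bounds, and the fitness model are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stent shape parameters: strut width, cell height, wave amplitude.
BOUNDS = np.array([[0.05, 0.30],   # strut width  [mm]
                   [1.00, 3.00],   # cell height  [mm]
                   [0.10, 1.00]])  # wave amplitude [mm]

def objective(x):
    """Toy stand-in for the FE-evaluated objective: reward high radial
    force and compression ratio (both crude fabricated proxies)."""
    w, h, a = x
    radial_force = w**2 / h
    compression = a / h
    return -(radial_force + 0.5 * compression)   # minimized by the GA

def ga(pop_size=40, generations=60, mut_sigma=0.05):
    dim = len(BOUNDS)
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([objective(ind) for ind in pop])
        order = np.argsort(fitness)              # best (lowest) first
        parents = pop[order[: pop_size // 2]]    # truncation selection
        # Uniform crossover between random parent pairs.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # Gaussian mutation, clipped to the bounds.
        children += rng.normal(0, mut_sigma, children.shape)
        pop = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])
    return pop[np.argmin([objective(ind) for ind in pop])]

print("best parameters:", ga())
```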

Keywords: aortic stent, optimization process, geometry, finite element method

Procedia PDF Downloads 281
28506 A Cognitive Approach to the Optimization of Power Distribution across an Educational Campus

Authors: Mrinmoy Majumder, Apu Kumar Saha

Abstract:

The ever-increasing human population and its demand for energy are placing stress upon conventional energy sources, and as demand for power continues to outstrip supply, the need to optimize energy distribution and utilization is emerging as an important focus for various stakeholders. The distribution of available energy must be achieved in such a way that the needs of the consumer are satisfied. However, if the availability of resources is not sufficient to satisfy consumer demand, it is necessary to find a method to select consumers based on factors such as their socio-economic or environmental impacts. Weighting consumer types in this way can help separate them based on their relative importance, and cognitive optimization of the allocation process can then be carried out so that, even on days of particularly scarce supply, the socio-economic impacts of not satisfying the needs of consumers can be minimized. In this context, the present study utilized fuzzy logic to assign weightage to different types of consumers at an educational campus in India, and then established optimal allocation by applying the non-linear mapping capability of neuro-genetic algorithms. The outputs of the algorithms were compared with similar outputs from particle swarm optimization and differential evolution algorithms. The results of the study demonstrate an option for the optimal utilization of available energy based on the socio-economic importance of consumers.
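
A minimal sketch of the weighting idea, with invented consumer classes and scores: a fuzzy membership function turns importance scores into weights, and a greedy pass (standing in here for the paper's neuro-genetic optimizer) allocates a scarce supply to the most critical consumers first.

```python
import numpy as np

def ramp_membership(x, a, b):
    """Right-shoulder fuzzy membership: 0 below a, 1 above b, linear between."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

# Hypothetical consumer classes on the campus with demand (kW) and a
# socio-economic importance score in [0, 10] (all values are made up).
demand = np.array([120.0, 80.0, 60.0, 40.0])   # labs, hostels, offices, sports
importance = np.array([9.0, 7.5, 5.0, 2.5])

# Fuzzy weight: degree of membership in the "critical consumer" set.
weights = ramp_membership(importance, 0.0, 10.0)

def allocate(supply):
    """Allocate scarce supply in order of fuzzy weight (greedy heuristic)."""
    alloc = np.zeros_like(demand)
    for i in np.argsort(-weights):             # most critical first
        alloc[i] = min(demand[i], supply)
        supply -= alloc[i]
    return alloc

print(allocate(supply=200.0))   # a day of scarce supply
```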

Keywords: power allocation, optimization problem, neural networks, environmental and ecological engineering

Procedia PDF Downloads 479
28505 Wireless Sensor Networks Optimization by Using 2-Stage Algorithm Based on Imperialist Competitive Algorithm

Authors: Hamid R. Lashgarian Azad, Seyed N. Shetab Boushehri

Abstract:

Wireless sensor networks (WSN) have become increasingly popular due to their wide range of applications. A wireless sensor network is made of numerous tiny, battery-powered sensor nodes, and maximizing the lifetime of such a network is a very significant problem. In this paper, we propose a two-stage protocol based on an imperialist competitive algorithm (2S-ICA) to solve a sensor network optimization problem. Long communication distances between the sensors and the sink can greatly deplete the energy of the sensors and shorten the lifetime of the network. By connecting sensors into a series of independent clusters using 2S-ICA, we can considerably reduce the overall communication distance and thereby extend the lifetime of the network. Comparison of the proposed protocol with the LEACH protocol, which is commonly used to solve WSN problems, shows that our protocol performs better in terms of improving network lifetime and increasing the amount of transmitted data.
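
The 2S-ICA itself is not specified in the abstract; as a minimal baseline in the same spirit (and matching the k-means keyword), here is a NumPy sketch that clusters sensor positions, routes traffic through cluster heads, and compares total communication distance against direct transmission. Node coordinates, the sink location, and the cluster count are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(60, 2))   # sensor coordinates (m)
sink = np.array([50.0, 150.0])              # sink placed outside the field

def kmeans(points, k, iters=50):
    """Plain Lloyd's k-means on node coordinates."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers, axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(nodes, k=5)
# Cluster head: the real node closest to each centroid.
heads = np.array([np.argmin(np.linalg.norm(nodes - c, axis=1)) for c in centers])

direct = np.linalg.norm(nodes - sink, axis=1).sum()
clustered = sum(np.linalg.norm(nodes[labels == j] - nodes[h], axis=1).sum()
                + np.linalg.norm(nodes[h] - sink)
                for j, h in enumerate(heads))
print(f"direct-to-sink distance: {direct:.0f} m, clustered: {clustered:.0f} m")
```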

Keywords: wireless sensor network, imperialist competitive algorithm, LEACH protocol, k-means clustering

Procedia PDF Downloads 103
28504 Optimization of Media for Enhanced Fermentative Production of Mycophenolic Acid by Penicillium brevicompactum

Authors: Shraddha Digole, Swarali Hingse, Uday Annapure

Abstract:

Mycophenolic acid (MPA) is an immunosuppressant produced by Penicillium sp. A Box-Behnken statistical experimental design was employed to optimize the cultivation conditions of Penicillium brevicompactum NRRL 2011 for MPA production. Initially, optimization of various physicochemical parameters and media components was carried out using a one-factor-at-a-time approach, and significant factors were screened by a Taguchi L-16 orthogonal array design. The Taguchi design indicated that glucose, KH2PO4 and MgSO4 had a significant effect on MPA production. These variables were selected for further optimization studies using the Box-Behnken design. The optimized fermentation conditions, glucose (60 g/L), glycine (28 g/L), L-leucine (1.5 g/L), KH2PO4 (3 g/L), and MgSO4.7H2O (1.5 g/L), increased the production of MPA from 170 mg/L to 1032.54 mg/L. Analysis of variance (ANOVA) showed a high value of the coefficient of determination R2 (0.9965), indicating good agreement between experimental and predicted values and proving the validity of the statistical model.
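
For orientation, the model-fitting step can be sketched as fitting a full second-order polynomial to a three-factor Box-Behnken design by least squares and reporting R2. The design matrix below uses the standard coded levels, but the response values are fabricated, not the paper's measurements.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: 1, x_i, x_i^2, x_i*x_j (the second-order RSM model)."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

# Coded Box-Behnken levels (-1, 0, +1) for glucose, KH2PO4, MgSO4;
# the MPA yields below are invented for illustration.
X = np.array([[-1,-1, 0],[ 1,-1, 0],[-1, 1, 0],[ 1, 1, 0],
              [-1, 0,-1],[ 1, 0,-1],[-1, 0, 1],[ 1, 0, 1],
              [ 0,-1,-1],[ 0, 1,-1],[ 0,-1, 1],[ 0, 1, 1],
              [ 0, 0, 0],[ 0, 0, 0],[ 0, 0, 0]], dtype=float)
y = np.array([520, 610, 540, 700, 480, 650, 560, 720,
              500, 590, 580, 690, 1010, 1030, 1020], dtype=float)

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - A @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(f"R^2 = {r2:.4f}")        # the paper reports R^2 = 0.9965
```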

Keywords: Box-Behnken design, fermentation, mycophenolic acid, Penicillium brevicompactum

Procedia PDF Downloads 452
28503 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems

Authors: Mergim Gasi, Bojan Milovanovic, Sanjin Gumbarevic

Abstract:

Better energy performance of the building envelope is one of the most important aspects of energy savings if the goals set by the European Union are to be achieved in the future. Dynamic heat transfer simulations are used for the calculation of building energy consumption because they give more realistic energy demands than stationary calculations, which do not take the building’s thermal mass into account. Software used for these dynamic simulations relies on methods based on analytical models, since numerical models are unsuitable for longer periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs). Two methods for calculating the CTFs covered by this research are the Laplace method and the state-space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight façade elements and for the shorter time steps used in the calculation. The algorithms for both the Laplace and state-space methods are implemented in Mathematica, and the results are compared to results from EnergyPlus and TRNSYS, since these programs use similar algorithms for the calculation of a building’s energy demand. This research aims to check the efficiency of the Laplace and state-space methods for calculating the energy demand of heavyweight building elements at shorter sampling times, and it also provides the means to improve the algorithms used by these methods. As the reference point for the boundary heat flux density, the finite difference method (FDM) is used. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not properly used.
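
The abstract names the finite difference method as the reference for the boundary heat flux density; a minimal explicit 1D sketch of such a reference calculation for a heavyweight wall might look as follows. Material data and boundary temperatures are illustrative.

```python
import numpy as np

# One-dimensional explicit finite-difference model of transient conduction
# through a heavyweight concrete wall (illustrative material data).
L, n = 0.30, 31                      # wall thickness (m), nodes
k, rho, cp = 2.0, 2400.0, 880.0      # W/mK, kg/m3, J/kgK
alpha = k / (rho * cp)
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha             # satisfies stability limit dt <= dx^2/(2*alpha)

T = np.full(n, 20.0)                 # initial temperature (degC)
T_in, T_out = 20.0, 0.0              # fixed surface temperatures

hours = 48
for _ in range(int(hours * 3600 / dt)):
    T[0], T[-1] = T_in, T_out
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

q_in = -k * (T[1] - T[0]) / dx       # boundary heat flux density (W/m2)
print(f"inside-surface heat flux after {hours} h: {q_in:.1f} W/m2")
```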

Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method

Procedia PDF Downloads 133
28502 Design Optimization of Doubly Fed Induction Generator Performance by Differential Evolution

Authors: Mamidi Ramakrishna Rao

Abstract:

Doubly-fed induction generators (DFIG), due to their advantages such as speed variation and four-quadrant operation, find application in wind turbines. Besides supplying power to the grid, a DFIG has to support reactive power (kvar) under grid voltage variations, contribute minimum fault current during faults, and offer high efficiency, minimum weight, and adequate rotor protection during crowbar operation over a speed range of +20% to -20% of rated speed. To achieve optimum performance, a good electromagnetic design of the DFIG is required. In this paper, a simple heuristic global optimization method, Differential Evolution, has been used. The variables considered are lamination details such as slot dimensions, stack diameters, air gap length, and generator stator and rotor stack length. Two operating conditions have been considered: voltage and speed variations. Constraints included the reactive power supplied to the grid and limits on fault current and torque. The optimization has been executed separately for three objective functions: maximum efficiency, weight reduction, and grid fault stator currents. Subsequent calculations led to the conclusion that designs determined through differential evolution help in determining an optimum electrical design for each objective function.
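
The paper's electromagnetic design code is not available here; the sketch below shows only the optimizer side, with SciPy's differential evolution driving a fabricated loss-plus-penalty model. The variables, bounds, proxies, and constraint limit are invented placeholders.

```python
from scipy.optimize import differential_evolution

# Illustrative DFIG design variables: air-gap length (mm), stack length (m),
# slot depth (mm). The loss model is a fabricated placeholder for the
# electromagnetic design code the optimizer would normally call.
def losses(x):
    gap, stack, slot = x
    copper = 1.0 / (stack * slot)        # crude proxies only
    iron = 0.2 * stack * slot
    excitation = 0.05 / gap
    return copper + iron + excitation    # minimize loss ~ maximize efficiency

def penalized(x):
    # Constraint sketch: cap a fault-current proxy, mirroring the paper's
    # limits on fault current and torque.
    fault_proxy = 1.0 / x[0]
    penalty = 1e3 * max(0.0, fault_proxy - 2.0) ** 2
    return losses(x) + penalty

bounds = [(0.5, 2.0), (0.3, 1.5), (20.0, 60.0)]
result = differential_evolution(penalized, bounds, seed=3, tol=1e-8)
print(result.x, result.fun)
```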

Keywords: design optimization, performance, DFIG, differential evolution

Procedia PDF Downloads 150
28501 Thermal Analysis of a Graphite Calorimeter for the Measurement of Absorbed Dose for Therapeutic X-Ray Beam

Authors: I.J. Kim, B.C. Kim, J.H. Kim, C.-Y. Yi

Abstract:

Heat transfer in a graphite calorimeter is analyzed by using the finite element method. The calorimeter is modeled in 3D geometry. Quasi-adiabatic mode operation is realized in the simulation, and the temperature rises produced by the different sources, ionizing radiation and electric heaters, are compared directly. The temperature distribution caused by the electric power was much different from that caused by the ionizing radiation because of its point-like localized heating. However, the temperature rises finally read by the sensing thermistors agreed with each other to within 0.02%.

Keywords: graphite calorimeter, finite element analysis, heat transfer, quasi-adiabatic mode

Procedia PDF Downloads 430
28500 Optimization of Pretreatment Process of Napier Grass for Improved Sugar Yield

Authors: Shashikant Kumar, Chandraraj K.

Abstract:

Perennial grasses present interesting choices in the current demand for renewable and sustainable energy sources to alleviate the load of the global energy problem. The perennial Napier grass (Pennisetum purpureum Schumach) is a promising feedstock for the production of cellulosic ethanol. The conversion of biomass into glucose and xylose is a crucial stage in the production of bioethanol, and it necessitates optimal pretreatment. Alkali treatment, among the several pretreatments available, effectively reduces lignin concentration and the crystallinity of cellulose. Response surface methodology was used to optimize the alkali pretreatment of Napier grass for maximal reducing sugar production. The combined effects of three independent variables, viz. sodium hydroxide concentration, temperature, and reaction time, were studied. A second-order polynomial equation was used to fit the observed data. Maximum reducing sugar (590.54 mg/g) was obtained under the following conditions: 1.6% sodium hydroxide, a reaction period of 30 min, and 120 °C. The results showed that Napier grass is a desirable feedstock for bioethanol production.
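
A sketch of the final step, locating the maximum of a fitted second-order response surface. The quadratic coefficients below are invented so that the optimum lands near the reported conditions (1.6% NaOH, 30 min, 120 °C); they illustrate the procedure only.

```python
from scipy.optimize import minimize

# A fitted second-order response surface for reducing sugar (mg/g) as a
# function of NaOH concentration (%), time (min) and temperature (degC).
# Coefficients are invented for illustration, not the paper's model.
def sugar_yield(x):
    naoh, t, temp = x
    return (-400 + 600*naoh - 190*naoh**2
            + 10*t - 0.17*t**2
            + 8*temp - 0.033*temp**2)

res = minimize(lambda x: -sugar_yield(x), x0=[1.0, 40.0, 100.0],
               bounds=[(0.5, 3.0), (10.0, 60.0), (80.0, 140.0)])
print("optimum:", res.x, "predicted yield:", -res.fun)
```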

Keywords: Napier grass, optimization, pretreatment, sodium hydroxide

Procedia PDF Downloads 506
28499 Increasing Power Transfer Capacity of Distribution Networks Using Direct Current Feeders

Authors: Akim Borbuev, Francisco de León

Abstract:

Economic and population growth in densely-populated urban areas introduce major challenges to distribution system operators, planners, and designers. To supply added loads, utilities are frequently forced to invest in new distribution feeders. However, this is becoming increasingly more challenging due to space limitations and rising installation costs in urban settings. This paper proposes the conversion of critical alternating current (ac) distribution feeders into direct current (dc) feeders to increase the power transfer capacity by a factor as high as four. Current trends suggest that the return of dc transmission, distribution, and utilization is inevitable. Since a total system-level transformation to dc operation is not possible in a short period of time, due to the huge investments needed and utility unreadiness, this paper recommends that feeders expected to exceed their limits in the near future be converted to dc. The increase in power transfer capacity is achieved through several key differences between ac and dc power transmission systems. First, it is shown that underground cables can be operated at a higher dc voltage than ac voltage for the same dielectric stress in the insulation. Second, cable sheath losses, due to induced voltages yielding circulating currents, which can be as high as phase conductor losses under ac operation, are not present under dc. Finally, skin and proximity effects in conductors and sheaths do not exist in dc cables. The paper demonstrates that, in addition to the increased power transfer capacity, utilities substituting ac feeders with dc feeders could benefit from significantly lower costs and reduced losses. Installing dc feeders is less expensive than installing new ac feeders even when new trenches are not needed. Case studies using the IEEE 342-Node Low Voltage Networked Test System quantify the technical and economic benefits of dc feeders.
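
A back-of-the-envelope version of the per-conductor capacity argument; the paper's factor of up to four additionally involves how the existing phase conductors are reused (e.g., in a bipolar arrangement), which is not modeled here. The voltage, ampacity, power factor, and thermal-gain figures are assumed values.

```python
import math

# Per-conductor capacity comparison following the abstract's reasoning:
# for the same insulation stress, a cable rated V_rms under ac can be run
# near its peak value under dc, and dc carries no reactive component,
# sheath circulating currents, or skin/proximity-effect derating.
V_rms = 13.8e3 / math.sqrt(3)     # ac phase-to-ground voltage (V), assumed
I = 400.0                         # conductor ampacity (A), assumed
pf = 0.90                         # ac power factor, assumed

P_ac = V_rms * I * pf                    # per-conductor ac real power
V_dc = math.sqrt(2) * V_rms              # same peak dielectric stress
k_thermal = 1.10                         # assumed ampacity gain without ac losses
P_dc = V_dc * I * k_thermal

print(f"dc/ac capacity ratio per conductor: {P_dc / P_ac:.2f}")
```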

Keywords: DC power systems, distribution feeders, distribution networks, power transfer capacity

Procedia PDF Downloads 128
28498 Bottom-up Quantification of Mega Inter-Basin Water Transfer Vulnerability to Climate Change

Authors: Enze Zhang

Abstract:

Large numbers of inter-basin water transfer (IBWT) projects are constructed or proposed all around the world as solutions to water distribution and supply problems. Nowadays, as climate change warms the atmosphere, alters the hydrologic cycle, and perturbs water availability, large-scale IBWTs, which are sensitive to these water-related changes, may carry significant risk. Given this reality, IBWTs have elicited great controversy, and assessments of vulnerability to climate change are urgently needed worldwide. In this paper, we consider the South-to-North Water Transfer Project (SNWTP) in China as a case study and introduce a bottom-up vulnerability assessment framework. Key hazards and risks related to climate change that threaten future water availability for the SNWTP are first identified. Then a performance indicator is presented to quantify the vulnerability of the IBWT by taking three main elements (i.e., sensitivity, adaptive capacity, and exposure degree) into account. A probabilistic Budyko model is adapted to estimate water availability responses to a wide range of possibilities for future climate conditions in each region of the study area. After bottom-up quantification of the vulnerability based on the estimated water availability, our findings confirm that the SNWTP would greatly alleviate geographical imbalances in water availability under some moderate climate change scenarios but raise questions about whether it is a long-term solution, because the donor basin has a high level of vulnerability under extreme climate change.
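
A minimal sketch of the bottom-up step, assuming the classical Budyko curve for the long-term water balance: sample a wide range of precipitation and potential-evapotranspiration perturbations and count the futures in which donor-basin runoff falls below the committed transfer. All numbers are illustrative, not SNWTP data.

```python
import numpy as np

def budyko_evaporation_ratio(phi):
    """Budyko curve: E/P as a function of the aridity index phi = PET/P."""
    return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

rng = np.random.default_rng(7)

# Donor-basin baseline (illustrative values).
P0, PET0 = 1000.0, 800.0                 # mm/yr precipitation and potential ET

# Bottom-up ("decision scaling") scan: perturb P and PET independently
# over a wide range of plausible climate futures.
dP = rng.uniform(-0.3, 0.3, 10000)       # -30%..+30% precipitation change
dPET = rng.uniform(0.0, 0.2, 10000)      # 0%..+20% PET change (warming)
P, PET = P0 * (1 + dP), PET0 * (1 + dPET)

runoff = P * (1.0 - budyko_evaporation_ratio(PET / P))   # water availability
demand = 250.0                           # committed transfer (mm/yr equivalent)
vulnerability = np.mean(runoff < demand)
print(f"fraction of climate futures failing the transfer: {vulnerability:.2%}")
```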

Keywords: vulnerability, climate change, inter-basin water transfer, bottom-up

Procedia PDF Downloads 400
28497 Nozzle-to-Surface Distances Effect on Heat Transfer of Two-Phase Impinging Jets

Authors: Aspen W. Glaspell, Victoria J. Rouse, Brian K. Friedrich, Kyosung Choo

Abstract:

Heat transfer of a two-phase impinging jet on a flat plate surface is experimentally investigated. The effects of the nozzle-to-surface distance and volumetric quality on the Nusselt number are considered. The results show that the normalized stagnation Nusselt number increases drastically with decreasing nozzle-to-surface distance due to the jet deflection effect. Based on the experimental results, new correlations for the stagnation Nusselt number are developed as a function of the nozzle-to-surface distance.
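
The reported correlation itself is not reproduced in the abstract; as a sketch of how such a correlation is developed, the code below fits a power law Nu0 = a (H/d)^b to fabricated data points that mimic the reported trend (stagnation Nusselt number rising sharply as H/d decreases).

```python
import numpy as np
from scipy.optimize import curve_fit

# Fabricated stagnation Nusselt numbers vs dimensionless
# nozzle-to-surface distance H/d (not the paper's data).
H_d = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
Nu0 = np.array([310.0, 240.0, 185.0, 140.0, 105.0])

def correlation(x, a, b):
    return a * x**b

(a, b), _ = curve_fit(correlation, H_d, Nu0, p0=(200.0, -0.4))
print(f"Nu0 = {a:.1f} * (H/d)^{b:.3f}")
```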

Keywords: jet impingement, water jet, air assisted, circular jet

Procedia PDF Downloads 191
28496 Particle Swarm Optimization Algorithm vs. Genetic Algorithm for Image Watermarking Based on Discrete Wavelet Transform

Authors: Omaima N. Ahmad AL-Allaf

Abstract:

Over communication networks, images can easily be copied and distributed in illegal ways, so copyright protection for authors and owners is necessary. Digital watermarking techniques therefore play an important role as a valid solution to authority problems. Digital image watermarking techniques are used to hide watermarks in images to achieve copyright protection and prevent illegal copying. Watermarks need to be robust to attacks and maintain data quality. We therefore discuss in this paper two approaches to image watermarking: the first is based on Particle Swarm Optimization (PSO) and the second on a Genetic Algorithm (GA). The discrete wavelet transform (DWT) is used separately with each of the two approaches for the embedding process. Both PSO and GA use the correlation coefficient to detect the high-energy coefficients of the original image in which to hide the watermark. Many experiments were conducted for the two approaches with different values of the PSO and GA parameters. In these experiments, the PSO approach obtained better results, with a PSNR of 53 and an MSE of 0.0039, whereas the GA approach obtained a PSNR of 50.5 and an MSE of 0.0048 when using a population size of 100, 150 iterations, and 3×3 blocks. From these results we note that a small block size can affect the quality of PSO/GA-based image watermarking because it increases the search area of the watermarking image. The best PSO results were obtained with a swarm size of 100.
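
As a sketch of the optimization loop only, the following hand-rolled PSO tunes a single embedding-strength parameter against a toy fitness that trades imperceptibility (PSNR) against extraction correlation after noise. The arrays stand in for DWT coefficients, and the fitness, bounds, and PSO constants are invented rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
cover = rng.normal(128, 40, size=(64, 64))      # stand-in for DWT coefficients
watermark = rng.integers(0, 2, size=(64, 64))   # binary watermark

def fitness(alpha):
    """Imperceptibility (PSNR) plus robustness (correlation after additive
    noise) for embedding strength alpha; a toy stand-in for the paper's
    block-wise DWT fitness."""
    marked = cover + alpha * (2 * watermark - 1)
    mse = np.mean((marked - cover) ** 2)
    psnr = 10 * np.log10(255.0**2 / mse)
    attacked = marked + rng.normal(0, 5, marked.shape)
    extracted = (attacked - cover) > 0
    corr = np.corrcoef(extracted.ravel(), watermark.ravel())[0, 1]
    return psnr / 60.0 + corr

def pso(n=30, iters=50, lo=0.5, hi=20.0):
    x = rng.uniform(lo, hi, n)
    v = np.zeros(n)
    pbest, pval = x.copy(), np.array([fitness(a) for a in x])
    gbest = pbest[np.argmax(pval)]
    for _ in range(iters):
        r1, r2 = rng.random(n), rng.random(n)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([fitness(a) for a in x])
        improved = val > pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[np.argmax(pval)]
    return gbest

print("best embedding strength:", pso())
```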

Keywords: image watermarking, genetic algorithm, particle swarm optimization, discrete wavelet transform

Procedia PDF Downloads 226
28495 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and for flood modelling and prediction. One study showed that, even when using lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the level of accuracy preset by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November to February) for the period 1975 to 2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used, and their performances were compared. Cross-validation was applied to compute the errors, and the results showed that the Exponential model is the best semivariogram. It was found that the proposed method was satisfied by a network of 64 rain gauges with the minimum estimated variance, with 20 of the existing gauges removed and relocated. An existing network may contain redundant stations that make little or no contribution to network performance in providing quality data. Therefore, two different cases were considered in this study. The first case relocated the removed stations optimally into new locations to investigate their influence on the calculated estimated variance, and the second case explored the possibility of relocating all 84 existing stations into new locations to determine the optimal positions. The relocations of the stations in both cases showed that the new optimal locations reduced the estimated variance, proving that location plays an important role in determining the optimal network.
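
As a rough illustration of the variance-reduction idea, the sketch below uses simulated annealing to pick which 64 of 84 gauges to keep, with the mean squared distance from grid points to the nearest kept gauge standing in for the kriging estimation variance. Coordinates and annealing constants are invented.

```python
import numpy as np

rng = np.random.default_rng(11)
stations = rng.uniform(0, 200, size=(84, 2))    # existing gauge coordinates (km)
grid = np.stack(np.meshgrid(np.linspace(0, 200, 25),
                            np.linspace(0, 200, 25)), -1).reshape(-1, 2)

def variance_proxy(idx):
    """Mean squared distance from grid points to the nearest kept gauge,
    a cheap stand-in for the kriging estimation variance."""
    d = np.linalg.norm(grid[:, None] - stations[idx][None], axis=2).min(axis=1)
    return np.mean(d**2)

def anneal(keep=64, iters=4000, T0=1.0, cooling=0.999):
    idx = rng.choice(84, keep, replace=False)
    best, best_cost = idx.copy(), variance_proxy(idx)
    cost, T = best_cost, T0
    for _ in range(iters):
        cand = idx.copy()
        out = rng.integers(keep)                   # swap one kept gauge
        pool = np.setdiff1d(np.arange(84), cand)
        cand[out] = rng.choice(pool)
        c = variance_proxy(cand)
        if c < cost or rng.random() < np.exp((cost - c) / T):
            idx, cost = cand, c
            if c < best_cost:
                best, best_cost = cand.copy(), c
        T *= cooling
    return best, best_cost

kept, cost = anneal()
print(f"retained {len(kept)} gauges, variance proxy = {cost:.2f}")
```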

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 302
28494 Spatial Optimization of Riverfront Street Based on Inclusive Design

Authors: Lianxue Shi

Abstract:

A riverfront street has the dual characteristics of street space and waterfront space: it is not only a vital place for residents to travel and communicate but also a high-frequency space for people's leisure and entertainment. However, under efficiency-driven urban development, riverfront streets exhibit a variety of problems, such as a lack of multifunctionality, insufficient facilities, and loss of character, which fail to meet the needs of various groups of people, so their inclusiveness faces a great challenge. It is therefore evident that the optimization of riverfront street space from an inclusivity perspective is important to the establishment of human-centered, high-quality urban space. This article starts by exploring the interactive relationship between inclusive design and street space. Based on an analysis of the characteristics of riverfront street space and people's needs, it proposes four inclusive design orientations, natural inclusion, group inclusion, spatial inclusion, and social inclusion, and then constructs a design framework for the inclusive optimization of riverfront street space, aiming to create streets that are “safe and accessible, diverse and shared, distinctive and friendly, green and sustainable”. Riverfront streets in Wansheng District, Chongqing, are selected as a practice case, and specific strategies are put forward in four aspects: the creation of an accessible slow-traffic system, the provision of diversified functional services, the reshaping of emotional bonds, and the integration of ecological spaces.

Keywords: inclusive design, riverfront street, spatial optimization, street spaces

Procedia PDF Downloads 34
28493 Framework for Socio-Technical Issues in Requirements Engineering for Developing Resilient Machine Vision Systems Using Levels of Automation through the Lifecycle

Authors: Ryan Messina, Mehedi Hasan

Abstract:

This research examines the impact of using data to generate performance requirements for automation in visual inspections using machine vision. These situations concern design and how projects can smooth the transfer of tacit knowledge into an algorithm. We propose a framework for specifying machine vision systems. This framework utilizes varying levels of automation as contingency planning to reduce data processing complexity. Using data assists in extracting tacit knowledge from those who can perform the manual tasks in order to help design the system; this means that real data from the system is always referenced, which minimizes errors between participating parties. We propose using three indicators to know whether a project has a high risk of failing to meet requirements related to accuracy and reliability. All systems tested achieved better integration into operations after applying the framework.

Keywords: automation, contingency planning, continuous engineering, control theory, machine vision, system requirements, system thinking

Procedia PDF Downloads 204
28492 Optimization of the Numerical Fracture Mechanics

Authors: H. Hentati, R. Abdelmoula, Li Jia, A. Maalej

Abstract:

In this work, we present numerical simulations of quasi-static crack propagation based on the variational approach. We perform numerical simulations of a piece of brittle material without an initial crack, using an alternate minimization algorithm. Based on these numerical results, we determine the influence of numerical parameters on the location of the crack. We show the importance of optimizing the time of numerical computation and present a first attempt to develop a simple numerical method to optimize this time.

Keywords: fracture mechanics, optimization, variational approach, mechanics

Procedia PDF Downloads 606
28491 A Hybrid Algorithm Based on Greedy Randomized Adaptive Search Procedure and Chemical Reaction Optimization for the Vehicle Routing Problem with Hard Time Windows

Authors: Imen Boudali, Marwa Ragmoun

Abstract:

The Vehicle Routing Problem with Hard Time Windows (VRPHTW) is a basic distribution management problem that models many real-world problems. The objective of the problem is to serve a set of customers with known demands on minimum-cost vehicle routes while satisfying vehicle capacity and hard time windows for customers. In this paper, we propose to deal with this optimization problem by using a new hybrid stochastic algorithm based on two metaheuristics: Chemical Reaction Optimization (CRO) and the Greedy Randomized Adaptive Search Procedure (GRASP). The first method is inspired by the natural process of chemical reactions, which transforms unstable substances with excessive energy into stable ones. During this process, the molecules interact with each other through a series of elementary reactions to reach minimum energy for their existence. This property is embedded in CRO to solve the VRPHTW. In order to enhance population diversity throughout the search process, we integrated GRASP into our method. Simulation results on Solomon’s benchmark instances show the very satisfactory performance of the proposed approach.
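
The abstract names GRASP as the diversity mechanism; below is a minimal Python sketch of the GRASP construction phase alone (restricted candidate list plus randomized choice) on a made-up five-customer instance. Coordinates, demands, windows, and the RCL fraction are invented; CRO and service times are omitted.

```python
import random

random.seed(4)
DEPOT = 0
coords = {0: (50, 50), 1: (20, 30), 2: (80, 20), 3: (60, 80), 4: (30, 70), 5: (90, 60)}
demand = {1: 4, 2: 6, 3: 5, 4: 3, 5: 7}
window = {1: (0, 60), 2: (20, 90), 3: (10, 70), 4: (0, 50), 5: (40, 120)}
CAPACITY, ALPHA = 12, 0.5   # vehicle capacity; RCL fraction (0 = pure greedy)

def dist(a, b):
    (x1, y1), (x2, y2) = coords[a], coords[b]
    return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

def feasible(t, load, cur, c):
    # Hard time window: arrival must not exceed the due date.
    return load + demand[c] <= CAPACITY and t + dist(cur, c) <= window[c][1]

def grasp_construct():
    unserved, routes = set(demand), []
    while unserved:
        route, load, t, cur = [], 0, 0.0, DEPOT
        while True:
            cand = [c for c in unserved if feasible(t, load, cur, c)]
            if not cand:
                break
            cand.sort(key=lambda c: dist(cur, c))
            rcl = cand[: max(1, int(ALPHA * len(cand)))]  # restricted candidate list
            c = random.choice(rcl)                        # randomized greedy step
            t = max(t + dist(cur, c), window[c][0])       # wait if arriving early
            load += demand[c]
            route.append(c)
            unserved.discard(c)
            cur = c
        routes.append(route)
    return routes

print(grasp_construct())
```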

Keywords: benchmark problems, combinatorial optimization, vehicle routing problem with hard time windows, metaheuristics, hybridization, GRASP, CRO

Procedia PDF Downloads 411
28490 Economic Analysis of a Carbon Abatement Technology

Authors: Hameed Rukayat Opeyemi, Pericles Pilidis, Pagone Emmanuele, Agbadede Roupa, Allison Isaiah

Abstract:

Climate change represents one of the single most challenging problems facing the world today. According to the National Oceanic and Atmospheric Administration, atmospheric temperature rose almost 25% since 1958, Arctic sea ice has shrunk 40% since 1959, and global sea levels have risen more than 5.5 cm since 1990. Power plants are the major culprits of GHG emission to the atmosphere. Several technologies have been proposed to reduce the amount of GHG emitted to the atmosphere from power plants, one of which is the less researched advanced zero-emission power plant. The advanced zero-emission power plant makes use of a mixed conductive membrane (MCM) reactor, also known as an oxygen transfer membrane (OTM), for oxygen transfer. The MCM employs a membrane separation process, first introduced in 1899 when Walter Hermann Nernst investigated electric current between metals and solutions; he found that when a dense ceramic is heated, a current of oxygen molecules moves through it. In the bid to curb the amount of GHG emitted to the atmosphere, the membrane separation process was applied to the field of power engineering in the low-carbon cycle known as the advanced zero-emission power plant (AZEP) cycle. The AZEP cycle was originally invented by Norsk Hydro, Norway, and ABB Alstom Power (now known as Demag Delaval Industrial Turbomachinery AB), Sweden. The AZEP drew a lot of attention because of its ability to capture ~100% CO2; it also boasts a 30-50% cost reduction compared to other carbon abatement technologies, its efficiency penalty is not as large as that of its counterparts, and it offers almost zero NOx emissions due to the very low nitrogen concentrations in the working fluid. The advanced zero-emission power plant differs from a conventional gas turbine in that its combustor is substituted with the MCM reactor. The MCM reactor is made up of the combustor, the low-temperature heat exchanger (LTHX, referred to by some authors as the air preheater), the mixed conductive membrane responsible for oxygen transfer, the high-temperature heat exchanger and, in some layouts, the bleed gas heat exchanger. Air is taken in by the compressor and compressed to a temperature of about 723 K and a pressure of 2 MPa. The membrane area needed for oxygen transfer is reduced by increasing the temperature of 90% of the air using the LTHX; raising the temperature also facilitates oxygen transfer through the membrane. The air stream enters the LTHX through the transition duct leading to its inlet. The temperature of the air stream is then increased to about 1150 K, depending on the design point specification of the plant and the efficiency of the heat exchanging system. The amount of oxygen transported through the membrane is directly proportional to the temperature of the air going through it. The AZEP cycle was developed using Fortran, and the economic analysis was conducted using Excel and MATLAB, followed by an optimization case study. Four layouts were considered: the simple bleed gas heat exchange layout (100% CO2 capture), the bleed gas heat exchanger layout with flue gas turbine (100% CO2 capture), the pre-expansion reheating (sequential burning) layout, AZEP 85% (85% CO2 capture), and the pre-expansion reheating (sequential burning) layout with flue gas turbine, AZEP 85% (85% CO2 capture). This paper discusses a Monte Carlo risk analysis of these four possible layouts of the AZEP cycle.

Keywords: gas turbine, global warming, greenhouse gas, fossil fuel power plants

Procedia PDF Downloads 397
28489 Dimension Free Rigid Point Set Registration in Linear Time

Authors: Jianqin Qu

Abstract:

This paper proposes a rigid point set matching algorithm in arbitrary dimensions based on the idea of symmetric covariant functions. A group of functions of the points in the set is formulated using rigid invariants. Each of these functions computes a pair of correspondences from the given point set. The computed correspondences are then used to recover the unknown rigid transform parameters. Each computed point can be geometrically interpreted as a weighted mean center of the point set. The algorithm is compact, fast, and dimension free, without any optimization process. It either computes the desired transform for noiseless data in linear time or fails quickly in exceptional cases. Experimental results for synthetic data and 2D/3D real data are provided, which demonstrate potential applications of the algorithm to a wide range of problems.
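
The covariant-function construction itself is not given in the abstract; for orientation, the sketch below shows the standard SVD-based (Kabsch/Procrustes) closed-form recovery of a rigid transform from known correspondences, which, like the proposed method, works unchanged in any dimension and needs no iterative optimization. It is a reference baseline, not the paper's algorithm.

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping P onto Q in any
    dimension, via the SVD-based Kabsch/Procrustes solution
    (correspondences assumed known: row i of P matches row i of Q)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)         # mean centers
    H = (P - cP).T @ (Q - cQ)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(len(cP))
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # exclude reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Works unchanged in 2D, 3D, or higher dimensions, e.g. 4D:
rng = np.random.default_rng(2)
P = rng.normal(size=(100, 4))
theta = 0.7
R_true = np.eye(4)
R_true[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]]
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5, 3.0])
R, t = rigid_register(P, Q)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0, 0.5, 3.0]))
```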

Keywords: covariant point, point matching, dimension free, rigid registration

Procedia PDF Downloads 168
28488 Power Circuit Schemes in AC Drives Chosen by the Condition of Minimum Electric Losses

Authors: M. A. Grigoryev, A. N. Shishkov, D. A. Sychev

Abstract:

The article establishes the necessity of choosing the optimal power circuit scheme of an electric drive with a field-regulated reluctance machine. The specific weighting factors are calculated, and the linear regression dependence of the specific losses in semiconductor frequency converters on the rated current is presented. It is revealed that increasing the PWM carrier frequency improves the output current waveform but increases the losses, so the carrier frequency must be chosen in a certain way depending on the task. For the optimization task, the criterion of minimum electrical losses uses the regression dependence of the electrical losses in the frequency converter circuit at a PWM frequency of 0 Hz. The surface of the optimization criterion is presented as a function of the rated output torque of the motor and the number of phases. In electric drives with a field-regulated reluctance machine at low output power, the optimization criterion turns out to be worst for multiphase circuits. With increasing output power this trend holds true, but the optimal solutions for three-phase and multiphase circuits become insignificantly different. This is explained by the linearity of the dependence of the electrical losses on the current.
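
As a small illustration of the regression dependence mentioned, a first-order fit of specific converter losses against rated current; the sample points are invented.

```python
import numpy as np

# Linear regression of specific losses on rated current, of the kind the
# article reports for semiconductor frequency converters (data invented).
rated_current = np.array([10.0, 25.0, 50.0, 100.0, 200.0, 400.0])   # A
specific_loss = np.array([18.0, 16.1, 14.3, 12.8, 11.5, 10.9])      # W/kVA

slope, intercept = np.polyfit(rated_current, specific_loss, 1)
print(f"specific loss ~ {intercept:.2f} {slope:+.4f} * I_rated")
```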

Keywords: field regulated reluctance machine, the electrical losses, multiphase power circuit, the surface optimization criterion

Procedia PDF Downloads 295
28487 Studying the Theoretical and Laboratory Design of a Concrete Frame and Optimizing Its Design for Impact and Earthquake Resistance

Authors: Mehrdad Azimzadeh, Seyed Mohammadreza Jabbari, Mohammadreza Hosseinzadeh Alherd

Abstract:

This paper includes experimental results and analytical studies on increasing the resistance of single-span reinforced concrete frames against impact loads, on their modeling according to optimization methods, and on optimizing the behavior of these frames under impact loads. During this study, about 30 designs for different frames were modeled and built using specialized software such as ANSYS and SAP, and their behavior was examined under variable impacts. Suitable strategies were then offered for the frames in terms of concrete mixing in order to optimize the frame modeling. To reduce the weight of the frames, we had to use fine-grained stones. After designing about eight variants for each type of frame, three samples were designed with the aim of controlling the impact strength parameters, and a good frame shape for impact resistance was created: a solid frame with stout legs spaced as far apart as possible, with a 3-degree gradient in the upper part of the beam.

Keywords: optimization, reinforced concrete, optimization methods, impact load, earthquake

Procedia PDF Downloads 184
28486 Molecular and Electronic Structure of Chromium (III) Cyclopentadienyl Complexes

Authors: Salem El-Tohami Ashoor

Abstract:

Here we show that the reduction of [Cr(ArN(CH2)3NAr)2Cl2] (1), where Ar = 2,6-Pri2C6H3, in the presence of NaCp (2) (Cp = C5H5, cyclopentadienyl) gives a complex with an η5 coordination between the Cp co-ligand and the chromium metal center. This structure was optimized using density functional theory (DFT) and then compared with experimental data; the other possible interactions of Cp with the metal ion, η1, η2, η3 and η4, were also tested under the same optimization scheme. Methods explicitly including electron correlation are necessary for more accurate calculations, the B3LYP (Becke, Lee-Yang-Parr) level of theory often being used to obtain more exact results. The electronic energies of these complexes were estimated because this approach accounts for electron correlation interactions. The optimized [Cr(ArN(CH2)3NAr)2(η5-Cp)] (Ar = 2,6-Pri2C6H3 and Cp = C5H5) was found to be thermally more stable than the other chromium cyclopentadienyl coordination modes. The Dewar-Chatt-Duncanson model was used as the basis of the molecular orbital (MO) analysis, showing the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO).

Keywords: Chromium(III) cyclopentadienyl complexes, DFT, MO, HOMO, LUMO

Procedia PDF Downloads 506
28485 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures

Authors: Mariem Saied, Jens Gustedt, Gilles Muller

Abstract:

We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking to the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we coupled the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern and the suggested optimizations, make of Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
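
For reference, the two grid traversals Dido now handles differ only in where updated values are read from; a plain NumPy sketch (no ORWL, no distribution, a 5-point Laplace stencil) of both styles:

```python
import numpy as np

def jacobi_sweep(u):
    """Jacobi traversal: every update reads only the previous iterate."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v

def gauss_seidel_sweep(u):
    """Gauss-Seidel traversal: updates reuse values already computed in
    the current sweep, so the loop order carries a data dependence."""
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return u

grid = np.zeros((34, 34))
grid[0, :] = 100.0                  # fixed hot boundary (Dirichlet)
for _ in range(200):
    grid = jacobi_sweep(grid)       # or gauss_seidel_sweep(grid)
print(f"centre value: {grid[17, 17]:.3f}")
```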

Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments

Procedia PDF Downloads 127
28484 Profit-Based Artificial Neural Network (ANN) Trained by Migrating Birds Optimization: A Case Study in Credit Card Fraud Detection

Authors: Ashkan Zakaryazad, Ekrem Duman

Abstract:

A typical classification technique ranks the instances in a data set according to the likelihood of belonging to one (positive) class. A credit card (CC) fraud detection model ranks the transactions in terms of the probability of being fraudulent. This approach is often criticized, however, because firms do not care about fraud probability but about the profitability or costliness of detecting a fraudulent transaction. The key contribution of this study is to focus on profit maximization in the model building step. The artificial neural network proposed in this study works on the basis of profit maximization instead of minimizing the error of prediction. Moreover, some studies have shown that the back-propagation algorithm, like other gradient-based algorithms, usually gets trapped in local optima, and that swarm-based algorithms are more successful in this respect. In this study, we train our profit-maximization ANN using Migrating Birds Optimization (MBO), which was introduced to the literature recently.
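
A minimal sketch of the profit-based objective under an invented cost model: a logistic scoring function is trained by plain gradient ascent on expected profit (amount recovered on true frauds minus a fixed inspection cost per alert) instead of SSE. The MBO trainer itself is not reproduced here, and all figures are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2000
X = rng.normal(size=(n, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = (1 / (1 + np.exp(-(X @ w_true))) > rng.random(n)).astype(float)  # 1 = fraud
amount = rng.lognormal(4.0, 1.0, n)      # transaction amounts (profit at stake)

COST_INSPECT = 10.0                      # assumed cost of investigating an alert

def expected_profit(w):
    """Alert with probability p(x): recover the amount on true frauds,
    pay an inspection cost on every alert (invented cost model)."""
    p = 1 / (1 + np.exp(-(X @ w)))
    return np.mean(p * (y * amount - COST_INSPECT))

# Gradient ascent on profit: a simple stand-in for the MBO trainer.
w, lr = np.zeros(3), 1e-4
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    grad = X.T @ ((y * amount - COST_INSPECT) * p * (1 - p)) / len(X)
    w += lr * grad
print("weights:", w, "profit/transaction:", expected_profit(w))
```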

Keywords: neural network, profit-based neural network, sum of squared errors (SSE), MBO, gradient descent

Procedia PDF Downloads 475
28483 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm

Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi

Abstract:

To find the location and severity of damage that occurs in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are more common, economical, and reliable for detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a GA and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from the static data. Damage in a structure changes its stiffness, so this method determines damage based on changes in the structural stiffness parameters. Changes in the static response caused by structural damage are used to set up a system of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several scenarios were defined for damage detection (a single scenario and multiple scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; if the best result is obtained, the method is reliable. This strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that damage detection in multiple-damage scenarios is also very efficient. Even the existence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.

Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm

Procedia PDF Downloads 237
28482 Applications of Evolutionary Optimization Methods in Reinforcement Learning

Authors: Rahul Paul, Kedar Nath Das

Abstract:

The paradigm of Reinforcement Learning (RL) has become prominent in training intelligent agents to make decisions in environments that are both dynamic and uncertain. The primary objective of RL is to optimize the policy of an agent in order to maximize the cumulative reward it receives throughout a given period. Nevertheless, the process of optimization presents notable difficulties as a result of the inherent trade-off between exploration and exploitation, the presence of extensive state-action spaces, and the intricate nature of the dynamics involved. Evolutionary Optimization Methods (EOMs) have garnered considerable attention as a supplementary approach to tackle these challenges, providing distinct capabilities for optimizing RL policies and value functions. The ongoing advancement of research in both RL and EOMs presents an opportunity for significant advancements in autonomous decision-making systems. The convergence of these two fields has the potential to have a transformative impact on various domains of artificial intelligence (AI) applications. This article highlights the considerable influence of EOMs in enhancing the capabilities of RL. Taking advantage of evolutionary principles enables RL algorithms to effectively traverse extensive action spaces and discover optimal solutions within intricate environments. Moreover, this paper emphasizes the practical implementations of EOMs in the field of RL, specifically in areas such as robotic control, autonomous systems, inventory problems, and multi-agent scenarios. The article highlights the utilization of EOMs in facilitating RL agents to effectively adapt, evolve, and uncover proficient strategies for complex tasks that may pose challenges for conventional RL approaches.
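
As one concrete instance of an EOM applied to RL, the sketch below runs a basic evolution strategy on an invented toy control task: perturb the policy parameters, evaluate episode returns, and step along the return-weighted average perturbation. The environment, policy form, and hyperparameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_return(theta):
    """Toy RL task: a linear policy must track a moving target for 20 steps.
    Reward is negative squared tracking error (environment is invented)."""
    state, total = np.array([1.0, -1.0]), 0.0
    for t in range(20):
        action = float(theta @ state)
        target = np.sin(0.3 * t)
        total -= (action - target) ** 2
        state = np.array([np.cos(0.3 * t), state[0]])   # simple dynamics
    return total

# Evolution strategy: sample perturbations of the policy, move along the
# reward-weighted average of the perturbations.
theta, sigma, lr, pop = np.zeros(2), 0.1, 0.05, 50
for gen in range(200):
    eps = rng.normal(size=(pop, 2))
    rewards = np.array([episode_return(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    theta += lr / (pop * sigma) * eps.T @ rewards
print("learned policy weights:", theta, "return:", episode_return(theta))
```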

Keywords: machine learning, reinforcement learning, loss function, optimization techniques, evolutionary optimization methods

Procedia PDF Downloads 81
28481 Heat Transfer Coefficients of Layers of Greenhouse Thermal Screens

Authors: Vitaly Haslavsky, Helena Vitoshkin

Abstract:

The total energy saving effect of different types of greenhouse thermal/shade screens was determined by measuring and calculating the overall heat transfer coefficients (U-values) for single and several layers of screens. The measurements were carried out using the hot box method, and the calculations were performed according to the ISO Standard 15099. The goal was to examine different types of materials with a wide range of thermal radiation properties used for thermal screens in combination with a dehumidification system in order to improve greenhouse insulation. The experimental results were in good agreement with the calculated heat transfer coefficients. It was shown that a high amount of infra-red (IR) radiation can be blocked by the greenhouse covering material in combination with moveable thermal screens. The aluminum foil screen could be replaced by transparent screens, depending on shading requirements. The results indicated that using a single layer, the U-value was reduced by approximately 70% compared to covering material alone, while the contributions of additional screen layers containing aluminum foil strips could reduce the U-value by approximately 90%. It was shown that three screen layers are sufficient for effective insulation.
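
The measured U-values themselves are in the paper; the sketch below only illustrates the series-resistance bookkeeping behind such numbers, with invented layer resistances chosen so that one screen layer cuts U by roughly 70% and three layers by roughly 90%, in line with the reported reductions.

```python
# Overall heat transfer coefficient (U-value) of a covering plus thermal
# screen layers treated as thermal resistances in series, in the spirit of
# ISO 15099. Resistance values are illustrative, not the measured ones.
R_SI, R_SE = 0.13, 0.04          # inside/outside surface resistances (m2K/W)
R_COVER = 0.05                   # single greenhouse covering
R_SCREEN = 0.50                  # one screen layer plus enclosed air gap

def u_value(n_screens):
    r_total = R_SI + R_COVER + n_screens * R_SCREEN + R_SE
    return 1.0 / r_total

u0 = u_value(0)
for n in range(4):
    u = u_value(n)
    print(f"{n} screen layer(s): U = {u:.2f} W/m2K "
          f"({100 * (1 - u / u0):.0f}% reduction)")
```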

Keywords: greenhouse insulation, heat loss, thermal screens, U-value

Procedia PDF Downloads 113
28480 Initial Dip: An Early Indicator of Neural Activity in Functional Near Infrared Spectroscopy Waveform

Authors: Mannan Malik Muhammad Naeem, Jeong Myung Yung

Abstract:

Functional near-infrared spectroscopy (fNIRS) holds a favorable position among non-invasive brain imaging techniques. The concentration changes of oxygenated and de-oxygenated hemoglobin during a particular cognitive activity are the basis of this neuro-imaging modality. Two wavelengths of near-infrared light can be used with the modified Beer-Lambert law to infer, indirectly, the status of neuronal activity inside the brain. The temporal resolution of fNIRS is very good for real-time brain-computer interface applications. Its portability, low cost, and acceptable temporal resolution put fNIRS in a strong position among neuro-imaging modalities. In this study, an optimization model for the impulse response function has been used to estimate and predict the initial dip using fNIRS data. In addition, the activity strength parameter related to a motor-based cognitive task has been analyzed. We found an initial dip that remains around 200-300 milliseconds and better localizes neural activity.
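
A sketch of the two-wavelength modified Beer-Lambert inversion the abstract refers to: optical-density changes at two wavelengths are solved for the HbO and HbR concentration changes. The extinction coefficients, distances, and OD values are textbook-order example numbers, not the study's.

```python
import numpy as np

# Modified Beer-Lambert law: dOD(lambda) = eps(lambda) . dC * L * DPF.
# Extinction coefficients (1/(mM*cm)) and path-length factors below are
# typical published-order values, used here for illustration only.
E = np.array([[1.4866, 3.8437],    # eps_HbO, eps_HbR at 760 nm
              [2.5264, 1.7986]])   # eps_HbO, eps_HbR at 850 nm
L, DPF = 3.0, 6.0                  # source-detector distance (cm), diff. path factor

def hb_changes(dOD):
    """Solve dOD = (E * L * DPF) @ [dHbO, dHbR] for the two chromophores."""
    return np.linalg.solve(E * L * DPF, dOD)

dOD = np.array([0.012, 0.019])     # measured OD changes at 760/850 nm (example)
dHbO, dHbR = hb_changes(dOD)
print(f"dHbO = {dHbO * 1000:.3f} uM, dHbR = {dHbR * 1000:.3f} uM")
```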

Keywords: fNIRS, brain-computer interface, optimization algorithm, adaptive signal processing

Procedia PDF Downloads 226
28479 Big Data in Construction Project Management: The Colombian Northeast Case

Authors: Sergio Zabala-Vargas, Miguel Jiménez-Barrera, Luz Vargas-Sánchez

Abstract:

In recent years, information related to project management in organizations has been increasing exponentially. Performance data, management statistics, and indicator results have made the collection, analysis, traceability, and dissemination of information essential for project managers. In this sense, there are current trends toward facilitating efficient decision-making with emerging technologies, such as machine learning, data analytics, data mining, and Big Data; the latter is the most interesting for this project. This research is part of the thematic line Construction Methods and Project Management. Many authors point out the relevance that the use of emerging technologies, such as Big Data, has gained in recent years in project management in the construction sector, with the main focus on the optimization of time, scope, and budget and, in general, on mitigating risks. This research was developed in the northeastern region of Colombia, South America. The first phase was aimed at diagnosing the use of emerging technologies (Big Data) in the construction sector. In Colombia, the construction sector represents more than 50% of the productive system, and more than 2 million people participate in this economic segment. A quantitative approach was used: a survey was applied to a sample of 91 companies in the construction sector. Preliminary results indicate that the use of Big Data and other emerging technologies is very low, and also that there is interest in modernizing project management. There is evidence of a correlation between interest in using new data management technologies and the incorporation of Building Information Modeling (BIM). The next phase of the research will allow the generation of guidelines and strategies for the incorporation of technological tools in the construction sector in Colombia.

Keywords: big data, building information modeling, technology, project management

Procedia PDF Downloads 128
28478 Improved Predictive Models for the IRMA Network Using Nonlinear Optimisation

Authors: Vishwesh Kulkarni, Nikhil Bellarykar

Abstract:

Cellular complexity stems from the interactions among thousands of different molecular species. Thanks to the emerging fields of systems and synthetic biology, scientists are beginning to unravel these regulatory, signaling, and metabolic interactions and to understand their coordinated action. Reverse engineering of biological networks has several benefits, but poor data quality combined with the difficulty of reproducing the data limits the applicability of these methods. A few years back, many of the commonly used predictive algorithms were tested on a network constructed in the yeast Saccharomyces cerevisiae (S. cerevisiae) to resolve this issue. The network was a synthetic network of five genes regulating each other, built for the so-called in vivo reverse-engineering and modeling assessment (IRMA). The network was constructed in S. cerevisiae since it is a simple and well-characterized organism. The synthetic network included a variety of regulatory interactions, thus capturing the behaviour of larger eukaryotic gene networks on a smaller scale. We derive a new set of algorithms by solving a nonlinear optimization problem and show how these algorithms outperform other algorithms on these datasets.

Keywords: synthetic gene network, network identification, optimization, nonlinear modeling

Procedia PDF Downloads 156