Search results for: dynamic multi-objective optimization
4346 Advantages of Neural Network Based Air Data Estimation for Unmanned Aerial Vehicles
Authors: Angelo Lerro, Manuela Battipede, Piero Gili, Alberto Brandl
Abstract:
Redundancy requirements for UAVs (Unmanned Aerial Vehicles) are difficult to meet because of the generally restricted space and allowable weight for aircraft systems, which limits their exploitation. Essential equipment such as the Air Data, Attitude and Heading Reference System (ADAHRS) requires several external probes to measure significant quantities such as the angle of attack or the sideslip angle. Previous research focused on the analysis of a patented technology named Smart-ADAHRS (Smart Air Data, Attitude and Heading Reference System) as an alternative method to obtain reliable and accurate estimates of the aerodynamic angles. This solution is based on an innovative sensor fusion algorithm implementing soft computing techniques, and it yields a simplified inertial and air data system with fewer external devices. In fact, only one external source of dynamic and static pressure is needed. This paper focuses on the benefits that would be gained by implementing this system in UAV applications. A simplification of the entire ADAHRS architecture reduces the overall cost while improving safety performance. Smart-ADAHRS has currently reached Technology Readiness Level (TRL) 6. Real flight tests took place on an ultralight aircraft equipped with suitable Flight Test Instrumentation (FTI). The output of the algorithm on the flight test measurements demonstrates the capability of this fusion algorithm to embed multiple physical and virtual sensors in a single device. Any source of dynamic and static pressure can be integrated with this system, gaining a significant improvement in versatility.
Keywords: aerodynamic angles, air data system, flight test, neural network, unmanned aerial vehicle, virtual sensor
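The abstract describes a neural-network virtual sensor that estimates aerodynamic angles from a single dynamic/static pressure source plus inertial data. The paper does not publish its model, so the sketch below is only a minimal illustration of that idea with scikit-learn; the feature set, network size, and synthetic training data are assumptions, not the authors' Smart-ADAHRS implementation.

```python
# Minimal sketch of a neural-network "virtual sensor" for the angle of attack.
# Assumption: inputs are dynamic pressure, static pressure and body-axis rates;
# the real Smart-ADAHRS feature set and architecture are not public.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.uniform(-1.0, 1.0, size=(n, 5))        # [q_dyn, p_static, p, q, r] (normalized, synthetic)
alpha = 8.0 * X[:, 0] - 3.0 * X[:, 2] + 0.5 * rng.normal(size=n)  # synthetic AoA target [deg]

X_train, X_test, y_train, y_test = train_test_split(X, alpha, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("test R^2:", model.score(X_test, y_test))   # fit quality of the virtual sensor
```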
Procedia PDF Downloads 224
4345 A Literature Review of Precision Agriculture: Applications of Diagnostic Diseases in Corn, Potato, and Rice Based on Artificial Intelligence
Authors: Carolina Zambrana, Grover Zurita
Abstract:
Food loss caused by deficient agricultural production is one of the major problems worldwide. It puts the population's food security and the efficiency of farming investments at risk. This food security is expected to be achieved through each country's own efficient production, which will have an impact on the well-being of its population and, thus, also on food sovereignty. Production losses in quantity and quality occur due to the lack of efficient detection of diseases at an early stage. Improving agricultural efficiency with traditional methods is very difficult, since they are time-consuming and imprecise in detecting the main diseases, especially when the production areas are extensive. Therefore, the main objective of this research is to perform a systematic literature review, covering the last five years, of Precision Agriculture (PA), in order to understand the state of the art of the new technologies, procedures, and optimization processes based on Artificial Intelligence (AI). The study focuses on the diagnosis of diseases in corn, potato, and rice. The extensive literature review is performed on the Elsevier, Scopus, and IEEE databases. In addition, the research focuses on advanced digital image processing and on the development of software and hardware for PA. Convolutional neural networks receive special attention due to their outstanding diagnostic results. Moreover, the studied data will be combined with artificial intelligence algorithms for the automatic diagnosis of crop quality. Finally, precision agriculture, with technology applied to the agricultural sector, allows the land to be exploited efficiently. Such a system requires sensors, drones, data acquisition cards, and global positioning systems. This research seeks to merge different areas of science (control engineering, electronics, digital image processing, and artificial intelligence) for the development, in the near future, of a low-cost image measurement system that allows the optimization of crops with AI.
Keywords: precision agriculture, convolutional neural network, deep learning, artificial intelligence
Procedia PDF Downloads 85
4344 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a drawback: the space of possible states and actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that different and complex road scenarios can be modeled in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy, and the learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we used TORCS (The Open Racing Car Simulator), which provides a strong foundation for testing our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: the reward is greater, and the acceleration, steering angle, and braking are more stable than with the other algorithms, which means that the agent learns to drive in a better and more efficient way. Additionally, we compiled a dataset from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a basis for more complex road scenarios. Furthermore, it can be extended into the field of computer vision, using images to find the best policy.
Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
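Of the three algorithms compared, tabular Q-learning has the simplest update rule, illustrated in the sketch below. It is a generic illustration, not the authors' TORCS setup; the state discretization, reward, and hyperparameters are assumptions.

```python
# Minimal sketch of the tabular Q-learning update used as a baseline in the comparison.
# The discretization and hyperparameters are illustrative only.
import numpy as np

n_states, n_actions = 100, 5          # assumed discretization of sensor readings / actions
alpha, gamma, epsilon = 0.1, 0.99, 0.1
Q = np.zeros((n_states, n_actions))

def q_learning_step(s, a, r, s_next):
    """One Bellman backup: Q(s,a) <- Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])

def select_action(s):
    """Epsilon-greedy exploration over the discrete action set."""
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(Q[s]))
```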
Procedia PDF Downloads 153
4343 The Application of Pareto Local Search to the Single-Objective Quadratic Assignment Problem
Authors: Abdullah Alsheddy
Abstract:
This paper presents the employment of Pareto optimality as a strategy to help (single-objective) local search escape local optima. Instead of plain local search, Pareto local search is applied to solve the quadratic assignment problem, which is multi-objectivized by adding a helper objective. The additional objective is defined as a function of the primary one with augmented penalties that are dynamically updated.
Keywords: Pareto optimization, multi-objectivization, quadratic assignment problem, local search
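The abstract does not detail the helper objective, so the sketch below only illustrates the generic Pareto local search loop on a bi-objective problem: a candidate is kept when no archived solution dominates it. The neighborhood and both objective functions are placeholders, not the paper's formulation.

```python
# Generic Pareto local search sketch for a multi-objectivized permutation problem.
# The two objectives below are stand-ins for the primary QAP objective and the helper objective.
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def swap_neighbor(p):
    """Random 2-swap neighborhood commonly used for permutation encodings."""
    q = p[:]
    i, j = random.sample(range(len(q)), 2)
    q[i], q[j] = q[j], q[i]
    return q

def pareto_local_search(initial, objectives, neighbors, iterations=1000):
    archive = {tuple(initial): objectives(initial)}      # non-dominated archive
    for _ in range(iterations):
        base = random.choice(list(archive))
        cand = neighbors(list(base))
        f_cand = objectives(cand)
        if any(dominates(f, f_cand) for f in archive.values()):
            continue                                     # rejected: dominated by the archive
        archive = {s: f for s, f in archive.items() if not dominates(f_cand, f)}
        archive[tuple(cand)] = f_cand
    return archive

front = pareto_local_search(
    list(range(8)),
    lambda p: (sum(i * p[i] for i in range(len(p))),          # stand-in primary objective
               sum(abs(p[i] - i) for i in range(len(p)))),    # stand-in helper objective
    swap_neighbor)
print(len(front), "non-dominated solutions in the archive")
```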
Procedia PDF Downloads 471
4342 Study of Individual Parameters on the Enzymatic Glycosidation of Betulinic Acid by Novozyme-435
Authors: A. U. Adamu, Hamisu Abdu, A. A. Saidu
Abstract:
The enzymatic synthesis of 3-O-β-D-glucopyranoside-betulinic acid using Novozyme-435 as a catalyst was studied. The effects of various parameters, such as substrate molar ratio, reaction temperature, reaction time, enzyme reuse, and enzyme loading, were investigated. The optimum reaction conditions for the enzymatic glycosidation of betulinic acid in an organic solvent using Novozym-435 were found to be a 1:1.2 substrate molar ratio, 55 °C, 24 h, and 180 mg of enzyme, with a conversion of 88.69%.
Keywords: betulinic acid, glycosidation, novozyme-435, optimization
Procedia PDF Downloads 430
4341 Light Harvesting Titanium Nanocatalyst for Remediation of Methyl Orange
Authors: Brajesh Kumar, Luis Cumbal
Abstract:
An eco-friendly, Citrus paradisi peel extract mediated synthesis of TiO2 nanoparticles under sonication is reported. UV-vis spectroscopy, transmission electron microscopy, dynamic light scattering, and X-ray analyses were performed to characterize the formation of the TiO2 nanoparticles. The particles are almost spherical in shape, with sizes of 60–140 nm, and the XRD peak at 2θ = 25.363° confirms the characteristic facets of the anatase form. The synthesized nanocatalyst is highly active in the decomposition of methyl orange (64 mg/L) in sunlight (~73% after 2.5 hours).
Keywords: eco-friendly, TiO2 nanoparticles, citrus paradisi, TEM
Procedia PDF Downloads 530
4340 On the Topological Entropy of Nonlinear Dynamical Systems
Authors: Graziano Chesi
Abstract:
The topological entropy plays a key role in linear dynamical systems, allowing one to establish the existence of stabilizing feedback controllers for linear systems in the presence of communications constraints. This paper addresses the determination of a robust value of the topological entropy in nonlinear dynamical systems, specifically the largest value of the topological entropy over all linearized models in a region of interest of the state space. It is shown that a sufficient condition for establishing upper bounds of the sought robust value of the topological entropy can be given in terms of a semidefinite program (SDP), which belongs to the class of convex optimization problems.Keywords: non-linear system, communication constraint, topological entropy
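For reference, the notion of topological entropy the abstract builds on reduces, for a linearized model, to the sum of the logarithms of the unstable eigenvalue magnitudes, and the robust value sought in the paper is the maximum of this quantity over all linearizations in the region of interest. The expressions below are the standard background formulas (base-2 logarithms are a common convention in the data-rate literature), stated here as context rather than taken from the paper.

```latex
% Topological entropy of a linear(ized) system x_{k+1} = A x_k,
% and the robust value over a region of interest X (background formulas, not the paper's SDP).
\begin{align}
  h(A) &= \sum_{i:\ |\lambda_i(A)| > 1} \log_2 |\lambda_i(A)|, \\
  h_{\mathrm{rob}} &= \sup_{x \in X} \, h\!\left(\frac{\partial f}{\partial x}(x)\right).
\end{align}
```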
Procedia PDF Downloads 325
4339 Nationalization of the Social Life in Argentina: Accumulation of Capital, State Intervention, Labor Market, and System of Rights in the Last Decades
Authors: Mauro Cristeche
Abstract:
This work begins with a very simple question: how does the State spend? Argentina is witnessing a process of growing nationalization of social life, so it is necessary to look for explanations of this phenomenon in the specific dynamic of the capitalist mode of production in Argentina and its transformations in recent decades. The question then becomes: what happened in Argentina that could explain this phenomenon? Since the seventies, capital growth in Argentina has faced deep competitive problems. Until that moment, agrarian wealth had worked as a compensation mechanism, but it began to reach its limits. In the meantime, important demographic and structural changes took place. The strategy of the capitalist class came to rely on the cheapness of the labor force as the main source of compensation for this weakness. As a result, a tendency toward the worsening of living conditions and the fragmentation of the working class developed, manifested in unemployment, underemployment, and, most notably, the fall in the purchasing power of wages. As a consequence, it is suggested that the role of the State became stronger and public expenditure increased, as a historical trend, because the State has to intervene to face the contradictions and constant growth problems posed by the development of capitalism in Argentina. On the one hand, the State has to guarantee the process of buying the cheapened workforce and, at the same time, the process of reproduction of the working class. On the other hand, it has to help reproduce the individual capitals but also needs to 'attack' them in different ways. This is why the State is said to act as the general political representative of the national portion of the total social capital. What will be studied is the dynamic of the intervention of the Argentine State in the context of this particular national process of capital growth and its dynamics in the last decades. This paper aims to show the main general causes that could explain the phenomenon of nationalization of social life and how it has affected the living conditions of the working class and the system of rights.
Keywords: Argentina, nationalization, public policies, rights, state
Procedia PDF Downloads 140
4338 Displacement Based Design of a Dual Structural System
Authors: Romel Cordova Shedan
Abstract:
The traditional seismic design methodology is Force-Based Design (FBD). Displacement-Based Design (DBD) is a seismic design approach that considers structural damage so that the structure achieves a failure mechanism before collapse. It is easier to quantify the damage of a structure with displacements rather than forces; therefore, for a structure to achieve its inelastic design displacement with good ductility, some damage is necessary. The first part of this investigation discusses the differences between the DBD and FBD methodologies and some advantages of DBD. The second part presents a case study of a 5-story dual-system building that is regular in plan and elevation. The building is located in a seismic zone whose design acceleration on firm soil is 45% of the acceleration of gravity. Both methodologies are then applied to the case study to compare displacements, shear forces, and overturning moments. In the third part, Dynamic Time History Analysis (DTHA) is performed to compare the displacements with those of the DBD and FBD methodologies. Three accelerograms were used, with their accelerations scaled to be compatible with the design spectrum. Then, using the ASCE 41-13 guidelines, plastic hinges were assigned to the structure. Finally, the results of both methodologies for the case study are compared. It is important to note that the seismic performance level of the building obtained with DBD is higher than with the FBD method, since the DBD drifts are on the order of 2.0% to 2.5% compared with FBD drifts of 0.7%; consequently, the DBD displacements are greater than those of the FBD method. The shear forces obtained with DBD are also greater than those from the FBD methodology. These strengths ensure that the structure achieves its design inelastic displacements, because they were obtained from a displacement spectrum reduction factor that depends on the damping and ductility of the dual system. The DBD displacements for the case study are also greater than those from FBD and DTHA, confirming that the seismic performance level of the building obtained with DBD is higher than with the FBD method.
Keywords: displacement-based design, displacement spectrum reduction factor, dynamic time history analysis, force-based design
Procedia PDF Downloads 229
4337 The Effect of Manure Loaded Biochar on Soil Microbial Communities
Authors: T. Weber, D. MacKenzie
Abstract:
This paper describes an advanced simulation environment built with electronic systems (microcontroller, operational amplifiers, and FPGA). The simulation is used for the behaviour of non-linear dynamic systems with the required observer structure, working as a parallel real-time simulation based on a state-space representation. The proposed model also covers electrodynamic effects, including ionising effects and the eddy current distribution. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in such systems in real time. The spatial temperature distribution may also be used for further purposes. With this system, uncertainties and disturbances may be determined. This provides a more precise estimation of the system states and, additionally, an estimation of the ionising disturbances that arise from radiation effects in space systems. The results have also shown that a system can be developed specifically for the real-time calculation (estimation) of the radiation effects alone. Electronic systems can be damaged by impacts with the charged particle flux in a space or radiation environment. A Total Ionising Dose (TID) of 1 Gy and Single-Event Transient (SET)-free operation up to 50 MeV·cm²/mg may be required to assure certain functions. Single-Event Latch-up (SEL) results from the placement of several transistors in the shared substrate of an integrated circuit, where ionising radiation can activate a parasitic thyristor; the resulting short circuit between semiconductor elements can destroy the device if no protection and monitoring are in place. Single-Event Burnout (SEB), on the other hand, increases the current between the drain and source of a MOSFET and destroys the component in a short time. A Single-Event Gate Rupture (SEGR) can likewise destroy the dielectric of a semiconductor. In order to react to these processes, the presence of ionising radiation and the dose must be calculated within a short time. For this purpose, sensors may be used for a realistic evaluation of the diffusion and ionising effects in the test system. A Peltier element is used to evaluate the dynamic temperature increase (dT/dt), from which a measure of the ionisation processes, and thus of the radiation, is derived. In addition, a piezo element may be used to record highly dynamic vibrations and oscillations caused by impacts of the charged particle flux. All available sensors are also used to calibrate the spatial distributions. From the measured values and the known locations of the sensors, the entire spatial distribution can be calculated retroactively or more accurately. Based on this information, the type of ionisation and its direct effect on the systems can be determined, and preventive processes, up to a shutdown, can be activated. The results show possibilities for performing higher-quality and faster simulations independently of the space system and radiation environment. The paper additionally gives an overview of the diffusion effects and their mechanisms.
Keywords: cattle, biochar, manure, microbial activity
Procedia PDF Downloads 107
4336 Characterisation of Chitooligomers Prepared with the Aid of Cellulase, Xylanase and Chitosanase
Authors: Anna Zimoch-Korzycka, Dominika Kulig, Andrzej Jarmoluk
Abstract:
The aim of this study was to obtain chitooligosaccharides with improved functional properties from chitosan using three different enzyme preparations and to compare the products of enzymatic hydrolysis. Commercially available cellulase (CL), xylanase (X), and chitosanase (CS) preparations were used to investigate hydrolytic activity on low-molecular-weight chitosan (CH) with a degree of deacetylation (DD) of 75-85%. It has been reported that CL and X have side activities of other enzymes, such as β-glucanase or β-glucosidase, while the CS preparation has a side activity of chitinase. Each preparation was used at 1000 U of activity under the same reaction conditions. The degree of deacetylation and the molecular weight of the chitosan were determined using titration and viscometric methods, respectively. The hydrolytic activity of the enzyme preparations on chitosan was monitored by dynamic viscosity measurement. After a 4 h reaction with stirring, the solutions were filtered, and the chitosan oligomers were separated with methanol into two fractions: a precipitate (A) and a supernatant (B). Fourier-transform infrared spectroscopy was used to characterize the structural changes of the chitosan oligomer fractions and the initial chitosan. Furthermore, the solubility of the lyophilized hydrolytic mixture (C) and of the two chitooligomer fractions (A, B) from each enzymatic hydrolysis was assayed. The antioxidant activity of the chitosan oligomers was evaluated as DPPH free radical scavenging activity. The dynamic viscosity measured after addition of the enzyme preparations to the chitosan solution decreased dramatically over time in the sample with X compared to the solution without enzyme. For the mixtures with CL and CS, lower viscosities were also recorded, but not as low as those with X. The A and B fractions from the xylanase hydrolysis showed the most similar viscosities, 15 mPa·s and 9 mPa·s, respectively. Structural changes in the chitosan oligomers A, B, and C, and their differences related to the various enzyme preparations used, were confirmed. The A fractions could not be filtered, so their water solubility was not recorded. The solubility of the supernatants was approximately 95%, higher than that of the hydrolytic mixture. The DPPH radical scavenging effect of the A, B, and C samples was highest for the X products, at approximately 13%, 17%, and 19%, respectively. In summary, a mixture of chitooligomers may be useful for the design of edible protective coatings due to the improved biophysical properties.
Keywords: cellulase, xylanase, chitosanase, chitosan, chitooligosaccharides
Procedia PDF Downloads 332
4335 Dynamic of an Invasive Insect Gut Microbiome When Facing to Abiotic Stress
Authors: Judith Mogouong, Philippe Constant, Robert Lavallee, Claude Guertin
Abstract:
The emerald ash borer (EAB) is an exotic wood-boring insect native to China, associated with severe environmental and economic damage in North America. Beetles are known to be vectors of microbial communities related to their adaptive capacities. It is now established that environmental stress factors may induce physiological events in the host trees, such as phytochemical changes, which in turn may affect the establishment behaviour of herbivorous insects. Considering the number of insects collected on ash trees (insect density) as an abiotic factor related to stress damage, the aim of our study was to explore the dynamics of the EAB gut microbial community genome (microbiome) in response to that factor and to monitor its diversity. Insects were trapped using specific green Lindgren© traps. A gradient of the captured insect population along the St. Lawrence River was used to create three levels of insect density (low, intermediate, and high). After dissection, total DNA extracted from the insect guts at each level was sent for amplicon sequencing of the bacterial 16S rRNA gene and the fungal ITS2 region. The composition of the microbial communities appeared highly diversified among samples, with the Simpson index significantly different across the three density levels for bacteria. In addition, bacteria were represented by seven phyla and twelve classes, whereas fungi were represented by two phyla and seven known classes. Using principal coordinate analysis (PCoA) based on Bray-Curtis distances of the 16S rRNA sequences, we observed a significant variation in the structure of the bacterial communities depending on insect density. Moreover, the analysis showed significant correlations between some bacterial taxa and the three classes of insect density. This study is the first to present a complete overview of the bacterial and fungal communities associated with the gut of EAB based on culture-independent methods, and to correlate those communities with a potential stress factor of the host trees.
Keywords: gut microbiome, DNA, 16S rRNA sequences, emerald ash borer
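As a sketch of the ordination step described in the abstract, the code below computes Bray-Curtis distances between samples and a classical PCoA with numpy/scipy. The OTU table is random placeholder data, not the study's sequencing results.

```python
# Bray-Curtis dissimilarities followed by classical PCoA (placeholder data, illustration only).
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
otu_table = rng.poisson(5, size=(9, 200)).astype(float)   # 9 samples x 200 taxa (assumed shape)

D = squareform(pdist(otu_table, metric="braycurtis"))      # pairwise Bray-Curtis dissimilarities

# Classical PCoA (metric multidimensional scaling) on the distance matrix.
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                        # centering matrix
B = -0.5 * J @ (D ** 2) @ J
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]                          # largest eigenvalues first
coords = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0.0))
print("first two PCoA axes:\n", coords)
```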
Procedia PDF Downloads 408
4334 Determination Optimum Strike Price of FX Option Call Spread with USD/IDR Volatility and Garman–Kohlhagen Model Analysis
Authors: Bangkit Adhi Nugraha, Bambang Suripto
Abstract:
In September 2016, Bank Indonesia (BI) released regulation no. 18/18/PBI/2016, which permits bank clients to use the USD/IDR FX option call spread. Basically, this product combines buying an FX call option (paying a premium) and selling an FX call option (receiving a premium) to protect against currency depreciation while capping the potential upside, at a low premium cost. BI classifies this product as a structured product. A structured product is a combination of at least two financial instruments, either derivative or non-derivative. The call spread is the first structured product against IDR permitted by BI since 2009, in response to increasing demand from Indonesian firms for FX hedging through derivatives to protect the market risk of their foreign currency assets or liabilities. The share of hedging products in the Indonesian FX market increased from 35% in 2015 to 40% in 2016, the majority being swap products (FX forward, FX swap, cross-currency swap). Swap pricing is driven by the interest rate differential of the two currencies. The cost of the swap product is 7% for USD/IDR, with one-year USD/IDR volatility of 13%. That cost level makes swap products seem expensive for hedging buyers. Because the call spread cost (around 1.5-3%) is cheaper than the swap, most Indonesian firms use the NDF FX call spread USD/IDR offshore, with an outstanding amount of around 10 billion USD. The cheaper cost of the call spread is the main advantage for hedging buyers. The problem arises because the BI regulation requires the call spread buyer to perform dynamic hedging. That means that if the buyer chooses strike price 1 and strike price 2 and the USD/IDR exchange rate surpasses strike price 2, the buyer must buy another call spread with strike price 1' (strike price 1' = strike price 2) and strike price 2' (strike price 2' > strike price 1'). This could double (or more than double) the premium cost of the call spread and defeat the buyer's purpose of finding the cheapest hedge. It is therefore crucial for the buyer to choose the optimum strike prices before entering into the transaction. To help hedging buyers find the optimum strike price and avoid paying multiple premiums, we analyse ten years (2005-2015) of historical USD/IDR volatility data and compare them with the price movement of the USD/IDR call spread using the Garman–Kohlhagen model (a common formula for FX option pricing). We use statistical tools to analyse data correlation, understand the nature of the call spread price movement over ten years, and determine the factors affecting the price movement. We select a range of strike prices and tenors and calculate the probability that dynamic hedging will be triggered and how much it would cost. We found that the USD/IDR currency pair is too uncertain, which makes dynamic hedging riskier and more expensive. We validated this result using one year of data and obtained a small RMS error. The results of the study can be used to understand the nature of the FX call spread and to determine the optimum strike prices for a hedging plan.
Keywords: FX call spread USD/IDR, USD/IDR volatility statistical analysis, Garman–Kohlhagen Model on FX Option USD/IDR, Bank Indonesia Regulation no.18/18/PBI/2016
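Since the abstract cites the Garman–Kohlhagen model without reproducing it, the sketch below shows the standard form of that formula and how the call spread premium follows as the difference of two calls. The spot, rates, volatility, and strikes are placeholder values, not the study's USD/IDR data.

```python
# Garman-Kohlhagen FX call pricing and a call-spread premium (standard textbook form).
# All market inputs below are placeholder values, not the study's data.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def gk_call(spot, strike, t, sigma, r_dom, r_for):
    """FX call under Garman-Kohlhagen: c = S e^{-r_f T} N(d1) - K e^{-r_d T} N(d2)."""
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return spot * exp(-r_for * t) * N(d1) - strike * exp(-r_dom * t) * N(d2)

def call_spread(spot, k_low, k_high, t, sigma, r_dom, r_for):
    """Buy the low-strike call, sell the high-strike call."""
    return gk_call(spot, k_low, t, sigma, r_dom, r_for) - gk_call(spot, k_high, t, sigma, r_dom, r_for)

# Placeholder USD/IDR-like inputs: spot 13500, strikes 13700/14500, 1-year tenor, 13% volatility.
premium = call_spread(13500.0, 13700.0, 14500.0, 1.0, 0.13, 0.07, 0.01)
print(f"call spread premium: {premium:.1f} IDR per USD ({premium / 13500:.2%} of spot)")
```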
Procedia PDF Downloads 382
4333 Enhancing Single Channel Minimum Quantity Lubrication through Bypass Controlled Design for Deep Hole Drilling with Small Diameter Tool
Authors: Yongrong Li, Ralf Domroes
Abstract:
Owing to significant energy savings, higher machining speeds, and environmentally friendly operation, Minimum Quantity Lubrication (MQL) has been used efficiently in many machining processes. However, deep hole drilling with small tool diameters (D < 5 mm) and long tools (L > 25xD) has always been a bottleneck for single-channel MQL systems. Single-channel MQL, based on the Venturi principle, suffers from insufficient oil quantity caused by the drop in pressure difference during deep hole drilling. In this paper, a system concept based on a bypass design is explored for its ability to dynamically reach the required pressure difference between the air inlet and the inside of the aerosol generator, so that the oil volume demanded by deep hole drilling can be generated and delivered to the tool tip. The system concept has been investigated in static and dynamic laboratory testing. In the static test, the oil volumes with and without bypass control were measured, showing a potential oil-quantity increase of up to 1000%. A spray pattern test demonstrated the differences in aerosol particle size, aerosol distribution, and reaction time between the single-channel and the bypass-controlled single-channel MQL systems. A dynamic trial machining test of deep hole drilling (drill diameter D = 4.5 mm, L = 40xD) was carried out with the proposed system on a difficult-to-machine material, AlSi7Mg. Tool wear over 100 meters of drilling was tracked and analyzed. The results show that single-channel MQL with bypass control can overcome the limitation and enhance deep hole drilling with a small tool. The optimized combination of inlet air pressure and bypass control results in high-quality oil delivery to the tool tip with a uniform and continuous aerosol flow.
Keywords: deep hole drilling, green production, Minimum Quantity Lubrication (MQL), near dry machining
Procedia PDF Downloads 209
4332 Design and Implementation of A 10-bit SAR ADC with A Programmable Reference
Authors: Hasmayadi Abdul Majid, Yuzman Yusoff, Noor Shelida Salleh
Abstract:
This paper presents the development of a single-ended 38.5 kS/s 10-bit programmable-reference SAR ADC realized in MIMOS’s 0.35 µm CMOS process. The design uses a resistive DAC, a dynamic comparator with a pre-amplifier, and SAR digital logic to achieve 10 effective bits. Programmable reference circuitry allows the ADC to operate with different input ranges from 0.6 V to 2.1 V. The implemented ADC consumes less than 7.5 mW from a 3 V supply.
Keywords: successive approximation register analog-to-digital converter, SAR ADC, resistive DAC, programmable reference
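The successive-approximation register performs a binary search: each bit is tentatively set, the DAC output is compared with the sampled input, and the bit is kept or cleared. The sketch below is a behavioural model of that generic algorithm, not the circuit described in the paper; the reference and resolution values are placeholders.

```python
# Behavioural model of a 10-bit SAR conversion (generic binary search, not the paper's circuit).
def sar_convert(vin, vref=2.1, bits=10):
    """Return the digital code for vin, assuming an ideal DAC spanning 0..vref."""
    code = 0
    for i in reversed(range(bits)):          # MSB first
        trial = code | (1 << i)              # tentatively set the current bit
        vdac = vref * trial / (1 << bits)    # ideal resistive-DAC output for this trial code
        if vin >= vdac:                      # comparator decision: keep the bit
            code = trial
    return code

# Example: a mid-scale input, then the same code with the reference reprogrammed to 0.6 V.
print(sar_convert(1.05))             # ~512 for a 10-bit converter with a 2.1 V reference
print(sar_convert(0.3, vref=0.6))    # ~512 again, since the input is half of the new reference
```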
Procedia PDF Downloads 522
4331 Modulating Vortex Dynamics Around Circular Cylinder Via Asymmetric Cross-Sectional Profile Morphing: A Comparative Study of Cylindrical and Elliptical Configurations
Authors: Kamila Fethallah, Mahmoud Mekadem, Hamid Ouali
Abstract:
Active flow control around a cylinder is an extensively studied subject in aerodynamics. Researchers apply a range of techniques to alter the fluid flow surrounding a cylindrical body, with the intent of reducing drag, enhancing lift, and optimizing overall aerodynamic performance. This study investigates the manipulation of flow dynamics around a circular cylinder by introducing a novel elliptical deformation of the traditionally circular cross-section. Through the use of a crank mechanism, precise control of the deformation is achieved, allowing a comprehensive examination of its effects on fluid flow patterns. The main objective of this research is to evaluate the effectiveness of this approach in reducing the drag coefficient and modifying the wake pattern, providing valuable information on flow control and optimization. Experimental results show that varying deformation amplitudes (10%, 15%, and 20%) and control frequencies strongly influence drag and flow structure, with the maximum reduction in drag coefficient (approximately 44%) observed at 15% amplitude and the optimum frequency. The flow structure is strongly influenced by the deformation amplitude and frequency, particularly in the range close to the natural shedding frequency. These results suggest that the deformation frequency and amplitude play a crucial role in modifying the flow structure and reducing the drag coefficient. Numerical simulations further support the efficiency of the active flow control technique using cylindrical-elliptical deformation, showing a consistent drag reduction of up to 42% at extreme deformation conditions (100%). The present study highlights the potential of this approach to enhance the efficiency and performance of systems that exchange energy with fluids. In conclusion, the study offers new routes toward the development of flow control and optimization strategies in a wide range of engineering applications.
Keywords: control frequencies, deformation amplitudes, drag coefficient, elliptical cylindrical deformation, flow dynamics, wake pattern
Procedia PDF Downloads 12
4330 Low-Power Digital Filters Design Using a Bypassing Technique
Authors: Thiago Brito Bezerra
Abstract:
This paper presents a novel approach to reduce power consumption of digital filters based on dynamic bypassing of partial products in their multipliers. The bypassing elements incorporated into the multiplier hardware eliminate redundant signal transitions, which appear within the carry-save adders when the partial product is zero. This technique reduces the power consumption by around 20%. The circuit implementation was made using the AMS 0.18 um technology. The bypassing technique applied to the circuits is outlined.Keywords: digital filter, low-power, bypassing technique, low-pass filter
Procedia PDF Downloads 387
4329 Time Temperature Dependence of Long Fiber Reinforced Polypropylene Manufactured by Direct Long Fiber Thermoplastic Process
Authors: K. A. Weidenmann, M. Grigo, B. Brylka, P. Elsner, T. Böhlke
Abstract:
In order to reduce fuel consumption, the weight of automobiles has to be reduced. Fiber reinforced polymers offer the potential to reach this aim because of their high stiffness-to-weight ratio. Additionally, the use of fiber reinforced polymers in automotive applications has to allow for economic large-scale production. In this regard, long fiber reinforced thermoplastics made by direct processing offer both mechanical performance and processability in injection moulding and compression moulding. The work presented in this contribution deals with long glass fiber reinforced polypropylene directly processed in compression moulding (D-LFT). For automotive applications, both the temperature and the time dependency of the material properties have to be investigated to fulfill performance requirements during crash and the demands of service temperatures ranging from -40 °C to 80 °C. To consider the influence of both temperature and time, quasistatic tensile tests were carried out at different temperatures. These tests were complemented by high-speed tensile tests at different strain rates. As expected, an increase in strain rate results in an increase of the elastic modulus, which correlates with the increase in stiffness observed with decreasing service temperature. The results are in good accordance with results determined by dynamic mechanical analysis within the range of 0.1 to 100 Hz. The experimental results from the different testing methods were grouped and interpreted using different time-temperature shift approaches. The Williams-Landel-Ferry approach and the kinetics-based Arrhenius approach were used. As the theoretical shift factor follows an arctan function, an empirical approach was also taken into consideration. It could be shown that this approach best describes the time-temperature superposition for glass fiber reinforced polypropylene manufactured by D-LFT processing.
Keywords: composite, dynamic mechanical analysis, long fibre reinforced thermoplastics, mechanical properties, time temperature superposition
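The two classical shift-factor models named in the abstract have closed forms; the sketch below evaluates both so that master-curve construction can be reproduced in principle. The constants are generic textbook values (the WLF "universal" constants and an assumed activation energy), not the ones fitted in the study.

```python
# Time-temperature shift factors: WLF and Arrhenius forms (generic constants, not the study's fit).
import numpy as np

def wlf_log_shift(T, T_ref, C1=17.44, C2=51.6):
    """Williams-Landel-Ferry: log10(a_T) = -C1*(T - T_ref) / (C2 + T - T_ref); valid near/above T_ref."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

def arrhenius_log_shift(T, T_ref, Ea=120e3, R=8.314):
    """Arrhenius: log10(a_T) = Ea/(2.303*R) * (1/T - 1/T_ref), temperatures in kelvin."""
    return Ea / (2.303 * R) * (1.0 / T - 1.0 / T_ref)

temps_C = np.array([0.0, 23.0, 50.0, 80.0])
print("WLF  log aT:", wlf_log_shift(temps_C, T_ref=23.0))
print("Arrh log aT:", arrhenius_log_shift(temps_C + 273.15, T_ref=23.0 + 273.15))
```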
Procedia PDF Downloads 202
4328 In Search of Good Fortune: Individualization, Youth and the Spanish Labour Market within a Context of Crisis
Authors: Matthew Lee Turnbough
Abstract:
In 2007 Spain began to experience the effects of a deep economic crisis, which would generate a situation characterised by instability and uncertainty. This has been an obstacle, especially acute for the youth of this country seeking to enter the workforce. As a result of the impact of COVID-19, the youth in Spain are now suffering the effects of a new crisis that has deepened an already fragile labour environment. In this paper, we analyse the discourses that have emerged from a precarious labour market, specifically from two companies dedicated to operating job portals and job listings in Spain, Job Today, and CornerJob. These two start-up businesses have developed mobile applications geared towards young adults in search of employment in the service sector, two of the companies with the highest user rates in Spain. Utilizing a discourse analysis approach, we explore the impact of individualization and how the process of psychologization may contribute to an increasing reliance on individual solutions to social problems. As such, we seek to highlight the expectations and demands that are placed upon young workers and the type of subjectivity that this dynamic could foster, all this within an unstable framework seemingly marked by chance, a context which is key for the emergence of individualization. Furthermore, we consider the extent to which young adults incorporate these discourses and the strategies they employ basing our analysis on the VULSOCU (New Forms of Socio-Existential Vulnerability, Supports, and Care in Spain) research project, specifically the results of nineteen in-depth interviews and three discussion groups with young adults in this country. Consequently, we seek to elucidate the argumentative threads rooted in the process of individualization and underline the implications of this dynamic for the young worker and his/her labour insertion while also identifying manifestations of the goddess of fortune as a representation of chance in this context. Finally, we approach this panorama of social change in Spain from the perspective of the individuals or young adults who find themselves immersed in this transition from one crisis to another.Keywords: chance, crisis, discourses, individualization, work, youth
Procedia PDF Downloads 121
4327 APPLE: Providing Absolute and Proportional Throughput Guarantees in Wireless LANs
Authors: Zhijie Ma, Qinglin Zhao, Hongning Dai, Huan Zhang
Abstract:
This paper proposes the APPLE scheme, which aims to provide absolute and proportional throughput guarantees while simultaneously maximizing system throughput for wireless LANs with homogeneous and heterogeneous traffic. We formulate our objectives as an optimization problem, present its exact and approximate solutions, and prove the existence and uniqueness of the approximate solution. Simulations validate that the APPLE scheme is accurate and that the approximate solution achieves the desired objectives well.
Keywords: IEEE 802.11e, throughput guarantee, priority, WLANs
Procedia PDF Downloads 368
4326 Optimization of Water Pipeline Routes Using a GIS-Based Multi-Criteria Decision Analysis and a Geometric Search Algorithm
Authors: Leon Mortari
Abstract:
The Metropolitan East region of Rio de Janeiro state, Brazil, faces historic water scarcity. Among the alternatives studied to solve this situation, the adduction of the water available in the Lagoa de Juturnaíba reservoir to supply the region's municipalities stands out. The routing of a linear engineering project must be based on the evaluation of different aspects, such as altitude, slope, proximity to roads, distance from watercourses, land use and occupation, and physical and chemical features of the soil. This work aims to apply a multi-criteria model that combines geoprocessing techniques, decision-making methods, and a geometric search algorithm to optimize a hypothetical adduction system in the scenario of expanding the water supply system that serves this region, known as Imunana-Laranjal, using Lagoa de Juturnaíba as the source. This study proposes the construction of a spatial database for the evaluation criteria, the treatment and rasterization of these data, and the standardization and reclassification of this information on a Geographic Information System (GIS) platform. The methodology involves the integrated analysis of these criteria, with their relative importance defined by weights derived from expert consultations and the Analytic Hierarchy Process (AHP). Three approaches are defined for weighting the criteria by AHP: the first treats all criteria as equally important, the second considers weighting based on a pairwise comparison matrix, and the third establishes a hierarchy based on the priority of the criteria. For each approach, a distinct group of weightings is defined. In the next step, map algebra tools are used to overlay the layers and generate cost surfaces, which indicate the resistance to the passage of the pipeline route, using the three groups of weightings. The Dijkstra algorithm, a geometric search algorithm, is then applied to these cost surfaces to find an optimized path within the geographical space, aiming to minimize resources, time, investment, maintenance, and environmental and social impacts.
Keywords: geometric search algorithm, GIS, pipeline, route optimization, spatial multi-criteria analysis model
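To make the search step concrete, the sketch below runs Dijkstra's algorithm over a small raster cost surface, accumulating cell costs along 4-connected moves. It is a generic illustration with a made-up grid, not the study's GIS data or its exact cost formulation.

```python
# Dijkstra's algorithm over a raster cost surface (generic sketch; the grid is made up).
import heapq

def dijkstra_raster(cost, start, goal):
    """Least-cost 4-connected path on a 2D cost grid; entering a cell adds its cost."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, pq = {}, [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue                                  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)], prev[(nr, nc)] = nd, (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1], dist[goal]

surface = [[1, 1, 5, 5],                      # low values: favourable terrain for the pipeline
           [9, 1, 5, 1],
           [9, 1, 1, 1]]
print(dijkstra_raster(surface, (0, 0), (2, 3)))
```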
Procedia PDF Downloads 39
4325 Optimization of Bills Assignment to Different Skill-Levels of Data Entry Operators in a Business Process Outsourcing Industry
Authors: M. S. Maglasang, S. O. Palacio, L. P. Ogdoc
Abstract:
Business Process Outsourcing has been one of the fastest-growing and emerging industries in the Philippines today. Unlike most contact service centers, more popularly known as "call centers", the primary outsourced service studied here is the auditing of global clients' logistics. In this service industry, manpower is considered the most important yet most expensive resource of the company. Because of this, there is a need to maximize human resources so that people are effectively and efficiently utilized. The main purpose of the study is to optimize the current manpower resources through effective distribution and assignment of the different types of bills to the different skill levels of data entry operators. The assignment model parameters include the average observed time matrix gathered through a time study, which incorporates the learning curve concept. Subsequently, a simulation model was built to reproduce the arrival rate of demand, which includes the different batches and types of bills per day. Next, a mathematical linear programming model was formulated. Its objective is to minimize the direct labor cost per bill by allocating the different types of bills to the different skill levels of operators. Finally, a hypothesis test was done to validate the model, comparing the actual and simulated results. The analysis of results revealed low utilization of effective capacity because the current practice fails to account for the product mix, skill mix, and simulated demand as model parameters. Moreover, failure to consider the effects of the learning curve leads to overestimation of labor needs. From the current 107 operators, the proposed model yields a requirement of 79 operators. This results in an increase of 14.94% in the utilization of effective capacity. It is recommended that the 28 excess operators be reallocated to other areas of the department. Finally, a manpower capacity planning model is also recommended to support management's decisions on what to do when the current capacity reaches its limit under the expected increase in demand.
Keywords: optimization modelling, linear programming, simulation, time and motion study, capacity planning
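A minimal version of the linear program described above can be written as a cost-minimizing allocation of bill types to skill levels subject to demand and capacity constraints. The sketch below sets this up with scipy.optimize.linprog; all times, wages, demands, and capacities are invented placeholder numbers, not the company's observed-time matrix.

```python
# Minimal LP sketch of the bill-to-skill-level assignment (placeholder data, illustration only).
import numpy as np
from scipy.optimize import linprog

# x[i, j] = number of bills of type j handled by skill level i (2 levels, 2 bill types).
t = np.array([[0.2, 0.5],      # processing hours per bill (learning-curve-adjusted in the study)
              [0.3, 0.4]])
rate = np.array([10.0, 7.0])   # direct labor cost per hour for each skill level
cost = (t * rate[:, None]).ravel()          # objective: minimize total direct labor cost

demand = np.array([300.0, 200.0])           # bills of each type that must be processed
capacity = np.array([80.0, 160.0])          # available hours per skill level

A_eq = np.array([[1, 0, 1, 0],              # sum over skill levels = demand for bill type 0
                 [0, 1, 0, 1]])             # ... and for bill type 1
A_ub = np.array([[t[0, 0], t[0, 1], 0, 0],  # hours used by skill level 0 <= its capacity
                 [0, 0, t[1, 0], t[1, 1]]]) # hours used by skill level 1 <= its capacity

res = linprog(cost, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=demand, method="highs")
print(res.x.reshape(2, 2), "total cost:", res.fun)
```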
Procedia PDF Downloads 523
4324 Application of Regularized Low-Rank Matrix Factorization in Personalized Targeting
Authors: Kourosh Modarresi
Abstract:
The Netflix problem has brought the topic of “Recommendation Systems” into the mainstream of computer science, mathematics, and statistics. Though much progress has been made, the available algorithms do not obtain satisfactory results. The success of these algorithms is rarely above 5%. This work is based on the belief that the main challenge is to come up with “scalable personalization” models. This paper uses an adaptive regularization of inverse singular value decomposition (SVD) that applies adaptive penalization on the singular vectors. The results show far better matching for recommender systems when compared to the ones from the state of the art models in the industry.Keywords: convex optimization, LASSO, regression, recommender systems, singular value decomposition, low rank approximation
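The abstract's "adaptive regularization of inverse SVD" is not specified in detail, so the sketch below only shows the closely related baseline it builds on: a truncated SVD of a ratings matrix with soft-thresholded (penalized) singular values used for low-rank reconstruction. The data, rank, and penalty are placeholders, not the paper's adaptive scheme.

```python
# Low-rank reconstruction of a ratings matrix via SVD with soft-thresholded singular values.
# Generic regularized-SVD baseline, not the paper's adaptive penalization of singular vectors.
import numpy as np

rng = np.random.default_rng(0)
true = rng.random((20, 2)) @ rng.random((2, 15))        # synthetic low-rank "taste" matrix
ratings = true + 0.05 * rng.normal(size=true.shape)     # observed noisy ratings

U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
tau = 0.5                                               # regularization strength (assumed)
s_reg = np.maximum(s - tau, 0.0)                        # soft-threshold the singular values
k = int(np.count_nonzero(s_reg))                        # effective rank after penalization
approx = (U[:, :k] * s_reg[:k]) @ Vt[:k, :]

print("rank kept:", k, "reconstruction error:", np.linalg.norm(approx - true))
```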
Procedia PDF Downloads 461
4323 Area-Efficient FPGA Implementation of an FFT Processor by Reusing Butterfly Units
Authors: Atin Mukherjee, Amitabha Sinha, Debesh Choudhury
Abstract:
Computing the fast Fourier transform (FFT) of a large number of samples requires considerable field-programmable gate array hardware resources, in terms of both area and power. In this paper, an area-efficient FFT processor architecture is proposed that reuses the butterfly units more than once. The FFT processor is emulated, and the results are validated on a Virtex-6 FPGA. The proposed architecture outperforms the conventional N-point FFT processor architecture in terms of area, which is reduced by a factor of log2(N), with a negligible increase in processing time.
Keywords: FFT, FPGA, resource optimization, butterfly units
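As background on the unit being reused, the sketch below shows the radix-2 decimation-in-time butterfly, the operation an FFT datapath instantiates (N/2)·log2(N) times and which a resource-shared design maps onto far fewer physical units. It is a generic software illustration, not the paper's hardware architecture.

```python
# Generic radix-2 decimation-in-time butterfly and an iterative FFT built from it.
# Software illustration of the operation a shared butterfly unit executes repeatedly.
import cmath

def butterfly(a, b, w):
    """Single radix-2 butterfly: returns (a + w*b, a - w*b)."""
    t = w * b
    return a + t, a - t

def fft_iterative(x):
    """In-place iterative radix-2 FFT (length must be a power of two)."""
    n = len(x)
    j = 0                                   # bit-reversal permutation
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            x[i], x[j] = x[j], x[i]
    size = 2
    while size <= n:
        w_m = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):
            w = 1.0
            for k in range(size // 2):
                i, jdx = start + k, start + k + size // 2
                x[i], x[jdx] = butterfly(x[i], x[jdx], w)
                w *= w_m
        size *= 2
    return x

print(fft_iterative([1 + 0j, 0, 0, 0, 0, 0, 0, 0]))   # impulse -> flat spectrum of ones
```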
Procedia PDF Downloads 525
4322 On the Study of the Electromagnetic Scattering by Large Obstacle Based on the Method of Auxiliary Sources
Authors: Hidouri Sami, Aguili Taoufik
Abstract:
We consider fast and accurate solutions of scattering problems by large perfectly conducting (PEC) objects, formulated through an optimization of the Method of Auxiliary Sources (MAS). We present various techniques used to reduce the total computational cost of the scattering problem. The first technique is based on replacing the object by an array of a finite number of small PEC objects with the same shape. The second solution reduces the problem by considering only half of the object. These two solutions are compared with results from the reference bibliography.
Keywords: method of auxiliary sources, scattering, large object, RCS, computational resources
Procedia PDF Downloads 245
4321 Exceptional Cost and Time Optimization with Successful Leak Repair and Restoration of Oil Production: West Kuwait Case Study
Authors: Nasser Al-Azmi, Al-Sabea Salem, Abu-Eida Abdullah, Milan Patra, Mohamed Elyas, Daniel Freile, Larisa Tagarieva
Abstract:
Well intervention was carried out with Production Logging Tools (PLT) to detect the sources of water and check well integrity for two West Kuwait oil wells that had started to produce 100% water. For the first well, PLT was run to check the perforations: no production was observed from the bottom two perforation intervals, while water intake was observed at the topmost perforation. A decision was then taken to extend the PLT survey from tag depth to the Y-tool. For the second well, the aim was to detect the source of water and determine whether there was a leak in the 7'' liner in front of the upper zones. Data could not be recorded under flowing conditions due to casing deformation at about 8300 ft. For the first well, interpretation of the PLT and well integrity data showed a hole in the 9 5/8'' casing from 8468 ft to 8494 ft, producing the majority of the water, 2478 bbl/d. The upper perforation from 10812 ft to 10854 ft was taking 534 stb/d. For the second well, there was a hole in the 7'' liner from 8303 ft MD to 8324 ft MD producing 8334.0 stb/d of water, with an intake zone from 10322.9 to 10380.8 ft MD taking the whole fluid. To restore oil production, a workover (W/O) rig was mobilized to prevent dump flooding, and during the workover the leaking interval was confirmed for both wells. The leakage was cement squeezed and tested at 900-psi positive pressure and 500-psi drawdown pressure. The cement squeeze job was successful. After the workover, the wells were kept producing for clean-up, and eventually the water cut (WC) was reduced to 0%. Regular PLT and well integrity logs are required to study well performance and well integrity issues; proper cement behind the casing is essential for well longevity and integrity; and the presence of the Y-tool is essential for monitoring well parameters and the ESP, and for facilitating well intervention tasks. Cost and time optimization in oil and gas operations, especially during rig operations, is crucial. PLT data quality and the accuracy of the interpretations contributed greatly to identifying the leakage interval accurately, which in turn saved considerable time and reduced the repair cost by almost 35 to 45%. The added value here relates mainly to cost reduction and to effective, quick decision making given the economic environment.
Keywords: leak, water shut-off, cement, water leak
Procedia PDF Downloads 120
4320 Reading and Writing Memories in Artificial and Human Reasoning
Authors: Ian O'Loughlin
Abstract:
Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.Keywords: artificial reasoning, human memory, machine learning, neural networks
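As a concrete counterpart to the storage-free attractor models the abstract describes, the sketch below implements a classic Hopfield-style network in which stored patterns are the only stable equilibrium points of the dynamics rather than addressable array elements. It is a standard textbook construction offered for illustration, not a model taken from the text.

```python
# Minimal Hopfield-style attractor network: memories are fixed points of the dynamics,
# not entries retrieved from a storage array. Standard textbook construction for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, patterns = 64, rng.choice([-1, 1], size=(3, 64))    # three binary "memories"

# Hebbian weights; zero diagonal so units do not self-excite.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Iterate the update rule s <- sign(W s); the state settles into an attractor."""
    s = cue.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

noisy = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
noisy[flip] *= -1                                      # corrupt 10 of 64 bits of the first memory
print("overlap with original memory:", int(recall(noisy) @ patterns[0]), "/", n)
```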
Procedia PDF Downloads 275
4319 Research and Development of Net-Centric Information Sharing Platform
Authors: Wang Xiaoqing, Fang Youyuan, Zheng Yanxing, Gu Tianyang, Zong Jianjian, Tong Jinrong
Abstract:
Compared with a traditional distributed environment, the net-centric environment poses more demanding challenges for information sharing, characterized by ultra-large scale, strong distribution, dynamism, autonomy, heterogeneity, and redundancy. This paper presents an information sharing model and a series of core services that together provide an open, flexible, and scalable information sharing platform.
Keywords: net-centric environment, information sharing, metadata registry and catalog, cross-domain data access control
Procedia PDF Downloads 575
4318 Mechanical Response Investigation of Wafer Probing Test with Vertical Cobra Probe via the Experiment and Transient Dynamic Simulation
Authors: De-Shin Liu, Po-Chun Wen, Zhen-Wei Zhuang, Hsueh-Chih Liu, Pei-Chen Huang
Abstract:
Wafer probing tests play an important role in semiconductor manufacturing, addressing the yield and reliability requirements of the wafer after the back-end-of-line process. Accordingly, stable physical and electrical contact between the probe and the tested wafer during probing is essential for identifying known good dies. A probe card can integrate multiple probe needles, which are classified as vertical, cantilever, or micro-electro-mechanical-systems (MEMS) type probes. Among these, the vertical probe has several advantages, including maintainability, high probe density, and feasibility for high-speed wafer testing. In the present study, the mechanical response of the wafer probing test with a vertical cobra probe on a 720 μm thick silicon (Si) substrate with a 1.4 μm thick aluminum (Al) pad is investigated experimentally and by a transient dynamic simulation approach. Because the deformation of the vertical cobra probe is governed by both bending and buckling, the correlation between contact force and overdrive (OD) length must be carefully verified. Moreover, an appropriate OD length, with its corresponding contact force, helps pierce the native oxide layer of the Al pad while preventing probing-induced damage to the interconnect system. Accordingly, the scratch depth of the Al pad under various OD lengths is estimated by atomic force microscopy (AFM) and by simulation. In the wafer probing test configuration, the contact between the probe needle and the tested object introduces large deformation and twisting of the mesh, causing numerical divergence. For this reason, the arbitrary Lagrangian-Eulerian method is utilized in the present simulation to overcome this issue. The analysis shows a slight difference at an OD of 40 μm, while the simulated scratch depths are almost identical to the measured ones at higher OD lengths up to 70 μm. This can be attributed to unstable probe contact at low OD lengths, where the scratch depth is below 30% of the Al pad thickness; the contact becomes stable when the scratch depth exceeds 30% of the pad thickness. Splashing of the Al pad is observed by AFM, with the splashed Al debris accumulating on a specific side; this phenomenon is successfully reproduced in the transient dynamic simulation. Thus, the preferred testing OD lengths are found to be 45 μm to 70 μm, with corresponding scratch depths of 31.4% and 47.1% of the Al pad thickness, respectively. The investigation approach demonstrated in this study helps analyze the mechanical response of the wafer probing test configuration under large-strain conditions and assess the geometric design and material selection of probe needles to meet the requirements of high-resolution, high-speed wafer-level probing for thinned-wafer applications.
Keywords: wafer probing test, vertical probe, probe mark, mechanical response, FEA simulation
Procedia PDF Downloads 64
4317 Identification of Stakeholders and Practices of Inclusive Education
Authors: Luis Javier Serrano-Tamayo
Abstract:
This paper focuses on the recent interest in the concept of inclusion from multiple areas of the social sciences, particularly on academic studies of what scholars mean when they refer to inclusive education. The paper is based on a three-year systematic review of nearly two hundred peer-reviewed documents from the last two decades. The results illustrate some of the use, misuse, and abuse of inclusive education, and shed some light on the identification of the different stakeholders involved in the dynamic concept of inclusive education and their suggested practices.
Keywords: inclusion, inclusive education, inclusive practices, education stakeholders
Procedia PDF Downloads 246