Search results for: stochastic extreme wind load
2889 Optimization of the Aerodynamic Performances of an Unmanned Aerial Vehicle
Authors: Fares Senouci, Bachir Imine
Abstract:
This document provides a numerical and experimental optimization of the aerodynamic performance of a drone equipped with three types of horizontal stabilizer. To build the optimal configuration, an experimental and numerical study was conducted on three parameters: the geometry of the stabilizer (horizontal form or reverse V form), the position of the horizontal stabilizer (up or down), and the landing gear position (closed or open). The results show that the up-stabilizer position with respect to the horizontal plane of the fuselage provides better aerodynamic performance, and that the landing gear increases the lift in the zone of stability, that is to say, where the flow is not separated.
Keywords: aerodynamics, drag, lift, turbulence model, wind tunnel
Procedia PDF Downloads 254
2888 Evaluation of Fracture Resistance and Moisture Damage of Hot Mix Asphalt Using Plastic Coated Aggregates
Authors: Malleshappa Japagal, Srinivas Chitragar
Abstract:
The use of waste plastic in pavement is becoming an important alternative worldwide for plastic disposal, for improving the stability of pavement, and for meeting environmental requirements. However, there are still concerns about the fatigue and fracture resistance of Hot Mix Asphalt with the addition of plastic waste (HMA-Plastic mixes) and its moisture damage potential. The present study was undertaken to evaluate the fracture resistance of HMA-Plastic mixes using the semi-circular bending (SCB) test and the moisture damage potential by the Indirect Tensile Strength (ITS) test with the tensile strength ratio (TSR). In this study, a dense graded asphalt mix with 19 mm nominal maximum aggregate size was designed in the laboratory using the Marshall mix design method. Aggregates were coated with different percentages of waste plastic (0%, 2%, 3% and 4%) by weight of aggregate, and the fracture resistance and moisture damage performance were evaluated. The following parameters were estimated for the mixes: J-Integral (Jc), strain energy at failure, peak load at failure, and deformation at failure. It was found that the strain energy and peak load of all the mixes decrease with an increase in notch depth, and that an increased percentage of plastic waste gave better fracture resistance. The moisture damage potential was evaluated by the tensile strength ratio (TSR). The experimental results showed an increased TSR value up to 3% addition of waste plastic in the HMA mix, which gives better performance; hence, the use of waste plastic in road construction is favorable.
Keywords: hot mix asphalt, semi circular bending, marshall mix design, tensile strength ratio
Procedia PDF Downloads 308
2887 MIMO PID Controller of a Power Plant Boiler–Turbine Unit
Authors: N. Ben-Mahmoud, M. Elfandi, A. Shallof
Abstract:
This paper presents a methodology to design multivariable PID controllers for multi-input, multi-output (MIMO) systems. The proposed control strategy, which is centralized, combines PID controllers. The proportional gains in the P controllers act as single-loop (SISO) tuning parameters, used to modify the behavior of the loops almost independently. The design procedure consists of three steps: first, an ideal decoupler including integral action is determined. Second, the decoupler is approximated with PID controllers. Third, the proportional gains are tuned to achieve the specified performance. The proposed method is applied to representative processes.
Keywords: boiler turbine, MIMO, PID controller, control by decoupling, anti wind-up techniques
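A minimal sketch of the decoupling-plus-independent-loop-tuning idea, assuming a toy 2x2 steady-state gain matrix and a static decoupler (the gain values, PI settings, and plant are illustrative stand-ins, not the boiler–turbine model of the paper):

```python
import numpy as np

# Hypothetical 2x2 steady-state gain matrix (illustrative numbers only).
G0 = np.array([[0.8, -0.3],
               [0.4,  1.2]])

# Step 1: ideal decoupler approximated by the inverse of the steady-state
# gains so that G0 @ D is close to diagonal.
D = np.linalg.inv(G0)

# Steps 2-3: each loop is closed with an independent PI controller whose
# proportional gain is tuned as if the loop were SISO.
def pi_controller(kp, ki, dt):
    integral = 0.0
    def step(error):
        nonlocal integral
        integral += error * dt
        return kp * error + ki * integral
    return step

loops = [pi_controller(kp=1.5, ki=0.2, dt=0.1),
         pi_controller(kp=0.8, ki=0.1, dt=0.1)]

# One control update: per-loop PI outputs are mapped through the decoupler.
errors = np.array([0.05, -0.02])          # setpoint minus measurement
v = np.array([c(e) for c, e in zip(loops, errors)])
u = D @ v                                  # plant inputs after decoupling
print(u)
```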
Procedia PDF Downloads 328
2886 Cryptographic Resource Allocation Algorithm Based on Deep Reinforcement Learning
Authors: Xu Jie
Abstract:
As a key network security method, cryptographic services must fully cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with the ever-changing cyber threat environment. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay, and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
Keywords: cloud computing, cryptography on-demand service, reinforcement learning, workflow scheduling
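A minimal sketch of the reinforcement-learning scheduling idea, assuming a toy single-agent tabular Q-learning loop and a made-up reward that folds energy use and load imbalance into one cost (the paper's multi-agent formulation, state encoding, and objective weights are not reproduced here):

```python
import random
from collections import defaultdict

# Toy environment: assign each incoming crypto job to one of N service nodes.
# The negative reward combines energy use and queue imbalance, standing in
# for the energy / migration-cost / fitness objectives described above.
N_NODES = 4
EPISODES = 2000
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

q_table = defaultdict(lambda: [0.0] * N_NODES)
loads = [0] * N_NODES

def state():
    # Discretise node loads into a small hashable state.
    return tuple(min(l, 5) for l in loads)

for _ in range(EPISODES):
    s = state()
    if random.random() < EPS:
        a = random.randrange(N_NODES)
    else:
        a = max(range(N_NODES), key=lambda i: q_table[s][i])
    loads[a] += 1
    energy = 1.0 + 0.5 * loads[a]              # a busier node costs more energy
    imbalance = max(loads) - min(loads)
    reward = -(energy + imbalance)
    for i in range(N_NODES):                    # jobs finish stochastically
        if loads[i] > 0 and random.random() < 0.5:
            loads[i] -= 1
    s2 = state()
    q_table[s][a] += ALPHA * (reward + GAMMA * max(q_table[s2]) - q_table[s][a])

print("learned preferences in the empty state:", q_table[tuple([0] * N_NODES)])
```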
Procedia PDF Downloads 18
2885 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction, so that manufacturing becomes more automated and safer. The aim is to lower part cost and construction time as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process also described as Automated Fibre Placement (AFP). The AFP process is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent that is already impregnated onto the tape, activated through the application of heat. The stitching method is used as a baseline to compare the new splicing methods to the traditional technique currently in use. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the parameters of the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters; these were then tested in tension using a tensile testing machine. Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, and examined the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the baseline of a stitched bond. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N compared to the 26 N load achieved by a stitching splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
Procedia PDF Downloads 156
2884 Analyzing Tensile Strength in Different Composites at High Temperatures: Insights from 761 Tests
Authors: Milad Abolfazli, Milad Bazli
Abstract:
In this critical review, the question of how composites maintain their tensile strength when exposed to elevated temperatures is studied. A comprehensive database of 761 tests has been analyzed and closely examined to study the various factors that affect strength retention. Conclusions are drawn from the collective research efforts of numerous scholars who have investigated this subject. Through the analysis of these tests, the relationships between tensile strength retention and the various influencing factors are investigated. This review is meant to be a practical resource for researchers and engineers. It provides valuable information that can guide the development of composites tailored for high-temperature applications. By offering a deeper understanding of how composites behave in extreme heat, the paper contributes to the advancement of materials science and engineering.
Keywords: tensile tests, high temperatures, FRP composites, mechanical performance
Procedia PDF Downloads 71
2883 Literature and the Extremism: Case Study on and Qualitative Analysis of the Impact of Literature on Extremism in Afghanistan
Authors: Mohibullah Zegham
Abstract:
In conducting a case study to analyze the impact of literature on extremism and fundamentalism in Afghanistan, the author of this paper uses a qualitative research method. For this purpose, the author first reviews the history of extremism and fundamentalism in Afghanistan, as well as its major causes and predisposing factors, and then analyzes the impact of literature on extremism and fundamentalism using a qualitative method. This study relies on moral engagement theory to reveal how some extreme Islamists abandon the ideological interpretation of Islam and return to normal life by reading certain literary works. The goal of this case study is to help fight extremism and fundamentalism by using literature. The research showed that literary works are useful in this regard and that there is considerable evidence of their effectiveness.
Keywords: extremism, fundamentalism, communist, jihad, madrasa, literature
Procedia PDF Downloads 275
2882 Distributed Cost-Based Scheduling in Cloud Computing Environment
Authors: Rupali, Anil Kumar Jaiswal
Abstract:
Cloud computing can be defined as one of the prominent technologies that lets a user change, configure and access services online. It can be said that this is a model of computing that helps in saving the cost and time of a user. Practically, the use of cloud computing can be found in various fields like education, health, banking, etc. Cloud computing is an internet-dependent technology; thus, it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in the cloud computing environment plays a vital role: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. Job scheduling for cloud computing is analyzed in the following work. CloudSim 3.0.3 is utilized to model the task computation and the distributed scheduling methods. This research work discusses job scheduling for a distributed processing environment; by exploring this issue, we find that it works with minimum time and lower cost. In this work, two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
Keywords: physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model
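A minimal sketch of the throttled load-balancing idea, written as a toy Python stand-in rather than the CloudSim implementation used in the work: each VM serves at most one active request, and excess requests wait until a VM is released.

```python
from collections import deque

# Toy throttled VM load balancer: one active request per VM, overflow queued.
class ThrottledBalancer:
    def __init__(self, n_vms):
        self.available = deque(range(n_vms))   # idle VM ids
        self.busy = {}                         # request id -> VM id
        self.waiting = deque()                 # queued request ids

    def submit(self, request_id):
        if self.available:
            vm = self.available.popleft()
            self.busy[request_id] = vm
            return vm                          # request dispatched
        self.waiting.append(request_id)
        return None                            # throttled: request queued

    def complete(self, request_id):
        vm = self.busy.pop(request_id)
        if self.waiting:
            nxt = self.waiting.popleft()
            self.busy[nxt] = vm                # freed VM serves next in queue
        else:
            self.available.append(vm)

balancer = ThrottledBalancer(n_vms=2)
print(balancer.submit("r1"), balancer.submit("r2"), balancer.submit("r3"))
balancer.complete("r1")                        # r3 now runs on the freed VM
```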
Procedia PDF Downloads 169
2881 Integrating Circular Economy Framework into Life Cycle Analysis: An Exploratory Study Applied to Geothermal Power Generation Technologies
Authors: Jingyi Li, Laurence Stamford, Alejandro Gallego-Schmid
Abstract:
Renewable electricity has become an indispensable contributor to achieving net zero by mid-century to tackle climate change. Unlike solar, wind, or hydro, geothermal electricity production was stagnant in its development for decades. However, with the significant breakthroughs made in recent years, especially the implementation of enhanced geothermal systems (EGS) in various regions globally, geothermal electricity could play a pivotal role in alleviating greenhouse gas emissions. Life cycle assessment has been applied to analyze specific geothermal power generation technologies and has proposed suggestions to optimize their environmental performance. For instance, selecting a region with a high heat gradient enables a higher flow rate from the production well and extends the technical lifespan. Although such process-level improvements have been made, geothermal power generation technologies have so far not explicitly displayed their competitiveness on a broader horizon. Therefore, this review-based study integrates a circular economy framework into life cycle assessment, clarifying the underlying added values for geothermal power plants to complete the sustainability profile. The derived results provide an enlarged platform to discuss geothermal power generation technologies: (i) recover the heat and electricity from the process to reduce fossil fuel requirements; (ii) recycle the construction materials, such as copper, steel, and aluminum, for future projects; (iii) extract lithium ions from geothermal brine so that the geothermal reservoir becomes a potential supplier for the lithium battery industry; (iv) repurpose abandoned oil and gas wells to build geothermal power plants; (v) integrate geothermal energy with other available renewable energies (e.g., solar and wind) to provide heat and electricity as a hybrid system under different weather conditions; (vi) rethink the fluids used in the stimulation process (EGS only), replacing water with CO2 to achieve negative emissions from the system. These results provide a new perspective for researchers, investors, and policymakers to rethink the role of geothermal in the energy supply network.
Keywords: climate, renewable energy, R strategies, sustainability
Procedia PDF Downloads 137
2880 Assessment of Treatment Methods to Remove Hazardous Dyes from Synthetic Wastewater
Authors: Abhiram Siva Prasad Pamula
Abstract:
Access to clean drinking water is becoming scarce due to the increase in extreme weather events caused by the rise in average global temperatures and climate change. By 2030, approximately 47% of the world's population will face water shortages due to uncertainty in seasonal rainfall. Over 10,000 varieties of synthetic dyes are commercially available in the market and used by the textile and paper industries, negatively impacting human health when ingested. Besides humans, textile dyes have a negative impact on aquatic ecosystems by increasing biological oxygen demand and chemical oxygen demand. This study assesses different treatment methods that remove dyes from textile wastewater while focusing on the energy, economic, and engineering aspects of the treatment processes.
Keywords: textile wastewater, dye removal, treatment methods, hazardous pollutants
Procedia PDF Downloads 96
2879 Nonlinear Vibration of FGM Plates Subjected to Acoustic Load in Thermal Environment Using Finite Element Modal Reduction Method
Authors: Hassan Parandvar, Mehrdad Farid
Abstract:
In this paper, a finite element model is presented for the large-amplitude vibration of functionally graded material (FGM) plates subjected to combined random pressure and thermal load. The material properties of the plates are assumed to vary continuously in the thickness direction by a simple power law distribution in terms of the volume fractions of the constituents. The material properties depend on the temperature, whose distribution along the thickness can be expressed explicitly. The von Karman large-deflection strain-displacement relations and the extended Hamilton's principle are used to obtain the governing system of equations of motion in structural nodal degrees of freedom (DOF) using the finite element method. A three-node triangular Mindlin plate element with a shear correction factor is used. The nonlinear equations of motion in structural degrees of freedom are reduced by using the modal reduction method. The reduced equations of motion are solved numerically by a 4th-order Runge-Kutta scheme. In this study, the random pressure is generated using a Monte Carlo method. The modeling is verified, and the nonlinear dynamic response of FGM plates is studied for various values of volume fraction and sound pressure level under different thermal loads. Snap-through-type behavior of FGM plates is studied too.
Keywords: nonlinear vibration, finite element method, functionally graded material (FGM) plates, snap-through, random vibration, thermal effect
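A minimal sketch of the last two steps (modal reduction followed by Runge-Kutta time integration), assuming a single retained mode with an illustrative cubic stiffness and a piecewise-constant random pressure sampled once per time step; the coefficients and the stand-in for the Monte Carlo load are illustrative, not the paper's values:

```python
import numpy as np

# Reduced modal equation in first-order form: q' = v, v' = (f - C v - K q - beta q^3) / M,
# integrated with the classical 4th-order Runge-Kutta scheme. One retained mode only.
M, C, K, beta = 1.0, 0.05, 4.0, 0.8

def rhs(y, f):
    q, v = y
    return np.array([v, (f - C * v - K * q - beta * q**3) / M])

def rk4_step(y, f, dt):
    k1 = rhs(y, f)
    k2 = rhs(y + dt / 2 * k1, f)
    k3 = rhs(y + dt / 2 * k2, f)
    k4 = rhs(y + dt * k3, f)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(0)
y, dt, history = np.array([0.0, 0.0]), 1e-3, []
for step in range(5000):
    f = rng.normal(0.0, 1.0)   # piecewise-constant stand-in for the random acoustic pressure
    y = rk4_step(y, f, dt)
    history.append(y[0])
print("peak modal displacement:", max(abs(q) for q in history))
```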
Procedia PDF Downloads 264
2878 Optimal Maintenance Policy for a Three-Unit System
Authors: A. Abbou, V. Makis, N. Salari
Abstract:
We study the condition-based maintenance (CBM) problem of a system subject to stochastic deterioration. The system is composed of three units (or modules): (i) Module 1 deterioration follows a Markov process with two operational states and one failure state. The operational states are partially observable through periodic condition monitoring. (ii) Module 2 deterioration follows a Gamma process with a known failure threshold. The deterioration level of this module is fully observable through periodic inspections. (iii) Only operating age information is available for Module 3. The lifetime of this module has a general distribution. A CBM policy prescribes when to initiate a maintenance intervention and which modules to repair during the intervention. Our objective is to determine the optimal CBM policy minimizing the long-run expected average cost of operating the system. This is achieved by formulating a Markov decision process (MDP) and developing the value iteration algorithm for solving the MDP. We provide numerical examples illustrating the cost-effectiveness of the optimal CBM policy through a comparison with heuristic policies commonly found in the literature.
Keywords: reliability, maintenance optimization, Markov decision process, heuristics
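A minimal value-iteration sketch, assuming a toy single-unit deterioration MDP with a discounted cost; the states, costs, and transition probabilities are illustrative, and the paper's three-unit model and long-run average-cost criterion are not reproduced here:

```python
import numpy as np

# Toy CBM MDP: states are deterioration levels 0..3 (3 = failed),
# actions are 0 = do nothing, 1 = repair. All numbers are illustrative.
n_states, gamma = 4, 0.95
P = np.zeros((2, n_states, n_states))
P[0] = [[0.7, 0.2, 0.1, 0.0],     # no maintenance: the state drifts upward
        [0.0, 0.6, 0.3, 0.1],
        [0.0, 0.0, 0.5, 0.5],
        [0.0, 0.0, 0.0, 1.0]]
P[1, :, 0] = 1.0                   # repair returns the unit to "as new"
cost = np.array([[0.0, 1.0, 3.0, 50.0],   # operating / failure costs
                 [5.0, 5.0, 5.0, 20.0]])  # repair costs (corrective is dearer)

V = np.zeros(n_states)
for _ in range(500):
    Q = cost + gamma * P @ V       # Q[a, s]
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmin(axis=0)
print("optimal action per state:", policy)   # typically: repair only when worn or failed
```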
Procedia PDF Downloads 219
2877 A Novel Meta-Heuristic Algorithm Based on Cloud Theory for Redundancy Allocation Problem under Realistic Condition
Authors: H. Mousavi, M. Sharifi, H. Pourvaziri
Abstract:
Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem which focuses on determining an optimal assignment of components in a system design. In this paper, to be more practical, we have considered the problem of redundancy allocation for a series system with interval-valued component reliabilities. Therefore, during the search process, the reliability of each component is considered as a stochastic variable with lower and upper bounds. In order to optimize the problem, we propose a simulated annealing algorithm based on cloud theory (CBSAA). Also, Monte Carlo simulation (MCS) is embedded into the CBSAA to handle the random component reliabilities. This novel approach has been investigated by numerical examples, and the experimental results have shown that the CBSAA combined with MCS is an efficient tool to solve the RAP of systems with interval-valued component reliabilities.
Keywords: redundancy allocation problem, simulated annealing, cloud theory, Monte Carlo simulation
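A minimal sketch of the simulated-annealing-plus-Monte-Carlo idea, assuming a toy series system with parallel redundancy, illustrative cost and budget numbers, and a plain geometric cooling schedule in place of the cloud-theory temperature generator used by CBSAA:

```python
import math
import random

# Toy RAP: each subsystem i gets n_i redundant components; component
# reliability is only known to lie in [lo, hi], so the objective is
# estimated by Monte Carlo sampling. All numbers are illustrative.
BOUNDS = [(0.75, 0.85), (0.80, 0.90), (0.70, 0.82)]
COST = [2.0, 3.0, 1.5]
BUDGET = 30.0
MC_SAMPLES = 200

def system_reliability(alloc, rng):
    # One Monte Carlo draw of the series system with parallel redundancy.
    rel = 1.0
    for (lo, hi), n in zip(BOUNDS, alloc):
        r = rng.uniform(lo, hi)
        rel *= 1.0 - (1.0 - r) ** n
    return rel

def objective(alloc, rng):
    if sum(c * n for c, n in zip(COST, alloc)) > BUDGET:
        return 0.0                                     # infeasible allocation
    return sum(system_reliability(alloc, rng) for _ in range(MC_SAMPLES)) / MC_SAMPLES

rng = random.Random(0)
current = [2, 2, 2]
best, best_val = current[:], objective(current, rng)
T = 1.0
for it in range(2000):
    cand = current[:]
    i = rng.randrange(len(cand))
    cand[i] = max(1, cand[i] + rng.choice([-1, 1]))    # neighbour move
    cur_val, cand_val = objective(current, rng), objective(cand, rng)
    if cand_val > cur_val or rng.random() < math.exp((cand_val - cur_val) / T):
        current = cand
        if cand_val > best_val:
            best, best_val = cand[:], cand_val
    T *= 0.998                                          # geometric cooling
print("best allocation:", best, "estimated reliability:", round(best_val, 4))
```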
Procedia PDF Downloads 413
2876 Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model
Authors: Soudabeh Shemehsavar
Abstract:
In this paper, we consider the situation under a life test in which the failure times of the test units are not related deterministically to an observable stochastic time-varying covariate. In such a case, the joint distribution of the failure time and a marker value is useful for modeling the step-stress life test. The problem of accelerating such an experiment is the main aim of this paper. We present a step-stress accelerated model based on a bivariate Wiener process with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process, whose degradation values are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and the optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products' lifetime distribution.
Keywords: bivariate normal, Fisher information matrix, inverse Gaussian distribution, Wiener process
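As a minimal sketch (with illustrative notation, not necessarily the paper's symbols), such a bivariate Wiener model pairs a latent degradation process X(t) with a marker process Y(t) through correlated Gaussian increments; failure is the first passage of X over a threshold D, which gives an inverse Gaussian lifetime:

```latex
\begin{pmatrix} X(t) \\ Y(t) \end{pmatrix}
\sim \mathcal{N}\!\left(
\begin{pmatrix} \mu_X t \\ \mu_Y t \end{pmatrix},\;
t \begin{pmatrix} \sigma_X^2 & \rho\,\sigma_X\sigma_Y \\ \rho\,\sigma_X\sigma_Y & \sigma_Y^2 \end{pmatrix}
\right),
\qquad
T = \inf\{\, t \ge 0 : X(t) \ge D \,\},
\qquad
T \sim \mathrm{IG}\!\left( \tfrac{D}{\mu_X},\ \tfrac{D^2}{\sigma_X^2} \right).
```

Under the step-stress design, the drift parameters change when the stress level is raised at the hold time, and it is that hold time which is optimized against the variance criterion described above.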
Procedia PDF Downloads 318
2875 Dynamics Analyses of Swing Structure Subject to Rotational Forces
Authors: Buntheng Chhorn, WooYoung Jung
Abstract:
Large-scale swings have been used in entertainment and performance, especially in the circus, for a very long time. To increase the safety of this type of structure, a thorough analysis of displacement and bearing stress was performed for an extreme condition where a full-cycle swing occurs. Different masses, ranging from 40 kg to 220 kg, and different velocities were applied to the swing. Then, based on the solution of the differential dynamic equation, the swing velocity response to harmonic force was obtained. Moreover, the resistance capacity was estimated based on the ACI steel structure design guide. Subsequently, numerical analysis was performed in ABAQUS to obtain the stress on each frame of the swing. Finally, the analysis shows that expansion of the swing structure frame section is required for masses greater than 150 kg.
Keywords: swing structure, displacement, bearing stress, dynamic loads response, finite element analysis
Procedia PDF Downloads 378
2874 Design of an Automated Deep Learning Recurrent Neural Networks System Integrated with IoT for Anomaly Detection in Residential Electric Vehicle Charging in Smart Cities
Authors: Wanchalerm Patanacharoenwong, Panaya Sudta, Prachya Bumrungkun
Abstract:
The paper focuses on the development of a system that combines Internet of Things (IoT) technologies and deep learning algorithms for anomaly detection in residential Electric Vehicle (EV) charging in smart cities. With the increasing number of EVs, ensuring efficient and reliable charging systems has become crucial. The aim of this research is to develop an integrated IoT and deep learning system for detecting anomalies in residential EV charging and enhancing EV load profiling and event detection in smart cities. The approach utilizes IoT devices equipped with infrared cameras to collect thermal images and household EV charging profiles from the database of the Thailand utility, subsequently transmitting these data to a cloud database for comprehensive analysis. The methodology includes advanced deep learning techniques such as Recurrent Neural Network (RNN) and Long Short-Term Memory (LSTM) algorithms, together with feature-based Gaussian mixture models for EV load profiling and event detection; this combination aids in identifying unique power consumption patterns among EV owners. The research findings demonstrate the effectiveness of the developed system in detecting anomalies and critical profiles in EV charging behavior. The system provides timely alarms to users regarding potential issues and categorizes the severity of detected problems based on a health index for each charging device. It also outperforms existing models in event detection accuracy. This research contributes to the field by showcasing the potential of integrating IoT and deep learning techniques in managing residential EV charging in smart cities, ensuring operational safety and efficiency while also promoting sustainable energy management. In summary, the research concludes that integrating IoT and deep learning techniques can effectively detect anomalies in residential EV charging and enhance EV load profiling and event detection accuracy; the developed system ensures operational safety and efficiency, contributing to sustainable energy management in smart cities.
Keywords: cloud computing framework, recurrent neural networks, long short-term memory, IoT, EV charging, smart grids
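A minimal sketch of the LSTM-based anomaly-flagging idea in PyTorch, trained on a synthetic stand-in for a household charging-power sequence; the network size, window length, and threshold rule are illustrative, not the paper's configuration:

```python
import torch
import torch.nn as nn

# An LSTM learns to predict the next sample of a charging-power sequence;
# readings whose prediction error exceeds a threshold are flagged as anomalies.
class ChargingLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])           # predict the next value

# Synthetic stand-in for a charging profile (kW), window length 24.
torch.manual_seed(0)
t = torch.linspace(0, 6.28, 500)
series = 3.0 + torch.sin(t) + 0.1 * torch.randn_like(t)
windows = series.unfold(0, 25, 1).contiguous()  # (n, 25): 24 inputs + 1 target
x, y = windows[:, :24].unsqueeze(-1), windows[:, 24:]

model = ChargingLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()

# Flag anomalies: prediction error far above the typical training error.
with torch.no_grad():
    errors = (model(x) - y).abs().squeeze()
threshold = errors.mean() + 3 * errors.std()
print("anomalous windows:", torch.nonzero(errors > threshold).flatten().tolist())
```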
Procedia PDF Downloads 68
2873 The Same Rules of Traditional Chinese Herbal Medicine in Treating Chronic Idiopathic Urticaria and Hypertension
Authors: Heng W. Chang, Mao F. Sun
Abstract:
Chronic Idiopathic Urticaria (CIU) and hypertension are rarely discussed together in modern and traditional Chinese medicine, and they often belong to different medical departments. However, in traditional Chinese medicinal theory, the two diseases share some similar characteristics. For example, they are both relevant to 'wind'. This study conducted a literature review using the China National Knowledge Infrastructure to identify herbs yielding the same effect for the two diseases. The findings showed that the common herb used most frequently is Rehmanniae. The conclusion is that the shared TCM (Traditional Chinese Medicine) mechanism of the two diseases may be 'blood heat'; further study is required to prove this.
Keywords: urticaria, herbs, hypertension, Rehmanniae
Procedia PDF Downloads 156
2872 The Relationship between Lithological and Geomechanical Properties of Carbonate Rocks. Case study: Arab-D Reservoir Outcrop Carbonate, Central Saudi Arabia
Authors: Ammar Juma Abdlmutalib, Osman Abdullatif
Abstract:
The Upper Jurassic Arab-D Reservoir is considered the largest oil reservoir in Saudi Arabia. The equivalent outcrop is exposed near Riyadh. The study investigates the relationships between changes in lithofacies properties and the geomechanical properties of the Arab-D Reservoir at the outcrop scale. The methods used included integrated field observations and laboratory measurements. Schmidt hammer rebound hardness and point load index tests were carried out to estimate the strength of the samples, and an ultrasonic wave velocity test was applied to measure P-wave velocity, S-wave velocity, and dynamic Poisson's ratio. Thin sections were analyzed and described. The results show that there is a variation in geomechanical properties between the Arab-D member and the Upper Jubaila Formation at the outcrop scale; the change in texture or grain size has little or no effect on these properties. This is because of the clear effect of diagenesis, which changes the strength of the samples. The results also show a negative, or inverse, correlation between porosity and geomechanical properties. As for strength, dolomitic mudstone and wackestone within the Upper Jubaila Formation have higher Schmidt hammer values, and wavy-rippled sandy grainstone, which is rich in quartz, has the greater point load index values, while laminated mudstone and breccia facies have lower strength. This emphasizes the role of mineral content in the geomechanical properties of Arab-D reservoir lithofacies.
Keywords: geomechanical properties, Arab-D reservoir, lithofacies changes, Poisson's ratio, diagenesis
Procedia PDF Downloads 401
2871 Experimental Characterization of the AA7075 Aluminum Alloy Using Hot Shear Tensile Test
Authors: Trunal Bhujangrao, Catherine Froustey, Fernando Veiga, Philippe Darnis, Franck Girot Mata
Abstract:
An understanding of material behavior under shear loading is of great importance for researchers in manufacturing processes like cutting, machining, milling, turning, friction stir welding, etc., where the material experiences large deformation at high temperature. For such material behavior analysis, hot shear tests provide a useful means to investigate the evolution of the microstructure over a wide range of temperatures and to improve the material behavior model. Shear tests can be performed by direct shear loading (e.g. torsion of thin-walled tubular samples) or by appropriate specimen design to convert a tensile or compressive load into shear (e.g. simple shear tests). The simple shear tests are straightforward and designed to obtain very large deformation. However, many of these shear tests are concerned only with the elastic response of the material. It is becoming increasingly important to capture the plastic response of the material. Plastic deformation is significantly more complex and is known to depend more heavily on the strain rate, temperature, deformation, etc. In addition, not enough work has been done on high-temperature shear loading because of the geometrical instability that occurs during plastic deformation. The aim of this study is to design a new shear tensile specimen geometry to convert the tensile load into dominant shear loading under plastic deformation. The design of the specimen geometry is based on FEM. The material used in this paper is AA7075 alloy, tested quasi-statically at elevated temperature. Finally, the microstructural changes taking place during
Keywords: AA7075 alloy, dynamic recrystallization, edge effect, large strain, shear tensile test
Procedia PDF Downloads 149
2870 Quadrature Mirror Filter Bank Design Using Population Based Stochastic Optimization
Authors: Ju-Hong Lee, Ding-Chen Chung
Abstract:
The paper deals with the optimal design of two-channel linear-phase (LP) quadrature mirror filter (QMF) banks using a metaheuristic-based optimization technique. Based on the theory of two-channel QMF banks using two recursive digital all-pass filters (DAFs), the design problem is formulated so as to result in an objective function which is a weighted sum of the group delay error of the designed QMF bank and the magnitude response error of the designed low-pass analysis filter. Through a frequency-sampling and weighted-least-squares approach, the optimization problem of the objective function can be solved by utilizing a particle swarm optimization algorithm. The resulting two-channel QMF banks can possess an approximately LP response without magnitude distortion. Simulation results are presented for illustration and comparison.
Keywords: quadrature mirror filter bank, digital all-pass filter, weighted least squares algorithm, particle swarm optimization
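A minimal particle swarm optimization sketch of the kind used to tune such design parameters; the objective below is a generic stand-in (squared distance to a target vector), not the paper's weighted group-delay/magnitude-error function:

```python
import numpy as np

def objective(x, target):
    # Stand-in objective: replace with the weighted filter-bank error function.
    return float(np.sum((x - target) ** 2))

def pso(dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    target = rng.uniform(-1, 1, dim)              # stand-in "ideal" design vector
    pos = rng.uniform(-2, 2, (n_particles, dim))  # particle positions (design parameters)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p, target) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p, target) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, best_val = pso(dim=8)
print("best objective value:", best_val)
```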
Procedia PDF Downloads 523
2869 Simulation of Low Cycle Fatigue Behaviour of Nickel-Based Alloy at Elevated Temperatures
Authors: Harish Ramesh Babu, Marco Böcker, Mario Raddatz, Sebastian Henkel, Horst Biermann, Uwe Gampe
Abstract:
Thermal power machines are subjected to cyclic loading conditions at elevated temperatures. Under these extreme conditions, the durability of the components is of significant importance. The mechanical behaviour of the material has to be known in detail for a fail-safe design. For this study, a nickel-based alloy is considered, and the deformation and fatigue behaviour of the material is analysed under cyclic loading. A viscoplastic model is used for calculating the deformation behaviour as well as for simulating the rate-dependent and cyclic plasticity effects. Finally, the cyclic deformation results of the finite element simulations are compared with low cycle fatigue (LCF) experiments.
Keywords: complex low cycle fatigue, elevated temperature, FE simulation, viscoplastic
Procedia PDF Downloads 237
2868 Design and Implementation of Pseudorandom Number Generator Using Android Sensors
Authors: Mochamad Beta Auditama, Yusuf Kurniawan
Abstract:
A smartphone or tablet requires strong randomness to establish secure encrypted communication, encrypt files, etc. Therefore, random number generation is one of the main keys to providing secrecy. Android devices are equipped with hardware-based sensors, such as the accelerometer, gyroscope, etc. Each of these sensors provides a stochastic process which has the potential to be used as an extra randomness source, in addition to the /dev/random and /dev/urandom pseudorandom number generators. Android sensors can provide randomness automatically. To obtain randomness from Android sensors, each of the Android sensors is used to construct an entropy source. After all entropy sources are constructed, the outputs from these entropy sources are combined to provide more entropy. Then, a deterministic process is used to produce a sequence of random bits from the combined output. All of these processes are done in accordance with NIST SP 800-22 and the NIST SP 800-90 series. The operating conditions are: 1) the generator runs in Android user space, and 2) the Android device is placed motionless on a desk.
Keywords: Android hardware-based sensor, deterministic process, entropy source, random number generation/generators
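A minimal sketch of the pool-and-extract idea, assuming a hypothetical read_sensor() stand-in for the sensor sampling and a SHA-256-based conditioning step; this is illustrative only, as the paper follows the NIST SP 800-90 constructions rather than this ad hoc extractor:

```python
import hashlib
import os
import struct

def read_sensor():
    # Hypothetical placeholder: on a real device this would be an (x, y, z)
    # accelerometer or gyroscope sample; here random bytes stand in for it.
    return struct.unpack("3f", os.urandom(12))

class SensorEntropyPool:
    def __init__(self):
        self.pool = hashlib.sha256()
        self.counter = 0

    def add_sample(self, sample):
        # Mix each raw sensor reading into the entropy pool.
        self.pool.update(struct.pack("3f", *sample))

    def random_bytes(self, n):
        # Deterministic extraction: hash the pool state with a counter.
        out = b""
        while len(out) < n:
            self.counter += 1
            out += hashlib.sha256(self.pool.digest() +
                                  self.counter.to_bytes(8, "big")).digest()
        return out[:n]

pool = SensorEntropyPool()
for _ in range(256):                 # accumulate entropy before generating output
    pool.add_sample(read_sensor())
print(pool.random_bytes(16).hex())
```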
Procedia PDF Downloads 376
2867 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load
Authors: Sanjin Kršćanski, Josip Brnić
Abstract:
The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and, as such, can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack, subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of the finite element size on the accuracy of the calculated SIF was also studied. The final part of this paper includes a comparison of the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. Results calculated by this VCCT-based algorithm were found to be in good correlation with results obtained from the mentioned analytical expressions.
Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending
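A minimal sketch of the mode I VCCT bookkeeping for regular 4-node 2D elements, assuming a plane-stress (or plane-strain) conversion from energy release rate to SIF; the input numbers are illustrative, not the paper's specimen data:

```python
import math

# Mode I VCCT for 4-node elements: the energy release rate is estimated from
# the nodal force at the crack tip (F_y) and the opening displacement one
# element behind it (dv),
#     G_I = F_y * dv / (2 * t * da),
# and converted to a stress intensity factor K_I = sqrt(E' * G_I), with
# E' = E for plane stress and E / (1 - nu^2) for plane strain.
def vcct_mode_I(F_y, dv, da, t, E, plane_strain=False, nu=0.3):
    G_I = F_y * dv / (2.0 * t * da)
    E_eff = E / (1.0 - nu**2) if plane_strain else E
    return G_I, math.sqrt(E_eff * G_I)

# Example: forces in N, lengths in mm, E in MPa -> K_I in MPa*sqrt(mm).
G, K = vcct_mode_I(F_y=120.0, dv=0.004, da=0.5, t=5.0, E=200e3)
print(f"G_I = {G:.4f} N/mm, K_I = {K:.1f} MPa*sqrt(mm)")
```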
Procedia PDF Downloads 306
2866 Using HABIT to Establish the Chemicals Analysis Methodology for Maanshan Nuclear Power Plant
Authors: J. R. Wang, S. W. Chen, Y. Chiang, W. S. Hsu, J. H. Yang, Y. S. Tseng, C. Shih
Abstract:
In this research, the HABIT analysis methodology was established for the Maanshan nuclear power plant (NPP). The Final Safety Analysis Report (FSAR), reports, and other data were used in this study. To evaluate the control room habitability under a CO2 storage burst, the HABIT methodology was used to perform this analysis. The HABIT result was below the R.G. 1.78 failure criteria. This indicates that Maanshan NPP habitability can be maintained. Additionally, a sensitivity study of the parameters (wind speed, atmospheric stability classification, air temperature, and control room intake flow rate) was performed in this research.
Keywords: PWR, HABIT, Habitability, Maanshan
Procedia PDF Downloads 446
2865 Using HABIT to Estimate the Concentration of CO2 and H2SO4 for Kuosheng Nuclear Power Plant
Authors: Y. Chiang, W. Y. Li, J. R. Wang, S. W. Chen, W. S. Hsu, J. H. Yang, Y. S. Tseng, C. Shih
Abstract:
In this research, the HABIT code was used to estimate the concentration under the CO2 and H2SO4 storage burst conditions for Kuosheng nuclear power plant (NPP). The Final Safety Analysis Report (FSAR) and reports were used in this research. In addition, to evaluate the control room habitability for these cases, the HABIT analysis results were compared with the R.G. 1.78 failure criteria. The comparison results show that the HABIT results are below the criteria. Additionally, some sensitivity studies (stability classification, wind speed and control room intake rate) were performed in this study.
Keywords: BWR, HABIT, habitability, Kuosheng
Procedia PDF Downloads 491
2864 Scoring System for the Prognosis of Sepsis Patients in Intensive Care Units
Authors: Javier E. García-Gallo, Nelson J. Fonseca-Ruiz, John F. Duitama-Munoz
Abstract:
Sepsis is a syndrome that occurs with physiological and biochemical abnormalities induced by severe infection and carries high mortality and morbidity; therefore, the severity of the patient's condition must be assessed quickly. After patient admission to an intensive care unit (ICU), it is necessary to synthesize the large volume of information collected from patients into a value that represents the severity of their condition. Traditional severity-of-illness scores seek to be applicable to all patient populations and usually assess in-hospital mortality. However, the use of machine learning techniques and the data of a population that shares a common characteristic could lead to the development of customized mortality prediction scores with better performance. This study presents the development of a score for the one-year mortality prediction of patients admitted to an ICU with a sepsis diagnosis. 5650 ICU admissions extracted from the MIMIC-III database were evaluated, divided into two groups: 70% to develop the score and 30% to validate it. Comorbidities, demographics and clinical information from the first 24 hours after ICU admission were used to develop a mortality prediction score. LASSO (least absolute shrinkage and selection operator) and SGB (Stochastic Gradient Boosting) variable importance methodologies were used to select the set of variables that make up the developed score; each of these variables was dichotomized, and a cut-off point that divides the population into two groups with different mean mortalities was found. If the patient is in the group that presents the higher mortality, a one is assigned to the particular variable; otherwise, a zero is assigned. These binary variables are used in a logistic regression (LR) model, and its coefficients are rounded to the nearest integer. The resulting integers are the point values that make up the score when multiplied by each binary variable and summed. The one-year mortality probability was estimated using the score as the only variable in an LR model. The predictive power of the score was evaluated using the 1695 admissions of the validation subset, obtaining an area under the receiver operating characteristic curve of 0.7528, which outperforms the results obtained with the Sequential Organ Failure Assessment (SOFA), Oxford Acute Severity of Illness Score (OASIS) and Simplified Acute Physiology Score II (SAPS II) scores on the same validation subset. Observed and predicted mortality rates within estimated probability deciles were compared graphically and found to be similar, indicating that the risk estimate obtained with the score is close to the observed mortality; it is also observed that the number of events (deaths) increases as one moves from the decile with the lowest probabilities to the decile with the highest probabilities. Sepsis is a syndrome that carries a high mortality, 43.3% for the patients included in this study; therefore, tools that help clinicians to quickly and accurately predict a worse prognosis are needed. This work demonstrates the importance of customization of mortality prediction scores, since the developed score provides better performance than traditional scoring systems.
Keywords: intensive care, logistic regression model, mortality prediction, sepsis, severity of illness, stochastic gradient boosting
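A minimal sketch of the dichotomize-fit-round score construction described above, using synthetic stand-in variables and cut-offs (not MIMIC-III data) and omitting the LASSO/SGB variable-selection step:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in cohort with three illustrative predictors.
rng = np.random.default_rng(0)
n = 2000
age = rng.normal(65, 15, n)
lactate = rng.gamma(2.0, 1.5, n)
creatinine = rng.gamma(2.0, 0.7, n)
logit = -3.0 + 0.03 * age + 0.4 * lactate + 0.5 * creatinine
death = rng.random(n) < 1 / (1 + np.exp(-logit))

# Step 1: dichotomize each variable at an illustrative cut-off point.
X = np.column_stack([age > 70, lactate > 2.5, creatinine > 1.5]).astype(int)

# Step 2: fit a logistic regression on the binary indicators and round the
# coefficients to integer point values.
lr = LogisticRegression().fit(X, death)
points = np.rint(lr.coef_[0]).astype(int)
print("points per variable:", dict(zip(["age>70", "lactate>2.5", "creat>1.5"], points)))

# Step 3: estimate mortality probability from the summed score alone.
score = X @ points
risk_model = LogisticRegression().fit(score.reshape(-1, 1), death)
print("predicted mortality for score=3:",
      float(risk_model.predict_proba([[3]])[0, 1]))
```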
Procedia PDF Downloads 225
2863 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs
Authors: M. De Filippo, J. S. Kuang
Abstract:
In the construction industry, reinforced concrete (RC) slabs represent fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early years of the last century, the yield-line method was proposed as an attempt to solve this problem. Simple geometry problems could easily be solved by using traditional hand analyses which include plasticity theories. Nowadays, advanced finite element (FE) analyses have mainly found their way into applications in many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the application of an elastic or a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability to automated computations. However, elastic analyses are limited since they do not consider any aspect of the material behaviour beyond its yield limit, which turns out to be an essential aspect of RC structural performance. On the other hand, non-linear analysis for modeling plastic behaviour gives very reliable results; per contra, this type of analysis is computationally quite expensive, i.e. not well suited for solving daily engineering problems. In the past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper aims at proposing a numerical procedure through which a pseudo-lower-bound solution, not violating the yield criterion, is achieved. The advantages of moment distribution are taken into account; hence, the increase in strength provided by plastic behaviour is considered. The lower-bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment distribution among the rest of the non-yielded elements. The proposed technique obeys Nielsen's yield criterion. The outcome of this analysis provides a simple, yet accurate and non-time-consuming, tool for predicting the lower-bound solution of the collapse load of RC slabs. By using this method, structural engineers can find the fracture patterns and ultimate load-bearing capacity. The collapse-triggering mechanism is found by detecting yield lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact values of the collapse load.
Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line
Procedia PDF Downloads 179
2862 Performance Evaluation of Filtration System for Groundwater Recharging Well in the Presence of Medium Sand-Mixed Storm Water
Authors: Krishna Kumar Singh, Praveen Jain
Abstract:
The collection of storm water runoff and forcing it into the groundwater is the need of the hour to sustain the groundwater table. However, the runoff entraps various types of sediments and other floating objects whose removal is essential to avoid pollution of the groundwater and blocking of the aquifer pores. The recharge system therefore requires regular cleaning and maintenance due to the problem of clogging. To evaluate the performance of a filter system consisting of coarse sand (CS), gravel (G) and pebble (P) layers, a laboratory experiment was conducted in a rectangular column. The effect of variable thickness of the CS, G and P layers of the filtration unit of the recharge shaft on the recharge rate and the sediment concentration of the effluent water was evaluated. Medium sand (MS) of three particle sizes, viz. 0.150–0.300 mm (T1), 0.300–0.425 mm (T2) and 0.425–0.600 mm (T3), of thickness 25 cm, 30 cm, and 35 cm respectively in the top layer of the filter system, and seven influent sediment concentrations of 250–3,000 mg/l, were used for the experimental study. The performance was evaluated in terms of recharge rates and clogging time. The results indicated that 100% of suspended solids were entrapped in the upper 10 cm layer of MS, and the recharge rates declined sharply for influent concentrations of more than 1,000 mg/l. All treatments with a higher thickness of MS media indicated recharge rates slightly greater than those of treatments with a lower thickness of MS media. The performance of storm water infiltration systems was highly dependent on the formation of a clogging layer at the filter. An empirical relationship has been derived between recharge rate, inflow sediment load, size of MS and thickness of MS using multiple linear regression (MLR).
Keywords: groundwater, medium sand-mixed storm water filter, inflow sediment load
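A minimal sketch of the kind of MLR fit mentioned in the last sentence, using synthetic stand-in data with illustrative units rather than the study's measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Recharge rate regressed on inflow sediment load, MS particle size and MS
# layer thickness; all values are synthetic stand-ins.
rng = np.random.default_rng(1)
n = 60
sediment = rng.uniform(250, 3000, n)                 # mg/l
ms_size = rng.choice([0.225, 0.3625, 0.5125], n)     # mean particle size, mm
thickness = rng.choice([25, 30, 35], n)              # cm
recharge = (120 - 0.03 * sediment + 40 * ms_size + 0.8 * thickness
            + rng.normal(0, 5, n))                   # illustrative response, l/h

X = np.column_stack([sediment, ms_size, thickness])
mlr = LinearRegression().fit(X, recharge)
print("coefficients:", dict(zip(["sediment", "size", "thickness"], mlr.coef_.round(3))))
print("intercept:", round(mlr.intercept_, 2), " R^2:", round(mlr.score(X, recharge), 3))
```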
Procedia PDF Downloads 393
2861 Introduction to Multi-Agent Deep Deterministic Policy Gradient
Authors: Xu Jie
Abstract:
As a key network security method, cryptographic services must fully cope with problems such as the wide variety of cryptographic algorithms, high concurrency requirements, random job crossovers, and instantaneous surges in workloads. Their complexity and dynamics also make it difficult for traditional static security policies to cope with the ever-changing cyber threat environment. Traditional resource scheduling algorithms are inadequate when facing complex decision-making problems in dynamic environments. A network cryptographic resource allocation algorithm based on reinforcement learning is proposed, aiming to optimize task energy consumption, migration cost, and the fitness of differentiated services (including user, data, and task security). By modeling the multi-job collaborative cryptographic service scheduling problem as a multi-objective optimized job flow scheduling problem, and using a multi-agent reinforcement learning method, efficient scheduling and optimal configuration of cryptographic service resources are achieved. By introducing reinforcement learning, resource allocation strategies can be adjusted in real time in a dynamic environment, improving resource utilization and achieving load balancing. Experimental results show that this algorithm has significant advantages in path planning length, system delay and network load balancing, and effectively solves the problem of complex resource scheduling in cryptographic services.
Keywords: multi-agent reinforcement learning, non-stationary dynamics, multi-agent systems, cooperative and competitive agents
Procedia PDF Downloads 26
2860 Pump-as-Turbine: Testing and Characterization as an Energy Recovery Device, for Use within the Water Distribution Network
Authors: T. Lydon, A. McNabola, P. Coughlan
Abstract:
Energy consumption in the water distribution network (WDN) is a well-established problem: the industry contributes heavily to carbon emissions, with 0.9 kg of CO2 emitted per m3 of water supplied. It is indicated that 85% of the energy wasted in the WDN can be recovered by installing turbines. The existing potential in networks is present at small-capacity sites (5-10 kW) that are numerous and dispersed across networks. However, traditional turbine technology cannot be scaled down to this size in an economically viable fashion, thus alternative approaches are needed. This research aims to enable energy recovery within the WDN by exploring the potential of pumps-as-turbines (PATs). PATs are estimated to be ten times cheaper than traditional micro-hydro turbines, presenting the potential to contribute to an economically viable solution. However, a number of technical constraints currently prohibit their widespread use, including the inability of a PAT to control pressure, difficulty in the selection of PATs due to a lack of performance data, and a lack of understanding of how PATs can cater for fluctuations as extreme as +/- 50% of the average daily flow, characteristic of the WDN. A PAT prototype is undergoing testing in order to identify the capabilities of the technology. Results of preliminary testing, which involved testing the efficiency and power potential of the PAT for varying flow and pressure conditions in order to develop characteristic and efficiency curves for the PAT and a baseline understanding of the technology's capabilities, are presented here:
• The limitations of existing selection methods, which convert the BEP from pump operation to the BEP in turbine operation, were highlighted by the failure of such methods to reflect the conditions of maximum efficiency of the PAT. A generalised selection method for the WDN may need to be informed by an understanding of the impact of flow variations and pressure control on system power potential, capital cost, maintenance costs, and payback period.
• A clear relationship between flow and the efficiency of the PAT has been established. The rate of efficiency reduction for flows +/- 50% of the BEP is significant, and more extreme for deviations in flow above the BEP than below, but not dissimilar to the efficiency behaviour of other turbines.
• A PAT alone is not sufficient to regulate pressure, yet the relationship of pressure across the PAT is foundational in exploring ways in which PAT energy recovery systems can maintain the required pressure level within the WDN. The efficiencies of PAT energy recovery systems under the pressure-regulating operating conditions that have been conceptualised in the current literature need to be established.
Initial results guide the focus of forthcoming testing and exploration of PAT technology towards how PATs can form part of an efficient energy recovery system.
Keywords: energy recovery, pump-as-turbine, water distribution network
Procedia PDF Downloads 262