Search results for: reinforcement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 695

635 Conscious Intention-based Processes Impact the Neural Activities Prior to Voluntary Action on Reinforcement Learning Schedules

Authors: Xiaosheng Chen, Jingjing Chen, Phil Reed, Dan Zhang

Abstract:

Conscious intention is a promising entry point for grasping consciousness and its role in orienting voluntary action. Instead of the highly repeatable, single-decision-point paradigms used previously, the current study adopted random ratio (RR) and yoked random interval (RI) reinforcement learning schedules, aiming to induce voluntary action with a conscious intention that evolves from the interaction between short-range and long-range intention. Readiness-potential (RP)-like EEG amplitude and inter-trial EEG variability decreased significantly prior to voluntary action compared to cued action, an effect mainly featured during the earlier stage of neural activities. Notably, RP-like EEG amplitudes decreased significantly prior to responses under higher RI reward rates, for which participants formed a higher plane of conscious intention. The present study suggests a possible contribution of conscious intention-based processes to the neural activities from the earlier stage prior to voluntary action on reinforcement learning schedules.
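The schedule mechanics referenced above can be illustrated with a minimal sketch. This is not the study's procedure; the ratio, the timing rule, and the one-reward-per-response collection rule are all illustrative assumptions:

```python
import random

def run_rr(n_responses, ratio=10, seed=0):
    """Random-ratio schedule: each response is rewarded with probability 1/ratio."""
    rng = random.Random(seed)
    return sum(rng.random() < 1.0 / ratio for _ in range(n_responses))

def run_yoked_ri(response_times, arm_times):
    """Yoked random-interval schedule: a reward is armed at each yoked time and
    collected by the first response made at or after that time."""
    earned, i = 0, 0
    for t in sorted(response_times):
        if i < len(arm_times) and arm_times[i] <= t:
            earned += 1   # first response after an armed interval collects it
            i += 1
    return earned
```

On an RR schedule, reward depends on the response count alone, whereas on the yoked RI schedule it depends on elapsed time; that is what allows the two conditions to be matched on reward rate while dissociating response-reward contingency.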

Keywords: reinforcement learning schedule, voluntary action, EEG, conscious intention, readiness potential

Procedia PDF Downloads 45
634 Experimental Study of Infill Walls with Joint Reinforcement Subjected to In-Plane Lateral Load

Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas

Abstract:

The experimental results about the global behavior of twelve 1:2 scaled reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence about the use of steel bars within mortar bed joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls, and little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of infill masonry walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. Confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances; however, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how these structures support vertical and lateral loads. To achieve the stated objective, ten reinforced concrete frames with masonry infill walls were built and tested in pairs, the two specimens in each pair having identical characteristics except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame, and the aspect ratio of the wall. All cases included tie columns and tie beams on the perimeter of the wall to anchor the joint reinforcement. Also, two bare frames with characteristics identical to the infilled frames were tested, in order to investigate the effects of the infill wall on the response of the system to in-plane lateral load. In addition, the experimental results were compared with the predictions of the Mexican code.
All the specimens were tested as cantilevers under reversible cyclic lateral load. To simulate gravity load, a constant vertical load was applied on top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame: larger columns produce a failure mode that is predominantly sliding, and sliding inhibits the formation of the new inclined cracks that are necessary to activate (deform) the joint reinforcement. Many effects of joint reinforcement known from the performance of confined masonry walls were confirmed for infill walls: this type of reinforcement increases the lateral strength of the wall, produces more distributed cracking, and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The lateral strength predicted by the Mexican code is adequate in some cases; however, the effect of the size of the columns on the contribution of joint reinforcement needs to be better understood.

Keywords: experimental study, infill wall, infilled frame, masonry wall

Procedia PDF Downloads 146
633 Fatigue of Multiscale Nanoreinforced Composites: 3D Modelling

Authors: Leon Mishnaevsky Jr., Gaoming Dai

Abstract:

3D numerical simulations of fatigue damage of multiscale fiber reinforced polymer composites with secondary nanoclay reinforcement are carried out. Macro-micro FE models of the multiscale composites are generated automatically using Python-based software. The effect of the nanoclay reinforcement, either localized in the fiber/matrix interface (fiber sizing) or distributed throughout the matrix, on the crack path, damage mechanisms, and fatigue behavior is investigated in numerical experiments.

Keywords: computational mechanics, fatigue, nanocomposites, composites

Procedia PDF Downloads 576
632 Effectiveness of Reinforcement Learning (RL) for Autonomous Energy Management Solutions

Authors: Tesfaye Mengistu

Abstract:

This thesis investigates the effectiveness of reinforcement learning (RL) for autonomous energy management solutions. The study explores the potential of model-free RL approaches, such as Monte Carlo RL and Q-learning, to improve energy management by autonomously adjusting energy management strategies to maximize efficiency. The research investigates the implementation of RL algorithms for optimizing energy consumption in a single-agent environment. The focus is on developing a framework for the implementation of RL algorithms, highlighting the importance of RL for enabling autonomous systems to adapt quickly to changing conditions and make decisions based on previous experience. Moreover, the thesis proposes RL as a novel energy management solution to address nations' CO2 emission goals. Reinforcement learning algorithms are well suited to problems with sequential decision-making patterns and can provide accurate and immediate outputs that ease the planning and decision-making process. This research provides insights into the challenges and opportunities of using RL for energy management solutions and recommends further studies to explore its full potential. In conclusion, this study provides valuable insights into how RL can improve the efficiency of energy management systems and supports RL as a promising approach for developing autonomous energy management solutions in residential buildings.
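A hedged sketch of the tabular Q-learning approach mentioned above. The toy environment (a small battery that charges in cheap periods and discharges in expensive ones) and all its numbers are assumptions for illustration, not the thesis's actual setup:

```python
import random

# Purely illustrative toy setting: a small battery trades energy between
# alternating cheap and expensive periods. All numbers are assumptions.
PRICES = [1.0, 3.0]               # price in cheap (t=0) and expensive (t=1) periods
LEVELS, ACTIONS = 5, (-1, 0, 1)   # charge levels; discharge / idle / charge

def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=1):
    """Tabular Q-learning over (battery level, period type, action)."""
    rng = random.Random(seed)
    Q = {(l, t, a): 0.0 for l in range(LEVELS) for t in range(2) for a in ACTIONS}
    for _ in range(episodes):
        lvl = LEVELS // 2
        for step in range(24):                      # one simulated day
            t = step % 2
            a = (rng.choice(ACTIONS) if rng.random() < eps
                 else max(ACTIONS, key=lambda x: Q[(lvl, t, x)]))
            new_lvl = min(LEVELS - 1, max(0, lvl + a))
            reward = -PRICES[t] * (new_lvl - lvl)   # pay to charge, earn to discharge
            nxt = (step + 1) % 2
            best_next = max(Q[(new_lvl, nxt, x)] for x in ACTIONS)
            Q[(lvl, t, a)] += alpha * (reward + gamma * best_next - Q[(lvl, t, a)])
            lvl = new_lvl
    return Q
```

With enough episodes, the greedy policy extracted from Q learns the buy-low/sell-high pattern without any model of the price process, which is the model-free property the thesis relies on.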

Keywords: artificial intelligence, reinforcement learning, Monte Carlo, energy management, CO2 emission

Procedia PDF Downloads 51
631 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits

Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.

Abstract:

With the evolution of technology, the need to solve complex computational problems like machine learning and deep learning has shot up, but even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as present work on Quantum Machine Learning (QML) promises lower memory consumption and fewer model parameters. However, it is difficult to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. Consequently, it is essential to design viable quantum algorithms for QML on noisy intermediate-scale quantum (NISQ) devices. The proposed work explores Variational Quantum Circuits (VQC) for deep reinforcement learning by remodeling the experience replay and target network into a VQC representation. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and a target network.
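A minimal, purely illustrative sketch of a VQC used as a Q-value approximator, simulated with plain linear algebra on a single qubit. The angle encoding, gate choice, and circuit depth here are assumptions, far simpler than the proposed model:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def vqc_q_value(state_feature, params):
    """Encode a scalar state feature as a rotation angle, apply trainable
    variational rotations, and return the Pauli-Z expectation as the Q-value."""
    psi = np.array([1.0, 0.0])            # start in |0>
    psi = ry(state_feature) @ psi         # angle encoding of the input
    for theta in params:                  # variational layer(s)
        psi = ry(theta) @ psi
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(psi.conj() @ z @ psi)    # <psi|Z|psi> in [-1, 1]
```

Training would adjust `params` by gradient steps on a temporal-difference loss, exactly where a classical network's weights would be updated; the circuit's few angles stand in for the many weights of a deep network.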

Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme

Procedia PDF Downloads 89
630 Numerical Analysis of Rainfall-Induced Roadside Slope Failures and Their Stabilizing Solution

Authors: Muhammad Suradi, Sugiarto, Abdullah Latip

Abstract:

Many roadside slope failures occur during the rainy season, particularly in periods of extreme rainfall, along the connecting national road of Salubatu-Mambi, West Sulawesi, Indonesia. These failures obstruct traffic and endanger people along and around the road. A research collaboration was established between P2JN (National Road Construction Board) of West Sulawesi Province, which is authorized to supervise the road condition, and Ujung Pandang State Polytechnic (Applied University) to cope with the landslide problem. This research aims to determine the factors triggering roadside slope failures and their optimum stabilizing solution. To achieve this objective, site observation and soil investigation were carried out to obtain parameters for analyses of rainfall-induced slope instability and reinforcement design using the SV Flux and SV Slope software, and the results of these analyses were then used to obtain an optimum design of the slope reinforcement. The results indicate that factors such as steep slopes, sandy soils, and unvegetated slope surfaces mainly contribute to the slope failures during intense rainfall. With respect to these contributing factors, as well as construction materials and technology, a cantilever/buttressing retaining wall emerges as the optimum solution for roadside slope reinforcement.
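The analyses above rely on the SV Flux and SV Slope software; as a much simpler back-of-the-envelope check of the same kind of question, the classical infinite-slope factor of safety can be sketched as follows (a textbook formula, not the paper's method):

```python
import math

def infinite_slope_fs(c, gamma, z, beta_deg, phi_deg, u=0.0):
    """Factor of safety of an infinite slope (Mohr-Coulomb):
    FS = (c' + (gamma*z*cos^2(beta) - u) * tan(phi')) / (gamma*z*sin(beta)*cos(beta))
    c: effective cohesion (kPa), gamma: unit weight (kN/m^3), z: failure depth (m),
    beta_deg: slope angle, phi_deg: effective friction angle, u: pore pressure (kPa)."""
    b, p = math.radians(beta_deg), math.radians(phi_deg)
    resisting = c + (gamma * z * math.cos(b) ** 2 - u) * math.tan(p)
    driving = gamma * z * math.sin(b) * math.cos(b)
    return resisting / driving
```

For a dry cohesionless slope this reduces to FS = tan(phi') / tan(beta), and rainfall raises the pore pressure u, driving FS down; this is consistent with the finding above that steep, sandy, unvegetated slopes fail during intense rainfall.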

Keywords: roadside slope, failure, rainfall, slope reinforcement, optimum solution

Procedia PDF Downloads 62
629 A Reinforcement Learning Approach for Evaluation of Real-Time Disaster Relief Demand and Network Condition

Authors: Ali Nadi, Ali Edrissi

Abstract:

Relief demand and transportation link availability are essential information for any natural disaster operation, yet this information is not at hand once a disaster strikes. In related work, relief demand and network condition have been evaluated with prediction methods. Nevertheless, predictions tend to over- or underestimate due to uncertainties and may lead to a failed operation. Therefore, in this paper a stochastic programming model is proposed to evaluate real-time relief demand and network condition at the onset of a natural disaster. To address the time sensitivity of the emergency response, the proposed model uses reinforcement learning to minimize the total relief assessment time. The proposed model is tested on a real-sized network problem. The simulation results indicate that the proposed model performs well at collecting real-time information.

Keywords: disaster management, real-time demand, reinforcement learning, relief demand

Procedia PDF Downloads 269
628 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces

Authors: Shweta Singh, Sudaman Katti

Abstract:

The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research instead makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders that use self-attention and operate on a memory are used. In this research work, results were obtained for various 3D visual and non-visual reinforcement learning tasks designed in the Unity software. Convolutional neural networks, specifically the Nature CNN architecture, are used for input processing in visual tasks, and a comparison with the standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This work combines the transformer architecture with the proximal policy optimization technique, used popularly in reinforcement learning for stability and better policy updates during training, especially for the continuous action spaces used in this research. Certain tasks in this paper are long-horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities such as storing experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and a self-attention mechanism in an encoder-decoder configuration, proved to have better performance than the LSTM in terms of exploration and rewards achieved. Such memory-based architectures can be used extensively in the fields of cognitive robotics and reinforcement learning.
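The self-attention operation at the core of the transformer described above can be sketched in a few lines (single head, no masking or positional encoding; purely illustrative):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a sequence x of shape (T, d):
    each position attends to every position, weighted by query-key similarity."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # (T, T) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # weighted mix of values
```

Because every output position can attend directly to any stored position, recall over a long horizon does not have to pass through a recurrent bottleneck, which is the advantage over the LSTM baseline claimed above.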

Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, Unity

Procedia PDF Downloads 93
627 Using Q-Learning to Auto-Tune PID Controller Gains for Online Quadcopter Altitude Stabilization

Authors: Y. Alrubyli

Abstract:

Unmanned aerial vehicles (UAVs), and more specifically quadcopters, need to be stable during their flights. Altitude stability is usually achieved with a PID controller built into the flight controller software, and the PID controller has gains that need to be tuned to reach optimal altitude stabilization during the quadcopter's flight. To do so, control system engineers tune those gains using extensive modeling of the environment, which might change from one environment and condition to another. As quadcopters penetrate more sectors, from the military to the consumer sector, they are being put into more complex and challenging environments than ever before. Hence, intelligent self-stabilizing quadcopters are needed to maneuver through those complex environments and situations. Here we show that by using online reinforcement learning with minimal background knowledge, altitude stability of the quadcopter can be achieved using a model-free approach. We found that seeding the online reinforcement learning algorithm with background knowledge, instead of letting it wander for a while to tune the PID gains, achieves altitude stabilization faster. In addition, this approach accelerates development by avoiding extensive simulations before applying the PID gains to the real-world quadcopter. Our results demonstrate the possibility of combining the trial-and-error approach of reinforcement learning with background knowledge to achieve faster quadcopter altitude stabilization in different environments and conditions.
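A toy sketch of tuning a gain by trial and error with tabular Q-learning. The plant, the gain grid, and the reward are assumptions for illustration; a real quadcopter altitude loop has three PID gains and far richer dynamics:

```python
import random

def simulate_error(kp, target=1.0, steps=50, dt=0.1):
    """Toy first-order altitude plant z' = kp * (target - z); returns summed |error|.
    Too small a gain tracks slowly; too large a gain oscillates and diverges."""
    z, total = 0.0, 0.0
    for _ in range(steps):
        z += kp * (target - z) * dt
        total += abs(target - z)
    return total

GAINS = [0.2, 0.5, 1.0, 2.0, 5.0, 12.0, 25.0]   # assumed discrete Kp candidates

def tune_kp(steps=300, alpha=0.2, eps=0.2, seed=3):
    """Q-learning over gain bins: actions nudge Kp down/keep/up; the reward is
    the negative tracking error of a short simulated flight at the new gain."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0, 0.0] for _ in GAINS]
    s = 0
    for _ in range(steps):
        a = rng.randrange(3) if rng.random() < eps else max(range(3), key=lambda x: Q[s][x])
        s2 = min(len(GAINS) - 1, max(0, s + a - 1))   # a: 0=down, 1=keep, 2=up
        r = -simulate_error(GAINS[s2])
        Q[s][a] += alpha * (r - Q[s][a])              # myopic update (gamma = 0)
        s = s2
    s = 0                                             # read out the greedy resting gain
    for _ in range(2 * len(GAINS)):
        s = min(len(GAINS) - 1, max(0, s + max(range(3), key=lambda x: Q[s][x]) - 1))
    return GAINS[s]
```

Seeding Q with a rough prior (background knowledge) instead of zeros would shorten the wandering phase, which is the speed-up the abstract reports.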

Keywords: reinforcement learning, Q-learning, online learning, PID tuning, unmanned aerial vehicle, quadcopter

Procedia PDF Downloads 136
626 Wear Map for Cu-Based Friction Materials with Different Contents of Fe Reinforcement

Authors: Haibin Zhou, Pingping Yao, Kunyang Fan

Abstract:

Copper-based sintered friction materials are widely used in the brake systems of applications such as engineering machinery and high-speed trains, due to their excellent mechanical, thermal, and tribological performance. Considering the diversity of the working conditions of brake systems, it is necessary to understand the tribological performance and wear mechanisms of friction materials under different conditions. Fe has been a preferred reinforcement for copper-based friction materials, due to its ability to improve the wear resistance and mechanical properties of the material. A wear map is well accepted as a useful research tool for evaluating wear performance and wear mechanisms over a wide range of working conditions. It is therefore important to construct a wear map that captures the effects of working condition and Fe reinforcement on the tribological performance of Cu-based friction materials. In this study, copper-based sintered friction materials with different additions of Fe reinforcement (0-20 vol.%) were studied. The tribological tests were performed against stainless steel in a ring-on-ring braking tester with varying braking energy density (0-5000 J/cm²). The linear wear and friction coefficient were measured, and the worn surfaces, cross sections, and debris were analyzed to determine the dominant wear mechanisms for the different testing conditions. On the basis of the experimental results, a wear map and a wear mechanism map were established in terms of braking energy density and Fe addition. It was found that with low Fe content and low braking energy density, adhesive wear was the dominant wear mechanism of the friction materials. Oxidative wear and abrasive wear mainly occurred under moderate braking energy density. Under high braking energy density, with both high and low additions of Fe, delamination appeared as the main wear mechanism.
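The wear-mechanism map reported above can be read qualitatively as a lookup from operating condition to dominant mechanism. The numeric boundaries below are invented for illustration; the paper derives the actual map from its ring-on-ring tests:

```python
def dominant_wear_mechanism(energy_j_cm2):
    """Qualitative reading of the wear-mechanism map described in the abstract.
    Boundary values are illustrative assumptions, not the measured ones."""
    if energy_j_cm2 > 3500:
        return "delamination"        # high energy density, high or low Fe addition
    if energy_j_cm2 > 1500:
        return "oxidative/abrasive"  # moderate energy density
    return "adhesive"                # low energy density (reported for low Fe content)
```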

Keywords: Cu-based friction materials, Fe reinforcement, wear map, wear mechanism

Procedia PDF Downloads 239
625 A Rapid Reinforcement Technique for Columns by Carbon Fiber/Epoxy Composite Materials

Authors: Faruk Elaldi

Abstract:

Many concrete columns and beams exist in our cities, mostly exposed to aggressive environmental conditions and earthquakes. They deteriorate over time through sand, wind, humidity, and other external effects, and after a while these beams and columns need to be repaired. Within the scope of this study, samples were designed and fabricated for the reinforcement of concrete columns, strengthened either with carbon fiber reinforced composite materials or with conventional concrete encapsulation, and then subjected to an axial compression test to determine load-carrying performance before column failure. In the first stage of this study, the concrete column and mold designs were completed for a certain load-carrying capacity. Later, the columns were exposed to environmental deterioration in order to reduce their load-carrying capacity. To reinforce these damaged columns, two methods were applied: concrete encapsulation and wrapping with carbon fiber/epoxy material. In the second stage of the study, the reinforced columns were subjected to the axial compression test and the results obtained were analyzed. Cost and load-carrying performance comparisons showed that, even though the carbon fiber/epoxy method is more expensive, it provides a higher load-carrying capacity and shortens the reinforcement processing period.

Keywords: column reinforcement, composite, earthquake, carbon fiber reinforced

Procedia PDF Downloads 146
624 Review on Wear Behavior of Magnesium Matrix Composites

Authors: Amandeep Singh, Niraj Bala

Abstract:

In recent decades, light-weight materials such as magnesium matrix composites have become a hot topic in materials research due to their excellent mechanical and physical properties. However, relatively little work has been done on the wear behavior of these composites, which have wide applications in the automobile and aerospace sectors. This review collects the literature on the wear behavior of magnesium matrix composites fabricated through various processing techniques such as stir casting, powder metallurgy, and friction stir processing. The effects of different reinforcements, reinforcement content, reinforcement size, wear load, sliding speed, and time have been studied in detail by different researchers, and the wear mechanisms under different experimental conditions are reviewed. The wear resistance of magnesium and its alloys can be enhanced by the addition of different reinforcements, and enhanced further by increasing the percentage of added reinforcements. An increase in applied load during wear testing leads to an increase in the wear rate of magnesium composites.

Keywords: hardness, magnesium matrix composites, reinforcement, wear

Procedia PDF Downloads 296
623 FEM Study of Different Methods of Fiber Reinforcement Polymer Strengthening of a High Strength Concrete Beam-Column Connection

Authors: Talebi Aliasghar, Ebrahimpour Komeleh Hooman, Maghsoudi Ali Akbar

Abstract:

In reinforced concrete (RC) structures, the beam-column connection region has a considerable effect on the behavior of the structure. Using fiber reinforcement polymer (FRP) to strengthen connections in RC structures can be one solution for retrofitting this zone, resulting in enhanced structural behavior. In this paper, these changes in behavior for a high strength concrete beam-column connection strengthened with FRP have been studied by finite element modeling, with the concrete damage plasticity (CDP) model used to analyze the RC. The results showed a considerable increase in load-bearing capacity but also a noticeable reduction in ductility. The study also assesses these qualities for several modes of strengthening and suggests the most effective one: using FRP in the flexural zone, together with FRP with 45-degree oriented fibers in the shear zone of the joint, showed the most significant change in behavior.

Keywords: HSC, beam-column connection, Fiber Reinforcement Polymer, FRP, Finite Element Modeling, FEM

Procedia PDF Downloads 123
622 Deep Reinforcement Learning Model for Autonomous Driving

Authors: Boumaraf Malak

Abstract:

The development of intelligent transportation systems (ITS) and artificial intelligence (AI) is spurring us to pave the way for the widespread adoption of autonomous vehicles (AVs), opening up opportunities for smart roads, smart traffic safety, and mobility comfort. A highly intelligent decision-making system is essential for autonomous driving around dense, dynamic objects: it must handle complex road geometry and topology as well as complex multi-agent interactions, and closely follow higher-level commands such as routing information. Autonomous vehicles have become a very hot research topic in recent years due to their significant potential to reduce traffic accidents and personal injuries. New artificial intelligence-based technologies handle important functions in scene understanding, motion planning, decision making, vehicle control, social behavior, and communication for AVs. This paper focuses only on deep reinforcement learning-based methods and does not cover the traditional planning techniques that were the subject of extensive research in the past, because reinforcement learning (RL) has become a powerful learning framework capable of learning complex policies in high-dimensional environments. We review the DRL algorithms that have so far found solutions to the four main problems of autonomous driving, highlight the remaining challenges, and point to possible future research directions.

Keywords: deep reinforcement learning, autonomous driving, deep deterministic policy gradient, deep Q-learning

Procedia PDF Downloads 46
621 Studying the Influence of Stir Cast Parameters on Properties of Al6061/Al2O3 Composite

Authors: Anuj Suhag, Rahul Dayal

Abstract:

Aluminum matrix composites (AMCs) are the class of metal matrix composites that are lightweight but high-performance aluminum-centric material systems. The reinforcement in AMCs can be in the form of continuous or discontinuous fibers, whiskers, or particulates, in various volume fractions. The properties of AMCs can be tailored to the requirements of different industrial applications by suitable combinations of matrix, reinforcement, and processing route. This work focuses on the fabrication of aluminum alloy (Al6061) matrix composites reinforced with 5 and 3 wt.% Al2O3 particulates of 45 µm size using the stir casting route. The aim of the present work is to investigate the effects of the process parameters, determined by design of experiments, on the microhardness, microstructure, Charpy impact strength, surface roughness, and tensile properties of the AMC.

Keywords: aluminium matrix composite, Charpy impact strength test, composite materials, matrix, metal matrix composite, surface roughness, reinforcement

Procedia PDF Downloads 628
620 Conscription or Constriction: Perception of Students on the Reinforcement of Compulsory Military Service

Authors: Krista Mae F. Ramos, Lance Micaiah C. Dauz, Gylza Nicole D. Bautista, Rua R. Galang, Jeric Xyrus G. Karganilla

Abstract:

With the recent proclamation of the possible reinforcement of Compulsory Military Service in the Philippines, debates and public discussions arose and circulated as opinions and perceptions regarding the topic continue to clash. This study aims to determine the perception of the youth on its reimplementation and to identify its advantages and disadvantages from their perspective. The responses were gathered through virtual call interviews, underwent thematization, and were categorized into different themes. The results reflect that the students perceive compulsory military service as a necessity for national defense, but one that requires a long time that can hinder their education and needs a strong foundation to be implemented and sustained. The participants acknowledged that the practice would instill discipline, patriotism, and nationalism; develop an individual's physical abilities; provide skills and knowledge; and improve a person's self-defense. However, there are also concerns regarding the prominent military shaping and abuse, the loss of freedom of choice, and the chances of health deterioration.

Keywords: compulsory, military, service, reinforcement, perception

Procedia PDF Downloads 125
619 The Effect of Soil Reinforcement on Pullout Behaviour of Flat Under-Reamer Anchor Pile Placed in Sand

Authors: V. K. Arora, Amit Rastogi

Abstract:

Understanding anchor pile behaviour and predicting pile capacity under uplift loading are important concerns in foundation analysis. Experimental model tests were conducted on single anchor piles embedded in cohesionless soil and subjected to pure uplift loading, with a gravel-filled geogrid layer located around the enlarged pile base. The tests were conducted on straight-shafted vertical steel piles with an outer diameter of 20 mm in a steel soil tank. The tested piles have embedment depth-to-diameter ratios (L/D) of 2, 3, and 4, and the sand bed was prepared at three different densities of 1.67, 1.59, and 1.50 g/cc. Single piles embedded in sandy soil were tested, and the results are presented and analysed in this paper. The influences of pile embedment ratio, reinforcement, and relative density of soil on the uplift capacity of piles were investigated. The study revealed that the behaviour of single piles under uplift loading depends mainly on both the pile embedment depth-to-diameter ratio and the soil density. It is believed that the experimental results presented in this study will benefit the professional understanding of the soil-pile-uplift interaction problem.

Keywords: flat under-reamer anchor pile, geogrid, pullout reinforcement, soil reinforcement

Procedia PDF Downloads 431
618 Deep Reinforcement Learning Approach for Optimal Control of Industrial Smart Grids

Authors: Niklas Panten, Eberhard Abele

Abstract:

This paper presents a novel approach for real-time and near-optimal control of industrial smart grids by deep reinforcement learning (DRL). To achieve highly energy-efficient factory systems, the energetic linkage of machines, technical building equipment, and the building itself is desirable. However, the increased complexity of the interacting sub-systems, multiple time-variant target values, and stochastic influences from the production environment, weather, and energy markets make it difficult to efficiently control energy production, storage, and consumption in hybrid industrial smart grids. The studied deep reinforcement learning approach explores the solution space for control policies that minimize a cost function. The deep neural network of the DRL agent is based on a multilayer perceptron (MLP), long short-term memory (LSTM), and convolutional layers. The agent is trained within multiple Modelica-based factory simulation environments by the Advantage Actor-Critic (A2C) algorithm. The DRL controller is evaluated by means of simulation and then compared to a conventional, rule-based approach. The results indicate that the DRL approach is able to improve control performance and significantly reduce the energy and operating costs of industrial smart grids.
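The A2C algorithm named above combines a policy (actor) loss weighted by the advantage with a value (critic) regression loss. A minimal numpy sketch of those two loss terms, with the network itself, entropy bonus, and bootstrapping from the final state omitted:

```python
import numpy as np

def a2c_losses(rewards, values, log_probs, gamma=0.99):
    """Advantage Actor-Critic loss terms for one finished rollout of length T.
    rewards: per-step rewards; values: critic estimates V(s_t); log_probs:
    log pi(a_t | s_t) for the actions actually taken."""
    T = len(rewards)
    returns = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):                 # discounted return-to-go
        running = rewards[t] + gamma * running
        returns[t] = running
    advantages = returns - np.asarray(values)    # how much better than expected
    policy_loss = -np.mean(np.asarray(log_probs) * advantages)  # reinforce good actions
    value_loss = np.mean(advantages ** 2)        # regress critic toward returns
    return float(policy_loss), float(value_loss)
```

In practice the two losses are summed (with a weighting) and backpropagated through the shared MLP/LSTM/convolutional network described above.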

Keywords: industrial smart grids, energy efficiency, deep reinforcement learning, optimal control

Procedia PDF Downloads 162
617 Personalized Email Marketing Strategy: A Reinforcement Learning Approach

Authors: Lei Zhang, Tingting Xu, Jun He, Zhenyu Yan

Abstract:

Email marketing is one of the most important segments of online marketing and has proven to be an effective way to acquire and retain customers. Email content is vital: different customers may have different familiarity with a product, so a successful marketing strategy must personalize email content based on each customer's product affinity. In this study, we build our personalized email marketing strategy with three types of emails: nurture, promotion, and conversion. Each type of email has a different influence on customers; we investigate this difference by analyzing customers' open rates, click rates, and opt-out rates, and we also analyze feature importance from the response models. The goal of the marketing strategy is to improve the click rate on conversion-type emails. To build the personalized strategy, we formulate the problem as a reinforcement learning problem and adopt a Q-learning algorithm with variations. The simulation results show that our model-based strategy outperforms the current marketer's strategy.
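A small sketch of the kind of Q-learning loop described above, reduced to a one-step (bandit-style) update over which email type to send. The affinity states and click probabilities are invented placeholders; in the study these responses come from real customers and fitted response models:

```python
import random

EMAIL_TYPES = ["nurture", "promotion", "conversion"]
# Hypothetical click probabilities per (affinity level, email type) -- invented
# placeholders standing in for real customer responses / response models.
CLICK_PROB = {0: [0.10, 0.05, 0.01],   # low product affinity
              1: [0.06, 0.12, 0.08],   # medium affinity
              2: [0.02, 0.06, 0.15]}   # high affinity

def learn_policy(rounds=60000, alpha=0.005, eps=0.1, seed=7):
    """Epsilon-greedy Q-learning of which email type to send each customer."""
    rng = random.Random(seed)
    Q = {s: [0.0, 0.0, 0.0] for s in CLICK_PROB}
    for _ in range(rounds):
        s = rng.randrange(3)                       # a customer with some affinity arrives
        a = rng.randrange(3) if rng.random() < eps else max(range(3), key=lambda x: Q[s][x])
        r = 1.0 if rng.random() < CLICK_PROB[s][a] else 0.0   # click / no click
        Q[s][a] += alpha * (r - Q[s][a])           # move estimate toward observed reward
    return {s: EMAIL_TYPES[max(range(3), key=lambda x: Q[s][x])] for s in CLICK_PROB}
```

The learned policy sends each affinity segment the email type with the highest estimated click rate; a full formulation would also track the customer's progression between affinity states across emails.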

Keywords: email marketing, email content, reinforcement learning, machine learning, Q-learning

Procedia PDF Downloads 161
616 Characterization of Aluminium Alloy 6063 Hybrid Metal Matrix Composite by Using Stir Casting Method

Authors: Balwinder Singh

Abstract:

This paper characterizes aluminum alloy 6063 hybrid metal matrix composites using three different reinforcement materials (SiC, red mud, and fly ash) through the stir casting method. The red mud was used in solid form, with particle sizes ranging between 103-150 µm, and the fly ash was received from Guru Nanak Dev Thermal Plant (GNDTP), Bathinda. The study was designed using Taguchi's L9 orthogonal array, taking the weight fractions (SiC 5%, 7.5%, and 10%; red mud and fly ash 2%, 4%, and 6%) as input parameters with their respective levels. The mechanical properties (tensile strength, impact strength, and microhardness) were studied using analysis of variance (ANOVA) with the help of MINITAB 17 software. It is revealed that silicon carbide is the most significant parameter, followed by red mud and fly ash, in affecting the mechanical properties. The fractured surface morphology of the composites, examined using a field emission scanning electron microscope (FESEM), shows good mixing of the reinforcement particles in the matrix. Energy-dispersive X-ray spectroscopy (EDS) was performed to identify the phases of the reinforcement material.
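Taguchi analysis of the kind described above ranks factors by signal-to-noise (S/N) ratios; for strength-type responses the standard larger-the-better form applies. A small sketch (any response values fed in would be placeholders, not the paper's data):

```python
import math

def sn_larger_is_better(responses):
    """Taguchi larger-the-better signal-to-noise ratio:
    S/N = -10 * log10( (1/n) * sum(1 / y_i^2) )
    Used for responses like tensile strength, where bigger is better."""
    n = len(responses)
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in responses) / n)
```

Factor effects are then judged by averaging S/N over the L9 runs at each factor level; the factor with the largest level-to-level S/N range is ranked most significant, which is how a conclusion like "SiC is the most significant parameter" is reached.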

Keywords: reinforcement, silicon carbide, fly ash, red mud

Procedia PDF Downloads 122
615 Load-Settlement Behaviour of Geogrid-Reinforced Sand Bed over Granular Piles

Authors: Sateesh Kumar Pisini, Swetha Priya Darshini Thammadi, Sanjay Kumar Shukla

Abstract:

Granular piles are a popular ground improvement technique for soft cohesive soils as well as for loose non-cohesive soils. The present experimental study was carried out on granular piles in loose (relative density = 30%) and medium dense (relative density = 60%) sands, with geogrid reinforcement within the sand bed over the granular piles. A group of five piles was installed in the sand at different spacings, s = 2d, 3d, and 4d, d being the diameter of the pile. The length (L = 0.4 m) and diameter (d = 50 mm) of the piles were kept constant for all series of experiments. The load-settlement behavior of the reinforced sand bed and granular pile system was studied by applying the load through a square footing. The results show that the reinforcement increases the load-bearing capacity of the piles. It is also found that an increase in the spacing between piles decreases the settlement for both loose and medium dense soil.

Keywords: granular pile, load-carrying capacity, settlement, geogrid reinforcement, sand

Procedia PDF Downloads 356
614 Deep Reinforcement Learning for Optimal Decision-Making in Supply Chains

Authors: Nitin Singh, Meng Ling, Talha Ahmed, Tianxia Zhao, Reinier van de Pol

Abstract:

We propose the use of reinforcement learning (RL) as a viable alternative for optimizing supply chain management, particularly in scenarios with stochastic product demands. RL's adaptability to changing conditions and its demonstrated success in diverse fields of sequential decision-making make it a promising candidate for addressing supply chain problems. We investigate the impact of demand fluctuations in a multi-product supply chain system and develop RL agents with learned, generalizable policies. We provide experimentation details for training the RL agents and a statistical analysis of the results. We study the generalization ability of the RL agents under different demand uncertainty scenarios and observe superior performance compared to agents trained with fixed demand curves. The proposed methodology has the potential to reduce costs and increase profit for companies dealing with frequent inventory movement between supply and demand nodes.
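To ground the problem setting, the sketch below simulates a single-node inventory system with stochastic demand and evaluates a simple base-stock (order-up-to) policy. An RL agent in this setting would learn such an ordering policy; the cost parameters and demand model here are assumptions made for the sketch, not values from the paper.

```python
import random

# Toy inventory model: uniform random demand, lost-sales penalty, and a
# holding cost, under an order-up-to (base-stock) replenishment policy.
def simulate(base_stock, periods=1000, hold_cost=1.0, stockout_cost=5.0, seed=42):
    rng = random.Random(seed)  # fixed seed: common random numbers across policies
    inventory, total_cost = base_stock, 0.0
    for _ in range(periods):
        demand = rng.randint(0, 10)                    # fluctuating demand
        sold = min(inventory, demand)
        total_cost += stockout_cost * (demand - sold)  # penalty for unmet demand
        inventory -= sold
        total_cost += hold_cost * inventory            # cost of carrying stock
        inventory = base_stock                         # order up to base stock
    return total_cost / periods

# Sweep candidate base-stock levels to find the cheapest under this demand model.
best = min(range(0, 11), key=simulate)
```

Because stockouts are costlier than holding inventory, the cheapest base-stock level sits well above the mean demand, which is the kind of trade-off an RL agent must discover when demand is uncertain.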

Keywords: inventory management, reinforcement learning, supply chain optimization, uncertainty

Procedia PDF Downloads 74
613 Numerical Analysis of Shallow Footing Rested on Geogrid Reinforced Sandy Soil

Authors: Seyed Abolhasan Naeini, Javad Shamsi Soosahab

Abstract:

The use of geosynthetic reinforcement within footing soils is an effective and useful way to avoid the construction of costly deep foundations. This study investigated the use of geosynthetics for soil improvement through numerical modeling with the FELA software. The pressure-settlement behavior and bearing capacity ratio of a foundation on geogrid-reinforced sand are investigated, and the effects of parameters such as the number of geogrid layers and the vertical distance between them are studied for soils of three different relative densities. The geometrical parameters of the reinforcement layers were varied to determine the optimal values that maximize the bearing capacity. The results indicated that the optimum distance ratio between the reinforcement layers lies between 0.5 and 0.6, and that beyond four geogrid layers, additional layers have no significant effect on the bearing capacity of a footing on geogrid-reinforced sand.

Keywords: geogrid, reinforced sand, FELA software, distance ratio, number of geogrid layers

Procedia PDF Downloads 117
612 Using Fly Ash as a Reinforcement to Increase Wear Resistance of Pure Magnesium

Authors: E. Karakulak, R. Yamanoğlu, M. Zeren

Abstract:

In the current study, fly ash obtained from a thermal power plant was used as a reinforcement in pure magnesium. Composite materials with different fly ash contents were produced by powder metallurgical methods. The powder mixtures were sintered at 540 °C under 30 MPa pressure for 15 minutes in a vacuum-assisted hot press. The results showed that increasing the ash content continuously increases the hardness of the composite. On the other hand, minimum wear damage was obtained at 2 wt.% ash content. Adding higher levels of fly ash results in the formation of cracks in the matrix and increases the wear damage of the material.

Keywords: Mg composite, fly ash, wear, powder metallurgy

Procedia PDF Downloads 329
611 Modern Scotland Yard: Improving Surveillance Policies Using Adversarial Agent-Based Modelling and Reinforcement Learning

Authors: Olaf Visker, Arnout De Vries, Lambert Schomaker

Abstract:

Predictive policing refers to the use of analytical techniques to identify potential criminal activity and has been widely implemented by various police departments. As a relatively new area of research, there are, to the authors' knowledge, no tried-and-true methods, and existing approaches still exhibit a variety of potential problems. One of those problems is the lack of understanding of how acting on these predictions influences crime itself. The goal of law enforcement is ultimately crime reduction, so a policy needs to be established that best facilitates this goal. This research aims to find such a policy by using adversarial agent-based modeling in combination with modern reinforcement learning techniques. We present baseline models for both law enforcement and criminal agents and compare their performance to their respective reinforcement learning models. The experiments show that our smart law enforcement model is capable of reducing crime by making more deliberate choices regarding the locations of potential criminal activity. Furthermore, the smart criminal model exhibits behavior consistent with popular crime theories and outperforms the baseline model in terms of crimes committed and time to capture. It does, however, still suffer from the difficulties of capturing long-term rewards and learning how to handle multiple opposing goals.

Keywords: adversarial, agent based modelling, predictive policing, reinforcement learning

Procedia PDF Downloads 119
610 Corrosion Resistance Evaluation of Reinforcing Bars: A Comparative Study of Fusion Bonded Epoxy Coated, Cement Polymer Composite Coated and Dual Zinc Epoxy Coated Rebar for Application in Reinforced Concrete Structures

Authors: Harshit Agrawal, Salman Muhammad

Abstract:

Degradation of reinforced concrete (RC), primarily due to corrosion of the embedded reinforcement, has been a major cause of concern worldwide. Among the several ways to control corrosion, the use of coated reinforcement has gained significant interest in field applications. However, the choice of a proper coating material and the effect of damage to the coating are yet to be addressed for the effective application of coated reinforcements. The present study aims to investigate and compare the performance of three different types of coated reinforcement in concrete structures: Fusion-Bonded Epoxy Coating (FBEC), Cement Polymer Composite Coating (CPCC), and Dual Zinc-Epoxy Coating (DZEC). The aim is to assess their corrosion resistance, durability, and overall effectiveness in both undamaged and simulated damaged conditions. Through accelerated corrosion tests, electrochemical analysis, and exposure to aggressive marine environments, the study evaluates the long-term performance of each coating system. This research serves as a guide for engineers and construction professionals in selecting the most suitable corrosion protection for reinforced concrete, thereby enhancing the durability and sustainability of infrastructure.

Keywords: corrosion, reinforced concrete, coated reinforcement, seawater exposure, electrochemical analysis, service life, corrosion prevention

Procedia PDF Downloads 33
609 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization

Authors: Yihao Kuang, Bowen Ding

Abstract:

With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. In recent years, it has therefore become a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm uses a proximal policy optimization approach: it allows extensive updates of the policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved strategies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO can reuse collected experience over multiple update epochs, enhancing sample utilization. This means that even with limited resources, PPO can train efficiently on reinforcement learning tasks. Based on these characteristics, this paper aims to obtain a better and more efficient inference effect by introducing PPO into knowledge inference technology.
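The "constrained update extent" the abstract refers to is PPO's clipped surrogate objective. The sketch below writes that per-sample objective with plain Python floats; the clip range and the example ratios and advantages are illustrative assumptions, not values from the paper.

```python
# Sketch of PPO's clipped surrogate objective (the proximal constraint
# described above). The policy and advantage values are illustrative.
def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Per-sample clipped objective: min(r*A, clip(r, 1-eps, 1+eps)*A).

    ratio: pi_new(a|s) / pi_old(a|s) for the sampled action.
    advantage: estimated advantage of that action.
    Returns the negated objective, i.e. a loss to minimize.
    """
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return -min(ratio * advantage, clipped * advantage)

# A large ratio with a positive advantage is clipped, capping the update:
assert abs(ppo_clip_loss(1.5, 1.0) - (-1.2)) < 1e-9  # clipped at 1 + eps
# Within the trust region, the raw objective is used unchanged:
assert abs(ppo_clip_loss(1.1, 1.0) - (-1.1)) < 1e-9
```

The clip prevents any single batch of experience from pushing the policy far from the one that collected it, which is what makes the repeated update epochs over one batch stable.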

Keywords: reinforcement learning, PPO, knowledge inference

Procedia PDF Downloads 193
608 ROOP: Translating Sequential Code Fragments to Distributed Code Fragments Using Deep Reinforcement Learning

Authors: Arun Sanjel, Greg Speegle

Abstract:

Every second, massive amounts of data are generated, and Data-Intensive Scalable Computing (DISC) frameworks have evolved into effective tools for analyzing such data. Since the underlying architecture of these distributed computing platforms is often new to users, building a DISC application can be time-consuming and prone to errors. The automated conversion of a sequential program to a DISC program would therefore significantly improve productivity. However, synthesizing a user's intended program from an input specification is a complex problem with several important applications, such as distributed program synthesis and code refactoring. Existing works such as Tyro and Casper rely entirely on deductive synthesis techniques or similar program synthesis approaches. Our approach is to develop a data-driven synthesis technique that identifies sequential components and translates them to equivalent distributed operations. We emphasize the use of reinforcement learning and unit testing as feedback mechanisms to achieve our objectives.
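To illustrate the kind of rewrite such a synthesizer targets, the sketch below pairs a sequential accumulation loop with a distributed-style equivalent expressed as map and reduce operations (the form DISC frameworks parallelize), with a unit test acting as the feedback signal. This is a hypothetical example, not code from the ROOP system.

```python
from functools import reduce

# Sequential fragment: an accumulation loop over a list.
def sequential_sum_of_squares(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

# Distributed-style fragment: the same computation as map + reduce,
# the shape a DISC framework (e.g. Spark) can execute in parallel.
def disc_style_sum_of_squares(xs):
    return reduce(lambda a, b: a + b, map(lambda x: x * x, xs), 0)

# Unit test as feedback: the translation is accepted only if both agree.
assert sequential_sum_of_squares([1, 2, 3]) == disc_style_sum_of_squares([1, 2, 3]) == 14
```

The agreement check is exactly the role unit testing plays as a reward signal: a candidate translation that fails the test is rejected.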

Keywords: program synthesis, distributed computing, reinforcement learning, unit testing, DISC

Procedia PDF Downloads 68
607 Mutiple Medical Landmark Detection on X-Ray Scan Using Reinforcement Learning

Authors: Vijaya Yuvaram Singh V M, Kameshwar Rao J V

Abstract:

The challenge in developing neural-network-based methods for the medical domain is the limited availability of data. Anatomical landmark detection is the process of finding points of interest on a patient's X-ray scan. Most of the time, this task is done manually by trained professionals, as it requires precision and domain knowledge. Traditionally, object detection methods are used for landmark detection. Here, we utilize reinforcement learning and a query-based method to train a single agent capable of detecting multiple landmarks. A deep Q-network agent is trained to detect single and multiple landmarks on the hip and shoulder from X-ray scans of a patient. Training a single agent to find multiple landmarks makes it superior to having an individual agent per landmark. For this initial study, five images of different patients were used as the environment, and the agent's performance was tested on two unseen images.

Keywords: reinforcement learning, medical landmark detection, multi target detection, deep neural network

Procedia PDF Downloads 115
606 Research on Knowledge Graph Inference Technology Based on Proximal Policy Optimization

Authors: Yihao Kuang, Bowen Ding

Abstract:

With the increasing scale and complexity of knowledge graphs, modern knowledge graphs contain more and more types of entity, relationship, and attribute information. In recent years, it has therefore become a trend for knowledge graph inference to use reinforcement learning to deal with large-scale, incomplete, and noisy knowledge graphs and to improve the inference effect and interpretability. The Proximal Policy Optimization (PPO) algorithm uses a proximal policy optimization approach: it allows extensive updates of the policy parameters while constraining the update extent to maintain training stability. This characteristic enables PPO to converge to improved strategies more rapidly, often demonstrating enhanced performance early in the training process. Furthermore, PPO can reuse collected experience over multiple update epochs, enhancing sample utilization. This means that even with limited resources, PPO can train efficiently on reinforcement learning tasks. Based on these characteristics, this paper aims to obtain a better and more efficient inference effect by introducing PPO into knowledge inference technology.

Keywords: reinforcement learning, PPO, knowledge inference, supervised learning

Procedia PDF Downloads 31