Search results for: Network Time Protocol
17757 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and is projected to cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, poor air quality causes 60,000 premature deaths. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework runs multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California, using datasets from the RP4 weather data repository and EPA pollutant measurements. The results showed good agreement between the framework's predictions and real-life observations, with an overall model accuracy of 92%. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
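The ensemble step described in the abstract above, averaging the outputs of the three top-performing models, can be sketched in a few lines. The three probability vectors and the class labels below are invented for illustration; this is not the authors' exact implementation.

```python
def ensemble_average(prob_sets):
    """Average class-probability vectors from several models.

    prob_sets: list of per-model probability vectors over the same classes.
    Returns the averaged vector and the index of the predicted class.
    """
    n_models = len(prob_sets)
    n_classes = len(prob_sets[0])
    avg = [sum(p[c] for p in prob_sets) / n_models for c in range(n_classes)]
    return avg, max(range(n_classes), key=avg.__getitem__)

# Hypothetical outputs of the three top models (e.g. logistic regression,
# random forest, neural network) for classes: good / moderate / unhealthy.
probs = [
    [0.70, 0.20, 0.10],   # logistic regression
    [0.60, 0.30, 0.10],   # random forest
    [0.50, 0.40, 0.10],   # neural network
]
avg, pred = ensemble_average(probs)
print([round(v, 2) for v in avg], pred)  # → [0.6, 0.3, 0.1] 0
```

Averaging probabilities rather than hard labels lets a confident minority model influence the result, which is one common way such an ensemble dampens the biases of any single learner.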
Procedia PDF Downloads 135

17756 Optimal Allocation of Multiple Emergency Resources for a Single Potential Accident Node: A Mixed Integer Linear Program
Authors: Yongjian Du, Jinhua Sun, Kim M. Liew, Huahua Xiao
Abstract:
Optimal allocation of emergency resources before a disaster is of great importance for emergency response. In practice, pre-protection of a single critical node where accidents may occur is common. In this study, a model is developed to determine the location and inventory decisions for multiple emergency resources among a set of candidate stations so as to minimize total cost, subject to budget and capacity constraints. The total cost includes the economic accident loss, which follows a probability distribution over time, and the warehousing cost of resources, which increases over time. A ratio is introduced to measure the degree to which a storage station serves only the target node; it grows as the distance between them decreases. To keep the formulation linear, it is assumed that the travel time of emergency resources to the accident scene has a linear relationship with the economic accident loss. A computational experiment is conducted to illustrate how the proposed model works, and the results indicate its effectiveness and practicability.
Keywords: emergency response, integer linear program, multiple emergency resources, pre-allocation decisions, single potential accident node
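As a toy illustration of the location/inventory decision described above (not the paper's MILP formulation), the sketch below enumerates stock levels at candidate stations under budget and capacity constraints, minimizing warehousing cost plus an accident loss assumed linear in travel time. All station names and numbers are invented.

```python
from itertools import product

# Hypothetical candidate stations serving a single potential accident node.
stations = {
    "A": {"travel_min": 10, "capacity": 3, "unit_cost": 2.0},
    "B": {"travel_min": 25, "capacity": 5, "unit_cost": 1.0},
    "C": {"travel_min": 40, "capacity": 8, "unit_cost": 0.5},
}
demand = 4                # units required at the accident node
budget = 8.0              # maximum warehousing spend
loss_per_minute = 0.3     # assumed linear accident-loss coefficient

def total_cost(alloc):
    """Warehousing cost plus loss driven by the slowest station actually used."""
    store = sum(q * stations[s]["unit_cost"] for s, q in alloc.items())
    worst = max((stations[s]["travel_min"] for s, q in alloc.items() if q > 0),
                default=0)
    return store + loss_per_minute * worst

names = list(stations)
best = None
for qty in product(*(range(stations[n]["capacity"] + 1) for n in names)):
    alloc = dict(zip(names, qty))
    if sum(qty) < demand:                                   # cover the demand
        continue
    if sum(q * stations[s]["unit_cost"] for s, q in alloc.items()) > budget:
        continue                                            # budget constraint
    cost = total_cost(alloc)
    if best is None or cost < best[0]:
        best = (cost, alloc)

print(best)
```

A real instance would hand the same objective and constraints to a MILP solver instead of brute-force enumeration, but the structure of the decision, where to place how much stock, is the same.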
Procedia PDF Downloads 157

17755 Deep Q-Network for Navigation in Gazebo Simulator
Authors: Xabier Olaz Moratinos
Abstract:
Drone navigation is critical, particularly during the initial phases such as the initial ascent, where pilots may fail due to strong external interferences that could potentially lead to a crash. In this ongoing work, a drone has been successfully trained to perform an ascent of up to 6 meters under external disturbances of up to 24 mph, with the DQN algorithm managing the external forces affecting the system. It has been demonstrated that the system can control its height, position, and stability in all three axes (roll, pitch, and yaw) throughout the process. The learning process is carried out in the Gazebo simulator, which emulates the interferences, while ROS is used to communicate with the agent.
Keywords: machine learning, DQN, Gazebo, navigation
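The core of the DQN approach mentioned above is the Bellman update that moves the value of a state-action pair toward reward plus discounted best future value. A minimal tabular stand-in (not the authors' neural network, and with invented states and constants) looks like this:

```python
import random

# Tabular stand-in for the DQN value function: Q[state][action].
# States discretize drone height (0..6 m); actions: 0 = thrust down,
# 1 = hold, 2 = thrust up. All constants are illustrative assumptions.
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def dqn_style_update(Q, s, a, reward, s_next):
    """One Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')."""
    target = reward + GAMMA * max(Q[s_next])
    Q[s][a] += ALPHA * (target - Q[s][a])

def choose_action(Q, s):
    """Epsilon-greedy exploration, as commonly used during DQN training."""
    if random.random() < EPSILON:
        return random.randrange(len(Q[s]))
    return max(range(len(Q[s])), key=Q[s].__getitem__)

Q = [[0.0] * 3 for _ in range(7)]        # heights 0..6 m, 3 actions each
dqn_style_update(Q, s=2, a=2, reward=1.0, s_next=3)
print(Q[2][2])  # 0.1 after a single update from a zero-initialized table
```

In the actual DQN the table is replaced by a neural network trained on the same target, which is what lets the agent generalize across continuous disturbance magnitudes.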
Procedia PDF Downloads 83

17754 Evaluation of Toxicity of Cerium Oxide on Zebrafish Developmental Stages
Authors: Roberta Pecoraro, Elena Maria Scalisi
Abstract:
Engineered nanoparticles (ENPs) and nanomaterials (ENMs) are an active research area and a sector in full expansion. Their physical-chemical characteristics and small size improve their performance compared to common materials. Because of the increase in their production and subsequent release into the environment, new strategies are emerging to assess the risk of nanomaterials. NPs can be released into the environment through aquatic systems by human activities and exert toxicity on living organisms. We evaluated the potential toxic effect of cerium oxide (CeO2) nanoparticles, which are used in different fields due to their peculiar properties. To assess nanoparticle toxicity, the Fish Embryo Toxicity (FET) test was performed. Powders of CeO2 NPs supplied by the CNR-IMM of Catania are indicated as CeO2 type 1 (as-prepared) and CeO2 type 2 (modified), while CeO2 type 3 (commercial) was supplied by Sigma-Aldrich. Starting from a stock solution (0.001 g/10 ml dilution water) of each type of CeO2 NPs, further solutions were obtained by adding 1 ml of the stock solution to 9 ml of dilution water, yielding three concentrations (10⁻⁴, 10⁻⁵, 10⁻⁶ g/ml). All the solutions were sonicated to counter the natural tendency of NPs to aggregate and sediment. The FET test was performed according to the OECD guidelines for testing chemicals, following our internal protocol. Eight selected fertilized eggs were placed in each beaker filled with 5 ml of each concentration of the three types of CeO2 NPs; control samples were incubated only with dilution water. A replicate was run for each concentration. During the exposure period, we checked four endpoints (embryo coagulation, lack of formation of somites, failure to lift the yolk sac, no heartbeat) under a stereomicroscope every 24 hours.
Immunohistochemical analysis of treated larvae was performed to evaluate the expression of metallothioneins (MTs), heat shock protein 70 (HSP70), and 7-ethoxyresorufin-O-deethylase (EROD). Our results showed no evident alterations of embryonic development: all embryos completed development, and hatching, which started around 48 hours after exposure, was complete by the final observation at 72 hours. Good reactivity was found both in the embryos and in the newly hatched larvae. A heartbeat was also observed in embryos with reduced mobility, confirming their viability. Higher expression of the EROD biomarker was observed in larvae exposed to the three types of CeO2, showing a clear difference from the control. Weak positivity was found for the MT biomarker in treated larvae as well as in the control. HSP70 was expressed homogeneously for all the types of nanoparticles tested, but not much above the control. Our results agree with other studies in the literature, in which exposure of Danio rerio larvae to other metal oxide nanoparticles showed no adverse effects on survival or hatching time. Further studies are necessary to clarify the role of these NPs and to resolve conflicting findings.
Keywords: Danio rerio, endpoints, fish embryo toxicity test, metallic nanoparticles
Procedia PDF Downloads 138

17753 Dynamic Network Approach to Air Traffic Management
Authors: Catia S. A. Sima, K. Bousson
Abstract:
Congestion in the Terminal Maneuvering Areas (TMAs) of larger airports impacts all aspects of air traffic flow, not only at the national level; it may also induce arrival delays at the international level. Hence, there is a need to monitor air traffic flow in TMAs appropriately so that efficient decisions can be taken to manage their occupancy rates. It would be desirable to physically enlarge the existing airspace to accommodate all demand, but this is unrealistic, and so several studies and analyses have been developed over the past decades to meet the challenges arising from the rapid expansion of the aeronautical industry. The main objective of the present paper is to propose concepts to manage and reduce the degree of uncertainty in air traffic operations, maximizing the interest of all involved, ensuring a balance between demand and supply, and developing and/or adapting resources that enable a rapid and effective adaptation of measures to the current context and the consequent changes perceived in the aeronautical industry. A central task is to increase air traffic flow management capacity, taking into account not only a wide range of methodologies but also equipment and/or tools already available in the aeronautical industry. The efficient use of these resources is crucial because human capacity for work is limited and the actors involved in all processes related to air traffic flow management are increasingly overloaded; as a result, operational safety could be compromised. The methodology used to address these issues builds on the advantages of Markov chain principles, which enable the construction of a simplified dynamic network model that describes air traffic flow behavior, anticipating its changes and possible measures that could better address the impact of increased demand.
Through this model, the proposed concepts are shown to have the potential to optimize air traffic flow management combined with the operation of the existing resources at each moment and the circumstances found in each TMA, using historical data from air traffic operations and specificities of the aeronautical industry, namely in the Portuguese context.
Keywords: air traffic flow, terminal maneuvering area, TMA, air traffic management, ATM, Markov chains
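The Markov-chain view of TMA occupancy described above can be illustrated in a few lines: a transition matrix is estimated from historical data, and the state distribution is propagated forward. The three load states and transition probabilities below are invented for illustration, not drawn from the paper's data.

```python
# Toy Markov-chain model of TMA occupancy (states: low / medium / high load).
# In practice P would be estimated from historical air traffic records.
P = [
    [0.70, 0.25, 0.05],
    [0.20, 0.60, 0.20],
    [0.05, 0.35, 0.60],
]

def step(dist, P):
    """One transition: new_j = sum_i dist_i * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]        # start in the low-load state
for _ in range(50):           # iterate toward the stationary distribution
    dist = step(dist, P)
print([round(x, 3) for x in dist])
```

The long-run (stationary) distribution that the loop converges to is what would tell a planner the fraction of time the TMA is expected to spend in each load state.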
Procedia PDF Downloads 135

17752 One-Step Time Series Predictions with Recurrent Neural Networks
Authors: Vaidehi Iyer, Konstantin Borozdin
Abstract:
Time series prediction problems have many important practical applications but are notoriously difficult for statistical modeling. Recently, machine learning methods have attracted significant interest as practical tools applied to a variety of problems, even though developments in this field tend to be semi-empirical. This paper explores the application of Long Short-Term Memory (LSTM) based Recurrent Neural Networks to the one-step prediction of time series for both trend and stochastic components. Two types of data are analyzed: daily stock prices, often considered a typical example of a random walk, and weather patterns dominated by seasonal variations. Results from both analyses are compared, and a reinforcement learning framework is used to select the more efficient of Recurrent Neural Networks and more traditional autoregression methods. It is shown that both methods are able to follow long-term trends and seasonal variations closely but have difficulties reproducing day-to-day variability. Future research directions and potential real-world applications are briefly discussed.
Keywords: long short-term memory, prediction methods, recurrent neural networks, reinforcement learning
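For context on the autoregression baseline the paper compares against, a minimal one-step-ahead AR(1) predictor can be sketched as below. The toy series and the choice of AR(1) (rather than the paper's exact autoregression order) are illustrative assumptions.

```python
# Minimal one-step autoregressive baseline: fit x[t] - m = phi * (x[t-1] - m)
# by least squares, then predict one step ahead from the last observation.
def ar1_fit(x):
    m = sum(x) / len(x)
    d = [v - m for v in x]
    num = sum(d[t - 1] * d[t] for t in range(1, len(d)))
    den = sum(d[t - 1] ** 2 for t in range(1, len(d)))
    return m, num / den

def ar1_predict(x, m, phi):
    """One-step-ahead prediction from the last observation."""
    return m + phi * (x[-1] - m)

series = [10, 11, 10.5, 11.5, 11, 12, 11.5, 12.5]   # invented toy data
m, phi = ar1_fit(series)
print(round(ar1_predict(series, m, phi), 3))
```

An LSTM plays the same role as this predictor but learns a nonlinear function of a longer history; the comparison in the paper asks when that extra capacity actually helps.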
Procedia PDF Downloads 235

17751 Earthquake Forecasting Procedure Due to Diurnal Stress Transfer by the Core to the Crust
Authors: Hassan Gholibeigian, Kazem Gholibeigian
Abstract:
In this paper, our goal is the determination of loading versus time in the crust. To this end, we present a computational procedure that yields a cumulative strain energy time profile, which can be used to predict the approximate location and time of the next major earthquake (M > 4.5) along a specific fault and which, we believe, is more accurate than many of the methods presently in use. After a short review of current research in earthquake analysis and prediction, earthquake mechanisms in both the jerk and sequence earthquake directions are discussed. Our computational procedure is then presented using the differential equations of equilibrium that govern the nonlinear dynamic response of a system of finite elements, modified with an extra term to account for the jerk produced during the quake. We employ the model developed by von Mises for the stress-strain relationship in our calculations, modified with an extra term to account for thermal effects. For the calculation of the strain energy, the Pulsating Mantle Hypothesis (PMH) is used. This hypothesis states, in brief, that the mantle is under diurnal cyclic pulsating loads due to the unbalanced gravitational attraction of the sun and the moon. The Denali fault is briefly discussed as a case study, and the cumulative strain energy is represented graphically versus time. Finally, the results are verified against some hypothetical earthquake data.
Keywords: pulsating mantle hypothesis, inner core's dislocation, outer core's bulge, constitutive model, transient hydro-magneto-thermo-mechanical load, diurnal stress, jerk, fault behaviour
Procedia PDF Downloads 279

17750 The Effect of Elapsed Time on the Cardiac Troponin-T Degradation and Its Utility as a Time Since Death Marker in Cases of Death Due to Burn
Authors: Sachil Kumar, Anoop K.Verma, Uma Shankar Singh
Abstract:
Studying the postmortem interval (PMI) in different causes of death is extremely important, since it often contributes greatly to establishing the exact cause of death. With reliable knowledge of the interval, an expert can state that the cause of death was not feigned; hence such deaths should be evaluated at the crime scene before an autopsy is performed on the body. The approach described here analyzes the degradation (proteolysis) of a cardiac protein in deaths due to burns as a marker of time since death. Cardiac tissue samples were collected from six medico-legal autopsies at the Department of Forensic Medicine and Toxicology, King George's Medical University, Lucknow, India, after informed consent from the relatives. Postmortem degradation was studied by incubating the cardiac tissue at room temperature (20 ± 2 °C) for different time periods (~7.30, 18.20, 30.30, 41.20, 41.40, 54.30, 65.20, and 88.40 hours). The cases included were subjects of burns without any prior history of disease who died in hospital and whose exact time of death was known. The analysis involved extraction of the protein, separation by denaturing gel electrophoresis (SDS-PAGE), and visualization by Western blot using cTnT-specific monoclonal antibodies. The area of the bands within a lane was quantified by scanning and digitizing the image using a Gel Doc system. As postmortem time progresses, the intact cTnT band degrades into fragments that are easily detected by the monoclonal antibodies. A decreasing trend in the level of cTnT (% of intact) was found as the postmortem hours increased. A significant difference was observed between <15 h and the other postmortem intervals (p < 0.01). A significant difference in cTnT level (% of intact) was also observed between 16-25 h and both 56-65 h and >75 h (p < 0.01).
Western blot data clearly showed the intact protein at 42 kDa, three major fragments (28 kDa, 30 kDa, 10 kDa), three additional minor fragments (12 kDa, 14 kDa, and 15 kDa), and the formation of low-molecular-weight fragments. Overall, both the PMI and the burn condition of the cardiac tissue had a statistically significant effect; the greatest amount of protein breakdown was observed within the first 41.40 hours, after which the intact protein slowly disappears. If the percent intact cTnT is calculated from the total area integrated within a Western blot lane, it shows a pseudo-first-order relationship when plotted against time postmortem. A strong positive correlation was found between cTnT and postmortem hours (r = 0.87, p = 0.0001), and the regression explained a good share of the variability (R² = 0.768). The postmortem troponin-T fragmentation observed in this study reveals a sequential, time-dependent process with the potential for use as a predictor of PMI in cases of burning.
Keywords: burn, degradation, postmortem interval, troponin-T
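The pseudo-first-order relationship reported above suggests a simple estimator: regress ln(% intact cTnT) on postmortem hours and invert the fit. The sketch below uses invented data points consistent with such a decay, not the study's actual measurements.

```python
import math

# Invented (PM hours, % intact cTnT) pairs for a pseudo-first-order decay.
data = [(7.5, 85.0), (18.2, 62.0), (30.5, 45.0), (41.4, 33.0), (54.5, 24.0)]

def fit_decay(data):
    """Least-squares fit of ln(pct) = a - k * t."""
    n = len(data)
    t = [d[0] for d in data]
    y = [math.log(d[1]) for d in data]
    tm, ym = sum(t) / n, sum(y) / n
    k = -sum((ti - tm) * (yi - ym) for ti, yi in zip(t, y)) / \
        sum((ti - tm) ** 2 for ti in t)
    a = ym + k * tm
    return a, k

def estimate_pmi(pct_intact, a, k):
    """Invert the fit: t = (a - ln(pct)) / k."""
    return (a - math.log(pct_intact)) / k

a, k = fit_decay(data)
print(round(estimate_pmi(45.0, a, k), 1))   # hours for a sample at 45% intact
```

In casework the fitted constants would of course come from calibrated reference data for the relevant temperature and cause of death, not from a five-point toy set.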
Procedia PDF Downloads 452

17749 Structural Health Monitoring and Damage Structural Identification Using Dynamic Response
Authors: Reza Behboodian
Abstract:
Monitoring structural health and diagnosing damage in its early stages has always been a topic of concern, and research on structural damage detection methods based on vibration analysis is now very extensive. Such methods can serve as permanent, timely inspection of structures and prevent further damage. Non-destructive methods are low-cost, economical means of assessing structural damage. In this research, a non-destructive method is proposed for detecting and identifying the failure location in structures based on the dynamic responses obtained from time history analysis. When a structure is damaged, its stiffness is reduced, and under the applied loads the displacements in different parts of the structure increase. In the proposed method, the damage position is determined from the difference in strain energy, computed at each time step, between each member of the damaged structure and the corresponding member of the healthy structure. Defective members are indicated by their strain energy relative to the healthy state. The results indicated the good accuracy and performance of the proposed method for identifying failure in structures.
Keywords: failure, time history analysis, dynamic response, strain energy
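The member-by-member strain-energy comparison described above can be sketched as follows: compute U = ½·k·d² for each member in the healthy and damaged states and flag the member with the largest increase. The stiffnesses and displacements below are invented, and a single static snapshot stands in for the time-history comparison.

```python
# Sketch of strain-energy-based damage localization (invented numbers).
def strain_energy(k, d):
    """U = 0.5 * k * d**2 for a member with stiffness k and displacement d."""
    return 0.5 * k * d ** 2

# Per-member (stiffness, displacement) pairs; member 1 has lost stiffness,
# so its displacement grows under the same load.
healthy = [(200.0, 0.010), (180.0, 0.012), (220.0, 0.0090)]
damaged = [(200.0, 0.011), (120.0, 0.021), (220.0, 0.0095)]

deltas = [strain_energy(*dmg) - strain_energy(*h)
          for h, dmg in zip(healthy, damaged)]
suspect = max(range(len(deltas)), key=deltas.__getitem__)
print(suspect, [round(d, 5) for d in deltas])
```

In the actual method the same difference would be evaluated at every time step of the dynamic response, so a member that is only briefly overstressed can still be detected.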
Procedia PDF Downloads 139

17748 Influence of the Compression Force and Powder Particle Size on Some Physical Properties of Date (Phoenix dactylifera) Tablets
Authors: Djemaa Megdoud, Messaoud Boudaa, Fatima Ouamrane, Salem Benamara
Abstract:
In recent years, the compression of date (Phoenix dactylifera L.) fruit powders (DP) into date tablets (DT) has been suggested as a promising way to valorize non-commercial but valuable date fruit (DF) varieties. To further improve and characterize DT, the present study investigates the influence of DP particle size and compression force on some physical properties of DT. The results show that, independently of particle size, the hardness y of the tablets increases with the compression force x following a logarithmic law, y = a ln(bx), where a and b are model constants. Further, a two-level full factorial design (FFD) applied to the erosion percentage reveals that the effects of time and particle size are equal in absolute value, and both exceed the effect of compression force. Regarding disintegration time, results obtained by means of an FFD show that the effect of compression force is more than four times that of DP particle size. Finally, the CIELab color parameters of the DT immediately after production are influenced differently by the particle size of the initial powder.
Keywords: powder, tablets, date (Phoenix dactylifera L.), hardness, erosion, disintegration time, color
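The hardness law y = a ln(bx) reported above is linear in ln(x): y = a·ln(x) + a·ln(b), so a and b can be recovered by ordinary linear regression of y on ln(x). The sketch below checks this on noise-free synthetic data; the force values and true constants are invented.

```python
import math

# Fit y = a*ln(b*x) via linear regression of y on ln(x):
# slope = a, intercept = a*ln(b), hence b = exp(intercept / a).
def fit_log_law(forces, hardness):
    u = [math.log(f) for f in forces]
    n = len(u)
    um, ym = sum(u) / n, sum(hardness) / n
    a = sum((ui - um) * (yi - ym) for ui, yi in zip(u, hardness)) / \
        sum((ui - um) ** 2 for ui in u)
    intercept = ym - a * um
    b = math.exp(intercept / a)
    return a, b

forces = [5, 10, 20, 40]                  # hypothetical compression forces
a_true, b_true = 12.0, 0.8
hardness = [a_true * math.log(b_true * f) for f in forces]  # noise-free data
a, b = fit_log_law(forces, hardness)
print(round(a, 3), round(b, 3))   # recovers 12.0 and 0.8 on noise-free data
```

With real measurements the recovered constants would carry noise, but the linearization is the standard way to fit this two-parameter law.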
Procedia PDF Downloads 437

17747 The Lean Manufacturing Practices in an Automotive Company Using Value Stream Mapping Technique
Authors: Seher Arslankaya, Merve Si̇mge Usuk
Abstract:
Lean manufacturing, which is based on the Toyota Production System, focuses on increasing performance in various fields by eliminating waste. By eliminating waste, lead time is reduced significantly, giving companies an important advantage under today's competitive conditions. The starting point of lean thinking is value: the creation of a specific product, with specific properties, for which the customer is ready to pay and which satisfies the customer's needs within a specific time frame and at a specific price. The final customer determines the value, but the manufacturer creates it. The value stream is the whole set of activities required for each product; these activities may or may not be essential to the value. Through value stream mapping, all employees can see the sources of waste and develop future states to eliminate them. This study focused on eliminating manufacturing waste that creates cost without creating value. The study was carried out at the Department of Assembly/Logistics at Toyota Motor Manufacturing Turkey, an automotive plant with a high product mix and variable demand. Based on the value stream analysis, improvements were planned for the future state, and the process was improved by applying these suggestions.
Keywords: lead time, lean manufacturing, performance improvement, value stream mapping
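Value stream mapping quantifies waste by comparing value-adding time to total lead time across the mapped steps. A minimal sketch of that calculation, with invented process data rather than figures from the study, is:

```python
# Hypothetical process steps: (name, cycle time in minutes, value-adding?)
steps = [
    ("unload parts",   4.0, False),
    ("assembly",      12.0, True),
    ("wait in buffer", 35.0, False),
    ("inspection",     3.0, True),
    ("transport",      6.0, False),
]

lead_time = sum(t for _, t, _ in steps)
value_added = sum(t for _, t, va in steps if va)
ratio = value_added / lead_time
print(f"lead time {lead_time} min, value-added {value_added} min, "
      f"ratio {ratio:.1%}")
```

A low value-added ratio, here dominated by buffer wait time, is exactly the kind of signal the future-state map targets for elimination.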
Procedia PDF Downloads 316

17746 A Standard Operating Procedure (SOP) for Forensic Soil Analysis: Tested Using a Simulated Crime Scene
Authors: Samara A. Testoni, Vander F. Melo, Lorna A. Dawson, Fabio A. S. Salvador
Abstract:
Soil traces are useful as forensic evidence because of their potential to transfer to, and adhere to, many types of surfaces on a range of objects or persons. The great variability in soil physical, chemical, biological, and mineralogical properties makes soil traces complex mixtures. Soils are continuous and variable, and no two soil samples are identical; this complexity can provide powerful evidence for comparative forensic purposes. This work aimed to establish a Standard Operating Procedure (SOP) for forensic soil analysis in Brazil. We carried out a simulated crime scene with double-blind sampling to calibrate the sampling procedures. Samples were collected at a range of locations covering a range of soil types found in the south of Brazil: Santa Candida and Boa Vista, neighbourhoods of Curitiba (State of Parana), and Guarani and Guaraituba, neighbourhoods of Colombo (Curitiba Metropolitan Region). A previously validated sequence of chemical, physical, and mineralogical analyses was carried out on around 2 g of soil. The suggested SOP and the sequential range of analyses were effective in grouping together samples from the same place and the same parent material, as well as successfully discriminating samples from different locations and from different rocks. In addition, the sample treatment and analytical protocol can be modified depending on the context of the forensic work.
Keywords: clay mineralogy, forensic soils analysis, sequential analyses, kaolinite, gibbsite
Procedia PDF Downloads 257

17745 Real-Time Kinetic Analysis of Labor-Intensive Repetitive Tasks Using Depth-Sensing Camera
Authors: Sudip Subedi, Nipesh Pradhananga
Abstract:
Musculoskeletal disorders (MSDs) are common in construction workers. MSDs include lower back injuries, knee injuries, spinal injuries, and joint injuries, among others. Since most construction tasks are still manual, construction workers often need to perform repetitive, labor-intensive tasks, staying in the same or an awkward posture for extended periods. This induces significant stress on the joints and spine, increasing the risk of developing MSDs. Manual monitoring of such tasks is virtually impossible with the handful of safety managers on a construction site. This paper proposes a methodology for performing kinetic analysis of working postures during such tasks in real time. The skeletons of different workers will be tracked with a depth-sensing camera while performing the task, to create training data for identifying the best posture. For this, kinetic analysis will be performed using a human musculoskeletal model in an open-source software system (OpenSim) to visualize the stress induced at essential joints. The "safe posture" inducing the lowest stress on essential joints will be computed for the different actions involved in the task. The identified "safe posture" will serve as a basis for real-time monitoring and identification of awkward and unsafe postural behaviors of construction workers. In addition, a temporal simulation will be carried out to find the long-term effects of repetitive exposure to the observed postures. This will help create awareness among workers of potential future health hazards and encourage them to work safely. Furthermore, the collected individual data can be used to provide need-based, personalized training to construction workers.
Keywords: construction workers' safety, depth sensing camera, human body kinetics, musculoskeletal disorders, real time monitoring, repetitive labor-intensive tasks
Procedia PDF Downloads 139

17744 Improving Fused Deposition Modeling Efficiency: A Parameter Optimization Approach
Authors: Wadea Ameen
Abstract:
Rapid prototyping (RP) technology such as fused deposition modeling (FDM) is gaining popularity because it can produce functioning components with intricate geometric patterns in a reasonable amount of time. A multitude of process variables influences the quality of manufactured parts. In this study, four important process parameters are considered: layer thickness, model interior fill style, support fill style, and orientation. Their influence on three responses (build time, model material, and support material) is studied. Experiments are conducted based on a factorial design, and the results are presented.
Keywords: fused deposition modeling, factorial design, optimization, 3D printing
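A two-level full factorial design over the four parameters named above can be enumerated directly, and each factor's main effect on a response computed as the mean response at its high level minus the mean at its low level. The levels and the build-time response model below are invented for illustration, not the study's data.

```python
from itertools import product

# Two-level full factorial sketch for four FDM parameters (invented levels).
factors = {
    "layer_thickness": (0.1, 0.3),      # mm
    "interior_fill":   (0, 1),          # 0 = sparse, 1 = solid
    "support_fill":    (0, 1),
    "orientation":     (0, 90),         # degrees
}

def build_time(lt, fill, support, orient):
    """Invented response model: thinner layers and solid fill cost time."""
    return 60 / lt * (1 + 0.5 * fill) + 10 * support + 0.1 * orient

# Enumerate all 2**4 = 16 runs of the full factorial design.
runs = [{**dict(zip(factors, combo)), "time": build_time(*combo)}
        for combo in product(*factors.values())]

def main_effect(name):
    """Mean response at the factor's high level minus at its low level."""
    hi = [r["time"] for r in runs if r[name] == factors[name][1]]
    lo = [r["time"] for r in runs if r[name] == factors[name][0]]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print({n: round(main_effect(n), 1) for n in factors})
```

Ranking the absolute main effects is the usual first screening step in such a design: here the (invented) layer-thickness effect dwarfs the others, which mirrors the kind of conclusion a factorial FDM study draws.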
Procedia PDF Downloads 31
17743 Review of Different Machine Learning Algorithms
Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui
Abstract:
Classification is a data mining technique based on machine learning (ML) algorithms. It is used to classify individual items in a body of information into a set of predefined classes or groups. Web mining is also a branch of this family of data mining methods. The main purpose of this paper is to analyse and compare the performance of the Naïve Bayes algorithm, decision trees, K-Nearest Neighbour (KNN), Artificial Neural Networks (ANN), and Support Vector Machines (SVM). The paper describes these ML algorithms with their advantages and disadvantages and also identifies open research issues.
Keywords: data mining, web mining, classification, ML algorithms
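Among the classifiers the review compares, K-Nearest Neighbour is the simplest to state concretely. As a minimal illustration of the idea (a self-contained sketch with toy data, not the paper's implementation), a KNN classifier assigns a query point the majority label of its k nearest training points:

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of ((x, y), label) pairs; distance is Euclidean."""
    dists = sorted((math.dist(point, query), label) for point, label in train)
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Toy data: class 'A' clustered near the origin, class 'B' near (5, 5).
train = [((0, 0), 'A'), ((1, 0), 'A'), ((0, 1), 'A'),
         ((5, 5), 'B'), ((6, 5), 'B'), ((5, 6), 'B')]
print(knn_predict(train, (4.5, 5)))  # all 3 nearest neighbours are 'B' -> 'B'
```

The same vote-over-neighbours structure is what makes KNN sensitive to the choice of k and of distance metric, two of the trade-offs such comparative reviews typically discuss.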
Procedia PDF Downloads 305
17742 Investigation of Electrochemical, Morphological, Rheological and Mechanical Properties of Nano-Layered Graphene/Zinc Nanoparticles Incorporated Cold Galvanizing Compound at Reduced Pigment Volume Concentration
Authors: Muhammad Abid
Abstract:
The ultimate goal of this research was to produce a cold galvanizing compound (CGC) at reduced pigment volume concentration (PVC) to protect metallic structures from corrosion. The influence of the partial replacement of Zn dust by nano-layered graphene (NGr) and Zn metal nanoparticles on the electrochemical, morphological, rheological, and mechanical properties of the CGC was investigated. EIS was used to explore the electrochemical nature of the coatings. The EIS results revealed that the partial replacement of Zn by NGr and Zn nanoparticles enhanced the cathodic protection at reduced PVC (4:1) by improving the electrical contact between the Zn particles and the metal substrate. A Tafel scan was conducted to support the cathodic behaviour of the coatings. The sample formulated solely with Zn at PVC 4:1 was found to be dominated by physical barrier characteristics rather than cathodic protection. By increasing the concentration of NGr in the formulation, the corrosion potential shifted towards more negative values. The coating with 1.5% NGr showed the highest galvanic action at reduced PVC. FE-SEM confirmed the interconnected network of conducting particles. The coating without NGr and Zn nanoparticles at PVC 4:1 showed significant gaps between the Zn dust particles. The novelty was evidenced by micrographs showing the consistent distribution of NGr and Zn nanoparticles all over the surface, which acted as a bridge between spherical Zn particles and provided cathodic protection at a reduced PVC. The layered structure of graphene also improved the physical shielding effect of the coatings, which limited the diffusion of electrolytes and corrosion products (oxides/hydroxides) into the coatings, as reflected by the salt spray test. The coatings showed good rheological (liquid/fluid) properties. All the coatings showed excellent adhesion but had different strength values.
A real-time scratch resistance assessment showed all the coatings had good scratch resistance.
Keywords: protective coatings, anti-corrosion, galvanization, graphene, nanomaterials, polymers
Procedia PDF Downloads 102
17741 Effects on Inflammatory Biomarkers and Respiratory Mechanics in Laparoscopic Bariatric Surgery: Desflurane vs. Total Intravenous Anaesthesia with Propofol
Authors: L. Kashyap, S. Jha, D. Shende, V. K. Mohan, P. Khanna, A. Aravindan, S. Kashyap, L. Singh, S. Aggarwal
Abstract:
Obesity is associated with a chronic inflammatory state. During surgery, there is an interplay between anaesthetic and surgical stress vis-a-vis the already present complex immune state. Moreover, the postoperative period is dictated by inflammation, which is crucial for wound healing and regeneration. An excess of inflammatory response might hamper recovery, besides increasing the risk of infection and complications. There is definite evidence of the immunosuppressive role of inhaled anaesthetic agents. This immune modulation may be brought into effect directly by influencing the cells of innate and adaptive immunity. The effects of propofol on immune mechanisms have been widely elucidated because of its popularity. It reduces superoxide generation, elastase release, and chemotaxis. However, there is no unequivocal proof of one’s superiority over the other. Hence, an anaesthetic regimen with lesser inflammatory potential and specific to the obese patient is needed. The OBESITA trial protocol (2019) by Sousa and co-workers, currently in progress, aims to test the hypothesis that anaesthesia with sevoflurane results in a weaker proinflammatory response compared to propofol, as evidenced by lower IL-6 and other biomarkers and an increased macrophage differentiation into the M2 phenotype in adipose tissue. IL-6 was used as the objective parameter to evaluate inflammation, as it is regulated by both surgery and anaesthesia. It is the most sensitive marker of the inflammatory response to tissue damage, since it is released within minutes by blood leukocytes. We hypothesized that maintenance of anaesthesia with propofol would lead to less inflammation than that with desflurane. Aims: The effect of two anaesthetic techniques, total intravenous anaesthesia (TIVA) with propofol and desflurane anaesthesia, on the surgical stress response was evaluated. The primary objective was to compare serum interleukin-6 (IL-6) levels before and after surgery.
Methods: In this prospective single-blinded randomized controlled trial, 30 obese patients (BMI>30 kg/m2) undergoing laparoscopic bariatric surgery under general anaesthesia were recruited. Patients were randomized to receive desflurane or TIVA using a target-controlled infusion for maintenance of anaesthesia. As a marker of inflammation, pre- and post-surgery IL-6 levels were compared. Results: After surgery, IL-6 levels increased significantly in both groups. The rise in IL-6 was less with TIVA than with desflurane; however, the difference did not reach significance. The IL-6 rise post-surgery correlated positively with the complexity of the procedure and the duration of surgery and anaesthesia, rather than with the anaesthetic technique. The two groups did not differ in terms of intra-operative hemodynamic and respiratory variables, time to awakening, postoperative pulmonary complications, and duration of hospital stay. The incidence of nausea was significantly higher with desflurane than with TIVA. Conclusion: The inflammatory response did not differ as a function of anaesthetic technique when propofol and desflurane were compared. Also, patient and surgical variables dictated post-operative inflammation more than anaesthetic factors. A larger sample size is needed to confirm or refute these findings.
Keywords: bariatric, biomarkers, inflammation, laparoscopy
Procedia PDF Downloads 126
17740 The Optical OFDM Equalization Based on the Fractional Fourier Transform
Authors: A. Cherifi, B. S. Bouazza, A. O. Dahman, B. Yagoubi
Abstract:
Transmission over optical channels introduces inter-symbol interference (ISI) as well as inter-channel (or inter-carrier) interference (ICI). To decrease the effects of ICI, this paper proposes an equalizer for the optical OFDM system based on the fractional Fourier transform (FrFT). In this FrFT-OFDM system, the traditional Fourier transform is replaced by the fractional Fourier transform to modulate and demodulate the data symbols. The proposed equalizer samples the received signal at different instants within each symbol period. Theoretical analysis and numerical simulation are discussed.
Keywords: OFDM, fractional Fourier transform, internet and information technology
Procedia PDF Downloads 409
17739 Effects of the Different Recovery Durations on Some Physiological Parameters during 3 X 3 Small-Sided Games in Soccer
Authors: Samet Aktaş, Nurtekin Erkmen, Faruk Guven, Halil Taskin
Abstract:
This study aimed to determine the effects of 3 versus 3 small-sided games (SSG) with different recovery times on some physiological parameters in soccer players. Twelve soccer players from the Regional Amateur League volunteered for this study (mean±SD age, 20.50±2.43 years; height, 177.73±4.13 cm; weight, 70.83±8.38 kg). Subjects performed soccer training five days per week. The protocol of the study was approved by the local ethics committee of the School of Physical Education and Sport, Selcuk University. The subjects were divided into teams of 3 players according to the Yo-Yo Intermittent Recovery Test. The field was 26 m wide and 34 m long. Subjects performed, twice in a random order, a series of 3 bouts of 3-a-side SSGs with 3 min and 5 min recovery durations. In the SSGs, each set lasted 6 min. The percentage of maximal heart rate (%HRmax), blood lactate concentration (LA), and Rated Perceived Exertion (RPE) scale points were collected before the SSGs and at the end of each set. Data were analyzed by analysis of variance (ANOVA) with repeated measures. Significant differences in %HRmax were found between pre-SSG values and the 1st, 2nd, and 3rd sets in both the SSG with a 3 min recovery duration and the SSG with a 5 min recovery duration (p<0.05). Mean %HRmax in the SSG with a 3 min recovery duration was significantly higher than in the SSG with a 5 min recovery duration in both the 1st and 2nd sets (p<0.05). No significant difference was found between sets of either SSG in terms of LA (p>0.05). LA in the SSG with a 3 min recovery duration was higher than in the SSG with a 5 min recovery duration in the 2nd set (p<0.05). RPE in soccer players did not differ between SSGs (p>0.05). In conclusion, this study demonstrates that exercise intensity in SSGs with 3 min recovery durations is higher than in SSGs with 5 min recovery durations.
Keywords: small-sided games, soccer, heart rate, lactate
Procedia PDF Downloads 468
17738 Using Data Mining in Automotive Safety
Authors: Carine Cridelich, Pablo Juesas Cano, Emmanuel Ramasso, Noureddine Zerhouni, Bernd Weiler
Abstract:
Safety is one of the most important considerations when buying a new car. While active safety aims at avoiding accidents, passive safety systems such as airbags and seat belts protect the occupant in case of an accident. In addition to legal regulations, organizations like Euro NCAP provide consumers with an independent assessment of the safety performance of cars and drive the development of safety systems in the automobile industry. Those ratings are mainly based on injury assessment reference values derived from physical parameters measured in dummies during a car crash test. The components and sub-systems of a safety system are designed to achieve the required restraint performance. Sled tests and other types of tests are then carried out by car makers and their suppliers to confirm the protection level of the safety system. A Knowledge Discovery in Databases (KDD) process is proposed in order to minimize the number of tests. The KDD process is based on the data emerging from sled tests according to Euro NCAP specifications. About 30 parameters of the passive safety systems from different data sources (crash data, dummy protocol) are first analysed together with experts’ opinions. A procedure is proposed to manage missing data and validated on real data sets. Finally, a procedure is developed to estimate a set of rough initial parameters of the passive system before testing, aiming at reducing the number of tests.
Keywords: KDD process, passive safety systems, sled test, dummy injury assessment reference values, frontal impact
Procedia PDF Downloads 387
17737 Aerobic Capacity Outcomes after an Aerobic Exercise Program with an Upper Body Ergometer in Diabetic Amputees
Authors: Cecilia Estela Jiménez Pérez Campos
Abstract:
Introduction: Amputation results from a series of complications in diabetic patients; at that point in the evolution of the illness, their aerobic capacity is severely reduced. In addition, cardiac rehabilitation programs are largely based on activities performed in a standing position. Cardiac rehabilitation programs for these patients must therefore be improved on a scientific basis. Objective: To evaluate the aerobic capacity of diabetic amputees after an aerobic exercise program with an upper limb ergometer. Methodology: The design is longitudinal, prospective, comparative, and non-randomized. We included all diabetic pelvic limb amputees who attended cardiac rehabilitation. We formed 2 groups: an experimental group and a control group. The patients performed exercise testing using a protocol of the author’s design. The experimental group completed 24 exercise sessions (3 sessions/week), at an intensity determined by the training heart rate. At the end of the 8-week period, the subjects performed a second exercise test. Results: The groups were a homogeneous sample in age (experimental, n=15: 57.6±12.5 years old; control, n=8: 52.5±8.0 years old), sex, occupation, education, and economic features (chi-square, p=0.28). The initial aerobic capacity was similar in both groups, and the aerobic capacity achieved after the program was statistically greater in the experimental group than in the control one. The final mean VO2peak (mlO2/kg/min) was 17.1±3.8 in the experimental group and 10.5±3.8 in the control group, p=0.001 (Student’s t-test). Conclusions: Aerobic capacity improved after an arm ergometer exercise program in diabetic amputees, and quality of life improved too. This program is therefore fundamental in the rehabilitation management of diabetic amputees.
Keywords: aerobic fitness, metabolic equivalent (MET), oxygen output, upper limb ergometer
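The keywords list metabolic equivalents (MET), and by the standard convention 1 MET corresponds to a resting oxygen uptake of 3.5 mlO2/kg/min, so the reported VO2peak values translate directly into METs. A quick sketch using the abstract's figures (the conversion factor is the conventional value, not stated in the abstract):

```python
ML_O2_PER_MET = 3.5  # conventional resting oxygen uptake, mlO2/kg/min per MET

def vo2_to_met(vo2_peak):
    """Convert a VO2peak value (mlO2/kg/min) to metabolic equivalents."""
    return vo2_peak / ML_O2_PER_MET

# Mean VO2peak from the abstract: experimental 17.1, control 10.5 mlO2/kg/min.
print(round(vo2_to_met(17.1), 1))  # -> 4.9 METs (experimental group)
print(round(vo2_to_met(10.5), 1))  # -> 3.0 METs (control group)
```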
Procedia PDF Downloads 236
17736 Audit Examining Maternity Assessment Suite Triage Compliance with Birmingham Symptom Specific Obstetric Triage System in a London Teaching Hospital
Authors: Sarah Atalla, Shubham Gupta, Kim Alipio, Tanya Maric
Abstract:
Background: Chelsea and Westminster Hospital has introduced the Birmingham Symptom Specific Obstetric Triage System (BSOTS) for patients who present acutely to the Maternity Assessment Suite (MAS), to prioritise care by urgency. The primary objective was to evaluate whether BSOTS was used appropriately to assess patients (defined as a 90% threshold). The secondary objective was to assess whether patients were seen within their designated triaged timeframe (defined as a 90% threshold). Methodology: MAS records were retrospectively reviewed for a randomly selected one-week period of data from 2020 (21/09/2020 - 27/09/2020). 189 patients presented to MAS during this time. Data were collected on the presenting complaint, time of attendance (divided into four time categories), and triage colour code for the urgency of review by a doctor (red: immediately; orange: within 15 minutes; yellow: within 1 hour; green: within 4 hours). The number of triage waiting times that were breached and the outcome of each attendance were noted. Results: 49% of patients presenting to MAS during this time period were triaged, which did not meet the 90% target. 67% of triaged patients were seen within the timeframe designated by their triage colour code, which also did not meet the 90% target. The most frequent reason for patient attendance was reduced fetal movements (30.5% of attendances). The busiest time of day (when most patients presented) was between 06:01-12:00, and this was also when the highest number of patients were not triaged (26 patients, or 54% of patients presenting in this time category). The most used triage category (59%) was the green colour code (to be seen by a doctor within 4 hours), followed by orange (24%), yellow (14%), and red (3%). 45% of triaged patients were admitted, whilst 55% were discharged.
62% of patients allocated to the green triage category were discharged, compared to 56% of yellow category patients, 27% of orange category patients, and 50% of red category patients. The time of presentation to the hospital was also associated with the level of urgency and the outcome. Patients presenting from 12:01 to 18:00 were more likely to be discharged (72%) than those presenting from 00:01 to 06:00, of whom only 12.5% were discharged. Conclusion: The triage system for assessing the urgency of acutely presenting obstetric patients is only being effectively utilised for 49% of patients. There is potential to improve the use of the triage system to increase efficiency and promote patient safety. MAS was busiest from 06:01 to 12:00, which was also when the highest number of patients were not triaged; this highlights areas for improvement, including higher staffing levels, better use of BSOTS to triage patients, and patient education.
Keywords: Birmingham, BSOTS, maternal, obstetric, pregnancy, specific, symptom, triage
Procedia PDF Downloads 108
17735 Use of Smartphones in 6th and 7th Grade (Elementary Schools) in Istria: Pilot Study
Authors: Maja Ruzic-Baf, Vedrana Keteles, Andrea Debeljuh
Abstract:
Younger and younger children now use smartphones, devices which have become ‘a must have’ and without which the life of children would be almost ‘unthinkable’. Devices are becoming lighter and lighter while offering an array of options and applications, as well as the unavoidable access to the Internet, without which they would be almost unusable. Numerous features, such as taking photographs, listening to music, searching for information on the Internet, accessing social networks, and using chat and messaging services, are only some of those offered by ‘smart’ devices. They have replaced the alarm clock, home phone, camera, tablet, and other devices. Their use and possession have become part of the everyday image of young people. Apart from the positive aspects, the use of smartphones also has some downsides. For instance, free time used to be spent in nature, playing, doing sports, or in other activities enabling children adequate psychophysiological growth and development. Greater usage of smartphones during classes to check statuses on social networks, message friends, or play online games is just one of the possible negative aspects of their use. Considering that the age of the population using smartphones is decreasing and that smartphones are no longer ‘foreign’ to children of pre-school age (smartphones are used at home, or in coffee shops or shopping centers while waiting for parents, often for playing video games inappropriate to their age), particular attention must be paid to a very sensitive group: teenagers, who almost never separate from their ‘pets’. This paper is divided into two sections, a theoretical one and an empirical one.
The theoretical section gives an overview of the pros and cons of smartphone usage, while the empirical section presents the results of research conducted in three elementary schools regarding the usage of smartphones and, specifically, their usage during classes, during breaks, and to search for information on the Internet and check status updates and ‘likes’ on the Facebook social network.
Keywords: education, smartphone, social networks, teenagers
Procedia PDF Downloads 456
17734 System Identification and Quantitative Feedback Theory Design of a Lathe Spindle
Authors: M. Khairudin
Abstract:
This paper investigates system identification and quantitative feedback theory (QFT) design for the robust control of a lathe spindle. The dynamics of the lathe spindle are uncertain and time-varying due to the variation of cutting depth during the cutting process. System identification was used to obtain a dynamic model of the lathe spindle. In this work, real-time system identification is used to construct a linear model of the system from the nonlinear system. These linear models and their uncertainty bounds can then be used for controller synthesis. Real-time nonlinear system identification is performed to obtain a set of linear models of the lathe spindle that represent the operating ranges of the dynamic system. With a selected input signal, the output response data are acquired, and nonlinear system identification is performed using Matlab to obtain a linear model of the system. Practical design steps are presented in which the QFT-based conditions are formulated to obtain a compensator and pre-filter to control the lathe spindle. The performance of the proposed controller is evaluated in terms of the velocity responses of the lathe spindle, incorporating the cutting depth of the cutting process.
Keywords: lathe spindle, QFT, robust control, system identification
Procedia PDF Downloads 545
17733 Economics of Chickpea Cultivars as Influenced by Sowing Time and Seed Rate
Authors: Indu Bala Sethi, Meena Sewhag, Rakesh Kumar, Parveen Kumar
Abstract:
A field experiment was conducted at the Pulse Research Area of CCS Haryana Agricultural University, Hisar, during rabi 2012-13 to study the economics of chickpea cultivars as influenced by sowing time and seed rate on sandy loam soils under irrigated conditions. The experiment was laid out in a split plot design with three replications, giving 24 treatment combinations: two sowing times (1st fortnight of November and 1st fortnight of December) and four cultivars (H09-23, H08-18, C-235, and HC-1) were kept in main plots, while three seed rates, viz. 40 kg ha-1, 50 kg ha-1, and 60 kg ha-1, were kept in subplots. The crop was sown with a common row spacing of 30 cm as per the dates of sowing. The fertilizer was applied in the form of di-ammonium phosphate. The soil of the experimental site was a deep sandy loam having a pH of 7.9, an EC of 0.13 dS/m, low organic carbon (0.34%), low available N (193.36 kg ha-1), medium available P2O5 (32.18 kg ha-1), and high available K2O (249.67 kg ha-1). The crop was irrigated as and when required so as to maintain adequate soil moisture in the root zone. The crop was sprayed with monocrotophos (1.25 l/ha) at the initiation of flowering and at the pod filling stage to protect the crop from pod borer attack. The yield was measured at the time of harvest. The costs of field preparation, sowing of seeds, thinning, weeding, plant protection, harvesting, and cleaning contributed to the fixed cost. Results revealed that sowing in the 1st fortnight of November recorded significantly higher gross returns (Rs. 1,01,254 ha-1), net returns (Rs. 68,504 ha-1), and BC ratio (3.09) compared to the delayed crop of chickpea. The highest gross returns (Rs. 91,826 ha-1), net returns (Rs. 59,076 ha-1), and BC ratio (2.81) among cultivars were recorded with H08-18. A higher cost of cultivation was observed at higher seed rates than at lower ones; however, no significant variation in net and gross returns was observed due to seed rates. The highest BC ratio (2.72) was recorded with 50 kg ha-1, which differed significantly from 60 kg ha-1 but was at par with 40 kg ha-1. This is because of the higher grain yield obtained with the 50 kg ha-1 seed rate. The net profit for farmers growing chickpea with a seed rate of 50 kg ha-1 was higher than for farmers growing chickpea with seed rates of 40 and 60 kg ha-1.
Keywords: chickpea, cultivars, seed rate, sowing time
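The reported returns are internally consistent: subtracting net from gross returns gives the cost of cultivation, and the benefit-cost (BC) ratio is gross returns divided by that cost. A quick sketch checking the November sowing figures from the abstract:

```python
def economics(gross, net):
    """Derive cost of cultivation and benefit-cost ratio
    from gross and net returns (both in Rs/ha)."""
    cost = gross - net
    bc_ratio = gross / cost
    return cost, bc_ratio

# November sowing: gross Rs. 1,01,254/ha, net Rs. 68,504/ha.
cost, bc = economics(101254, 68504)
print(cost)          # -> 32750 (implied cost of cultivation, Rs/ha)
print(round(bc, 2))  # -> 3.09, matching the reported BC ratio
```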
Procedia PDF Downloads 446
17732 Improving Rural Access to Specialist Emergency Mental Health Care: Using a Time and Motion Study in the Evaluation of a Telepsychiatry Program
Authors: Emily Saurman, David Lyle
Abstract:
In Australia, a well serviced rural town might have a psychiatrist visit once a month, with more frequent visits from a psychiatric nurse, but many towns have no resident access to mental health specialists. Access to specialist care would not only reduce patient distress and benefit outcomes but also facilitate the effective use of limited resources. The Mental Health Emergency Care-Rural Access Program (MHEC-RAP) was developed to improve access to specialist emergency mental health care in rural and remote communities using telehealth technologies. However, there is currently no benchmark to gauge program efficiency or capacity, i.e., to determine whether the program activity is justifiably sufficient. The evaluation of MHEC-RAP used multiple methods and applied a modified theory of access to assess the program and its aim of improved access to emergency mental health care. This was the first evaluation of a telepsychiatry service to include a time and motion study design examining program time expenditure, efficiency, and capacity. The time and motion study analysis was combined with an observational study of the program structure and function to assess the balance between program responsiveness and efficiency. Previous program studies have demonstrated that MHEC-RAP has improved access and is used and effective. The findings from the time and motion study suggest that MHEC-RAP has the capacity to manage increased activity within the current model structure without loss of responsiveness or efficiency in the provision of care. Enhancing program responsiveness and efficiency will also support a claim of the program’s value for money. MHEC-RAP is a practical telehealth solution for improving access to specialist emergency mental health care.
The findings from this evaluation have already attracted the attention of other regions in Australia interested in implementing emergency telepsychiatry programs and are now informing the progressive establishment of mental health resource centres in rural New South Wales. Like MHEC-RAP, these centres will provide rapid, safe, and contextually relevant assessments and advice to support local health professionals to manage mental health emergencies in the smaller rural emergency departments. Sharing the application of this methodology and research activity may help to improve access to and future evaluations of telehealth and telepsychiatry services for others around the globe.
Keywords: access, emergency, mental health, rural, time and motion
Procedia PDF Downloads 238
17731 Effect of Temperatures on Growth and Development Time of Aphis fabae Scopoli (Homoptera: Aphididae): On Bean (Phaseolus vulgaris L.)
Authors: Rochelyn Dona, Serdar Satar
Abstract:
The aim of this study was to evaluate the biological parameters of A. fabae Scopoli (Hemiptera: Aphididae). Developmental, survival, and reproductive data were collected for Aphis fabae reared on detached leaves of ‘pinto’ bean (Phaseolus vulgaris L.) at five temperature regimes (12, 16, 20, 24, and 28°C), 65% relative humidity (RH), and a photoperiod of 16:8 (L:D) h. The developmental times of the immature stages ranged from 16.65 days at 12°C to 5.70 days at 24°C, with a slight increase again at 28°C (6.62 days); this study thus places the developmental optimum for A. fabae at around 24°C. The average longevity of mature females decreased significantly from 42.32 days at 12°C to 16.12 days at 28°C. The reproduction rate per female was 62.27 at 16°C and 12.72 at 28°C. The mean generation period of the population ranged from 29.24 days at 12°C to 11.50 days at 28°C. The highest intrinsic rate of increase (rm = 0.41) was recorded at 24°C, and the lowest at 12°C (rm = 0.15). It was evident that a temperature of 28°C increased the development time, accelerated mortality in the nymphal stages, shortened adult longevity, and reduced fecundity. According to this study, the optimal temperature range for the population growth of A. fabae on bean was 16°C-24°C.
Keywords: developmental time, intrinsic rate, reproduction period, temperature dependence
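The intrinsic rate of increase (rm) reported for each temperature is the kind of life-table statistic conventionally obtained from the Euler-Lotka equation, sum over age classes of exp(-r*x)*lx*mx = 1, where lx is age-specific survivorship and mx is age-specific fecundity. A minimal sketch solving this numerically by bisection, using a hypothetical life table (not the paper's data, and not necessarily the authors' exact method):

```python
import math

def intrinsic_rate(ages, lx, mx, lo=-1.0, hi=1.0, iters=100):
    """Solve the Euler-Lotka equation sum(exp(-r*x) * lx * mx) = 1 for r.
    ages: age classes (days); lx: survivorship; mx: daily fecundity per female."""
    def f(r):
        return sum(math.exp(-r * x) * l * m for x, l, m in zip(ages, lx, mx)) - 1
    for _ in range(iters):
        mid = (lo + hi) / 2
        # f is decreasing in r: a positive value means r is still too small.
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical life table: reproduction on days 5-9 after birth.
ages = [5, 6, 7, 8, 9]
lx = [0.9, 0.9, 0.85, 0.8, 0.7]
mx = [3.0, 3.0, 2.5, 2.0, 1.5]
print(round(intrinsic_rate(ages, lx, mx), 3))  # per-day intrinsic rate of increase
```

Because warmer temperatures shorten development (reproduction starts at smaller x) while moderate temperatures raise survivorship and fecundity, rm peaks at an intermediate temperature, consistent with the maximum at 24°C reported here.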
Procedia PDF Downloads 232
17730 Biodiesel Production From Waste Cooking Oil Using g-C3N4 Photocatalyst
Authors: A. Elgendi, H. Farag, M. E. Ossman, M. Abd-Elfatah
Abstract:
This paper explores the use of waste cooking oil (WCO) as an attractive option to reduce the raw material cost of biodiesel production. This can be achieved through two steps: esterification using a g-C3N4 photocatalyst, followed by alkali transesterification. Several parameters were studied to determine the yield of the biodiesel produced: reaction time (2-6 hrs), catalyst concentration (0.3-1.5 wt.%), number of UV lamps (1 or 3 lamps), and methanol:oil ratio (6:1-12:1). From the obtained results, the highest percentage yield was obtained using a methanol:oil molar ratio of 12:1, a catalyst dosage of 0.3 wt.%, a reaction time of 4 hrs, and 1 lamp. The results made clear that the biodiesel produced from waste cooking oil can be used as fuel.
Keywords: biodiesel, heterogeneous catalyst, photocatalytic esterification, waste cooking oil
Procedia PDF Downloads 533
17729 Behavioral Pattern of 2G Mobile Internet Subscribers: A Study on an Operator of Bangladesh
Authors: Azfar Adib
Abstract:
As in many other countries of the world, mobile internet has played a key role in the growth of the internet subscriber base in Bangladesh. This study has attempted to identify particular behavioral or usage patterns of 2G mobile internet subscribers who were using the service of the top internet service provider (as well as the top mobile operator) of Bangladesh prior to the launch of 3G services (when 2G was fully dominant). It contains a comprehensive analysis of different information regarding 2G mobile internet subscribers, obtained from the operator’s own network insights. This is accompanied by the results of a survey conducted among 40 high-frequency users of the service.
Keywords: mobile internet, Symbian, Android, iPhone
Procedia PDF Downloads 444
17728 Effect of Pack Aluminising Conditions on βNiAl Coatings
Authors: A. D. Chandio, P. Xiao
Abstract:
In this study, nickel aluminide coatings were deposited onto CMSX-4 single crystal superalloy and pure Ni substrates using an in-situ chemical vapour deposition (CVD) technique. The microstructural evolution and coating thickness (CT) were studied upon variation of the processing conditions, i.e., time and temperature. The results demonstrated (under identical conditions) that the coating formed on pure Ni contains no substrate entrapments and has a lower CT in comparison to the one deposited on the CMSX-4 counterpart. In addition, the interdiffusion zone (IDZ) on the Ni substrate is γ’-Ni3Al, whereas on the CMSX-4 alloy it is the βNiAl phase. The higher CT on the CMSX-4 superalloy is attributed to the presence of the γ-Ni/γ’-Ni3Al structure, which contains ~15 at.% Al before deposition (already present in the superalloy). The two main deposition parameters (time and temperature) were also studied in addition to the standard comparison of substrate effects. The coating formation time was found to exhibit a profound effect on CT, whilst temperature was found to change coating activities. In addition, the CT showed a linear trend from 800 to 1000 °C; thereafter, a reduction was observed. This was attributed to the change in coating activities.
Keywords: βNiAl, in-situ CVD, CT, CMSX-4, Ni, microstructure
Procedia PDF Downloads 251