Search results for: optimal condition
5616 Towards the Rapid Synthesis of High-Quality Monolayer Continuous Film of Graphene on High Surface Free Energy Existing Plasma Modified Cu Foil
Authors: Maddumage Don Sandeepa Lakshad Wimalananda, Jae-Kwan Kim, Ji-Myon Lee
Abstract:
Graphene is an extraordinary 2D material with superior electrical, optical, and mechanical properties for applications such as transparent contacts, and the chemical vapor deposition (CVD) technique enables the synthesis of large-area, transferable graphene. This abstract describes the use of Cu foil with high surface free energy (SFE) and nano-scale, high-density surface kinks (a rough surface) for CVD graphene growth. Although this is the opposite of the modern approach of using smooth catalytic surfaces for high-quality graphene growth, the controllable rough morphology opens a route to fast synthesis of graphene as a continuous film (less than 50 s, with a short annealing process) compared with the conventional, longer process (30 min growth). The experiments showed that the high-SFE condition and the surface kinks on the Cu(100) crystal plane of the Cu catalytic surface promoted the synthesis of highly monolayer, continuous graphene, because they increase the adsorbed concentration of C species and thereby accelerate the nucleation and growth of graphene. Fast nucleation and growth suppress the diffusion of C atoms to the Cu-graphene interface, resulting in no, or negligible, formation of bilayer patches. High-energy (500 W) Ar plasma treatment (inductively coupled plasma) was used to form a rough Cu foil with high SFE (54.92 mJ m⁻²). This surface was used to grow graphene by CVD at 1000 °C for 50 s. The kink-like, high-SFE sites introduced on the Cu(100) crystal plane promoted faster nucleation of graphene with a high monolayer ratio (I2D/IG of 2.42) compared with smoother, lower-SFE Cu surfaces, such as the smooth surface prepared by redeposition of evaporated Cu atoms during annealing (RRMS of 13.3 nm).
Although the high-SFE condition favored the synthesis of monolayer, continuous graphene, it failed to maintain a clean surface (amorphous C clusters remained) and a defect-free condition (ID/IG of 0.46) because of the high SFE of the Cu foil at the graphene growth stage. A post-annealing process was used to heal the film and overcome these problems. Different post-annealing atmospheres, such as CH4 and H2, were tested; the change in graphene nature (number of layers and continuity) was negligible, but there was a significant difference in graphene quality, as the ID/IG ratio was reduced to 0.21 after post-annealing in H2. In addition to the reduced defectiveness, FE-SEM images showed a reduction in C-cluster contamination of the surface. High-SFE conditions thus favor the formation of graphene as a monolayer, continuous film but do not by themselves yield defect-free graphene. A plasma-modified, high-SFE surface can be used to synthesize graphene within 50 s, and a post-annealing process can then be used to reduce the defectiveness.
Keywords: chemical vapor deposition, graphene, morphology, plasma, surface free energy
Procedia PDF Downloads 244
5615 Defining the Turbulent Coefficients with the Effect of Atmospheric Stability in the Wake of a Wind Turbine
Authors: Mohammad A. Sazzad, Md M. Alam
Abstract:
Wind energy is one of the cleanest forms of renewable energy. Although the wind industry is growing faster than ever, some roadblocks stand in the way of improvement. One of the difficulties the industry faces is insufficient knowledge about wakes within wind farms. Energy is generated in the lowest layer of the atmospheric boundary layer (ABL). The interaction between the wind turbine (WT) blades and the wind introduces a low-speed wind region, defined as the wake. This wake region shows different characteristics under each stability condition of the ABL, so it is fundamental to understand this region, which is characterized mainly by turbulence transport and wake shear. Defining the wake recovery length and width is crucial for a wind farm to optimize generation and reduce the waste of power delivered to the grid. Therefore, in order to obtain the turbulent coefficients of velocity and length, this research focused on large eddy simulation (LES) data for the neutral ABL (NABL). According to turbulence theory, if the velocity defect and Reynolds stress are expressed in terms of local length and velocity scales, they become invariant. In our study, the velocity and length coefficients are 0.4867 and 0.4794, respectively, which is close to the theoretical value of 0.5 for the NABL. Some profiles deviate from invariance because, in the presence of thermal and wind shear, the power coefficients varied a little from the ideal condition.
Keywords: atmospheric boundary layer, renewable energy, turbulent coefficient, wind turbine, wake
Procedia PDF Downloads 132
5614 Trajectory Optimization for Autonomous Deep Space Missions
Authors: Anne Schattel, Mitja Echim, Christof Büskens
Abstract:
Trajectory planning for deep space missions has recently become a topic of great interest. Flying to space objects like asteroids serves two main purposes: one is to find rare earth elements, the other to gain scientific knowledge about the origin of the world. Due to the enormous spatial distances, such explorer missions have to be performed unmanned and autonomously. The mathematical field of optimization and optimal control can be used to realize autonomous missions while protecting resources and making them safer. The resulting algorithms may also be applied to other, earth-bound applications such as deep sea navigation and autonomous driving. The project KaNaRiA ('Kognitionsbasierte, autonome Navigation am Beispiel des Ressourcenabbaus im All') investigates the possibilities of cognitive autonomous navigation using the example of an asteroid mining mission, including the cruise phase and approach as well as the asteroid rendezvous, landing, and surface exploration. To verify and test all methods, an interactive, real-time-capable simulation using virtual reality is being developed under KaNaRiA. This paper focuses on the specific challenge of guidance during the cruise phase of the spacecraft, i.e., trajectory optimization and optimal control, including first solutions and results. In principle, there exist two ways to solve optimal control problems (OCPs): the so-called indirect and the direct methods. Indirect methods have been studied for several decades, and their use requires advanced skills in optimal control theory. The main idea of direct approaches, also known as transcription techniques, is to transform the infinite-dimensional OCP into a finite-dimensional non-linear optimization problem (NLP) via discretization of states and controls. These direct methods are applied in this paper. The resulting high-dimensional NLP with constraints can be solved efficiently by special NLP methods, e.g.
sequential quadratic programming (SQP) or interior point methods (IP). The movement of the spacecraft due to gravitational influences of the sun and other planets, as well as the thrust commands, is described through ordinary differential equations (ODEs). Competing mission aims, such as short flight times and low energy consumption, are considered by using a multi-criteria objective function. The resulting non-linear, high-dimensional optimization problems are solved using the software package WORHP ('We Optimize Really Huge Problems'), a software routine combining SQP at an outer level with IP to solve the underlying quadratic subproblems. An application-adapted model of impulsive thrusting, as well as a model of an electrically powered spacecraft propulsion system, is introduced. Different priorities and possibilities of a space mission regarding energy cost and flight time duration are investigated by choosing different weighting factors for the multi-criteria objective function. Varying mission trajectories are analyzed and compared, both aiming at different destination asteroids and using different propulsion systems. For the transcription, the robust method of full discretization is used. The results strengthen the case for trajectory optimization as a foundation for autonomous decision making during deep space missions. At the same time, they show the enormous increase in possibilities for flight maneuvers gained by being able to consider different, even opposing, mission objectives.
Keywords: deep space navigation, guidance, multi-objective, non-linear optimization, optimal control, trajectory planning
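The full-discretization (direct transcription) idea described in this abstract can be sketched on a toy problem. The following is an illustrative sketch only, not the KaNaRiA/WORHP implementation: a double integrator x'' = u stands in for the spacecraft dynamics, Euler discretization turns the OCP into an equality-constrained quadratic program, and the QP's KKT system is solved with dense linear algebra (real mission models are nonlinear and need SQP/IP solvers such as WORHP).

```python
import numpy as np

# Full discretization of a toy optimal-control problem (double integrator,
# x'' = u): minimize control energy for a rest-to-rest transfer x(0)=0 -> x(1)=1.
# The infinite-dimensional OCP becomes a finite equality-constrained QP whose
# KKT system we solve directly. Illustrative sketch only; the dynamics, grid
# and objective are invented, not the paper's mission model.

N, T = 20, 1.0
h = T / N
nx = N + 1                      # grid points for x and v
nz = 2 * nx + N                 # z = [x_0..x_N, v_0..v_N, u_0..u_{N-1}]

def xi(k): return k             # index helpers into z
def vi(k): return nx + k
def ui(k): return 2 * nx + k

# Objective: 0.5 * h * sum u_k^2  ->  diagonal Hessian (zero rows for states)
H = np.zeros((nz, nz))
for k in range(N):
    H[ui(k), ui(k)] = h

# Linear equality constraints A z = b: Euler dynamics + boundary conditions
rows, b = [], []
for k in range(N):
    r = np.zeros(nz); r[xi(k+1)] = 1; r[xi(k)] = -1; r[vi(k)] = -h
    rows.append(r); b.append(0.0)          # x_{k+1} = x_k + h v_k
    r = np.zeros(nz); r[vi(k+1)] = 1; r[vi(k)] = -1; r[ui(k)] = -h
    rows.append(r); b.append(0.0)          # v_{k+1} = v_k + h u_k
for idx, val in [(xi(0), 0.0), (vi(0), 0.0), (xi(N), 1.0), (vi(N), 0.0)]:
    r = np.zeros(nz); r[idx] = 1
    rows.append(r); b.append(val)
A, b = np.array(rows), np.array(b)

# KKT system of the QP: [[H, A^T], [A, 0]] [z; lam] = [0; b]
m = A.shape[0]
KKT = np.block([[H, A.T], [A, np.zeros((m, m))]])
z = np.linalg.solve(KKT, np.concatenate([np.zeros(nz), b]))[:nz]

print(round(z[xi(N)], 6), round(z[ui(0)], 3))   # final position, first control
```

The discrete control profile approximates the known continuous minimum-energy solution u(t) = 6 - 12t on [0, 1]: positive thrust first, braking at the end.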
Procedia PDF Downloads 412
5613 Embedded Digital Image System
Authors: Dawei Li, Cheng Liu, Yiteng Liu
Abstract:
This paper introduces an embedded digital image system for a Chinese space-environment vertical-exploration sounding rocket. In order to record the flight status of the sounding rocket as well as the payloads, an onboard embedded image processing system based on the ADV212, a JPEG2000 compression chip, is designed in this paper. Since the sounding rocket is not designed to be recovered, all image data must be transmitted to the ground station before re-entry, while the downlink band available for image transmission is only about 600 kbps. At the same compression ratio, the JPEG2000 standard algorithm achieves better image quality than other algorithms, so JPEG2000 image compression is applied under this limited-downlink condition. The embedded image system supports real-time compression from lossless up to 200:1, with two cameras monitoring nose ejection and motor separation and two cameras monitoring boom deployment. The encoder, an ADV7182, receives the PAL signal from the camera and outputs an ITU-R BT.656 signal to the ADV212; the ADV7182 switches between its four input video channels according to the program sequence. Two SRAMs are used for ping-pong operation, and one 512 Mb SDRAM buffers high-frame-rate images. The whole image system features low power dissipation, low cost, small size, and high reliability, which makes it well suited to this sounding-rocket application.
Keywords: ADV212, image system, JPEG2000, sounding rocket
Procedia PDF Downloads 421
5612 Enhancing the Pricing Expertise of an Online Distribution Channel
Authors: Luis N. Pereira, Marco P. Carrasco
Abstract:
Dynamic pricing is a revenue management strategy in which hotel suppliers define, over time, flexible, differentiated prices for their services for different potential customers, considering the profile of e-consumers and market demand and supply. The fundamentals of dynamic pricing are thus based on economic theory (price elasticity of demand) and market segmentation. This study aims to define a dynamic pricing strategy and an offer contextualized to the e-consumer's profile in order to increase the number of reservations through an online distribution channel. Segmentation methods (hierarchical and non-hierarchical) were used to identify and validate an optimal number of market segments. A profile of each market segment was studied, considering the characteristics of the e-consumers and their probability of reserving a room. In addition, the price elasticity of demand was estimated for each segment using econometric models. Finally, predictive models were used to define rules for classifying new e-consumers into the pre-defined segments. The empirical study illustrates how the intelligence of an online distribution channel system can be improved through an optimal dynamic pricing strategy and an offer contextualized to the profile of each new e-consumer. A database of 11 million e-consumers of an online distribution channel was used in this study. The results suggest that an appropriate market segmentation policy in online reservation systems benefits service suppliers, because it raises the probability of reservation and generates more profit than fixed pricing.
Keywords: dynamic pricing, e-consumers segmentation, online reservation systems, predictive analytics
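One building block of such a strategy, estimating the price elasticity of demand for a segment, can be sketched as a log-log regression. The data below are synthetic (generated from a constant-elasticity demand curve Q = A·P^ε); the study's real econometric models and segment data are not public, so all numbers are illustrative.

```python
import numpy as np

# Price elasticity of demand for one market segment, estimated by a log-log
# fit: ln Q = ln A + eps * ln P, so the slope of the fit is the elasticity.
# Prices and bookings are synthetic, generated from an assumed elasticity.

prices = np.array([80.0, 100.0, 120.0, 150.0, 200.0])   # room rates
true_elasticity = -1.8
demand = 5.0e6 * prices ** true_elasticity               # bookings per period

slope, intercept = np.polyfit(np.log(prices), np.log(demand), 1)
print(round(slope, 3))   # -> -1.8 (elastic segment: |eps| > 1)
```

A segment with |ε| > 1 is price-elastic: lowering the rate raises total revenue, which is exactly the kind of rule a dynamic-pricing engine can apply per segment.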
Procedia PDF Downloads 234
5611 A Study on FWD Deflection Bowl Parameters for Condition Assessment of Flexible Pavement
Authors: Ujjval J. Solanki, Prof. (Dr.) P. J. Gundaliya, Prof. M. D. Barasara
Abstract:
The Falling Weight Deflectometer (FWD) is used to evaluate the structural performance of flexible pavement. Back calculation is required to obtain the modulus of elasticity of an existing in-service pavement. The back-calculation process needs in-depth field experience to supply the input ranges for the moduli of elasticity of the bituminous, granular, and subgrade layers, and it requires a number of trials to find moduli that match the FWD deflections observed in the field. The study was carried out on the Barnala-Mansa State Highway, Punjab, India, using the FWD before and after an overlay; deflections were obtained at 0 (at the load cell), 300, 600, 900, 1200, 1500, and 1800 mm from the load cell. These seven deflection readings were used to calculate the Surface Curvature Index (SCI), Base Damage Index (BDI), and Base Curvature Index (BCI). The SCI, BDI, and BCI indices are useful for predicting the structural performance of in-service pavement and for identifying homogeneous sections for condition assessment. The index ranges were determined before and after the overlay: SCI 520 to 51, BDI 294 to 63, and BCI 83 to 0.27 for the old pavement, and SCI 272 to 23, BDI 228 to 28, and BCI 25.85 to 4.60 for the new pavement. The indices also show good correlation with the back-calculated moduli of elasticity of all three layers.
Keywords: back calculation, base damage index, base curvature index, FWD (Falling Weight Deflectometer), surface curvature index
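The bowl parameters are simple differences of adjacent sensor deflections. The sketch below uses the common definitions (SCI = D0 - D300, BDI = D300 - D600, BCI = D600 - D900, with deflections in microns); the drop values are invented for illustration, not the Barnala-Mansa measurements.

```python
# Deflection-bowl indices from a single FWD drop. Definitions follow the
# common convention; the deflection values below are made up for illustration.

def bowl_indices(d):
    """d maps sensor offset in mm (from the load cell) to deflection in microns."""
    sci = d[0] - d[300]     # Surface Curvature Index: bound (bituminous) layers
    bdi = d[300] - d[600]   # Base Damage Index: base layer
    bci = d[600] - d[900]   # Base Curvature Index: subbase/subgrade
    return sci, bdi, bci

drop = {0: 610, 300: 340, 600: 180, 900: 110, 1200: 80, 1500: 60, 1800: 45}
print(bowl_indices(drop))   # -> (270, 160, 70)
```

Larger index values indicate a weaker corresponding layer, which is why the ranges quoted in the abstract shrink after the overlay.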
Procedia PDF Downloads 332
5610 Evaluation of Associated Risk Factors and Determinants of Near-Miss Obstetric Cases at B.P. Koirala Institute of Health Sciences, Dharan
Authors: Madan Khadka, Dhruba Uprety, Rubina Rai
Abstract:
Background and objective: In 2011, around 273,465 women died worldwide during pregnancy, childbirth, or within 42 days after childbirth. The near miss is recognized as a predictor of the level of care and of maternal death. The objective of the study was to evaluate the risk factors associated with near-miss obstetric cases and maternal death. Material and methods: A prospective observational study was conducted from August 1, 2014, to June 30, 2015, in the Department of Obstetrics and Gynecology at BPKIHS hospital, a tertiary care hospital in Dharan, Eastern Nepal. Cases eligible under the 5-factor scoring system and the WHO near-miss criteria were evaluated. Risk factors included severe hemorrhage, hypertensive disorders, complications of abortion, ruptured uterus, medical/surgical conditions, and sepsis. Results: A total of 9,727 deliveries were attended during the study period: 6,307 (71.5%) vaginal deliveries and 2,777 (28.5%) caesarean sections, with 181 perinatal deaths and a total of 9,546 live births. A total of 162 near misses were identified, and 16 maternal deaths occurred during the study, giving a maternal near-miss rate of 16.6 per 1,000 live births, 172 women with life-threatening conditions (WLTC), a severe maternal outcome ratio of 18.64 per 1,000 live births, a maternal near-miss mortality ratio (MNM:1 MD) of 10.1:1, and a mortality index (MI) of 8.98%. Risk factors were obstetric hemorrhage 27.8%, abortion/ectopic pregnancy 27.2%, eclampsia 16%, medical/surgical conditions 14.8%, sepsis 13.6%, severe pre-eclampsia 11.1%, ruptured uterus 3.1%, and molar pregnancy 1.9%. Of the cases, 19.75% were primigravidae, with a mean age of 25.66 years; cardiovascular and coagulation dysfunction were the major life-threatening conditions, and sepsis (25%) was the major cause of mortality. Conclusion: Hemorrhage and hypertensive disorders are the leading causes of near-miss events, and sepsis is the leading cause of mortality.
As near-miss analysis indicates the quality of health care, it is worth including in national indices.
Keywords: abortion, eclampsia, hemorrhage, maternal mortality, near miss
Procedia PDF Downloads 196
5609 Bi-Criteria Vehicle Routing Problem for Possibility Environment
Authors: Bezhan Ghvaberidze
Abstract:
A multiple-criteria optimization approach for the solution of the Fuzzy Vehicle Routing Problem (FVRP) is proposed. For the possibility environment, the feasibility levels of movements between customers are calculated by a constructed interactive simulation algorithm. The first criterion of the bi-criteria optimization problem, minimization of the expectation of total fuzzy travel time on closed routes, is constructed for the FVRP. A new, second criterion, maximization of the feasibility of movement on the closed routes, is constructed using the Choquet finite averaging operator. The FVRP is reduced to a bi-criteria partitioning problem over the so-called 'promising' routes, which are selected from all admissible closed routes. A suitable selection of the promising routes allows the reduced problem to be solved in real-time computing. For the numerical solution of the bi-criteria partitioning problem, the ε-constraint approach is used. An exact algorithm is implemented based on D. Knuth's Dancing Links technique and the DLX algorithm. The main objective was to present a new approach for the FVRP when there are difficulties in moving on the roads; this approach is called the FVRP for extreme conditions (FVRP-EC). A further aim of this paper was to construct a solution model for this FVRP. Results are illustrated on a numerical example in which all Pareto-optimal solutions are found. An approach for the more complex FVRP model with time windows was also developed, and a numerical example is presented in which optimal routes are constructed for extreme road conditions.
Keywords: combinatorial optimization, fuzzy vehicle routing problem, multiple objective programming, possibility theory
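The ε-constraint idea, optimizing one criterion while bounding the other, can be shown on a toy route-partition instance. Everything below is invented for illustration (routes, travel times, feasibility grades, and a brute-force exact-cover search in place of the paper's Choquet averaging and DLX solver).

```python
from itertools import combinations

# Toy ε-constraint scalarization for a bi-criteria route-partition choice:
# among candidate "promising" routes, pick a subset covering every customer
# exactly once, minimizing total travel time subject to feasibility >= eps.

customers = {1, 2, 3, 4}
# (served customers, expected travel time, feasibility of movement on route)
routes = [({1, 2}, 5.0, 0.9), ({3, 4}, 6.0, 0.8), ({1, 3}, 4.0, 0.6),
          ({2, 4}, 4.5, 0.7), ({1, 2, 3, 4}, 12.0, 0.95)]

def partitions():
    """Yield route subsets that serve each customer exactly once."""
    for r in range(1, len(routes) + 1):
        for combo in combinations(routes, r):
            served = [c for s, _, _ in combo for c in s]
            if sorted(served) == sorted(customers):
                yield combo

def solve(eps):
    feasible = [c for c in partitions() if min(f for _, _, f in c) >= eps]
    return min(feasible, key=lambda c: sum(t for _, t, _ in c), default=None)

for eps in (0.6, 0.8, 0.95):
    best = solve(eps)
    print(eps, None if best is None else sum(t for _, t, _ in best))
```

Sweeping ε traces out the Pareto front: tightening the feasibility bound forces slower but more reliable partitions, which is the trade-off the abstract's two criteria capture.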
Procedia PDF Downloads 485
5608 Estimating Bridge Deterioration for Small Data Sets Using Regression and Markov Models
Authors: Yina F. Muñoz, Alexander Paz, Hanns De La Fuente-Mella, Joaquin V. Fariña, Guilherme M. Sales
Abstract:
The primary approaches for estimating bridge deterioration use Markov-chain models and regression analysis. Traditional Markov models have problems estimating the required transition probabilities when a small sample size is used, and because reliable bridge data often have not been collected over long periods, large data sets may not be available. This study presents an important change to the traditional approach by using the Small Data Method to estimate transition probabilities. The results illustrate that the Small Data Method and the traditional approach provide similar estimates; however, the former provides results that are more conservative. That is, the Small Data Method gave slightly lower bridge condition ratings than the traditional approach. Considering that bridges are critical infrastructure, the Small Data Method, which uses more information and provides more conservative estimates, may be more appropriate when the available sample size is small. In addition, regression analysis was used to calculate bridge deterioration: condition ratings were determined for bridge groups, and the best regression model was selected for each group. The results obtained were very similar to those obtained using Markov chains; however, more data are desirable for better results.
Keywords: concrete bridges, deterioration, Markov chains, probability matrix
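Once a transition matrix has been estimated (by whichever method), projecting bridge condition is a matrix power. The sketch below uses three condition states and made-up transition probabilities, not the study's fitted values.

```python
import numpy as np

# Markov-chain deterioration sketch: condition states 1 (good) .. 3 (poor).
# Each year a bridge stays in its state or drops one state; with no repair,
# state 3 is absorbing. Probabilities below are illustrative only.

P = np.array([[0.90, 0.10, 0.00],    # from state 1
              [0.00, 0.85, 0.15],    # from state 2
              [0.00, 0.00, 1.00]])   # from state 3 (absorbing)

start = np.array([1.0, 0.0, 0.0])    # new bridge: state 1 with certainty
after_10y = start @ np.linalg.matrix_power(P, 10)
expected_state = after_10y @ np.array([1, 2, 3])
print(np.round(after_10y, 3), round(expected_state, 2))
```

A "more conservative" estimation method in the abstract's sense shifts probability mass toward the worse states, lowering the projected condition rating for the same horizon.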
Procedia PDF Downloads 336
5607 Size and Content of the Doped Silver Affected the Pulmonary Toxicity of Silver-Doped Nano-Titanium Dioxide Photocatalysts and the Optimization of These Two Parameters
Authors: Xiaoquan Huang, Congcong Li, Tingting Wei, Changcun Bai, Na Liu, Meng Tang
Abstract:
Silver is often doped onto nano-titanium dioxide photocatalysts (Ag-TiO₂) by the photodeposition method to improve their utilization of visible light, but doping also increases the toxicity of TiO₂. However, it is not known which factors influence this toxicity, or how to reduce the toxicity while maintaining maximum catalytic activity. In this study, Ag-TiO₂ photocatalysts were synthesized by the photodeposition method with different silver contents (AgC) and photodeposition times (PDT). Characterization and catalytic experiments demonstrated that silver was well assembled on TiO₂ with excellent visible-light catalytic activity, and that the size of the silver particles increased with PDT. In vitro, the viability of A549 and BEAS-2B lung epithelial cells showed that higher content and smaller size of the doped silver caused higher toxicity. In vivo, Ag-TiO₂ catalysts with lower AgC or larger silver particle size caused clearly weaker pulmonary pro-inflammatory and pro-fibrotic responses. However, the visible-light catalytic activity decreased with increasing silver size. Therefore, in order to obtain the Ag-TiO₂ photocatalyst with the lowest pulmonary toxicity and the highest catalytic performance, response surface methodology (RSM) was performed to optimize the two independent variables, AgC and PDT. Visible-light catalytic activity was evaluated by the degradation rate of Rhodamine B, antibacterial activity by the kill log value for Escherichia coli, and cytotoxicity by the IC50 for BEAS-2B cells. The RSM model showed that AgC and PDT exhibited an interaction effect on catalytic activity in the quadratic model; AgC was positively correlated with antibacterial activity, and cytotoxicity was proportional to AgC while inversely proportional to PDT. Finally, the optimized values were AgC 3.08 w/w% and PDT 28 min.
Under this optimal condition, the relatively high silver proportion ensured the visible-light catalytic and antibacterial activity, while the longer PDT effectively reduced the cytotoxicity. This study is significant for the safe and efficient application of silver-doped TiO₂ photocatalysts.
Keywords: Ag-doped TiO₂, cytotoxicity, inflammation, fibrosis, response surface methodology
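The RSM workflow fits a second-order polynomial over a central composite design and then locates the stationary point of the fitted surface. The sketch below reproduces that mechanic in coded units with an assumed quadratic as the "response" (the study's real degradation-rate, kill-log and IC50 data are not reproduced here, and A/B only stand in for silver content and deposition time).

```python
import numpy as np

# Response-surface sketch: fit y = b0 + b1*A + b2*B + b11*A^2 + b22*B^2 + b12*A*B
# over a small two-factor central composite design (factorial + axial + center
# points), then solve grad y = 0 for the stationary point. Coefficients are
# assumed for illustration, so the least-squares fit recovers them exactly.

alpha = np.sqrt(2.0)
design = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],            # factorial
                   [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha],  # axial
                   [0, 0]])                                        # center

def features(AB):
    A, B = AB[:, 0], AB[:, 1]
    return np.column_stack([np.ones_like(A), A, B, A * A, B * B, A * B])

true_b = np.array([80.0, 6.0, 3.0, -5.0, -2.0, 1.5])   # assumed model
y = features(design) @ true_b                           # synthetic responses

b, *_ = np.linalg.lstsq(features(design), y, rcond=None)
Hq = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])     # Hessian of fitted surface
opt = np.linalg.solve(Hq, -b[1:3])                      # stationary point (coded)
print(np.round(b, 3), np.round(opt, 3))
```

Because both pure-quadratic coefficients are negative here, the stationary point is a maximum of the fitted response, which mirrors how the study reads off its optimal AgC/PDT combination.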
Procedia PDF Downloads 69
5606 Cyclone Driven Variation of Chlorophyll-a Concentration in Bay of Bengal
Authors: Nowshin Nabila Siddique, S. M. Mustafizur Rahman
Abstract:
There is evidence of cyclonic events in the Bay of Bengal (BoB) throughout the year. These cyclones cause a variety of fluctuations along their tracks, including changes in chlorophyll-a (chl-a) concentration. The main purpose of this paper is to characterize this variation pattern. Six tropical cyclones (TCs) are studied using an observational method. The results suggest that there is a noticeable change in productivity after a cyclone passes, seen by comparing the pre-cyclonic and post-cyclonic conditions. In the case of Cyclone Amphan, the chlorophyll-a concentration increased by 1.79 mg/m³ within a week of the cyclonic occurrence. This change is affected by several attributes, such as translation speed, intensity, and ocean pre-condition, specifically the Mixed Layer Depth (MLD). Translation speed and MLD show a strong negative correlation with the induced chlorophyll concentration, whereas the effect of cyclone intensity is less prominent. It was also found that the onset period of the induction is not the same for all cyclones: for Cyclone Amphan the changes started to occur after one day, whereas for Cyclone Sidr and Cyclone Mora they started after three days. Furthermore, a slight increase in overall productivity is also observed after a cyclone. In the cases of Cyclones Amphan, Hudhud, and Phailin, productivity rose by up to 0.12 mg/m³ and then decreased gradually over a period of around two months. On the whole, this paper describes the changes in chlorophyll concentration caused by numerous cyclones and the different cyclone characteristics that regulate these changes.
Keywords: tropical cyclone, chlorophyll-a concentration, mixed layer depth, translation speed
Procedia PDF Downloads 88
5605 A Quantitative Analysis of the Conservation of Resources, Burnout, and Other Selected Behavioral Variables among Law Enforcement Officers
Authors: Nathan Moran, Robert Hanser, Attapol Kuanliang
Abstract:
The purpose of this study is to determine the relationship between personal and social resources and burnout for police officers. Current conceptualizations of the condition of burnout are challenged as too phenomenological and ambiguous and, consequently, not amenable to direct empirical testing. The conservation of resources model is based on the supposition that people strive to retain, protect, and build resources as a means of protecting themselves from the impacts of burnout. The model proposes that the effects of stress (i.e., burnout) can be manifested in personal and professional attitudes and attributes, which allows burnout to be measured using self-reports. This provides strong support for the conservation of resources model, in that personal and professional demands are related to the exhaustion component of burnout, whereas personal and professional resources can be compiled to counteract the negative impact of the burnout condition. Highly similar patterns of burnout resistance factors were witnessed in police officers in two department precincts (N = 81). In addition, results confirmed the positive influence of key demographic variables on burnout resistance under the conservation of resources model. Participants in this study are all sheriff's deputies from a populous county in a Pacific Northwestern state (N = 274). Four instruments are used for data collection in this quantitative study: (a) a series of demographic questions, (b) the Organizational Citizenship Behavior (OCB) scale, (c) the PANAS-X Scale (Watson & Clark, 1994), and (d) the Maslach Burnout Inventory.
Keywords: behavioral, burnout, law enforcement, quantitative
Procedia PDF Downloads 286
5604 Design of Digital IIR Filter Using Opposition Learning and Artificial Bee Colony Algorithm
Authors: J. S. Dhillon, K. K. Dhaliwal
Abstract:
In almost all digital filtering applications, digital infinite impulse response (IIR) filters are preferred over finite impulse response (FIR) filters because they provide much better performance, lower computational cost, and smaller memory requirements for similar magnitude specifications. However, digital IIR filter design problems are generally multimodal with respect to the filter coefficients, and therefore reliable methods that can provide globally optimal solutions are required. The artificial bee colony (ABC) algorithm is one such recently introduced meta-heuristic optimization algorithm, but in some cases it searches the solution space insufficiently, resulting in a weak exchange of information, and hence fails to return better solutions. To overcome this deficiency, an opposition-based learning strategy is incorporated into ABC, and a modified version called the oppositional artificial bee colony (OABC) algorithm is proposed in this paper. Duplication of members is avoided during the run, which also augments the exploration ability. The developed algorithm is then applied to the design of optimal and stable digital IIR filter structures, where low-pass (LP) and high-pass (HP) filters are designed. Fuzzy theory is applied to maximize the joint satisfaction of the minimum-magnitude-error and stability constraints. To check the effectiveness of OABC, the results are compared with some well-established filter design techniques, and it is observed that in most cases OABC returns better, or at least comparable, results.
Keywords: digital infinite impulse response filter, artificial bee colony optimization, opposition based learning, digital filter design, multi-parameter optimization
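The opposition-based learning step itself is simple: for every candidate solution x in a bounded search box, also evaluate its opposite lo + hi - x and keep whichever half of the pooled set scores better. The sketch below shows that step on a stand-in fitness (distance to an assumed target coefficient vector), not the paper's magnitude-error/stability objective or the full ABC loop.

```python
import random

# Opposition-based learning (OBL) seeding for a bee colony searching filter
# coefficients in [LO, HI]^DIM. Fitness is a hypothetical stand-in: squared
# distance to an assumed "ideal" coefficient vector (lower is better).

random.seed(7)
LO, HI, DIM = -1.0, 1.0, 4
TARGET = [0.5, -0.25, 0.8, -0.6]           # assumed, for illustration only

def fitness(x):
    return sum((a - t) ** 2 for a, t in zip(x, TARGET))

def opposite(x):
    return [LO + HI - a for a in x]        # reflect through the box center

def obl_init(n_food_sources):
    pool = [[random.uniform(LO, HI) for _ in range(DIM)]
            for _ in range(n_food_sources)]
    pool += [opposite(x) for x in pool]    # union of population and opposites
    pool.sort(key=fitness)
    return pool[:n_food_sources]           # best half survives

colony = obl_init(10)
print(round(fitness(colony[0]), 4))
```

Because either a point or its opposite is, in expectation, closer to an unknown optimum, this doubling-and-pruning step tends to start (or refresh) the colony from better food sources than random sampling alone.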
Procedia PDF Downloads 477
5603 Smart Sensor Data to Predict Machine Performance with IoT-Based Machine Learning and Artificial Intelligence
Authors: C. J. Rossouw, T. I. van Niekerk
Abstract:
The global manufacturing industry is utilizing the internet and cloud-based services to further explore the anatomy and optimize manufacturing processes in support of the movement into the Fourth Industrial Revolution (4IR). The 4IR from a third world and African perspective is hindered by the fact that many manufacturing systems that were developed in the third industrial revolution are not inherently equipped to utilize the internet and services of the 4IR, hindering the progression of third world manufacturing industries into the 4IR. This research focuses on the development of a non-invasive and cost-effective cyber-physical IoT system that will exploit a machine’s vibration to expose semantic characteristics in the manufacturing process and utilize these results through a real-time cloud-based machine condition monitoring system with the intention to optimize the system. A microcontroller-based IoT sensor was designed to acquire a machine’s mechanical vibration data, process it in real-time, and transmit it to a cloud-based platform via Wi-Fi and the internet. Time-frequency Fourier analysis was applied to the vibration data to form an image representation of the machine’s behaviour. This data was used to train a Convolutional Neural Network (CNN) to learn semantic characteristics in the machine’s behaviour and relate them to a state of operation. The same data was also used to train a Convolutional Autoencoder (CAE) to detect anomalies in the data. Real-time edge-based artificial intelligence was achieved by deploying the CNN and CAE on the sensor to analyse the vibration. A cloud platform was deployed to visualize the vibration data and the results of the CNN and CAE in real-time. The cyber-physical IoT system was deployed on a semi-automated metal granulation machine with a set of trained machine learning models. Using a single sensor, the system was able to accurately visualize three states of the machine’s operation in real-time. 
The system was also able to detect a variance in the material being granulated. The research demonstrates how non-IoT manufacturing systems can be equipped with edge-based artificial intelligence to establish a remote machine condition monitoring system.
Keywords: IoT, cyber-physical systems, artificial intelligence, manufacturing, vibration analytics, continuous machine condition monitoring
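The time-frequency front end described above, turning a vibration trace into an image a CNN can classify, can be sketched with a plain windowed FFT. The signal, sample rate, and window sizes below are invented; the paper's sensor firmware and trained networks are not reproduced.

```python
import numpy as np

# Minimal spectrogram front end: |FFT| of overlapping Hann-windowed frames of
# a vibration signal gives a (time, frequency) image for a CNN. The synthetic
# signal changes "machine state" halfway: 50 Hz hum, then a stronger 120 Hz.

fs = 1000                                    # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
sig = np.where(t < 1.0,
               np.sin(2 * np.pi * 50 * t),
               1.5 * np.sin(2 * np.pi * 120 * t))

win, hop = 256, 128
frames = [sig[i:i + win] * np.hanning(win)
          for i in range(0, len(sig) - win + 1, hop)]
spec = np.abs(np.fft.rfft(frames, axis=1))   # rows = time, cols = frequency

freqs = np.fft.rfftfreq(win, 1 / fs)
early_peak = freqs[spec[0].argmax()]         # dominant frequency, first frame
late_peak = freqs[spec[-1].argmax()]         # dominant frequency, last frame
print(spec.shape, early_peak, late_peak)
```

The dominant-frequency shift between the first and last frames is exactly the kind of semantic change in the image that a CNN learns to associate with a state of operation, and that a convolutional autoencoder flags as an anomaly when it is unfamiliar.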
Procedia PDF Downloads 88
5602 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design
Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian
Abstract:
Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead is toxic to the human body and may cause serious problems even at low concentrations, so the determination of lead in different samples is an important procedure in studies of environmental pollution. In this work, an ultrasonic-assisted, ionic-liquid-based dispersive liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce has been established and developed using a flame atomic absorption spectrometer (FAAS). For the UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb²⁺ was adjusted to pH 5 in a glass test tube with a conical bottom; then, 120 μL of 1-hexyl-3-methylimidazolium hexafluorophosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. The resulting cloudy mixture was treated with ultrasound for 5 min, the two phases were separated by centrifugation for 5 min at 3000 rpm, the IL phase was diluted with 1 mL of ethanol, and the analytes were determined by FAAS. The effects of different experimental parameters in the extraction step, including ionic liquid volume, sonication time, and pH, were studied and optimized simultaneously using Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, a sonication time of 5 min, and pH 5. The calibration curve for the determination of lead by FAAS was linear over 0.1-4 ppm with R² = 0.992. Under the optimized conditions, the limit of detection (LOD) for lead was 0.062 μg/mL, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was 2.29%. The lead levels in pomegranate, zucchini, and lettuce were 2.88 μg/g, 1.54 μg/g, and 2.18 μg/g, respectively.
Therefore, this method has been successfully applied for the analysis of the lead content of different food samples by FAAS.
Keywords: dispersive liquid-liquid microextraction, central composite design, food samples, flame atomic absorption spectrometry
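The central composite design used for the optimization above can be sketched in coded units. This is a minimal sketch, not the study's actual design matrix: the three coded factors stand in for ionic liquid volume, sonication time, and pH, and the rotatable axial distance is an assumption, since the abstract does not report the design's alpha.

```python
from itertools import product
import numpy as np

def central_composite(k, alpha=None, n_center=1):
    """Coded points of a central composite design for k factors (a sketch;
    a rotatable axial distance is assumed when alpha is not given)."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25          # rotatable axial distance
    factorial = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.vstack([sign * alpha * np.eye(k)[i]
                       for i in range(k) for sign in (-1.0, 1.0)])
    center = np.zeros((n_center, k))
    return np.vstack([factorial, axial, center])

design = central_composite(3)             # 3 factors: IL volume, time, pH
```

For three factors this yields 8 factorial points, 6 axial points, and the chosen number of center points, which RSM then fits with a second-order model.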
Procedia PDF Downloads 283
5601 Variation of Airfoil Pressure Profile Due to Confined Air Streams: Application in Gas-Oil Separators
Authors: Amir Hossein Haji, Nabeel Al-Rawahi, Gholamreza Vakili-Nezhaad
Abstract:
An innovative design has been examined for a gas-oil separator based on pressure reduction over an airfoil surface. The primary motivations are to shorten the release trajectory of the bubbles by minimizing the thickness of the oil layer and to improve uniform pressure reduction zones. Restricted airflow over an airfoil is investigated for its effect on pressure drop enhancement and on the maximum attainable angle of attack prior to the stall condition. Aerodynamic separation is delayed based on numerical simulation of the Wortmann FX 63137 airfoil in a confined domain using FLUENT 6.3.26. The proposed setup results in a higher pressure drop compared with the free-stream case. With the aim of reducing power consumption, we pursued a further restriction to an air jet over the airfoil. A curved strip model is then suggested for the air jet, which can be applied as an analysis/design tool for the best performance conditions. Pressure reduction is shown to be inversely proportional to the curvature of the upper airfoil profile. This reduction occurs within the tracking zones where the air jet is effectively attached to the airfoil surface. The zero-slope condition is suggested to estimate the onset of these zones, after which the minimum curvature should be searched. The corresponding zero-slope curvature is applied to estimate the maximum pressure drop, which shows satisfactory agreement with the simulation results.
Keywords: airfoil, air jet, curved fluid flow, gas-oil separator
Procedia PDF Downloads 472
5600 The Effect of Experimentally Induced Stress on the Facial Recognition Ability of Security Personnel
Authors: Zunjarrao Kadam, Vikas Minchekar
Abstract:
Facial recognition is an important task in criminal investigation. Security guards, who constantly watch people, can help identify suspects, and forensic psychologists handle such cases in the criminal justice system. However, security personnel may lose their ability to correctly identify persons because of the constant stress of performing their duty. The present study aimed to identify the effect of experimentally induced stress on the facial recognition ability of security personnel. For this study, 50 security guards from the cities of Sangli, Miraj, and Jaysingpur in the Maharashtra state of India were recruited. A randomized two-group design was employed to carry out the research. In the initial condition, twenty identity-card-size photographs were shown to both groups. Afterward, stress was induced in the experimental group through a difficult puzzle-solving task within a limited period. In the second condition, both groups were presented with the earlier photographs along with thirty additional new photographs, and the subjects were asked to recognize the photographs shown earlier. The analyzed data revealed that the control group had a higher mean facial recognition score than the experimental group. The results are discussed in the present research.
Keywords: experimentally induced stress, facial recognition, cognition, security personnel
Procedia PDF Downloads 261
5599 A Semi-Markov Chain-Based Model for the Prediction of Deterioration of Concrete Bridges in Quebec
Authors: Eslam Mohammed Abdelkader, Mohamed Marzouk, Tarek Zayed
Abstract:
Infrastructure systems are crucial to every aspect of life on Earth. Existing infrastructure is subject to degradation, while the demands for a better infrastructure system grow in response to high standards of safety, health, population growth, and environmental protection. Bridges play a crucial role in urban transportation networks. Moreover, they are subject to a high level of deterioration because of variable traffic loading, extreme weather conditions, cycles of freeze and thaw, etc. The development of Bridge Management Systems (BMSs) has become a fundamental imperative nowadays, especially in large transportation networks, due to the huge variance between the need for maintenance actions and the funds available to perform them. Deterioration models are a very important aspect of the effective use of BMSs. This paper presents a probabilistic time-based model that is capable of predicting the condition ratings of concrete bridge decks along their service life. The deterioration process of the concrete bridge decks is modeled as a semi-Markov process. One of the main challenges of the Markov Chain Decision Process (MCDP) is the construction of the transition probability matrix; the proposed model overcomes this issue by modeling the sojourn times with probability density functions. The sojourn times of each condition state are fitted to probability density functions based on goodness-of-fit tests such as the Kolmogorov-Smirnov, Anderson-Darling, and chi-squared tests. The parameters of the probability density functions are obtained using maximum likelihood estimation (MLE). The condition ratings obtained from the Ministry of Transportation in Quebec (MTQ) are utilized as a database to construct the deterioration model. 
Finally, a comparison is conducted between the Markov chain and semi-Markov chain models to select the most suitable prediction model.
Keywords: bridge management system, bridge decks, deterioration model, semi-Markov chain, sojourn times, maximum likelihood estimation
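The sojourn-time fitting step described above can be sketched as follows. This is a minimal sketch with synthetic Weibull data standing in for the MTQ condition-rating records (which are not reproduced here); the candidate distributions and the use of the Kolmogorov-Smirnov statistic to rank them follow the abstract, while the specific distributions tried are an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic sojourn times (years spent in one condition state), for illustration
sojourn = rng.weibull(1.5, 200) * 10.0

# MLE fit of each candidate distribution, then a KS goodness-of-fit check
candidates = {"weibull": stats.weibull_min, "gamma": stats.gamma,
              "lognorm": stats.lognorm}
results = {}
for name, dist in candidates.items():
    params = dist.fit(sojourn, floc=0)               # MLE, location fixed at 0
    ks = stats.kstest(sojourn, dist.cdf, args=params)
    results[name] = (params, ks.pvalue)

best = max(results, key=lambda name: results[name][1])  # highest KS p-value
```

The distribution retained for each condition state then supplies the sojourn-time law of the semi-Markov process.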
Procedia PDF Downloads 211
5598 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamics
Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx
Abstract:
Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers, to better understand the mechanisms leading to the 'destruction' of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been characterized in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) method (interFoam) was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating-wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by measuring the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam. 
These simulations shed new light on the behavior of the cone within the foam and allow computation of the shear, pressure, and velocity of the fluid, enabling better evaluation of the efficiency of cones as foam breakers. This study helps clarify, at least in part, the mechanisms behind foam breaker performance using modern CFD techniques.
Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM
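The shear-thinning (Herschel-Bulkley) law named above has the standard form tau = tau0 + K * gamma_dot**n. A minimal sketch of the resulting apparent viscosity, with illustrative parameter values that are not taken from the study:

```python
def herschel_bulkley_viscosity(gamma_dot, tau0=10.0, K=5.0, n=0.4):
    """Apparent viscosity mu = tau / gamma_dot of a Herschel-Bulkley fluid.

    tau0: yield stress [Pa]; K: consistency index [Pa*s^n]; n: flow index
    (n < 1 gives shear thinning). Values are illustrative, not the study's.
    """
    return tau0 / gamma_dot + K * gamma_dot ** (n - 1)

# Shear thinning: apparent viscosity drops as the shear rate grows
mu_slow = herschel_bulkley_viscosity(1.0)     # low shear rate
mu_fast = herschel_bulkley_viscosity(100.0)   # high shear rate
```

This is the viscosity model a VOF solver would evaluate per cell in the foam phase; the yield-stress term dominates at low shear, which is what distinguishes foam from a Newtonian fluid near the cone.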
Procedia PDF Downloads 204
5597 An Ontological Approach to Existentialist Theatre and Theatre of the Absurd in the Works of Jean-Paul Sartre and Samuel Beckett
Authors: Gülten Silindir Keretli
Abstract:
The aim of this study is to analyse the works of these playwrights within the framework of existential philosophy and to observe ontological existence in the plays No Exit and Endgame. The literary works are discussed separately in each section of this study. The despair of the post-war generation of Europe problematized the 'human condition' in every field of literature, itself a product of social upheaval. With this concern in mind, Sartre's creative works portrayed man as a lonely being, burdened with a terrifying freedom to choose and create his own meaning in an apparently meaningless world. Traces of existential thought are to be found throughout the history of philosophy and literature. The theatre of the absurd, on the other hand, is a form of drama showing the absurdity of the human condition, and it is heavily influenced by existential philosophy. Beckett is the most influential playwright of the theatre of the absurd, and the themes and thoughts in his plays share many tenets of existential philosophy. Existential philosophy posits the meaninglessness of existence and regards man as thrown into the universe and into desolate isolation. To overcome loneliness and isolation, the human ego needs recognition from other people. Sartre calls this need for recognition the need for 'the Look' (le regard) from the Other. In this paper, existentialist philosophy and existentialist angst are elaborated, and then the works of existentialist theatre and the theatre of the absurd are discussed within the framework of existential philosophy.
Keywords: consciousness, existentialism, the notion of the absurd, the other
Procedia PDF Downloads 158
5596 Study of Wake Dynamics for a Rim-Driven Thruster Based on Numerical Method
Authors: Bao Liu, Maarten Vanierschot, Frank Buysschaert
Abstract:
The present work examines the wake dynamics of a rim-driven thruster (RDT) with Computational Fluid Dynamics (CFD). Unsteady Reynolds-averaged Navier-Stokes (URANS) equations were solved in the commercial solver ANSYS Fluent in combination with the SST k-ω turbulence model. The moving reference frame (MRF) and sliding mesh (SM) approaches to handling the rotational movement of the propeller were compared in the transient simulations. Validation and verification of the numerical model were performed to ensure numerical accuracy. Two representative scenarios were considered: the bollard condition (J=0) and a very light loading condition (J=0.7). The results confirm that, compared to the SM method, the MRF method is not suitable for resolving unsteady flow features, as it only gives the general mean flow and smooths out many characteristic details of the flow field. Evaluating the simulation results obtained with the SM technique, the instantaneous wake flow field under both conditions is presented and analyzed, most notably the helical vortex structure. The tip vortices, blade shed vortices, and hub vortices are present in the wake flow field and convect downstream in a highly non-linear way. The shear-layer vortices shedding from the duct display a strong interaction with the distorted tip vortices in an irregular manner.
Keywords: computational fluid dynamics, rim-driven thruster, sliding mesh, wake dynamics
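The advance ratio J that defines the two loading conditions above follows the standard propeller definition J = Va / (n * D). A minimal sketch with hypothetical thruster dimensions (the abstract does not give the RDT's diameter or rotation rate):

```python
def advance_ratio(V_a, n, D):
    """Standard propeller advance ratio J = V_a / (n * D).

    V_a: advance speed [m/s]; n: rotation rate [rev/s]; D: diameter [m].
    """
    return V_a / (n * D)

# Hypothetical RDT: 0.25 m diameter spinning at 20 rev/s
j_bollard = advance_ratio(0.0, 20.0, 0.25)   # bollard pull: zero advance speed
j_light = advance_ratio(3.5, 20.0, 0.25)     # light loading condition
```

Bollard pull gives J = 0 by construction, and the chosen advance speed reproduces the paper's light-loading value of J = 0.7.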
Procedia PDF Downloads 258
5595 A Robust Spatial Feature Extraction Method for Facial Expression Recognition
Authors: H. G. C. P. Dinesh, G. Tharshini, M. P. B. Ekanayake, G. M. R. I. Godaliyadda
Abstract:
This paper presents a new spatial feature extraction method based on principal component analysis (PCA) and Fisher Discriminant Analysis (FDA) for facial expression recognition. It not only extracts reliable features for classification but also reduces the feature space dimensions of the pattern samples. In this method, each grayscale image is first considered in its entirety as the measurement matrix. Then, the principal components (PCs) of the row vectors of this matrix and the variance of these row vectors along the PCs are estimated; this ensures the preservation of the spatial information of the facial image. Afterwards, by incorporating the spectral information of the eigen-filters derived from the PCs, a feature vector is constructed for a given image. Finally, FDA is used to define a set of basis vectors in a reduced-dimension subspace such that optimal clustering is achieved. FDA defines an inter-class scatter matrix and an intra-class scatter matrix to enhance the compactness of each cluster while maximizing the distance between cluster marginal points. To match a test image with the training set, a cosine-similarity-based Bayesian classification is used. The proposed method was tested on the Cohn-Kanade database and the JAFFE database. It was observed that the proposed method, which incorporates spatial information to construct an optimal feature space, outperforms standard PCA- and FDA-based methods.
Keywords: facial expression recognition, principal component analysis (PCA), Fisher discriminant analysis (FDA), eigen-filter, cosine similarity, Bayesian classifier, F-measure
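The PCA-then-FDA reduction pipeline described above can be sketched with off-the-shelf components. This is a generic sketch, not the paper's exact method (it omits the eigen-filter construction and the Bayesian step); the data shapes, component counts, and random stand-in images are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Stand-in data: 100 vectorized 64x64 "face" images with 7 expression labels
X = rng.normal(size=(100, 4096))
y = np.arange(100) % 7

X_pca = PCA(n_components=40).fit_transform(X)      # variance-preserving reduction
fda = LinearDiscriminantAnalysis().fit(X_pca, y)   # Fisher discriminant step
X_fda = fda.transform(X_pca)                       # at most 7 - 1 = 6 dimensions

def cosine_similarity(a, b):
    """Similarity measure used for the final matching step."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

FDA can produce at most (number of classes - 1) discriminant directions, which is why the final subspace here has six dimensions for seven expression classes.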
Procedia PDF Downloads 425
5594 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design
Authors: Emiliano Matta
Abstract:
Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years this investigation has inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, in which damping originates from the variable tangential friction force that develops between the pendulum mass and the 3D surface as a result of a spatially varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. Under this assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio is independent of the amplitude of oscillations, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not need the installation of dampers. 
This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means of realizing systems with amplitude-independent damping.
Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers
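The friction law described above can be written out explicitly; this is one plausible reading of the abstract, and the symbols alpha (proportionality constant) and z (surface height) are ours, not the paper's:

```latex
% Friction coefficient proportional to the modulus of the surface gradient:
\mu(x,y) = \alpha \,\lVert \nabla z(x,y) \rVert ,
\qquad
F_t = \mu(x,y)\, N .
% Near the surface minimum, \nabla z scales linearly with the displacement,
% so the tangential friction force F_t is homogeneous of degree one in the
% motion. Energy dissipated per cycle then scales with amplitude squared,
% exactly as for viscous damping, which is why the equivalent damping ratio
% is independent of the oscillation amplitude.
```

This homogeneity argument is what distinguishes the HBPA from a constant-coefficient friction absorber, whose dissipated energy scales only linearly with amplitude.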
Procedia PDF Downloads 148
5593 An Optimal Path for Virtual Reality Education Using Association Rules
Authors: Adam Patterson
Abstract:
This study analyzes the self-reported experiences of virtual reality users to develop insight into an optimal learning path for education within virtual reality. The research uses a sample of 1000 observations to statistically identify the factors influencing (i) immersion level and (ii) motion sickness rating for college-age virtual reality respondents. This paper recommends an efficient duration for each virtual reality session, to minimize sickness and maximize engagement, utilizing modern machine learning methods such as association rules. The goal of this research, together with previous literature, is to inform logistical decisions relating to the implementation of pilot instruction for virtual reality at the collegiate level. Future research will include a Randomized Control Trial (RCT) to quantify the effect of virtual reality education on student learning outcomes and engagement measures; the current research aims to maximize the treatment effect within the RCT by optimizing the learning benefits of virtual reality. Results suggest significant gender heterogeneity in the likelihood of reporting motion sickness: females are 1.7 times more likely than males to report high levels of motion sickness resulting from a virtual reality experience. Regarding duration, respondents were 1.29 times more likely to select the lowest level of motion sickness after an engagement lasting between 24.3 and 42 minutes. Conversely, respondents engaged for 42 to 60 minutes were 1.2 times more likely to select higher levels of motion sickness.
Keywords: applications and integration of e-education, practices and cases in e-education, systems and technologies in e-education, technology adoption and diffusion of e-learning
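The association-rule metrics behind findings like "1.7 times more likely" can be sketched on a toy dataset. The records below are invented for illustration (they merely echo the direction of the reported gender effect, not its magnitude), and the support/confidence/lift definitions are the standard ones:

```python
# Toy survey records: each set lists one respondent's attributes
transactions = [
    {"female", "high_sickness"}, {"female", "high_sickness"},
    {"female", "low_sickness"}, {"male", "low_sickness"},
    {"male", "low_sickness"}, {"male", "high_sickness"},
]

def support(itemset, tx):
    """Fraction of records containing every item in itemset."""
    return sum(itemset <= t for t in tx) / len(tx)

def confidence(antecedent, consequent, tx):
    """Estimated P(consequent | antecedent)."""
    return support(antecedent | consequent, tx) / support(antecedent, tx)

def lift(antecedent, consequent, tx):
    """Confidence relative to the consequent's baseline support (>1 means association)."""
    return confidence(antecedent, consequent, tx) / support(consequent, tx)

rule_lift = lift({"female"}, {"high_sickness"}, transactions)
```

A lift above 1 for a rule such as {female} -> {high_sickness} is the kind of evidence the study's odds-ratio-style statements summarize.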
Procedia PDF Downloads 67
5592 Analyzing the Impacts of Sustainable Tourism Development on Residents’ Well-Being Based on Stakeholder Perception: Evidence from a Coastal-Hinterland Region
Authors: Elham Falatoonitoosi, Vikki Schaffer, Don Kerr
Abstract:
Over-development for tourism and its consequences for residents' well-being have become a critical issue in tourism destinations. Learning about the undesirable impacts of tourism has led many people to seek more sustainable and responsible tourism. The main objective of this research is to understand how and to what extent sustainable tourism development enhances locals' well-being as perceived by stakeholders. The research was conducted in a coastal-hinterland tourism region through two sequential phases. In the first phase, a unique set of 19 sustainable tourism indicators resulting from a triplex model was used to examine the effects of sustainability on the main factors of residents' well-being: equity and living conditions, life satisfaction, health condition, and education quality. The triplex model, comprising (i) a systematic literature search, (ii) convergent interviewing, and (iii) DEMATEL, aimed to develop sustainability indicators, specify them for a particular destination, and identify the dominant sustainability issues acting as key predictors of sustainable development. In the second phase, hierarchical multiple regression was used to examine the relationship between sustainable development and local residents' well-being. A total of 167 participants from five different stakeholder groups rated the importance of each sustainability indicator with respect to the well-being factors on a 5-point Likert scale. Results from the first phase indicated that sustainability training, government support, tourism sociocultural effects, tourism revenue, and climate change are the dominant sustainability issues in regional sustainable development. Results from the second phase showed that sustainable development considerably improves overall residents' well-being and has positive relationships with all well-being factors except life satisfaction. 
This suggests that it was difficult for stakeholders to recognize a link between sustainable development and their overall life satisfaction and happiness. Among the well-being factors, health condition was influenced the most by the sustainability indicators, indicating that stakeholders believed sustainable development can promote public health, health-sector performance, drinking water quality, and sanitation. Future research should analyze the effects of sustainable tourism development on other features of a tourism destination's well-being, including residents' sociocultural empowerment, local economic growth, and the attractiveness of the destination.
Keywords: residents' well-being, stakeholder perception, sustainability indicators, sustainable tourism
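The hierarchical multiple regression used in the second phase can be sketched as a pair of nested OLS fits, comparing R-squared before and after the sustainability block enters. The data below is synthetic (only the sample size n = 167 matches the abstract; the control variables and effect sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 167                                    # matches the reported sample size
controls = rng.normal(size=(n, 2))         # step 1: hypothetical control variables
sustain = rng.normal(size=(n, 1))          # step 2: sustainability perception score
wellbeing = controls @ np.array([0.3, 0.1]) + 0.5 * sustain[:, 0] \
            + rng.normal(size=n)

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_step1 = r_squared(controls, wellbeing)
r2_step2 = r_squared(np.column_stack([controls, sustain]), wellbeing)
delta_r2 = r2_step2 - r2_step1             # variance added by the sustainability block
```

The incremental delta_r2 is what a hierarchical regression reports as the contribution of sustainable development over and above the controls.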
Procedia PDF Downloads 265
5591 Multi-Criteria Decision Making Network Optimization for Green Supply Chains
Authors: Bandar A. Alkhayyal
Abstract:
Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, major efforts are underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature; the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimizing the pricing policy of remanufactured products, maximizing total profit while minimizing product recovery costs, was formulated and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models. 
Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case-study system built from actual data for sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive quantitative evaluation of the model's performance was conducted using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models to contrast remanufacturing GHG emission quantities with those of original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains
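The first-level facility-placement problem described above, shipping end-of-life products from collection centers to capacity-limited remanufacturing facilities at minimum cost, has the shape of a classic transportation LP. A minimal sketch with invented data (the costs, supplies, and capacities are illustrative, not the Boston case study's):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: 3 collection centers, 2 remanufacturing facilities
cost = np.array([[4.0, 6.0],
                 [5.0, 3.0],
                 [9.0, 7.0]])              # $/unit from center i to facility j
supply = np.array([120.0, 80.0, 100.0])   # end-of-life units at each center
capacity = np.array([200.0, 150.0])       # facility processing limits

n_c, n_f = cost.shape
c = cost.ravel()                          # decision variables x[i * n_f + j]
A_eq = np.zeros((n_c, n_c * n_f))         # each center ships all of its supply
for i in range(n_c):
    A_eq[i, i * n_f:(i + 1) * n_f] = 1.0
A_ub = np.zeros((n_f, n_c * n_f))         # each facility stays within capacity
for j in range(n_f):
    A_ub[j, j::n_f] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply)
flows = res.x.reshape(n_c, n_f)           # optimal shipment plan
```

A per-unit carbon cost, as in the policy-planning level, would simply be added to each entry of the cost matrix, which is how topology shifts with the carbon price can be explored.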
Procedia PDF Downloads 160
5590 Integrated Plant Protection Activities against the Tomato Moth (Tuta absoluta Meyrick) in Tomato Plantings in Azerbaijan
Authors: Nazakat Ismailzada, Carol Jones
Abstract:
The tomato moth Tuta absoluta (Meyrick) (Lepidoptera: Gelechiidae) is the main pest of tomato plants in many countries. The larvae mine the leaves and stems, attack the terminal buds, and open galleries in green and ripe fruit. In this way the pest can feed on all parts of the tomato plant and cause damage of 80-100%. The pest harms all above-ground parts of the tomato plant. After the seedlings are planted and during blossoming, traps baited with tomato moth pheromone capsules should be placed in the field at a rate of five pheromone traps per hectare, and observations should then be carried out in the fields regularly every three days. The research showed that under field conditions Carogen 20 SC combines high biological efficiency with a low ecological load on the environment and should be used against the tomato moth in farms at an insecticide application rate of 320 g/ha. Farms should also practice crop rotation, plow fields to a depth of 25-30 cm, and treat seeds with insecticides before sowing. As an element of integrated plant protection, pheromone traps should be used. In tomato fields, AGROSAN 240 SC and Carogen 20 SP should be used as insecticides.
Keywords: Lepidoptera, Tuta absoluta, chemical control, integrated pest management
Procedia PDF Downloads 165
5589 Making a Resilient Livable City: Explorations of Smart Management Mechanism for Aging Society’s Disaster Prevention
Authors: Wei-Kuang Liu, Ya-Hsu Chiang
Abstract:
With the coming of an aging society, the issues of living quality, health care, and social security for the elderly have gradually been taken seriously. In order to maintain favorable living conditions, urban societies are also facing the challenge of disasters caused by extreme climate change. In the practice of disaster prevention, however, elderly people are particularly vulnerable owing to their physiological condition; that is to say, in the planning of resilient urbanism, the aging society is relatively in need of more care. Thus, this research aims to map areas in Taiwan that have a high-density elderly population and fragile environmental conditions, and to understand the actual state of disaster prevention management in these areas, so as to provide suggestions for the development of intelligent resilient urban management. The research takes the cities of Taoyuan and Taichung as examples. According to GIS mapping of areas with a high aging index, high population density, and high flooding potential, the communities of Sihai and Fuyuan in Taoyuan and of Taichang and Nanshih in Taichung are highlighted. These communities have larger elderly populations and smaller labor populations under high-density living conditions, and they are located in areas that have experienced severe flooding in the recent past. Based on a series of interviews with community organizations, only one of the four communities uses a flood-information mobile app and Line messages for disaster prevention management; the others still rely on traditional approaches, managing disaster prevention through their community security patrol teams and community volunteers. The interviews show that most elderly people are not interested in learning to use intelligent devices. 
Therefore, this research suggests continuing the GIS mapping of areas with a high aging index, high population density, and high flooding potential in order to identify high-risk communities, and helping to develop smart monitoring and forecasting systems for disaster prevention practice in these areas. Based on the case-study explorations, the research also advises that it is important to develop easy-to-use, bottom-up, two-way immediate communication mechanisms for the management of an aging society's disaster prevention.
Keywords: aging society, disaster prevention, GIS, resilient, Taiwan
Procedia PDF Downloads 117
5588 Modal Analysis of FGM Plates Using Finite Element Method
Authors: S. J. Shahidzadeh Tabatabaei, A. M. Fattahi
Abstract:
Modal analysis of an FGM plate composed of a ceramic phase of Al2O3 and a metal phase of stainless steel 304 was performed using ABAQUS, under the assumptions that the material behaves elastically and that its Young's modulus and density vary in the thickness direction. For this purpose, a subroutine was written in FORTRAN and linked with ABAQUS. First, a simulation was performed following another researcher's model, and after comparing the obtained results, the accuracy of the present study was verified. The obtained natural frequencies and mode shapes indicate good performance of the user-written subroutine as well as of the FEM model used in the present study. After verification, the effects of the clamping condition and the material type (i.e., the parameter n) were investigated. In this respect, finite element analysis was carried out in the fully clamped condition for different values of n. The results indicate that the natural frequency decreases as n increases, since with increasing n the amount of the ceramic phase in the FGM plate decreases while the amount of the metal phase increases, lowering the plate stiffness and hence the natural frequency, as the Young's modulus of Al2O3 is 380 GPa while that of stainless steel 304 is 207 GPa.
Keywords: FGM plates, modal analysis, natural frequency, finite element method
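The through-thickness grading that such a subroutine evaluates can be sketched with the common power-law rule of mixtures. The abstract does not state its exact grading function, so this form and the sign convention (ceramic-rich top face, metal-rich bottom face) are assumptions; only the two moduli come from the abstract.

```python
def fgm_young_modulus(z, h, n, E_c=380e9, E_m=207e9):
    """Young's modulus at height z in [-h/2, h/2] of an FGM plate.

    Assumes the common power-law rule of mixtures: the ceramic (Al2O3)
    volume fraction is V_c = (z/h + 1/2)**n, so the top face is pure
    ceramic and the bottom face pure metal (stainless steel 304).
    """
    V_c = (z / h + 0.5) ** n
    return (E_c - E_m) * V_c + E_m

# Larger n -> smaller ceramic fraction through the thickness -> softer plate,
# consistent with the reported drop in natural frequency as n increases.
```

Averaging V_c over the thickness gives 1/(n+1), which makes the reported trend explicit: the ceramic content, and with it the stiffness and natural frequency, falls as n grows.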
Procedia PDF Downloads 342
5587 Landscape Factors Eliciting the Sense of Relaxation in Urban Green Space
Authors: Kaowen Grace Chang
Abstract:
Urban green spaces play an important role in promoting well-being through the sense of relaxation for urban residents. Among the many design factors, which are the principal ones that effectively influence people's sense of relaxation? And what is the relationship between the sense of relaxation and those factors? For these questions there is still little supporting evidence. Therefore, the purpose of this study, based on individual responses to environmental information, is to investigate the landscape factors that relate to well-being through the sense of relaxation in mixed-use urban environments. We conducted an experimental design and model construction utilizing choice-based conjoint analysis to test the factors of plant arrangement pattern, plant trimming condition, distance to visible automobiles, number of landmark objects, and depth of view. Through a balanced fractional orthogonal design, the goal is to learn the relationship between the sense of relaxation and different designs. As a result, the three factors of plant trimming condition, distance to visible automobiles, and depth of viewshed significantly affect the sense of relaxation. Stronger maintenance and trimming, a greater distance to visible automobiles, and a deeper viewshed that allows users to see further scenes could significantly promote the sense of relaxation of users of urban green spaces.
Keywords: urban green space, landscape planning and design, sense of relaxation, choice model
Procedia PDF Downloads 148