Search results for: distributed model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18017

11567 Long-Range Transport of Biomass Burning Aerosols over South America: A Case Study in the 2019 Amazon Rainforest Wildfires Season

Authors: Angel Liduvino Vara-Vela, Dirceu Luis Herdies, Debora Souza Alvim, Eder Paulo Vendrasco, Silvio Nilo Figueroa, Jayant Pendharkar, Julio Pablo Reyes Fernandez

Abstract:

Biomass-burning episodes are quite common in the central Amazon rainforest and represent a dominant source of aerosols during the dry season, between August and October. The increase in the occurrence of fires in 2019 in the world’s largest biomes captured the attention of the international community. In particular, a rare and extreme smoke-related event occurred on the afternoon of Monday, August 19, 2019, in the most populous city in the Western Hemisphere, the São Paulo Metropolitan Area (SPMA), located in southeastern Brazil. The sky over the SPMA suddenly blackened, the day turning into night, as reported by several news media around the world. In order to clarify whether or not the smoke that plunged the SPMA into sudden darkness was related to wildfires in the Amazon rainforest region, a set of 48-hour simulations over South America was performed using the Weather Research and Forecasting with Chemistry (WRF-Chem) model at 20 km horizontal resolution, on a daily basis, for the period from August 16 to August 19, 2019. The model results compared satisfactorily against satellite-based data products and in situ measurements collected from air quality monitoring sites. Although very strong smoke transport from the Amazon rainforest was observed in the middle of the afternoon on August 19, its impact on air quality over the SPMA occurred at upper levels far above the surface; at the surface, in contrast, low air pollutant concentrations were observed.

Keywords: Amazon rainforest, biomass burning aerosols, São Paulo metropolitan area, WRF-Chem model

Procedia PDF Downloads 125
11566 Integrating Virtual Reality and Building Information Model-Based Quantity Takeoffs for Supporting Construction Management

Authors: Chin-Yu Lin, Kun-Chi Wang, Shih-Hsu Wang, Wei-Chih Wang

Abstract:

A construction superintendent needs to know not only the quantities of cost items or materials completed each day, in order to develop a daily report or calculate daily progress (earned value), but also the quantities of materials (e.g., reinforcing steel and concrete) to be ordered (or moved onto the jobsite) for the in-progress or ready-to-start construction activities (e.g., erection of reinforcing steel and concrete pouring). These daily construction management tasks require great effort to extract accurate quantities in a short time (usually right before getting off work every day). As a result, most superintendents can only provide these quantity data based either on what they see on the site (high inaccuracy) or on the extraction of quantities from two-dimensional (2D) construction drawings (high time consumption). Hence, the current practice of reporting the quantities completed each day needs improvement in both accuracy and efficiency. Recently, three-dimensional (3D) building information model (BIM) techniques have been widely applied to support the construction quantity takeoff (QTO) process. Virtual reality (VR) allows a building to be viewed from a first-person viewpoint. Thus, this study proposes an innovative system that integrates VR (using 'Unity') and BIM (using 'Revit') to extract quantities to support the daily construction management tasks above. VR allows a system user to be present in a virtual building and thereby assess construction progress more objectively from the office. The VR- and BIM-based system is also supported by an integrated database (containing the information and data associated with the BIM model, QTO, and costs). Each day, a superintendent can walk through the BIM-based virtual building to quickly identify (via a developed VR shooting function) the building components (or objects) that are in progress or finished on the jobsite. The superintendent then specifies a percentage (e.g., 20%, 50%, or 100%) of completion for each identified building object based on observations on the jobsite. Next, the system generates the quantities completed that day by multiplying the specified percentage by the full quantities of the cost items (or materials) associated with the identified object. A building construction project located in northern Taiwan is used as a case study to test the benefits (i.e., accuracy and efficiency) of the proposed system in quantity extraction for supporting the development of daily reports and the ordering of construction materials.
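The daily computation described above (completed quantity = reported completion percentage multiplied by the full BIM takeoff quantity) can be sketched as follows; the object names, materials, and quantities are hypothetical, not taken from the study:

```python
# Hypothetical sketch of the daily quantity computation: the superintendent
# reports a completion fraction per building object, and the system multiplies
# it by the full BIM takeoff quantities of that object's materials.

def daily_completed_quantities(full_takeoff, completion_pct):
    """full_takeoff: {object_id: {material: quantity}};
    completion_pct: {object_id: fraction in [0, 1]} reported by the superintendent."""
    totals = {}
    for obj, pct in completion_pct.items():
        for material, qty in full_takeoff.get(obj, {}).items():
            totals[material] = totals.get(material, 0.0) + pct * qty
    return totals

takeoff = {
    "column_C1": {"rebar_kg": 850.0, "concrete_m3": 3.2},
    "slab_S1": {"rebar_kg": 2400.0, "concrete_m3": 18.5},
}
progress = {"column_C1": 1.0, "slab_S1": 0.5}  # 100% and 50% complete today
print(daily_completed_quantities(takeoff, progress))
```

Summing per material rather than per object is what lets the output feed directly into a daily report or a material order.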

Keywords: building information model, construction management, quantity takeoffs, virtual reality

Procedia PDF Downloads 121
11565 Parameterized Lyapunov Function Based Robust Diagonal Dominance Pre-Compensator Design for Linear Parameter Varying Model

Authors: Xiaobao Han, Huacong Li, Jia Li

Abstract:

For the dynamic decoupling of linear parameter varying (LPV) systems, a robust diagonal dominance pre-compensator design method is given. The parameterized pre-compensator design problem is converted into an optimization problem constrained by parameterized linear matrix inequalities (PLMIs). To solve this problem, the optimization problem is first equivalently transformed into a new form that eliminates the coupling between the parameterized Lyapunov function (PLF) and the pre-compensator. The problem is then reduced to a standard convex optimization problem with ordinary linear matrix inequality (LMI) constraints on a newly constructed convex polyhedron. Moreover, a parameter-scheduled pre-compensator is obtained, which satisfies both the robustness and decoupling performance requirements. Finally, the feasibility and validity of the robust diagonal dominance pre-compensator design method are verified by numerical simulation of a turbofan engine LPV model.

Keywords: linear parameter varying (LPV), parameterized Lyapunov function (PLF), linear matrix inequalities (LMI), diagonal dominance pre-compensator

Procedia PDF Downloads 388
11564 Collision Detection Algorithm Based on Data Parallelism

Authors: Zhen Peng, Baifeng Wu

Abstract:

Modern computing technology has entered the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to follow this trend: it gathers more and more computing capacity as the number of processing lanes or cores grows, without requiring the program to be modified. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications face the challenge of increasingly large amounts of data, and data-parallel computing is an important way to further improve their performance. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability over traditional algorithms are substantial.
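The decomposition idea can be illustrated (this is not the authors' implementation) by representing each complex object as a set of axis-aligned bounding boxes and testing all box pairs in one vectorized, data-parallel operation:

```python
import numpy as np

# Illustrative sketch: each complex object is decomposed into axis-aligned
# bounding boxes (simple objects), and every box pair is tested with the same
# overlap predicate applied in lockstep via broadcasting (SIMD style).

def aabb_collisions(mins_a, maxs_a, mins_b, maxs_b):
    """Boolean matrix: entry (i, j) is True when box i of object A overlaps
    box j of object B. Inputs have shapes (n, 3) and (m, 3)."""
    # Two boxes are disjoint if they are separated along any axis.
    no_overlap = (mins_a[:, None, :] > maxs_b[None, :, :]) | \
                 (maxs_a[:, None, :] < mins_b[None, :, :])
    return ~no_overlap.any(axis=2)

a_min = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])
a_max = a_min + 1.0                     # two unit boxes for object A
b_min = np.array([[0.5, 0.5, 0.5]])
b_max = b_min + 1.0                     # one unit box for object B
print(aabb_collisions(a_min, a_max, b_min, b_max))  # only A's first box hits B
```

Because every pair is evaluated by identical arithmetic, the same code maps directly onto wider vector units or more cores without modification, which is the scalability property the abstract emphasizes.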

Keywords: data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability

Procedia PDF Downloads 275
11563 Ground Surface Temperature History Prediction Using Long Short-Term Memory Neural Network Architecture

Authors: Venkat S. Somayajula

Abstract:

A ground surface temperature history prediction model plays a vital role in setting standards for international nuclear waste management. International standards for borehole-based nuclear waste disposal require paleoclimate cycle predictions on the scale of a million years forward for the disposal site. This research focuses on developing a paleoclimate cycle prediction model using a Bayesian long short-term memory (LSTM) neural architecture operating on accumulated borehole temperature history data. Bayesian models have previously been used for paleoclimate cycle prediction based on Monte Carlo weighting, but owing to limitations in coupling with certain other prediction networks, earlier Bayesian models could not accommodate prediction cycles longer than 1,000 years. LSTM offers a way to couple the developed model with other prediction networks with ease. The paleoclimate cycle model developed in this process will be trained on existing borehole data and then coupled to surface temperature history prediction networks, which provide endpoints for backpropagation through the LSTM network and optimize the prediction cycle for larger time scales. The trained LSTM will be tested on past data for validation and then used for forward prediction of temperatures at borehole locations. This research will benefit studies pertaining to nuclear waste management, anthropological cycle prediction, and geophysical features.
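The recurrent unit at the core of the proposed architecture can be illustrated with a minimal single-step LSTM cell in NumPy; the weights below are random placeholders, and neither the Bayesian treatment nor the borehole data pipeline is reproduced here:

```python
import numpy as np

# Minimal single-step LSTM cell, shown only to illustrate the kind of
# recurrent unit the abstract builds on. Weights are random placeholders.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W: (4h, d), U: (4h, h), b: (4h,)."""
    z = W @ x + U @ h_prev + b
    i, f, g, o = np.split(z, 4)                        # gate pre-activations
    c = sigmoid(f) * c_prev + sigmoid(i) * np.tanh(g)  # cell state update
    h = sigmoid(o) * np.tanh(c)                        # hidden state / output
    return h, c

rng = np.random.default_rng(0)
d, hsz = 1, 8                       # scalar temperature input, 8 hidden units
W = rng.normal(size=(4 * hsz, d))
U = rng.normal(size=(4 * hsz, hsz))
b = np.zeros(4 * hsz)
h = c = np.zeros(hsz)
for temp in [9.8, 10.1, 10.4]:      # toy borehole temperature sequence (degC)
    h, c = lstm_step(np.array([temp]), h, c, W, U, b)
print(h.shape)
```

The gated cell state is what lets the network carry information across very long sequences, which is the property the abstract relies on for million-year prediction horizons.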

Keywords: Bayesian long short-term memory neural network, borehole temperature, ground surface temperature history, paleoclimate cycle

Procedia PDF Downloads 117
11562 Application Methodology for the Generation of 3D Thermal Models Using UAV Photogrammetry and Dual Sensors for Mining/Industrial Facilities Inspection

Authors: Javier Sedano-Cibrián, Julio Manuel de Luis-Ruiz, Rubén Pérez-Álvarez, Raúl Pereda-García, Beatriz Malagón-Picón

Abstract:

Structural inspection activities are necessary to ensure the correct functioning of infrastructures. Unmanned Aerial Vehicle (UAV) techniques have become more popular than traditional techniques; specifically, UAV photogrammetry allows time and cost savings. The development of this technology has permitted the use of low-cost thermal sensors on UAVs. The representation of 3D thermal models with this type of equipment is in continuous evolution, and the direct processing of thermal images usually leads to errors and inaccurate results. A methodology is proposed for the generation of 3D thermal models using dual sensors, which involves the application of visible Red-Green-Blue (RGB) and thermal images in parallel. Hence, the RGB images are used as the basis for the generation of the model geometry, and the thermal images are the source of the surface temperature information that is projected onto the model. The resulting representations of mining/industrial facilities can be used for inspection activities.

Keywords: aerial thermography, data processing, drone, low-cost, point cloud

Procedia PDF Downloads 129
11561 Topology and Shape Optimization of MacPherson Control Arm under Fatigue Loading

Authors: Abolfazl Hosseinpour, Javad Marzbanrad

Abstract:

In this research, the topology and shape optimization of a MacPherson control arm has been carried out to achieve a lighter weight. The present automotive market demands low-cost, lightweight components to meet the need for fuel-efficient and cost-effective vehicles. This, in turn, gives rise to more effective use of materials for automotive parts, which can reduce the mass of the vehicle. Since automotive components are subject to dynamic loads that cause fatigue damage, considering fatigue criteria is essential in their design. First, in order to create severe loading conditions for the control arm, rough road profiles are generated from power spectral densities. Then, the most critical loading conditions are obtained through multibody dynamics analysis of a full vehicle model. Next, topology optimization is performed based on a fatigue life criterion using HyperMesh software, which results in a 50 percent mass reduction. Finally, a CAD model is created using CATIA software, and shape optimization is performed to achieve accurate dimensions with less mass.

Keywords: topology optimization, shape optimization, fatigue life, MacPherson control arm

Procedia PDF Downloads 305
11560 A Study of Area-Level Mosquito Abundance Prediction Using a Supervised Machine Learning Point-Level Predictor

Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes

Abstract:

In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models trained with historical in situ measurements. The drawback of this approach is that once a model is trained on point-level (specific x, y coordinates) measurements, its predictions also refer to the point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions at the area level, such as a municipality or village. In this study, we apply a data-driven predictive model that relies on publicly open satellite Earth Observation and geospatial data and is trained with historical point-level in situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on random spatial sampling of the area of interest (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making the point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on a point-level predictor and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) in two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent. The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance; in contrast, the raw number of sampling points is not informative unless the size of the area is also taken into account. Additionally, we observed that the distance between the sampling points and the real in situ measurements used for training did not strongly affect performance.
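The sampling-and-aggregation step can be sketched as follows; the point-level predictor here is a made-up stand-in for the trained model, and plain rejection sampling stands in for the Poisson hard-core sampling used in the study:

```python
import random

# Sketch of the area-level aggregation: sample random points inside an area,
# run the point-level predictor at each sample, and average the predictions.
# The predictor below is a hypothetical abundance surface, not the trained model.

def point_predictor(x, y):
    return 10.0 + 2.0 * x - 1.0 * y  # made-up stand-in for the ML model

def area_abundance(x_range, y_range, contains, predictor, n_samples=1000, seed=42):
    """Monte Carlo estimate of mean abundance over the area defined by contains()."""
    rng = random.Random(seed)
    preds = []
    while len(preds) < n_samples:
        x = rng.uniform(*x_range)
        y = rng.uniform(*y_range)
        if contains(x, y):             # rejection sampling within the area polygon
            preds.append(predictor(x, y))
    return sum(preds) / len(preds)

# Unit-square area: the true mean of 10 + 2x - y over [0, 1]^2 is 10.5.
est = area_abundance((0, 1), (0, 1), lambda x, y: True, point_predictor)
print(est)
```

Increasing `n_samples` (i.e., sampling density) tightens the estimate, matching the abstract's observation that density, not the raw point count, drives performance.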

Keywords: mosquito abundance, supervised machine learning, culex pipiens, spatial sampling, west nile virus, earth observation data

Procedia PDF Downloads 131
11559 A Survey of WhatsApp as a Tool for Instructor-Learner Dialogue, Learner-Content Dialogue, and Learner-Learner Dialogue

Authors: Ebrahim Panah, Muhammad Yasir Babar

Abstract:

Thanks to the development of online technology and social networks, people are able to communicate as well as learn. WhatsApp is a social network that is steadily gaining popularity, and it can be used for communication as well as education: for instructor-learner, learner-learner, and learner-content interactions. However, very little is known about these potentials of WhatsApp. The current study was undertaken to investigate university students’ perceptions of WhatsApp used as a tool for instructor-learner dialogue, learner-content dialogue, and learner-learner dialogue. The study adopted a survey approach and distributed a questionnaire developed in Google Forms to 54 university students (11 males and 43 females). The obtained data were analyzed using SPSS version 20. The results indicate that students have positive attitudes towards WhatsApp as a tool for instructor-learner dialogue: it is easy to reach the lecturer (4.07), the instructor gives valuable feedback on assignments (4.02), and the instructor is supportive during course discussion and offers continuous support to the class (4.00); for learner-content dialogue: WhatsApp allows me to academically engage with lecturers anytime, anywhere (4.00), it helps to send graphics such as pictures or charts directly to the students (3.98), and it provides out-of-class, extra learning materials and homework (3.96); and for learner-learner dialogue: WhatsApp is a good tool for sharing knowledge with others (4.09), WhatsApp allows me to academically engage with peers anytime, anywhere (4.07), and we can interact with others through group discussion (4.02). It was also found that there are significant positive correlations between students’ perceptions of instructor-learner dialogue (ILD), learner-content dialogue (LCD), learner-learner dialogue (LLD), and the use of WhatsApp in the classroom. The findings of the study have implications for lecturers, policy makers, and curriculum developers.

Keywords: instructor-learner dialogue, learner-content dialogue, learner-learner dialogue, whatsapp application

Procedia PDF Downloads 146
11558 Modeling of Erosion and Sedimentation Impacts from Off-Road Vehicles in Arid Regions

Authors: Abigail Rosenberg, Jennifer Duan, Michael Poteuck, Chunshui Yu

Abstract:

The Barry M. Goldwater Range West in southwestern Arizona encompasses 2,808 square kilometers of Sonoran Desert. The hyper-arid range has an annual rainfall of less than 10 cm, with average temperatures ranging from a high of 41 degrees Celsius in July to a low of 4 degrees Celsius in January. The range shares approximately 60 kilometers of the international border with Mexico. The majority of the range is open for recreational use, primarily off-highway vehicles. Because of its proximity to Mexico, the range is also heavily patrolled by U.S. Customs and Border Protection seeking to intercept and apprehend inadmissible people and illicit goods. Decades of off-roading and Border Patrol activities have negatively impacted this sensitive desert ecosystem. To assist the range's program managers, this study is developing a model to identify erosion-prone areas and calibrating the model’s parameters using the Automated Geospatial Watershed Assessment modeling tool.

Keywords: arid lands, automated geospatial watershed assessment, erosion modeling, sedimentation modeling, watershed modeling

Procedia PDF Downloads 358
11557 Climate Change and Landslide Risk Assessment in Thailand

Authors: Shotiros Protong

Abstract:

The incidence of sudden landslides in Thailand during the past decade has become more frequent and more severe. It is necessary to focus on the principal parameters used for analysis, such as land cover/land use, rainfall values, soil characteristics, and the digital elevation model (DEM). The combination of intense rainfall and severe monsoons is increasing due to global climate change. Landslide occurrences rapidly increase during intense rainfall, especially in the rainy season in Thailand, which usually starts around mid-May and ends in the middle of October. Rain-triggered landslide hazard analysis is the focus of this research. Geotechnical and hydrological data are combined to determine permeability, conductivity, bedding orientation, overburden, and the presence of loose blocks. The regional landslide hazard mapping is developed using the Stability Index Mapping (SINMAP) model on ArcGIS software version 10.1. Geological and land use data are used to define the probability of landslide occurrence in terms of geotechnical data. The geological data can indicate the shear strength and angle of friction for soils above given rock types, which supports the general applicability of the approach for landslide hazard analysis. To address the research objectives, the following methods are described in this study: setup and calibration of the SINMAP model, sensitivity analysis of the SINMAP model, geotechnical laboratory testing, landslide assessment at the present calibration, and landslide assessment under future climate simulation scenarios A2 and B2. In terms of hydrological data, average rainfall in millimetres per twenty-four hours is used for the rain-triggered landslide hazard analysis in slope stability mapping. The 1954-2012 period is used as the baseline for rainfall data at the present calibration. For climate change in Thailand, future climate scenarios are simulated at various spatial and temporal scales. To predict the precipitation impact of the future climate, the Statistical Downscaling Model (SDSM) version 4.2 is used to simulate future change between latitudes 16°26′ and 18°37′ north and longitudes 98°52′ and 103°05′ east. The research allows the mapping of risk parameters for landslide dynamics and indicates the spatial and temporal trends of landslide occurrences. Thus, regional landslide hazard maps are produced under present-day climatic conditions from 1954 to 2012 and under simulations of climate change based on GCM scenarios A2 and B2 from 2013 to 2099, related to the threshold rainfall values for the selected study area in Uttaradit province in the northern part of Thailand. Finally, the landslide hazard maps for the present and for the future climate simulation scenarios A2 and B2 will be compared by area (km²) in Uttaradit province.
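The slope stability analysis that SINMAP builds on can be illustrated with the standard infinite-slope factor-of-safety relation; the parameter values below (cohesion, friction angle, recharge, transmissivity) are illustrative assumptions, not the calibrated values from this study:

```python
import math

# Sketch of the infinite-slope stability relation underlying the SINMAP
# stability index. All parameter values are illustrative assumptions.

def infinite_slope_fs(slope_rad, a, R, T, C=0.25, phi_deg=35.0, r=0.5):
    """Factor of safety for an infinite slope.
    slope_rad: slope angle; a: specific catchment area (m); R: steady recharge
    (m/hr); T: soil transmissivity (m^2/hr); C: dimensionless cohesion;
    r: water-to-soil density ratio. FS < 1 indicates potential instability."""
    w = min(R * a / (T * math.sin(slope_rad)), 1.0)   # relative wetness, capped at saturation
    tan_phi = math.tan(math.radians(phi_deg))
    return (C + math.cos(slope_rad) * (1.0 - w * r) * tan_phi) / math.sin(slope_rad)

# Under intense recharge, a gentle hillslope stays stable while a steep,
# convergent (large catchment area) slope drops below FS = 1.
gentle = infinite_slope_fs(math.radians(15), a=50.0, R=0.01, T=5.0)
steep = infinite_slope_fs(math.radians(40), a=500.0, R=0.01, T=5.0)
print(gentle, steep)
```

The rainfall threshold analysis in the abstract amounts to finding the recharge R at which FS crosses 1 for each terrain cell.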

Keywords: landslide hazard, GIS, slope stability index (SINMAP), landslides, Thailand

Procedia PDF Downloads 541
11556 Fuzzy Data, Random Drift, and a Theoretical Model for the Sequential Emergence of Religious Capacity in Genus Homo

Authors: Margaret Boone Rappaport, Christopher J. Corbally

Abstract:

The ancient ape ancestral population from which living great ape and human species evolved had demographic features affecting their evolution. The population was large, had great genetic variability, and natural selection was effective at honing adaptations. The emerging populations of chimpanzees and humans were affected more by founder effects and genetic drift because they were smaller. Natural selection did not disappear, but it was not as strong. Consequences of the 'population crash' and the human effective population size are introduced briefly. The history of the ancient apes is written in the genomes of living humans and great apes. The expansion of the brain began before the human line emerged. Coalescence times for some genes are very old – up to several million years, long before Homo sapiens. The mismatch between gene trees and species trees highlights the anthropoid speciation processes, and gives the human genome history a fuzzy, probabilistic quality. However, it suggests traits that might form a foundation for capacities emerging later. A theoretical model is presented in which the genomes of early ape populations provide the substructure for the emergence of religious capacity later on the human line. The model does not search for religion, but its foundations. It suggests a course by which an evolutionary line that began with prosimians eventually produced a human species with biologically based religious capacity. The model of the sequential emergence of religious capacity relies on cognitive science, neuroscience, paleoneurology, primate field studies, cognitive archaeology, genomics, and population genetics. And, it emphasizes five trait types: (1) Documented, positive selection of sensory capabilities on the human line may have favored survival, but also eventually enriched human religious experience. 
(2) The bonobo model suggests a possible down-regulation of aggression and increase in tolerance while feeding, as well as paedomorphism – but, in a human species that remains cognitively sharp (unlike the bonobo). The two species emerged from the same ancient ape population, so it is logical to search for shared traits. (3) An up-regulation of emotional sensitivity and compassion seems to have occurred on the human line. This finds support in modern genetic studies. (4) The authors’ published model of morality's emergence in Homo erectus encompasses a cognitively based, decision-making capacity that was hypothetically overtaken, in part, by religious capacity. Together, they produced a strong, variable, biocultural capability to support human sociability. (5) The full flowering of human religious capacity came with the parietal expansion and smaller face (klinorhynchy) found only in Homo sapiens. Details from paleoneurology suggest the stage was set for human theologies. Larger parietal lobes allowed humans to imagine inner spaces, processes, and beings, and, with the frontal lobe, led to the first theologies composed of structured and integrated theories of the relationships between humans and the supernatural. The model leads to the evolution of a small population of African hominins that was ready to emerge with religious capacity when the species Homo sapiens evolved two hundred thousand years ago. By 50-60,000 years ago, when human ancestors left Africa, they were fully enabled.

Keywords: genetic drift, genomics, parietal expansion, religious capacity

Procedia PDF Downloads 325
11555 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation

Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma

Abstract:

Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to plan the supply chain correctly. When it comes to estimating agricultural yield, especially in arboriculture, conventional methods are mostly applied. In the citrus industry, sale before harvest is widely practiced, which requires an estimate of production while the fruit is still on the tree. However, the conventional method, based on sampling surveys of some trees within the field, is still used to perform yield estimation, and the success of this process depends mainly on the expertise of the 'estimator agent'. The present study aims to propose a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary-wing drones, as well as a terrestrial laser scanner, were tested. A pre-processing step was then performed to generate the point cloud and digital surface model. At the processing stage, a machine vision workflow was implemented to extract the points corresponding to fruits from the whole-tree point cloud, cluster them into fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to fruit weight, the yield can be estimated, and the statistical distribution of fruit sizes can be generated. This latter property, which importing countries of citrus require, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering with this technology can be performed over only some trees, so drone data were integrated in order to estimate the yield over a whole orchard. To achieve this, features derived from the drone digital surface model were linked to the laser scanner yield estimates of some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data in citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, compared with the yield estimation results of the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety during the data acquisition mission. The proposed approach demonstrates strong potential for early estimation of citrus production and could be extended to other fruit trees.
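The regression step linking drone-derived features to TLS yield estimates can be sketched with ordinary least squares; the chosen features (canopy height, crown area) and all numbers below are synthetic placeholders, not data from the study:

```python
import numpy as np

# Sketch of the regression step: features derived from the drone digital
# surface model (here, hypothetical canopy height and crown area) are fitted
# against TLS-based yields for a few reference trees, then used to predict
# the yield of any tree in the orchard. All numbers are synthetic.

features = np.array([      # [canopy height (m), crown area (m^2)] per reference tree
    [2.8, 6.1], [3.4, 8.0], [2.2, 4.7], [3.9, 9.5], [3.0, 7.2],
])
tls_yield_kg = np.array([38.0, 52.0, 29.0, 63.0, 45.0])  # TLS-derived yields

X = np.column_stack([np.ones(len(features)), features])  # add intercept column
coef, *_ = np.linalg.lstsq(X, tls_yield_kg, rcond=None)  # ordinary least squares

def predict_yield(height, area):
    return coef[0] + coef[1] * height + coef[2] * area

print(predict_yield(3.2, 7.5))  # predicted kg for a tree not surveyed by TLS
```

In practice the feature set, the number of reference trees, and the model family would be chosen per orchard and variety; a linear fit is only the simplest instance of the regression described above.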

Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling

Procedia PDF Downloads 129
11554 Operational Challenges of Marine Fiber Reinforced Polymer Composite Structures Coupled with Piezoelectric Transducers

Authors: H. Ucar, U. Aridogan

Abstract:

Composite structures have become attractive for the design of aerospace, automotive, and marine applications due to weight reduction, corrosion resistance, and radar signature reduction demands and requirements. Studies on piezoelectric ceramic transducers (PZTs) for diagnostics and health monitoring have gained attention for their sensing capabilities; however, PZT structures are prone to failure under heavy operational loads. In this paper, we develop a piezo-based Glass Fiber Reinforced Polymer (GFRP) composite finite element (FE) model, validate it with an experimental setup, and identify the applicability and limitations of PZTs for a marine application. A case study is conducted to assess piezo-based sensing capabilities in a representative marine composite structure. An FE model of the composite structure combined with PZT patches is developed, after which the response and functionality are investigated according to sea conditions. The results of this study clearly indicate the obstacles and critical aspects on the way towards industrialization and wide-range use of PZTs for marine composite applications.

Keywords: FRP composite, operational challenges, piezoelectric transducers, FE modeling

Procedia PDF Downloads 164
11553 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows

Authors: Thomas Rowan, Mohammed Seaid

Abstract:

A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. This enables the user to solve a number of complex sediment-based problems, including (but not limited to) dam-break over an erodible bed, recirculation currents, and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model mass exchange between the layers is allowed and, in contrast to previous models, sediment and fluid are able to transfer between layers. In the current study we use a two-step finite volume method to avoid the solution of the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step the governing equations are rewritten in a non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second stage, the numerical fluxes are reconstructed in conservative form and are used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than other comparative finite volume methods, and it also exhibits good shock capturing. For most entrainment and deposition equations a bed level concentration factor is used. This leads to inaccuracies in both near-bed concentration and total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. The additional advantage of this multilayer approach is that there is a variation (from single layer models) in bottom layer fluid velocity: this dramatically reduces erosion, which is often overestimated in simulations of this nature using single layer flows. The model is used to simulate a standard dam break.
In the dam break simulation, as expected, the number of fluid layers utilised creates variation in the resultant bed profile, with more layers offering a higher deviation in fluid velocity. These results showed a marked variation in erosion profiles from standard models. Overall, the model provides new insight into the problems presented, at minimal computational cost.
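The conservative update in the second stage can be illustrated with a minimal single-layer 1D dam-break sketch. This is not the authors' two-step characteristics scheme: a plain Lax-Friedrichs numerical flux stands in for the reconstructed fluxes, and all parameter values are assumed for illustration.

```python
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def swe_flux(h, hu):
    """Physical flux of the 1D single-layer shallow water equations."""
    u = hu / np.maximum(h, 1e-12)
    return np.array([hu, hu * u + 0.5 * g * h**2])

def conservative_step(h, hu, dx, dt):
    """One conservative finite volume update with a Lax-Friedrichs flux."""
    U = np.array([h, hu])
    F = swe_flux(h, hu)
    # numerical flux at each interior cell interface i+1/2
    F_half = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * (dx / dt) * (U[:, 1:] - U[:, :-1])
    U_new = U.copy()
    U_new[:, 1:-1] -= (dt / dx) * (F_half[:, 1:] - F_half[:, :-1])
    return U_new[0], U_new[1]

# idealized dam break: deep water on the left, shallow on the right
N = 200
h = np.where(np.arange(N) < N // 2, 2.0, 1.0)
hu = np.zeros(N)
dx, dt = 1.0, 0.01  # time step well within the CFL limit for these depths
for _ in range(100):
    h, hu = conservative_step(h, hu, dx, dt)
```

Because the update is written in flux-difference form, mass and momentum are conserved cell-by-cell, which is exactly the property the second stage of the paper's scheme is designed to guarantee.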

Keywords: erosion, finite volume method, sediment transport, shallow water equations

Procedia PDF Downloads 208
11552 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as the AFCI (arc fault circuit interrupter) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line in operating mode. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V, 50 Hz home network. The method is validated through simulations using the MATLAB software. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created by using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the discrete Fourier transform of currents and voltages and also the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure used to calculate the signature coefficients is employed again, but this time considering hypothetical distances at which the fault could appear; in this step the fault distance is unknown.
Iterative evaluation of the Kirchhoff equations over stepped variations of the fault distance yields a curve with a linear trend. Finally, the fault location is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing currents registered from simulation with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. The performance of the method and the perspectives of this work will be presented based on the complete simulation.
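The final location step, intersecting the two linear-trend curves from steps 2 and 3, can be sketched as follows. The synthetic coefficient curves below are assumptions standing in for the paper's Kirchhoff-based signature coefficients; only the fit-and-intersect logic is illustrated.

```python
import numpy as np

# Hypothetical signature-coefficient trends versus fault distance:
# curve2 plays the role of the step-2 trace (from measured currents),
# curve3 the step-3 trace (recomputed over stepped hypothetical distances).
d = np.linspace(0.0, 49.0, 50)   # candidate fault distances along a 49 m line
curve2 = 0.8 * d + 3.0           # assumed linear trend from step 2
curve3 = -0.5 * d + 29.0         # assumed linear trend from step 3

# fit straight lines to each trace and intersect them
a2, b2 = np.polyfit(d, curve2, 1)
a3, b3 = np.polyfit(d, curve3, 1)
d_fault = (b3 - b2) / (a2 - a3)
print(f"estimated fault distance: {d_fault:.1f} m")  # -> 20.0 m
```

With real data the two traces would carry noise, so the least-squares fits matter; here they simply recover the exact lines, whose intersection marks the estimated fault position.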

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 124
11551 The Relationship Between Cyberbullying Victimization, Parent and Peer Attachment and Unconditional Self-Acceptance

Authors: Florina Magdalena Anichitoae, Anca Dobrean, Ionut Stelian Florean

Abstract:

Because cyberbullying victimization is an increasing problem nowadays, affecting more and more children and adolescents around the world, we wanted to take a step forward in analyzing this phenomenon. We therefore examined a set of variables that have not been studied together before, trying to develop another way to view cyberbullying victimization. We tested the effects of mother, father, and peer attachment on adolescent involvement in cyberbullying as victims through unconditional self-acceptance. Furthermore, we analyzed each subscale of the IPPA-R, the instrument we used to measure parent and peer attachment, in relation to cyberbullying victimization through unconditional self-acceptance. We also analyzed whether gender and age could be considered moderators in this model. The analysis was performed on 653 adolescents aged 11-17 years old from Romania. We used structural equation modeling in R. For the reliability analysis of the IPPA-R subscales, the USAQ, and the Cyberbullying Test, we calculated internal consistency indices, which varied between .68 and .91. We specified two models: the first including peer alienation, peer trust, peer communication, self-acceptance, and cyberbullying victimization, with CFI = 0.97, RMSEA = 0.02, 90% CI [0.02, 0.03], and SRMR = 0.07; and the second including parental alienation, parental trust, parental communication, self-acceptance, and cyberbullying victimization, with CFI = 0.97, RMSEA = 0.02, 90% CI [0.02, 0.03], and SRMR = 0.07. On one hand, cyberbullying victimization was predicted by peer alienation and peer communication through unconditional self-acceptance, while peer trust directly, significantly, and negatively predicted involvement in cyberbullying.
Considering gender and age as moderators, we found that the relationship between unconditional self-acceptance and cyberbullying victimization is stronger in girls, but age does not moderate this relationship. On the other hand, the hypothesis that the degree of cyberbullying victimization is predicted through unconditional self-acceptance by parental alienation, parental communication, and parental trust was not supported. Still, we identified a direct path in which victimization is positively predicted by parental alienation and negatively by parental trust. The limitations of this study are discussed at the end.
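The internal consistency indices reported above are Cronbach's alpha values. As a generic reference (the authors' analysis was done in R, not with this code), the coefficient can be computed from an item-score matrix like so:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

# perfectly consistent items yield alpha = 1
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(cronbach_alpha(scores))  # -> 1.0
```

Values around .68-.91, as reported for the subscales here, indicate acceptable to excellent internal consistency.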

Keywords: adolescent, attachment, cyberbullying victimization, parents, peers, unconditional self-acceptance

Procedia PDF Downloads 189
11550 N-Heptane as Model Molecule for Cracking Catalyst Evaluation to Improve the Yield of Ethylene and Propylene

Authors: Tony K. Joseph, Balasubramanian Vathilingam, Stephane Morin

Abstract:

Currently, refiners around the world are focused on improving the yield of light olefins (propylene and ethylene), as both are prominent raw materials for a wide spectrum of polymeric materials such as polyethylene and polypropylene. Hence, it is desirable to increase the yield of light olefins via selective cracking of heavy oil fractions. In this study, zeolite grown on SiC was used as the catalyst in a model cracking reaction of n-heptane. The catalytic cracking of n-heptane was performed in a fixed bed reactor (12 mm i.d.) at three different temperatures (425, 450, and 475 °C) and at atmospheric pressure. A carrier gas (N₂) was mixed with n-heptane in a ratio of 90:10 (N₂:n-heptane), and the gaseous mixture was introduced into the fixed bed reactor. Various flow rates of the reactants were tested to increase the yield of ethylene and propylene. For comparison, a commercial zeolite was also tested in addition to the zeolite on SiC. The products were analyzed using an Agilent gas chromatograph (GC-9860) equipped with a flame ionization detector (FID). The GC is connected online with the reactor, and all the cracking tests were successfully reproduced. The complete catalytic evaluation results will be presented during the conference.

Keywords: cracking, catalyst, evaluation, ethylene, heptane, propylene

Procedia PDF Downloads 126
11549 On the Dwindling Supply of the Observable Cosmic Microwave Background Radiation

Authors: Jia-Chao Wang

Abstract:

The cosmic microwave background radiation (CMB) freed during the recombination era can be considered as a photon source of small duration: a one-time event that happened everywhere in the universe simultaneously. If space is divided into concentric shells centered at an observer's location, one can imagine that the CMB photons originating from the nearby shells would reach and pass the observer first, and those in shells farther away would follow as time goes forward. In the Big Bang model, space expands rapidly in a time-dependent manner as described by the scale factor. This expansion results in an event horizon coincident with one of the shells, and its radius can be calculated using cosmological calculators available online. Using Planck 2015 results, its value during the recombination era at cosmological time t = 0.379 million years (My) is calculated to be Revent = 56.95 million light-years (Mly). The event horizon sets a boundary beyond which the freed CMB photons will never reach the observer. The photons within the event horizon also exhibit a peculiar behavior. Calculated results show that the CMB observed today was freed in a shell located 41.8 Mly away (inside the boundary set by Revent) at t = 0.379 My. These photons traveled 13.8 billion years (Gy) to reach here. Similarly, the CMB reaching the observer at t = 1, 5, 10, 20, 40, 60, 80, 100 and 120 Gy is calculated to originate at shells of R = 16.98, 29.96, 37.79, 46.47, 53.66, 55.91, 56.62, 56.85 and 56.92 Mly, respectively. The results show that as time goes by, the R value approaches Revent = 56.95 Mly but never exceeds it, consistent with the earlier statement that beyond Revent the freed CMB photons will never reach the observer. The difference Revent − R can be used as a measure of the remaining observable CMB photons. Its value becomes smaller and smaller as R approaches Revent, indicating a dwindling supply of the observable CMB radiation.
In this paper, detailed dwindling effects near the event horizon are analyzed with the help of online cosmological calculators based on the lambda cold dark matter (ΛCDM) model. It is demonstrated in the literature that if the CMB is a blackbody at recombination (about 3000 K), it will remain so over time under cosmological redshift and homogeneous expansion of space, but with a lowered temperature (2.725 K now). The present result suggests that the observable CMB photon density, besides changing with space expansion, can also be affected by the dwindling supply associated with the event horizon. This raises the question of whether the blackbody spectrum of the CMB at recombination can remain so over time. Being able to explain the blackbody nature of the observed CMB is an important part of the success of the Big Bang model. The present results cast some doubt on that and suggest that the model may have an additional challenge to deal with.
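The event horizon radius quoted above can be reproduced approximately by direct numerical integration of the flat ΛCDM Friedmann equation. The parameter values below are illustrative Planck-like numbers, not necessarily the exact ones used by the author, so the result agrees with the quoted 56.95 Mly only to within a few percent.

```python
import numpy as np

# Illustrative flat LCDM parameters (Planck-like, assumed values)
H0 = 67.7                    # Hubble constant, km/s/Mpc
Om, Orad, OL = 0.309, 9.2e-5, 0.691
c = 299792.458               # speed of light, km/s
MPC_TO_MLY = 3.2616          # 1 Mpc = 3.2616 Mly

def H(a):
    """Hubble rate H(a) in km/s/Mpc for a flat LCDM universe."""
    return H0 * np.sqrt(Om * a**-3 + Orad * a**-4 + OL)

# Proper event-horizon radius at scale factor a_rec:
#   R_event = a_rec * c * integral_{a_rec}^{infinity} da / (a^2 H(a))
a_rec = 1.0 / 1091.0  # recombination, redshift z ~ 1090
a = np.logspace(np.log10(a_rec), 4.0, 200_000)  # a_rec .. 1e4; tail is negligible
integrand = 1.0 / (a**2 * H(a))
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(a))  # trapezoid rule
R_event_Mly = a_rec * c * integral * MPC_TO_MLY
print(f"event horizon at recombination: {R_event_Mly:.1f} Mly")
```

The integral is the comoving distance light can still cover from recombination onward; multiplying by the recombination scale factor converts it to the proper radius of roughly 57 Mly discussed in the abstract.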

Keywords: blackbody of CMB, CMB radiation, dwindling supply of CMB, event horizon

Procedia PDF Downloads 110
11548 Comparative Study on the Evaluation of Patient Safety in Malaysian Retail Pharmacy Setup

Authors: Palanisamy Sivanandy, Tan Tyng Wei, Tan Wee Loon, Lim Chong Yee

Abstract:

Background: Patient safety has become a major concern over recent years owing to elevated medication errors, particularly prescribing and dispensing errors. Meticulous prescription screening and diligent drug dispensing are therefore important to prevent drug-related adverse events from inflicting harm on patients; hence, pharmacists play a significant role in this scenario. The evaluation of patient safety in a pharmacy setup is crucial to contemplate current practices and the attitude and perception of pharmacists towards patient safety. Method: The questionnaire of the Pharmacy Survey on Patient Safety Culture developed by the Agency for Healthcare Research and Quality (AHRQ) was used to assess patient safety. The main objective of the study was to evaluate the attitude and perception of pharmacists towards patient safety in retail pharmacy setups in Malaysia. Results: 417 questionnaires were distributed via convenience sampling in three different states of Malaysia; 390 participants responded, a response rate of 93.52%. The positive response rate (PRR) ranged from 31.20% to 87.43%, and the average PRR was found to be 67%. The overall patient safety grade for the pharmacies was appreciable, ranging from good to very good. The study found a significant difference between the perceptions of senior and junior pharmacists towards patient safety. The internal consistency of the questionnaire contents/dimensions was satisfactory (Cronbach's alpha = 0.92). Conclusion: Our results reflect a positive attitude and perception of retail pharmacists towards patient safety. Despite this, various efforts can be implemented in the future to amplify patient safety in retail pharmacy setups.
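A positive response rate of this kind is simply the share of favourable answers for an item or dimension. A minimal sketch is below; note that the full AHRQ scoring also reverse-scores negatively worded items, which this simplified version ignores.

```python
def positive_response_rate(responses, positive=("agree", "strongly agree")):
    """Percentage of responses that fall in the positive categories."""
    hits = sum(1 for r in responses if r.strip().lower() in positive)
    return 100.0 * hits / len(responses)

answers = ["Agree", "Disagree", "Strongly agree", "Neutral"]
print(positive_response_rate(answers))  # -> 50.0
```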

Keywords: patient safety, attitude, perception, positive response rate, medication errors

Procedia PDF Downloads 309
11547 Development of a Context Specific Planning Model for Achieving a Sustainable Urban City

Authors: Jothilakshmy Nagammal

Abstract:

This research paper deals with different case studies where Form-Based Codes are adopted in general, and discusses the different implementation methods in particular, in order to develop a method for formulating a new planning model. The organizing principle of the Form-Based Codes, the transect, is used to zone the city into various context specific transects. An approach is adopted to develop the new planning model, the City Specific Planning Model (CSPM), as a tool to achieve sustainability for any city in general. A case study comparison in terms of the planning tools used, the code process adopted, and the various control regulations implemented is carried out for thirty-two different cities. The analysis shows that there are a variety of ways to implement form-based zoning concepts: specific plans, a parallel or optional form-based code, a transect-based code/smart code, required form-based standards, or design guidelines. The case studies describe the positive and negative results from form-based zoning where it is implemented. From the different case studies on the method of the FBC, it is understood that the scale for formulating the Form-Based Code varies from parts of the city to the whole city. The regulating plan is prepared with the transect as the organizing principle in most of the cases. The various implementation methods adopted in these case studies for the formulation of Form-Based Codes are special districts like Transit Oriented Development (TOD), Traditional Neighbourhood Development (TND), specific plans, and street based codes. The implementation methods vary from mandatory to integrated and floating. To attain sustainability, the research takes the approach of developing a regulating plan using the transect as the organizing principle for the entire area of the city in general, and formulating the Form-Based Codes for the selected special districts in the study area in particular, on a street basis.
Planning is most powerful when it is embedded in the broader context of systemic change and improvement. Systemic is best thought of as holistic, contextualized, and stakeholder-owned, while systematic can be thought of as more linear, generalisable, and typically top-down or expert driven. The systemic approach is a process based on system theory and system design principles, which are too often ill understood by the general population and policy makers. System theory embraces the importance of a global perspective, multiple components, and interdependencies and interconnections in any system. In addition, the recognition that a change in one part of a system necessarily alters the rest of the system is a cornerstone of system theory. The proposed regulating plan, taking the transect as the organizing principle and using Form-Based Codes to achieve sustainability of the city, has to be a hybrid code integrated within the existing system: a systemic approach with a systematic process. This approach of introducing a few form-based zones into a conventional code could be effective in the phased replacement of an existing code. It could also be an effective way of responding to the near-term pressure of physical change in "sensitive" areas of the community. With this approach and method, the new Context Specific Planning Model for achieving sustainability is created, as explained in detail in this research paper.

Keywords: context based planning model, form based code, transect, systemic approach

Procedia PDF Downloads 322
11546 A Low-Power Two-Stage Seismic Sensor Scheme for Earthquake Early Warning System

Authors: Arvind Srivastav, Tarun Kanti Bhattacharyya

Abstract:

The north-eastern, Himalayan, and Eastern Ghats belts of India comprise earthquake-prone, remote, and hilly terrains, and earthquakes have caused enormous damage in these regions in the past. A wireless sensor network based earthquake early warning system (EEWS) is being developed to mitigate the damage caused by earthquakes. It consists of sensor nodes, distributed over the region, that perform majority voting on the output of the seismic sensors in the vicinity and relay a message to a base station to alert the residents when an earthquake is detected. At the heart of the EEWS is a low-power two-stage seismic sensor that continuously tracks seismic events from the incoming three-axis accelerometer signal at the first stage and, in the presence of a seismic event, triggers the second-stage P-wave detector that detects the onset of the P-wave in an earthquake event. The parameters of the P-wave detector have been optimized for minimizing detection time and maximizing the accuracy of detection. The working of the sensor scheme has been verified with data from seven earthquakes retrieved from IRIS. In all test cases, the scheme detected the onset of the P-wave accurately. It has also been established that the P-wave onset detection time reduces linearly with the sampling rate: in the test data, the detection time was around 2 seconds for data sampled at 10 Hz, which reduced to 0.3 seconds for data sampled at 100 Hz.
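The keywords name STA/LTA, the classic first-stage event trigger: the ratio of short-term to long-term average signal energy rises sharply at a seismic arrival. A minimal single-channel sketch follows; the window lengths, threshold, and synthetic record are arbitrary choices, not the paper's optimized parameters.

```python
import numpy as np

def sta_lta_ratio(x, fs, sta_win=0.5, lta_win=5.0):
    """Short-term over long-term average signal energy, with both
    windows aligned to end at the same sample."""
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    e = np.asarray(x, dtype=float) ** 2
    csum = np.concatenate(([0.0], np.cumsum(e)))
    sta = (csum[ns:] - csum[:-ns]) / ns   # windows ending at samples ns-1 .. N-1
    lta = (csum[nl:] - csum[:-nl]) / nl   # windows ending at samples nl-1 .. N-1
    return sta[nl - ns:] / np.maximum(lta, 1e-12)

# synthetic record: background noise, then a stronger oscillatory arrival
fs = 100
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, 2000)
t = np.arange(1000) / fs
x[1000:] += np.sin(2 * np.pi * 5.0 * t)   # simulated P-wave onset at sample 1000

ratio = sta_lta_ratio(x, fs)
# ratio[j] corresponds to the window pair ending at sample j + nl - 1
onset = int(np.argmax(ratio > 4.0)) + int(5.0 * fs) - 1
```

Because the short window reacts to the arrival long before the long window does, the ratio spikes within a fraction of a second of the onset, which is why a higher sampling rate shortens the detection time as reported above.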

Keywords: earthquake early warning system, EEWS, STA/LTA, polarization, wavelet, event detector, P-wave detector

Procedia PDF Downloads 167
11545 Investigating (Im)Politeness Strategies in Email Communication: The Case of Algerian PhD Supervisees and Irish Supervisors

Authors: Zehor Ktitni

Abstract:

In pragmatics, politeness is regarded as a feature of paramount importance to successful interpersonal relationships. Emails, meanwhile, have recently become one of the indispensable means of communication in educational settings. This research puts email communication at the core of the study and analyses it from a politeness perspective. More specifically, it endeavours to look closely at how the concept of (im)politeness is reflected in students' emails. To this end, a corpus of Algerian supervisees' email threads, exchanged with their Irish supervisors, was compiled. Leech's model of politeness (2014) was selected as the main theoretical framework of this study, with additional reference to Brown and Levinson's model (1987), one of the most influential models in the area of pragmatic politeness. Further, follow-up interviews are to be conducted with the Algerian students to reinforce the results derived from the corpus. Initial findings suggest that the Algerian PhD students' emails tend to include more politeness markers than impoliteness ones, that the students make heavy use of academic titles when addressing their supervisors (Dr. or Prof.), and that they rely on hedging devices in order to sound polite.

Keywords: politeness, email communication, corpus pragmatics, Algerian PhD supervisees, Irish supervisors

Procedia PDF Downloads 51
11544 Applying Renowned Energy Simulation Engines to Neural Control System of Double Skin Façade

Authors: Zdravko Eškinja, Lovre Miljanić, Ognjen Kuljača

Abstract:

This paper is an overview of simulation tools used to model the specific thermal dynamics that occur while controlling a double skin façade. Research has been conducted on a simplified construction with a single zone where one side is glazed. Heat flow and temperature responses are simulated in three different simulation tools: IDA-ICE, EnergyPlus, and HAMBASE. The excitation of the observed system, used in all simulations, was a temperature step of the exterior environment. Air infiltration, insulation, and other disturbances are excluded from this research. Although such isolated behaviour is not possible in reality, the experiments were carried out to gain novel information about heat flow transients which are not observable under regular conditions. The results revealed new possibilities for adapting the parameters of the neural network regulator. Alongside the numerical simulations, the same set-up was also tested in a real-time experiment with a 1:18 scale model and a thermal chamber. The comparison analysis brings out interesting conclusions about simulation accuracy in this particular case.
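As a point of reference for such step-response experiments, the dominant behaviour of a simplified single zone can be captured by a first-order lumped RC model. The parameter values below are invented for illustration and are unrelated to the paper's façade or to the three simulation tools named above.

```python
import numpy as np

# Lumped single-zone model: C * dT/dt = (T_ext - T) / R
R = 0.05        # thermal resistance between zone and exterior, K/W (assumed)
C = 2.0e5       # zone thermal capacitance, J/K (assumed)
T_ext, T0 = 10.0, 0.0          # exterior step and initial zone temperature, deg C
dt, t_end = 10.0, 8 * 3600.0   # explicit Euler time step and horizon, s

n = int(t_end / dt)
T = np.empty(n + 1)
T[0] = T0
for k in range(n):
    T[k + 1] = T[k] + dt * (T_ext - T[k]) / (R * C)

tau = R * C  # time constant: 10000 s, roughly 2.8 h
```

After one time constant the zone temperature has covered about 63% of the exterior step; comparing such a fitted time constant across tools is one simple way to quantify the differences in their transient responses.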

Keywords: double skin façade, experimental tests, heat control, heat flow, simulated tests, simulation tools

Procedia PDF Downloads 220
11543 Investigating the Effect of the Psychoactive Substances Act 2016 on the Incidence of Adverse Medical Events in Her Majesty’s Prison (HMP) Leeds

Authors: Hayley Boal, Chloe Bromley, John Fairfield

Abstract:

Novel Psychoactive Substances (NPS) are synthetic compounds designed to reproduce the effects of illicit drugs. Cheap, potent, and readily available on UK high streets from so-called 'head shops', in recent years their use has surged and with it have emerged side effects including seizures, aggression, palpitations, coma, and death. The rapid development of new substances has vastly outpaced pre-existing drug legislation, but the Psychoactive Substances Act 2016 rendered all but tobacco, alcohol, and amyl nitrates illegal. Drug use has long been rife within prisons, and the absence of a reliable screening tool alongside the availability of NPS makes them ideal for prison use. Here we examine the occurrence of NPS-related adverse side effects within HMP Leeds, comparing May-September of 2015 and 2017 using daily reports distributed amongst prison staff summarising medical and behavioural incidents of the previous day. There was a statistically significant rise in NPS-related incidents between 2015 and 2017, from 0.562 to 1.149 incidents per day, more than double the 2015 rate. In 2017, 38.46% of incidents required ambulances, down from 51.02% in 2015. Although the most common descriptions in both years were 'seizure' and 'unresponsive', by 2017 'inhalation by staff' had emerged. Patterns of NPS consumption mirrored the prison regime, peaking when cell doors opened and prisoners could socialise. Despite the limited data, the Psychoactive Substances Act has clearly been an insufficient deterrent to the prison population; more must be done to understand and address substance misuse in prison. NPS remain a significant risk to prisoners' health and wellbeing.

Keywords: legislation, novel psychoactive substances, prison, spice

Procedia PDF Downloads 174
11542 Building a Composite Approach to Employees' Motivational Needs by Combining Cognitive Needs

Authors: Alexis Akinyemi, Laurene Houtin

Abstract:

Measures of employee motivation at work are often based on the theory of self-determined motivation, which implies that human resources departments and managers seek to motivate employees in the most self-determined way possible and use strategies to achieve this goal. In practice, they often tend to assess employee motivation and then adapt management to the most important source of motivation for their employees, for example by financially rewarding an employee who is extrinsically motivated and by rewarding an intrinsically motivated employee with congratulations and recognition. Thus, the use of motivation measures contradicts the theoretical positioning: the theory does not provide for the promotion of extrinsically motivated behaviour. In addition, a body of social psychology research on fundamental cognitive needs makes it possible to address a person's different sources of motivation individually (need for cognition, need for uniqueness, need for affect, and need for closure). By developing a composite measure of motivation based on these needs, we provide human resources professionals, and in particular occupational psychologists, with a tool that complements the assessment of self-determined motivation, making it possible to adapt work not to the self-determination of behaviours but to the motivational traits of employees. To develop such a model, we gathered the French versions of the cognitive needs scales (need for cognition, need for uniqueness, need for affect, need for closure) and conducted a study with 645 employees of several French companies. On the basis of the data collected, we conducted a confirmatory factor analysis to validate the model, studied the correlations between the various needs, and identified the reference groups that could be used when drawing on these needs in interviews with employees (career, recruitment, etc.).
The results showed a coherent model and the expected links between the different needs. Taken together, these results make it possible to propose a valid and theoretically grounded tool for managers who wish to adapt their management to their employees' current motivations, whether or not these motivations are self-determined.

Keywords: motivation, personality, work commitment, cognitive needs

Procedia PDF Downloads 105
11541 Evaluation of the Protective Effect of Pterocarpus mildbraedii Extract on Propanil-Induced Hepatotoxicity

Authors: Chiagoziem A. Otuechere, Ebenezer O. Farombi

Abstract:

The protective effect of a dichloromethane:methanol extract of Pterocarpus mildbraedii (PME), a widely consumed Nigerian leafy vegetable, on the toxicity of propanil was investigated in male rats. Animals were distributed into eight groups of five each. Group 1 served as control and received normal saline, while rats in groups 2, 3, and 4 received 100 mg/kg, 200 mg/kg, and 400 mg/kg extract doses, respectively. Group 5 rats were orally administered 200 mg/kg propanil, while groups 6, 7, and 8 rats were given propanil plus extract. Oral administration of propanil elicited 14.8%, 5%, 122%, and 78% increases in the activities of the serum enzymes aspartate aminotransferase (AST), alanine aminotransferase (ALT), alkaline phosphatase (ALP), and gamma-glutamyl transferase (γGT). There were also increases in lactate dehydrogenase (LDH) activity, direct bilirubin, and lipid peroxidation levels. PME significantly attenuated the marked hepatic oxidative damage that accompanied propanil treatment; the extract significantly decreased LDH activity and bilirubin levels following propanil treatment. Furthermore, the propanil-induced alterations in the activities of the antioxidant enzymes superoxide dismutase (SOD), catalase (CAT), and glutathione S-transferase (GST) in these rats were modulated by the extract. The DPPH radical scavenging activity of the extract was determined as 55%, compared to 49% for gallic acid. Hepatic histology further confirmed the damage to the liver, revealing severe periportal cellular infiltration of the hepatocytes. These biochemical and morphological alterations were attenuated in rats pre-treated with the 100 mg/kg and 200 mg/kg doses of the extract. These results suggest that PME possesses a protective effect against propanil-induced hepatotoxicity.

Keywords: antioxidant, hepatoprotection, Pterocarpus mildbraedii, propanil

Procedia PDF Downloads 405
11540 Experimental Verification of On-Board Power Generation System for Vehicle Application

Authors: Manish Kumar, Krupa Shah

Abstract:

The usage of renewable energy sources is increasing day by day to overcome dependency on fossil fuels, and wind energy is considered a prominent source of renewable energy. This paper presents an approach for utilizing the wind energy obtained from a moving vehicle for cell-phone charging. The selection of the wind turbine, blades, generator, etc. is done to obtain the most efficient system. The calculation procedures for generated power and drag force are shown to establish the effectiveness of the proposal. The location of the turbine is selected such that the system remains symmetric and stable and receives the maximum induced wind. The generated power is calculated at different velocities: charging is achieved at a speed of 30 km/h, and the system works well up to 60 km/h. The proposed model seems very useful for people traveling long distances without access to electricity for charging. The model is very economical and easy to fabricate, and its low weight and small footprint make it portable and comfortable to carry along. Practical results are shown by implementing the portable wind turbine system on a two-wheeler.
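The generated-power and drag calculations mentioned above follow the standard wind-turbine relations P = ½ρA·v³·Cp and F = ½ρA·v²·Cd. The swept area and coefficients below are assumed for illustration, not taken from the paper.

```python
RHO = 1.225   # air density, kg/m^3
A = 0.03      # turbine swept area, m^2 (assumed)
CP = 0.30     # power coefficient (assumed; below the Betz limit of 0.593)
CD = 1.1      # drag coefficient of the rotor assembly (assumed)

def wind_power_W(v_kmh):
    """Ideal power captured by the turbine at vehicle speed v (km/h)."""
    v = v_kmh / 3.6
    return 0.5 * RHO * A * CP * v**3

def drag_force_N(v_kmh):
    """Extra aerodynamic drag added to the vehicle at speed v (km/h)."""
    v = v_kmh / 3.6
    return 0.5 * RHO * A * CD * v**2

for speed in (30, 60):
    print(f"{speed} km/h: {wind_power_W(speed):.1f} W, {drag_force_N(speed):.1f} N")
```

With these assumed values, 30 km/h yields roughly 3 W, the order of magnitude needed for slow phone charging; since power scales with the cube of speed, doubling the speed multiplies the captured power by eight.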

Keywords: cell-phone charging, on-board power generation, wind energy, vehicle

Procedia PDF Downloads 286
11539 Characterization of a Dentigerous Cyst Cell Line and Its Secretion of Metalloproteinases

Authors: Muñiz-Lino Marcos A.

Abstract:

The ectomesenchymal tissues involved in tooth development and their remnants are the origin of different odontogenic lesions, including tumors and cysts of the jaws, with a wide range of clinical behaviors. Dentigerous cysts (DC) represent approximately 20% of all cases of odontogenic cysts, and it has been demonstrated that they can develop into benign and malignant odontogenic tumors. A DC is characterized by bone destruction of the area surrounding the crown of an unerupted tooth and contains fluid. The treatment of odontogenic tumors and cysts usually involves partial or total removal of the jaw, causing important secondary co-morbidities. However, the molecules implicated in DC pathogenesis, as well as in its development into odontogenic tumors, remain unknown. A cellular model would be useful to study these molecules, but such a model had not been established. Here, we report the establishment of a cell culture derived from a dentigerous cyst, named DeCy-1. In spite of their ectomesenchymal morphology, DeCy-1 cells express epithelial markers such as cytokeratins 5, 6, and 8. Furthermore, these cells express the ODAM protein, which is present in odontogenesis and in dental follicles, indicating that DeCy-1 cells are derived from odontogenic epithelium. Electron microscopy analysis showed that this cell line has high vesicular activity, suggesting that DeCy-1 could secrete molecules involved in DC pathogenesis. The secreted proteins were therefore analyzed by SDS-PAGE, where we observed approximately 11 bands. In addition, the capacity of these secretions to degrade proteins was analyzed by gelatin substrate zymography, in which a degradation band of about 62 kDa was found. Western blot assays suggested that matrix metalloproteinase 2 (MMP-2) is responsible for this protease activity.
Thus, our results indicate that this cell line derived from a DC is a useful in vitro model to study the biology of this odontogenic lesion and its participation in the development of odontogenic tumors.

Keywords: dentigerous cyst, ameloblastoma, MMP-2, odontogenic tumors

Procedia PDF Downloads 11
11538 Corrosion Response of Friction Stir Processed Mg-Zn-Zr-RE Alloy

Authors: Vasanth C. Shunmugasamy, Bilal Mansoor

Abstract:

Magnesium alloys are increasingly being considered for structural systems across different industrial sectors, including precision components of biomedical devices, owing to their high specific strength, stiffness, and biodegradability. However, Mg alloys exhibit a high corrosion rate that restricts their application as biomaterials, so for safe use it is essential to control their corrosion rates. Mg alloy corrosion is influenced by several factors, such as grain size, precipitates, and texture. In Mg alloys, microgalvanic coupling between the α-Mg matrix and secondary precipitates can exist, which results in an increased corrosion rate. The present research addresses this challenge by engineering the microstructure of a biodegradable Mg-Zn-RE-Zr alloy by friction stir processing (FSP), a severe plastic deformation process. The FSP-processed Mg alloy showed improved corrosion resistance and mechanical properties, with refined grains, a strong basal texture, and broken, uniformly distributed secondary precipitates in the stir zone. The Mg alloy base material exposed to an in vitro corrosion medium showed microgalvanic coupling between precipitates and matrix, resulting in an unstable passive layer, whereas the FSP-processed alloy showed uniform corrosion owing to the formation of a stable surface film, attributed to the refined grains, preferred texture, and distribution of precipitates. These results show the promising potential of this Mg alloy to be developed as a biomaterial.

Keywords: biomaterials, severe plastic deformation, magnesium alloys, corrosion

Procedia PDF Downloads 17