Search results for: assumed mode method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 861

351 Theoretical Analysis of Mechanical Vibration for Offshore Platform Structures

Authors: Saeed Asiri, Yousuf Z. AL-Zahrani

Abstract:

A new class of support structures, called periodic structures, is introduced in this paper as a viable means for isolating the vibration transmitted from the sea waves to offshore platform structures through their legs. A passive approach to reducing the transmitted vibration generated by waves is presented. The approach utilizes the property of periodic structural components that creates stop and pass bands. The stop band regions can be tailored to correspond to regions of the frequency spectrum that contain harmonics of the wave frequency, attenuating the response in those regions. A periodic structural component is comprised of a repeating array of cells, which are themselves an assembly of elements. The elements may have differing material properties as well as geometric variations. For the purpose of this research, only geometric and material variations are considered, and each cell is assumed to be identical. A periodic leg is designed in order to reduce the transmitted vibration of sea waves. The effectiveness of the periodicity on the vibration levels of the platform is demonstrated theoretically. The theory governing the operation of this class of periodic structures is introduced using the transfer matrix method. The unique filtering characteristics of periodic structures are demonstrated as functions of their design parameters for structures with geometrical and material discontinuities. The propagation factor is determined using spectral finite element analysis, and the effect of the leg design is demonstrated by changing the ratio of step lengths and the interface area between the materials, in order to find the propagation factor and the frequency response.
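
As a rough illustration of the transfer matrix idea described above, the sketch below (with assumed, illustrative material and geometric properties, not the paper's values) builds the transfer matrix of one two-segment cell for longitudinal rod waves and flags stop bands where the propagation factor has a non-zero real part.

```python
# A minimal sketch of the transfer matrix method for one periodic cell made of
# two rod segments (elements A and B). All numbers are illustrative assumptions.
import numpy as np

def segment_matrix(omega, E, rho, A, L):
    """Transfer matrix of a uniform rod segment for the state vector [u, N]."""
    k = omega * np.sqrt(rho / E)          # longitudinal wavenumber
    return np.array([[np.cos(k * L),              np.sin(k * L) / (E * A * k)],
                     [-E * A * k * np.sin(k * L), np.cos(k * L)]])

# Assumed properties of the two elements forming one cell (steel-like / polymer-like)
E1, rho1, A1, L1 = 210e9, 7800.0, 0.05, 0.5
E2, rho2, A2, L2 = 3e9,   1200.0, 0.08, 0.5

freqs = np.linspace(1.0, 2000.0, 2000)        # rad/s
mu = []                                       # propagation factor per cell
for w in freqs:
    T_cell = segment_matrix(w, E2, rho2, A2, L2) @ segment_matrix(w, E1, rho1, A1, L1)
    # For a 2x2 unimodular transfer matrix, cosh(mu) = trace(T)/2;
    # |trace/2| > 1 marks a stop band (the real part of mu attenuates the wave).
    mu.append(np.arccosh(complex(np.trace(T_cell) / 2.0)))

stop_band = [f for f, m in zip(freqs, mu) if m.real > 1e-6]
print(f"frequencies inside stop bands: {len(stop_band)} of {len(freqs)}")
```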

Keywords: vibrations, periodic structures, offshore, platforms, transfer matrix method

Procedia PDF Downloads 266
350 Elasticity Model for Easing Peak Hour Demand for Metrorail Transport System

Authors: P. K. Sarkar, Amit Kumar Jain

Abstract:

The demand for urban transportation is characterised by large-scale temporal and spatial variations, which cause heavy congestion inside metro trains in peak hours near the Central Business District (CBD) of the city. The conventional approach to addressing peak hour congestion in metro trains has been to increase the supply by introducing more trains, increasing the length of the trains, and optimising the timetable to increase the capacity of the system. However, there is a limitation on supply-side measures, determined by the design capacity of the system, beyond which any addition in capacity requires huge capital investments. Demand-side interventions are essentially required to actually spread the demand across time and space. In this study, an attempt has been made to identify the potential Transport Demand Management tools applicable to urban rail transportation systems, with a special focus on differential pricing. A conceptual price elasticity model has been developed to analyse the effect of various combinations of peak and non-peak hour fares on demand. The elasticity values for peak hour, non-peak hour and cross elasticity have been assumed from the relevant literature available in the field. The conceptual price elasticity model so developed is based on assumptions which need to be validated with actual values of elasticities for different segments of passengers. Once validated, the model can be used to determine the peak and non-peak hour fares with the objective of increasing overall ridership, revenue, demand levelling and optimal utilisation of assets.
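
The sketch below illustrates the kind of constant-elasticity demand adjustment the abstract refers to; the baseline ridership, fare changes and elasticity values are assumed for illustration only, not the figures used in the study.

```python
# A minimal sketch of a constant-elasticity demand adjustment with own and cross
# elasticities. All numbers are illustrative assumptions.
def adjusted_demand(q0, own_elasticity, own_fare_change, cross_elasticity=0.0, other_fare_change=0.0):
    """Fare changes are expressed as fractions, e.g. 0.10 for a +10% change."""
    return q0 * (1 + own_elasticity * own_fare_change + cross_elasticity * other_fare_change)

# Assumed baseline ridership (passengers/hour) and elasticities
peak_q0, offpeak_q0 = 50_000, 20_000
e_peak, e_offpeak, e_cross = -0.30, -0.45, 0.15

# Scenario: raise peak fares by 15%, cut off-peak fares by 20%
peak_q = adjusted_demand(peak_q0, e_peak, +0.15, e_cross, -0.20)
offpeak_q = adjusted_demand(offpeak_q0, e_offpeak, -0.20, e_cross, +0.15)
print(f"peak demand: {peak_q:.0f}, off-peak demand: {offpeak_q:.0f}")
```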

Keywords: urban transport, differential fares, congestion, transport demand management, elasticity

Procedia PDF Downloads 286
349 Portuguese City Reconstructed from Public Space: The Example of the Requalification of Cacém Central Area

Authors: Rodrigo Coelho

Abstract:

As several authors have pointed out (such as Jordi Borja or Oriol Bohigas), the necessity to “make center” presents itself not only as an imperative response to the processes of dissolution of peripheral urbanization, but should also be assumed, from the point of view of its symbolic and functional meaning, as a key concept for thinking about and acting on the enlarged city. The notion of re-centralization (successfully applied in urban periphery recompositions, such as in Barcelona or Lyon), understood through the redefinition of mobility, the strengthening of core functions, and the creation or consolidation of urban fabrics (always articulated with policies of creation and redevelopment of public spaces), seems to become one of the key strategies for the challenge of making the city on the “city periphery”. The question we want to address in this paper concerns, essentially, the importance of public space in the (re)construction of the contemporary "shapeless city" sectors (which, in general, we associate with urban peripheries). We will seek to demonstrate, from the analysis of a Portuguese case study, the Cacém Central Area requalification, integrated in the Polis Program (National Program for Urban Rehabilitation and Environmental Improvement of Cities, launched in 1999 by the Portuguese government), the conditions under which the public space project can act, subsequently, in urban areas of recent formation, where, in many situations, public space did not have a structuring role in urbanization, its presence being reduced to a residual character. More specifically, we intend to demonstrate with this example the methodological and urban design aspects that led to the regeneration of a disqualified and degraded urban area, by intervening consistently and profoundly in public space (with well-defined objectives and criteria, framed in a more comprehensive strategy attentive to the various scales of urban design).

Keywords: public space, urban design, urban regeneration, urban and regional studies

Procedia PDF Downloads 541
348 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies

Authors: Masoud Sheidai

Abstract:

Biologists with a special field of population genetics and phylogeny have different research tasks, such as assessing populations’ genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach a concise conclusion, they must use proper and efficient statistical and bioinformatic methods as well as suitable genetic and morphological characteristics. In recent years, the application of different bioinformatic and statistical methods, which are based on various well-documented assumptions, has provided the proper analytical tools in the hands of researchers. Species delineation is usually carried out with the use of different clustering methods, like K-means clustering, based on proper distance measures according to the studied features of the organisms. A well-defined species is assumed to be separated from the other taxa by molecular barcodes. Species relationships are studied by using molecular markers, which are analyzed by different analytical methods like multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Species population structuring and genetic divergence are usually investigated by PCoA and PCA methods and a network diagram. These are based on bootstrapping of data. The association of different genes and DNA sequences with ecological and geographical variables is determined by LFMM (latent factor mixed model) and redundancy analysis (RDA), which are based on Bayesian and distance methods. Molecular and morphological differentiating characters in the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We shall illustrate these methods and related conclusions by giving examples from different edible and medicinal plant species.
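
As a hedged illustration of two of the routines named above (K-means clustering for delineation and MDS ordination of a distance matrix), the sketch below runs them on synthetic binary marker data standing in for real molecular markers.

```python
# A minimal sketch: K-means on a marker matrix and MDS on a Jaccard distance matrix.
# The random 0/1 data below are placeholders for real molecular marker scores.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
markers = rng.integers(0, 2, size=(30, 50))      # 30 individuals x 50 binary loci (assumed)

# Population/species delineation by K-means on the raw marker matrix
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(markers)

# Ordination: MDS of a Jaccard distance matrix between individuals
dist = squareform(pdist(markers.astype(bool), metric="jaccard"))
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(dist)

print(clusters[:10])
print(coords[:3])
```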

Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis

Procedia PDF Downloads 99
347 Removal of Diesel by Soil Washing Technologies Using a Non-Ionic Surfactant

Authors: Carolina Guatemala, Josefina Barrera

Abstract:

The large number of soils highly polluted with recalcitrant hydrocarbons, together with the limitations of current bioremediation methods, remains the drawback for an efficient recovery of these soils under safe conditions. In this regard, soil washing with degradable surfactants is an alternative option, given the capacity of surfactants to desorb oily organic compounds. The aim of this study was the establishment of the washing conditions of a soil polluted with diesel, using a non-ionic surfactant. A soil polluted with diesel was used. This was collected near a polluted railway station zone. The soil was dried at room temperature and sieved to a mesh size of 10 for its physicochemical and biological characterization. Washing of the polluted soil was performed with surfactant solutions in a 1:5 ratio (5 g of soil per 25 mL of the surfactant solution). This was carried out at 28±1 °C and 150 rpm for 72 hours. The factors tested were the Tween 80 surfactant concentration (1, 2, 5 and 10%) and the treatment time. Residual diesel concentration was determined every 24 h. The soil was of a sandy loam texture with a low concentration of organic matter (3.68%) and low conductivity (0.016 dS.m⁻¹). The soil had a pH of 7.63, which was slightly alkaline, and a Total Petroleum Hydrocarbon (TPH) content of 11,600 ± 1058.38 mg/kg. The high TPH content could explain the low microbial count of 1.1×10⁵ CFU per gram of dried soil. Within the range of surfactant concentrations tested for washing the polluted soil under study, TPH removal increased proportionally with the surfactant concentration. 5080.8 ± 422.2 ppm (43.8 ± 3.64%) was the maximal concentration of TPH removed after 72 h of contact with the surfactant solution at 10%. Despite the high percentage of hydrocarbons removed, it is assumed that a higher concentration of these could be removed if the washing process is extended or carried out in stages. Soil washing through the use of surfactants as a desorbing agent was found to be a viable and effective technology for the rapid recovery of soils highly polluted with recalcitrant hydrocarbons.

Keywords: diesel, hydrocarbons, soil washing, tween 80

Procedia PDF Downloads 118
346 Predictions of Dynamic Behaviors for Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer Aided Engineering Simulations

Authors: Tai Yuan Yu, Pei-Jen Wang

Abstract:

A simulation scheme of rotational motions for the prediction of bump-type gas foil bearings operating at steady state is proposed. The scheme is based on multi-physics coupling computer-aided engineering packages, modularized with a computational fluid dynamics model and a structural elasticity model, to numerically solve the dynamic equations of motion of a hydrodynamically loaded shaft supported by an elastic bump foil. The bump foil is assumed to be modelled as an infinite number of Hookean springs mounted on a stiff wall. Hence, the top foil stiffness is constant on the periphery of the bearing housing. The hydrodynamic pressure generated by the air film lubrication transfers to the top foil and induces elastic deformation, which needs to be solved by a finite element method program, whereas the pressure profile applied on the top foil must be solved by a finite element method program based on the Reynolds equation in lubrication theory. As a result, the equations of motion for the bearing shaft are iteratively solved via simultaneous coupling of the two finite element method programs. In conclusion, the two-dimensional center trajectory of the shaft plus the deformation map on the top foil at constant rotational speed are calculated for comparison with the experimental results.

Keywords: computational fluid dynamics, fluid structure interaction multi-physics simulations, gas foil bearing, load capacity

Procedia PDF Downloads 136
345 Effect of Spontaneous Ripening and Drying Techniques on the Bioactive Activities of the Peel of Plantain (Musa paradisiaca) Fruit

Authors: Famuwagun A. A., Abiona O. O., Gbadamosi S.O., Adeboye O. A., Adebooye O. C.

Abstract:

The need to provide more information on the perceived bioactive status of the peel of the plantain fruit informed the design of this research. Mature plantain fruits were harvested, and the fruits were allowed to ripen spontaneously. Samples of plantain fruit were taken every fortnight, and the peels were removed. The peels were dried using two different drying techniques (oven drying and sun drying) and milled into powder form. Other samples were picked and processed in a similar manner on the first, third, seventh and tenth day, until the peels of the fruits were fully ripe, resulting in eight different samples. The antioxidative properties of the samples using different assays (DPPH, FRAP, MCA, HRSA, SRSA, ABTS, ORAC), inhibitory activities against enzymes related to diabetes (alpha-amylase and glucosidase), and inhibition of angiotensin-converting enzyme (ACE) were evaluated. The results showed that peels of plantain fruits on the 7th day of ripening, sun-dried, exhibited greater inhibition of free radicals, which enhanced their antioxidant activities, resulting in greater inhibition of the alpha-amylase and alpha-glucosidase enzymes. Also, the oven-dried sample of the peel of plantain fruit on the 7th day of ripening had greater phenolic contents than the other samples, which also resulted in higher inhibition of angiotensin-converting enzyme when compared with the other samples. The results showed that even though the unripe peel of plantain fruit is assumed to contain excellent bioactive activities, the fruit should be allowed to ripen for seven days after maturity and harvesting before the peel is consumed, so as to derive maximum benefit from the peel.

Keywords: functional ingredient, diabetics, hypertension, functional foods

Procedia PDF Downloads 21
344 Economic Assessment of CO2-Based Methane, Methanol and Polyoxymethylene Production

Authors: Wieland Hoppe, Nadine Wachter, Stefan Bringezu

Abstract:

Carbon dioxide (CO2) utilization might be a promising way to substitute fossil raw materials like coal, oil or natural gas as the carbon source of chemical production. While first life cycle assessments indicate a positive environmental performance of CO2-based process routes, the commercialization of CO2 has so far been limited by several economic obstacles. We, therefore, analyzed the economic performance of three CO2-based chemicals, methane and methanol as basic chemicals and polyoxymethylene as a polymer, on a cradle-to-gate basis. Our approach is oriented towards life cycle costing. The focus lies on the cost drivers of CO2-based technologies and on options to stimulate a CO2-based economy by changing regulative factors. In this way, we analyze various modes of operation and give an outlook on the potentially cost-effective development in the next decades. Biogas, waste gases of a cement plant, and flue gases of a waste incineration plant are considered as CO2 sources. The energy needed to convert CO2 into hydrocarbons via electrolysis is assumed to be supplied by wind power, which is increasingly available in Germany. Economic data originate from both industrial processes and process simulations. The results indicate that CO2-based production technologies are not competitive with conventional production methods under present conditions. This is mainly due to high electricity generation costs and regulative factors like the German Renewable Energy Act (EEG). While the decrease in production costs of CO2-based chemicals might be limited in the next decades, a modification of the relevant regulative factors could potentially promote an earlier commercialization.

Keywords: carbon capture and utilization (CCU), economic assessment, life cycle costing (LCC), power-to-X

Procedia PDF Downloads 265
343 Assessment of the Road Safety Performance in National Scale

Authors: Abeer K. Jameel, Harry Evdorides

Abstract:

The assessment of road safety performance is a challenging issue. This is not only because of the ineffectiveness and unreliability of road and traffic crash data systems but also because of its systemic character. Recent strategic plans and interventions implemented in some developed countries, where a significant decline in the rate of traffic and road crashes has been achieved, consider road safety as a system. This system consists of four main elements, namely the road user, the road infrastructure, vehicles and speed, in addition to other supporting elements such as the institutional framework and the post-crash care system. To assess the performance of a system, it is required to assess all its elements. To present understandable results of the assessment, it is required to present a single term representing the performance of the overall system. This paper aims to develop an overall performance indicator which may be used to assess the road safety system. The variables of this indicator are the main elements of the road safety system. The data regarding these variables will be collected from the World Health Organization report. A multi-criteria analysis method is used to aggregate the four sub-indicators for the four variables. Two weighting methods will be assumed: equal weights and different weights. For the different weights method, the factor analysis method is used. The weights will then be converted to scores. The total score will be the overall indicator for road safety performance on a national scale. This indicator will be used to compare and rank countries according to their road safety performance. The country with the higher score is the country which provides the most sustainable and effective interventions for a successful road safety system. The indicator will be tested by comparing it with the aggregate real crash rate for each country.
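
A minimal sketch of the aggregation step described above, combining four sub-indicator scores into one national score under equal and unequal weights; all scores and weights are illustrative assumptions, not WHO data.

```python
# A minimal sketch of a weighted composite road safety indicator for ranking countries.
# The sub-indicator scores and weights below are illustrative assumptions.
import numpy as np

# Assumed sub-indicator scores for three countries (rows): road user, infrastructure,
# vehicle, speed, already normalised to a 0-100 scale
scores = np.array([[70, 55, 80, 60],
                   [40, 35, 50, 45],
                   [90, 85, 75, 88]], dtype=float)

equal_w = np.full(4, 0.25)
factor_w = np.array([0.35, 0.30, 0.20, 0.15])   # e.g. weights derived from a factor analysis (assumed)

overall_equal = scores @ equal_w
overall_factor = scores @ factor_w

ranking = np.argsort(-overall_factor)           # higher score = better-performing road safety system
print(overall_equal, overall_factor, ranking)
```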

Keywords: factor analysis, Multi-criteria analysis, road safety assessment, safe system indicator

Procedia PDF Downloads 248
342 Quality Control of 99mTc-Labeled Radiopharmaceuticals Using the Chromatography Strips

Authors: Yasuyuki Takahashi, Akemi Yoshida, Hirotaka Shimada

Abstract:

99mTc-2-methoxy-isobutyl-isonitrile (MIBI) and 99mTc-mercaptoacetylglycylglycyl-glycine (MAG3) are heated to 368-372 K and are labeled with 99mTc-pertechnetate. Quality control (QC) of 99mTc-labeled radiopharmaceuticals is performed at hospitals using liquid chromatography, which is difficult to perform in general hospitals. We used chromatography strips to simplify QC and investigated the effects of the test procedures on quality control. The agent studied here is 99mTc-MAG3. The solvent used was chloroform + acetone + tetrahydrofuran, and the gamma counter was an ARC-380CL. The conditions varied were as follows: heating temperature (293, 313, 333, 353 and 372 K), resting time after labeling (15 min at 293 K and 372 K, and 1 hour at 293 K), and expiration year for use (2011, 2012, 2013, 2014 and 2015). The measurement time using the gamma counter was one minute. A nuclear medicine clinician judged the quality of the preparation and the usability of the retested agent. Two people conducted the test procedure twice, in order to compare reproducibility. The percentage of radiochemical purity (%RCP) was approximately 50% under insufficient heat treatment, and it improved as the temperature and heating time increased. Moreover, the %RCP improved with time even under low temperatures. Furthermore, there was no deterioration with time after the expiration date. The objective of these tests was to determine soluble 99mTc impurities, including free 99mTc-pertechnetate and hydrolyzed-reduced 99mTc. Therefore, we attributed insufficient heating, and the resulting low purity, to operational errors in the labeling. It is concluded that quality control is a necessary procedure in nuclear medicine to ensure safe scanning, and it is suggested that labeling be performed according to the product specifications.
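
The %RCP figure quoted above can be obtained from the counted strip segments with a simple ratio; the sketch below shows this calculation with assumed one-minute counts.

```python
# A minimal sketch of the percentage radiochemical purity (%RCP) calculation from a
# developed strip cut into segments and counted in a gamma counter. Counts are assumed.
def percent_rcp(counts_bound, counts_impurity, background=0.0):
    """%RCP = activity of the labelled fraction / total activity x 100."""
    bound = max(counts_bound - background, 0.0)
    impurity = max(counts_impurity - background, 0.0)
    return 100.0 * bound / (bound + impurity)

# Example: one-minute counts of the strip region containing 99mTc-MAG3 versus the region
# containing soluble impurities (free pertechnetate, hydrolyzed-reduced 99mTc)
print(f"%RCP = {percent_rcp(48500, 3200, background=150):.1f}")
```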

Keywords: quality control, tc-99m labeled radio-pharmaceutical, chromatography strip, nuclear medicine

Procedia PDF Downloads 292
341 Heat Transfer Analysis of a Multiphase Oxygen Reactor Heated by a Helical Tube in the Cu-Cl Cycle of a Hydrogen Production

Authors: Mohammed W. Abdulrahman

Abstract:

In the thermochemical water splitting process by the Cu-Cl cycle, oxygen gas is produced by an endothermic thermolysis process at a temperature of 530 °C. The oxygen production reactor is a three-phase reactor involving cuprous chloride molten salt, copper oxychloride solid reactant and oxygen gas. To achieve optimal performance, the oxygen reactor requires accurate control of heat transfer to the molten salt and the decomposing solid particles within the thermolysis reactor. In this paper, a scale-up analysis of an oxygen reactor heated by an internal helical tube is performed from the perspective of heat transfer. A heat balance of the oxygen reactor is investigated to analyze the size of the reactor that provides the required heat input for different rates of hydrogen production. It is found that the helical tube wall and the service side constitute the largest thermal resistances of the oxygen reactor system. In the analysis of this paper, the Cu-Cl cycle is assumed to be heated by two types of nuclear reactor, the HTGR and the CANDU SCWR. It is concluded that using the CANDU SCWR requires a heat transfer rate 3-4 times larger than that required when using the HTGR. The effect of the reactor aspect ratio is also studied, and it is found that increasing the aspect ratio decreases the number of reactors, while the rate of decrease in the number of reactors diminishes as the aspect ratio increases. Comparisons between the results of this study and previous results of material balances in the oxygen reactor show that the size of the oxygen reactor is dominated by the heat balance rather than the material balance.
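
As a hedged illustration of the heat-balance reasoning, the sketch below treats the helical tube as a flat wall with two film resistances in series and sizes the tube area for an assumed heat duty; all coefficients are illustrative and do not reproduce the paper's geometry or values.

```python
# A minimal sketch of a series thermal-resistance heat balance: the heat duty and the
# available temperature difference fix the tube area (and hence the reactor size).
# All numbers are illustrative assumptions.
Q_required = 2.0e5                    # heat input for a given H2 production rate (W), assumed
h_service = 800.0                     # service-side film coefficient (W/m2K), assumed
h_salt = 1500.0                       # molten-salt side film coefficient (W/m2K), assumed
k_wall, t_wall = 20.0, 0.004          # tube wall conductivity (W/mK) and thickness (m), assumed
dT = 80.0                             # available temperature difference (K), assumed

# Resistances per unit area, treating the tube wall as a flat wall for simplicity
R_total = 1.0 / h_service + t_wall / k_wall + 1.0 / h_salt
U = 1.0 / R_total                     # overall heat transfer coefficient (W/m2K)
A_required = Q_required / (U * dT)    # helical tube area needed to deliver Q_required
print(f"U = {U:.0f} W/m2K, required tube area = {A_required:.1f} m2")
```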

Keywords: heat transfer, Cu-Cl cycle, hydrogen production, oxygen, clean energy

Procedia PDF Downloads 240
340 Assessment of Climate Change Impact on Meteorological Droughts

Authors: Alireza Nikbakht Shahbazi

Abstract:

There are various factors that affect climate change; drought is one of those factors. Efficient methods for estimating climate change impacts on drought should therefore be investigated. The aim of this paper is to investigate climate change impacts on drought in the Karoon3 watershed, located in south-western Iran, in future periods. Atmospheric general circulation model (GCM) data under Intergovernmental Panel on Climate Change (IPCC) scenarios were used for this purpose. In this study, watershed drought under climate change impacts is simulated for future periods (2011 to 2099). The standardized precipitation index (SPI) was selected as a drought index and calculated using mean monthly precipitation data in the Karoon3 watershed. SPI was calculated for 6-, 12- and 24-month periods. Statistical analysis of daily precipitation and minimum and maximum daily temperature was performed. LARS-WG5 was used to determine the feasibility of producing meteorological data for future periods. Model calibration and verification were performed for the base period (1980-2007). Meteorological data simulation for future periods under general circulation models and IPCC climate change scenarios was performed, and the drought status was then analyzed using SPI under climate change effects. Results showed that the differences between monthly maximum and minimum temperature will decrease under climate change, and that spring precipitation shall increase while summer and autumn rainfall shall decrease. Precipitation occurs mainly between January and May in future periods, and the decline in summer and autumn precipitation leads to short-term drought in the study region. The normal and wet SPI categories are more frequent in the B1 and A2 emissions scenarios than in A1B.
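
A minimal sketch of the SPI calculation referred to above: precipitation is accumulated over a chosen window, a gamma distribution is fitted, and cumulative probabilities are mapped to standard-normal quantiles. The synthetic precipitation series and the zero-rainfall handling are assumptions for illustration, not the Karoon3 data.

```python
# A minimal sketch of the Standardised Precipitation Index (SPI) from a monthly series.
import numpy as np
from scipy import stats

def spi(monthly_precip, window=12):
    p = np.convolve(monthly_precip, np.ones(window), mode="valid")   # rolling sums
    nonzero = p[p > 0]
    shape, loc, scale = stats.gamma.fit(nonzero, floc=0)             # fit gamma to wet sums
    q_zero = np.mean(p == 0)                                         # probability of a zero sum
    cdf = q_zero + (1 - q_zero) * stats.gamma.cdf(p, shape, loc=loc, scale=scale)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))              # SPI values

rng = np.random.default_rng(1)
precip = rng.gamma(shape=2.0, scale=15.0, size=40 * 12)              # assumed 40 years of data (mm/month)
spi12 = spi(precip, window=12)
print(f"share of months in drought (SPI < -1): {np.mean(spi12 < -1):.2f}")
```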

Keywords: climate change impact, drought severity, drought frequency, Karoon3 watershed

Procedia PDF Downloads 216
339 Actresses as Eunuchs: The Versatility of Cross-Gendered Roles in Eighteenth-Century Orientalist Theatre

Authors: Anne Greenfield

Abstract:

Introductory Statement: During the eighteenth century in London, there were over two dozen theatrical productions that featured eunuchoid characters, most of which were set in 'Eastern' locales, including the Ottoman Empire, Persia, India, and China. These characters have gone largely overlooked by recent scholars, and more analysis is needed in order to illustrate the contemporary values and anxieties reflected in these popular and recurring figures at the time. Methodology: This paper adopts a New Historical and Cultural Studies approach to the subject of theatrical depictions of eunuchs, drawing insights from seventeenth- and eighteenth-century literary works, travel narratives, medical treatises, and histories of the age. Major Findings: As this paper demonstrates, there was a high degree of complexity, variety, and, at times, respect underlying orientalist theatrical depictions of eunuchs. Not only were eunuchoid characters represented in strikingly diverse ways in scripts, but these roles were also played by a heterogeneous group of actors and even actresses. More specifically, this paper looks closely at three actresses who took roles as eunuchs in tragedies: Mrs. Verbruggen (aka Mrs. Mountfort), Mrs. Rogers, and Mrs. Bicknell, all of whom were otherwise best known as comediennes. These casting choices provided an entertaining twist on the breeches roles these actresses often played. In fact, the staging and scripting of these roles, when analyzed through the lens of these cross-gendered performances, become ironic and comical in several scenes that are usually assumed (by recent scholars) to be thoroughly tragic. Conclusion: Ultimately, a careful look at the staging of eunuchoid characters sheds light not only on how these productions were performed and understood, but also on how writers and theatre managers navigated the Other, whether in gender identity or culture, during this era.

Keywords: eunuch, actress, literature, drama

Procedia PDF Downloads 107
338 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees

Authors: M. Eskandarighadi, C. R. McGann

Abstract:

It has been observed from past earthquakes that local site conditions can significantly affect the strong ground motion characteristics experienced at a site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and infinitely extended in the horizontal direction. Therefore, tying the side boundaries together is one way to model this behavior, as the wave passage is assumed to be only vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, and 2D soil profiles with features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of the mentioned factors to better understand local site effects on strong ground motions. 2D wave propagation, and the fact that the soil profiles on the two sides of the model may not be identical, clarifies the importance of a boundary condition on each side that can minimize unwanted reflections from the edges of the model and input appropriate loading conditions. Ideally, the model size should be sufficiently large to minimize wave reflection; however, due to computational limitations, increasing the model size is impractical in some cases. Another approach is to employ free-field boundary conditions that take into account the free-field motion that would exist far from the model domain and apply this to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and details and limitations of the developed free-field boundary modeling approach are discussed.

Keywords: boundary condition, free-field, opensees, site response analysis, wave propagation

Procedia PDF Downloads 122
337 The Application of a Neural Network in the Reworking of Accu-Chek to Wrist Bands to Monitor Blood Glucose in the Human Body

Authors: J. K Adedeji, O. H Olowomofe, C. O Alo, S.T Ijatuyi

Abstract:

The issue of high blood sugar levels, the effects of which might end up as diabetes mellitus, is now becoming a rampant cardiovascular disorder in our community. In recent times, a lack of awareness among most people makes this disease a silent killer. The situation calls for urgency, hence the need to design a device that serves as a monitoring tool, such as a wrist watch, to give an alert of the danger ahead of time to those living with high blood glucose, as well as to introduce a mechanism for checks and balances. The neural network architecture assumed an 8-15-10 configuration, with eight neurons at the input stage including a bias, 15 neurons at the hidden layer at the processing stage, and 10 neurons at the output stage indicating likely symptom cases. The inputs are formed using the exclusive OR (XOR), with the expectation of getting an XOR output as the threshold value for diabetic symptom cases. The neural algorithm is coded in the Java language with 1000 epoch runs to bring the errors to the barest minimum. The internal circuitry of the device comprises the compatible hardware requirements that match the nature of each of the input neurons. Light emitting diodes (LEDs) of red, green, and yellow colors are used as the output for the neural network to show pattern recognition for severe cases, pre-hypertensive cases, and normal cases without traces of diabetes mellitus. The research concluded that the neural network is a more efficient Accu-Chek design tool for the proper monitoring of high glucose levels than the conventional methods of carrying out blood tests.
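
As a hedged sketch of the 8-15-10 network described above (the abstract's implementation is in Java; Python is used here for brevity), the code trains a small feed-forward network with back-propagation for 1000 epochs on placeholder binary patterns, not clinical data.

```python
# A minimal sketch of an 8-15-10 feed-forward network (eight inputs including a bias,
# 15 hidden neurons, ten output classes) trained with plain back-propagation.
# The training patterns and target classes below are placeholders.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.integers(0, 2, size=(64, 8)).astype(float)              # assumed binary input patterns
y = np.zeros((64, 10))
y[np.arange(64), X[:, :7].astype(int).sum(axis=1) % 10] = 1.0   # placeholder target classes

W1 = rng.normal(0, 0.5, (8, 15))
W2 = rng.normal(0, 0.5, (15, 10))
lr = 0.5

for epoch in range(1000):                 # 1000 epoch runs, as stated in the abstract
    h = sigmoid(X @ W1)                   # hidden layer
    out = sigmoid(h @ W2)                 # output layer
    err = y - out
    # back-propagation of the error through the two weight matrices
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ d_out / len(X)
    W1 += lr * X.T @ d_h / len(X)

print(f"final mean squared error: {np.mean(err ** 2):.4f}")
```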

Keywords: Accu-Check, diabetes, neural network, pattern recognition

Procedia PDF Downloads 127
336 Cost-Benefit Analysis for the Optimization of Noise Abatement Treatments at the Workplace

Authors: Paolo Lenzuni

Abstract:

The cost-effectiveness of noise abatement treatments at the workplace has not yet received adequate consideration. Furthermore, most of the published work is focused on productivity, despite the poor correlation of this quantity with noise levels. There is currently no tool to estimate the social benefit associated with a specific noise abatement treatment, and no comparison among different options is accordingly possible. In this paper, we present an algorithm which has been developed to predict the cost-effectiveness of any planned noise control treatment in a workplace. This algorithm is based on the estimates of hearing threshold shifts included in ISO 1999, and on the compensations that workers are entitled to once their work-related hearing impairments have been certified. The benefits of a noise abatement treatment are estimated by means of the lower compensation costs which are paid to the impaired workers. Although such benefits have no real meaning in strictly monetary terms, they allow a reliable comparison between different treatments, since actual social costs can be assumed to be proportional to compensation costs. The existing European legislation on occupational exposure to noise mandates that the noise exposure level be reduced below the upper action limit (85 dBA). There is accordingly little or no motivation for employers to sustain the extra costs required to lower the noise exposure below the lower action limit (80 dBA). In order to make this goal more appealing for employers, the algorithm proposed in this work also includes an ad-hoc element that promotes actions which bring the noise exposure down below 80 dBA. The algorithm has a twofold potential: 1) it can be used as a quality index to promote cost-effective practices; 2) it can be added to the existing criteria used by workers’ compensation authorities to evaluate the cost-effectiveness of technical actions, and support dedicated employers.
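
A minimal sketch of the comparison logic implied above: each candidate treatment is scored by the compensation costs it avoids per unit of treatment cost, with an extra ad-hoc bonus when the resulting exposure falls below 80 dBA. All figures, treatment names and the bonus size are assumptions, not the paper's algorithm.

```python
# A minimal sketch of ranking candidate noise treatments by avoided compensation costs,
# with a bonus for reaching below the 80 dBA lower action value. Numbers are illustrative.
def benefit_score(treatment_cost, compensation_saved, resulting_level_dba, bonus=0.2):
    score = compensation_saved / treatment_cost        # saved payouts per unit spent
    if resulting_level_dba < 80.0:                     # reward reaching below the lower action limit
        score *= (1.0 + bonus)
    return score

candidates = {
    "enclosure":      (40_000.0, 90_000.0, 78.0),
    "silencers":      (15_000.0, 30_000.0, 83.0),
    "damping panels": (25_000.0, 45_000.0, 81.0),
}
ranked = sorted(candidates.items(), key=lambda kv: benefit_score(*kv[1]), reverse=True)
for name, args in ranked:
    print(f"{name}: score = {benefit_score(*args):.2f}")
```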

Keywords: cost-effectiveness, noise, occupational exposure, treatment

Procedia PDF Downloads 300
335 Multi-Criteria Optimal Management Strategy for in-situ Bioremediation of LNAPL Contaminated Aquifer Using Particle Swarm Optimization

Authors: Deepak Kumar, Jahangeer, Brijesh Kumar Yadav, Shashi Mathur

Abstract:

In-situ remediation is a technique which can remediate either surface or groundwater at the site of contamination. In the present study, a simulation-optimization approach has been used to develop a management strategy for remediating LNAPL (Light Non-Aqueous Phase Liquid) contaminated aquifers. Benzene, toluene, ethyl benzene and xylene are the main components of the LNAPL contaminant. Collectively, these contaminants are known as BTEX. In the in-situ bioremediation process, a set of injection and extraction wells are installed. Injection wells supply oxygen and other nutrients, which convert BTEX into carbon dioxide and water with the help of indigenous soil bacteria. On the other hand, extraction wells check the movement of the plume downstream. In this study, the optimal design of the system has been done using the PSO (Particle Swarm Optimization) algorithm. A comprehensive management strategy for pumping of injection and extraction wells has been developed to attain maximum allowable concentrations of 5 ppm and 4.5 ppm. The management strategy comprises determination of the pumping rates, the total pumping volume and the total running cost incurred for each potential injection and extraction well. The results indicate a high pumping rate for injection wells during the initial management period, since it facilitates the availability of oxygen and other nutrients necessary for biodegradation; however, it is low during the third year on account of sufficient oxygen availability. This is because the contaminant is assumed to have biodegraded by the end of the third year, when the concentration drops to a permissible level.
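
A hedged sketch of the PSO search over pumping rates follows; the cost function is a stand-in for the coupled flow and biodegradation simulation used in the study and simply penalises residual concentrations above an assumed 5 ppm limit.

```python
# A minimal sketch of particle swarm optimization (PSO) over well pumping rates with a
# penalty for exceeding an allowable concentration. The cost function is a placeholder.
import numpy as np

def cost(rates):
    pumping_cost = np.sum(rates)                              # proxy for energy/volume cost
    residual_conc = 20.0 * np.exp(-0.05 * np.sum(rates))      # assumed BTEX response (ppm)
    penalty = 1e3 * max(residual_conc - 5.0, 0.0)             # 5 ppm allowable concentration
    return pumping_cost + penalty

n_particles, n_wells, iters = 30, 4, 200
rng = np.random.default_rng(2)
pos = rng.uniform(0, 50, (n_particles, n_wells))              # pumping rates (m3/day, assumed)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 50)
    vals = np.array([cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best pumping rates:", np.round(gbest, 1), "cost:", round(cost(gbest), 1))
```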

Keywords: groundwater, in-situ bioremediation, light non-aqueous phase liquid, BTEX, particle swarm optimization

Procedia PDF Downloads 413
334 Proactive Competence Management for Employees: A Bottom-up Process Model for Developing Target Competence Profiles Based on the Employee's Tasks

Authors: Maximilian Cedzich, Ingo Dietz Von Bayer, Roland Jochem

Abstract:

In order for industrial companies to continue to succeed in dynamic, globalized markets, they must be able to train their employees in an agile manner and at short notice, in line with the exogenous conditions that arise. For this purpose, it is indispensable to operate a proactive competence management system for employees that recognizes qualification needs in a timely manner, in order to be able to address them promptly through qualification measures. However, there are hardly any approaches to be found in the literature that include systematic, proactive competence management. In order to help close this gap, this publication presents a process model that systematically develops bottom-up, future-oriented target competence profiles based on the tasks of the employees. Concretely, in the first step, the tasks of the individual employees are examined under assumed future conditions. In other words, qualitative scenarios are considered for the individual tasks to determine how they are likely to change. In a second step, these scenario-based future tasks are translated into individual, future-related target competencies of the employee using a matrix of generic task properties. The final step pursues the goal of validating the target competence profiles formed in this way within the framework of a management workshop. This process model provides industrial companies with a tool that they can use to determine the competencies required of their own employees in the future and compare them with the actually prevailing competencies. If gaps are identified between the target and the actual profiles, these qualification requirements can be closed in the short term by means of qualification measures.

Keywords: dynamic globalized markets, employee competence management, industrial companies, knowledge management

Procedia PDF Downloads 174
333 Unpacking the Summarising Event in Trauma Emergencies: The Case of Pre-briefings

Authors: Professor Jo Angouri, Polina Mesinioti, Chris Turner

Abstract:

In order for a group of ad-hoc professionals to perform as a team, a shared understanding of the problem at hand and an agreed action plan are necessary components. This is particularly significant in complex, time-sensitive professional settings such as trauma emergencies. In this context, team briefings prior to the patient's arrival (pre-briefings) constitute a critical event for the performance of the team; they provide the necessary space for co-constructing a shared understanding of the situation through summarising the information available to the team: yet the act of summarising is widely assumed in medical practice but not systematically researched. In the vast teamwork literature, terms such as ‘shared mental model’, ‘mental space’ and ‘cognate labelling’ are used extensively, and loosely, to denote the outcome of the summarising process, but how exactly this is done interactionally remains under-researched. This paper reports on the forms and functions of pre-briefings in a major trauma centre in the UK. Taking an interactional approach, we draw on 30 simulated and real-life trauma emergencies (15 from each dataset) and zoom in on the use of pre-briefings, which we consider focal points in the management of trauma emergencies. We show how ad hoc teams negotiate a shared future orientation through summarising, synthesising information, and establishing a common understanding of the situation. We illustrate the role, characteristics, and structure of pre-briefing sequences that have been evaluated as ‘efficient’ in our data, and the impact (in)effective pre-briefings have on teamwork. Our work shows that the key roles in the event own the act of summarising, and we problematise the implications for leadership in trauma emergencies. We close the paper with a model for pre-briefing and provide recommendations for clinical practice, arguing that effective pre-briefing practice is teachable.

Keywords: summarising, medical emergencies, interaction analysis, shared/mental models

Procedia PDF Downloads 66
332 Opioid Administration on Patients Hospitalized in the Emergency Department

Authors: Mani Mofidi, Neda Valizadeh, Ali Hashemaghaee, Mona Hashemaghaee, Soudabeh Shafiee Ardestani

Abstract:

Background: Acute pain and its management remain the most common complaint in emergency service admissions. Diagnostic and therapeutic procedures add to patients' pain. Diminishing the pain improves the quality of the patient's experience and the patient-physician relationship. Aim: The aim of this study was to evaluate the outcomes and side effects of opioid administration in emergency patients. Material and Methods: Patients admitted to the ward II emergency service of Imam Khomeini hospital who received one of the opioids morphine, pethidine, methadone or fentanyl as an analgesic were evaluated. Their vital signs and general condition were examined before and after drug injection. Also, patients' pain experience was recorded as a numerical rating score (NRS) before and after analgesic administration. Results: 268 patients were studied. 34 patients were addicted to opioid drugs. Morphine had the highest rate of prescription (86.2%), followed by pethidine (8.5%), methadone (3.3%) and fentanyl (1.68%). While the initial NRS did not show a significant difference between addicted and non-addicted patients, the NRS decline and its score after drug injection were significantly lower in addicted patients. All patients had slight but statistically significant decreases in respiratory rate, heart rate, blood pressure and O2 saturation. There was no significant difference between the different kinds of opioid prescribed in terms of outcomes or side effects. Conclusion: Pain management should always be in physicians' minds during emergency admissions. It should not be assumed that an addicted patient complaining of pain is malingering to receive drugs. Titration of the drug and close monitoring must be in the curriculum to prevent any hazardous side effects.

Keywords: numerical rating score, opioid, pain, emergency department

Procedia PDF Downloads 404
331 The Nexus between Downstream Supply Chain Losses and Food Security in Nigeria: Empirical Evidence from the Yam Industry

Authors: Alban Igwe, Ijeoma Kalu, Alloy Ezirim

Abstract:

Food insecurity is a global problem, and the search for food security has assumed a central place in the global development agenda, as the United Nations currently places zero hunger as Goal 2 of its Sustainable Development Goals. Nigeria currently ranks 107th out of 113 countries in the Global Food Security Index (GFSI), a metric that measures a country's ability to furnish its citizens with the food and nutrients needed for healthy living. Paradoxically, Nigeria is a global leader in food production, ranking 1st in yam (over 70% of global output), beans (over 41% of global output), cassava (20% of global output) and shea nuts, where it commands 53% of global output. Furthermore, it ranks 2nd in millet, sweet potatoes, and cashew nuts. It is Africa's largest producer of rice. So, it is apparent that Nigeria's food insecurity woes must relate to a factor other than food production. We investigated the nexus between food security and downstream supply chain losses in the yam industry with secondary data from the Food and Agriculture Organization (FAOSTAT) and the National Bureau of Statistics for the decade 2012-2021. In analyzing the data, multiple regression techniques were used, and the findings reveal that downstream losses have a strong positive correlation with food security (r = .763*) and that 58.3% of the variation in food security is explainable by downstream supply chain food losses. The study discovered that yam supply chain losses within the period under review averaged 50.6%, suggesting that downstream supply chain losses are the drainpipe and the major source of food insecurity in Nigeria. Therefore, the study concluded that there is a significant relationship between downstream supply chain losses and food insecurity, and recommended the establishment of food supply chain structures and policies to enhance food security in Nigeria.

Keywords: food security, downstream supply chain losses, yam, nigeria, supply chain

Procedia PDF Downloads 64
330 Meta Model for Optimum Design Objective Function of Steel Frames Subjected to Seismic Loads

Authors: Salah R. Al Zaidee, Ali S. Mahdi

Abstract:

Except for simple problems of statically determinate structures, optimum design problems in structural engineering have implicit objective functions, where structural analysis and design are essential within each search loop. With these implicit functions, the structural engineer is usually forced to write his/her own computer code for analysis, design, and searching for the optimum design among many feasible candidates, and cannot take advantage of available software for structural analysis, design, and searching for the optimum solution. The meta-model is a regression model used to transform an implicit objective function into an explicit one, which in turn decouples the structural analysis and design processes from the optimum searching process. With the meta-model, well-known software for structural analysis and design can be used in sequence with optimum searching software. In this paper, the meta-model has been used to develop an explicit objective function for plane steel frames subjected to dead, live, and seismic forces. The frame topology is assumed to be predefined based on architectural and functional requirements. Column and beam sections and different connection details are the main design variables in this study. Columns and beams are grouped to reduce the number of design variables and to make the problem similar to that adopted in engineering practice. Data for the implicit objective function have been generated based on analysis and assessment of many design proposals with the CSI SAP software. These data have been used later in the SPSS software to develop a pure quadratic nonlinear regression model for the explicit objective function. Good correlations, with a coefficient R2 in the range from 0.88 to 0.99, have been noted between the original implicit functions and the corresponding explicit functions generated with the meta-model.
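
A minimal sketch of the meta-model fit follows: designs and their (implicit) objective values, here generated synthetically in place of the SAP/SPSS data, are regressed on linear and squared terms to give an explicit pure-quadratic surface.

```python
# A minimal sketch of fitting a pure quadratic meta-model to (design variables, cost)
# pairs. The random samples below stand in for externally generated analysis data.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.uniform(0.5, 2.0, size=(80, 3))              # e.g. grouped section-size variables (assumed)
true_cost = 50*X[:, 0]**2 + 30*X[:, 1]**2 + 20*X[:, 2]**2 + 10*X.sum(axis=1)
y = true_cost + rng.normal(0, 2.0, 80)               # stands in for the implicit objective per design

features = np.hstack([X, X**2])                      # linear and squared terms only ("pure quadratic")
meta = LinearRegression().fit(features, y)
print(f"R^2 of the explicit meta-model: {meta.score(features, y):.3f}")
```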

Keywords: meta-modal, objective function, steel frames, seismic analysis, design

Procedia PDF Downloads 221
329 Collapse Load Analysis of Reinforced Concrete Pile Group in Liquefying Soils under Lateral Loading

Authors: Pavan K. Emani, Shashank Kothari, V. S. Phanikanth

Abstract:

The ultimate load analysis of RC pile groups has assumed a lot of significance under liquefying soil conditions, especially due to post-earthquake studies of the 1964 Niigata, 1995 Kobe and 2001 Bhuj earthquakes. The present study reports the results of numerical simulations on pile groups subjected to monotonically increasing lateral loads under design amounts of pile axial loading. Soil liquefaction has been considered through the non-linear p-y relationship of the soil springs, which can vary along the depth/length of the pile. This variation is in turn related to the liquefaction potential of the site and the magnitude of the seismic shaking. As the piles in the group can reach their extreme deflections and rotations during increased amounts of lateral loading, precise modeling of the inelastic behavior of the pile cross-section is done, considering the complete stress-strain behavior of concrete, with and without confinement, and of the reinforcing steel, including the strain-hardening portion. The possibility of inelastic buckling of the individual piles is considered in the overall collapse modes. The model is analysed using the Riks analysis in finite element software to check the post-buckling behavior and plastic collapse of the piles. The results confirm the kinds of failure modes predicted by centrifuge test results reported by researchers on pile groups, although the pile material used is significantly different from that of the simulation model. The extension of the present work promises an important contribution to the design codes for pile groups in liquefying soils.

Keywords: collapse load analysis, inelastic buckling, liquefaction, pile group

Procedia PDF Downloads 136
328 Modelling of a Biomechanical Vertebral System for Seat Ejection in Aircrafts Using Lumped Mass Approach

Authors: R. Unnikrishnan, K. Shankar

Abstract:

In the case of high-speed fighter aircraft, seat ejection is designed mainly for the safety of the pilot in case of an emergency. Strong windblast due to the high velocity of flight is one main difficulty in clearing the tail of the aircraft. The excessive G-forces generated immobilize the pilot and hinder escape. In most cases, seats are ejected out of the aircraft by explosives or by rocket motors attached to the bottom of the seat. Ejection forces are primarily in the vertical direction, with the objective of attaining the maximum possible velocity in a specified period of time. The safe ejection parameters are studied to estimate the critical time of ejection for various geometries and velocities of flight. An equivalent analytical 2-dimensional biomechanical model of the human spine has been developed, consisting of vertebrae and intervertebral discs, using a lumped mass approach. The 24 vertebrae, which comprise the cervical, thoracic and lumbar regions, in addition to the head mass and the pelvis, have been designed as 26 rigid structures, and the intervertebral discs are assumed to be 25 flexible joint structures. The rigid structures are modelled as mass elements and the flexible joints as spring and damper elements. Here, the motions are restricted to the mid-sagittal plane to form a 26-degree-of-freedom system. The equations of motion are derived for the translational movement of the spinal column. An ejection force with a linearly increasing acceleration profile is applied as vertical base excitation to the pelvis. The dynamic vibrational response of each vertebra in the time domain is estimated.
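
A hedged sketch of a lumped-mass chain of this kind follows: rigid masses joined by spring-damper joints, with a linearly increasing base acceleration applied at the pelvis end. Masses, stiffnesses and the ramp rate are assumed round numbers, not the study's anthropometric values.

```python
# A minimal sketch of a lumped-mass spine chain under a linearly increasing base
# acceleration (ejection pulse). All parameter values are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

n = 26                                               # pelvis, 24 vertebrae, head
m = np.full(n, 2.5); m[0], m[-1] = 15.0, 5.0         # assumed segment masses (kg)
k = np.full(n, 3.0e5)                                # joint stiffnesses (N/m); k[0] links pelvis to seat
c = np.full(n, 8.0e2)                                # joint damping (N*s/m)
alpha = 600.0                                        # base acceleration ramp a(t) = alpha*t (m/s^3)

def base(t):                                         # seat displacement and velocity from a(t) = alpha*t
    return alpha * t**3 / 6.0, alpha * t**2 / 2.0

def rhs(t, y):
    u, v = y[:n], y[n:]
    ub, vb = base(t)
    lower_u = np.concatenate(([ub], u[:-1]))         # motion of the body just below each joint
    lower_v = np.concatenate(([vb], v[:-1]))
    f_joint = k * (lower_u - u) + c * (lower_v - v)  # force transmitted upward through joint i
    f = f_joint.copy()
    f[:-1] -= f_joint[1:]                            # reaction from the joint above each mass
    return np.concatenate((v, f / m))

sol = solve_ivp(rhs, (0.0, 0.2), np.zeros(2 * n), max_step=1e-4)
compression = np.max(np.abs(sol.y[n - 1] - sol.y[0]))
print(f"peak head-to-pelvis relative displacement: {compression:.4f} m")
```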

Keywords: biomechanical model, lumped mass, seat ejection, vibrational response

Procedia PDF Downloads 202
327 Multiple Institutional Logics and the Ability of Institutional Entrepreneurs: An Analysis in the Turkish Education Field

Authors: Miraç Savaş Turhan, Ali Danişman

Abstract:

Recently, scholars of new institutional theory have used the institutional logics perspective to explain contradictory practices in modern western societies. Accordingly, distinct institutional logics are embedded in central institutions such as the market, state, democracy, family, and religion. They guide individual and organizational actors and constrain their behaviors in a particular organizational field. Through this perspective, actors are assumed to have a situated, embedded, boundedly intentional, and adaptive role in relation to the structure in their social, cultural and political context. On the other hand, over the last decade there has been an emerging stream of work focusing on the role of actors in creating, maintaining, and changing institutions. Such attempts brought about the concept of institutional entrepreneurs to explain the role of individual actors in relation to institutions. Institutional entrepreneurs are individuals, groups of individuals, organizations or groups of organizations that are able to initiate actions to build, maintain or change institutions. While recent studies in the institutional logics perspective have attempted to explain the roles of entrepreneurial actors who have resources and skills, little is known about the effects of multiple institutional logics on the ability of institutional entrepreneurs. In this study, we aim to find out how multiple institutional logics affect the ability of institutional entrepreneurs during the process of institutional change. We examine this issue in the Turkish education field. While the institutional logics were identified based on previous studies in the education field, the actions taken by the Turkish National Education Ministry from 2003 to 2013 were examined through content analysis. The early results indicate that there are remarkable shifts and contradictions in the ability of the institutional entrepreneur to take actions to change the field, in relation to shifts in the balance of power among the carriers of institutional logics.

Keywords: institutional theory, institutional logics, institutional entrepreneurs, Turkish national education

Procedia PDF Downloads 329
326 Probabilistic Analysis of Bearing Capacity of Isolated Footing using Monte Carlo Simulation

Authors: Sameer Jung Karki, Gokhan Saygili

Abstract:

The allowable bearing capacity of foundation systems is determined by applying a factor of safety to the ultimate bearing capacity. Conventional ultimate bearing capacity calculation routines are based on deterministic input parameters, where the nonuniformity and inhomogeneity of soil and site properties are not accounted for. Hence, the laws of mathematics like probability calculus and statistical analysis cannot be directly applied to foundation engineering. It is assumed that the factor of safety, typically as high as 3.0, incorporates the uncertainty of the input parameters. This factor of safety is estimated based on subjective judgement rather than objective facts. It is an ambiguous term. Hence, a probabilistic analysis of the bearing capacity of an isolated footing on a clayey soil is carried out by using the Monte Carlo simulation method. This simulated model was compared with the traditional discrete model. It was found that the bearing capacity of the soil was higher for the simulated model than for the discrete model. This was verified by performing a sensitivity analysis. As the number of simulations was increased, there was a significant percentage increase of the bearing capacity compared with the discrete bearing capacity. The bearing capacity values obtained by simulation were found to follow a normal distribution. Using the traditional factor of safety of 3, the allowable bearing capacity had a lower probability (0.03717) of occurring in the field, compared to a higher probability (0.15866) when using the simulation-derived factor of safety of 1.5. This means the traditional factor of safety is giving us a bearing capacity that is less likely to occur or be available in the field. This shows the subjective nature of the factor of safety, and hence the probability method is suggested to address the variability of the input parameters in bearing capacity equations.
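
A minimal sketch of the Monte Carlo step follows: the undrained shear strength is sampled as a random variable, the ultimate capacity is computed for each realisation through a simplified undrained relation, and the probability that a given allowable pressure is available is read off the simulated distribution. All statistics are illustrative assumptions.

```python
# A minimal sketch of Monte Carlo simulation of bearing capacity on clay, using a
# simplified undrained relation q_ult = N_c * c_u. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(4)
n_sim = 100_000
c_u = rng.normal(60.0, 12.0, n_sim)                 # undrained shear strength (kPa), assumed COV = 0.2
c_u = np.clip(c_u, 1.0, None)
N_c = 6.2                                           # bearing capacity factor for a square surface footing
q_ult = N_c * c_u                                   # simplified undrained bearing capacity (kPa)

q_det = N_c * 60.0                                  # deterministic value from the mean strength
for fs in (3.0, 1.5):
    q_allow = q_det / fs
    p_short = np.mean(q_ult < q_allow)              # probability the allowable pressure is NOT available
    print(f"FS = {fs}: q_allow = {q_allow:.0f} kPa, P(q_ult < q_allow) = {p_short:.4f}")
```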

Keywords: bearing capacity, factor of safety, isolated footing, montecarlo simulation

Procedia PDF Downloads 160
325 The Science of Successful Intimate Relationship in China: A Discourse Analytic Examination of Sex and Relationships Advice in Ayawawa’s Book

Authors: Hanlei Yang

Abstract:

As a kind of popular culture in modern China, advice books on intimate relationships are turning into an important and controversial site of conflict among neoliberalism, authoritative socialism, market-oriented principles, the science of successful sex and relationships, cosmopolitan notions of nuclear families, and the revitalization of Confucian conservatism and patriarchy. Accelerated modernization and marketization have contributed to great changes in China's culture and social relations, which accordingly reconceptualize and reconstruct family structures and moral ethics, particularly in urban middle-class nuclear families. To comprehend the meaning of the advice book fad for the moral and social order, this research proposes to (i) understand the implications of Ayawawa through discourse analysis and examine how she mobilizes rhetorical devices and cultural resources to present a persuasive and scientific method of managing intimate relationships, (ii) examine the critical role of neoliberalism, post-feminism, and Confucian patriarchy assumed by Ayawawa in her books, and (iii) explore how Ayawawa and her fans engage in establishing a model of intimate relationship and sexual subjectivity ordered by neoliberalism, class identity and authoritative socialism. Finally, this research argues that such a new cultural fad is gradually completed in the process of cooperation and negotiation among the state, commercial institutions, and intellectual elite agents. It helps to further our understanding of (i) routine life under the influence of neoliberalism and modern hegemony, and (ii) the perplexing relationship between China's indigenous cultural forms and global socio-economic and cultural influences in the late modern era.

Keywords: cultural study, intimate relationship, culture sociology, gender study

Procedia PDF Downloads 121
324 Indigenous Dayak People’s Perceptions of Wildlife Loss and Gain Related to Oil Palm Development

Authors: A. Sunkar, A. Saraswati, Y. Santosa

Abstract:

Controversies surrounding the impacts of oil palm plantations have resulted in some heated debates, especially concerning biodiversity loss and indigenous people's well-being. The indigenous Dayak people have generally used wildlife to fulfill their daily needs and thus were assumed to have experienced negative impacts due to oil palm developments within and surrounding their settlement areas. This study was conducted to identify the characteristics of a Dayak community settled around an oil palm plantation, to determine their perceptions of wildlife loss or gain as a result of the development of oil palm plantations, and to identify the determinant characteristics of those perceptions. The research was conducted in March 2018 in Nanga Tayap and Tajok Kayong Villages, which are located around the oil palm plantation of NTYE in Ketapang, West Kalimantan, Indonesia. Data were collected through in-depth structured interviews, using closed and semi-open questionnaires and three-scale Likert statements. Interviews were conducted with 74 respondents using accidental sampling, and respondents were categorized into those who were dependent on oil palm for their livelihoods and those who were not. Data were analyzed using quantitative statistical methods: the Likert scale, the chi-square test, the Spearman test, and the Mann-Whitney test. The research found that the indigenous Dayak people were aware of wildlife species loss and gain since the establishment of the plantation. Nevertheless, wildlife loss did not affect their social, economic, and cultural needs, since they could find substitutes. It was found that prior to the plantation's development, the local Dayak communities were already slowly experiencing livelihood transitions through local village development. The only characteristic of the community that influenced their perceptions of wildlife loss/gain was the level of education.

Keywords: wildlife, oil palm plantations, indigenous Dayak, biodiversity loss and gain

Procedia PDF Downloads 144
323 A Comparative Study of Indoor Radon Concentrations between Dwellings and Workplaces in the Ko Samui District, Surat Thani Province, Southern Thailand

Authors: Kanokkan Titipornpun, Tripob Bhongsuwan, Jan Gimsa

Abstract:

The Ko Samui district of Surat Thani province is located in an area with high amounts of equivalent uranium in the ground surface, which is the source of radon. Our research in the Ko Samui district aimed at comparing the indoor radon concentrations between dwellings and workplaces. Measurements of indoor radon concentrations were carried out in 46 dwellings and 127 workplaces, using CR-39 alpha-track detectors in closed cups. A total of 173 detectors were distributed in 7 sub-districts. The detectors were placed in the bedrooms of dwellings and the workrooms of workplaces. All detectors were exposed to airborne radon for 90 days. After exposure, the alpha tracks were made visible by chemical etching before they were manually counted under an optical microscope. The track densities were assumed to be correlated with the radon concentration levels. We found that the radon concentrations could be well described by a log-normal distribution. Most concentrations (37%) were found in the range between 16 and 30 Bq.m-3. The radon concentrations in dwellings and workplaces varied from a minimum of 11 Bq.m-3 to a maximum of 305 Bq.m-3. The minimum (11 Bq.m-3) and maximum (305 Bq.m-3) values of indoor radon concentration were found in a workplace and a dwelling, respectively. Only for four samples (3%) were the indoor radon concentrations found to be higher than the reference level recommended by the WHO (100 Bq.m-3). The overall geometric mean in the surveyed area was 32.6±1.65 Bq.m-3, which was lower than the worldwide average (39 Bq.m-3). The statistical comparison of the geometric mean indoor radon concentrations between dwellings and workplaces showed that the geometric mean in dwellings (46.0±1.55 Bq.m-3) was significantly higher than in workplaces (28.8±1.58 Bq.m-3) at the 0.05 level. Moreover, our study found that the majority of the bedrooms in dwellings had a closed atmosphere, resulting in poorer ventilation than in most of the workplaces, which had access to air flow through open doors and windows in the daytime. We consider this to be the main reason for the higher geometric mean indoor radon concentration in dwellings compared to workplaces.

Keywords: CR-39 detector, indoor radon, radon in dwelling, radon in workplace

Procedia PDF Downloads 260
322 Dynamic Analysis of Functionally Graded Nano Composite Pipe with PZT Layers Subjected to Moving Load

Authors: Morteza Raminnia

Abstract:

In this study, the dynamic analysis of a functionally graded nano-composite pipe reinforced by single-walled carbon nano-tubes (SWCNTs), with simply supported boundary conditions and subjected to moving mechanical loads, is investigated. The material properties of the functionally graded carbon nano-tube-reinforced composite (FG-CNTRC) are assumed to be graded in the thickness direction and are estimated through a micro-mechanical model. In this paper, the polymeric matrix is considered as an isotropic material, and for the CNTRC, a uniform distribution (UD) and three types of FG distribution patterns of the SWCNT reinforcement are considered. The system equations of motion are derived by using Hamilton's principle under the assumptions of first-order shear deformation theory (FSDT). The thin piezoelectric layers embedded on the inner and outer surfaces of the FG-CNTRC layer act as a distributed sensor and actuator to control the dynamic characteristics of the FG-CNTRC laminated pipe. The modal analysis technique and Newmark's integration method are used to calculate the displacement and dynamic stress of the pipe subjected to moving loads. The effects of various material distributions and velocities of the moving loads on the dynamic behavior of the pipe are presented. The present approach is validated by comparing the numerical results with published numerical results in the literature. The results show that the above-mentioned effects play a very important role in the dynamic behavior of the pipe. The present work yields meaningful results that are of interest to the scientific and engineering community in the field of FGM nano-structures.
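
A hedged sketch of Newmark's constant-average-acceleration scheme applied to a single modal coordinate follows, with a forcing term that mimics a load crossing a simply supported span; the modal properties and load parameters are assumed values, not those of the pipe studied.

```python
# A minimal sketch of Newmark integration (beta = 1/4, gamma = 1/2) for one modal
# coordinate under a moving-load-type forcing. All parameters are assumed values.
import numpy as np

m, c, k = 1.0, 0.8, 4.0e4                       # modal mass, damping, stiffness (assumed)
P0, speed, span = 1.0e3, 20.0, 10.0             # load amplitude (N), speed (m/s), span (m)
dt, beta, gamma = 1e-4, 0.25, 0.5
t = np.arange(0.0, span / speed, dt)
p = P0 * np.sin(np.pi * speed * t / span)       # first-mode participation of the moving load

u = np.zeros_like(t); v = np.zeros_like(t); a = np.zeros_like(t)
a[0] = (p[0] - c * v[0] - k * u[0]) / m
k_hat = k + gamma * c / (beta * dt) + m / (beta * dt**2)

for i in range(len(t) - 1):
    p_hat = (p[i + 1]
             + m * (u[i] / (beta * dt**2) + v[i] / (beta * dt) + a[i] * (1 / (2 * beta) - 1))
             + c * (gamma * u[i] / (beta * dt) + v[i] * (gamma / beta - 1)
                    + a[i] * dt * (gamma / (2 * beta) - 1)))
    u[i + 1] = p_hat / k_hat
    v[i + 1] = (gamma / (beta * dt)) * (u[i + 1] - u[i]) + v[i] * (1 - gamma / beta) \
               + a[i] * dt * (1 - gamma / (2 * beta))
    a[i + 1] = (u[i + 1] - u[i]) / (beta * dt**2) - v[i] / (beta * dt) - a[i] * (1 / (2 * beta) - 1)

print(f"peak modal displacement: {np.max(np.abs(u)):.4e} m")
```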

Keywords: nano-composite, functionally graded material, moving load, active control, PZT layers

Procedia PDF Downloads 394