117 Approach for the Mathematical Calculation of the Damping Factor of Railway Bridges with Ballasted Track
Authors: Andreas Stollwitzer, Lara Bettinelli, Josef Fink
Abstract:
The expansion of the high-speed rail network over the past decades has resulted in new challenges for engineers, including traffic-induced resonance vibrations of railway bridges. Excessive resonance-induced speed-dependent accelerations of railway bridges during high-speed traffic can lead to negative consequences such as fatigue symptoms, distortion of the track, destabilisation of the ballast bed, and potentially even derailment. A realistic prognosis of bridge vibrations during high-speed traffic must not only rely on the right choice of an adequate calculation model for both bridge and train but first and foremost on the use of dynamic model parameters which reflect reality appropriately. However, comparisons between measured and calculated bridge vibrations are often characterised by considerable discrepancies, with the dynamic calculations overestimating the actual responses and therefore leading to uneconomical results. This gap between measurement and calculation constitutes a complex research issue and can be traced to several causes. One major cause is found in the dynamic properties of the ballasted track, more specifically in the persisting, substantial uncertainties regarding the consideration of the ballasted track (mechanical model and input parameters) in dynamic calculations. Furthermore, the discrepancy is particularly pronounced concerning the damping values of the bridge, as conservative values have to be used in the calculations due to normative specifications and a lack of knowledge. Using a large-scale test facility, the analysis of the dynamic behaviour of ballasted track has been a major research topic at the Institute of Structural Engineering/Steel Construction at TU Wien in recent years. This highly specialised test facility is designed for isolated research of the ballasted track's dynamic stiffness and damping properties, independent of the bearing structure.
Several mechanical models for the ballasted track, consisting of one or more continuous spring-damper elements, were developed based on the knowledge gained. These mechanical models can subsequently be integrated into bridge models for dynamic calculations. Furthermore, based on measurements at the test facility, model-dependent stiffness and damping parameters were determined for these mechanical models. As a result, realistic mechanical models of the railway bridge with different levels of detail and sufficiently precise characteristic values are available to bridge engineers. Besides that, this contribution also presents another practical application of such a bridge model: based on the bridge model, determination equations for the damping factor (as Lehr's damping factor) can be derived. This approach constitutes a first-time method that makes the damping factor of a railway bridge calculable. A comparison of this mathematical approach with measured dynamic parameters of existing railway bridges illustrates, on the one hand, the apparent deviation between normatively prescribed and in-situ measured damping factors. On the other hand, it also shows that a new approach which makes it possible to calculate the damping factor provides results that are close to reality and thus offers potential for minimising the discrepancy between measurement and calculation.
Keywords: ballasted track, bridge dynamics, damping, model design, railway bridges
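The abstract does not reproduce the determination equations themselves. As a minimal illustration of the target quantity, Lehr's damping factor of a single-degree-of-freedom spring-mass-damper idealisation is ζ = c / (2√(km)); the sketch below uses illustrative mass, stiffness and damping values, not figures from the study.

```python
import math

def lehr_damping(m: float, k: float, c: float) -> float:
    """Lehr's damping factor (damping ratio) zeta = c / (2*sqrt(k*m))
    for a single-degree-of-freedom spring-mass-damper idealisation."""
    return c / (2.0 * math.sqrt(k * m))

def natural_frequency_hz(m: float, k: float) -> float:
    """Undamped natural frequency f0 = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2.0 * math.pi)

# Illustrative values only (not from the paper): a bridge deck idealised
# with m = 3.0e5 kg, k = 1.2e8 N/m, c = 7.6e5 N*s/m.
zeta = lehr_damping(3.0e5, 1.2e8, 7.6e5)   # dimensionless damping factor
f0 = natural_frequency_hz(3.0e5, 1.2e8)    # first natural frequency in Hz
```

The value c = 2√(km) recovers ζ = 1 (critical damping), which is a quick sanity check on the formula.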
Procedia PDF Downloads 164
116 Accelerated Carbonation of Construction Materials by Using Slag from Steel and Metal Production as Substitute for Conventional Raw Materials
Authors: Karen Fuchs, Michael Prokein, Nils Mölders, Manfred Renner, Eckhard Weidner
Abstract:
Because of its high CO₂ emissions, the energy consumption for the production of sand-lime bricks is of great concern. In particular, the production of quicklime from limestone and the energy consumed for hydrothermal curing contribute to high CO₂ emissions. Hydrothermal curing is carried out under a saturated steam atmosphere at about 15 bar and 200°C for 12 hours. Therefore, we are investigating the opportunity to replace quicklime and sand in the production of building materials with different types of slag, a calcium-rich waste from steel production. We are also investigating the possibility of substituting conventional hydrothermal curing with CO₂ curing. Six different slags (Linz-Donawitz (LD), ferrochrome (FeCr), ladle (LS), stainless steel (SS), ladle furnace (LF), electric arc furnace (EAF)) provided by "thyssenkrupp MillServices & Systems GmbH" were ground at "Loesche GmbH". Cylindrical blocks with a diameter of 100 mm were pressed at 12 MPa. The composition of the blocks varied between pure slag and mixtures of slag and sand. The effects of pressure, temperature, and time on the CO₂ curing process were studied in a 2-liter high-pressure autoclave. Pressures between 0.1 and 5 MPa, temperatures between 25 and 140°C, and curing times between 1 and 100 hours were considered. The quality of the CO₂-cured blocks was determined from compressive strength measurements carried out by "Ruhrbaustoffwerke GmbH & Co. KG". The degree of carbonation was determined by total inorganic carbon (TIC) and X-ray diffraction (XRD) measurements. The pH trends in the cross-section of the blocks were monitored using phenolphthalein as a liquid pH indicator. The parameter set that yielded the best-performing material was tested on all slag types. In addition, the method was scaled up to steel slag-based building blocks (240 mm x 115 mm x 60 mm) provided by "Ruhrbaustoffwerke GmbH & Co. KG" and CO₂-cured in a 20-liter high-pressure autoclave.
The results show that CO₂ curing of building blocks consisting of pure wetted LD slag leads to severe cracking of the cylindrical specimens: the high CO₂ uptake causes an expansion of the specimens. However, if LD slag is used to replace quicklime completely and sand only proportionally, dimensionally stable bricks with high compressive strength are produced. The tests to determine the optimum pressure and temperature show 2 MPa and 50°C to be promising parameters for the CO₂ curing process. At these parameters and after 3 h, the compressive strength of LD slag blocks reaches the highest average value of almost 50 N/mm². This is more than double that of conventional sand-lime bricks. Longer CO₂ curing times do not result in higher compressive strengths. XRD and TIC measurements confirmed the formation of carbonates. All tested slag-based bricks show higher compressive strengths compared to conventional sand-lime bricks. However, the type of slag has a significant influence on the compressive strength values. The results of the tests in the 20-liter plant agreed well with those of the 2-liter tests. With its comparatively moderate operating conditions, the CO₂ curing process has a high potential for saving CO₂ emissions.
Keywords: CO₂ curing, carbonation, CCU, steel slag
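The degree of carbonation reported via TIC can be expressed against a Steinour-type estimate of the maximum possible CO₂ uptake from the binder's oxide composition. This is a generic, hedged sketch: the coefficients are simply the CO₂/CaO and CO₂/MgO molar-mass ratios, and the slag composition and TIC value below are invented for illustration, not taken from the study.

```python
def max_co2_uptake(cao_pct: float, mgo_pct: float) -> float:
    """Theoretical CO2 uptake in g CO2 per 100 g dry solid, using the
    molar-mass ratios CO2/CaO = 44.01/56.08 ~ 0.785 and
    CO2/MgO = 44.01/40.30 ~ 1.091 (Steinour-type estimate)."""
    return 0.785 * cao_pct + 1.091 * mgo_pct

def carbonation_degree(tic_co2_pct: float, cao_pct: float, mgo_pct: float) -> float:
    """Measured TIC-derived CO2 content divided by the theoretical maximum."""
    return tic_co2_pct / max_co2_uptake(cao_pct, mgo_pct)

# Hypothetical LD-slag composition: 45% CaO, 6% MgO, and a TIC
# measurement equivalent to 14% CO2 by mass.
degree = carbonation_degree(14.0, 45.0, 6.0)   # fraction of theoretical uptake
```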
Procedia PDF Downloads 104
115 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea
Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal
Abstract:
Tectonism-induced tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built-up infrastructure such as roads, bridges, buildings and other property is a collateral consequence. Appropriate planning, based on proper evaluation and assessment of the potential level of earthquake hazard at a site, must precede development with a view to safeguarding people's welfare, infrastructure and other property. The resulting information can assist in minimising earthquake risk and can also foster appropriate construction design and the formulation of building codes for a particular site. Different disciplines adopt different approaches to assessing and monitoring earthquake hazard throughout the world. For the present study, GIS and Remote Sensing capabilities were utilised to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common factors that were assessed and integrated within a GIS environment, coupled with seismicity data layers such as Peak Ground Acceleration (PGA), historical earthquake magnitude and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ) culminating in an earthquake hazard zonation of the study sites. Liquefaction can eventuate in the aftermath of severe ground shaking given amenable site soil conditions, geology and geomorphology. These site conditions, the wave propagation media, were assessed to identify the potential zones. The precept is that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface.
As it propagates, it passes through certain geological or geomorphological and specific soil features, and these features, according to their strength/stiffness/moisture content, amplify or attenuate the wave on its way to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built-up infrastructure. For earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria Evaluation (MCE) with Saaty's Analytical Hierarchy Process (AHP) was adopted for this study. This GIS-based technique involves the integration of several factors (thematic layers) that can potentially contribute to earthquake-triggered liquefaction. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction, and the weightage and ranking assigned to each factor are normalised with the AHP technique. The spatial analysis tools, i.e., Raster Calculator, Reclassify and overlay analysis in ArcGIS 10 software, were mainly employed in the study. The final outputs of LPZ and earthquake hazard zones were reclassified into 'Very High', 'High', 'Moderate', 'Low' and 'Very Low' to indicate levels of hazard within the study region.
Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism
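The AHP weighting and weighted-overlay step described above can be sketched as follows. The 3x3 pairwise comparison matrix and the toy 2x2 "rasters" (geology, geomorphology, PGA) are placeholders invented for illustration, not the study's factors or scores.

```python
def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalise each column of the
    pairwise comparison matrix, then average along the rows (the
    standard column-normalisation shortcut)."""
    n = len(pairwise)
    col_sums = [sum(row[j] for row in pairwise) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def weighted_overlay(rasters, weights):
    """Cell-wise weighted sum of equally sized class rasters -- what an
    MCE overlay in a raster calculator does."""
    rows, cols = len(rasters[0]), len(rasters[0][0])
    return [[sum(w * r[i][j] for w, r in zip(weights, rasters))
             for j in range(cols)] for i in range(rows)]

# Hypothetical judgements: geology twice as important as geomorphology,
# four times as important as PGA (a perfectly consistent matrix).
pairwise = [[1.0, 2.0, 4.0],
            [0.5, 1.0, 2.0],
            [0.25, 0.5, 1.0]]
w = ahp_weights(pairwise)        # ~[0.571, 0.286, 0.143]

geology  = [[5, 3], [1, 2]]      # reclassified 1-5 hazard scores
geomorph = [[4, 2], [1, 3]]
pga      = [[5, 5], [2, 2]]
hazard = weighted_overlay([geology, geomorph, pga], w)
```

The resulting continuous hazard surface would then be reclassified into the five classes ('Very High' down to 'Very Low') as in the study.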
Procedia PDF Downloads 266
114 Development and Application of Humidity-Responsive Controlled-Release Active Packaging Based on Electrospinning Nanofibers and In Situ Growth Polymeric Film in Food Preservation
Authors: Jin Yue
Abstract:
Fresh produce, especially fruits, vegetables, meat and aquatic products, has a limited shelf life and is highly susceptible to deterioration. Essential oils (EOs) extracted from plants have excellent antioxidant and broad-spectrum antibacterial activities and can serve as natural food preservatives. However, EOs are volatile, water-insoluble, pungent, and decompose easily under light and heat. Many approaches have been developed to improve the solubility and stability of EOs, such as polymeric films, coatings, nanoparticles, nano-emulsions and nanofibers. The construction of active packaging films that can incorporate EOs with high loading efficiency and release them in a controlled manner has received great attention, yet it remains difficult to achieve accurate release of antibacterial compounds at specific target locations in active packaging. In this research, a relative humidity-responsive packaging material was designed, employing the electrospinning technique to fabricate a nanofibrous film loaded with 4-terpineol/β-cyclodextrin inclusion complexes (4-TA/β-CD ICs). Functioning as an innovative food packaging material, the film demonstrated commendable attributes, including a pleasing appearance, thermal stability, mechanical properties, and effective barrier properties. The incorporation of the inclusion complexes greatly enhanced the antioxidant and antibacterial activity of the film, particularly against Shewanella putrefaciens, with an inhibitory efficiency of up to 65%. Crucially, the film realised controlled release of 4-TA under 98% relative humidity: water molecules plasticise the polymers, swell the polymer chains, and disrupt the hydrogen bonds within the cyclodextrin inclusion complex. This film, with its long-term antimicrobial effect, successfully extended the shelf life of Litopenaeus vannamei shrimp to 7 days at 4 °C.
To further improve the loading efficiency and long-acting release of EOs, we synthesised γ-cyclodextrin metal-organic frameworks (γ-CD-MOFs) and then efficiently anchored them on a chitosan-cellulose (CS-CEL) composite film by an in situ growth method for the controlled release of carvacrol (CAR). We found that the growth efficiency of γ-CD-MOFs was highest when the concentration of the CEL dispersion was 5%. The anchoring of γ-CD-MOFs on the CS-CEL film significantly increased its surface area from 1.0294 m²/g to 43.3458 m²/g. Molecular docking and ¹H NMR spectra indicated that γ-CD-MOF has a better complexing and stabilising ability for CAR molecules than γ-CD. In addition, the release of CAR reached 99.71 ± 0.22% on the 10th day, while under 22% RH the release plateaued at 14.71 ± 4.46%. The inhibition rate of this film against E. coli, S. aureus and B. cinerea was more than 99%, and it extended the shelf life of strawberries to 7 days. By combining the merits of natural biopolymers and MOFs, this active packaging offers great potential as a substitute for traditional packaging materials.
Keywords: active packaging, antibacterial activity, controlled release, essential oils, food quality control
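Release curves of the kind described above are commonly summarised with an empirical Korsmeyer-Peppas fit, Mt/M∞ = k·tⁿ, where the exponent n indicates the release mechanism. The sketch below fits k and n by ordinary least squares in log-log space on synthetic data, not on the measured carvacrol release values.

```python
import math

def fit_korsmeyer_peppas(times, fractions):
    """Fit Mt/Minf = k * t**n by linear regression of
    log(fraction) on log(time); returns (k, n)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(f) for f in fractions]
    m = len(xs)
    x_mean = sum(xs) / m
    y_mean = sum(ys) / m
    n = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)
    k = math.exp(y_mean - n * x_mean)
    return k, n

# Synthetic release curve generated with k = 0.12, n = 0.45
# (a Fickian-like exponent); times in days, fractions released.
ts = [1, 2, 4, 6, 8, 10]
fs = [0.12 * t ** 0.45 for t in ts]
k_hat, n_hat = fit_korsmeyer_peppas(ts, fs)
```

On noiseless power-law data the regression recovers the generating parameters exactly, which makes the fit easy to sanity-check.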
Procedia PDF Downloads 64
113 Vibration Based Structural Health Monitoring of Connections in Offshore Wind Turbines
Authors: Cristobal García
Abstract:
The visual inspection of bolted joints in wind turbines is dangerous, expensive, and impractical: the platform cannot be accessed by workboat in certain sea-state conditions, and transporting maintenance technicians to offshore platforms located far from the coast entails high costs, especially if helicopters are involved. Consequently, wind turbine operators need simpler and less demanding techniques for the analysis of bolt tightening. Vibration-based structural health monitoring is one of the oldest and most widely used means of monitoring the health of onshore and offshore wind turbines. The core of this work is to find out whether the modal parameters can be used efficiently as key performance indicators (KPIs) for the assessment of joint bolts in a 1:50 scale tower of a floating offshore wind turbine (12 MW). A non-destructive vibration test is used to extract the vibration signals of the towers with different damage statuses. The procedure can be summarised in three consecutive steps. First, an artificial excitation is introduced by means of a commercial shaker mounted on the top of the tower. Second, the vibration signals of the towers are recorded for 8 s at a sampling rate of 20 kHz using an array of commercial accelerometers (Endevco, 44A16-1032). Third, the natural frequencies, damping, and overall vibration mode shapes are calculated using the software Siemens LMS 16A. Experiments show that the natural frequencies, damping, and mode shapes of the tower depend directly on the fixing conditions of the tower, and therefore variations in these parameters are a good indicator for the estimation of the static axial force acting in the bolt.
Thus, the proposed vibration-based structural method can potentially be used as a diagnostic tool to evaluate the tightening torques of bolted joints, with the advantages of being an economical, straightforward, and multidisciplinary approach that can be applied to different typologies of connections by operation and maintenance technicians. In conclusion, TSI, in collaboration with the consortium of the FIBREGY project, is conducting innovative research in which vibrations are utilised for the estimation of the tightening torque of a 1:50 scale steel-based tower prototype. The findings of this research, carried out in the context of FIBREGY, have implications for the assessment of bolted joint integrity in multiple types of connections, such as tower-to-nacelle, modular, tower-to-column, and tube-to-tube. The EU-funded FIBREGY project (H2020, grant number 952966) will evaluate the feasibility of the design and construction of a new generation of marine renewable energy platforms using lightweight FRP materials in certain structural elements (e.g., tower, floating platform). The FIBREGY consortium is composed of 11 partners specialised in the offshore renewable energy sector and is funded partially by the H2020 programme of the European Commission with an overall budget of 8 million Euros.
Keywords: SHM, vibrations, connections, floating offshore platform
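One standard way to extract the natural frequency and Lehr damping from a free-decay vibration record, consistent with the modal-parameter approach above, is the logarithmic decrement of successive positive peaks: δ = ln(a₀/aₙ)/n and ζ = δ/√(4π² + δ²). The signal below is synthetic (12 Hz, ζ = 0.02, sampled at 20 kHz as in the abstract), not FIBREGY measurement data.

```python
import math

def free_decay(f_hz, zeta, fs, duration):
    """Synthetic free-decay response of a damped SDOF oscillator."""
    wn = 2.0 * math.pi * f_hz
    wd = wn * math.sqrt(1.0 - zeta ** 2)
    n = int(fs * duration)
    return [math.exp(-zeta * wn * i / fs) * math.cos(wd * i / fs)
            for i in range(n)]

def peaks(signal):
    """Indices and values of positive local maxima (interior samples)."""
    return [(i, v) for i, v in enumerate(signal[1:-1], start=1)
            if v > signal[i - 1] and v > signal[i + 1] and v > 0]

def damping_from_decay(signal, fs):
    """Estimate (natural frequency in Hz, Lehr damping factor) via the
    logarithmic decrement between the first and last detected peak."""
    p = peaks(signal)
    (i0, a0), (i1, a1) = p[0], p[-1]
    cycles = len(p) - 1
    delta = math.log(a0 / a1) / cycles
    zeta = delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
    f_hz = cycles * fs / (i1 - i0)
    return f_hz, zeta

sig = free_decay(12.0, 0.02, 20000.0, 1.0)
f_est, zeta_est = damping_from_decay(sig, 20000.0)
```

In practice the decay record would come from switching off the shaker excitation; a change in bolt preload shifts both f and ζ, which is what makes them usable as KPIs.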
Procedia PDF Downloads 125
112 Spatial Organization of Cells over the Process of Pellicle Formation by Pseudomonas alkylphenolica KL28
Authors: Kyoung Lee
Abstract:
Numerous aerobic bacteria have the ability to form multicellular communities at the air-liquid (A-L) interface as a biofilm called a pellicle. Pellicles at the A-L interface benefit from access to oxygen from the air and nutrients from the liquid, while buoyancy of the cells is provided by the high surface tension at the interface. The formation of pellicles is thus an adaptive advantage for utilising excess nutrients in standing culture, where oxygen depletion readily sets in due to rapid cell growth. In natural environments, pellicles are commonly observed on the surface of lakes or ponds contaminated with pollutants. Previously, we have shown that, when cultured in standing LB media, the alkylphenol-degrading bacterium Pseudomonas alkylphenolica KL28 forms pellicles with a diameter of 0.3-0.5 mm and a thickness of ca. 40 µm. The pellicles have the unique features of flatness and unusual rigidity. In this study, the biogenesis of these circular pellicles has been investigated by observing the cell organization at early stages of pellicle formation and the cell arrangements within the pellicle, providing a clue to the highly organized cellular arrangement adapted to the air-liquid niche. Here, we first monitored the developmental pattern of the pellicle from a monolayer to a multicellular organization. Pellicles were shaped by the controlled growth of constituent cells, which accumulate extracellular polymeric substance. The initial two-dimensional growth transitioned to multilayers under the constraint force of the accumulated self-produced extracellular polymeric substance. Experiments showed that pellicles are formed by clonal growth, even with knock-outs of the genes for flagella and pilus formation. In contrast, mutants in the epm gene cluster for alginate-like polymer biosynthesis were incompetent in the cell alignment required for the initial two-dimensional growth of pellicles.
Electron microscopic and confocal laser scanning microscopic studies showed that the fully matured structures are highly packed with matrix-encased cells in special arrangements: the cells on the surface of the pellicle lie relatively flat, while those inside are longitudinally cross-packed. HPLC analysis of the extrapolysaccharide (EPS) hydrolysate from colonies grown on LB agar showed a composition of L-fucose, L-rhamnose, D-galactosamine, D-glucosamine, D-galactose, D-glucose and D-mannose. The hydrolysate from pellicles showed a similar neutral and amino sugar profile but lacked galactose. Furthermore, uronic acid analysis of the EPS hydrolysates by HPLC showed that mannuronic acid was detected in pellicles but not in colonies, indicating that the epm-derived polymer is critical for pellicle formation, as proved by the epm mutants. This study verified that for the circular pellicle architecture, P. alkylphenolica KL28 cells utilise EPS building blocks different from those used for colony construction. These results indicate that P. alkylphenolica KL28 is a clever architect that dictates unique cell arrangements with selected EPS matrix material to construct a sophisticated building: circular biofilm pellicles.
Keywords: biofilm, matrix, pellicle, pseudomonas
Procedia PDF Downloads 152
111 One Species into Five: Nucleo-Mito Barcoding Reveals Cryptic Species in 'Frankliniella Schultzei Complex': Vector for Tospoviruses
Authors: Vikas Kumar, Kailash Chandra, Kaomud Tyagi
Abstract:
The insect order Thysanoptera comprises small insects commonly called thrips. Among insect vectors, only thrips are capable of transmitting tospoviruses (genus Tospovirus, family Bunyaviridae), which affect various crops. Currently, fifteen species of the subfamily Thripinae (Thripidae) have been reported as vectors of tospoviruses. Frankliniella schultzei, which is reported to act as a vector for at least five tospoviruses, has been suspected to be a species complex comprising more than one species. It is one of the historical unresolved issues: two species, F. schultzei Trybom and F. sulphurea Schmutz, were erected from South Africa and Sri Lanka, respectively. These two species were considered valid until 1968, when sulphurea was treated as a colour morph (pale form) and synonymised under schultzei (dark form). However, the two have continued to be considered valid species by some thrips workers. Parallel studies have indicated that the brown form of schultzei is a vector for tospoviruses while the yellow form is a non-vector, although recent studies have also documented yellow populations as vectors. In view of all these facts, it is highly important to establish whether these colour forms represent true species or merely different populations with different vector capacities, and whether there is hidden diversity in the 'Frankliniella schultzei species complex'. In this study, we examine the 'Frankliniella schultzei species complex' through a molecular lens, with DNA data from India, Australia and Africa. A total of fifty-five specimens were collected from diverse locations in India and Australia. We generated molecular data using partial fragments of the mitochondrial cytochrome c oxidase I gene (mtCOI) and the 28S rRNA gene. The COI dataset comprised seventy-four sequences, of which fifty-five were generated in the current study and the others retrieved from NCBI.
All four tree construction methods, neighbor-joining, maximum parsimony, maximum likelihood and Bayesian analysis, yielded the same tree topology and recovered five cryptic species with high genetic divergence. For rDNA, there were forty-five sequences, of which thirty-nine were generated in the current study and the others retrieved from NCBI. The four tree-building methods yielded four cryptic species with high bootstrap support values/posterior probabilities. Here we could not retrieve one cryptic species from South Africa, as we could not generate rDNA data from South Africa and rDNA sequences from the African region were not available in the database. The results of multiple species delimitation methods (barcode index numbers, automatic barcode gap discovery, general mixed Yule-coalescent, and Poisson tree processes) also supported the phylogenetic data and produced 5 and 4 Molecular Operational Taxonomic Units (MOTUs) for the mtCOI and 28S datasets, respectively. These results indicate the likelihood that F. sulphurea may be a valid species; however, more morphological and molecular data are required on specimens from the type localities of these two species, and comparison with type specimens.
Keywords: DNA barcoding, species complex, thrips, species delimitation
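The distance side of mtCOI barcoding can be illustrated with uncorrected pairwise p-distances and a naive single-linkage grouping into MOTUs. The 20-bp sequences and the 6% threshold below are toy values for illustration only, not the study's COI data or any of its actual delimitation methods (BIN, ABGD, GMYC, PTP).

```python
def p_distance(a: str, b: str) -> float:
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cluster_motus(seqs, threshold=0.03):
    """Single-linkage clustering: sequences closer than the threshold
    to any member of a group join (and may merge) that group."""
    motus = []
    for s in seqs:
        hits = [m for m in motus
                if any(p_distance(s, t) < threshold for t in m)]
        merged = [s]
        for m in hits:
            merged.extend(m)
            motus.remove(m)
        motus.append(merged)
    return motus

seqs = ["ACGTACGTACGTACGTACGT",
        "ACGTACGTACGTACGTACGA",   # 1 difference  -> p = 0.05
        "TTGTACGTACGTACGTACGT"]   # 2 differences -> p = 0.10 from the first
motus = cluster_motus(seqs, threshold=0.06)   # two MOTUs for these toys
```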
Procedia PDF Downloads 128
110 Improving the Efficiency of a High Pressure Turbine by Using Non-Axisymmetric Endwall: A Comparison of Two Optimization Algorithms
Authors: Abdul Rehman, Bo Liu
Abstract:
Axial flow turbines are commonly designed with high loads that generate strong secondary flows and result in high secondary losses, which contribute almost 30% to 50% of the total losses. Non-axisymmetric endwall profiling is one of the passive control techniques used to reduce the secondary flow loss. In this paper, non-axisymmetric endwall profile construction and optimization for the stator endwalls are presented to improve the efficiency of a high pressure turbine. The commercial code NUMECA Fine/Design3D coupled with Fine/Turbo was used for the numerical investigation, the design of experiments and the optimization. All flow simulations were conducted using steady RANS with Spalart-Allmaras as the turbulence model. The non-axisymmetric endwalls of the stator hub and shroud were created using a perturbation law based on Bezier curves, with each cut, carrying multiple control points, created along the virtual streamlines in the blade channel. For the design of experiments, each sample was generated from values chosen automatically for the control points defined during parameterization. The optimization was performed with two algorithms: a stochastic algorithm and a gradient-based algorithm. For the stochastic case, a genetic algorithm based on an artificial neural network was used as the optimization method in order to reach the global optimum; the evaluation of successive design iterations was performed with the artificial neural network prior to the flow solver. For the second case, the conjugate gradient algorithm with a three-dimensional CFD flow solver was used to systematically vary a free-form parameterization of the endwall. This method is efficient and less time-consuming, as it requires derivative information of the objective function. The objective function was to maximize the isentropic efficiency of the turbine while keeping the mass flow rate constant.
The performance was quantified using a multi-objective function. In addition to these two classes of optimization method, there were four optimization cases: the hub only, the shroud only, the combination of hub and shroud, and a fourth case in which the shroud endwall was optimized using the already optimized hub endwall geometry. The hub optimization resulted in an increase in efficiency due to more homogeneous inlet conditions for the rotor; the adverse pressure gradient was reduced, but the total pressure loss in the vicinity of the hub increased. The shroud optimization resulted in an increase in efficiency, with total pressure loss and entropy both reduced. The combined hub-and-shroud case did not reproduce the gains achieved in the individual hub and shroud cases, which may be due to the fact that there were too many control variables. The fourth case showed the best result, because the optimized hub was used as the initial geometry for optimizing the shroud: the efficiency increased more than in the individual optimization cases, with a mass flow rate equal to that of the baseline turbine design. Finally, the results of the artificial neural network and the conjugate gradient method were compared.
Keywords: artificial neural network, axial turbine, conjugate gradient method, non-axisymmetric endwall, optimization
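The contrast between the two optimizer families can be sketched on a toy smooth objective: a random search stands in for the stochastic (GA/ANN) loop, and a finite-difference steepest descent stands in for the gradient-based conjugate-gradient method. Everything below is illustrative; in the study itself the objective is a CFD-computed isentropic efficiency over endwall control points, not a closed-form bowl.

```python
import random

def objective(x, y):
    """Smooth bowl with minimum at (1.5, -0.5); stands in for the
    (negated) isentropic efficiency over two endwall control points."""
    return (x - 1.5) ** 2 + 2.0 * (y + 0.5) ** 2

def random_search(n_evals, seed=0):
    """Stochastic baseline: best of n_evals uniform samples."""
    rng = random.Random(seed)
    return min(objective(rng.uniform(-3, 3), rng.uniform(-3, 3))
               for _ in range(n_evals))

def gradient_descent(x, y, lr=0.2, steps=50, h=1e-6):
    """Gradient-based baseline: central finite differences give the
    derivative information that makes the method cheap per iteration."""
    evals = 0
    for _ in range(steps):
        gx = (objective(x + h, y) - objective(x - h, y)) / (2 * h)
        gy = (objective(x, y + h) - objective(x, y - h)) / (2 * h)
        evals += 4
        x, y = x - lr * gx, y - lr * gy
    return objective(x, y), evals

best_random = random_search(200)              # 200 objective evaluations
best_grad, n_evals = gradient_descent(0.0, 0.0)  # 200 evaluations as well
```

With the same evaluation budget, the derivative-using method reaches the minimum to machine precision on this smooth toy problem, mirroring the abstract's point that the gradient approach is less time-consuming, while the stochastic approach needs no gradients and is better suited to multimodal landscapes.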
Procedia PDF Downloads 225
109 Experimental Research of Canine Mandibular Defect Construction with the Controlled Meshy Titanium Alloy Scaffold Fabricated by Electron Beam Melting Combined with BMSCs-Encapsulating Chitosan Hydrogel
Authors: Wang Hong, Liu Chang Kui, Zhao Bing Jing, Hu Min
Abstract:
Objective: We observed the repair effect on canine mandibular defects of a meshy Ti6Al4V scaffold fabricated by electron beam melting (EBM) combined with bone marrow mesenchymal stem cells (BMMSCs) encapsulated in a chitosan hydrogel. Method: Meshy titanium scaffolds were prepared by EBM of commercial Ti6Al4V powder. The length of the scaffolds was 24 mm, the width 5 mm and the height 8 mm. The pore size and porosity were evaluated by scanning electron microscopy (SEM). Chitosan/Bio-Oss hydrogel was prepared from chitosan, β-sodium glycerophosphate and Bio-Oss powder. BMMSCs were harvested from canine iliac crests, seeded in the titanium scaffolds, and encapsulated in the chitosan/Bio-Oss hydrogel. The viability of the BMMSCs was evaluated with cell counting kit-8 (CCK-8). The osteogenic differentiation ability was evaluated by alkaline phosphatase (ALP) activity and the gene expression of OC, OPN and CoⅠ. The combination was performed by injecting the BMMSCs/chitosan/Bio-Oss hydrogel into the meshy Ti6Al4V scaffolds, where it solidified. Box-shaped bone defects 24 mm long were made at the mid-portion of the mandible of adult beagles. The defects were randomly filled with BMMSCs/chitosan/Bio-Oss + titanium, chitosan/Bio-Oss + titanium, or titanium alone, with autogenous iliac crest grafts serving as the control group in 3 beagles. Radionuclide bone imaging was used to monitor new bone tissue at 2, 4, 8 and 12 weeks after surgery. CT examination was made on the day of surgery and at 4, 12 and 24 weeks after surgery. The animals were sacrificed at 4, 12 and 24 weeks after surgery, and bone formation was evaluated by histology and micro-CT. Results: The pores of the scaffolds were interconnected; the pore size was about 1 mm and the average porosity about 76%. The pore size of the hydrogel was 50-200 µm and its average porosity approximately 90%. The hydrogel solidified within 10 minutes at 37 ℃.
The viability and osteogenic differentiation ability of the BMMSCs were not affected by the titanium scaffolds or the hydrogel. Radionuclide bone imaging showed an increasing tendency of revascularization and bone regeneration in all groups at 2, 4 and 8 weeks after the operation, with no further changes at 12 weeks. The tendency was more obvious in the BMMSCs/chitosan/Bio-Oss + titanium group and the autogenous group. CT, micro-CT and histology showed that new bone formation increased over time. More new bone was regenerated in the BMMSCs/chitosan/Bio-Oss + titanium group and the autogenous group than in the other two groups. At 24 weeks, the autogenous group had achieved bone union. The BMMSCs/chitosan/Bio-Oss group showed extensive new bone formation around the scaffolds and more new bone inside the central pores of the scaffolds than the chitosan/Bio-Oss + titanium and titanium groups; the difference was significant. Conclusion: The titanium scaffolds fabricated by EBM had a controlled porous structure, good bone conduction and biocompatibility. The chitosan/Bio-Oss hydrogel had injectable plasticity, thermosensitive properties and good biocompatibility. The meshy Ti6Al4V scaffold produced by EBM, combined with BMMSCs encapsulated in chitosan hydrogel, had a good capacity for mandibular bone defect repair.
Keywords: mandibular reconstruction, tissue engineering, electron beam melting, titanium alloy
Procedia PDF Downloads 445
108 The Significance of Cultural Risks for Western Consultants Executing Gulf Cooperation Council Megaprojects
Authors: Alan Walsh, Peter Walker
Abstract:
Differences in commercial, professional and personal cultural traditions between western consultants and project sponsors in the Gulf Cooperation Council (GCC) region are potentially significant in the workplace, and this can impact project outcomes. These cultural differences can, for example, result in conflict amongst senior managers, which can negatively impact the megaproject. New entrants to the GCC often experience ‘culture shock’ as they attempt to integrate into their unfamiliar environments. Megaprojects are unique ventures with individual project characteristics, which need to be considered when managing their associated risks. Megaproject research to date has mostly ignored the significance of the absence of cultural congruence in the GCC, which is surprising considering the large number of megaprojects in various stages of construction there. An initial step in dealing with cultural issues is to acknowledge culture as a significant risk factor (SRF). This paper seeks to understand how critical it is for western consultants to address these risks. It considers the cultural barriers that exist between GCC sponsors and western consultants and examines the cultural distance between the key actors. Initial findings suggest the presence, to a certain extent, of ethnocentricity. Other cultural clashes arise out of a lack of appreciation of the customs, practices and traditions of ‘the Other’, such as the need to avoid public humiliation and the significance of hierarchical rankings. The concept and significance of culture shock as part of the integration process for new arrivals are also considered: culture shock describes the state of anxiety and frustration resulting from immersion in a culture distinctly different from one's own. There are potentially substantial project risks associated with underestimating the process of cultural integration.
This paper examines two distinct but intertwined issues: the societal and professional culture differences associated with expatriate assignments. A case study examines the cultural congruences between GCC sponsors and American, British and German consultants over a ten-year cycle. This provides indicators as to which nationalities encountered the most profound cultural issues and the nature of these. GCC megaprojects are typically intensive, fast-track, demanding ventures, where consultant turnover is high. The study finds that building trust-filled relationships is key to successful project team integration and therefore to successful megaproject execution. Findings indicate that both professional and social inclusion processes have steep learning curves. Traditional risk management practice is to approach any uncertainty in a structured way to mitigate the potential impact on project outcomes. This research highlights cultural risk as a significant factor in the management of GCC megaprojects. These risks, arising from high staff turnover, typically include loss of project knowledge, delays to the project, and the cost and disruption of replacing staff. This paper calls for cultural risk to be recognised as an SRF, as the first step to developing risk management strategies, and to reduce staff turnover for western consultants in GCC megaprojects.
Keywords: western consultants in megaprojects, national culture impacts on GCC megaprojects, significant risk factors in megaprojects, professional culture in megaprojects
Procedia PDF Downloads 133
107 European Electromagnetic Compatibility Directive Applied to Astronomical Observatories
Authors: Oibar Martinez, Clara Oliver
Abstract:
The Cherenkov Telescope Array Project (CTA) aims to build two different observatories of Cherenkov Telescopes, located in Cerro del Paranal, Chile, and La Palma, Spain. These facilities are used in this paper as a case study to investigate how to apply standard Directives on Electromagnetic Compatibility to astronomical observatories. Cherenkov Telescopes are able to provide valuable information from both Galactic and Extragalactic sources by measuring Cherenkov radiation, which is produced by particles which travel faster than the speed of light in the atmosphere. The construction requirements demand compliance with the European Electromagnetic Compatibility Directive. The largest telescopes of these observatories, called Large-Sized Telescopes (LSTs), are high precision instruments with advanced photomultipliers able to detect the faint sub-nanosecond blue light pulses produced by Cherenkov radiation. They have a 23-meter parabolic reflective surface. This surface focuses the radiation on a camera composed of an array of high-speed photosensors which are highly sensitive to radio spectrum pollution. The camera has a field of view of about 4.5 degrees and has been designed for maximum compactness and lowest weight, cost and power consumption. Each pixel incorporates a photo-sensor able to discriminate single photons and the corresponding readout electronics. The first LST has already been commissioned and is intended to be operated as a service to the scientific community. Because of this, it must comply with a series of reliability and functional requirements and must have a Conformité Européenne (CE) marking. This demands compliance with Directive 2014/30/EU on electromagnetic compatibility. The main difficulty in accomplishing this goal resides in the fact that CE marking setups and procedures were implemented for industrial products, whereas no clear protocols have been defined for scientific installations.
In this paper, we aim to give an answer to the question of how the directive should be applied to our installation to guarantee the fulfillment of all the requirements and the proper functioning of the telescope itself. Experts in optics and electromagnetism were both needed to make these kinds of decisions and to adapt tests, which were designed for equipment of limited dimensions, to large scientific plants. An analysis of the elements and configurations most likely to be affected by external interferences and those that are most likely to cause the maximum disturbances was also performed. Obtaining the Conformité Européenne mark requires knowing what the harmonized standards are and how the elaboration of the specific requirements is defined. For this type of large installation, one needs to adapt and develop the tests to be carried out. In addition, throughout this process, certification entities and notified bodies play a key role in preparing and agreeing on the required technical documentation. We have focused our attention mostly on the technical aspects of each point. We believe that this contribution will be of interest to other scientists involved in applying industrial quality assurance standards to large scientific plants.
Keywords: CE marking, electromagnetic compatibility, European directive, scientific installations
Procedia PDF Downloads 110
106 Challenges in Self-Managing Vitality: A Qualitative Study about Staying Vital at Work among Dutch Office Workers
Authors: Violet Petit-Steeghs, Jochem J. R. Van Roon, Jacqueline E. W. Broerse
Abstract:
Over the last decades, the retirement age in Europe has been gradually increasing. As a result, people have to continue working for a longer period of time. Health problems due to increased sedentary behavior and mental conditions like burn-out pose a threat to fulfilling employees’ working lives. In order to stimulate the ability and willingness to work in the present and future, it is important to stay vital. Vitality is regarded in the literature as a sense of energy, motivation and resilience. It is assumed that by increasing their vitality, employees will stay healthier and be more satisfied with their job, leading to more sustainable employment and less absenteeism in the future. The aim of this project is to obtain insights into the experiences and barriers of employees, and specifically office workers, with regard to their vitality. These insights are essential in order to develop appropriate measures in the future. To gain more insight into the experiences of office workers regarding their vitality, 8 focus group discussions were organized with 6-10 office workers from 4 different employers (a university, a national construction company and a large legal and care service organization) in the Netherlands. The discussions were transcribed and analyzed via open coding. This project is part of a larger consortium project, Provita2, and was conducted in collaboration with Eindhoven University of Technology. Results showed that a range of interdependent factors form a complex network that influences office workers’ vitality. These factors can be divided into three overarching groups: (1) personal, (2) organizational and (3) environmental factors. Personal intrinsic factors, relating to the office worker, comprise someone’s physical health, coping style, lifestyle, needs, and private life. Organizational factors, relating to the employer, are the workload, management style and the structure, vision and culture of the organization.
Lastly, environmental factors consist of the air quality, light and temperature at the workplace and whether the workplace is inspiring and workable. Office workers experienced barriers to improving their own vitality due to a lack of autonomy. On the one hand, this was because most factors were not intrinsic but extrinsic, like the work atmosphere or the temperature in the room. On the other hand, office workers were restricted in adapting both intrinsic and extrinsic factors. Restrictions on, for instance, the flexibility of working times and the workload can set limitations on improving vitality through personal factors like physical activity and mental relaxation. In conclusion, a large range of interdependent factors influence the vitality of office workers. Office workers are often regarded as having a responsibility to improve their vitality, but have limited autonomy in adapting these factors. Measures to improve vitality should therefore not only focus on increasing awareness among office workers, but also on empowering them to fulfill this responsibility. A holistic approach that takes into account the complex mutual dependencies between the different factors and actors (like managers, employees and HR personnel) is highly recommended.
Keywords: occupational health, perspectives office workers, sustainable employment, vitality at work, work & wellbeing
Procedia PDF Downloads 138
105 Diversity in the Community - The Disability Perspective
Authors: Sarah Reker, Christiane H. Kellner
Abstract:
From the perspective of people with disabilities, inequalities can also emerge from spatial segregation, the lack of social contacts or limited economic resources. In order to reduce or even eliminate these disadvantages and increase general well-being, community-based participation as well as decentralisation efforts within exclusively residential homes is essential. Therefore, the new research project “Index for participation development and quality of life for persons with disabilities”(TeLe-Index, 2014-2016), which is anchored at the Technische Universität München in Munich and at a large residential complex and service provider for persons with disabilities in the outskirts of Munich aims to assist the development of community-based living environments. People with disabilities should be able to participate in social life beyond the confines of the institution. Since a diverse society is a society in which different individual needs and wishes can emerge and be catered to, the ultimate goal of the project is to create an environment for all citizens–regardless of disability, age or ethnic background–that accommodates their daily activities and requirements. The UN-Convention on the Rights of Persons with Disabilities, which Germany also ratified, postulates the necessity of user-centered design, especially when it comes to evaluating the individual needs and wishes of all citizens. Therefore, a multidimensional approach is required. Based on this insight, the structure of the town-like center will be remodeled to open up the community to all people. This strategy should lead to more equal opportunities and open the way for a much more diverse community. Therefore, macro-level research questions were inspired by quality of life theory and were formulated as follows for different dimensions: •The user dimension: what needs and necessities can we identify? Are needs person-related? Are there any options to choose from? 
What type of quality of life can we identify? •The economic dimension: what resources (both material and staff-related) are available in the region? (How) are they used? What costs (can) arise and what effects do they entail? •The environment dimension: what “environmental factors” such as access (mobility and absence of barriers) prove beneficial or impedimental? In this context, we have provided academic supervision and support for three projects (the construction of a new school, inclusive housing for children and teenagers with disabilities and the professionalization of employees with person-centered thinking). Since we cannot present all the issues of the umbrella-project within the conference framework, we will be focusing on one project more in-depth, namely “Outpatient Housing Options for Children and Teenagers with Disabilities”. The insights we have obtained until now will enable us to present the intermediary results of our evaluation. The most central questions pertaining to this part of the research were the following: •How have the existing network relations been designed? •What meaning (or significance) do the existing service offers and structures have for the everyday life of an external residential group? These issues underpinned the environmental analyses as well as the qualitative guided interviews and qualitative network analyses we carried out.
Keywords: decentralisation, environmental analyses, outpatient housing options for children and teenagers with disabilities, qualitative network analyses
Procedia PDF Downloads 365
104 The Development of Assessment Criteria Framework for Sustainable Healthcare Buildings in China
Authors: Chenyao Shen, Jie Shen
Abstract:
The rating system provides an effective framework for assessing building environmental performance and integrating sustainable development into building and construction processes, as it can be used as a design tool by developing appropriate sustainable design strategies and determining performance measures to guide the sustainable design and decision-making processes. Healthcare buildings are resource-intensive (water, energy, etc.). To maintain high-cost operations and complex medical facilities, they require a great deal of hazardous and non-hazardous materials and stringent control of environmental parameters, and they are responsible for producing polluting emissions. Compared with other types of buildings, the impact of healthcare buildings over the full environmental life cycle is particularly large. With broad recognition among designers and operators that energy use can be reduced substantially, many countries have set up their own green rating systems for healthcare buildings. There are four main green healthcare building evaluation systems widely acknowledged in the world - Green Guide for Health Care (GGHC), which was jointly organized by the United States HCWH and CMPBS in 2003; BREEAM Healthcare, issued by the Building Research Establishment (BRE) in the UK in 2008; the Green Star-Healthcare v1 tool, released by the Green Building Council of Australia (GBCA) in 2009; and LEED Healthcare 2009, released by the United States Green Building Council (USGBC) in 2011. In addition, the German Sustainable Building Council (DGNB) has also been developing the German Sustainable Building Evaluation Criteria (DGNB HC). In China, more and more scholars and policy makers have recognized the importance of assessment of sustainable development, and have adapted some tools and frameworks.
China’s first comprehensive assessment standard for green building (the GBTs) was issued in 2006 (last updated in 2014), promoting sustainability in the built environment and raising awareness of environmental issues among architects, engineers and contractors, as well as the public. However, healthcare buildings were not covered by the evaluation system of the GBTs because of their complex medical procedures, strict requirements on the indoor/outdoor environment, and the energy consumption of various functional rooms. Learning from the experience of GGHC, BREEAM, and LEED HC above, China’s first assessment criteria for green hospital/healthcare buildings were finally released in December 2015. Combining both quantitative and qualitative assessment criteria, the standard highlights the differences between healthcare and other public buildings in meeting the functional needs of medical facilities and special groups. This paper focuses on the assessment criteria framework for sustainable healthcare buildings, for which the comparison of different rating systems is essential. Descriptive analysis is conducted together with cross-matrix analysis to reveal rich information on green assessment criteria in a coherent manner. The research intends to determine whether the green elements for healthcare buildings in China differ from those adopted in other countries, and how to improve the assessment criteria framework.
Keywords: assessment criteria framework, green building design, healthcare building, building performance rating tool
Procedia PDF Downloads 146
103 Visco-Hyperelastic Finite Element Analysis for Diagnosis of Knee Joint Injury Caused by Meniscal Tearing
Authors: Eiji Nakamachi, Tsuyoshi Eguchi, Sayo Yamamoto, Yusuke Morita, H. Sakamoto
Abstract:
In this study, we aim to reveal the relationship between meniscal tearing and articular cartilage injury of the knee joint by using the dynamic explicit finite element (FE) method. Meniscal injuries reduce the meniscus's functional ability and consequently increase the load on the articular cartilage of the knee joint. In order to prevent the induction of osteoarthritis (OA) caused by meniscal injuries, many medical treatment techniques, such as artificial meniscus replacement and meniscal regeneration, have been developed. However, it is reported that these treatments are not comprehensive solutions. In order to reveal the fundamental mechanism of OA induction, the mechanical characterization of the meniscus in normal and injured states is carried out by using FE analyses. First, an FE model of the human knee joint in the normal state – ‘intact’ – was constructed by using magnetic resonance (MR) tomography images and the image construction code Materialise Mimics. Next, two types of meniscal injury models with radial tears of the medial and lateral menisci were constructed. In the FE analyses, the linear elastic constitutive law was adopted for the femur and tibia bones, the visco-hyperelastic constitutive law for the articular cartilage, and the visco-anisotropic hyperelastic constitutive law for the meniscus, respectively. Material properties of the articular cartilage and meniscus were identified using the stress-strain curves obtained from our compressive and tensile tests. The numerical results under the normal walking condition revealed how and where the maximum compressive stress occurred on the articular cartilage. The maximum compressive stress and its occurrence point varied between the intact and the two meniscal tear models. These compressive stress values can be used to establish the threshold value that causes pathological change, for diagnostic purposes.
In this study, FE analyses of the knee joint were carried out to reveal the influence of meniscal injuries on cartilage injury. The following conclusions are obtained. 1. A 3D FE model, which consists of the femur, tibia, articular cartilage and menisci, was constructed based on MR images of the human knee joint; the image processing code Materialise Mimics was used, with tetrahedral FE elements. 2. A visco-anisotropic hyperelastic constitutive equation was formulated by adopting the generalized Kelvin model. The material properties of the meniscus and articular cartilage were determined by curve fitting with experimental results. 3. Stresses on the articular cartilage and menisci were obtained for the intact case and for the two radial tears of the medial and lateral menisci. Compared with the intact knee joint, the two tear models show almost the same stress values as each other, and higher values than the intact one. It was shown that both meniscal tears induce stress localization in both the medial and lateral regions. It is confirmed that our newly developed FE analysis code has the potential to become a new diagnostic system to evaluate the effect of meniscal damage on the articular cartilage through mechanical functional assessment.
Keywords: finite element analysis, hyperelastic constitutive law, knee joint injury, meniscal tear, stress concentration
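The constitutive equations themselves are not reproduced in this abstract. For orientation only, one standard Prony-series form of a generalized-Kelvin visco-hyperelastic law of the kind described (the strain-energy function W, relative moduli g_i, and relaxation times τ_i below are generic textbook notation, not the authors' own symbols) can be written as:

```latex
% Second Piola-Kirchhoff stress: instantaneous hyperelastic response plus
% the fading-memory contribution of N viscoelastic branches
S(t) = S_e\big(E(t)\big)
     + \sum_{i=1}^{N} \int_{0}^{t} g_i \, e^{-(t-s)/\tau_i}\,
       \frac{\mathrm{d}}{\mathrm{d}s} S_e\big(E(s)\big)\, \mathrm{d}s,
\qquad
S_e(E) = \frac{\partial W}{\partial E}
```

Curve fitting the compressive and tensile test data then amounts to identifying W together with the branch pairs (g_i, τ_i).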
Procedia PDF Downloads 246102 The Effect of Ionic Liquid Anion Type on the Properties of TiO2 Particles
Authors: Marta Paszkiewicz, Justyna Łuczak, Martyna Marchelek, Adriana Zaleska-Medynska
Abstract:
In recent years, photocatalytic processes have been intensively investigated for the destruction of pollutants, hydrogen evolution, disinfection of water, air and surfaces, and the construction of self-cleaning materials (tiles, glass, fibres, etc.). Titanium dioxide (TiO2) is the most popular material used in heterogeneous photocatalysis due to its excellent properties, such as high stability, chemical inertness, non-toxicity and low cost. It is well known that the morphology and microstructure of TiO2 significantly influence its photocatalytic activity. These characteristics, as well as other physical and structural properties of photocatalysts, e.g., specific surface area or density of crystalline defects, can be controlled by the preparation route. In this regard, TiO2 particles can be obtained by sol-gel, hydrothermal and sonochemical methods, chemical vapour deposition and, alternatively, by ionothermal synthesis using ionic liquids (ILs). In TiO2 particle synthesis, ILs may play the role of a solvent, soft template, reagent, agent promoting reduction of the precursor, or particle stabilizer during the synthesis of inorganic materials. In this work, the effect of the IL anion type on the morphology and photoactivity of TiO2 is presented. The preparation of TiO2 microparticles with a spherical structure was successfully achieved by a solvothermal method, using tetra-tert-butyl orthotitanate (TBOT) as the precursor. The reaction process was assisted by the ionic liquids 1-butyl-3-methylimidazolium bromide [BMIM][Br], 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4] and 1-butyl-3-methylimidazolium hexafluorophosphate [BMIM][PF6]. Various molar ratios of the ILs to TBOT (IL:TBOT) were chosen. For comparison, reference TiO2 was prepared using the same method without IL addition.
Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), Brunauer-Emmett-Teller (BET) surface area analysis, NCHS analysis, and FTIR spectroscopy were used to characterize the surface properties of the samples. The photocatalytic activity was investigated by means of phenol photodegradation in the aqueous phase as a model pollutant, as well as by the formation of hydroxyl radicals, based on detection of the fluorescent product of coumarin hydroxylation. The analysis results showed that the TiO2 microspheres had a spherical structure with diameters ranging from 1 to 6 µm. The TEM micrographs gave a clear view of the samples, in which the particles were composed of inter-aggregated crystals. It could also be observed that the IL-assisted TiO2 microspheres are not hollow, which provides additional information about the possible formation mechanism. Application of the ILs results in an increase in the photocatalytic activity as well as the BET surface area of TiO2 as compared to pure TiO2. The results of the formation of 7-hydroxycoumarin indicated that the increased amount of ·OH produced at the surface of excited TiO2 for the TiO2_IL samples correlated well with more efficient degradation of phenol. NCHS analysis showed that ionic liquids remained on the TiO2 surface, confirming the structure-directing role of these compounds.
Keywords: heterogeneous photocatalysis, IL-assisted synthesis, ionic liquids, TiO2
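Phenol photodegradation results of this kind are commonly summarized by a pseudo-first-order (Langmuir-Hinshelwood type) rate constant. As a minimal sketch of how such a constant could be extracted, with invented concentration-time numbers purely for illustration (these are not measurements from this study):

```python
import math

# Hypothetical phenol concentrations (mg/L) at irradiation times t (min);
# the numbers are illustrative only, not experimental data.
t = [0, 15, 30, 45, 60]
c = [20.0, 14.8, 11.0, 8.1, 6.0]

# Pseudo-first-order model: ln(C0/C) = k*t, so k is the slope of the
# linearized data through the origin (least squares without intercept).
y = [math.log(c[0] / ci) for ci in c]
k = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)

half_life = math.log(2) / k  # time for the concentration to halve
print(f"k = {k:.4f} 1/min, half-life = {half_life:.1f} min")
```

The fitted k then gives a single figure of merit for comparing the IL-assisted samples against pure TiO2.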
Procedia PDF Downloads 267
101 Rapid Building Detection in Population-Dense Regions with Overfitted Machine Learning Models
Authors: V. Mantey, N. Findlay, I. Maddox
Abstract:
The quality and quantity of global satellite data have been increasing exponentially in recent years as spaceborne systems become more affordable and the sensors themselves become more sophisticated. This is a valuable resource for many applications, including disaster management and relief. However, while more information can be valuable, the volume of data available is impossible to manually examine. Therefore, the question becomes how to extract as much information as possible from the data with limited manpower. Buildings are a key feature of interest in satellite imagery with applications including telecommunications, population models, and disaster relief. Machine learning tools are fast becoming one of the key resources to solve this problem, and models have been developed to detect buildings in optical satellite imagery. However, by and large, most models focus on affluent regions where buildings are generally larger and constructed further apart. This work is focused on the more difficult problem of detection in populated regions. The primary challenge with detecting small buildings in densely populated regions is both the spatial and spectral resolution of the optical sensor. Densely packed buildings with similar construction materials will be difficult to separate due to a similarity in color and because the physical separation between structures is either non-existent or smaller than the spatial resolution. This study finds that training models until they are overfitting the input sample can perform better in these areas than a more robust, generalized model. An overfitted model takes less time to fine-tune from a generalized pre-trained model and requires fewer input data. The model developed for this study has also been fine-tuned using existing, open-source, building vector datasets. This is particularly valuable in the context of disaster relief, where information is required in a very short time span. 
Leveraging existing datasets means that little to no manpower or time is required to collect data in the region of interest. The training period itself is also shorter for smaller datasets. Requiring less data means that only a few quality areas are necessary, and so any weaknesses or underpopulated regions in the data can be skipped over in favor of areas with higher quality vectors. In this study, a landcover classification model was developed in conjunction with the building detection tool to provide a secondary source to quality check the detected buildings. This has greatly reduced the false positive rate. The proposed methodologies have been implemented and integrated into a configurable production environment and have been employed for a number of large-scale commercial projects, including continent-wide DEM production, where the extracted building footprints are being used to enhance digital elevation models. Overfitted machine learning models are often considered too specific to have any predictive capacity. However, this study demonstrates that, in cases where input data is scarce, overfitted models can be judiciously applied to solve time-sensitive problems.
Keywords: building detection, disaster relief, mask-RCNN, satellite mapping
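The study's Mask R-CNN pipeline itself is not reproduced here. Purely to illustrate the overfitting trade-off it exploits, the toy sketch below fits a deliberately over-parameterized model so that training error collapses toward zero (the data and model choices are invented for illustration, not part of the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny "training region" sample, standing in for a small set of quality
# building vectors; the held-out sample stands in for a new region.
x_train = np.linspace(0.0, 3.0, 8)
x_val = np.linspace(0.2, 2.8, 8)
y_train = np.sin(x_train) + rng.normal(0.0, 0.1, x_train.size)
y_val = np.sin(x_val) + rng.normal(0.0, 0.1, x_val.size)

def fit_and_score(degree):
    """Least-squares polynomial fit; returns (train MSE, held-out MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    err = lambda x, y: float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return err(x_train, y_train), err(x_val, y_val)

# Modest capacity vs. capacity equal to the number of training points:
# the degree-7 model interpolates the sample, noise and all, driving
# training error to (numerically) zero - the signature of overfitting.
train_lo, val_lo = fit_and_score(2)
train_hi, val_hi = fit_and_score(7)
print(train_lo, val_lo, train_hi, val_hi)
```

The bet in the paper's setting is that, within the same densely built region the model has memorized, this specificity helps rather than hurts; the landcover quality check then guards against the false positives such a model can produce elsewhere.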
Procedia PDF Downloads 169
100 Socio-Economic Insight of the Secondary Housing Market in Colombo Suburbs: Seller’s Point of Views
Authors: R. G. Ariyawansa, M. A. N. R. M. Perera
Abstract:
“House” is a powerful symbol of the socio-economic background of individuals and families. In fact, housing provides for all types of needs/wants, from basic needs to self-actualization needs. This phenomenon can be understood only by analyzing the hidden motives of buyers and sellers in the housing market. Hence, the aim of this study is to examine the socio-economic insight of the secondary housing market in Colombo suburbs. This broader aim was achieved by analyzing the general pattern of the secondary housing market, identifying the socio-economic motives of sellers in the secondary housing market, and reviewing sellers’ experience of buyer behavior. A purposive sample of 50 sellers from popular residential areas in Colombo such as Maharagama, Kottawa, Piliyandala, Pannipitiya, and Nugegoda was used to collect primary data, in the absence of relevant secondary data from published and unpublished reports. The sample was limited to selling prices ranging from Rs 15 million to Rs 25 million, which apparently corresponds to middle and upper-middle income houses in this context. Participant observation and semi-structured interviews were adopted as the key data collection tools. Data were descriptively analyzed. This study found that the market is mainly handled by informal agents who are unqualified and unorganized. People such as taxi/three-wheeler drivers, boutique vendors, security personnel, etc., are engaged in housing brokerage as a part-time career. A few full-time and formally organized agents were found, but they were also not professionally qualified. As far as housing quality is concerned, it was observed that 90% of the houses were poorly maintained and illegally modified. They are situated in poorly maintained neighborhoods as well. Among the observed houses, 2% were moderately maintained and 8% were well maintained and modified.
Major socio-economic motives of sellers were “migrating to foreign countries for education and employment” (80% and 10% respectively), “family problems” (4%), and “social status” (3%). Other motives were “health” and “environmental/neighborhood problems” (3%). This study further noted that the secondary middle income housing market in the area is directly related to migrants motivated by education in foreign countries, mainly Australia, the UK and the USA. As per the literature, families motivated by education tend to migrate to Colombo suburbs from remote areas of the country. They seek temporary accommodation in lower middle income housing. However, the secondary middle income housing market relates to migration from Colombo to major global cities. Therefore, the final transaction price in this market may depend on migration-related dates such as university deadlines, visas and other arrangements. Hence, it creates a buyers’ market, lowering the selling price. It was also revealed that buyers tend to trust this market more, as far as the quality of construction of houses is concerned, than brand-new houses built for selling purposes.
Keywords: informal housing market, hidden motives of buyers and sellers, secondary housing market, socio-economic insight
Procedia PDF Downloads 168
99 Participatory Monitoring Strategy to Address Stakeholder Engagement Impact in Co-creation of NBS Related Project: The OPERANDUM Case
Authors: Teresa Carlone, Matteo Mannocchi
Abstract:
In the last decade, a growing number of international organizations have been pushing toward green solutions for adaptation to climate change. This is particularly true in the fields of Disaster Risk Reduction (DRR) and land planning, where Nature-Based Solutions (NBS) have been sponsored through funding programs and planning tools. Stakeholder engagement and co-creation of NBS are growing as a practice and research field in environmental projects, fostering the consolidation of a multidisciplinary socio-ecological approach to addressing hydro-meteorological risk. Even though research and financial interest are constantly spreading, the NBS mainstreaming process is still at an early stage, as innovative concepts and practices are difficult for a multitude of different actors to fully accept and adopt in order to produce wide-scale societal change. The monitoring and impact evaluation of stakeholders’ participation in these processes represent a crucial aspect and should be seen as a continuous and integral element of the co-creation approach. However, setting up a fit-for-purpose monitoring strategy for different contexts is not an easy task, and multiple challenges emerge. In this scenario, the Horizon 2020 OPERANDUM project, designed to address the major hydro-meteorological risks that negatively affect European rural and natural territories through the co-design, co-deployment, and assessment of Nature-Based Solutions, represents a valid case study to test a monitoring strategy from which to set a broader, general and scalable monitoring framework. Applying a participative monitoring methodology, based on a selected list of indicators that combines quantitative and qualitative data developed within the activities of the project, the paper proposes an experimental in-depth analysis of the stakeholder engagement impact in the co-creation process of NBS.
The main focus will be to identify and analyze which factors increase knowledge, social acceptance, and mainstreaming of NBS, also promoting an experience-based guideline that could be integrated with the stakeholder engagement strategy in current and future environmental projects based on a similarly strong collaborative approach, such as OPERANDUM. Measurement will be carried out through surveys submitted at different timescales to the same sample of stakeholders (policy makers, businesses, researchers, interest groups). Changes will be recorded and analyzed through focus groups in order to highlight causal explanations and to assess the proposed list of indicators for steering the conduct of similar activities in other projects and/or contexts. The idea of the paper is to contribute to the construction of a more structured and shared corpus of indicators that can support the evaluation of involvement and participation activities across various levels of stakeholders in the co-production, planning, and implementation of NBS to address climate change challenges.
Keywords: co-creation and collaborative planning, monitoring, nature-based solution, participation & inclusion, stakeholder engagement
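As a sketch of what recording changes between survey waves for the same sample could look like in practice (the indicator, stakeholder groups, scores, and the 0.5-point discussion threshold below are all hypothetical, not OPERANDUM data):

```python
# Hypothetical 1-5 agreement scores for one indicator ("NBS social
# acceptance"), for the same stakeholder sample at two survey waves.
baseline = {"policy makers": 3.1, "business": 2.6,
            "researchers": 4.0, "interest groups": 2.9}
followup = {"policy makers": 3.8, "business": 3.1,
            "researchers": 4.2, "interest groups": 3.6}

# Per-group change between waves; groups whose score shifted by at least
# 0.5 points (an invented convention) are flagged for the focus groups.
changes = {g: round(followup[g] - baseline[g], 2) for g in baseline}
flagged = [g for g, d in changes.items() if abs(d) >= 0.5]

print(changes)
print(flagged)
```

The same tabulation, repeated per indicator, is the kind of raw material from which the focus groups can seek causal explanations for the observed shifts.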
Procedia PDF Downloads 114
98 Digital Image Correlation Based Mechanical Response Characterization of Thin-Walled Composite Cylindrical Shells
Authors: Sthanu Mahadev, Wen Chan, Melanie Lim
Abstract:
Anisotropy-dominated continuous-fiber composite materials have garnered attention in numerous mechanical and aerospace structural applications. Tailored mechanical properties in advanced composites can exhibit superiority in terms of stiffness-to-weight ratio, strength-to-weight ratio, and low-density characteristics, coupled with significant improvements in fatigue resistance, as opposed to their metal counterparts. Extensive research has demonstrated their core potential as more than mere lightweight substitutes for conventional materials. Prior work by Mahadev and Chan focused on formulating a modified composite shell theory-based prognosis methodology for investigating the structural response of thin-walled circular cylindrical shell-type composite configurations under in-plane mechanical loads. The prime motivation to develop this theory stemmed from its capability to generate simple yet accurate closed-form analytical results that can efficiently characterize circular composite shell construction. It showcased the development of a novel mathematical framework to analytically identify the location of the centroid for thin-walled, open cross-section, curved composite shells characterized by circumferential arc angle, thickness-to-mean-radius ratio, and total laminate thickness. Ply stress variations for curved cylindrical shells were analytically examined under the application of centric tensile and bending loading. This work presents a cost-effective, small-platform experimental methodology that takes advantage of the full-field measurement capability of digital image correlation (DIC) for an accurate assessment of key mechanical parameters such as in-plane stresses and strains and centroid location. Mechanical property measurement of advanced composite materials can become challenging due to their anisotropy and complex failure mechanisms.
Full-field displacement measurements are well suited for characterizing the mechanical properties of composite materials because of the complexity of their deformation. This work encompasses the fabrication of a set of curved cylindrical shell coupons, the design and development of a novel test fixture, and an innovative experimental methodology that demonstrates the capability to predict very accurately the location of the centroid in such curved composite cylindrical strips by employing a DIC-based strain measurement technique. The percentage differences between experimental centroid measurements and previously estimated analytical centroid results show good agreement. The developed analytical modified shell theory provides the capability to understand the fundamental behavior of thin-walled cylindrical shells and offers the potential to generate novel avenues to understand the physics of such structures at a laminate level.
Keywords: anisotropy, composites, curved cylindrical shells, digital image correlation
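The idea behind such a DIC-based centroid estimate can be illustrated simply: under bending, the measured axial strain varies roughly linearly across the section, and the zero crossing of a fitted line locates the neutral axis, which passes through the (modulus-weighted) centroid for centric loading. A minimal least-squares sketch of that step, as an illustration of the principle only and not the authors' actual data-reduction procedure:

```python
def neutral_axis(y, eps):
    """Fit the linear strain field eps(y) = a + b*y by least squares and
    return y0 = -a/b, the coordinate where the bending strain vanishes
    (neutral-axis location). Assumes a nonzero strain gradient b."""
    n = len(y)
    y_mean = sum(y) / n
    e_mean = sum(eps) / n
    b = sum((yi - y_mean) * (ei - e_mean) for yi, ei in zip(y, eps)) \
        / sum((yi - y_mean) ** 2 for yi in y)
    a = e_mean - b * y_mean
    return -a / b
```

With DIC data, `y` would be the through-width coordinates of the measurement points and `eps` the corresponding axial strains; the least-squares fit also smooths the speckle-correlation noise inherent in full-field measurements.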
Procedia PDF Downloads 316
97 Evaluation of Alternative Approaches for Additional Damping in Dynamic Calculations of Railway Bridges under High-Speed Traffic
Authors: Lara Bettinelli, Bernhard Glatz, Josef Fink
Abstract:
Planning engineers and researchers use various calculation models with different levels of complexity, calculation efficiency and accuracy in dynamic calculations of railway bridges under high-speed traffic. When choosing a vehicle model to depict the dynamic loading on the bridge structure caused by passing high-speed trains, different goals are pursued: on the one hand, the selected vehicle models should allow the calculation of a bridge's vibrations as realistically as possible; on the other hand, the computational efficiency and manageability of the models should preferably be high to enable a wide range of applications. The commonly adopted and straightforward vehicle model is the moving load model (MLM), which simplifies the train to a sequence of static axle loads moving at a constant speed over the structure. However, the MLM can significantly overestimate the structure's vibrations, especially when resonance events occur. More complex vehicle models, which depict the train as a system of oscillating and coupled masses, can reproduce the interaction dynamics between the vehicle and the bridge superstructure to some extent and enable the calculation of more realistic bridge accelerations. At the same time, such multi-body models require significantly greater processing capacities and precise knowledge of various vehicle properties. The European standards allow for applying the so-called additional damping method when simple load models, such as the MLM, are used in dynamic calculations. An additional damping factor depending on the bridge span, which is intended to take into account the vibration-reducing benefits of the vehicle-bridge interaction, is assigned to the supporting structure in the calculations.
However, numerous studies show that when the current standard specifications are applied, the calculated bridge accelerations are in many cases still too high compared to the measured ones, while in other cases, they are not on the safe side. A proposal to calculate the additional damping based on extensive dynamic calculations for a parametric field of simply supported bridges with ballasted track was developed to address this issue. In this contribution, several different approaches to determining the additional damping of the supporting structure, considering the vehicle-bridge interaction when using the MLM, are compared with one another. Besides the standard specifications, this includes the approach mentioned above and two recently published alternative formulations derived from analytical approaches. For a catalogue of 65 existing bridges in Austria in steel, concrete or composite construction, calculations are carried out with the MLM for two different high-speed trains and the different approaches for additional damping. The results are compared with those obtained by applying a more sophisticated multi-body model of the trains used. The evaluation and comparison of the results allow assessing the benefits of the different calculation concepts for the additional damping regarding their accuracy and possible applications. The evaluation shows that by applying one of the recently published redesigned additional damping methods, the calculation results reflect the influence of the vehicle-bridge interaction on the design-relevant structural accelerations considerably more reliably than by using the normative specifications.
Keywords: Additional Damping Method, Bridge Dynamics, High-Speed Railway Traffic, Vehicle-Bridge-Interaction
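The normative additional damping method discussed above assigns, in EN 1991-2, a span-dependent damping increment to the structure as a closed-form function of the span L alone. A minimal sketch of that curve for simply supported spans under 30 m follows; the coefficients are reproduced from memory of the standard, so treat them as an assumption to be verified against the code text before any design use:

```python
def additional_damping(span_m: float) -> float:
    """Additional damping (percent of critical) vs. span L for simply
    supported bridges, per the EN 1991-2 curve (valid for L < 30 m).
    Coefficients quoted from memory of the standard; verify before use."""
    L = span_m
    num = 0.0187 * L - 0.00064 * L**2
    den = 1.0 - 0.0441 * L - 0.0044 * L**2 + 0.000255 * L**3
    return num / den  # percent of critical damping
```

The curve rises from zero for very short spans, peaks for mid-range spans, and falls back toward zero near 30 m, which is why the method offers little relief for the longer spans in the bridge catalogue studied here.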
Procedia PDF Downloads 161
96 Multi-Criteria Geographic Information System Analysis of the Costs and Environmental Impacts of Improved Overland Tourist Access to Kaieteur National Park, Guyana
Authors: Mark R. Leipnik, Dahlia Durga, Linda Johnson-Bhola
Abstract:
Kaieteur is the most iconic National Park in the rainforest-clad nation of Guyana in South America. However, the magnificent 226-meter-high waterfall at its center is virtually inaccessible by surface transportation, and the occasional charter flights to the small airstrip in the park are too expensive for many tourists and residents. Thus, the largest waterfall in all of Amazonia, where the Potaro River plunges over a single free drop twice as high as Victoria Falls, remains preserved in splendid isolation inside a 57,000-hectare National Park established by the British in 1929, in the deepest recesses of a remote jungle canyon. Kaieteur Falls is largely unseen firsthand, but images of the falls are depicted on the Guyanese twenty-dollar note, in every Guyanese tourist promotion, and on many items in the national capital of Georgetown. Georgetown is only 223-241 kilometers away from the falls; the lack of a single mileage figure demonstrates that there is no single overland route. Any journey, except by air, involves changes of vehicles, a ferry ride, and a boat ride up a jungle river. It also entails hiking for many hours to view the falls. Surface access from Georgetown (or any city) is thus a 3-5 day-long adventure even in the dry season; during the two wet seasons, travel is an even stickier proposition. This journey was made overland by the paper's co-author Dahlia Durga. This paper focuses on potential ways to improve overland tourist access to Kaieteur National Park from Georgetown. This is primarily a GIS-based analysis, using multiple criteria to determine the least-cost means of creating all-weather road access to the area near the base of the falls while minimizing distance and elevation changes. Critically, it also involves minimizing the number of new bridges required to be built while utilizing the one existing ferry crossing of a major river.
Cost estimates are based on data from road and bridge construction engineers currently operating in the interior of Guyana. The paper contains original maps generated with ArcGIS of the potential routes for such an overland connection, including the one deemed optimal. Other factors, such as the impact on endangered species habitats and Indigenous populations, are also considered. This proposed infrastructure development comes at a time when Guyana is undergoing the largest economic boom in its history due to revenues from offshore oil and gas development. Thus, better access to the most important tourist attraction in the country is likely to happen eventually in some manner, but the questions of the most environmentally sustainable and least costly alternatives for such access remain. This paper addresses those questions and others related to access to this magnificent natural treasure and the tradeoffs such access will have on the preservation of the currently pristine natural environment of Kaieteur Falls.
Keywords: nature tourism, GIS, Amazonia, national parks
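The multi-criteria least-cost routing described above is, at its core, a shortest-path search over a raster in which each cell's weight aggregates the chosen criteria (construction cost, slope, habitat penalty, river crossings). As a hedged illustration of the principle behind GIS cost-distance tools, and not the authors' actual ArcGIS workflow, here is a minimal Dijkstra sketch over a toy cost grid:

```python
import heapq

def least_cost_path(grid, start, goal):
    """Dijkstra over a raster of per-cell traversal costs (4-neighbourhood).
    The cost of a move is the weight of the cell being entered.
    Returns (total_cost, path) as a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0}           # cheapest known cost to each cell
    prev = {}                   # back-pointers for path reconstruction
    pq = [(0, start)]
    while pq:
        cost, cell = heapq.heappop(pq)
        if cell == goal:        # reconstruct path by walking back-pointers
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return cost, path[::-1]
        if cost > best[cell]:   # stale queue entry
            continue
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ncost = cost + grid[nr][nc]
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (ncost, (nr, nc)))
    return float("inf"), []
```

In a real analysis, each grid value would be a weighted sum of the criteria rasters (e.g., slope, bridge-construction cost, habitat-impact penalty), which is how "minimize new bridges while using the existing ferry crossing" enters the optimization as a cost term rather than a hard rule.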
Procedia PDF Downloads 166
95 The Côa Valley Ecosystem (Douro, Portugal) as a Cultural Landscape: Approach to the Management Challenges
Authors: Mariana Durana Pinto, Thierry Aubry, Eduarda Vieira
Abstract:
The Côa River is one of the tributaries of the Douro River, which in turn connects two Portuguese regions: Beira Alta (Serra das Mesas, Sabugal) and Trás-os-Montes (Douro River, Vila Nova de Foz Côa). The river, approximately 140 kilometres in length, is surrounded by the characteristic landscape of north-eastern Portugal. The dominant flora in the region includes olive and almond trees and vines, which provide habitat for a diverse range of native species. These include mammals such as the lynx and the Iberian wolf, birds of prey such as the Egyptian vulture and the griffon vulture, and herbivorous species such as red deer and roe deer. However, the Côa Valley is inextricably linked with the rocky outcrops bearing the emblematic open-air Upper Palaeolithic rock art; indeed, it houses the world's largest collection of prehistoric open-air rock art, inscribed on the World Heritage List by UNESCO in 1998. From the discovery of the first engravings in 1991 to the present day, approximately 1,500 panels with rock art, mostly engravings and carvings but also some paintings, have been discovered, inventoried, and recorded, spanning from the early Upper Palaeolithic to the 20th century. The study and interpretation of the engravings and their geoarchaeological context allow the construction of a chronological timeline of human occupation and graphical production in this region. The area has been inhabited since the Early Palaeolithic, with human communities exploiting the diversity of the natural resources of the environment and adapting it to their needs. This led to the creation of an archaeological and historical cultural landscape. The region is currently inhabited by rural communities whose primary source of income is derived from agricultural activities, with a particular focus on olive oil and wine production, including the emblematic Vinho do Porto.
Additionally, the region is distinguished by activities such as stone exploration and extraction (e.g., schist and granite quarries) and tourism. The latter has progressively assumed a role in the promotion and development of the region, primarily due to the engravings of the Côa Valley itself, as well as the Alto Douro Wine Region. Furthermore, this cultural landscape was inscribed on the UNESCO World Heritage List in 2001. The aforementioned factors give rise to a series of challenges and issues pertaining to the day-to-day management and safeguarding of rock art. These include: I) the management of conflicts between cultural heritage and economic activity (between rock art and vineyards, both classified as World Heritage); II) the management of land-use planning in areas where the engravings are located (since the areas with engravings are larger than those identified as buffer zones by UNESCO); III) the absence of the legal figure of an 'archaeological park' and the need to solve this issue; IV) the management of tourist pressure and unauthorised visits; and V) the management of vandalism (a consequence of misinformation and denial).
Keywords: Douro and Côa Valleys, archaeological cultural landscapes, rock art, Douro wine, conservation challenges
Procedia PDF Downloads 10
94 Against the Philosophical-Scientific Racial Project of Biologizing Race
Authors: Anthony F. Peressini
Abstract:
The concept of race has recently come prominently back into discussion in the context of medicine and medical science, along with renewed efforts to biologize racial concepts. This paper argues that these renewed efforts to biologize race by way of medicine and population genetics fail on their own terms, and, more importantly, that the philosophical project of biologizing race ought to be recognized for what it is, a retrograde racial project, and abandoned. There is clear agreement that standard racial categories and concepts cannot be grounded in the old way of racial naturalism, which understood race as a real, interest-independent biological/metaphysical category whose members share "physical, moral, intellectual, and cultural characteristics." But equally clear is the very real and pervasive presence of racial concepts in individual and collective consciousness and behavior, and so race remains a pressing area in which to seek deeper understanding. Recent philosophical work has endeavored to reconcile these two observations by developing a "thin" conception of race, grounded in scientific concepts but without the moral and metaphysical content. Such "thin," science-based analyses take the "commonsense" or "folk" sense of race as it functions in contemporary society as the starting point for their philosophic-scientific projects to biologize racial concepts. A "philosophic-scientific analysis" is a special case of the cornerstone of analytic philosophy, the conceptual analysis: a rendering of a concept into the more perspicuous concepts that constitute it. Thus a philosophic-scientific account of a concept is an attempt to work out an analysis that makes use of empirical science's insights to ground, legitimate, and explicate the target concept in terms of clearer concepts informed by empirical results.
The focus in this paper is on three recent philosophic-scientific cases for retaining "race" that all share this general analytic schema but that make use of "medical necessity," population genetics, and human genetic clustering, respectively. After arguing that each of these three approaches suffers from internal difficulties, the paper considers the general analytic schema employed by such biologizations of race. While such endeavors are inevitably prefaced with the disclaimer that the theory to follow is non-essentialist and non-racialist, the case will be made that such efforts are not neutral scientific or philosophical projects but rather are what sociologists call a racial project, that is, one of many competing efforts that conjoin a representation of what race means to specific efforts to determine social and institutional arrangements of power, resources, authority, etc. Accordingly, philosophic-scientific biologizations of race, since they begin from and condition their analyses on "folk" conceptions, cannot pretend to be "prior to" other disciplinary insights, nor to transcend the social-political dynamics involved in formulating theories of race. As a result, such traditional philosophical efforts can be seen to be disciplinarily parochial and to address only a caricature of a large and important human problem, thereby further contributing to the unfortunate isolation of philosophical thinking about race from other disciplines.
Keywords: population genetics, ontology of race, race-based medicine, racial formation theory, racial projects, racism, social construction
Procedia PDF Downloads 273
93 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing
Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl
Abstract:
This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz-resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous regarding the increasing demands of comfort standards for indoor spaces and the use of more resourceful and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for an immediate application in the AEC-Industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additive prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts or formwork bodies form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is being explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. 
Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and widening the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a 0.25 m × 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization
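The neck and cavity dimensions driving the optimization loop above can be cross-checked against the classical lumped-parameter estimate of a Helmholtz resonator's natural frequency, f0 = (c / 2π) · sqrt(S / (V · L_eff)), with an end-corrected neck length. A minimal sketch follows; the end-correction factor and the numbers in the usage note are generic illustrations, not the study's geometry:

```python
import math

def helmholtz_f0(neck_radius, neck_length, cavity_volume, c=343.0):
    """Lumped-parameter resonance frequency (Hz) of a Helmholtz resonator.
    neck_radius, neck_length in m, cavity_volume in m^3, c = speed of sound.
    L_eff adds ~0.85*r of end correction per flanged neck end (an assumed
    correction; the exact value depends on the actual neck geometry)."""
    S = math.pi * neck_radius**2             # neck cross-sectional area, m^2
    L_eff = neck_length + 1.7 * neck_radius  # end-corrected neck length, m
    return (c / (2.0 * math.pi)) * math.sqrt(S / (cavity_volume * L_eff))
```

For example, a 10 mm radius, 20 mm long neck on a 1-litre cavity lands near 160 Hz, the low-frequency range where porous absorbers struggle and resonant absorbers are most useful; the COMSOL model in the study then refines such estimates for the coupled, non-ideal geometry.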
Procedia PDF Downloads 159
92 A Semi-supervised Classification Approach for Trend Following Investment Strategy
Authors: Rodrigo Arnaldo Scarpel
Abstract:
Trend following is a widely accepted investment strategy that adopts a rule-based trading mechanism rather than striving to predict market direction or relying on information gathering to decide when to buy and when to sell a stock. Thus, in trend following one must respond to market movements that have recently happened and that are currently happening, rather than to what will happen. The optimum in a trend following strategy is to catch a bull market at its early stage, ride the trend, and liquidate the position at the first evidence of the subsequent bear market. To apply the trend following strategy, one needs to find the trend and identify trade signals. In order to avoid false signals, i.e., to identify short-, mid- and long-term fluctuations and to separate noise from real changes in the trend, most academic works rely on moving averages and other technical analysis indicators, such as the moving average convergence divergence (MACD) and the relative strength index (RSI), to uncover intelligible stock trading rules following the trend following philosophy. Recently, some works have applied machine learning techniques for trade rule discovery. In those works, the process of rule construction is based on evolutionary learning, which aims to adapt the rules to the current environment and searches for the globally optimal rules in the search space. In this work, instead of focusing on the usage of machine learning techniques for creating trading rules, a time series trend classification employing a semi-supervised approach was used to identify early both the beginning and the end of upward and downward trends. Such a classification model can be employed to identify trade signals, and the decision-making procedure is that if an up-trend (down-trend) is identified, a buy (sell) signal is generated.
Semi-supervised learning is used for model training when only part of the data is labeled, and semi-supervised classification aims to train a classifier from both the labeled and unlabeled data such that it is better than the supervised classifier trained only on the labeled data. To illustrate the proposed approach, daily trade information was employed, including the open, high, low and closing values and volume, from January 1, 2000 to December 31, 2022, of the São Paulo Exchange Composite index (IBOVESPA). Over this time period, consistent changes in price, upwards or downwards, were visually identified for assigning labels, leaving the rest of the days (when there is no consistent change in price) unlabeled. For training the classification model, a pseudo-label semi-supervised learning strategy was used, employing different technical analysis indicators. In this learning strategy, the core idea is to use unlabeled data to generate pseudo-labels for supervised training. For evaluating the achieved results, the annualized return and excess return, and the Sortino and Sharpe ratios, were considered. Over the evaluated time period, the obtained results were very consistent and can be considered promising for generating the intended trading signals.
Keywords: evolutionary learning, semi-supervised classification, time series data, trading signals generation
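The decision rule described above (buy on an identified up-trend, sell on a down-trend) reduces, in its simplest technical-analysis form, to a moving-average crossover. The sketch below is a generic illustration of that rule-based signal generation, not the paper's semi-supervised classifier, which replaces the crossover rule with model-predicted trend labels:

```python
def sma(series, n):
    """Simple moving average; None until a full n-day window is available."""
    return [sum(series[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(series))]

def crossover_signals(close, fast=3, slow=5):
    """Emit (day_index, 'buy') when the fast SMA crosses above the slow SMA,
    and (day_index, 'sell') when it crosses below."""
    f, s = sma(close, fast), sma(close, slow)
    signals = []
    for i in range(1, len(close)):
        if None in (f[i - 1], s[i - 1]):
            continue  # not enough history yet
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals.append((i, "buy"))
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals.append((i, "sell"))
    return signals
```

Note the lag inherent in the crossover: the buy fires only after the up-trend is already underway, which is exactly the delay the trend-classification approach tries to shorten by identifying the beginning of a trend earlier.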
Procedia PDF Downloads 89
91 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis
Authors: Iman Farasat, Howard M. Salis
Abstract:
Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate genetic circuit behavior and to design circuits toward a desired behavior. While such models assume that each circuit component's function is modular and independent, even small changes in a circuit (e.g., a new promoter, a change in transcription factor expression level, or even a new medium) can have significant effects on the circuit's function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model's accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C).
Model predictions correctly accounted for how these 8 factors control a promoter's transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter's output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 two-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions.
Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement
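To give a flavor of the statistical-thermodynamic machinery behind such models, the textbook simple-repression case collapses repressor copy number and operator binding energy into one Boltzmann-weighted term (the weak-promoter approximation). The sketch below is that generic result, not the paper's own multi-factor model or measured energies; all numbers are illustrative:

```python
import math

def fold_change(repressors, d_eps_rd, n_ns=4.6e6):
    """Fold-change in expression for simple repression, weak-promoter limit:
        FC = 1 / (1 + (R / N_NS) * exp(-d_eps_rd))
    repressors: repressor copy number R per cell
    d_eps_rd:   repressor-operator binding energy in units of kT
                (more negative = tighter binding)
    n_ns:       number of non-specific genomic binding sites (~genome size,
                here an assumed E. coli-like value)."""
    return 1.0 / (1.0 + (repressors / n_ns) * math.exp(-d_eps_rd))
```

The same structure shows why the factors are co-dependent: doubling R or making the binding energy 0.7 kT tighter changes the Boltzmann-weighted term identically, which is the kind of degeneracy a single dimensionless group like the Pt number is designed to absorb.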
Procedia PDF Downloads 473
90 Simulation Research of Innovative Ignition System of ASz62IR Radial Aircraft Engine
Authors: Miroslaw Wendeker, Piotr Kacejko, Mariusz Duk, Pawel Karpinski
Abstract:
The research in the field of aircraft internal combustion engines is currently driven by the needs of decreasing fuel consumption and CO2 emissions while fulfilling the required level of safety. Currently, reciprocating aircraft engines are found in sports, emergency, agricultural and recreational aviation. Technically, most of them remain at a pre-war level in terms of theory of operation, design and manufacturing technology, especially if compared to the high level of development of automotive engines. Typically, these engines are fed by carburetors of quite primitive construction. At present, due to environmental requirements and climate change concerns, it is beneficial to develop aircraft piston engines and adopt the achievements of automotive engineering, such as computer-controlled low-pressure injection, electronic ignition control and biofuels. The paper describes simulation research on innovative power and control systems for a high-power radial aircraft engine. Installing an electronic ignition system in the radial aircraft engine is the fundamental innovative idea of this solution. In this framework, this research work focuses on describing a methodology for optimizing the electronically controlled ignition system. This approach can reduce emissions of toxic compounds as a result of lowered fuel consumption, optimized combustion and the engine's capability of efficient combustion of ecological fuels. New, redundant elements of the control system can improve the safety of the aircraft. Consequently, the required level of safety and better functionality as compared to today's plug system can be guaranteed.
The simulation research aimed to determine the sensitivity of the measured values (planned as the quantities acquired by the measurement systems) with respect to determining the optimal ignition angle (the angle of maximum torque at a given operating point). The described results covered: a) research in steady states; b) speeds ranging from 1500 to 2200 rpm (every 100 rpm); c) loads ranging from propeller power to maximum power; d) altitudes ranging, according to the International Standard Atmosphere, from 0 to 8000 m (every 1000 m); e) fuel: automotive gasoline ES95. Three models of different types of ignition coil (different energy discharge) were studied. The analysis aimed at optimizing the design of the innovative ignition system for an aircraft engine. The optimization involved: a) the optimization of the measurement systems; b) the optimization of the actuator systems. The studies enabled research on the sensitivity of the signals used to control the ignition timing. Accordingly, the number and type of sensors were determined for the ignition system to achieve its optimal performance. The results confirmed limited benefits in terms of fuel consumption. Thus, including spark management in the optimization is mandatory to significantly decrease fuel consumption. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.
Keywords: piston engine, radial engine, ignition system, CFD model, engine optimization
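Finding "the angle of maximum torque at a given operating point" from a discrete spark-advance sweep is, numerically, a peak-interpolation problem. A hedged sketch of one common approach, parabolic interpolation through the three sampled points around the maximum, follows; it is a generic illustration under the stated assumptions, not the project's actual optimization procedure:

```python
def mbt_angle(angles, torques):
    """Estimate the maximum-torque spark advance from a sweep by fitting a
    parabola through the three points around the sampled maximum.
    Assumes equally spaced angles and an interior maximum."""
    # index of the largest interior torque sample
    i = max(range(1, len(torques) - 1), key=lambda k: torques[k])
    t0, t1, t2 = torques[i - 1], torques[i], torques[i + 1]
    h = angles[i] - angles[i - 1]  # uniform sweep step
    # vertex of the parabola through three equally spaced points
    return angles[i] + 0.5 * h * (t0 - t2) / (t0 - 2.0 * t1 + t2)
```

In an engine-test context, `torques` would be dynamometer readings averaged over several cycles at each spark setting; the interpolation recovers an optimum lying between the tested angles without requiring a finer (and longer) sweep.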
Procedia PDF Downloads 386
89 Sedimentation and Morphology of the Kura River-Deltaic System in the Southern Caucasus under Anthropogenic and Sea-Level Controls
Authors: Elmira Aliyeva, Dadash Huseynov, Robert Hoogendoorn, Salomon Kroonenberg
Abstract:
The Kura River is the major water artery in the Southern Caucasus; it is the third river in the Caspian Sea basin in terms of length and catchment size, the second in terms of water budget, and the first in terms of sediment load volume. Understanding the major controls on the Kura fluvial-deltaic system is valuable for efficient management of the highly populated river basin and coastal zone. We have studied the grain size of sediments accumulated in the river channels and delta and dated by the 210Pb method, together with aerial photographs, old topographic and geological maps, and archive data. At present, sediments are supplied by the Kura River to the Caspian Sea through three distributary channels oriented north-east, south-east and south-west. The river is dominated by the suspended load: mud, silt and very fine sand. Coarse sediments accumulate in the distributaries, levees, point bars and the delta front. The annual suspended sediment budget in the period 1934-1952, before the construction of the Mingechavir water reservoir in the Kura River midstream area in 1953, was 36 mln t/yr. From 1953 to 1964, the suspended load dropped to 12 mln t/yr. After regulation of the Kura River discharge, the volume of suspended load transported via the north-eastern channel was reduced from 35% of the total sediment amount to 4%, while that through the main south-eastern channel increased from 65% to 96%, with a further fall to 56% due to the creation of the new south-western channel in 1964. Between 1967 and 1976, the annual sediment budget of the Kura River reached 22.5 mln t/yr. From 1977 to 1986, the sediment load carried by the Kura River dropped to 17.6 mln t/yr. The historical data show that between 1860 and 1907, during a relatively stable Caspian Sea level, two channels, N and SE, appear to have distributed equal amounts of sediment, as seen from the bilateral geometry of the delta. In the period 1907-1929, two new channels, E and NE, appeared.
The growth of three delta lobes (N, NE, and SE) and rapid progradation of the delta occurred against the background of Caspian Sea level rise, as a result of very high sediment supply. After 1929, the decline in Caspian Sea level was accompanied by progradation of the delta along the SE channel, while the eastern and northern channels silted up. The slow rate of progradation at its initial stage was caused by the artificial reduction in the sediment budget. However, the continuing sea-level fall led to an increase in the river-bed gradient, high erosion rates, increased sediment supply, and more rapid progradation. During the subsequent sea-level rise after 1977, accompanied by a decrease in the sediment budget, the southern part of the delta turned into a complex of small, shallow channels oriented to the south. The data demonstrate that the behaviour of the Kura fluvial-deltaic system and variations in its sediment budget, besides anthropogenic regulation, are strongly governed by very rapid changes in Caspian Sea level.
Keywords: anthropogenic control on sediment budget, Caspian sea-level variations, Kura river sediment load, morphology of the Kura river delta, sedimentation in the Kura river delta
Procedia PDF Downloads 15488
Method of Nursing Education: History Review
Authors: Cristina Maria Mendoza Sanchez, Maria Angeles Navarro Perán
Abstract:
Introduction: Nursing as a profession, from its initial formation and subsequent development in practice, has been built and identified mainly through its technical competence and professionalization within the positivist approach of the 19th century, which provides a conception of disease built on the biomedical paradigm, where the care provided focuses more on physiological processes and the disease than on the suffering person understood as a whole. The main issue in need of study here is a review of the history of the nursing profession, to learn what nursing was like before the 19th century. It is unclear whether there were organizations or people with knowledge of caring for others, or whether many people simply survived by chance. Holistic care, in which the appearance of disease directly affects all of a person's dimensions (physical, emotional, cognitive, social, and spiritual), is not a concept from the 21st century: it has been common practice, most probably since life became established in this world, with the final purpose of covering all these perspectives through quality care. Objective: In this paper, we describe and analyze the history of nursing education, reviewing and analyzing the theoretical foundations of clinical teaching and learning in nursing, with the final purpose of describing the development of the nursing profession throughout history. Method: We conducted a descriptive systematic review, systematically searching for manuscripts and articles in the following health science databases: Pubmed, Scopus, Web of Science, Temperamentvm, and CINAHL. Articles were selected according to the PRISMA criteria, with a critical reading of the full texts using the CASPe method.
To complement this, we read a range of historical and contemporary sources to support the review, such as the manuals of Florence Nightingale and Saint John of God, as primary manuscripts establishing the origin of modern nursing and its professionalization. Ethical considerations of data processing were observed throughout. Results: After applying the inclusion and exclusion criteria to our search of Pubmed, Scopus, Web of Science, Temperamentvm, and CINAHL, we obtained 51 research articles, which we classified by year of publication and type of study. The articles obtained show the importance of our background as a profession in public health before modern times, and the value of reviewing our past to face challenges in the near future. Discussion: The important influence of key figures other than Nightingale has been overlooked, and it emerges that nursing management and the development of the professional body have a longer and more complex history than is generally accepted. Conclusions: There is a paucity of studies on the subject of this review from which to extract precise evidence and recommendations about nursing before modern times. Even so, a notable increase in research on nursing history has been observed. In light of the aspects analyzed, the need for new research into the history of nursing emerges, in order to seed studies of the historical construction of care before the 19th century and of the theories created then. We can affirm that knowledge and ways of caring were taught before the 19th century, but they were not called theories, as such concepts were created in modern times.
Keywords: nursing history, nursing theory, Saint John of God, Florence Nightingale, learning, nursing education
Procedia PDF Downloads 113