Search results for: above ground biomass
254 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or injure the crew. The penetration equations are derived from penetration experiments, which require a long time and great effort. However, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, it is very important to model the targets and select the input parameters properly in order to obtain an accurate penetration depth. This paper performs a sensitivity analysis of the ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the ANSYS input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters tested include mesh size, boundary conditions, material properties, and target diameter; they were selected to minimize the error between the simulation results and the experimental data taken from papers on the penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis are adjusted for optimized overall performance. The analysis yielded the following findings: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives a greater penetration depth than the one with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment. With carefully tuned input parameters, penetration analysis can be performed in ANSYS without actual experiments. Penetration experiment data are usually hard to obtain for security reasons, and published papers provide them only for a limited set of target materials. The next step of this research is to generalize this approach to predict the penetration depth by interpolating between known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the early, modelling-and-simulation stage of the AGCV design process.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
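
As an illustration of the error metric driving the parameter tuning described above, the following Python sketch computes the RMS error of simulated penetration depths against an experimental reference over a mesh-size sweep. All numerical values are invented placeholders, not results from the paper.

```python
import numpy as np

# Hypothetical illustration: RMS error between simulated penetration depths
# and a reference experimental value for a sweep of mesh sizes.
experimental_depth = 25.0  # mm, reference depth from a penetration experiment

# Simulated depths for mesh sizes from 0.9 mm down to 0.5 mm (placeholders)
mesh_sizes = np.array([0.9, 0.8, 0.7, 0.6, 0.5])             # mm
simulated_depths = np.array([22.1, 23.0, 23.8, 24.5, 25.2])  # mm

# RMS error of the sweep against the experiment
errors = simulated_depths - experimental_depth
rms_error = np.sqrt(np.mean(errors ** 2))
print(f"RMS error over the sweep: {rms_error:.2f} mm")

# Pick the mesh size whose individual error is smallest
best = mesh_sizes[np.argmin(np.abs(errors))]
print(f"Mesh size with smallest error: {best} mm")
```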
Procedia PDF Downloads 401

253 Selected Macrophyte Populations Promote Coupled Nitrification and Denitrification Function in Eutrophic Urban Wetland Ecosystem
Authors: Rupak Kumar Sarma, Ratul Saikia
Abstract:
Macrophytes constitute a major functional group in eutrophic wetland ecosystems. As a key functional element of freshwater lakes, they play a crucial role in regulating various wetland biogeochemical cycles, as well as in maintaining biodiversity at the ecosystem level. The carbon-rich underground biomass of macrophyte populations may harbour diverse microbial communities with significant potential for maintaining different biogeochemical cycles. The present investigation was designed to study the macrophyte-microbe interaction in coupled nitrification and denitrification, considering Deepor Beel Lake (a Ramsar conservation site) of North East India as a model eutrophic system. Highly eutrophic sites of Deepor Beel were selected based on sediment oxygen demand and inorganic phosphorus and nitrogen (P&N) concentrations. Sediment redox potential and lake depth were chosen as the benchmarks for collecting the plant and sediment samples. The average highest depths in winter (January 2016) and summer (July 2016) were recorded as 20 ft (6.096 m) and 35 ft (10.668 m), respectively. Both sampling depth and sampling season had a distinct effect on the variation in macrophyte community composition. Overall, the dominant macrophyte populations in the lake were Nymphaea alba, Hydrilla verticillata, Utricularia flexuosa, Vallisneria spiralis, Najas indica, Monochoria hastaefolia, Trapa bispinosa, Ipomea fistulosa, Hygrorhiza aristata, Polygonum hydropiper, Eichhornia crassipes and Euryale ferox. Major sediment physicochemical parameters varied distinctly with changes in macrophyte community composition. Quantitative estimation revealed an almost even accumulation of nitrate and nitrite in the sediment samples dominated by the plant species Eichhornia crassipes, Nymphaea alba, Hydrilla verticillata, Vallisneria spiralis, Euryale ferox and Monochoria hastaefolia, which might signify a stable nitrification and denitrification process at the sites dominated by these aquatic plants. This was further examined by a systematic analysis of the microbial populations through culture-dependent and culture-independent approaches. The culture-dependent bacterial community study revealed higher populations of nitrifiers and denitrifiers in the sediment samples dominated by the six macrophyte species. However, the culture-independent study, based on bacterial 16S rDNA V3-V4 metagenome sequencing, revealed broadly similar bacterial phyla in all the sediment samples collected during the study. Thus, the nitrifying and denitrifying molecular markers may be unevenly distributed among the sediment samples collected during the investigation. The diversity and abundance of the nitrifying and denitrifying molecular markers in the sediment samples are under investigation. The role of different aquatic plant functional types in microorganism-mediated nitrogen cycle coupling can thus be explored further on the basis of this initial investigation.
Keywords: denitrification, macrophyte, metagenome, microorganism, nitrification
Procedia PDF Downloads 174

252 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems
Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia
Abstract:
The notion of cloud computing is rapidly gaining ground in the IT industry and is appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software as a service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases with their associated business processes. One approach to offering SaaS is basing the system's architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance. Their requests and configurations might then differ according to specific requirements met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, transactions need to adhere strictly to the tenant's known business processes. This paper argues that security in cloud databases should not be considered as an isolated issue. Rather, it should be included in the initial phases of the database design and monitored continuously throughout the whole development process. This paper aims to identify a number of the most common security risks and threats, specifically in the area of multi-tenant cloud systems. Issues and bottlenecks relating to security risks in cloud databases are surveyed. Some techniques which might be utilised to overcome them are then listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenant-based SaaS. This would then assist cloud service providers to define, implement, and manage security policies as per tenant customisation requirements whilst assuring security for the customers' data.
Keywords: cloud computing, data management, multi-tenancy, requirements, security
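
To make the cross-tenant isolation requirement concrete, here is a minimal, hypothetical Python sketch of one common control in shared-table multi-tenancy: scoping every query by a tenant identifier inside the data-access layer. It illustrates the class of control, not a requirement taken from the paper; the table and column names are assumptions.

```python
import sqlite3

# Shared-table multi-tenancy: rows of all tenants live in one table,
# so every read must be scoped by tenant_id inside the access layer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, tenant_id TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                 [(1, "tenant_a", 10.0), (2, "tenant_b", 99.0)])

def fetch_invoices(conn, tenant_id):
    # The tenant filter is applied here, never left to the caller,
    # so no code path can request another tenant's rows.
    cur = conn.execute(
        "SELECT id, amount FROM invoices WHERE tenant_id = ?", (tenant_id,))
    return cur.fetchall()

print(fetch_invoices(conn, "tenant_a"))  # [(1, 10.0)] - only tenant_a's rows
```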
Procedia PDF Downloads 156

251 Topographic Coast Monitoring Using UAV Photogrammetry: A Case Study in Port of Veracruz Expansion Project
Authors: Francisco Liaño-Carrera, Jorge Enrique Baños-Illana, Arturo Gómez-Barrero, José Isaac Ramírez-Macías, Erik Omar Paredes-Juárez, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga
Abstract:
Topographical changes in coastal areas are usually assessed with airborne LIDAR and conventional photogrammetry. In recent times, Unmanned Aerial Vehicles (UAV) have been used in several photogrammetric applications, including coastline evolution. However, their use goes further: the associated point clouds can be used to generate beach Digital Elevation Models (DEM). We present a methodology for monitoring coastal topographic changes along a 50 km coastline in Veracruz, Mexico, using high-resolution images (less than 10 cm ground resolution) and dense point clouds captured with a UAV. This monitoring takes place in the context of the port of Veracruz expansion project, whose construction began in 2015, and intends to characterize coastal evolution and to prevent and mitigate project impacts on coastal environments. The monitoring began with a historical coastline reconstruction from 1979 to 2015 using aerial photography and Landsat imagery. We could define some patterns: the northern part of the study area showed accretion, while the southern part showed erosion. The study area is located off the port of Veracruz, a touristic and economically important Mexican city where coastal development structures have been built continuously since 1979, and the local beaches of the touristic area are constantly being refilled. Those areas were not described as accretion, since sand-filled trucks refill the beaches in front of the hotel area every month. The marinas and the commercial port of Veracruz, both the old port and the new expansion, were built in the erosion part of the area. Northward from the city of Veracruz the beaches were described as accretion areas, while southward from the city the beaches were described as erosion areas. One of the problems is the expansion of new development in the southern area of the city, using the beach view as an incentive to buy beachfront houses. We assessed coastal changes between seasons using high-resolution images and point clouds during 2016, and preliminary results confirm that UAVs can be used in permanent coastal monitoring programs with excellent performance and detail.
Keywords: digital elevation model, high-resolution images, topographic coast monitoring, unmanned aerial vehicle
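
Seasonal change between two such surveys is typically quantified with a DEM of difference. The following Python sketch illustrates the computation on synthetic grids standing in for two co-registered UAV-derived DEMs; in practice the rasters would be loaded from GeoTIFFs (e.g., with rasterio), and all values here are placeholders.

```python
import numpy as np

# Two small synthetic grids stand in for co-registered elevation rasters (m)
dem_winter = np.random.default_rng(0).uniform(0.0, 3.0, size=(100, 100))
dem_summer = dem_winter + np.random.default_rng(1).normal(0.0, 0.1, (100, 100))

# DEM of difference: positive cells indicate accretion, negative erosion
dod = dem_summer - dem_winter
cell_area = 0.10 * 0.10  # m^2 per cell, assuming a 10 cm ground resolution

accretion_volume = dod[dod > 0].sum() * cell_area   # m^3 gained
erosion_volume = -dod[dod < 0].sum() * cell_area    # m^3 lost
print(f"Accretion: {accretion_volume:.1f} m^3, erosion: {erosion_volume:.1f} m^3")
```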
Procedia PDF Downloads 270

250 Tourism Related Activities and Floating Garden in Inle Lake, Myanmar
Authors: Thel Phyu Phyu Soe
Abstract:
Myanmar started its new political transition in 2011, opening up to trade, encouraging foreign investment, and deepening its financial sector. Tourism is one of the key sectors for advancing the reform process from the perspective of green economy and green growth. Inle Lake, the country's second largest lake, famous for its broad diversity of cultural and natural assets, has become one of the country's main tourism destinations. In the study area, local livelihoods are based on a combination of farming (mainly floating gardens), wage labor, tourism, and small business. However, the Inle Lake water surface area decreased within 20 years, from 67.98 km² in 1990 to 56.63 km² in 2010. Floating garden cultivation (hydroponic farming) is a distinguishing characteristic of Inle Lake. Two adjacent villages (A and B) were selected to compare the relationship between tourism access and agricultural production. Ground truthing, focus group discussions, and in-depth questionnaires with floating gardeners were carried out. In village A, 57% of the respondents relied on tourism as their major income source, while almost all the households in village B relied on floating gardens as their major livelihood. Both satellite image interpretation and community studies highlighted that around 80% of the floating gardens became fallow after the severe drought in 2010 and with easy access to income from tourism-related activities. The villagers can earn 20-30 US$ for a round trip guiding visitors to the major tourist attractions. Even though tourism is the major livelihood option for village A, the poorest households (earning less than 1500 US$ per year) are those who did not own transport for tourism-related activities. In village B, more than 70% of the households relied on floating gardens as their major income source and participated less in tourism-related activities, because they do not have a motorboat stand connected to the major tourist attraction areas. Access to tourism-related activities (having a boat stand from which they can guide tourists by boat and sell local products and souvenirs) has strongly influenced changes in local livelihood options. However, tourism may have impacts that are beneficial for one group of a society but negative for another. Income inequality and negative impacts can only be managed effectively if they have been identified, measured, and evaluated. The severe drought in 2010, the instability of the lake water level, and the high expenses of agriculture pushed local people toward easily accessible tourism-related activities.
Keywords: diminishing, floating garden, livelihood, tourism-related income
Procedia PDF Downloads 129

249 Investigations on Pyrolysis Model for Radiatively Dominant Diesel Pool Fire Using Fire Dynamic Simulator
Authors: Siva K. Bathina, Sudheer Siddapureddy
Abstract:
Pool fires are formed when a flammable liquid accidentally spills on the ground or water and ignites. A pool fire is a buoyancy-driven diffusion flame. Many pool fire accidents have occurred during the processing, handling, and storage of liquid fuels in the chemical and oil industries. Such accidents cause enormous damage to property as well as loss of life. Pool fires are complex in nature due to the strong interaction among combustion, heat and mass transfer, and pyrolysis at the fuel surface. Moreover, the experimental study of such large complex fires involves fire safety issues and practical difficulties. In the present work, large eddy simulations are performed to study such complex fire scenarios using the Fire Dynamics Simulator (FDS). A 1 m diesel pool fire is considered for the studied cases; diesel is chosen as it is the fuel most commonly involved in fire accidents. Fire simulations are performed with two different boundary conditions: in one, the fuel is in the liquid state and a pyrolysis model is invoked; in the other, the fuel is assumed to be initially in the vapor state and the mass loss rate is prescribed. A domain of size 11.2 m × 11.2 m × 7.28 m with a uniform structured grid is chosen for the numerical simulations. A grid sensitivity analysis is performed, and a non-dimensional grid size of 12, corresponding to an 8 cm grid size, is adopted. Flame properties such as the mass burning rate, the irradiance, and the time-averaged axial flame temperature profile are predicted. The predicted steady-state mass burning rate is 40 g/s and is within the uncertainty limits of the previously reported experimental data (39.4 g/s). The irradiance profile along the height at a distance from the fire is somewhat in line with the experimental data, though the location of the maximum irradiance is shifted higher. This may be due to the lack of sophisticated models for species transport along with combustion and radiation in the continuous zone. Furthermore, the axial temperatures are not predicted well (for either boundary condition) in any of the zones. The present study shows that the existing models are not sufficient for modeling blended fuels like diesel. The predictions depend strongly on the experimental values of the soot yield. Future experiments are necessary for generalizing the soot yield for different fires.
Keywords: burning rate, fire accidents, fire dynamic simulator, pyrolysis
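
The non-dimensional grid size quoted above is conventionally the ratio D*/dx between the characteristic fire diameter and the cell size. The Python sketch below reproduces that check; the heat of combustion is an assumed handbook value for diesel, not a figure from the paper, so the resulting ratio is illustrative only.

```python
import math

# FDS grid-sensitivity check: non-dimensional grid size = D* / dx,
# where D* is the characteristic fire diameter.
m_dot = 0.040          # kg/s, steady-state mass burning rate (from the study)
delta_h_c = 44.4e3     # kJ/kg, assumed heat of combustion of diesel
Q_dot = m_dot * delta_h_c  # kW, total heat release rate

rho_inf, c_p, T_inf, g = 1.204, 1.005, 293.0, 9.81  # ambient air properties

# Characteristic fire diameter: D* = (Q / (rho * c_p * T * sqrt(g)))^(2/5)
D_star = (Q_dot / (rho_inf * c_p * T_inf * math.sqrt(g))) ** 0.4

dx = 0.08  # m, grid size used in the study
print(f"Q = {Q_dot:.0f} kW, D* = {D_star:.2f} m, D*/dx = {D_star / dx:.1f}")
```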
Procedia PDF Downloads 196

248 System Analysis on Compact Heat Storage in the Built Environment
Authors: Wilko Planje, Remco Pollé, Frank van Buuren
Abstract:
An increased share of renewable energy sources in the built environment implies the use of energy buffers to match supply and demand and to prevent overloads of existing grids. Compact heat storage systems based on thermochemical materials (TCM) are promising candidates for incorporation in future installations as an alternative to regular thermal buffers, owing to their high energy density (1-2 GJ/m³). In order to determine the feasibility of TCM-based systems at the building level, several installation configurations are simulated and analyzed for different mixes of renewable energy sources (solar thermal, PV, wind, underground, air) for apartments/multi-storey buildings in the Dutch situation. Capacity, volume, and financial costs are calculated. The simulation includes options for current and future wind power (sea and land) and local roof-mounted PV or solar thermal systems. The compact thermal buffer and, optionally, an electric battery (typically 10 kWhe) form the local storage elements for energy matching and shaving purposes. In addition, electrically driven heat pumps (air/ground) can be included for efficient heat generation in the case of power-to-heat. The total local installation provides space heating, domestic hot water, and electricity for a specific case of low-energy apartments (annually 9 GJth + 8 GJe) in the year 2025. The energy balance is completed with grid-supplied non-renewable electricity. Taking into account the grid capacities (a permanent 1 kWe/household), the spatial requirements for the thermal buffer (< 2.5 m³/household), and a desired minimum 90% share of renewable energy in each household's total consumption, the wind-powered scenario results in acceptable sizes of compact thermal buffers, with an energy capacity of 4-5 GJth per household. This buffer is combined with a 10 kWhe battery and an air-source heat pump system. Compact thermal buffers of less than 1 GJ (typically 0.5-1 m³ in volume) are possible when the installed wind power is increased by a factor of 5. With a 15-fold increase in installed wind power, compact heat storage devices compete with 1000 L water buffers. The conclusion is that compact heat storage systems can be of interest in the coming decades in combination with well-retrofitted low-energy residences, based on current trends in installed renewable energy capacity.
Keywords: compact thermal storage, thermochemical material, built environment, renewable energy
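
The buffer sizing above follows directly from dividing the required storage capacity by the material's energy density; the small Python sketch below makes the arithmetic explicit using the figures quoted in the abstract.

```python
# Back-of-the-envelope check of the buffer sizing logic, using the numbers
# from the abstract; the calculation itself is simple division.
buffer_capacity_gj = 4.5          # GJ_th per household (wind-powered scenario)
tcm_energy_density = (1.0, 2.0)   # GJ/m^3, quoted range for TCM storage

for density in tcm_energy_density:
    volume = buffer_capacity_gj / density  # m^3 of storage material
    print(f"At {density} GJ/m^3: {volume:.1f} m^3 per household")
# Only the upper end of the density range stays below the stated
# 2.5 m^3/household limit, illustrating why the material energy density
# is the critical parameter.
```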
Procedia PDF Downloads 244

247 Improved Traveling Wave Method Based Fault Location Algorithm for Multi-Terminal Transmission System of Wind Farm with Grounding Transformer
Authors: Ke Zhang, Yongli Zhu
Abstract:
Due to rapid load growth in today's highly electrified societies and the requirement for green energy sources, large-scale wind farm power transmission systems are constantly developing. Such a system is a typical multi-terminal power supply system whose transmission line network topology is complex. Moreover, it is located in the complex terrain of mountains and grasslands, which increases the possibility of transmission line faults, makes fault locations difficult to find after faults occur, and results in severe wind energy curtailment. In order to solve these problems, a fault location method for multi-terminal transmission lines, based on wind farm characteristics and an improved single-ended traveling wave positioning method, is proposed. By studying the zero-sequence current characteristics associated with the grounding transformer (GT) in existing large-scale wind farms, a criterion for judging the fault interval of the multi-terminal transmission line is obtained. When a ground short-circuit fault occurs, zero-sequence current flows only on the path between the GT and the fault point. Therefore, the interval in which the fault point lies is obtained by determining the path of the zero-sequence current. After the fault interval is determined, the location of the short-circuit fault point is calculated by an improved traveling wave method, which improves the positioning accuracy by combining the single-ended traveling wave method with double-ended electrical data. Furthermore, a method of calculating the traveling wave velocity is derived from the above improvements (in theory, the actual wave velocity), and this further improves the positioning accuracy. Compared with the traditional positioning method, the average positioning error of this method is reduced by 30%. This method overcomes the shortcomings of the traditional method in the fault location of wind farm transmission lines. In addition, it is more accurate than the traditional fixed-wave-velocity method, since it can calculate the wave velocity in real time according to field conditions, solving the problem that a fixed traveling wave velocity cannot be updated with the environment. The method is verified in PSCAD/EMTDC.
Keywords: grounding transformer, multi-terminal transmission line, short circuit fault location, traveling wave velocity, wind farm
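
As a hedged illustration of how single-ended and double-ended information can be combined to solve for the wave velocity rather than assume it, the Python sketch below uses the classic textbook relations. This is a generic construction with invented arrival times, not the paper's exact algorithm.

```python
# Classic relations for a fault at distance x from terminal 1 on a line of
# length L, with wavefront arrival times t1 and t2 at the two terminals and
# the fault-reflected wave arriving back at terminal 1 at time t1r:
#   single-ended:  x = v * (t1r - t1) / 2
#   double-ended:  x = (L + v * (t1 - t2)) / 2
# Equating the two eliminates x and yields the actual wave velocity v.

L = 100.0e3     # m, line length between the two measuring terminals
t1 = 110.0e-6   # s, first wavefront arrival at terminal 1 (placeholder)
t2 = 230.0e-6   # s, first wavefront arrival at terminal 2 (placeholder)
t1r = 330.0e-6  # s, fault-reflected wave arrival at terminal 1 (placeholder)

v = L / (t1r - 2 * t1 + t2)          # m/s, solved wave velocity
x = (L + v * (t1 - t2)) / 2          # m, fault distance from terminal 1

print(f"Wave velocity: {v:.3e} m/s, fault at {x / 1e3:.1f} km from terminal 1")
```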
Procedia PDF Downloads 263

246 Changes in Heavy Metals Bioavailability in Manure-Derived Digestates and Subsequent Hydrochars to Be Used as Soil Amendments
Authors: Hellen L. De Castro e Silva, Ana A. Robles Aguilar, Erik Meers
Abstract:
Digestates are residual by-products, rich in nutrients and trace elements, which can be used as organic fertilisers on soils. However, because these elements are not digested and the dry matter is reduced during the anaerobic digestion process, metal concentrations are higher in digestates than in feedstocks, which might hamper their use as fertilisers according to the threshold values set by some national policies. Furthermore, there is uncertainty regarding the amount of these elements assimilated by some crops, which might result in their bioaccumulation. Therefore, further processing of the digestate to obtain safe fertilising products has been recommended. This research aims to analyse the effect of applying the hydrothermal carbonisation process to manure-derived digestates as a thermal treatment to reduce the bioavailability of heavy metals in mono- and co-digestates derived from pig manure and maize from contaminated land in France. This study examined pig manure collected from a novel stable system (VeDoWs, province of East Flanders, Belgium) that separates the collection of pig urine and faeces, resulting in a solid fraction of manure with a high up-concentration of heavy metals and nutrients. Mono-digestion and co-digestion processes were conducted in semi-continuous reactors for 45 days under mesophilic conditions, after which the digestates were dried at 105 °C for 24 hours. Hydrothermal carbonisation was then applied at a 1:10 solid/water ratio, to guarantee controlled experimental conditions, at different temperatures (180, 200, and 220 °C) and residence times (2 h and 4 h). During the process, the pressure was generated autogenously, and the reactor was cooled down after completing the treatments. The solid and liquid phases were separated through vacuum filtration, and the solid phase of each treatment (the hydrochar) was dried and ground for chemical characterisation. Different fractions (exchangeable/adsorbed fraction F1, carbonate-bound fraction F2, organic matter-bound fraction F3, and residual fraction F4) of several heavy metals (Cd, Cr, and Ni) were determined in digestates and derived hydrochars using the modified Community Bureau of Reference (BCR) sequential extraction procedure. The main results indicated a difference in heavy metal fractionation between digestates and their derived hydrochars; however, the hydrothermal carbonisation operating conditions did not have remarkable effects on heavy metal partitioning among the hydrochars of the proposed treatments. Based on the estimated potential ecological risk assessment, there was a one-level decrease (from considerable to moderate) when comparing the heavy metal partitioning in digestates and derived hydrochars.
Keywords: heavy metals, bioavailability, hydrothermal treatment, bio-based fertilisers, agriculture
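
One common way such fraction data feed a risk indicator is the Risk Assessment Code (RAC), the share of a metal residing in the mobile fractions F1 + F2. The Python sketch below illustrates the calculation with invented concentrations; the banding thresholds are those commonly used in the literature, not values from this study.

```python
def rac_percent(f1, f2, f3, f4):
    """Percentage of the metal held in the mobile fractions F1 + F2."""
    total = f1 + f2 + f3 + f4
    return 100.0 * (f1 + f2) / total

def rac_class(pct):
    # Commonly used RAC bands: <=1 no risk, 1-10 low, 11-30 medium,
    # 31-50 high, >50 very high risk.
    for limit, label in [(1, "no risk"), (10, "low"),
                         (30, "medium"), (50, "high")]:
        if pct <= limit:
            return label
    return "very high"

# Hypothetical Cd fractions (mg/kg) in a digestate vs. its hydrochar
for name, fracs in {"digestate": (0.8, 1.2, 2.0, 4.0),
                    "hydrochar": (0.2, 0.4, 2.8, 4.6)}.items():
    pct = rac_percent(*fracs)
    print(f"{name}: RAC = {pct:.1f}% ({rac_class(pct)})")
```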
Procedia PDF Downloads 100

245 An Analysis of Economical Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment
Authors: Rouzbeh Jafari, Joe Nava
Abstract:
This study includes learnings from engineering practice normally performed on large-scale biohydrogen processes. If scale-up is done properly, biohydrogen can be a reliable pathway for biowaste valorisation. Most studies on biohydrogen process development have used model feedstocks to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies on model feedstocks; rather, it reports economic drivers and technical challenges which help in developing a road map for expanding biohydrogen deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, work has been performed to identify, register, and mitigate the technical drawbacks of large-scale hydrogen production; in this study, those learnings have been applied to the biohydrogen process. Using data collected through a comprehensive literature review, a base case was taken as a reference, and several case studies were performed. Critical parameters of the process were identified, and through common engineering practice (process design, simulation, cost estimation, and life cycle assessment), the impact of these parameters on the commercialisation risk matrix and class 5 cost estimates was reported. The process considered in this study is dark fermentation of food waste and woody biomass. To propose a reliable road map for developing a sustainable biohydrogen production process, the impact of critical parameters on the end-to-end process was studied. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) the multi-product concept. A couple of emerging technologies were also assessed, such as photo-fermentation, integrated dark fermentation, and the use of ultrasound and microwaves to break down the feedstock's complex matrix and increase the overall hydrogen yield. To properly report the impact of each parameter, the KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) $/kg-H2, and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favours feedstocks with higher carbohydrate content. The feedstock composition was varied by increasing one critical element (such as carbohydrate) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base case process was applied to obtain reference KPI values, and modifications such as pre-treatment and feedstock mix-and-match were implemented to investigate changes in the KPIs. The complexity of the feedstock is the main bottleneck to the successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorisation. Hydrogen yield, reaction kinetics, and the performance of key unit operations are highly impacted as the feedstock composition fluctuates over the lifetime of the process or from one case to another. In such cases, the multi-product concept becomes more reliable: the process is designed to deliver not only one target product, such as biohydrogen, but two or more products (biohydrogen and biomethane or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.
Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy
Procedia PDF Downloads 110

244 Urinary Incontinence and Performance in Elite Athletes
Authors: María Barbaño Acevedo Gómez, Elena Sonsoles Rodríguez López, Sofía Olivia Calvo Moreno, Ángel Basas García, Christophe Ramírez Parenteau
Abstract:
Introduction: Urinary incontinence (UI) is defined as the involuntary leakage of urine. Among people who practice sport, its prevalence is 36.1% (95% CI 26.5%-46.8%) and varies, as it seems to depend on the intensity of exercise, the movements involved, and the impact on the ground. High-impact sports are likely to generate higher intra-abdominal pressures, leading to pelvic floor muscle weakness. Although physical exercise reduces the risk of many diseases, the mentality of an elite athlete is not oriented toward optimizing health, and achieving their goals can put their health at risk. Furthermore, feeling or suffering discomfort during training seems to be accepted as normal given the demands of elite sport. Objective: The main objective of the present study was to assess the effects of UI on sports performance in athletes. Methods: This was an observational study conducted in 754 elite athletes. After answering questions about the pelvic floor, UI, and sport-related data, participants completed the International Consultation on Incontinence Questionnaire-UI Short Form (ICIQ-SF) and the ISI (incontinence severity index). Results: 48.8% of the athletes declared having leakage at rest, in preseason, and/or in competition (χ² [3] = 3.64; p = 0.302), with the competition period (29.1%) being when urine leakage is most frequent. Of the elite athletes surveyed, 33% had UI according to the ICIQ-SF (mean age 23.75 ± 7.74 years). Elite athletes with UI (5.31 ± 1.07 days) dedicate significantly more days per week to training [M = 0.28; 95% CI = 0.08-0.48; t(752) = 2.78; p = 0.005] than those without UI. Regarding frequency, 59.7% lose urine once a week, 25.6% lose urine more than 3 times a week, and 14.7% daily. Regarding the amount, approximately 15% claim to lose a moderate to abundant amount. Athletes with the highest number of urine leaks during training are more affected by UI in their daily life (r = 0.259; p = 0.001), present a greater number of losses in their day-to-day life (r = 0.341; p < 0.001), and show greater severity of UI (r = 0.341; p < 0.001). Conclusions: Athletes consider that UI affects them negatively in their daily routine: 30.9% report a severity between moderate and severe in their daily routine, and 29.1% lose urine in the competition period. An interesting fact is that more than half of the sample were elite athletes who compete at the highest level (Olympic Games, World and European Championships), for whom dedication to sport occupies a large part of life. The period in which athletes most frequently suffer urine leakage is competition, when athletes must manage many emotions to achieve their best performance; adding urine losses at those moments may well affect performance.
Keywords: athletes, performance, prevalence, sport, training, urinary incontinence
Procedia PDF Downloads 131

243 Static Charge Control Plan for High-Density Electronics Centers
Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda
Abstract:
Ensuring a safe environment for sensitive electronics boards in places with severe size limitations poses two major difficulties: the control of charge accumulation in floating floors and the prevention of excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions to prevent them. An experiment was carried out in the control room of a Cherenkov telescope, which houses six racks of 2x1x1 m size with independent cooling units. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate cable handling and maintenance. The tests were made by measuring the contact voltage acquired by a person walking along the room in footwear of different qualities. In addition, we took measurements of the voltage accumulated on a person in other situations, such as running or sitting down on and getting up from an office chair. The voltages were recorded in real time with an electrostatic voltmeter and dedicated control software. Peak voltages as high as 5 kV were measured at ambient humidity above 30%, which is within the range of class 3A according to the HBM standard. To complete the results, we performed the same experiment in different spaces with alternative floor types, such as synthetic and earthenware floors, obtaining peak voltages much lower than those measured with the floating synthetic floor. The grounding quality achieved with floating floors can hardly match that typically encountered in standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent overheating of the boards probably contributed significantly to the charge accumulated in the room. During the assessment of the quality of static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is that electrostatic voltmeters might provide different values depending on the humidity conditions and the quality of the ground resistance. In addition, the use of certified antistatic footwear might mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe this can be helpful not only for qualifying static charge control in a laboratory but also for assessing any procedure oriented toward minimizing the risk of electrostatic discharge events.
Keywords: electrostatics, ESD protocols, HBM, static charge control
Procedia PDF Downloads 129

242 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard
Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni
Abstract:
The damage reported by oil and gas industrial facilities has revealed the extreme vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may have devastating and long-lasting consequences for built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks are likely to experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past earthquake reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yield stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then utilized to train a data-driven surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different types of surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported. The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, with data-driven surrogates representing a viable alternative to computationally expensive numerical simulation models.
Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model
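
A minimal sketch of the training step, assuming a tabular data set of tank features, hazard parameters, and a discrete damage-state label. The random placeholder data below stand in for the field reconnaissance records, and the random forest is shown because it is one of the candidate algorithms named above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 500
# Columns: diameter (m), height (m), liquid fill ratio, yield stress (MPa),
# peak ground acceleration (g), magnitude - all invented placeholder ranges
X = np.column_stack([
    rng.uniform(5, 60, n), rng.uniform(5, 20, n), rng.uniform(0.1, 1.0, n),
    rng.uniform(200, 350, n), rng.uniform(0.05, 1.0, n), rng.uniform(5, 8, n),
])
# Placeholder damage state (0 = none ... 3 = severe), loosely tied to PGA
y = np.clip((X[:, 4] * 4 + rng.normal(0, 0.5, n)).astype(int), 0, 3)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_tr, y_tr)
print(f"Hold-out accuracy: {accuracy_score(y_te, model.predict(X_te)):.2f}")
```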
Procedia PDF Downloads 143

241 Greenhouse Gasses’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems
Authors: Alexander J. Severinsky
Abstract:
Radiative forcing from greenhouse gases (GHG) increases the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the temperature increase is delayed over time. The air temperature increase, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The motivation for this inquiry is the author's skepticism that current changes can be explained by a "~1 °C" global average surface temperature rise within the last 50-60 years. The only other plausible cause to explore is that of an atmospheric temperature rise. The study utilizes analyses of air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative forcing of GHGs with a high level of scientific understanding is near 4.7 W/m² on average over the Earth's entire surface in 2018, compared with pre-industrial times in the mid-1700s. The radiative forcing of fast feedbacks from various forms of water contributes approximately an additional ~15 W/m². In 2018, this forcing heated the atmosphere by approximately 5.1 °C, which will create a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise without any additional increases in the concentration of GHGs, primarily carbon dioxide and methane. These findings on the radiative forcing of GHGs in 2018 were applied to estimates of effects on major Earth ecosystems. This additional forcing of nearly 20 W/m² causes an additional ice melting rate of over 90 cm/year, a green leaf temperature increase of nearly 5 °C, and an increase in the work energy of air of approximately 40 J/mol. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increases in forest fires, as well as the increased energy of tornadoes, typhoons, hurricanes, and extreme weather, much more plausibly than a 1.5 °C increase in average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove much more effective when directed toward the reduction of existing GHGs in the atmosphere.
Keywords: greenhouse radiative force, greenhouse air temperature, greenhouse thermodynamics, greenhouse historical, greenhouse radiative force on ice, greenhouse radiative force on plants, greenhouse radiative force in air
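
For context on the order of magnitude of the direct GHG forcing discussed above, the Python sketch below evaluates the standard simplified expression for the direct radiative forcing of CO2, dF = 5.35 ln(C/C0) W/m² (Myhre et al., 1998). This is a well-known approximation, not the author's full multi-gas calculation, and the concentrations are approximate.

```python
import math

C0 = 278.0  # ppm, assumed pre-industrial CO2 concentration (mid-1700s)
C = 408.0   # ppm, approximate CO2 concentration in 2018

delta_F = 5.35 * math.log(C / C0)  # W/m^2, direct CO2 forcing
print(f"Direct CO2 forcing since pre-industrial times: {delta_F:.2f} W/m^2")
# CO2 alone gives ~2 W/m^2; methane, N2O, and other GHGs bring the total
# direct forcing toward the ~4.7 W/m^2 figure cited in the abstract.
```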
Procedia PDF Downloads 104

240 Influence of Controlled Retting on the Quality of the Hemp Fibres Harvested at the Seed Maturity by Using a Designed Lab-Scale Pilot Unit
Authors: Brahim Mazian, Anne Bergeret, Jean-Charles Benezet, Sandrine Bayle, Luc Malhautier
Abstract:
Hemp fibers are increasingly used as reinforcements in polymer matrix composites due to their competitive performance (low density, good mechanical properties, biodegradability) compared to conventional fibers such as glass fibers. However, the huge variation in their biochemical, physical, and mechanical properties limits the use of these natural fibers in structural applications where high consistency and homogeneity are required. In the hemp industry, traditional processes termed field retting are commonly used to facilitate the extraction and separation of stem fibers. This retting treatment consists of spreading out the stems on the ground for a duration ranging from a few days to several weeks. Microorganisms (fungi and bacteria) grow on the stem surface and produce enzymes that degrade the pectic substances in the middle lamellae surrounding the fibers. This operation depends on the weather conditions and is currently carried out very empirically in the fields, resulting in large variability in hemp fiber quality (mechanical properties, color, morphology, chemical composition, etc.). Nonetheless, if controlled, retting might be favorable to good properties of hemp fibers and hence of hemp fiber-reinforced composites. Therefore, the present study aims to investigate the influence of controlled retting, within a purpose-designed environmental chamber (lab-scale pilot unit), on the quality of hemp fibers harvested at the seed maturity growth stage. Various assessments were applied directly to the fibers: color observations, morphological analysis (optical microscopy), surface analysis (ESEM), biochemical analysis (gravimetry), spectrocolorimetric measurements (pectin content), thermogravimetric analysis (TGA), and tensile testing. The results reveal that controlled retting leads to a rapid change of color from yellow to dark grey due to the development of microbial communities (fungi and bacteria) on the stem surface. An increase in the thermal stability of the fibers, due to the removal of non-cellulosic components during retting, is also observed. A separation of bast fibers into elementary fibers occurred, with an evolution of the chemical composition (degradation of pectins) and a rapid decrease in tensile properties (from 380 MPa to 170 MPa after 3 weeks) due to the accelerated retting process. The influence of controlled retting on the properties of the biocomposite material (PP/hemp fibers) is under investigation.
Keywords: controlled retting, hemp fibre, mechanical properties, thermal stability
Procedia PDF Downloads 155

239 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times has been studied through what is known as the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment; its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid there. In addition, one would expect a non-uniform energy distribution across the universe after a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. The evidence of quantum fluctuations at this stage also provides a means of studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed. This research series aims to address the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a "neutral state", with an energy level referred to as "base energy", capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguished from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, the origin of the base energy should also be identified to establish a complete picture. This matter is the subject of the first study in the series, "A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing", where it is discussed in detail. Therefore, the concept proposed in this research series provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and it discusses the possibility of base energy being one of the main building blocks of this universe.
Keywords: big bang, cosmic inflation, birth of universe, energy creation
Procedia PDF Downloads 89

238 Media Impression and Its Impact on Foreign Policy Making: A Study of India-China Relations
Authors: Rosni Lakandri
Abstract:
With the development of science and technology, there has been a complete transformation in the domain of information technology. Particularly after the Second World War and the Cold War period, the role of media and communication technology in shaping political, economic, and socio-cultural developments across the world has been tremendous. Media act as a channel between the governing bodies of the state and the general masses. With the international community constantly talking about the onset of an Asian Century, India and China happen to be the major players in it. Both have long civilizational histories, both are neighboring countries, both are witnessing huge economic growth and, most important of all, both are considered the rising powers of Asia. It cannot be ignored that the two countries went to war with each other in 1962, and that ordinary people and even policymakers on both sides still view each other through this prism. A large contribution to this perception comes from the media coverage on both sides: even though there are spaces of cooperation which they share, the negative aspects of media coverage have tended to influence people's opinions and each government's perception of the other. Therefore, analysis of media impressions in both countries becomes important in order to understand their effect on the larger dynamics of foreign policy toward each other. It is usually said that the media not only act as information providers but also act as an ombudsman to the government, providing a kind of check and balance on governments in taking proper decisions for the people of the country. But in attempting to test this hypothesis, we have to ask: does the media really help shape the political landscape of a country? Therefore, this study rests on the following questions: 1. How do China and India depict each other through their respective news media? 2. How much influence, and of what kind, do the media exert on the policy-making process of each country? 3. How do they shape public opinion in both countries? In order to address these enquiries, the study employs both primary and secondary sources; in generating data and other statistical information, primary sources such as reports, government documents, cartography, and agreements between the governments have been used. Secondary sources such as books, articles, and other writings collected from various sources, as well as opinions from visual media sources such as news clippings and videos on this topic, are also included as a source of on-the-ground information, as this study is not based on fieldwork. As the findings suggest, in the case of China and India the media have certainly affected people's knowledge about political and diplomatic issues and, at the same time, have affected the foreign policy making of both countries. The media have a considerable impact on foreign policy formulation, and we can say there is some mediatization of foreign policy issues in both countries.
Keywords: China, foreign policy, India, media, public opinion
Procedia PDF Downloads 151

237 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer while keeping the elements robust enough to withstand the high acceleration and deceleration forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been carried out in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground, at a height corresponding to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model. Conditional sampling is used to analyse the size and dynamics of the flow structures at the time of maximum velocity in the wake behind the train. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness, and momentum thickness are increased by larger roughness elements, especially when these are applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factor rapidly approaches constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium.
Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
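
For reference, the integral quantities named above are defined as delta* = int (1 - u/U) dy, theta = int (u/U)(1 - u/U) dy, and H = delta*/theta. The Python sketch below evaluates them on a synthetic 1/7 power-law profile standing in for a measured HS-PIV velocity profile; all numbers are illustrative assumptions.

```python
import numpy as np

U = 1.0                   # normalised free-stream (train-relative) velocity
delta = 0.10              # m, assumed boundary layer thickness
y = np.linspace(0.0, delta, 200)
u = U * (y / delta) ** (1.0 / 7.0)  # 1/7 power-law turbulent profile

# Displacement thickness: delta* = int (1 - u/U) dy
delta_star = np.trapz(1.0 - u / U, y)
# Momentum thickness: theta = int (u/U) (1 - u/U) dy
theta = np.trapz((u / U) * (1.0 - u / U), y)
# Form factor H = delta* / theta (~1.3 for a turbulent profile)
H = delta_star / theta

print(f"delta* = {delta_star:.4f} m, theta = {theta:.4f} m, H = {H:.2f}")
```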
Procedia PDF Downloads 305

236 Chemicals to Remove and Prevent Biofilm
Authors: Cynthia K. Burzell
Abstract:
Aequor's founder, a marine and medical microbiologist, discovered novel, non-toxic chemicals in the ocean that uniquely remove biofilm in minutes and prevent its formation for days. These chemicals, and over 70 synthesized analogs that Aequor developed, can replace thousands of toxic biocides used in consumer and industrial products and, as new drug candidates, kill biofilm-forming bacterial and fungal Superbugs: the antimicrobial-resistant (AMR) pathogens for which there is no cure. Cynthia Burzell, PhD, is a marine and medical microbiologist studying natural mechanisms that inhibit biofilm formation on surfaces in contact with water. In 2002, she discovered a new genus and several new species of marine microbes that produce small molecules that remove biofilm in minutes and prevent its formation for days. The molecules include new antimicrobials that can replace thousands of toxic biocides used in consumer and industrial products and can be developed into new drug candidates to kill the biofilm-forming bacteria and fungi, including the antimicrobial-resistant (AMR) Superbugs for which there is no cure. Today, Aequor has over 70 chemicals, divided into categories: (1) Novel natural chemicals. Lonza validated that the primary natural chemical removed biofilm in minutes and stated: "Nothing else known can do this at non-toxic doses." (2) Specialty chemicals. 25 of these structural analogs are already approved under the U.S. Environmental Protection Agency (EPA)'s Toxic Substances Control Act, certified as "green", and available for immediate sale. These have been validated for the following agro-industrial verticals: (a) Surface cleaners: The U.S. Department of Agriculture validated that low concentrations of Aequor's formulations provide deep cleaning of inert, nano, and organic surfaces and materials. (b) Water treatments: NASA validated that one dose of Aequor's treatment in the International Space Station's water reuse/recycling system lasted 15 months without replenishment. The DOE validated that the treatments lower energy consumption by over 10% in buildings and industrial processes. Future validations include pilot projects with the EPA to test efficacy in hospital plumbing systems. (c) Algae cultivation and yeast fermentation: The U.S. Department of Energy (DOE) validated that Aequor's treatment boosted the biomass of renewable feedstocks by 40% in half the time, increasing the profitability of biofuels and bio-based co-products. The DOE also validated increased yields and crop protection of algae under cultivation in open ponds. A private oil and gas company validated decontamination of oilfield water. (3) New structural analogs. These kill Gram-negative and Gram-positive bacteria and fungi alone, in combination with each other, and in combination with low doses of existing, ineffective antibiotics (including penicillin), "potentiating" them to kill AMR pathogens at doses too low to trigger resistance. Both the U.S. National Institutes of Health (NIH) and the Department of Defense (DOD) have executed contracts with Aequor to provide the pre-clinical trials needed for these new drug candidates to enter the regulatory approval pipelines. Aequor seeks partners/licensees to commercialize its specialty chemicals, as well as support to evaluate the optimal methods to scale up several new structural analogs, via activity-guided fractionation and/or biosynthesis, in order to initiate the NIH and DOD pre-clinical trials.
Keywords: biofilm, potentiation, prevention, removal
Procedia PDF Downloads 99
235 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image
Authors: Justyna Humięcka-Jakubowska
Abstract:
1. Background in music analysis. Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail rather than the evolution of an overall concept. Since music is a "time art", it follows that questions of form cannot be entirely detached from considerations of time. One could say that composers tend either to regard time as a space gradually, and partly intuitively, filled, or to look for a specific strategy to occupy it. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of "form schemas", that is, often a single-page representation of the entire structure of a piece. 2. Background in music technology. Sonic Visualiser (SV) is a program used to study a musical recording. It is an open-source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools designed with useful default parameters for musical analysis. Additionally, SV supports the Vamp plugin format, which provides analyses such as structural segmentation. 3. Aims. The aim of my paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that "traditional" music-analytic methods do not allow one to indicate the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main contribution. Stockhausen dealt with the most diverse musical problems by the most varied methods. A characteristic that never ceased to occupy the centre of his thought and work was the quest for a new balance founded upon an acute connection between speculation and intuition. In the case of Mikrophonie I (1964) for tam-tam and six players, Stockhausen makes a distinction between the "connection scheme", which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a "form scheme", which is what one can hear on the CD recording. In the current study, the insight into the compositional strategy chosen by Stockhausen was compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analysed in terms of both melodic/voice and timbre evolution. 5. Implications. The current study shows how musical structures determine the musical surface. My general assumption is that, while listening to music, we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music.
Keywords: automated analysis, composer's strategy, Mikrophonie I, musical surface, Stockhausen
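To make the automated-segmentation step concrete, the following is a minimal Python sketch of the kind of structural segmentation that SV's Vamp plugins provide, using the librosa library rather than Sonic Visualiser itself; the audio file name and the number of segments are hypothetical assumptions, not taken from the study.

```python
# Minimal sketch of automated structural segmentation, analogous to the
# segmentation Vamp plugins used in Sonic Visualiser. Assumes librosa is
# installed; "mikrophonie.wav" and k = 10 are hypothetical.
import librosa

y, sr = librosa.load("mikrophonie.wav", sr=None, mono=True)

# Timbre-oriented features (MFCCs) suit a tam-tam piece better than pitch chroma.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# Agglomeratively cluster feature frames into k contiguous segments.
k = 10  # hypothetical number of formal sections
bounds = librosa.segment.agglomerative(mfcc, k)
bound_times = librosa.frames_to_time(bounds, sr=sr)

for i, t in enumerate(bound_times):
    print(f"Segment {i + 1} starts at {t:.2f} s")
```

Comparing such machine-derived boundaries with the published form scheme is one way to relate the underlying structure to the perceived surface.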
Procedia PDF Downloads 297
234 Deep Learning for Image Correction in Sparse-View Computed Tomography
Authors: Shubham Gogri, Lucia Florescu
Abstract:
Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose to patients and minimizing scanning time. One solution is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data results in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach employed Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks, along with the Charbonnier loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of a perceptual loss, calculated on feature vectors extracted from a VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including the Wasserstein, Charbonnier, and perceptual losses. The outcomes demonstrated significant image-quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index (SSIM) between the corrected images and the ground truth. Furthermore, learning curves and qualitative comparisons provided further evidence of the enhanced image quality and the network's increased stability, while pixel intensity values were preserved. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net
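As an illustration of the Charbonnier and perceptual losses described above, here is a minimal PyTorch sketch; the epsilon value, VGG layer cutoff, and loss weights are assumptions rather than the study's settings, and VGG input normalization is omitted for brevity.

```python
# Hedged sketch of the two auxiliary losses used alongside the adversarial
# objective. Not the authors' exact implementation.
import torch
import torch.nn as nn
from torchvision.models import vgg16, VGG16_Weights

def charbonnier_loss(pred, target, eps=1e-3):
    # Smooth L1-like penalty: sqrt((x - y)^2 + eps^2), robust to outliers.
    return torch.mean(torch.sqrt((pred - target) ** 2 + eps ** 2))

class PerceptualLoss(nn.Module):
    """L1 distance between VGG16 (ImageNet-pretrained) feature maps."""
    def __init__(self, layer_cutoff=16):  # layer cutoff is an assumption
        super().__init__()
        features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features
        self.extractor = nn.Sequential(*list(features.children())[:layer_cutoff]).eval()
        for p in self.extractor.parameters():
            p.requires_grad = False

    def forward(self, pred, target):
        # VGG expects 3-channel input; CT slices are single-channel (N, 1, H, W).
        pred3, target3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        return nn.functional.l1_loss(self.extractor(pred3), self.extractor(target3))

# Example combined generator objective (the weights 10.0 and 0.1 are hypothetical):
# loss = adv_loss + 10.0 * charbonnier_loss(fake, real) + 0.1 * perceptual(fake, real)
```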
Procedia PDF Downloads 162
233 Thermal Comfort and Outdoor Urban Spaces in the Hot Dry City of Damascus, Syria
Authors: Lujain Khraiba
Abstract:
Recently, there has been broad recognition that micro-climate conditions contribute to the quality of life in outdoor urban spaces, from both economic and social viewpoints. The consideration of urban micro-climate and outdoor thermal comfort in urban design and planning processes has become one of the important aspects of current related studies. However, these aspects are so far not considered in urban planning regulations in practice, and these regulations are often poorly adapted to the local climate and culture. Therefore, there is a huge need to adapt the existing planning regulations to the local climate, especially in cities with extremely hot weather conditions. The overall aim of this study is to point out the complexity of the relationship between urban planning regulations, urban design, micro-climate, and outdoor thermal comfort in the hot dry city of Damascus, Syria. The main aim is to investigate the temporal and spatial effects of micro-climate on urban surface temperatures and outdoor thermal comfort in different urban design patterns resulting from urban planning regulations during extreme summer conditions. In addition, studying different alternatives for mitigating surface temperature and thermal stress is also part of the aim. The novelty of this study is to highlight the combined effect of urban surface materials and vegetation on improving the thermal environment. This study is based on micro-climate simulations using ENVI-met 3.1. The input data is calibrated against micro-climate fieldwork conducted in different urban zones in Damascus. Different urban forms and geometries, including the old and the modern parts of Damascus, are thermally evaluated. The Physiological Equivalent Temperature (PET) index is used as an indicator for outdoor thermal comfort analysis. The study highlights the shortcomings of existing planning regulations in terms of solar protection, especially at street level. The results show that the surface temperatures in Old Damascus are lower than in the modern part. This is basically due to the difference in urban geometries: in Old Damascus they prevent solar radiation from reaching the ground and heating up the surface, whereas in modern Damascus the streets are laid out as wide spaces with high Sky View Factor values (SVF of about 0.7). Moreover, the canyons in the old part are paved in cobblestones, whereas asphalt is the main material used in the streets of modern Damascus. Furthermore, Old Damascus is less thermally stressful than the modern part (the difference in the PET index is about 10 °C). The thermal situation improves when different vegetation scenarios are considered (an improvement of 13 °C in surface temperature is recorded in modern Damascus). The study recommends that a detailed landscape code at street level be integrated into the urban regulations of Damascus in order to achieve better urban development in harmony with micro-climate and comfort. Such a strategy would be very useful in decreasing urban warming in the city.
Keywords: micro-climate, outdoor thermal comfort, urban planning regulations, urban spaces
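To make the geometric argument concrete, here is a back-of-envelope Python sketch using the common symmetric-canyon approximation SVF = cos(arctan(2H/W)) at the canyon floor centre; the canyon heights and widths below are illustrative assumptions, not the study's measured Damascus values.

```python
# Rough relation between street-canyon geometry and the Sky View Factor
# cited above. Dimensions are hypothetical.
import math

def canyon_svf(height_m, width_m):
    # Symmetric-canyon approximation at the floor centre.
    return math.cos(math.atan2(2.0 * height_m, width_m))

old_city = canyon_svf(height_m=9.0, width_m=3.0)    # narrow traditional alley
modern = canyon_svf(height_m=15.0, width_m=30.0)    # wide modern street

print(f"Old-city canyon SVF ~ {old_city:.2f}")  # low SVF: strong mutual shading
print(f"Modern street SVF  ~ {modern:.2f}")     # high SVF, near the ~0.7 cited
```

The contrast between the two values mirrors the reported difference in solar exposure between the old and modern urban fabrics.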
Procedia PDF Downloads 485
232 Variation of Carbon Isotope Ratio (δ13C) and Leaf-Productivity Traits in Aquilaria Species (Thymelaeceae)
Authors: Arlene López-Sampson, Tony Page, Betsy Jackes
Abstract:
The Aquilaria genus produces a highly valuable fragrant oleoresin known as agarwood. Agarwood forms in a few trees in the wild as a response to injury or pathogen attack. The resin is used in the perfume and incense industries and in medicine. Cultivation of Aquilaria species as a sustainable source of the resin is now a common strategy. Physiological traits are frequently used as a proxy for crop and tree productivity. Aquilaria species growing in Queensland, Australia were studied to investigate the relationship between leaf-productivity traits and tree growth. Specifically, 28 trees, representing 12 plus-trees and 16 trees from yield plots, were selected for carbon isotope analysis (δ13C) and the monitoring of six leaf attributes. Trees were grouped into four diametric classes (diameter at 150 mm above ground level), ensuring that the variability in growth of the whole population was sampled. A model-averaging technique based on Akaike's information criterion (AIC) was applied to identify whether leaf traits could assist in predicting diameter. Carbon isotope values were correlated with height classes and leaf traits to determine any relationship. On average, four leaves per shoot were recorded, and a shoot produces approximately one new leaf per week. The rate of leaf expansion was estimated at 1.45 mm day⁻¹. There were no statistical differences between diametric classes in leaf expansion rate or in the number of new leaves per week (p > 0.05). δ13C values in leaves of Aquilaria species ranged from -25.5 ‰ to -31 ‰, with an average of -28.4 ‰ (± 1.5 ‰). Only 39% of the variability in height can be explained by leaf δ13C. Leaf δ13C and nitrogen content values were positively correlated. This relationship implies that leaves with higher photosynthetic capacities also had lower intercellular carbon dioxide concentrations (ci/ca) and less depleted 13C values. Most of the predictor variables have a weak correlation with diameter (D). However, analysis of the 95% confidence set of best-ranked regression models indicated that the predictors most likely to explain growth in Aquilaria species are petiole length (PeLen), δ13C (true13C) and δ15N (true15N) values, leaf area (LA), specific leaf area (SLA), and the number of new leaves produced per week (NL.week). The model constructed with PeLen, true13C, true15N, LA, SLA, and NL.week could explain 45% (R² = 0.4573) of the variability in D. The leaf traits studied gave a better understanding of the leaf attributes that could assist in the selection of high-productivity trees in Aquilaria.
Keywords: 13C, petiole length, specific leaf area, tree growth
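A hedged sketch of how such an AIC-based screening of candidate regression models might look in Python with statsmodels; the data file, its contents, and the diameter column "D" are hypothetical, with predictor column names mirroring the abbreviations used above.

```python
# Sketch of AIC-ranked candidate models for diameter (D) from leaf traits,
# in the spirit of the model averaging described above. Data are hypothetical.
import itertools
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("aquilaria_traits.csv")  # hypothetical data file
predictors = ["PeLen", "true13C", "true15N", "LA", "SLA", "NL.week"]

results = []
for r in range(1, len(predictors) + 1):
    for combo in itertools.combinations(predictors, r):
        X = sm.add_constant(df[list(combo)])
        fit = sm.OLS(df["D"], X).fit()
        results.append((fit.aic, combo, fit.rsquared))

# Rank candidate models by AIC (lower is better) and show the top five.
for aic, combo, r2 in sorted(results)[:5]:
    print(f"AIC={aic:.1f}  R2={r2:.3f}  predictors={combo}")
```

The 95% confidence set of models referred to above would be drawn from the low-AIC end of such a ranking.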
Procedia PDF Downloads 509
231 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system, and it also helps improve the system's performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular in automatic MT evaluation are score-based, such as the BLEU score, while others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation (SMT) model and one from a neural machine translation (NMT) model. The vector representations consist of automatically extracted word embeddings and string-like language-independent features, and they are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested in this schema: (i) fully connected dense layers, (ii) a Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM) layers. Experiments show that all tested architectures achieved better results than some well-known baseline approaches, such as Random Forest (RF) and Support Vector Machine (SVM). Better accuracy results are obtained when LSTM layers are used in the schema. In terms of balance between the classes, better results are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with certain figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to investigate why all the classifiers led to worse accuracy results for Italian than for Greek, given that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
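The following is a minimal PyTorch sketch of one plausible reading of the pairwise schema: an LSTM encoder applied to the reference and the two MT outputs, followed by dense layers that predict which system produced the better translation. All dimensions and the shared-encoder design are assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a pairwise SMT-vs-NMT classifier over embedding sequences.
import torch
import torch.nn as nn

class PairwiseMTClassifier(nn.Module):
    def __init__(self, emb_dim=300, hidden=128):  # dimensions are hypothetical
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        # Three encoded segments (reference, SMT output, NMT output) concatenated.
        self.head = nn.Sequential(
            nn.Linear(3 * hidden, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),  # P(NMT output is the better one)
        )

    def encode(self, seq):
        _, (h, _) = self.encoder(seq)  # final hidden state as sentence vector
        return h[-1]

    def forward(self, ref, smt, nmt):
        z = torch.cat([self.encode(ref), self.encode(smt), self.encode(nmt)], dim=1)
        return self.head(z)

# Toy usage with random word-embedding sequences (batch=2, length=20, dim=300):
model = PairwiseMTClassifier()
ref, smt, nmt = (torch.randn(2, 20, 300) for _ in range(3))
print(model(ref, smt, nmt).shape)  # torch.Size([2, 1])
```

Swapping the LSTM encoder for dense or convolutional layers would yield the other two architectures compared in the study.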
Procedia PDF Downloads 132
230 Date Palm Wastes Turning into Biochars for Phosphorus Recovery from Aqueous Solutions: Static and Dynamic Investigations
Authors: Salah Jellali, Nusiba Suliman, Yassine Charabi, Jamal Al-Sabahi, Ahmed Al Raeesi, Malik Al-Wardy, Mejdi Jeguirim
Abstract:
Huge amounts of agricultural biomass are produced worldwide. At the same time, large quantities of phosphorus are discharged annually into water bodies, with possible serious effects on environmental quality. The main objective of this work is to turn a local Omani biomass (date palm frond wastes: DPFW) into an effective material for phosphorus recovery from aqueous solutions, and to reuse this P-loaded material in agriculture as an eco-friendly amendment. To this end, the raw DPFW were first impregnated for 24 h with separate 1 M salt solutions of CaCl₂, MgCl₂, FeCl₃, AlCl₃, and a mixture of MgCl₂/AlCl₃, and then pyrolyzed under N₂ flow at 500 °C for 2 hours using an adapted tubular furnace (Carbolite, UK). The synthesized biochars were deeply characterized in terms of their morphology, structure, texture, and surface chemistry. These analyses included a scanning electron microscope (SEM) coupled with an energy-dispersive X-ray spectrometer (EDS), X-ray diffraction (XRD), Fourier-transform infrared (FTIR) spectroscopy, sorption micromeritics, and an X-ray fluorescence (XRF) apparatus. The biochars' efficiency in recovering phosphorus was then investigated in batch mode for various contact times (1 min to 3 h), aqueous pH values (3 to 11), initial phosphorus concentrations (10-100 mg/L), and the presence of anions (nitrates, sulfates, and chlorides). In a second step, dynamic assays using laboratory columns (height of 30 cm and diameter of 3 cm) were performed to investigate phosphorus recovery by the biochar modified with the Mg/Al mixture. The effects of the initial P concentration (25-100 mg/L), the bed mass (3 to 8 g), and the flow rate (10-30 mL/min) were assessed. Experimental results showed that the biochars' physico-chemical properties were highly dependent on the modifying salt used. The main affected parameters were the specific surface area, the microporosity area, and the surface chemistry (pH of the point of zero charge and available functional groups). These characteristics significantly affected the phosphorus recovery efficiency from aqueous solutions. Indeed, the P removal efficiency in batch mode varies from about 5 mg/g for the Fe-modified biochar to more than 13 mg/g for the biochar functionalized with Mg/Al layered double hydroxides. Moreover, P recovery appears to be a time-dependent process, significantly affected by the pH of the aqueous media and by the presence of foreign anions, owing to competition phenomena. The laboratory column study of phosphorus recovery by the biochar functionalized with Mg/Al layered double hydroxides showed that the process is affected by the phosphorus concentration used, the flow rate, and especially the column bed mass. Indeed, the recovered phosphorus amount increased from about 4.9 to more than 9.3 mg/g for biochar bed masses of 3 and 8 g, respectively. This work proved that salt-modified palm-frond-derived biochars can be considered attractive and promising materials for phosphorus recovery from aqueous solutions, even under dynamic conditions. The valorization of these P-loaded modified biochars as an eco-friendly amendment for agricultural soils would promote sustainability and circular-economy concepts in the management of both liquid and solid wastes.
Keywords: date palm wastes, Mg/Al double-layered hydroxides functionalized biochars, phosphorus, recovery, sustainability, circular economy
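A short illustrative sketch of fitting an isotherm to batch adsorption data of the kind reported here, assuming a Langmuir model (the abstract does not state which isotherm model the study used); the data points and initial guesses are placeholders, not the study's measurements.

```python
# Hedged sketch: Langmuir isotherm fit to hypothetical batch P-adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    # qe = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

Ce = np.array([2.0, 8.0, 20.0, 45.0, 70.0])   # equilibrium P conc. (mg/L), hypothetical
qe = np.array([3.1, 7.4, 10.2, 12.1, 12.8])   # adsorbed P (mg/g), hypothetical

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=(13.0, 0.1))
print(f"qmax ~ {qmax:.1f} mg/g, KL ~ {KL:.3f} L/mg")
```

A fitted qmax in the low teens of mg/g would be consistent with the >13 mg/g batch capacity reported for the Mg/Al-functionalized biochar.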
Procedia PDF Downloads 81
229 High Purity Lignin for Asphalt Applications: Using the Dawn Technology™ Wood Fractionation Process
Authors: Ed de Jong
Abstract:
Avantium is a leading technology development company and a frontrunner in renewable chemistry. Avantium develops disruptive technologies that enable the production of sustainable, high-value products from renewable materials, and it actively seeks out collaborations and partnerships with like-minded companies and academic institutions globally to speed up the introduction of chemical innovations in the marketplace. In addition, Avantium helps companies accelerate their catalysis R&D to improve efficiencies and deliver increased sustainability, growth, and profits by providing proprietary systems and services in this regard. Many chemical building blocks and materials can be produced from biomass, nowadays mainly from first-generation carbohydrates, but the potential for competition with the human food chain leads brand owners to look for strategies to transition from first- to second-generation feedstock. The use of non-edible lignocellulosic feedstock is an equally attractive source of chemical intermediates and an important part of the solution to these global issues (Paris targets). Avantium's Dawn Technology™ separates the glucose, mixed sugars, and lignin available in non-food agricultural and forestry residues such as wood chips, wheat straw, bagasse, empty fruit bunches, or corn stover. The resulting very pure lignin is dense in energy and can be used for energy generation; however, such a material might preferably be deployed in higher-added-value applications. Bitumen, which is fossil-based, is mostly used in paving applications. Traditional hot-mix asphalt emits large quantities of the greenhouse gases CO₂, CH₄, and N₂O, which is unfavorable for obvious environmental reasons. Another challenge for the bitumen industry is that the petrochemical industry is becoming more and more efficient at breaking down higher-chain hydrocarbons into lower-chain hydrocarbons with higher added value than bitumen, which has a negative effect on the availability of bitumen. The asphalt market, as well as governments, is looking for alternatives with higher sustainability in terms of GHG emissions. The use of alternative sustainable binders, which can (partly) replace bitumen, contributes to reducing GHG emissions and at the same time broadens the availability of binders. As lignin is a major component (around 25-30%) of lignocellulosic material, which includes terrestrial plants (e.g., trees, bushes, and grass) and agricultural residues (e.g., empty fruit bunches, corn stover, sugarcane bagasse, straw, etc.), it is highly available globally. Its chemical structure shows resemblance to the structure of bitumen, and it could therefore be used as an alternative to bitumen in applications such as roofing or asphalt. Applications such as the use of lignin in asphalt require both fundamental research and practical proof under relevant use conditions. From a fundamental point of view, rheological aspects as well as mixing are key criteria; from a practical point of view, behavior under real road conditions is key (how easily the asphalt can be prepared, how easily it can be applied on the road, what its durability is, etc.). The paper will discuss the fundamentals of the use of lignin as a bitumen replacement, as well as the status of the different demonstration projects in Europe using lignin as a partial bitumen replacement in asphalt, and will especially present the results of using Dawn Technology™ lignin as a partial replacement of bitumen.
Keywords: biorefinery, wood fractionation, lignin, asphalt, bitumen, sustainability
Procedia PDF Downloads 154
228 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle
Authors: Aloke Bapli, Debabrata Seth
Abstract:
Graphene oxide (GO) is the two-dimensional (2D) nanoscale allotrope of carbon. Its physicochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, make it an important candidate for various modern applications such as drug delivery, supercapacitors, and sensors. GO has also been used in the photothermal treatment of cancers and of Alzheimer's disease. GO was chosen for this work because it is a surface-active molecule: it carries a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl, and epoxide groups, on its surface and in its basal plane, so it can easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and thereby modulate the photophysics of probe molecules. Several spectroscopic techniques were used in this work. Ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of the hydrophilic molecule 7-(N,N′-diethylamino)coumarin-3-carboxylic acid (7-DCCA) in reverse micelles containing GO. The photophysics of the dye is modulated in the presence of GO compared with its photophysics in reverse micelles without GO. We report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare this system with normal reverse micelles (reverse micelles in the absence of GO) using the 7-DCCA molecule. The absorption maxima of 7-DCCA were blue-shifted and the emission maxima red-shifted in GO-containing reverse micelles compared to normal reverse micelles. The rotational relaxation time in GO-containing reverse micelles is always faster than in normal reverse micelles. The solvent relaxation time at lower w₀ values is always slower in GO-containing reverse micelles than in normal reverse micelles, while at higher w₀ the solvent relaxation time of GO-containing reverse micelles becomes almost equal to that of normal reverse micelles. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles compared to normal reverse micelles because the presence of GO increases the polarity of the system, and as the polarity increases, the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is shorter than in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.
Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation
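As an illustration of how a rotational relaxation time is typically extracted from TCSPC anisotropy decays, here is a minimal Python sketch fitting a single-exponential model r(t) = r0·exp(-t/θ) to a synthetic trace; the decay parameters are placeholders, and the authors' actual fitting procedure may differ (multi-exponential fits are common in micellar systems).

```python
# Hedged sketch: single-exponential fit of a fluorescence anisotropy decay.
import numpy as np
from scipy.optimize import curve_fit

def anisotropy(t, r0, theta):
    # r(t) = r0 * exp(-t / theta); theta is the rotational relaxation time.
    return r0 * np.exp(-t / theta)

t = np.linspace(0, 5, 100)  # time (ns)
r = anisotropy(t, 0.35, 1.2) + np.random.normal(0, 0.005, t.size)  # synthetic trace

(r0_fit, theta_fit), _ = curve_fit(anisotropy, t, r, p0=(0.4, 1.0))
print(f"r0 ~ {r0_fit:.2f}, rotational relaxation time ~ {theta_fit:.2f} ns")
```

A faster (smaller) θ in the GO-containing system, compared at the same w₀, would correspond to the faster rotational relaxation reported above.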
Procedia PDF Downloads 155
227 Implementation of Learning Disability Annual Review Clinics to Ensure Good Patient Care, Safety, and Equality in Covid-19: A Two Pass Audit in General Practice
Authors: Liam Martin, Martha Watson
Abstract:
Patients with learning disabilities (LD) are at increased risk of physical and mental illness due to health inequality. To address this, NICE recommends that people with a learning disability should have an annual LD health check from the age of 14. This consultation should include a holistic review of the patient's physical, mental, and social health needs, with a view to creating an action plan to support the patient's care. The expected standard set by the Quality and Outcomes Framework (QOF) is that each general practice should review at least 75% of its LD patients annually. During COVID-19, there have been barriers to primary care, including health anxiety, the shift to online general practice, and the increase in GP workloads. A surgery in North London wanted to assess whether it was falling short of the expected standard for LD patient annual reviews in order to optimize care post COVID-19. A baseline audit was completed to assess how many LD patients had received their annual review over the period 29th September 2020 to 29th September 2021. This information was accessed using the EMIS Web Health Care System (EMIS). Patients included were aged 14 and over, as per QOF standards. Doctors were not notified that this audit was taking place. Following the results of this audit, the creation of learning disability clinics was recommended. These clinics were recommended to be on the ground floor and to provide dedicated time for LD reviews. The audit was repeated via the same process 6 months later, in March 2022. At the time of the baseline audit, there were 71 patients aged 14 and over on the LD register. 54% of these LD patients were found to have documentation of an annual LD review within the last 12 months, and none of the LD patients between the ages of 14 and 18 had received their annual review. The results were discussed with the practice, and dedicated clinics were set up to review its LD patients. The second pass of the audit, completed 6 months later, showed an improvement: 84% of the LD patients registered at the surgery now had a documented annual review within the last 12 months, and 78% of the patients between the ages of 14 and 18 had now been reviewed. The baseline audit revealed that the practice was not meeting the expected standard for annual LD health checks as outlined by QOF, with the most neglected patients being between the ages of 14 and 18. Identification and awareness of this vulnerable cohort are important so that measures can be put in place to support their physical, mental, and social wellbeing. Other practices could consider an audit of their annual LD health checks to make sure they are practising within QOF standards and, if there is a shortfall, could consider implementing similar actions to those used here: dedicated clinics for LD patient reviews.
Keywords: COVID-19, learning disability, learning disability health review, quality and outcomes framework
Procedia PDF Downloads 85
226 Exploring the Practices of Global Citizenship Education in Finland and Scotland
Authors: Elisavet Anastasiadou
Abstract:
Global citizenship refers to an economic, social, political, and cultural interconnectedness; it is inextricably intertwined with social justice, respect for human rights, peace, and a sense of responsibility to act on a local and global level. It aims to be transformative and to enhance critical thinking and participation through pedagogical approaches based on social justice and democracy. The purpose of this study is to explore how Global Citizenship Education (GCE) is presented and implemented in two educational contexts, specifically in the curricula and pedagogical practices of primary education in Finland and Scotland. The impact of GCE is recognized by international institutions as a means for further development, and both the Finnish and Scottish curricula acknowledge its significance, emphasizing the student's ability to act and succeed in diverse and global communities. This comparative study should provide a good basis for further developing teaching practices based on an informed understanding of how GCE is constrained or enabled from two different perspectives, extend the methodological applications of the theory of Practice Architectures, and provide critical insights into GCE as a theoretical notion adopted by national and international educational policy. The study is directly connected with global citizenship, aiming at future and societal change. The empirical work employs a multiple case study approach, including interviews and analysis of existing documents (textbooks, curricula). The data consist of the Finnish and Scottish curricula. A systematic analysis of the curricula in relation to GCE will offer insights into how the aims of GCE are presented and framed within the two contexts. This will be achieved using the theory of Practice Architectures. Curricula are official policy documents (texts) that frame and envisage pedagogical practices. Practices, according to the theory of Practice Architectures, consist of sayings, doings, and relatings. Hence, even if the text analysis focuses on the semantic space (sayings), which is prefigured by the cultural-discursive arrangements, and on the relatings, which are prefigured by the socio-political arrangements, it will inevitably reveal information about the doings, which are prefigured by the material-economic arrangements, as all three hang together in practices. The results will assist educators in making changes to their teaching and enhance their self-conscious understanding of the history-making significance of their practices. The study also has the potential to inform reform and to focus attention on educationally relevant issues. Thus, it will be able to open the ground for interventions and further research while considering the societal demands of a world in change.
Keywords: citizenship, curriculum, democracy, practices
Procedia PDF Downloads 207
225 Rumen Metabolites and Microbial Load in Fattening Yankasa Rams Fed Urea and Lime Treated Groundnut (Arachis Hypogeae) Shell in a Complete Diet
Authors: Bello Muhammad Dogon Kade
Abstract:
The study was conducted to determine the effect of treated groundnut (Arachis hypogaea) shells in a complete diet on rumen metabolites and microbial load in fattening Yankasa rams. The study was conducted at the Teaching and Research Farm (Small Ruminants Unit) of the Animal Science Department, Faculty of Agriculture, Ahmadu Bello University, Zaria. Each kilogram of groundnut shell was treated with 5% urea or 5% lime for treatments 2 (UTGNS) and 3 (LTGNS), respectively. For treatment 4 (ULTGNS), 1 kg of groundnut shell was treated with 2.5% urea and 2.5% lime, while the shell in treatment 1 was not treated (UNTGNS). Sixteen Yankasa rams were used and randomly assigned to the four treatment diets, with four animals per treatment, in a completely randomized design (CRD). The diet was formulated to have a 14% crude protein (CP) content. Rumen fluid was collected from each ram at the end of the experiment at 0 and 4 hours post-feeding. The samples were put in 30 mL bottles and acidified with 5 drops of sulphuric acid (0.1 N H₂SO₄) to trap ammonia. The results showed that the mean values of NH₃-N differed significantly (P<0.05) among the treatment groups, with rams on the ULTGNS diet having the highest value (31.96 mg/L). TVFs were significantly (P<0.05) higher in rams fed the UNTGNS diet, which were also higher in total nitrogen. Regarding the effect of sampling period, NH₃-N, TVFs, and TP were significantly (P<0.05) higher in rumen fluid collected 4 hours post-feeding across the treatment groups, whereas rumen fluid pH was significantly (P<0.05) higher at 0 hours post-feeding in rams on all the treatment diets. In the treatment-by-sampling-period interaction, animals on the ULTGNS diet had the highest mean NH₃-N values at both 0 and 4 hours post-feeding, significantly (P<0.05) higher than in rams on the other treatment diets. Rams on the UTGNS diet had the highest bacterial load of 4.96×10⁵/mL, significantly (P<0.05) higher than the microbial load of animals fed the UNTGNS, LTGNS, and ULTGNS diets. Protozoa counts were also significantly (P<0.05) higher in rams fed the UTGNS diet, followed by the ULTGNS diet. There was no significant difference (P>0.05) in the bacterial counts of the animals between 0 and 4 hours post-feeding, but rumen fungi and protozoa loads at 0 hours were significantly (P<0.05) higher than at 4 hours post-feeding. The use of untreated ground groundnut shells in the diet of fattening Yankasa rams is therefore recommended.
Keywords: blood metabolites, microbial load, volatile fatty acid, ammonia, total protein
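A minimal sketch of the kind of one-way comparison a completely randomized design implies for the four diets, using SciPy; the NH₃-N values below are illustrative placeholders (n = 4 rams per treatment), not the study's data, and the actual analysis may have used a full ANOVA with mean separation.

```python
# Hedged sketch: one-way test of a diet effect under a CRD layout.
from scipy import stats

nh3n = {  # hypothetical NH3-N (mg/L) per ram, by diet
    "UNTGNS": [24.1, 25.0, 23.7, 24.6],
    "UTGNS": [27.2, 26.8, 28.1, 27.5],
    "LTGNS": [26.0, 25.4, 26.9, 25.8],
    "ULTGNS": [31.5, 32.3, 31.8, 32.2],
}

f_stat, p_value = stats.f_oneway(*nh3n.values())
print(f"F = {f_stat:.2f}, P = {p_value:.4f}")  # P < 0.05 indicates a treatment effect
```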
Procedia PDF Downloads 67