Search results for: ground vibration
252 The Residential Subdivision under the Influence of the Unfinished Densification, Case Study for Subdivisions in Setif, Algeria
Authors: Lacheheb Dhia Eddine Zakaria, Ballout Amor
Abstract:
Today it is necessary to be thrifty with the planet: space is a rare, non-renewable resource, and urban sprawl carries ecological, economic and social costs. It is commonly asserted that promoting a more compact, denser city has a positive effect on public investment and operating costs and on costs for citizens and users of the city. Modes of urban development management clearly have to evolve profoundly, in particular towards a densification that raises urban quality through an ideal urban density at the scale of the individual housing estate. The lot, as an individual housing estate, was adopted as an alternative development model to collective housing. Conceived in an anthropocentric perspective, it can emerge as a quality model in which density plays an important role, provided the operations are globally coherent, optimally organized, and respect construction deadlines and the finalization of works. The image of the eternal construction site inflicted on our cities explains the renewed interest in applying the regulatory framework and completing these limited operations, which lack global coherence and, in our case, amount to land cut into plots that are sold and then built on independently without being finished. It also supports the relevance of the essential question of improving the outward appearance, which can prove as important a factor for a better use and acceptance of one's housing environment as the ratio of houses per plot or of square metres per house.
To demonstrate the impact of the degree of completion of the subdivision's dwellings, road system and urban public utilities on density, densification and therefore urban quality, we studied two residential subdivisions, the private subdivision Sellam and the subdivision El Imane, which share a common situation but differ in land surface, density, plot cutting, occupying social classes, needs and average household size. The approach of this work is based on typo-morphological analysis, to reveal the differences in the degrees of completion of the subdivisions' built environment, and on investigation through a household survey, to demonstrate the importance of the degree of completion and to reveal the conditions of qualitative densification favourable and convenient to a better appropriation of the subdivision.

Keywords: subdivision, degree of completion, densification, urban quality
Procedia PDF Downloads 372

251 Design Charts for Strip Footing on Untreated and Cement Treated Sand Mat over Underlying Natural Soft Clay
Authors: Sharifullah Ahmed, Sarwar Jahan Md. Yasin
Abstract:
Shallow foundations on unimproved soft natural soils can undergo high consolidation and secondary settlement. For low- and medium-rise building projects on such soils, pile foundations may not be cost effective. In such cases, an alternative to pile foundations may be shallow strip footings placed on a double-layered improved soil system, in which the upper layer is untreated or cement-treated compacted sand and the underlying layer is natural soft clay. This system reduces the settlement to an allowable limit. The current research studies the settlement of a rigid plane-strain strip footing of 2.5 m width placed on the surface of a soil consisting of an untreated or cement-treated sand layer overlying a bed of homogeneous soft clay. The settlement of this shallow foundation has been studied for sand-layer thicknesses of 0.3 to 0.9 times the footing width. The response of the clay layer is assumed undrained for plastic loading stages and drained during consolidation stages; the response of the sand layer is drained during all loading stages. FEM analysis was done using PLAXIS 2D Version 8.0. A natural clay deposit of 15 m thickness and 18 m width has been modelled using the Hardening Soil Model, Soft Soil Model and Soft Soil Creep Model, while the upper improvement layer has been modelled using only the Hardening Soil Model. The groundwater level is at the top of the clay deposit, so the system is fully saturated. A parametric study has been conducted to determine the effect of the thickness, density and cementation of the sand mat and of the density and shear strength of the soft clay layer on the settlement of the strip foundation under uniformly distributed vertical loads of varying value.
A set of charts has been established for designing shallow strip footings on a sand mat over a thick, soft clay deposit, by obtaining the particular thickness of sand mat for particular subsoil parameters that ensures no punching shear failure and no settlement beyond the allowable level. Design guidelines in the form of non-dimensional charts have been developed for footing pressures equivalent to medium-rise residential or commercial building foundations with strip footings on soft inorganic normally consolidated (NC) soil of Bangladesh having void ratios from 1.0 to 1.45.

Keywords: design charts, ground improvement, PLAXIS 2D, primary and secondary settlement, sand mat, soft clay
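Using such a chart amounts to inverting a settlement-versus-thickness curve for the allowable settlement. A minimal sketch of that lookup, in which the chart values (`h_over_B`, `settlement_mm`) and the allowable settlement are invented for illustration and are not the published design-chart values:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation of y at x from sorted (xs, ys) pairs."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Hypothetical chart curve: settlement (mm) of a 2.5 m strip footing versus
# sand-mat thickness expressed as a ratio of footing width B. Values are
# invented for illustration only; real values must come from the charts.
h_over_B = [0.3, 0.5, 0.7, 0.9]
settlement_mm = [95.0, 72.0, 58.0, 50.0]

required = 60.0  # allowable settlement, mm (assumed)
# Invert the chart: smallest tabulated thickness ratio whose interpolated
# settlement does not exceed the allowable value (coarse scan for clarity).
thickness = next(h / 100 for h in range(30, 91)
                 if interp(h / 100, h_over_B, settlement_mm) <= required)
print(thickness)
```

In practice one chart exists per subsoil parameter set (cementation, clay shear strength), and the designer reads the minimum mat thickness from the curve matching the site conditions.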
Procedia PDF Downloads 123

250 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology
Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James
Abstract:
Landslides are the major geo-environmental problem of the Himalaya because of its high ridges, steep slopes, deep valleys and complex stream systems. They are mainly triggered by rainfall and earthquakes and cause severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, situated in the lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8 imagery and Cartosat DEM data. This paper presents the use of a weighted overlay method in LHZ using fourteen causative factors. The data layers generated and co-registered were slope, aspect, relative relief, soil cover, rainfall intensity, seismic ground shaking, seismic amplification at surface level, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer and reservoir buffer. Seismic analysis is performed using peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques, such as topographic correction, NDVI and supervised classification, were used for terrain factor extraction. Lithological features, LULC, drainage pattern, lineaments and structural features are extracted using digital image processing, while colour, tone, topography and stream drainage pattern from the imageries are used to analyse geological features. The slope, aspect and relative relief maps are created from Cartosat DEM data, which are also used for the detailed drainage analysis, including TWI, SPI, drainage buffer and reservoir buffer. In the weighted overlay method, the comparative importance of the causative factors is obtained from experience.
In this method, the influence factor is multiplied by the corresponding rating of a particular class, the result is reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from remote sensing images, a landslide vulnerability study for the study area is carried out and presented in this paper.

Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing
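For a single raster cell, the weighted overlay step described above can be sketched as follows; the factor names, weights, class ratings and reclassification breaks are illustrative placeholders, not the study's values:

```python
# Each causative-factor layer contributes a class rating at this cell; each
# factor carries a weight from expert judgement. The landslide hazard index
# (LHI) is the weighted sum, then reclassified into hazard zones.
weights = {"slope": 0.20, "rainfall": 0.15, "lithology": 0.15,
           "ndvi": 0.10, "seismic_pha": 0.20, "twi": 0.10, "lulc": 0.10}
ratings = {"slope": 9, "rainfall": 7, "lithology": 6,
           "ndvi": 4, "seismic_pha": 8, "twi": 5, "lulc": 6}

lhi = sum(weights[f] * ratings[f] for f in weights)

def reclassify(lhi, breaks=(3.0, 5.0, 7.0)):
    """Map a continuous LHI value onto discrete hazard zones."""
    zones = ["low", "moderate", "high", "very high"]
    return zones[sum(lhi > b for b in breaks)]

print(round(lhi, 2), reclassify(lhi))
```

In a GIS this computation runs per cell over the co-registered rasters; the reclassified output is the LHZ map.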
Procedia PDF Downloads 133

249 Pelvic Floor Training in Elite Athletes: Fact or Fiction
Authors: Maria Barbano Acevedo-Gomez, Elena Sonsoles Rodriguez-Lopez, Sofia Olivia Calvo-Moreno, Angel Basas-Garcia, Cristophe Ramirez
Abstract:
Introduction: Urinary incontinence (UI) is defined as the involuntary leakage of urine. Among persons who practice sport, its prevalence is 36.1% (95% CI 26.5%-46.8%) and varies, as it seems to depend on the intensity of exercise, the movements involved, and the impact on the ground. High-impact sports are likely to generate higher intra-abdominal pressures, leading to pelvic floor muscle weakness. Even though the emphasis of this research is on female athletes, all women should perform pelvic floor muscle exercises as part of their general physical exercise; such exercises are generally considered the first-line treatment against urinary incontinence. Objective: The main objective of the present study was to determine elite athletes' knowledge of the pelvic floor and of UI, and whether they incorporate pelvic floor strengthening in their training. Methods: This was an observational study conducted on 754 elite athletes. After answering questions about the pelvic floor, UI, and sport-related data, participants completed the International Consultation on Incontinence Questionnaire-UI Short Form (ICIQ-SF). Results: 57.3% of the athletes reported having no knowledge of their pelvic floor, 48.3% did not know what strengthening exercises are, and around 90% had never practiced them. 78.1% (n=589) of all elite athletes do not include pelvic floor exercises in their training. Of the elite athletes surveyed, 33% had UI according to the ICIQ-SF (mean age 23.75 ± 7.74 years). In response to the question 'Do you think you have or have had UI?', only 9% of the 754 elite athletes admitted they presently had UI, and 13.3% indicated they had had UI at some time. However, 22.7% (n=171) reported they had experienced urine leakage while training. Of the athletes who indicated in the ICIQ-SF that they did not have UI, 25.7% stated they did experience urine leakage during training (χ² [1] = 265.56; p < 0.001).
Further, 12.3% of the athletes who considered they did not have UI, and 60% of those who admitted they had had UI on some occasion, stated they had suffered some urine leakage in the past 3 months (χ² [1] = 287.59; p < 0.001). Conclusions: There is a lack of knowledge about UI in sport. Through the use of validated questionnaires, we observed a UI prevalence of 33%, and 22.7% reported urine leakage while training. These figures contrast with the mere 9% of athletes who reported that they had, or had in the past had, UI. This discrepancy could reflect the great lack of knowledge about UI in sports: an athlete may consider urine leakage normal, a consequence of the demands of training. These data support the idea that coaches, physiotherapists, and other professionals involved in maximizing the performance of athletes should include pelvic floor muscle exercises in their training programs. Such measures could help prevent UI during training and could be a starting point for future studies designed to develop adequate prevention and treatment strategies for this embarrassing problem affecting young athletes, both male and female.

Keywords: athletes, pelvic floor, performance, prevalence, sport, training, urinary incontinence
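The χ² statistics quoted above compare self-reported UI status against observed leakage in 2×2 contingency tables. A minimal sketch of the Pearson chi-square computation, using hypothetical counts rather than the study's raw data:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts (NOT the study's raw data): rows = self-reported UI
# yes/no, columns = leakage during training yes/no, totalling 754 athletes.
stat = chi2_2x2(60, 8, 111, 575)
print(round(stat, 1))
```

With 1 degree of freedom, a statistic this large corresponds to p < 0.001, the kind of association the abstract reports between self-report and observed leakage.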
Procedia PDF Downloads 126

248 Thermal Characterisation of Multi-Coated Lightweight Brake Rotors for Passenger Cars
Authors: Ankit Khurana
Abstract:
Sufficient heat storage capacity, or the ability to dissipate heat, is the most decisive parameter for the effective and efficient functioning of friction-based brake disc systems. The primary aim of the research was to analyse the effect of multiple coatings on the surfaces of lightweight disc rotors, which not only reduce vehicle mass but also augment heat transfer. This research is intended to give the automotive community a clear view of the thermal aspects of a braking system. The results of the project indicate that, with modern coating technologies, a brake system's thermal limitations can be removed; together with forced convection, heat transfer processes can improve drastically, increasing the lifetime of the brake rotor. Further advantages of modifying the surface of a lightweight rotor substrate are reduced overall vehicle weight, decreased risk of thermal brake failure (brake fade and fluid vaporization), longer component life, and lower noise and vibration. A mathematical model was constructed in MATLAB encompassing the thermal characteristics of the proposed coatings and substrate materials, to approximate heat flux values in free and forced convection environments resembling a real braking event; this could then be carried into a full-scale model of the alloy brake rotor part in ABAQUS. The finite element model of the brake rotor was constrained such that the nodal temperatures between the contact surfaces of the coatings and the substrate (wrought aluminium alloy) behave as an amalgamated solid rotor element. The initial results were obtained for a plasma electrolytic oxidized (PEO) substrate, in which the aluminium alloy grows a hard ceramic oxide layer on its transitional phase.
The rotor was modelled and evaluated for a constant-deceleration braking event (based on the mathematical heat flux input and convective surroundings), which showed the need to deposit a sacrificial conducting coat above the PEO layer to inhibit premature thermal degradation of the barrier coating. A Taguchi study was then used to identify the critical factors influencing the maximum operating temperature of a multi-coated brake disc by simulating brake tests: a) an Alpine descent lasting 50 seconds; b) an Autobahn stop lasting 3.53 seconds; c) six repeated high-speed stops in accordance with FMVSS 135, lasting 46.25 seconds. Thermal barrier coating thickness and vane heat transfer coefficient were the two most influential factors, and, within their design and manufacturing constraints, a final optimized model was obtained that survived the six high-speed stop test per the FMVSS 135 specifications. The simulation data highlighted the merits of preferring wrought aluminium alloy 7068 over grey cast iron and aluminium metal matrix composite for the multiple coating depositions.

Keywords: lightweight brakes, surface modification, simulated braking, PEO, aluminum
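The heat flux input driving such a constant-deceleration simulation is commonly estimated from the vehicle's kinetic energy. A back-of-envelope sketch in which all vehicle and rotor values are assumptions for illustration; only the 3.53 s stop duration is taken from the abstract:

```python
# Back-of-envelope heat-flux input for a constant-deceleration stop, the kind
# of boundary condition fed to an FE rotor model. All values are illustrative
# assumptions, not the study's parameters.
m_vehicle = 1500.0   # kg, total vehicle mass (assumed)
v0 = 27.8            # m/s (~100 km/h), initial speed (assumed)
t_stop = 3.53        # s, Autobahn stop duration quoted in the abstract
split_front = 0.7    # fraction of braking energy on the front axle (assumed)
a_swept = 0.04       # m^2, swept rubbing area of one rotor face pair (assumed)

e_kinetic = 0.5 * m_vehicle * v0 ** 2          # J dissipated over the stop
e_front_rotor = e_kinetic * split_front / 2    # J into one front rotor
q_avg = e_front_rotor / (t_stop * a_swept)     # W/m^2 average heat flux
print(round(q_avg / 1e3))  # kW/m^2
```

Average fluxes on the order of MW/m² explain why barrier-coating thickness and vane convection dominate the Taguchi ranking: they control how much of this input reaches, and leaves, the substrate.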
Procedia PDF Downloads 408

247 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), a very important step is to analyze the vulnerability (or survivability) of the AGCV against enemy attack. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or crew. The penetration equations are derived from penetration experiments, which require long time and great effort, and they usually hold only for the specific target material and bullet type used in the experiments. Penetration simulation using ANSYS can therefore be another option to calculate penetration depth; however, the targets must be modelled and the input parameters selected carefully to obtain an accurate result. This paper performed a sensitivity analysis of ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be balanced in adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize accuracy, a sensitivity analysis of the input parameters was performed and the RMS error against experimental data was calculated. The input parameters, which include mesh size, boundary condition, material properties and target diameter, were tested and selected to minimize the error between the simulated results and the experimental data reported in papers on the penetration equations. To minimize calculation time, the parameter values obtained from the accuracy analysis were adjusted for optimized overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase.
2) As the target diameter decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress of the target material decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives more penetration depth than that with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment. With the simulation tool ANSYS and delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. Penetration experiment data are usually hard to get for security reasons, and published papers provide them only for a limited set of target materials. The next step of this research is to generalize this approach to anticipate the penetration depth by interpolating the known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the modelling and simulation stage early in the AGCV design process.

Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
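The accuracy metric used to score each parameter setting above is the RMS error between simulated and experimental penetration depths; a minimal sketch, with invented depth values standing in for the (restricted) experimental data:

```python
import math

# RMS error between simulated penetration depths and published experimental
# data for the same shots. Depth values (mm) are invented for illustration.
simulated  = [12.1, 18.4, 25.0, 31.7]
experiment = [11.5, 19.0, 24.2, 33.0]

rmse = math.sqrt(sum((s - e) ** 2 for s, e in zip(simulated, experiment))
                 / len(simulated))
print(round(rmse, 3))
```

Each candidate combination of mesh size, boundary condition, material properties and target diameter gets one such score; the combination with the lowest RMSE that still meets the time budget is retained.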
Procedia PDF Downloads 401

246 Specification of Requirements to Ensure Proper Implementation of Security Policies in Cloud-Based Multi-Tenant Systems
Authors: Rebecca Zahra, Joseph G. Vella, Ernest Cachia
Abstract:
The notion of cloud computing is rapidly gaining ground in the IT industry, appealing mostly because it makes computing more adaptable and expedient whilst diminishing the total cost of ownership. This paper focuses on the software as a service (SaaS) architecture of cloud computing, which is used for the outsourcing of databases with their associated business processes. One approach to offering SaaS is basing the system's architecture on multi-tenancy. Multi-tenancy allows multiple tenants (users) to make use of the same single application instance; their requests and configurations may then differ according to specific requirements met through tenant customisation of the software. Despite the known advantages, companies still feel uneasy about opting for multi-tenancy, with data security being a principal concern. The fact that multiple tenants, possibly competitors, would have their data located on the same server process and share the same database tables heightens the fear of unauthorised access. Security is a vital aspect which needs to be considered by application developers, database administrators, data owners and end users. This is further complicated in cloud-based multi-tenant systems, where boundaries must be established between tenants and additional access control models must be in place to prevent unauthorised cross-tenant access to data. Moreover, when altering the database state, transactions need to adhere strictly to the tenant's known business processes. This paper argues that security in cloud databases should not be considered an isolated issue; rather, it should be included in the initial phases of database design and monitored continuously throughout the whole development process. This paper aims to identify a number of the most common security risks and threats specifically in the area of multi-tenant cloud systems, and surveys issues and bottlenecks relating to security risks in cloud databases.
Some techniques which might be utilised to overcome them are then listed and evaluated. After a description and evaluation of the main security threats, this paper produces a list of software requirements to ensure that proper security policies are implemented by a software development team when designing and implementing a multi-tenant SaaS. This would assist cloud service providers to define, implement and manage security policies according to tenant customisation requirements whilst assuring the security of the customers' data.

Keywords: cloud computing, data management, multi-tenancy, requirements, security
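One requirement such a list would plausibly contain, that every data access in a shared-table multi-tenant system be scoped to the calling tenant, can be sketched as follows; the schema and accessor API are illustrative assumptions, not taken from the paper:

```python
import sqlite3

# Shared-table multi-tenancy: one table holds rows of all tenants, keyed by a
# tenant_id column. A thin data-access layer appends the tenant filter itself,
# so callers can never issue an unscoped (cross-tenant) read.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (tenant_id TEXT NOT NULL, item TEXT)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("acme", "bolts"), ("acme", "nuts"), ("globex", "gears")])

def fetch_orders(conn, tenant_id):
    """All reads go through this accessor: the tenant filter is added by the
    layer, not by the caller, so cross-tenant rows cannot leak."""
    if not tenant_id:
        raise PermissionError("tenant context required")
    rows = conn.execute("SELECT item FROM orders WHERE tenant_id = ?",
                        (tenant_id,))
    return [item for (item,) in rows]

print(fetch_orders(db, "acme"))  # only acme's rows are visible
```

Real deployments push this further with database-enforced mechanisms (views or row-level security) so the guarantee does not depend on application code alone.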
Procedia PDF Downloads 156

245 Topographic Coast Monitoring Using UAV Photogrammetry: A Case Study in Port of Veracruz Expansion Project
Authors: Francisco Liaño-Carrera, Jorge Enrique Baños-Illana, Arturo Gómez-Barrero, José Isaac Ramírez-Macías, Erik Omar Paredes-Juárez, David Salas-Monreal, Mayra Lorena Riveron-Enzastiga
Abstract:
Topographic changes in coastal areas are usually assessed with airborne LIDAR and conventional photogrammetry. In recent times, unmanned aerial vehicles (UAVs) have been used in several photogrammetric applications, including coastline evolution; their use goes further still, as the associated point cloud can be used to generate beach digital elevation models (DEMs). We present a methodology for monitoring topographic coastal changes along a 50 km coastline in Veracruz, Mexico, using high-resolution images (less than 10 cm ground resolution) and dense point clouds captured with a UAV. This monitoring is conducted in the context of the Port of Veracruz expansion project, whose construction began in 2015, and intends to characterize coastal evolution and to prevent and mitigate project impacts on coastal environments. The monitoring began with a historical coastline reconstruction from 1979 to 2015 using aerial photography and Landsat imagery, from which we could define some patterns: the northern part of the study area showed accretion, while the southern part showed erosion. The study area lies off the Port of Veracruz, a touristic and economically important Mexican urban city where coastal development structures have been built continuously since 1979, and the local beaches of the touristic area are constantly refilled. Those areas were not described as accretion, since sand-filled trucks refill the beaches in front of the hotel area every month. The marinas and the commercial Port of Veracruz, both the old port and the new expansion, were built in the erosional part of the area. Northward from the City of Veracruz the beaches were described as accretion areas, while southward from the city they were described as erosion areas. One problem is the expansion of new development in the southern area of the city, using the beach view as an incentive to buy beachfront houses.
We assessed coastal changes between seasons using high-resolution images and point clouds during 2016, and preliminary results confirm that UAVs can be used in permanent coast monitoring programs with excellent performance and detail.

Keywords: digital elevation model, high-resolution images, topographic coast monitoring, unmanned aerial vehicle
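The season-to-season comparison described above reduces to cell-by-cell differencing of two co-registered beach DEMs, from which accretion and erosion volumes follow. A minimal sketch with invented elevations:

```python
# Two co-registered 3x3 beach DEMs (elevations in m, 1 m x 1 m cells),
# invented for illustration. Subtracting them gives an elevation-change map;
# summing positive and negative cells gives accretion and erosion volumes.
dem_winter = [[2.0, 2.1, 1.9],
              [1.5, 1.6, 1.4],
              [0.8, 0.9, 0.7]]
dem_summer = [[2.2, 2.0, 1.9],
              [1.7, 1.5, 1.4],
              [1.0, 0.8, 0.7]]
cell_area = 1.0  # m^2

change = [[s - w for s, w in zip(rs, rw)]
          for rs, rw in zip(dem_summer, dem_winter)]
accretion = sum(c * cell_area for row in change for c in row if c > 0)  # m^3
erosion = sum(-c * cell_area for row in change for c in row if c < 0)   # m^3
print(round(accretion, 2), round(erosion, 2))
```

With UAV surveys the grids have centimetre-scale cells and must first be aligned to a common datum; the arithmetic, however, is exactly this.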
Procedia PDF Downloads 270

244 Tourism Related Activities and Floating Garden in Inle Lake, Myanmar
Authors: Thel Phyu Phyu Soe
Abstract:
Myanmar started its new political movement in 2011, opening up to trade, encouraging foreign investment and deepening its financial sectors. Tourism is one of the key sectors of the reform process from the perspective of green economy and green growth. Inle Lake, the country's second largest lake, famous for a broad diversity of cultural and natural assets, has become one of the country's main tourism destinations. In the study area, local livelihoods are based on a combination of farming (mainly floating gardens), wage labor, tourism and small business. However, the Inle Lake water surface area decreased from 67.98 km² in 1990 to 56.63 km² in 2010, a loss of 11.35 km² within 20 years. Floating garden cultivation (hydroponic farming) is a distinguishing characteristic of Inle Lake. Two adjacent villages (A and B) were selected to compare the relationship between tourism access and agricultural production; ground truthing, focus group discussions and in-depth questionnaires with floating gardeners were carried out. In village A, 57% of the respondents relied on tourism as their major income source, while almost all households in village B relied on floating gardens as their major livelihood. Both satellite image interpretation and community studies highlighted that around 80% of the floating gardens became fallow after the severe drought in 2010 and the easy access to income from tourism-related activities; villagers can earn 20-30 US$ for a round trip guiding tourists to the major attractions. Even though tourism is the major livelihood option for village A, the poorest households (earning less than 1500 US$ per year) are those who do not own transport assets for tourism-related activities. In village B, more than 70% of households relied on floating gardens as their major income source and participated less in tourism-related activities, because they have no motorboat stand connected to the major tourist attraction areas.
Access to tourism-related activities (having a boat stand from which they can guide tourists by boat and sell local products and souvenirs) has strongly influenced changes in local livelihood options. However, tourism may have impacts that are beneficial for one group of a society but negative for another; income inequality and negative impacts can only be managed effectively if they have been identified, measured and evaluated. The severe drought in 2010, the instability of the lake water level, and the high expenses of agriculture pushed local people towards easily accessible tourism-related activities.

Keywords: diminishing, floating garden, livelihood, tourism-related income
Procedia PDF Downloads 129

243 Investigations on Pyrolysis Model for Radiatively Dominant Diesel Pool Fire Using Fire Dynamic Simulator
Authors: Siva K. Bathina, Sudheer Siddapureddy
Abstract:
Pool fires form when a flammable liquid accidentally spills on the ground or water and ignites; a pool fire is a buoyancy-driven diffusion flame. Many pool fire accidents have occurred during the processing, handling and storing of liquid fuels in chemical and oil industries, causing enormous damage to property as well as loss of lives. Pool fires are complex in nature due to the strong interaction among combustion, heat and mass transfer, and pyrolysis at the fuel surface. Moreover, the experimental study of such large complex fires involves fire safety issues and practical difficulties. In the present work, large eddy simulations of such fire scenarios are performed using the fire dynamic simulator. A 1 m diesel pool fire is considered, diesel being the fuel most commonly involved in fire accidents. Fire simulations are performed with two different boundary conditions: in one, the fuel is in the liquid state and a pyrolysis model is invoked; in the other, the fuel is assumed to be initially in the vapor state and the mass loss rate is prescribed. A domain of size 11.2 m × 11.2 m × 7.28 m with a uniform structured grid is chosen for the numerical simulations. A grid sensitivity analysis is performed, and a non-dimensional grid size of 12, corresponding to an 8 cm grid, is adopted. Flame properties such as mass burning rate, irradiance, and the time-averaged axial flame temperature profile are predicted. The predicted steady-state mass burning rate is 40 g/s, within the uncertainty limits of previously reported experimental data (39.4 g/s). The profile of irradiance with height at a distance from the fire is broadly in line with the experimental data, although the location of maximum irradiance is shifted upward.
This may be due to the lack of sophisticated models for species transport, combustion and radiation in the continuous zone. Furthermore, the axial temperatures are not predicted well, for either boundary condition, in any of the zones. The present study shows that the existing models are not sufficient for modeling blended fuels like diesel: the predictions depend strongly on the experimental values of the soot yield. Future experiments are necessary for generalizing the soot yield for different fires.

Keywords: burning rate, fire accidents, fire dynamic simulator, pyrolysis
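The "non-dimensional grid size" quoted above is the standard LES resolution metric for fire simulations: the ratio of the characteristic fire diameter D* to the cell size. A sketch using the reported burning rate, with the heat of combustion of diesel and the ambient properties assumed (with these particular assumptions the ratio comes out near 15 rather than the reported 12, the gap tracking the assumed heat release rate):

```python
# Characteristic fire diameter D* = (Q / (rho * cp * T_inf * sqrt(g)))^(2/5)
# and the non-dimensional grid resolution D*/dx. Heat release rate Q is
# estimated from the reported burning rate and an ASSUMED heat of combustion.
m_dot = 0.040        # kg/s, steady mass burning rate from the abstract
dh_c = 44.4e3        # kJ/kg, assumed heat of combustion of diesel
q_fire = m_dot * dh_c                            # kW total heat release rate

rho, cp, t_inf, g = 1.204, 1.005, 293.0, 9.81    # ambient air (assumed)
d_star = (q_fire / (rho * cp * t_inf * g ** 0.5)) ** 0.4   # m
dx = 0.08            # m, grid size adopted in the study
print(round(d_star, 2), round(d_star / dx, 1))
```

Ratios of roughly 10-20 are generally considered adequate resolution for the plume, which is consistent with the grid choice reported above.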
Procedia PDF Downloads 196

242 System Analysis on Compact Heat Storage in the Built Environment
Authors: Wilko Planje, Remco Pollé, Frank van Buuren
Abstract:
An increased share of renewable energy sources in the built environment implies the usage of energy buffers to match supply and demand and to prevent overloads of existing grids. Compact heat storage systems based on thermochemical materials (TCM) are promising candidates for future installations as an alternative to regular thermal buffers, owing to their high energy density (1-2 GJ/m³). In order to determine the feasibility of TCM-based systems at building level, several installation configurations are simulated and analyzed for different mixes of renewable energy sources (solar thermal, PV, wind, underground, air) for apartments/multi-storey buildings in the Dutch situation, and capacity, volume and financial costs are calculated. The simulation includes options for current and future wind power (offshore and onshore) and for local roof-mounted PV or solar-thermal systems. The compact thermal buffer and, optionally, an electric battery (typically 10 kWhe) form the local storage elements for energy matching and shaving purposes. In addition, electrically driven heat pumps (air or ground source) can be included for efficient heat generation in power-to-heat mode. The total local installation provides space heating, domestic hot water and electricity for a specific case of low-energy apartments (annually 9 GJth + 8 GJe) in the year 2025; the energy balance is completed with grid-supplied non-renewable electricity. Taking into account the grid capacity (a permanent 1 kWe per household), the spatial requirements for the thermal buffer (< 2.5 m³ per household) and a desired minimum share of 90% renewable energy per household in total consumption, the wind-powered scenario results in acceptable sizes of compact thermal buffers, with an energy capacity of 4-5 GJth per household. This buffer is combined with a 10 kWhe battery and an air source heat pump system.
Compact thermal buffers of less than 1 GJ (typically 0.5-1 m³) become possible when the installed wind power is increased five-fold; at fifteen-fold the installed wind power, compact heat storage devices compete with 1000 L water buffers. The conclusion is that compact heat storage systems can be of interest in the coming decades in combination with well-retrofitted low-energy residences, given the current trends in installed renewable energy capacity.

Keywords: compact thermal storage, thermochemical material, built environment, renewable energy
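The buffer volumes above follow directly from storage capacity and TCM energy density; a one-line sizing sketch using the ranges quoted in the abstract:

```python
# Required buffer volume = storage capacity / TCM energy density.
# Both figures are taken from the ranges quoted in the abstract.
capacity_gj = 4.5          # GJ_th per household (wind-powered scenario)
density_gj_m3 = 2.0        # GJ/m^3, upper end of the quoted TCM range

volume_m3 = capacity_gj / density_gj_m3
print(volume_m3, volume_m3 < 2.5)  # does it fit the stated spatial limit?
```

At the lower end of the density range (1 GJ/m³) the same capacity would need 4.5 m³, which is why the material's achievable energy density is decisive for meeting the < 2.5 m³ per-household constraint.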
Procedia PDF Downloads 244

241 Improved Traveling Wave Method Based Fault Location Algorithm for Multi-Terminal Transmission System of Wind Farm with Grounding Transformer
Authors: Ke Zhang, Yongli Zhu
Abstract:
Due to rapid load growth in today's highly electrified societies and the requirement for green energy sources, large-scale wind farm power transmission systems are constantly developing. Such a system is a typical multi-terminal power supply system with a complex transmission network topology. Moreover, it is located in the complex terrain of mountains and grasslands, which increases the likelihood of transmission line faults, makes fault location difficult, and leads to an extremely serious wind curtailment phenomenon. To solve these problems, a fault location method for multi-terminal transmission lines is proposed, based on wind farm characteristics and an improved single-ended traveling wave positioning method. By studying the zero-sequence current characteristics arising from the grounding transformer (GT) used in existing large-scale wind farms, a criterion for judging the fault interval of the multi-terminal transmission line is obtained: when a ground short-circuit fault occurs, zero-sequence current flows only on the path between the GT and the fault point, so the interval containing the fault point is obtained by determining the path of the zero-sequence current. After determining the fault interval, the location of the short-circuit fault point is calculated by the traveling wave method. This article uses an improved traveling wave method that achieves higher positioning accuracy by combining the single-ended traveling wave method with double-ended electrical data. Furthermore, a method of calculating the traveling wave velocity is deduced from these improvements, yielding, in theory, the actual wave velocity; this further improves the positioning accuracy.
Compared with the traditional positioning method, the average positioning error of this method is reduced by 30%. The method overcomes the traditional method's poor fault location performance on wind farm transmission lines. In addition, it calculates the traveling wave velocity more accurately than the traditional fixed-velocity approach: the wave velocity is computed in real time from the measured data, solving the problem that a fixed velocity cannot be updated with changing environmental conditions. The method is verified in PSCAD/EMTDC. Keywords: grounding transformer, multi-terminal transmission line, short circuit fault location, traveling wave velocity, wind farm
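The double-ended refinement described above can be sketched as follows: for a fault at distance d from terminal A, the first wavefronts arrive at the two ends after t_a = d/v and t_b = (L - d)/v, so the actual wave velocity follows from the line length and both arrival times, removing the fixed-velocity assumption. The function name, units, and example numbers below are illustrative, not taken from the paper.

```python
def locate_fault(line_length_km, t_a_us, t_b_us):
    """Double-ended traveling wave fault location.

    Since t_a = d/v and t_b = (L - d)/v, the actual wave velocity is
    recovered as v = L / (t_a + t_b) instead of assuming a nominal value,
    and the fault distance from terminal A is d = v * t_a.
    """
    v = line_length_km / (t_a_us + t_b_us)  # km per microsecond
    d = v * t_a_us                          # distance from terminal A, km
    return d, v

# Illustrative case: 100 km line, fault 30 km from A, true velocity 0.29 km/us
t_a, t_b = 30.0 / 0.29, 70.0 / 0.29
distance, velocity = locate_fault(100.0, t_a, t_b)
```

Because the velocity drops out of the measured data itself, seasonal or weather-driven changes in propagation speed no longer bias the distance estimate.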
Procedia PDF Downloads 263
240 Changes in Heavy Metals Bioavailability in Manure-Derived Digestates and Subsequent Hydrochars to Be Used as Soil Amendments
Authors: Hellen L. De Castro e Silva, Ana A. Robles Aguilar, Erik Meers
Abstract:
Digestates are residual by-products, rich in nutrients and trace elements, which can be used as organic fertilisers on soils. However, because these elements are not digested and dry matter is reduced during anaerobic digestion, metal concentrations are higher in digestates than in the feedstocks, which might hamper their use as fertilisers under the threshold values set by some national policies. Furthermore, there is uncertainty about how much of these elements some crops assimilate, which might result in bioaccumulation. Therefore, further processing of the digestate to obtain safe fertilising products has been recommended. This research analyses the effect of hydrothermal carbonization, applied as a thermal treatment to manure-derived digestates, on the bioavailability of heavy metals in mono- and co-digestates derived from pig manure and maize grown on contaminated land in France. The pig manure was collected from a novel stable system (VeDoWs, province of East Flanders, Belgium) that separates the collection of pig urine and feces, resulting in a solid manure fraction with a high up-concentration of heavy metals and nutrients. Mono-digestion and co-digestion were conducted in semi-continuous reactors for 45 days under mesophilic conditions, and the digestates were dried at 105 °C for 24 hours. Hydrothermal carbonization was then applied at a 1:10 solid/water ratio, to guarantee controlled experimental conditions, at different temperatures (180, 200, and 220 °C) and residence times (2 h and 4 h). During the process the pressure was generated autogenously, and the reactor was cooled down after completing each treatment. The solid and liquid phases were separated by vacuum filtration, and the solid phase of each treatment, the hydrochar, was dried and ground for chemical characterization.
Different fractions (exchangeable/adsorbed fraction F1, carbonate-bound fraction F2, organic matter-bound fraction F3, and residual fraction F4) of the studied heavy metals (including Cd, Cr, and Ni) were determined in the digestates and derived hydrochars using the modified Community Bureau of Reference (BCR) sequential extraction procedure. The main results indicated a difference in heavy metal fractionation between the digestates and their derived hydrochars; however, the hydrothermal carbonization operating conditions did not have a remarkable effect on heavy metal partitioning among the hydrochars of the proposed treatments. Based on the estimated potential ecological risk assessment, the risk level decreased by one class (from considerable to moderate) when comparing the heavy metal partitioning in digestates with that in the derived hydrochars. Keywords: heavy metals, bioavailability, hydrothermal treatment, bio-based fertilisers, agriculture
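Potential ecological risk assessments of this kind are commonly computed with Hakanson-style risk factors. A minimal sketch follows; the toxic-response factors and class thresholds are the widely used Hakanson values, and the example concentrations are invented for illustration only, not data from the study.

```python
# Hakanson toxic-response factors for metals studied here (assumed values)
TOXIC_RESPONSE = {"Cd": 30, "Cr": 2, "Ni": 5}

def risk_factor(metal, measured_mg_kg, background_mg_kg):
    """Single-metal potential ecological risk: Er = Tr * (C / C_background)."""
    contamination = measured_mg_kg / background_mg_kg
    return TOXIC_RESPONSE[metal] * contamination

def risk_class(er):
    """Hakanson single-metal risk classes."""
    if er < 40:
        return "low"
    if er < 80:
        return "moderate"
    if er < 160:
        return "considerable"
    if er < 320:
        return "high"
    return "very high"

# Hypothetical example: Cd at 1.2 mg/kg against a 0.3 mg/kg background
er_cd = risk_factor("Cd", 1.2, 0.3)  # 30 * 4 = 120 -> "considerable"
```

A drop from "considerable" to "moderate", as reported above, corresponds to Er falling below the 80 threshold for a given metal.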
Procedia PDF Downloads 100
239 Urinary Incontinence and Performance in Elite Athletes
Authors: María Barbaño Acevedo Gómez, Elena Sonsoles Rodríguez López, Sofía Olivia Calvo Moreno, Ángel Basas García, Christophe Ramírez Parenteau
Abstract:
Introduction: Urinary incontinence (UI) is defined as the involuntary leakage of urine. Among people who practice sport, its prevalence is 36.1% (95% CI 26.5%–46.8%) and varies with the intensity of exercise, the movements involved, and the impact on the ground. High-impact sports are likely to generate higher intra-abdominal pressures and can lead to pelvic floor muscle weakness. Although physical exercise reduces the risk of many diseases, the mentality of an elite athlete is not to optimize health; achieving their goals can put their health at risk. Furthermore, feeling or suffering some discomfort during training seems to be considered normal within the demands of elite sport. Objective: The main objective of the present study was to assess the effects of UI on sports performance in athletes. Methods: This was an observational study conducted in 754 elite athletes. After answering questions about the pelvic floor, UI, and sport-related data, participants completed the International Consultation on Incontinence Questionnaire-UI Short Form (ICIQ-SF) and the Incontinence Severity Index (ISI). Results: 48.8% of the athletes declared urine leakage at rest, in preseason, and/or in competition (χ2 [3] = 3.64; p = 0.302), with the competition period (29.1%) being the period in which leakage was most frequent. Of the elite athletes surveyed, 33% had UI according to the ICIQ-SF (mean age 23.75 ± 7.74 years). Elite athletes with UI (5.31 ± 1.07 days) dedicate significantly more days per week to training [M = 0.28; 95% CI = 0.08-0.48; t (752) = 2.78; p = 0.005] than those without UI. Regarding frequency, 59.7% lose urine once a week, 25.6% more than 3 times a week, and 14.7% daily. Regarding the amount, approximately 15% report losing a moderate to abundant quantity.
In athletes with the highest number of urine leaks during training, UI affects daily life more (r = 0.259; p = 0.001); they also present a greater number of losses in everyday life (r = 0.341; p < 0.001) and greater UI severity (r = 0.341; p < 0.001). Conclusions: Athletes consider that UI affects their daily routine negatively: 30.9% report a severity between moderate and severe in daily life, and 29.1% lose urine in the competition period. Notably, more than half of the sample were elite athletes competing at the highest level (Olympic Games, World and European Championships), for whom sport occupies a large part of life. The period in which athletes most frequently suffer urine leakage is competition, when they must manage many emotions to reach their best performance; adding urine losses at those moments may well affect that performance. Keywords: athletes, performance, prevalence, sport, training, urinary incontinence
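The correlations reported above (e.g., r = 0.341) are Pearson coefficients; a minimal pure-Python sketch of the computation follows, with invented example data standing in for the survey variables.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented example: weekly leak count vs. a severity score per athlete
leaks = [1, 2, 2, 4, 5, 7]
severity = [2, 3, 3, 5, 6, 9]
r = pearson_r(leaks, severity)
```

In practice a statistics package would also supply the p-values quoted in the abstract; the sketch only shows where the r values come from.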
Procedia PDF Downloads 131
238 Static Charge Control Plan for High-Density Electronics Centers
Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda
Abstract:
Ensuring a safe environment for sensitive electronics boards in places with severe size limitations poses two major difficulties: controlling charge accumulation in floating floors and preventing excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions to prevent them. An experiment was conducted in the control room of a Cherenkov Telescope, where six racks of 2x1x1 m size with independent cooling units are located. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate cable handling and maintenance. The tests were made by measuring the contact voltage acquired by a person walking along the room with footwear of different qualities. In addition, we measured the voltage accumulated on a person in other situations, such as running or sitting down on and standing up from an office chair. The voltages were recorded in real time with an electrostatic voltmeter and dedicated control software. Peak voltages as high as 5 kV were measured at ambient humidity above 30%, which is within the range of class 3A according to the HBM standard. To complete the results, we repeated the experiment in different spaces with alternative floor types, such as synthetic and earthenware floors, obtaining peak voltages much lower than those measured with the floating synthetic floor. The grounding quality achieved with floating floors can hardly match that typically encountered in standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent overheating of the boards probably contributed significantly to the charge accumulated in the room.
During the assessment of the quality of static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is that electrostatic voltmeters might provide different values depending on the humidity conditions and the quality of the ground resistance. In addition, the use of certified antistatic footwear might mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe this can be helpful not only for qualifying static charge control in a laboratory but also for assessing any procedure oriented toward minimizing the risk of electrostatic discharge events. Keywords: electrostatics, ESD protocols, HBM, static charge control
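The HBM classification used to interpret the measured peak voltages can be sketched as a simple threshold lookup. The limits below follow the commonly cited HBM withstand classes (e.g., as tabulated in ANSI/ESDA/JEDEC JS-001); treat them as an assumption, since standard revisions differ in the sub-class boundaries.

```python
# Upper voltage limit (V) for each HBM class; above the last limit -> 3B
HBM_CLASSES = [(250, "0"), (500, "1A"), (1000, "1B"),
               (2000, "1C"), (4000, "2"), (8000, "3A")]

def hbm_class(peak_voltage_v):
    """Classify a measured peak body voltage into an HBM withstand class."""
    for limit, cls in HBM_CLASSES:
        if peak_voltage_v < limit:
            return cls
    return "3B"

# The 5 kV peaks measured on the floating floor fall into class 3A
measured_class = hbm_class(5000)
```

A device qualified only to class 1 or 2 would thus be at risk from the voltages observed on the floating floor.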
Procedia PDF Downloads 129
237 Data-Driven Surrogate Models for Damage Prediction of Steel Liquid Storage Tanks under Seismic Hazard
Authors: Laura Micheli, Majd Hijazi, Mahmoud Faytarouni
Abstract:
The damage reported by oil and gas industrial facilities has revealed the utmost vulnerability of steel liquid storage tanks to seismic events. The failure of steel storage tanks may have devastating and long-lasting consequences for built and natural environments, including the release of hazardous substances, uncontrolled fires, and soil contamination with hazardous materials. It is, therefore, fundamental to reliably predict the damage that steel liquid storage tanks are likely to experience under future seismic hazard events. The seismic performance of steel liquid storage tanks is usually assessed using vulnerability curves obtained from numerical simulation of a tank under different hazard scenarios. However, the computational demand of high-fidelity numerical simulation models, such as finite element models, makes the vulnerability assessment of liquid storage tanks time-consuming and often impractical. As a solution, this paper presents a surrogate model-based strategy for predicting seismic-induced damage in steel liquid storage tanks. In the proposed strategy, the surrogate model is leveraged to reduce the computational demand of time-consuming numerical simulations. To create the data set for training the surrogate model, field damage data from past earthquake reconnaissance surveys and reports are collected. Features representative of steel liquid storage tank characteristics (e.g., diameter, height, liquid level, yield stress) and seismic excitation parameters (e.g., peak ground acceleration, magnitude) are extracted from the field damage data. The collected data are then used to train a data-driven surrogate model that maps the relationship between tank characteristics, seismic hazard parameters, and seismic-induced damage. Different surrogate algorithms, including naïve Bayes, k-nearest neighbors, decision tree, and random forest, are investigated, and results in terms of accuracy are reported.
The model that yields the most accurate predictions is employed to predict future damage as a function of tank characteristics and seismic hazard intensity level. Results show that the proposed approach can be used to estimate the extent of damage in steel liquid storage tanks, and that data-driven surrogates represent a viable alternative to computationally expensive numerical simulation models. Keywords: damage prediction, data-driven model, seismic performance, steel liquid storage tanks, surrogate model
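As a minimal illustration of the surrogate idea, the sketch below implements one of the algorithms the paper investigates, k-nearest neighbors, over a hypothetical feature vector (aspect ratio, fill level, peak ground acceleration). The feature choices, damage labels, and training points are invented; in practice a library implementation trained on the reconnaissance data set would be used.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Predict a damage state for `query` by majority vote of the
    k nearest training tanks (Euclidean distance in feature space).

    train: list of ((aspect_ratio, fill_level, pga_g), damage_state)
    """
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Invented training data: (height/diameter, fill level, PGA in g) -> damage
train = [
    ((0.5, 0.90, 0.10), "none"),
    ((0.6, 0.85, 0.15), "none"),
    ((2.0, 0.95, 0.60), "elephant-foot buckling"),
    ((2.2, 0.90, 0.70), "elephant-foot buckling"),
]
prediction = knn_predict(train, (2.1, 0.92, 0.65))
```

Once trained, such a model returns a damage estimate in microseconds, which is what makes surrogate-based vulnerability assessment tractable compared with repeated finite element runs.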
Procedia PDF Downloads 143
236 Greenhouse Gasses’ Effect on Atmospheric Temperature Increase and the Observable Effects on Ecosystems
Authors: Alexander J. Severinsky
Abstract:
Radiative forces of greenhouse gases (GHG) increase the temperature of the Earth's surface, more on land and less in the oceans, due to their thermal capacities. Given this inertia, the temperature increase is delayed over time. The air temperature, however, is not delayed, as the thermal capacity of air is much lower. In this study, through analysis and synthesis of multidisciplinary science and data, an estimate of the atmospheric temperature increase is made. This estimate is then used to shed light on current observations of ice and snow loss, desertification and forest fires, and increased extreme air disturbances. The inquiry is motivated by the author's skepticism that the current changes can be explained by a ~1 °C global average surface temperature rise within the last 50-60 years. The only other plausible cause to explore is a rise in atmospheric temperature. The study analyses the air temperature rise from three different scientific disciplines: thermodynamics, climate science experiments, and climatic historical studies. The results coming from these diverse disciplines are nearly the same, within ±1.6%. The direct radiative force of GHGs with a high level of scientific understanding was near 4.7 W/m2 in 2018, averaged over the Earth's entire surface, compared to that in pre-industrial times in the mid-1700s. The additional radiative force of fast feedbacks coming from various forms of water adds approximately ~15 W/m2. In 2018, these radiative forces heated the atmosphere by approximately 5.1 °C, which will create a thermal-equilibrium average ground surface temperature increase of 4.6 °C to 4.8 °C by the end of this century. After 2018, the temperature will continue to rise even without any additional increase in the concentration of the GHGs, primarily carbon dioxide and methane. These findings on the radiative force of GHGs in 2018 were applied to estimates of the effects on major Earth ecosystems.
This additional force of nearly 20 W/m2 causes an increase in the ice melting rate of over 90 cm/year, a green leaf temperature increase of nearly 5 °C, and a work energy increase of air of approximately 40 Joules/mole. This explains the observed high rates of ice melting at all altitudes and latitudes, the spread of deserts and increase in forest fires, and the increased energy of tornadoes, typhoons, hurricanes, and extreme weather much more plausibly than the 1.5 °C increase in the average global surface temperature over the same time interval. Planned mitigation and adaptation measures might prove to be much more effective when directed toward the reduction of existing GHGs in the atmosphere. Keywords: greenhouse radiative force, greenhouse air temperature, greenhouse thermodynamics, greenhouse historical, greenhouse radiative force on ice, greenhouse radiative force on plants, greenhouse radiative force in air
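The link between a radiative force and an air temperature increase can be illustrated with the standard no-feedback (Planck) response obtained by linearising the Stefan-Boltzmann law. This is a textbook first-order estimate, not the author's own derivation, and the effective emission temperature of 255 K is an assumption; it is notable that a ~20 W/m2 forcing then yields a warming of the same magnitude as the 5.1 °C discussed above.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_response_delta_t(delta_f_wm2, t_eff_k=255.0):
    """No-feedback temperature response: dT = dF / (4 * sigma * T^3),
    from linearising F = sigma * T^4 around the effective emission
    temperature T (assumed 255 K here)."""
    planck_feedback = 4.0 * SIGMA * t_eff_k ** 3  # ~3.8 W m^-2 K^-1
    return delta_f_wm2 / planck_feedback

# First-order warming for the ~20 W/m2 combined force discussed above
dt = planck_response_delta_t(20.0)  # roughly 5 degrees
```

The real climate response differs from this linearisation once feedbacks and ocean heat uptake are included; the sketch only shows the order-of-magnitude arithmetic.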
Procedia PDF Downloads 104
235 Influence of Controlled Retting on the Quality of the Hemp Fibres Harvested at the Seed Maturity by Using a Designed Lab-Scale Pilot Unit
Authors: Brahim Mazian, Anne Bergeret, Jean-Charles Benezet, Sandrine Bayle, Luc Malhautier
Abstract:
Hemp fibers are increasingly used as reinforcements in polymer matrix composites due to their competitive performance (low density, good mechanical properties, and biodegradability) compared to conventional fibres such as glass fibers. However, the huge variation in their biochemical, physical, and mechanical properties limits the use of these natural fibres in structural applications, where high consistency and homogeneity are required. In the hemp industry, a traditional process termed field retting is commonly used to facilitate the extraction and separation of stem fibers. This retting treatment consists of spreading the stems out on the ground for a duration ranging from a few days to several weeks. Microorganisms (fungi and bacteria) grow on the stem surface and produce enzymes that degrade the pectinolytic substances in the middle lamellae surrounding the fibers. This operation depends on the weather conditions and is currently carried out very empirically in the fields, resulting in large variability in hemp fiber quality (mechanical properties, color, morphology, chemical composition, etc.). Nonetheless, if controlled, retting might be favorable to good hemp fiber properties and hence to those of hemp fiber reinforced composites. Therefore, the present study investigates the influence of controlled retting, within a designed environmental chamber (lab-scale pilot unit), on the quality of hemp fibres harvested at the seed maturity growth stage. Various assessments were applied directly to the fibers: color observations, morphological (optical microscope), surface (ESEM), and biochemical (gravimetry) analyses, spectrocolorimetric measurements (pectin content), thermogravimetric analysis (TGA), and tensile testing. The results reveal that controlled retting leads to a rapid change of color from yellow to dark grey due to the development of microbial communities (fungi and bacteria) on the stem surface.
An increase in the thermal stability of the fibres, due to the removal of non-cellulosic components along retting, is also observed. A separation of bast fibers into elementary fibers occurred, with an evolution of the chemical composition (degradation of pectins) and a rapid decrease in tensile properties (380 MPa to 170 MPa after 3 weeks) due to the accelerated retting process. The influence of controlled retting on the properties of the biocomposite material (PP/hemp fibers) is under investigation. Keywords: controlled retting, hemp fibre, mechanical properties, thermal stability
Procedia PDF Downloads 155
234 A Concept in Addressing the Singularity of the Emerging Universe
Authors: Mahmoud Reza Hosseini
Abstract:
The universe is in a continuous expansion process, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times has also been studied, leading to the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment; its rapid expansion then led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity, which cannot be explained by modern physics, and the big bang theory is no longer valid there. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion; however, highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows a uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. The evidence of quantum fluctuations from this stage also provides a means for studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims at addressing the singularity issue by introducing an energy conversion mechanism. This is accomplished by establishing a state of energy called a “neutral state”, with an energy level referred to as “base energy”, capable of converting into other states. Although it follows the same principles, the unique quantum state of the base energy allows it to be distinguishable from other states and to have a uniform distribution at the ground level. Although the concept of base energy can be utilized to address the singularity issue, to establish a complete picture the origin of the base energy should also be identified. This matter is the subject of the first study in the series, “A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing”, where it is discussed in detail. The proposed concept in this research series therefore provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of base energy being one of the main building blocks of this universe. Keywords: big bang, cosmic inflation, birth of universe, energy creation
Procedia PDF Downloads 89
233 Media Impression and Its Impact on Foreign Policy Making: A Study of India-China Relations
Authors: Rosni Lakandri
Abstract:
With the development of science and technology, there has been a complete transformation in the domain of information technology. Particularly after the Second World War and the Cold War period, the role of media and communication technology in shaping political, economic, and socio-cultural proceedings across the world has been tremendous. Media performs as a channel between the governing bodies of the state and the general masses. As the international community constantly talks about the onset of an Asian Century, India and China happen to be the major players in it. Both have long civilizational histories, both are neighboring countries, both are witnessing huge economic growth and, most important of all, both are considered the rising powers of Asia. This does not negate the fact that the two countries went to war with each other in 1962, and the common people and even the policy makers of both sides still view each other through this prism. A huge contribution to this perception goes to the media coverage on both sides: even where they share spaces of cooperation, the negative impact of media has tended to influence people's opinions and each government's perception of the other. Therefore, analysis of the media's impression in both countries becomes important in order to know its effect on the larger conduct of foreign policy towards each other. It is usually said that media not only acts as an information provider but also as an ombudsman to the government, providing a check and balance on governments in taking proper decisions for the people of the country. In attempting to test this hypothesis, however, we have to ask: does the media really help in shaping the political landscape of a country? This study therefore rests on the following questions: 1. How do China and India depict each other through their respective news media?
2. How much influence, and of what kind, do they exert on the policy-making process of each country? How do they shape public opinion in both countries? To address these enquiries, the study employs both primary and secondary sources. In generating data and other statistical information, primary sources such as reports, government documents, cartography, and agreements between the governments have been used. Secondary sources include books, articles, and other writings collected from various sources, together with opinions from visual media such as news clippings and videos on this topic, which serve as a source of on-the-ground information, as this study is not based on fieldwork. The findings suggest that, in the case of China and India, media has certainly affected people's knowledge of political and diplomatic issues and, at the same time, has affected the foreign policy making of both countries. The media have a considerable impact on foreign policy formulation, and we can say that some mediatization of foreign policy issues is happening in both countries. Keywords: China, foreign policy, India, media, public opinion
Procedia PDF Downloads 151
232 High-Speed Particle Image Velocimetry of the Flow around a Moving Train Model with Boundary Layer Control Elements
Authors: Alexander Buhr, Klaus Ehrenfried
Abstract:
Trackside induced airflow velocities, also known as slipstream velocities, are an important criterion for the design of high-speed trains. The maximum permitted values are given by the Technical Specifications for Interoperability (TSI) and have to be checked in the approval process. For train manufacturers, it is of great interest to know in advance how new train geometries would perform in TSI tests. The Reynolds number in moving model experiments is lower than at full scale. In particular, the limited model length leads to a thinner boundary layer at the rear end. The hypothesis is that the boundary layer rolls up into characteristic flow structures in the train wake, in which the maximum flow velocities can be observed. The idea is to enlarge the boundary layer using roughness elements at the train model head, so that the ratio between the boundary layer thickness and the car width at the rear end is comparable to that of a full-scale train. This may lead to similar flow structures in the wake and better prediction accuracy for TSI tests. In this case, the design of the roughness elements is limited by the moving model rig. Small rectangular roughness shapes are used to obtain a sufficient effect on the boundary layer, while the elements are robust enough to withstand the high accelerating and decelerating forces during the test runs. For this investigation, High-Speed Particle Image Velocimetry (HS-PIV) measurements on an ICE3 train model have been performed in the moving model rig of the DLR in Göttingen, the so-called tunnel simulation facility Göttingen (TSG). The flow velocities within the boundary layer are analysed in a plane parallel to the ground. The height of the plane corresponds to a test position in the EN standard (TSI). Three different shapes of roughness elements are tested. The boundary layer thickness and displacement thickness, as well as the momentum thickness and the form factor, are calculated along the train model.
Conditional sampling is used to analyse the size and dynamics of the flow structures in the train wake at the time of maximum velocity. As expected, larger roughness elements increase the boundary layer thickness and lead to larger flow velocities in the boundary layer and in the wake flow structures. The boundary layer thickness, displacement thickness, and momentum thickness are increased most when larger roughness elements are applied at a height close to the measuring plane. The roughness elements also cause high fluctuations in the form factor of the boundary layer. Behind the roughness elements, the form factors rapidly approach constant values. This indicates that the boundary layer, while growing slowly along the second half of the train model, has reached a state of equilibrium. Keywords: boundary layer, high-speed PIV, ICE3, moving train model, roughness elements
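The integral boundary-layer quantities named above follow directly from the measured velocity profile u(y); a minimal sketch using trapezoidal integration follows, with a discretised linear test profile (u = U y/delta, for which the form factor is exactly 3) standing in for the PIV data.

```python
def boundary_layer_params(y, u, u_inf):
    """Displacement thickness, momentum thickness, and form factor
    from a wall-normal velocity profile u(y) and free-stream speed u_inf.

    delta* = integral of (1 - u/U) dy
    theta  = integral of (u/U)(1 - u/U) dy
    H      = delta* / theta
    """
    r = [ui / u_inf for ui in u]

    def trapz(f):
        return sum(0.5 * (f[i] + f[i + 1]) * (y[i + 1] - y[i])
                   for i in range(len(y) - 1))

    delta_star = trapz([1.0 - ri for ri in r])
    theta = trapz([ri * (1.0 - ri) for ri in r])
    return delta_star, theta, delta_star / theta

# Sanity check on a linear profile u = U*y/delta (exact values:
# delta* = delta/2, theta = delta/6, H = 3)
y = [i / 100 for i in range(101)]
u = [yi for yi in y]  # U = 1, delta = 1
d_star, theta, form_factor = boundary_layer_params(y, u, 1.0)
```

Applied to the PIV profiles at successive streamwise stations, this yields the thickness and form-factor development along the train model described above.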
Procedia PDF Downloads 305
231 Mikrophonie I (1964) by Karlheinz Stockhausen - Between Idea and Auditory Image
Authors: Justyna Humięcka-Jakubowska
Abstract:
1. Background in music analysis. Traditionally, when we think about a composer's sketches, the chances are that we are thinking in terms of the working out of detail rather than the evolution of an overall concept. Since music is a "time art", it follows that questions of form cannot be entirely detached from considerations of time. One could say that composers tend either to regard time as a place gradually and partly intuitively filled, or to look for a specific strategy to occupy it. In my opinion, one thing that sheds light on Stockhausen's compositional thinking is his frequent use of "form schemas", that is, often a single-page representation of the entire structure of a piece. 2. Background in music technology. Sonic Visualiser (SV) is a program used to study a musical recording. It is an open source application for viewing, analysing, and annotating music audio files. It contains a number of visualisation tools designed with useful default parameters for musical analysis. Additionally, SV supports the Vamp plugin format, which provides analyses such as structural segmentation. 3. Aims. The aim of my paper is to show how SV may be used to obtain a better understanding of a specific musical work, and how the compositional strategy impacts musical structures and musical surfaces. I want to show that "traditional" music analytic methods do not allow one to indicate the interrelationships between the musical surface (which is perceived) and the underlying musical/acoustical structure. 4. Main contribution. Stockhausen dealt with the most diverse musical problems by the most varied methods. One characteristic that he never ceased to place at the center of his thought and works was the quest for a new balance founded upon an acute connection between speculation and intuition.
In the case of Mikrophonie I (1964) for tam-tam and 6 players, Stockhausen makes a distinction between the "connection scheme", which indicates the ground rules underlying all versions, and the form scheme, which is associated with a particular version. The preface to the published score includes both the connection scheme and a single instance of a "form scheme", which is what one can hear on the CD recording. In the current study, the insight into the compositional strategy chosen by Stockhausen is compared with the auditory image, that is, with the perceived musical surface. Stockhausen's musical work is analyzed both in terms of melodic/voice evolution and timbre evolution. 5. Implications. The current study shows how musical structures have determined the musical surface. My general assumption is that while listening to music we can extract basic kinds of musical information from musical surfaces. It is shown that interactive strategies of musical structure analysis can offer a very fruitful way of looking directly into certain structural features of music. Keywords: automated analysis, composer's strategy, Mikrophonie I, musical surface, Stockhausen
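As one concrete example of the kind of descriptor such an analysis can track over time, the sketch below computes the spectral centroid of an audio frame (a common proxy for perceived brightness, and hence for timbre evolution) with a plain DFT. This is a generic illustration, not a reconstruction of the Sonic Visualiser/Vamp pipeline used in the study; the test tone is invented.

```python
import math

def spectral_centroid(frame, sample_rate):
    """Magnitude-weighted mean frequency of one audio frame.

    Uses a plain O(n^2) DFT, which is fine for short illustrative
    frames; a real analysis would use an FFT over a sliding window.
    """
    n = len(frame)
    mags, freqs = [], []
    for k in range(1, n // 2):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
        freqs.append(k * sample_rate / n)
    total = sum(mags)
    return sum(f * m for f, m in zip(freqs, mags)) / total if total else 0.0

# A pure 400 Hz tone sampled at 6400 Hz lands exactly on bin 4
# of a 64-sample frame, so its centroid should be 400 Hz
frame = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
centroid = spectral_centroid(frame, 6400.0)
```

Plotting such a descriptor frame by frame over a recording of Mikrophonie I is exactly the kind of surface-level trace that can then be set against the form scheme.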
Procedia PDF Downloads 297
230 Deep Learning for Image Correction in Sparse-View Computed Tomography
Authors: Shubham Gogri, Lucia Florescu
Abstract:
Medical diagnosis and radiotherapy treatment planning using Computed Tomography (CT) rely on the quantitative accuracy and quality of the CT images. At the same time, requirements for CT imaging include reducing the radiation dose exposure to patients and minimizing scanning time. A solution to this is the sparse-view CT technique, based on a reduced number of projection views. This, however, introduces a new problem: the incomplete projection data result in lower quality of the reconstructed images. To tackle this issue, deep learning methods have been applied to enhance the quality of sparse-view CT images. A first approach employed Mir-Net, a dedicated deep neural network designed for image enhancement. This showed promise, utilizing an intricate architecture comprising encoder and decoder networks along with the Charbonnier loss. However, this approach was computationally demanding. Subsequently, a specialized Generative Adversarial Network (GAN) architecture, rooted in the Pix2Pix framework, was implemented. This GAN framework involves a U-Net-based generator and a discriminator based on convolutional neural networks. To bolster the GAN's performance, both Charbonnier and Wasserstein loss functions were introduced, collectively focusing on capturing minute details while ensuring training stability. The integration of a perceptual loss, calculated on feature vectors extracted from the VGG16 network pretrained on the ImageNet dataset, further enhanced the network's ability to synthesize relevant images. A series of comprehensive experiments with clinical CT data were conducted, exploring various GAN loss functions, including Wasserstein, Charbonnier, and perceptual loss. The outcomes demonstrated significant image quality improvements, confirmed through pertinent metrics such as the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) between the corrected images and the ground truth.
Furthermore, learning curves and qualitative comparisons added evidence of the enhanced image quality and the network's increased stability, while preserving pixel intensity values. The experiments underscored the potential of deep learning frameworks in enhancing the visual interpretation of CT scans, achieving outcomes with SSIM values close to one and PSNR values reaching up to 76.
Keywords: generative adversarial networks, sparse view computed tomography, CT image correction, Mir-Net
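The Charbonnier loss term and the PSNR metric mentioned in the abstract can be sketched in NumPy; the function names and the toy pixel values below are illustrative assumptions, not the authors' implementation (which combined these terms inside a Pix2Pix-style training loop):

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Smooth L1-like penalty that keeps gradients stable near zero error."""
    return float(np.mean(np.sqrt((pred - target) ** 2 + eps ** 2)))

def psnr(pred, target, max_val=1.0):
    """Peak Signal-to-Noise Ratio in dB between a corrected image and ground truth."""
    mse = np.mean((pred - target) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))

# Toy example: a "corrected" image that is off by 0.1 everywhere.
target = np.full((8, 8), 0.5)
pred = target + 0.1
print(round(psnr(pred, target), 2))          # MSE 0.01 -> 10*log10(1/0.01) = 20.0 dB
print(round(charbonnier_loss(pred, target), 4))
```

In the GAN setting described above, such a reconstruction term would be added to the adversarial (Wasserstein) and perceptual terms with tunable weights.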
Procedia PDF Downloads 161
229 Thermal Comfort and Outdoor Urban Spaces in the Hot Dry City of Damascus, Syria
Authors: Lujain Khraiba
Abstract:
Recently, there has been broad recognition that micro-climate conditions contribute to the quality of life in outdoor urban spaces, from both economic and social viewpoints. The consideration of urban micro-climate and outdoor thermal comfort in urban design and planning processes has become an important aspect of current studies. However, these aspects are not yet considered in urban planning regulations in practice, and the regulations are often poorly adapted to the local climate and culture. There is therefore a great need to adapt existing planning regulations to the local climate, especially in cities with extremely hot weather conditions. The overall aim of this study is to point out the complexity of the relationship between urban planning regulations, urban design, micro-climate, and outdoor thermal comfort in the hot dry city of Damascus, Syria. The main aim is to investigate the temporal and spatial effects of micro-climate on urban surface temperatures and outdoor thermal comfort in different urban design patterns, as shaped by urban planning regulations, during extreme summer conditions. In addition, different alternatives for mitigating surface temperature and thermal stress are studied. The novelty of this study is to highlight the combined effect of urban surface materials and vegetation on the thermal environment. The study is based on micro-climate simulations using ENVI-met 3.1. The input data are calibrated against micro-climate fieldwork conducted in different urban zones of Damascus. Different urban forms and geometries, including the old and the modern parts of Damascus, are thermally evaluated. The Physiological Equivalent Temperature (PET) index is used as an indicator for outdoor thermal comfort analysis. The study highlights the shortcomings of existing planning regulations in terms of solar protection, especially at street level.
The results show that surface temperatures in Old Damascus are lower than in the modern part. This is basically due to the difference in urban geometries: in Old Damascus the geometry prevents solar radiation from reaching the ground and heating the surface, whereas the streets of modern Damascus are laid out as wide spaces with high Sky View Factor values (SVF of about 0.7). Moreover, the canyons in the old part are paved in cobblestones, whereas asphalt is the main material used in the streets of modern Damascus. Furthermore, Old Damascus is less thermally stressful than the modern part (the difference in the PET index is about 10 °C). The thermal situation is enhanced when additional vegetation is considered (an improvement of 13 °C in surface temperature is recorded in modern Damascus). The study recommends integrating a detailed landscape code at street level into the urban regulations of Damascus in order to achieve urban development in harmony with micro-climate and comfort. Such a strategy would be very useful for decreasing urban warming in the city.
Keywords: micro-climate, outdoor thermal comfort, urban planning regulations, urban spaces
Procedia PDF Downloads 485
228 Variation of Carbon Isotope Ratio (δ13C) and Leaf-Productivity Traits in Aquilaria Species (Thymelaeceae)
Authors: Arlene López-Sampson, Tony Page, Betsy Jackes
Abstract:
The genus Aquilaria produces a highly valuable fragrant oleoresin known as agarwood. Agarwood forms in a few trees in the wild as a response to injury or pathogen attack. The resin is used in the perfume and incense industry and in medicine. Cultivation of Aquilaria species as a sustainable source of the resin is now a common strategy. Physiological traits are frequently used as a proxy of crop and tree productivity. Aquilaria species growing in Queensland, Australia were studied to investigate the relationship between leaf-productivity traits and tree growth. Specifically, 28 trees, representing 12 plus trees and 16 trees from yield plots, were selected for carbon isotope analysis (δ13C) and monitoring of six leaf attributes. Trees were grouped into four diameter classes (diameter at 150 mm above ground level), ensuring that the variability in growth of the whole population was sampled. A model-averaging technique based on the Akaike information criterion (AIC) was applied to identify whether leaf traits could assist in diameter prediction. Carbon isotope values were correlated with height classes and leaf traits to determine any relationship. On average, four leaves per shoot were recorded. Approximately one new leaf per week is produced by a shoot. The rate of leaf expansion was estimated at 1.45 mm day-1. There were no statistically significant differences among diameter classes in leaf expansion rate or number of new leaves per week (p > 0.05). The range of δ13C values in leaves of Aquilaria species was from -25.5 ‰ to -31 ‰, with an average of -28.4 ‰ (± 1.5 ‰). Only 39% of the variability in height can be explained by leaf δ13C. Leaf δ13C and nitrogen content values were positively correlated. This relationship implies that leaves with higher photosynthetic capacities also had lower intercellular carbon dioxide concentrations (ci/ca) and less depleted 13C values. Most of the predictor variables have a weak correlation with diameter (D).
However, analysis of the 95% confidence set of best-ranked regression models indicated that the predictors most likely to explain growth in Aquilaria species are petiole length (PeLen), δ13C (true13C) and δ15N (true15N) values, leaf area (LA), specific leaf area (SLA), and the number of new leaves produced per week (NL.week). The model constructed with PeLen, true13C, true15N, LA, SLA, and NL.week explained 45% (R² = 0.4573) of the variability in D. The leaf traits studied gave a better understanding of the leaf attributes that could assist in the selection of high-productivity trees in Aquilaria.
Keywords: 13C, petiole length, specific leaf area, tree growth
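The AIC-based model ranking used here can be illustrated with a short sketch; the residual-sum-of-squares values are invented toy numbers, and the AIC formula shown (the least-squares form) is one common convention rather than the authors' exact computation:

```python
import numpy as np

def aic_ls(rss, n, k):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2k (additive constants dropped)."""
    return n * np.log(rss / n) + 2 * k

def akaike_weights(aics):
    """Relative support for each candidate model, normalised to sum to 1."""
    delta = np.asarray(aics) - np.min(aics)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Toy comparison: three hypothetical trait models fitted to n = 28 trees.
n = 28
candidates = {"PeLen+true13C": (12.0, 3), "PeLen+SLA+LA": (10.5, 4), "full model": (10.4, 7)}
aics = {name: aic_ls(rss, n, k) for name, (rss, k) in candidates.items()}
weights = akaike_weights(list(aics.values()))
for (name, a), w in zip(aics.items(), weights):
    print(f"{name}: AIC={a:.2f}, weight={w:.2f}")
```

Note how the full model is penalised for its extra parameters despite the slightly lower residual error; model averaging then combines predictions using the weights.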
Procedia PDF Downloads 509
227 Comparing Deep Architectures for Selecting Optimal Machine Translation
Authors: Despoina Mouratidis, Katia Lida Kermanidis
Abstract:
Machine translation (MT) is a very important task in Natural Language Processing (NLP). MT evaluation is crucial in MT development, as it constitutes the means to assess the success of an MT system and also helps improve its performance. Several methods have been proposed for the evaluation of MT systems. Some of the most popular in automatic MT evaluation are score-based, such as the BLEU score; others are based on lexical or syntactic similarity between the MT outputs and the reference, involving higher-level information such as part-of-speech (POS) tagging. This paper presents a language-independent machine learning framework for classifying pairwise translations. The framework uses vector representations of two machine-produced translations, one from a statistical machine translation (SMT) model and one from a neural machine translation (NMT) model. The vector representations consist of automatically extracted word embeddings and string-like language-independent features. These vector representations are used as input to a multi-layer neural network (NN) that models the similarity between each MT output and the reference, as well as between the two MT outputs. To evaluate the proposed approach, a professional translation and a "ground-truth" annotation are used. The parallel corpora used are English-Greek (EN-GR) and English-Italian (EN-IT), in the educational domain and of informal genres (video lecture subtitles, course forum text, etc.) that are difficult to translate reliably. Three basic deep learning (DL) architectures were tested in this schema: (i) fully-connected dense, (ii) Convolutional Neural Network (CNN), and (iii) Long Short-Term Memory (LSTM). Experiments show that all tested architectures achieved better results than some well-known baseline approaches, such as Random Forest (RF) and Support Vector Machine (SVM).
Better accuracy results are obtained when LSTM layers are used in the schema. In terms of balance between the classes, better results are obtained when dense layers are used, because the model then correctly classifies more sentences of the minority class (SMT). For a more integrated analysis of the accuracy results, a qualitative linguistic analysis was carried out. In this context, problems were identified with some figures of speech, such as metaphors, and with certain linguistic phenomena, such as paronyms. It is quite interesting to find out why all the classifiers led to worse accuracy results in Italian than in Greek, given that the linguistic features employed are language-independent.
Keywords: machine learning, machine translation evaluation, neural network architecture, pairwise classification
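One way to picture the pairwise setup is to build a similarity feature vector from the reference and the two system outputs; the cosine-similarity features, the toy embeddings, and the closest-to-reference decision rule below are illustrative assumptions standing in for the paper's embedding-plus-string features and trained neural classifier:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pairwise_features(ref, smt, nmt):
    """Features for the classifier: each output vs the reference, and the two outputs vs each other."""
    return np.array([cosine(smt, ref), cosine(nmt, ref), cosine(smt, nmt)])

def pick_translation(ref, smt, nmt):
    """Trivial stand-in for the trained network: prefer the output closer to the reference."""
    f = pairwise_features(ref, smt, nmt)
    return "NMT" if f[1] > f[0] else "SMT"

# Toy sentence embeddings (e.g. averaged word embeddings).
ref = np.array([1.0, 0.0, 1.0])
smt = np.array([0.2, 1.0, 0.1])   # drifts away from the reference
nmt = np.array([0.9, 0.1, 1.1])   # stays close to the reference
print(pick_translation(ref, smt, nmt))  # NMT
```

In the actual framework these features feed a multi-layer network rather than a hard rule, which is what lets it weigh the output-vs-output similarity as well.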
Procedia PDF Downloads 132
226 Biosorption of Nickel by Penicillium simplicissimum SAU203 Isolated from Indian Metalliferous Mining Overburden
Authors: Suchhanda Ghosh, A. K. Paul
Abstract:
Nickel, an industrially important metal, is not mined in India due to the lack of primary mining resources. However, the chromite deposits of the Sukinda and Baula-Nuasahi regions of Odisha, India, are reported to contain around 0.99% nickel entrapped in the goethite matrix of the lateritic, iron-rich ore. Weathering of the dumped chromite mining overburden often leads to contamination of the ground water as well as the surface water with toxic nickel. Microbes inherent to this metal-contaminated environment are reported to be capable of removal as well as detoxification of various metals, including nickel. Nickel-resistant fungal isolates obtained in pure form from the metal-rich overburden were evaluated for their potential to biosorb nickel using their dried biomass. Penicillium simplicissimum SAU203 was the best nickel biosorbent among the 20 fungi tested and was capable of sorbing 16.85 mg Ni/g biomass from a solution containing 50 mg/l of Ni. The identity of the isolate was confirmed by 18S rRNA gene analysis. The sorption capacity of the isolate was further characterized using the Langmuir and Freundlich adsorption isotherm models, and the results reflected energy-efficient sorption. Fourier-transform infrared spectroscopy studies comparing the nickel-loaded and control biomass revealed the involvement of hydroxyl, amine and carboxylic groups in Ni binding. The sorption process was also optimized for several standard parameters, such as initial metal ion concentration, initial sorbent concentration, incubation temperature and pH, presence of additional cations, and pre-treatment of the biomass with different chemicals. Optimization led to significant improvements in nickel biosorption onto the fungal biomass. P. simplicissimum SAU203 could sorb 54.73 mg Ni/g biomass with an initial Ni concentration of 200 mg/l in solution, and 21.8 mg Ni/g biomass with an initial biomass concentration of 1 g/l solution.
The optimum temperature and pH for biosorption were recorded as 30°C and pH 6.5, respectively. The presence of Zn and Fe ions improved the sorption of Ni(II), whereas cobalt had a negative impact. Pre-treatment of the biomass with various chemical and physical agents affected the proficiency of Ni sorption by P. simplicissimum SAU203 biomass: autoclaving, as well as treatment of the biomass with 0.5 M sulfuric acid or acetic acid, reduced sorption compared to the untreated biomass, whereas biomass treated with NaOH, Na₂CO₃ or Tween 80 (0.5 M) showed augmented metal sorption. Hence, on the basis of the present study, it can be concluded that P. simplicissimum SAU203 has potential for the removal as well as detoxification of nickel from contaminated environments in general, and particularly from the chromite mining areas of Odisha, India.
Keywords: nickel, fungal biosorption, Penicillium simplicissimum SAU203, Indian chromite mines, mining overburden
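The Langmuir isotherm fitting mentioned above can be sketched with a linearised least-squares fit; the equilibrium data below are synthetic, and the parameter values (e.g. a q_max of 54.73 mg/g, echoing the reported maximum uptake) are used only to generate the toy data, not taken from the paper's fitted isotherm:

```python
import numpy as np

def langmuir(Ce, qmax, K):
    """Langmuir isotherm: uptake qe (mg/g) at equilibrium concentration Ce (mg/l)."""
    return qmax * K * Ce / (1.0 + K * Ce)

def fit_langmuir_linearised(Ce, qe):
    """Fit via the linear form Ce/qe = Ce/qmax + 1/(qmax*K)."""
    slope, intercept = np.polyfit(Ce, Ce / qe, 1)
    qmax = 1.0 / slope
    K = slope / intercept
    return qmax, K

# Synthetic equilibrium data generated from assumed parameters.
Ce = np.array([10.0, 25.0, 50.0, 100.0, 200.0])
qe = langmuir(Ce, qmax=54.73, K=0.05)
qmax_fit, K_fit = fit_langmuir_linearised(Ce, qe)
print(round(qmax_fit, 2), round(K_fit, 3))  # recovers 54.73 and 0.05 on noiseless data
```

With real batch data, the quality of the Langmuir versus Freundlich fit (e.g. by R²) is what indicates which sorption model, and hence which binding picture, applies.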
Procedia PDF Downloads 191
225 Photophysics of a Coumarin Molecule in Graphene Oxide Containing Reverse Micelle
Authors: Aloke Bapli, Debabrata Seth
Abstract:
Graphene oxide (GO), the two-dimensional (2D) nanoscale allotrope of carbon, has several physiochemical properties, such as high mechanical strength, high surface area, and strong thermal and electrical conductivity, that make it an important candidate for modern applications such as drug delivery, supercapacitors, sensors, etc. GO has been used in the photothermal treatment of cancers, Alzheimer's disease, etc. The main reason for choosing GO in this work is that it is a surface-active molecule: it carries a large number of hydrophilic functional groups, such as carboxylic acid, hydroxyl and epoxide, on its surface and in the basal plane. It can therefore easily interact with organic fluorophores through hydrogen bonding or other kinds of interaction and readily modulate the photophysics of probe molecules. Different spectroscopic techniques were used in this work. Ground-state absorption spectra and steady-state fluorescence emission spectra were measured using a UV-Vis spectrophotometer from Shimadzu (model UV-2550) and a spectrofluorometer from Horiba Jobin Yvon (model Fluoromax 4P), respectively. All fluorescence lifetime and anisotropy decays were collected using a time-correlated single photon counting (TCSPC) setup from Edinburgh Instruments (model LifeSpec-II, U.K.). Herein, we describe the photophysics of a hydrophilic molecule, 7-(N,N-diethylamino)coumarin-3-carboxylic acid (7-DCCA), in reverse micelles containing GO. It was observed that the photophysics of the dye is modulated in the presence of GO compared with its photophysics in the absence of GO inside the reverse micelles. Here we report the solvent relaxation and rotational relaxation times in GO-containing reverse micelles and compare them with the normal reverse micelle system using the 7-DCCA molecule. A normal reverse micelle means a reverse micelle in the absence of GO.
The absorption maxima of 7-DCCA were blue-shifted and the emission maxima were red-shifted in GO-containing reverse micelles compared to normal reverse micelles. The rotational relaxation time in GO-containing reverse micelles is always faster than in normal reverse micelles. The solvent relaxation time at lower w₀ values is always slower in GO-containing reverse micelles than in normal reverse micelles, and at higher w₀ the solvent relaxation time of GO-containing reverse micelles becomes almost equal to that of normal reverse micelles. The emission maximum of 7-DCCA exhibits a bathochromic shift in GO-containing reverse micelles compared to that in normal reverse micelles because the presence of GO increases the polarity of the system; as polarity increases, the emission maximum is red-shifted. The average decay time in GO-containing reverse micelles is less than that in normal reverse micelles. In GO-containing reverse micelles, the quantum yield, decay time, rotational relaxation time, and solvent relaxation time at λₑₓ = 375 nm are always higher than at λₑₓ = 405 nm, showing the excitation-wavelength-dependent photophysics of 7-DCCA in GO-containing reverse micelles.
Keywords: photophysics, reverse micelle, rotational relaxation, solvent relaxation
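TCSPC decays like those described above are commonly fitted as a sum of exponentials, from which an average decay time is reported; the sketch below uses the amplitude-weighted convention (one of two common choices), and the amplitudes and lifetimes are invented toy values, not fit results from this study:

```python
import numpy as np

def multi_exp_decay(t, amps, taus):
    """Fluorescence intensity I(t) = sum_i a_i * exp(-t / tau_i)."""
    t = np.asarray(t, dtype=float)[:, None]
    return np.sum(np.asarray(amps) * np.exp(-t / np.asarray(taus)), axis=1)

def average_lifetime(amps, taus):
    """Amplitude-weighted average: <tau> = sum(a_i * tau_i) / sum(a_i)."""
    amps, taus = np.asarray(amps), np.asarray(taus)
    return float(np.sum(amps * taus) / np.sum(amps))

# Toy biexponential fit: a fast and a slow component (lifetimes in ns).
amps, taus = [0.6, 0.4], [0.5, 2.5]
print(average_lifetime(amps, taus))          # 0.6*0.5 + 0.4*2.5 = 1.3 ns
print(multi_exp_decay([0.0], amps, taus))    # intensity at t=0 equals the amplitude sum
```

The intensity-weighted alternative, sum(a_i*tau_i²)/sum(a_i*tau_i), weights the slow component more heavily; which convention a paper uses should always be stated alongside the value.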
Procedia PDF Downloads 155
224 Implementation of Learning Disability Annual Review Clinics to Ensure Good Patient Care, Safety, and Equality in Covid-19: A Two Pass Audit in General Practice
Authors: Liam Martin, Martha Watson
Abstract:
Patients with learning disabilities (LD) are at increased risk of physical and mental illness due to health inequality. To address this, NICE recommends that people with a learning disability should have an annual LD health check from the age of 14. This consultation should include a holistic review of the patient's physical, mental and social health needs, with a view to creating an action plan to support the patient's care. The expected standard set by the Quality and Outcomes Framework (QOF) is that each general practice should review at least 75% of its LD patients annually. During COVID-19, there have been barriers to primary care, including health anxiety, the shift to online general practice and the increase in GP workloads. A surgery in North London wanted to assess whether it was falling short of the expected standard for LD patient annual reviews in order to optimize care post COVID-19. A baseline audit was completed to assess how many LD patients had received their annual review over the period 29th September 2020 to 29th September 2021. This information was accessed using the EMIS Web Health Care System (EMIS). Patients included were aged 14 and over, as per QOF standards. Doctors were not notified that this audit was taking place. Following the results of this audit, the creation of learning disability clinics was recommended; these were to be held on the ground floor, with dedicated time for LD reviews. A re-audit was performed via the same process 6 months later, in March 2022. At the time of the baseline audit, there were 71 patients aged 14 and over on the LD register. 54% of these LD patients were found to have documentation of an annual LD review within the last 12 months. None of the LD patients between the ages of 14-18 had received their annual review. The results were discussed with the practice, and dedicated clinics were set up to review their LD patients.
A second pass of the audit was completed 6 months later. This showed an improvement, with 84% of the LD patients registered at the surgery now having a documented annual review within the last 12 months, and 78% of the patients between the ages of 14-18 now reviewed. The baseline audit revealed that the practice was not meeting the expected standard for LD patients' annual health checks as outlined by QOF, with the most neglected patients being those between the ages of 14-18. Identification and awareness of this vulnerable cohort are important so that measures can be put in place to support their physical, mental and social wellbeing. Other practices could consider an audit of their annual LD health checks to make sure they are practicing within QOF standards, and if there is a shortfall, they could consider implementing similar actions to those used here: dedicated clinics for LD patient reviews.
Keywords: COVID-19, learning disability, learning disability health review, quality and outcomes framework
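The audit figure itself (the share of registered LD patients with a documented review in the preceding 12 months) is a simple date calculation; the sketch below is an illustrative reconstruction with an invented five-patient register, not the practice's actual EMIS query:

```python
from datetime import date, timedelta

def review_coverage(last_reviews, audit_date, window_days=365):
    """Fraction of patients whose last documented review falls within the audit window.
    Patients with no recorded review (None) count against coverage."""
    cutoff = audit_date - timedelta(days=window_days)
    reviewed = sum(1 for d in last_reviews if d is not None and d >= cutoff)
    return reviewed / len(last_reviews)

# Toy register of 5 patients: two recent reviews, one stale, two missing.
audit_date = date(2021, 9, 29)
register = [date(2021, 6, 1), date(2021, 9, 1), date(2019, 3, 15), None, None]
coverage = review_coverage(register, audit_date)
print(f"{coverage:.0%} reviewed; QOF 75% target met: {coverage >= 0.75}")
```

Running the same calculation at baseline and at re-audit gives the before/after comparison reported above (54% versus 84%).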
Procedia PDF Downloads 85
223 Exploring the Practices of Global Citizenship Education in Finland and Scotland
Authors: Elisavet Anastasiadou
Abstract:
Global citizenship refers to economic, social, political, and cultural interconnectedness, and it is inextricably intertwined with social justice, respect for human rights, peace, and a sense of responsibility to act at local and global levels. It aims to be transformative and to enhance critical thinking and participation through pedagogical approaches based on social justice and democracy. The purpose of this study is to explore how Global Citizenship Education (GCE) is presented and implemented in two educational contexts, specifically in the curricula and pedagogical practices of primary education in Finland and Scotland. The impact of GCE is recognised as a means for further development, and the Finnish and Scottish curricula acknowledge the significance of GCE, emphasising the student's ability to act and succeed in diverse and global communities. This comparative study should provide a good basis for further developing teaching practices based on an informed understanding of how GCE is constrained or enabled from two different perspectives, extend the methodological applications of Practice Architectures, and provide critical insights into GCE as a theoretical notion adopted by national and international educational policy. The study is directly connected with global citizenship, aiming at future and societal change. The empirical work employs a multiple case study approach, including interviews and analysis of existing documents (textbooks, curricula). The data consist of the Finnish and Scottish curricula. A systematic analysis of the curricula in relation to GCE will offer insights into how the aims of GCE are presented and framed within the two contexts. This will be achieved using the theory of Practice Architectures. Curricula are official policy documents (texts) that frame and envisage pedagogical practices. Practices, according to the theory of practice architectures, consist of sayings, doings, and relatings.
Hence, even if the text analysis covers the semantic space (sayings), prefigured by cultural-discursive arrangements, and the relatings, prefigured by socio-political arrangements, it will inevitably reveal information on the doings, prefigured by material-economic arrangements, as these hang together in practices. The results will assist educators in making changes to their teaching and enhance their self-conscious understanding of the history-making significance of their practices. They will also have the potential to inform reform and to focus attention on educationally relevant issues. Thus, the study will be able to open the ground for interventions and further research while considering the societal demands of a world in change.
Keywords: citizenship, curriculum, democracy, practices
Procedia PDF Downloads 207