Search results for: density
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3361

361 Periurban Landscape as an Opportunity Field to Solve Ecological Urban Conflicts

Authors: Cristina Galiana Carballo, Ibon Doval Martínez

Abstract:

Urban boundaries often form a contested limit between countryside and city in Europe. This territory is normally defined by very limited land uses and an abundance of open space. The dimension and dynamics of peri-urbanization in recent decades have increased this land stock, with consequences for economic costs (maintenance, transport), ecological disturbance of the territory, and changes in inhabitants' behaviour. In an increasingly urbanised world with a growing urban population, cities also face challenges such as climate change. In this context, near-future corrective trends, including circular economies for local food supply and decentralised waste management, have become key strategies towards more sustainable urban models. These new solutions need to be planned and implemented considering the potential conflict with current land uses. The city of Vitoria-Gasteiz (Basque Country, Spain) has tripled its land consumption per inhabitant in 10 years, resulting in a vast extension of low-density urban fabric that confronts rural land and threatens agricultural uses, landscape and urban sustainability. Urban planning allows optimum use allocation based on soil vocation and socio-ecosystem needs, while peri-urban space arises as an opportunity for developing uses that fit neither within the compact city nor in open agricultural land, such as medium-size agrocomposting systems or biomass plants. Therefore, a qualitative multi-criteria methodology has been developed for the city of Vitoria-Gasteiz to assess the spatial definition of peri-urban land.
Climate change and the circular economy were identified as the frameworks within which to determine future land demand, soil vocation and urban planning requirements, which were eventually translated into estimates of the required local food and renewable energy supply, along with the implementation of alternative waste management systems. On this basis, an urban planning proposal was developed that overcomes the urban/non-urban dichotomy in Vitoria-Gasteiz. The proposal aims to enhance the rural system and improve urban sustainability performance through the normative recognition of an agricultural peri-urban belt.
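As a rough sketch of how a multi-criteria assessment of this kind can be operationalized, the snippet below scores candidate parcels with a weighted sum. The criteria, weights, and scores are illustrative placeholders, not the indicators actually used for Vitoria-Gasteiz.

```python
# Hypothetical weighted multi-criteria score for candidate peri-urban parcels.
# All names, weights, and scores are illustrative placeholders.

CRITERIA_WEIGHTS = {
    "soil_vocation": 0.35,      # agrological suitability of the soil
    "distance_to_city": 0.25,   # proximity to the compact urban fabric
    "landscape_value": 0.20,    # ecological/landscape quality
    "infrastructure": 0.20,     # access to roads and utilities
}

def suitability(parcel_scores: dict) -> float:
    """Weighted sum of normalized (0-1) criterion scores for one parcel."""
    return sum(CRITERIA_WEIGHTS[c] * parcel_scores[c] for c in CRITERIA_WEIGHTS)

# Rank two hypothetical parcels for, e.g., a medium-size agrocomposting site.
parcels = {
    "parcel_A": {"soil_vocation": 0.8, "distance_to_city": 0.9,
                 "landscape_value": 0.4, "infrastructure": 0.7},
    "parcel_B": {"soil_vocation": 0.5, "distance_to_city": 0.3,
                 "landscape_value": 0.9, "infrastructure": 0.4},
}
ranking = sorted(parcels, key=lambda p: suitability(parcels[p]), reverse=True)
print(ranking)  # parcel_A ranks first under this illustrative weighting
```

In practice each criterion would be mapped over GIS layers rather than a handful of parcels, but the ranking logic is the same.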

Keywords: landscape ecology, land-use management, periurban, urban planning

Procedia PDF Downloads 133
360 An Efficient Automated Radiation Measuring System for Plasma Monopole Antenna

Authors: Gurkirandeep Kaur, Rana Pratap Yadav

Abstract:

This experimental study examines the radiation characteristics of different plasma structures of a surface-wave-driven plasma antenna using an automated measuring system. A 30 cm long plasma column of argon gas with a diameter of 3 cm is excited by a surface-wave discharge mechanism operating at 13.56 MHz, with RF power levels up to 100 W and gas pressures between 0.01 and 0.05 mb. The study reveals that a single-structure plasma monopole can be modified into an array of plasma antenna elements by forming multiple striations or plasma blobs inside the discharge tube, achieved by altering plasma properties such as working pressure, operating frequency, input RF power, and discharge tube dimensions (length, radius, and thickness). It is also reported that plasma length, electron density, and conductivity are functions of the operating plasma parameters and are controlled by changing the working pressure and input power. To investigate the antenna radiation efficiency in the far-field region, an automated radiation measuring system has been fabricated and is presented in detail. This system combines a controller, DC servo motors, a vector network analyzer (VNA), and a computing device to evaluate the radiation intensity, directivity, gain and efficiency of the plasma antenna. The controller drives multiple motors that move aluminium shafts in both the elevation and azimuthal planes, while radiation from the plasma monopole antenna is measured by the VNA, which is in turn connected to the computing device to display the radiation in polar-plot form. The radiation characteristics of both continuous and array plasma monopole antennas have been studied for various working plasma parameters. The experimental results clearly indicate that the plasma antenna is as efficient as a metallic antenna.
The radiation from the plasma monopole antenna is significantly influenced by the plasma properties, which provides a wide range of radiation patterns: desired radiation parameters such as beam-width, direction of radiation, radiation intensity and antenna efficiency can all be achieved in a single monopole. This wide selectivity in radiation pattern can meet the demand for wider bandwidth and higher data speeds in communication systems. Moreover, the developed system provides an efficient and cost-effective solution for measuring the far-field radiation pattern of any kind of antenna system.
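The far-field post-processing step can be illustrated with a short numerical sketch: directivity follows from integrating the sampled radiation intensity over the sphere. The cos²-type pattern below is a synthetic short-dipole stand-in, not measured plasma-antenna data.

```python
import numpy as np

# Estimate directivity D = 4*pi*U_max / P_rad from a sampled radiation
# intensity U(theta, phi). The pattern here is the ideal short-dipole
# sin^2(theta) intensity, used only as a stand-in for scanned data.

theta = np.linspace(1e-3, np.pi - 1e-3, 181)   # elevation samples (rad)
phi = np.linspace(0.0, 2.0 * np.pi, 361)       # azimuth samples (rad)
T, P = np.meshgrid(theta, phi, indexing="ij")

U = np.sin(T) ** 2  # radiation intensity (W/sr), synthetic dipole pattern

# Radiated power: integrate U * sin(theta) over the sphere (rectangle rule).
dtheta, dphi = theta[1] - theta[0], phi[1] - phi[0]
P_rad = np.sum(U * np.sin(T)) * dtheta * dphi

D = 4.0 * np.pi * U.max() / P_rad
print(f"directivity = {D:.3f} ({10 * np.log10(D):.2f} dBi)")
```

For the ideal short dipole the closed-form answer is D = 1.5 (1.76 dBi), which the numerical integral reproduces to within the sampling error.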

Keywords: antenna radiation characteristics, dynamically reconfigurable, plasma antenna, plasma column, plasma striations, surface wave

Procedia PDF Downloads 94
359 Numerical Erosion Investigation of Standalone Screen (Wire-Wrapped) Due to the Impact of Sand Particles Entrained in a Single-Phase Flow (Water Flow)

Authors: Ahmed Alghurabi, Mysara Mohyaldinn, Shiferaw Jufar, Obai Younis, Abdullah Abduljabbar

Abstract:

Erosion modeling equations are typically derived from controlled experimental trials with solid particles entrained in single-phase or multi-phase flows. These equations are then employed to predict the erosion damage caused by the continuous impact of solid particles entrained in a stream. It is well known that in gas-sand flows the particle impact angle and velocity do not change drastically, so erosion can be predicted accurately. On the contrary, high-density fluid flows, such as water flow, through complex geometries, such as sand screens, greatly affect the sand particles' trajectories and consequently the erosion rate predictions. Particle tracking models and erosion equations are therefore frequently applied together to improve erosion visualization and estimation. In the present work, computational fluid dynamics (CFD)-based erosion modeling was performed using commercially available software, ANSYS Fluent. The continuous phase (water flow) was simulated using the realizable k-epsilon model, and the secondary phase (solid particles), at a 5% flow concentration, was tracked with the discrete phase model (DPM). Three erosion equations from the literature were implemented in ANSYS Fluent to predict the velocity surge at the screen wire slots and estimate the maximum erosion rates on the screen surface. Results for turbulent kinetic energy, turbulence intensity, dissipation rate, total pressure on the screen, screen wall shear stress, and flow velocity vectors are presented and discussed. Particle tracks and path-lines are also shown based on residence time, velocity magnitude, and flow turbulence. On one hand, the three erosion equations produced similar screen erosion patterns, locations, and DPM concentrations.
On the other hand, the model equations estimated slightly different maximum erosion rates for the wire-wrapped screen. This difference arises because each erosion equation was developed under assumptions tied to its own experimental laboratory conditions.
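As an illustration of the class of equations involved, the sketch below implements Finnie's classic ductile-erosion impact-angle function with a hypothetical material constant. The three literature equations actually used in the study are not specified here, so this is a generic stand-in, not the study's model.

```python
import math

# Generic erosion-ratio model of the form ER = A * V^n * f(alpha), with
# Finnie's ductile-material angle function f(alpha). A is a hypothetical
# material constant; n = 2 is the velocity exponent in Finnie's model.

A_EROSION = 2.17e-7   # placeholder material constant
N_VEL = 2.0           # velocity exponent (Finnie)

def finnie_angle(alpha_rad: float) -> float:
    """Finnie's impact-angle function for ductile materials (continuous)."""
    if math.tan(alpha_rad) > 1.0 / 3.0:
        return math.cos(alpha_rad) ** 2 / 3.0
    return math.sin(2.0 * alpha_rad) - 3.0 * math.sin(alpha_rad) ** 2

def erosion_ratio(v: float, alpha_deg: float) -> float:
    """Mass of wall removed per mass of impacting sand (kg/kg)."""
    return A_EROSION * v ** N_VEL * finnie_angle(math.radians(alpha_deg))

# In water-sand flow, drag from the carrier fluid lowers the impact speed v,
# so the predicted erosion is highly sensitive to the computed trajectories.
for v in (5.0, 10.0):
    print(f"v = {v} m/s -> ER = {erosion_ratio(v, 30.0):.3e}")
```

The strong velocity exponent is exactly why trajectory prediction in a dense carrier fluid dominates the accuracy of the erosion estimate.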

Keywords: CFD simulation, erosion rate prediction, material loss due to erosion, water-sand flow

Procedia PDF Downloads 129
358 Li2S Nanoparticles Impact on the First Charge of Li-ion/Sulfur Batteries: An Operando XAS/XES Coupled With XRD Analysis

Authors: Alice Robba, Renaud Bouchet, Celine Barchasz, Jean-Francois Colin, Erik Elkaim, Kristina Kvashnina, Gavin Vaughan, Matjaz Kavcic, Fannie Alloin

Abstract:

With their high theoretical energy density (~2600 Wh.kg-1), lithium/sulfur (Li/S) batteries are highly promising, but these systems are still poorly understood due to the complex mechanisms and equilibria involved. Replacing S8 by Li2S as the active material allows the use of safer negative electrodes, such as silicon, instead of lithium metal. S8 and Li2S have different conductivity and solubility properties, resulting in a profoundly changed activation process during the first cycle. In particular, a high polarization and a lack of reproducibility between tests are observed during the first charge. Differences observed between the raw Li2S material (micron-sized) and that electrochemically produced in a battery (nano-sized) suggest that the electrochemical process depends on particle size. The major focus of the presented work is therefore to deepen the understanding of the Li2S charge mechanism, and more precisely to characterize the effect of the initial Li2S particle size on both the mechanism and the electrode preparation process. To do so, Li2S nanoparticles were synthesized by two routes, a liquid-path synthesis and dissolution in ethanol, allowing Li2S nanoparticle/carbon composites to be made. Preliminary chemical and electrochemical tests show that starting with Li2S nanoparticles can effectively suppress the high initial polarization but also influences the electrode slurry preparation. Indeed, the classical formulation process, a slurry of polyvinylidene fluoride polymer dissolved in N-methyl-2-pyrrolidone, cannot be used with Li2S nanoparticles. This reveals a completely different behaviour of the Li2S material towards polymers and organic solvents at the nanometric scale. Two operando characterizations, X-ray diffraction (XRD) and X-ray absorption and emission spectroscopy (XAS/XES), were then coupled in order to interpret the poorly understood first charge.
This study reveals that the initial particle size of the active material has a great impact on the working mechanism, and particularly on the different equilibria involved during the first charge of Li2S-based Li-ion batteries. These results explain the electrochemical differences, and particularly the polarization differences, observed during the first charge between micrometric and nanometric Li2S-based electrodes. Finally, this work could lead to better active material design and thus to more efficient Li2S-based batteries.
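The ~2600 Wh/kg figure quoted above can be checked with a short calculation from the overall cell reaction 2Li + S → Li2S (two electrons per formula unit, referenced to the mass of Li2S); the average cell voltage used is an assumption.

```python
# Back-of-the-envelope check of the theoretical energy density of a Li/S
# cell, referenced to the mass of the discharged product Li2S.

F = 96485.33        # Faraday constant, C/mol
M_LI2S = 45.95      # molar mass of Li2S, g/mol
N_ELECTRONS = 2     # electrons per Li2S formula unit
V_AVG = 2.2         # assumed average cell voltage, V

# Specific capacity in mAh/g: n*F coulombs per mole, 1 mAh = 3.6 C.
q = N_ELECTRONS * F / 3.6 / M_LI2S
energy_density = q * V_AVG  # mAh/g * V == Wh/kg
print(f"{q:.0f} mAh/g -> {energy_density:.0f} Wh/kg")
```

This lands close to the quoted ~2600 Wh/kg; referencing the sulfur mass alone instead of Li2S gives a higher number, which is why quoted theoretical values vary between papers.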

Keywords: Li-ion/Sulfur batteries, Li2S nanoparticles effect, Operando characterizations, working mechanism

Procedia PDF Downloads 237
357 The Impact of Undisturbed Flow Speed on the Correlation of Aerodynamic Coefficients as a Function of the Angle of Attack for the Gyroplane Body

Authors: Zbigniew Czyz, Krzysztof Skiba, Miroslaw Wendeker

Abstract:

This paper discusses the results of an aerodynamic investigation of the Tajfun gyroplane body designed by the Polish company Aviation Artur Trendak. The gyroplane was studied as a 1:8 scale model. Scaling objects for aerodynamic investigation is an inherent procedure in any kind of design, and when scaling, the criteria of similarity need to be satisfied. The basic criteria of similarity are geometric, kinematic and dynamic. Although the results of aerodynamic research are often reduced to aerodynamic coefficients, one should pay attention to how the values of these coefficients behave if the criteria are to be satisfied. To satisfy the dynamic criterion, for example, the Reynolds number, the ratio of inertial to viscous forces, must be matched. Since the Reynolds number is the product of flow speed and a characteristic dimension divided by the kinematic viscosity, the flow speed in wind tunnel research should be increased by the same factor by which the object is scaled down. The aerodynamic coefficients determined in this research depend on the real forces acting on the object, its characteristic dimension, the medium's speed and variations in its density. Rapid prototyping with a 3D printer was applied to create the research object. The research was performed in the T-1 low-speed wind tunnel (measurement-volume diameter of 1.5 m) with a six-component internal aerodynamic balance, WDP1, at the Institute of Aviation in Warsaw. The T-1 is a low-speed, continuous-operation tunnel with an open test section. The research covered selected speeds of undisturbed flow, V = 20, 30 and 40 m/s, corresponding to Reynolds numbers (referred to 1 m) of Re = 1.31×10⁶, 1.96×10⁶ and 2.62×10⁶, for angles of attack ranging over -15° ≤ α ≤ 20°. The research yielded basic aerodynamic characteristics and allowed observation of the impact of the undisturbed flow speed on the aerodynamic coefficients as functions of the angle of attack of the gyroplane body.
If the speed of undisturbed flow in the wind tunnel changes, the aerodynamic coefficients are significantly affected. Between 20 m/s and 30 m/s, the drag coefficient Cx changes by 2.4% to 9.9%, whereas the lift coefficient Cz changes by -25.5% to 15.7% if the angle of attack of 0° is excluded, or by -25.5% to 236.9% if it is included. Within the same speed range, the pitching moment coefficient Cmy changes by -21.1% to 7.3% if the angles of attack -15° and -10° are excluded, or by -142.8% to 618.4% if they are included. These discrepancies in the aerodynamic force coefficients need to be considered while designing the aircraft: if the loads on certain aircraft surfaces are calculated, additional correction factors need to be applied. This study allows the discrepancies in the aerodynamic forces to be estimated when scaling the aircraft. This work has been financed by the Polish Ministry of Science and Higher Education.
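The quoted Reynolds numbers can be reproduced directly from Re = VL/ν; the kinematic viscosity of air below is an assumed room-temperature value, chosen to match the reported figures.

```python
# Reproducing the Reynolds numbers quoted for the T-1 tunnel runs:
# Re = V * L / nu, referred to L = 1 m.

NU_AIR = 1.53e-5   # m^2/s, assumed kinematic viscosity of air near room temp.
L_REF = 1.0        # m, reference length used in the paper

for v in (20.0, 30.0, 40.0):
    re = v * L_REF / NU_AIR
    print(f"V = {v:4.0f} m/s -> Re = {re:.2e}")

# Dynamic similarity for the 1:8 model would nominally require 8x the
# full-scale speed at the model's reference length, which is rarely
# achievable in a low-speed tunnel -- hence coefficients drift with speed.
```

This matches the listed values 1.31×10⁶, 1.96×10⁶ and 2.62×10⁶ to the stated precision.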

Keywords: aerodynamics, criteria of similarity, gyroplane, research tunnel

Procedia PDF Downloads 366
356 A Static and Dynamic Slope Stability Analysis of Sonapur

Authors: Rupam Saikia, Ashim Kanti Dey

Abstract:

Sonapur is a rugged hilly region on the border of Assam and Meghalaya in North-East India, lying very near a seismic fault named Dauki, which makes the region seismically active. Moreover, two earthquakes of magnitude 6.7 and 6.9 recently struck North-East India, in January and April 2016. The slope concerned in this study is adjacent to NH 44, which has long been the sole important link to the states of Manipur and Mizoram and to parts of Assam; the road has seen several recorded incidents of landslides, road blocks, etc., mostly during the rainy season, causing considerable loss of life and property over past decades. Against this background, this paper reports a static and dynamic slope stability analysis of Sonapur carried out in MIDAS GTS NX. Since the slope is highly inaccessible due to the terrain and thick vegetation, in-situ testing was not feasible within the current scope, so disturbed soil samples were collected from the site for the determination of strength parameters. The strength parameters were determined for varying relative density, with further variation in water content. The slopes were analyzed under plane-strain conditions for three slope heights of 5 m, 10 m and 20 m, each further categorized by slope angles of 30°, 40°, 50°, 60° and 70°, covering the possible range of steepness. Static analysis was first performed in the dry state; then, considering the worst case that can develop during the rainy season, the slopes were analyzed for the fully saturated condition as well as partial degrees of saturation with a rising waterfront. Furthermore, dynamic analysis was performed using the El-Centro earthquake record, with a magnitude of 6.7 and a peak ground acceleration of 0.3569g at 2.14 s, for the slopes found to be safe during static analysis under both dry and fully saturated conditions.
Slopes with inclinations of 40° and above were found to be highly vulnerable for heights of 10 m and above, even under dry static conditions. The maximum horizontal displacement increased exponentially as the inclination increased from 30° to 70°. The vulnerability of the slopes increases further during the rainy season: even slopes with a minimal steepness of 30° and a height of 20 m were found to be on the verge of failure. Moreover, slopes that were safe under static analysis were found to be highly vulnerable under dynamic analysis. Lastly, as part of the study, a comparison of the strength reduction method (SRM) and the limit equilibrium method (LEM) was carried out, and some of the advantages and disadvantages of each were identified.
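The qualitative trend (safe when dry, failing when saturated) can be illustrated with the classical infinite-slope limit-equilibrium formula. This closed form is a textbook simplification, not the MIDAS GTS NX finite-element model, and the soil parameters below are hypothetical, not the measured Sonapur values.

```python
import math

# Infinite-slope factor of safety:
# FS = (c' + (gamma*z*cos^2(beta) - u) * tan(phi')) / (gamma*z*sin(beta)*cos(beta))

def factor_of_safety(c, phi_deg, gamma, z, beta_deg, gamma_w=0.0):
    """c: effective cohesion (kPa); phi_deg: friction angle (deg);
    gamma: soil unit weight (kN/m^3); z: slip-plane depth (m);
    beta_deg: slope angle (deg); gamma_w: water unit weight for the
    pore-pressure term (0 for dry, ~9.81 for full saturation with seepage)."""
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    u = gamma_w * z * math.cos(beta) ** 2                # pore water pressure
    tau = gamma * z * math.sin(beta) * math.cos(beta)    # driving shear stress
    return (c + (gamma * z * math.cos(beta) ** 2 - u) * math.tan(phi)) / tau

fs_dry = factor_of_safety(c=10, phi_deg=30, gamma=18, z=5, beta_deg=30)
fs_wet = factor_of_safety(c=10, phi_deg=30, gamma=20, z=5, beta_deg=30,
                          gamma_w=9.81)
print(f"dry FS = {fs_dry:.2f}, saturated FS = {fs_wet:.2f}")
```

With these placeholder parameters the dry slope is stable (FS > 1) while the saturated one fails (FS < 1), mirroring the rainy-season vulnerability described above.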

Keywords: dynamic analysis, factor of safety, slope stability, strength reduction method

Procedia PDF Downloads 235
355 Investigation of Physical-Mechanical Characteristics of Granulated Artificial Aggregates Synthesized from Wood Ash Using Green Technology

Authors: Vitoldas Vidikas, Algirdas Augonis

Abstract:

Various ecological binders have been used to minimize the negative environmental effects of cement production and use. Wood ash is one of these alternative binders, and research on it has been increasing recently. The incineration process in power plants produces large amounts of residues whose potential applications remain incompletely understood. However, it is established that wood ash improves concrete properties, serves as a fertilizer, and substitutes for natural aggregates in artificial aggregate production. This study presents the production and properties of wood ash artificial aggregates, their integration into concrete, and the assessment of their strength. Because large amounts of incineration waste accumulate in landfills, the recovery, reuse and recycling of this waste is necessary, and artificial aggregates stand out as a significant innovation in this effort. In this study, the artificial aggregate was carbonized using wood waste incineration ash and an alkali activator consisting of Ca(OH)2. Various mixtures were formulated, incorporating different materials and activator compositions. Initially, fillers were created using wood ash, and formulations were subsequently supplemented with additional wood ash. A series of tests, including XRD, SEM, and compression tests, was conducted. The artificial aggregate exhibits minimal water absorption and holds potential as a substitute for natural materials. Its prospective applications extend to agriculture, where it could function as a fertilizer, and construction, where it could serve as an artificial aggregate. Concrete incorporating the artificial aggregate demonstrates stability, stiffness, and relatively low density.
In our research, a test was developed and applied to determine the compressive strength of the manufactured artificial aggregate, not by direct loading, but by loading a cementitious test specimen containing the aggregate under test. In this way, the test determines not only the effect of the aggregate on the compressive behaviour of such a specimen but also the characteristics of the fracture, which show how the artificial aggregates adhere to the cement matrix. This testing methodology holds promise for evaluating the suitability of artificial aggregates in construction materials, in terms of both their load-bearing capacity and their adhesion to the mineral binder. The results showed that the mechanical properties of granular artificial aggregates vary significantly with the amount of binder (lime): an increase of ~15% in the amount of binder resulted in an increase in the crushing strength of the carbonized aggregate of ~15-20%, while the compressive strength of the cementitious specimen containing this aggregate increased by ~18%.

Keywords: wood ash, artificial aggregate, carbonization, compressive strength

Procedia PDF Downloads 17
354 The Scientific Study of the Relationship Between Physicochemical and Microstructural Properties of Ultrafiltered Cheese: Protein Modification and Membrane Separation

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh

Abstract:

The loss of curd cohesiveness and syneresis are two common problems in the ultrafiltered cheese industry. In this study, using membrane technology and protein modification, a modified cheese was developed and its properties were compared with a control sample. In order to decrease the lactose content and adjust the protein, acidity, dry matter and milk minerals, a combination of ultrafiltration, nanofiltration and reverse osmosis was employed. For protein modification, a two-stage chemical and enzymatic reaction was carried out before and after ultrafiltration. The physicochemical and microstructural properties of the modified ultrafiltered cheese were compared with those of the control. Results showed that the modified protein enhanced the functional properties of the final cheese significantly (p < 0.05), even though its protein content was 50% lower than that of the control. The modified cheese showed 21 ± 0.70, 18 ± 1.10 and 25 ± 1.65% higher hardness, cohesiveness and water-holding capacity, respectively, than the control sample. This behaviour can be explained by the more developed microstructure of the gel network. Furthermore, chemical-enzymatic modification of the milk protein induced a significant change in the network parameters of the final cheese: the indices of network linkage strength, network linkage density, and time scale of junctions were 10.34 ± 0.52, 68.50 ± 2.10 and 82.21 ± 3.85% higher than in the control sample, whereas the distance between adjacent linkages was 16.77 ± 1.10% lower. These results were supported by the textural analysis. A non-linear viscoelastic study showed a triangular stress waveform for the cheese containing the modified protein, while the control sample showed a rectangular stress waveform, suggesting better sliceability of the modified cheese. Moreover, to study the shelf life of the products, the acidity and the mold and yeast populations were monitored for 120 days.
It is worth mentioning that the lactose content of the modified cheese was adjusted to 2.5% before fermentation, while that of the control was 4.5%. The control sample showed a shelf life of 8 weeks, while the modified cheese lasted 18 weeks in the refrigerator. Over 18 weeks, the acidity of the modified and control samples increased from 82 ± 1.50 to 94 ± 2.20 °D and from 88 ± 1.64 to 194 ± 5.10 °D, respectively. The mold and yeast populations over time followed the semicircular shape model (R² = 0.92, R²adj = 0.89, RMSE = 1.25). Furthermore, the mold and yeast counts and their growth rate in the modified cheese were lower than those of the control; this result can be explained by the shortage of energy sources for microorganisms in the modified cheese. The lactose content of the modified sample was less than 0.2 ± 0.05% at the end of fermentation, compared with 3.7 ± 0.68% in the control sample.

Keywords: non-linear viscoelastic, protein modification, semicircular shape model, ultrafiltered cheese

Procedia PDF Downloads 51
353 TNF-Alpha and MDA Levels in Hearts of Cholesterol-Fed Rats Supplemented with Extra Virgin Olive Oil or Sunflower Oil, in Either Commercial or Modified Forms

Authors: Ageliki I. Katsarou, Andriana C. Kaliora, Antonia Chiou, Apostolos Papalois, Nick Kalogeropoulos, Nikolaos K. Andrikopoulos

Abstract:

Oxidative stress is a major mechanism underlying CVDs, while inflammation, a process intertwined with oxidative stress, is also linked to CVDs. Extra virgin olive oil (EVOO) is widely known to play a pivotal role in CVD prevention and reduction. However, in most studies, olive oil constituents are evaluated individually and not as part of the native food; hence, potential synergistic effects driving EVOO's beneficial properties may be underestimated. In this study, the lipid and polar phenolic fractions of EVOO were evaluated for their effect on an inflammatory marker (TNF-alpha) and an oxidation marker (malondialdehyde, MDA) in cholesterol-fed rats. To that end, oils with distinct lipid profiles and polar phenolic contents were used. Wistar rats were fed either a high-cholesterol diet (HCD) or an HCD supplemented with oils, either commercially available, i.e. EVOO and sunflower oil (SO), or modified in their polar phenol content, i.e. phenolics-deprived EVOO (EVOOd) and SO enriched with the EVOO phenolics (SOe). After 9 weeks of dietary intervention, heart and blood samples were collected. The HCD induced dyslipidemia, shown by increases in serum total cholesterol, low-density lipoprotein cholesterol (LDL-c) and triacylglycerols. Heart tissue was affected by the dyslipidemia: oxidation was indicated by an increase in MDA in cholesterol-fed rats, and inflammation by an increase in TNF-alpha. In both cases, this augmentation was attenuated in the EVOO and SOe diets. With respect to oxidation, enriching SO with the EVOO phenolics brought its lipid peroxidation levels as low as in EVOO-fed rats. This suggests that phenolic compounds may act as antioxidant agents in the rat heart, possibly through a protective effect of phenolics on the mitochondrial membrane against oxidative damage. This was further supported by the EVOO/EVOOd comparison, with the former presenting lower heart MDA content.
As for heart inflammation, phenolics naturally present in EVOO, as well as phenolics chemically added to SO, exhibited quenching abilities on heart TNF-alpha levels of cholesterol-fed rats. TNF-alpha may have played a causative role in inducing oxidative stress, while the opposite may also have happened, setting up a vicious cycle. Overall, diet supplementation with EVOO or SOe attenuated the hypercholesterolemia-induced increase in MDA and TNF-alpha in Wistar rat hearts. This is attributed to phenolic compounds, either naturally present in olive oil or added as fortificants to seed oil.

Keywords: extra virgin olive oil, hypercholesterolemic rats, MDA, polar phenolics, TNF-alpha

Procedia PDF Downloads 468
352 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they can provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address this problem, a new SAC FO-CDMA network using partial M-sequence (PMS) codes is presented in this study. Because the proposed PMS code originates from the M-sequence code, a system using it can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance overall network performance. For system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, comprising (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler that broadcasts the transmitted optical signals to the input port of each optical receiver; the parameter N of the PMS code is the code length. The proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. Each optical receiver includes a 1×2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the analysis, the following assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same.
Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' are equally probable. Subsequently, taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network improves performance by about 25% over networks using other codes at BER = 10⁻⁹, because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes is a promising candidate for next-generation optical network applications.
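Since PMS codes derive from M-sequences, the underlying building block can be sketched with a linear-feedback shift register. The tap set below is one small example (the primitive polynomial x³ + x + 1); how the PMS codewords are actually cut from the sequence follows the paper, not this sketch.

```python
from functools import reduce

# One full period of an m-sequence from a Fibonacci LFSR. Taps [3, 1]
# realize x^3 + x + 1, giving period N = 2^3 - 1 = 7.

def m_sequence(taps, state):
    """Generate one full period of the m-sequence for the given taps."""
    n = (1 << len(state)) - 1
    seq = []
    for _ in range(n):
        seq.append(state[-1])                         # output the oldest bit
        fb = reduce(lambda a, b: a ^ b, (state[t - 1] for t in taps))
        state = [fb] + state[:-1]                     # shift in the feedback
    return seq

seq = m_sequence(taps=[3, 1], state=[1, 0, 0])
print(seq)  # balanced: 2^(m-1) = 4 ones, 3 zeros

# The two-valued periodic autocorrelation (N on-peak, -1 off-peak) is the
# property that keeps the cross-correlation between shifted codewords fixed
# and small, which is what suppresses MUI.
bipolar = [1 if b else -1 for b in seq]
corr = [sum(bipolar[i] * bipolar[(i + s) % 7] for i in range(7))
        for s in range(7)]
print(corr)  # [7, -1, -1, -1, -1, -1, -1]
```

In hardware, each codeword's spectral chips would be realized by FBG reflections rather than bits in software, but the correlation arithmetic the balanced photodiode performs is the same.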

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 350
351 The Relationship between Elderly People with Depression and Built Environment Factors

Authors: Hung-Chun Lin, Tzu-Yuan Chao

Abstract:

As population ageing has become an inevitable global trend, improving the well-being of elderly people in urban areas has become a challenging task for urban planners. Recent studies of the ageing trend have also expanded to explore the relationship between the built environment and the mental condition of elderly people. These studies have shown that even though the built environment may not play the decisive role in mental health, it can have positive impacts on individual mental health by promoting social linkages and social networks among older adults. A great deal of research has examined the impact of built environment attributes on depression in the elderly; however, most of it was conducted in Western countries. Little attention has been paid to how built environment attributes relate to depression in elderly people in Asian cities, such as those in Taiwan, with their comparatively high-density and mixed-use urban contexts. Hence, more empirical cross-disciplinary studies are needed to explore the possible impacts of Asian urban characteristics on older residents' mental condition. This paper focuses on Tainan City, the fourth biggest metropolis in Taiwan. We first analyze data from the National Health Insurance Research Database to pinpoint the empirical study area where the most elderly patients, aged over 65, with depressive disorders reside. Secondly, we explore the relationship between specific built environment attributes collected from previous studies and elderly individuals who suffer from depression, under different socio-cultural and networking circumstances. The research methods adopted in this study include a questionnaire and database analysis, and the results will be processed by correlation analysis.
In addition, through a literature review generalizing the built environment factors used in Western research to evaluate the relationship between the built environment and older individuals with depressive disorders, a set of local evaluative indicators of the built environment for future studies will be proposed. To move closer to age-friendly cities and improve the well-being of the elderly in Taiwan, the findings of this paper provide empirical results that can draw planners' attention to how the built environment affects the elderly and prompt a reconsideration of the relationship between them. Furthermore, as an interdisciplinary study, the results are expected to inform the procedures for drawing up urban and city plans from a different point of view.
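The correlation step described above can be sketched in a few lines; the data below are synthetic placeholders, not NHIRD records or survey responses, and the attribute shown is only one example of a built-environment indicator.

```python
import numpy as np

# Pearson correlation between a built-environment attribute (here, a
# hypothetical green-space provision) and a district-level depression
# indicator. All numbers are synthetic placeholders.

green_space = np.array([2.1, 3.4, 1.2, 4.8, 2.9, 0.8])       # m^2 per capita
depression_rate = np.array([8.1, 6.0, 9.5, 4.2, 6.8, 10.3])  # per 1,000 elderly

r = np.corrcoef(green_space, depression_rate)[0, 1]
print(f"Pearson r = {r:.2f}")  # strongly negative for this synthetic data
```

A real analysis would of course test many attributes, control for socio-demographic covariates, and report significance, but the basic association measure is the same.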

Keywords: built environment, depression, elderly, Tainan

Procedia PDF Downloads 96
350 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions

Authors: M. Tarik Boyraz, M. Bilge Imer

Abstract:

Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacturing of industrial gas turbine blades. With a carefully designed microstructure and the presence of alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate the mechanical properties of IN 738 LC alloy solely from simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (the size, fraction and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and the grain size) needs to be optimized by heat treatment to improve the high-temperature mechanical properties. This process can be performed at different soaking temperatures, soaking times and cooling rates. In this work, microstructural evolution studies were performed experimentally at various heat treatment conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature and cooling rate of the experimental heat treatment procedures served as microstructural simulation inputs. The simulation results were compared with the size, fraction and frequency of the γ′ and carbide phases and the grain size measured by SEM (EDS module and mapping), EPMA (WDS module) and optical microscopy before and after heat treatment. After iterative comparison of experimental findings and simulations, an offset was determined to fit the experimental and theoretical results. Thereby, it was possible to estimate the final microstructure without the necessity of carrying out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was then used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution and grain boundary strengthening contributions of the microstructure. 
Creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size and inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the best heat treatment conditions that achieve the desired microstructural and mechanical properties was thereby developed for IN 738 LC based entirely on simulations.
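The superposition of strengthening contributions described above can be sketched numerically. The functional forms used here (a Hall-Petch grain boundary term, a √(f·r) precipitation term) are standard textbook choices, and every coefficient is an illustrative placeholder, not a calibrated IN 738 LC value:

```python
import math

def yield_stress_MPa(grain_size_um, gp_fraction, gp_radius_nm,
                     sigma_lattice=150.0,  # lattice friction (placeholder)
                     delta_ss=120.0,       # solid-solution term (placeholder)
                     k_hp=700.0,           # Hall-Petch constant, MPa*um^0.5 (placeholder)
                     k_ppt=900.0):         # precipitation constant (placeholder)
    """Additive yield-stress model: lattice friction + solid solution
    + grain-boundary (Hall-Petch) + precipitation strengthening."""
    gb = k_hp / math.sqrt(grain_size_um)
    # weak-coupling precipitate cutting scales roughly with sqrt(f * r)
    ppt = k_ppt * math.sqrt(gp_fraction * gp_radius_nm / 100.0)
    return sigma_lattice + delta_ss + gb + ppt
```

Fitting such a model to the measured γ′ size and fraction before and after each heat treatment is what allows yield stress to be predicted from the simulated microstructure alone.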

Keywords: heat treatment, IN738LC, simulations, super-alloys

Procedia PDF Downloads 221
349 Hematologic Inflammatory Markers and Inflammation-Related Hepatokines in Pediatric Obesity

Authors: Mustafa Metin Donma, Orkide Donma

Abstract:

Obesity in children draws particular attention because it may threaten the individual’s future life through the many chronic diseases it can lead to. Most of these diseases, including obesity itself, are related to inflammation, so inflammation-related parameters gain importance. Within this context, complete blood cell counts and the ratios or indices derived from them have recently come into use as inflammatory markers. So far, mostly adipokines have been investigated within the field of obesity. The liver is at the center of the metabolic pathway network, and metabolic inflammation is closely associated with cellular dysfunction. In this study, hematologic inflammatory markers and two major hepatokines (cytokines produced predominantly by the liver), fibroblast growth factor-21 (FGF-21) and fetuin A, were investigated in pediatric obesity. Two groups were constituted from seventy-six obese children based on World Health Organization criteria. Group 1 was composed of children whose age- and sex-adjusted body mass index (BMI) percentiles were between 95 and 99; Group 2 consisted of children above the 99ᵗʰ percentile. The former group was defined as obese (OB) and the latter as morbid obese (MO). Anthropometric measurements of the children were performed. Informed consent forms and the approval of the institutional ethics committee were obtained. Blood cell counts and ratios were determined by an automated hematology analyzer, and the related ratios and indices were calculated. Statistical evaluation of the data was performed with the SPSS program. There was no statistically significant difference between the groups in terms of the neutrophil-to-lymphocyte ratio, the monocyte-to-high-density-lipoprotein-cholesterol ratio or the platelet-to-lymphocyte ratio. 
Mean platelet volume and platelet distribution width values were decreased (p<0.05), while total platelet count, red cell distribution width (RDW) and systemic immune inflammation index values were increased (p<0.01) in the MO group. Both hepatokines were increased in the same group; however, the increases were not statistically significant. In this group, a strong correlation was also found between FGF-21 and RDW when controlled for age, hematocrit, iron and ferritin (r=0.425; p<0.01). In conclusion, the association between RDW, a hematologic inflammatory marker, and FGF-21, an inflammation-related hepatokine, found in the MO group is an important finding discriminating between OB and MO children. This association is even more powerful when controlled for age and iron-related parameters.
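For readers unfamiliar with the hematologic markers compared here: they are simple ratios of routine complete-blood-count values. The formulas below are the commonly used definitions, and the sample numbers are hypothetical, not values from this study:

```python
def inflammatory_indices(neut, lymph, mono, plt, hdl_c):
    """Ratios/indices derived from a complete blood count.
    neut, lymph, mono, plt in 10^9 cells/L; hdl_c in mg/dL."""
    return {
        "NLR": neut / lymph,        # neutrophil-to-lymphocyte ratio
        "PLR": plt / lymph,         # platelet-to-lymphocyte ratio
        "MHR": mono / hdl_c,        # monocyte-to-HDL-cholesterol ratio
        "SII": plt * neut / lymph,  # systemic immune-inflammation index
    }

# hypothetical pediatric values for illustration only
idx = inflammatory_indices(neut=4.2, lymph=2.8, mono=0.6, plt=320, hdl_c=45)
```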

Keywords: childhood obesity, fetuin A, fibroblast growth factor-21, hematologic markers, red cell distribution width

Procedia PDF Downloads 167
348 The First Import of Yellow Fever Cases in China and Its Revealing Suggestions for the Control and Prevention of Imported Emerging Diseases

Authors: Chao Li, Lei Zhou, Ruiqi Ren, Dan Li, Yali Wang, Daxin Ni, Zijian Feng, Qun Li

Abstract:

Background: In 2016, yellow fever was discovered in China for the first time, soon after the yellow fever epidemic occurred in Angola. After the discovery, China promptly drew up a national protocol for control and prevention and strengthened surveillance of passengers and vectors. In this study, a descriptive analysis was conducted to summarize China’s experience in responding to this imported epidemic, in the hope of providing lessons for the prevention and control of yellow fever and similar imported infectious diseases in the future. Methods: The imported cases were discovered and reported by the General Administration of Quality Supervision, Inspection and Quarantine (AQSIQ) and several hospitals. Each clinically diagnosed yellow fever case was confirmed by real-time reverse transcriptase polymerase chain reaction (RT–PCR). The data on the imported yellow fever cases were collected by local Centers for Disease Control and Prevention (CDC) through field investigations soon after the reports were received. Results: A total of 11 cases imported from Angola were reported in China during Angola’s yellow fever outbreak. Six cases were discovered by the AQSIQ, among which two with mild symptoms declared themselves at the time of entry. Except for one death, the remaining 10 cases all recovered after timely and proper treatment. All cases were Chinese nationals who lived in Luanda, the capital of Angola. Eight of the eleven (73%) were retailers from Fuqing City in Fujian Province, and the other three were laborers sent by companies. Ten cases had sought medical treatment in Luanda after onset, among which 8 visited the same local Chinese medicine hospital (China Railway Four Bureau Hospital). Among the 11 cases, only one had an effective vaccination. Emergency surveillance of mosquito density found only 14 water containers positive around the residences of three cases, and the Breteau Index was 15. 
Conclusions: An effective response was mounted to control and prevent an outbreak of yellow fever in China after the imported cases were discovered. However, although the shared origin of the Chinese community in Angola provided easy access for disease detection, information sharing, health education and yellow fever vaccination, these conveniences were overlooked in previous disease prevention efforts. Moreover, the fact that only one case had an effective vaccination reveals the inadequate capacity of the immunization service in China. These findings will provide suggestions to improve China’s capacity to deal not only with yellow fever but also with other similar imported diseases.
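The Breteau Index cited in the results is a standard Aedes surveillance measure: the number of larva-positive containers per 100 houses inspected. A minimal sketch follows; the house count below is hypothetical, since the abstract reports only the 14 positive containers and the resulting index of 15:

```python
def breteau_index(positive_containers, houses_inspected):
    """Breteau Index: larva-positive containers per 100 houses inspected."""
    return 100.0 * positive_containers / houses_inspected

# hypothetical denominator for illustration (not reported in the abstract)
bi = breteau_index(positive_containers=14, houses_inspected=100)
```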

Keywords: yellow fever, first import, China, suggestion

Procedia PDF Downloads 165
347 Improving Alkaline Water Electrolysis by Using an Asymmetrical Electrode Cell Design

Authors: Gabriel Wosiak, Felipe Staciaki, Eryka Nobrega, Ernesto Pereira

Abstract:

Hydrogen is an energy carrier with potential applications in various industries. Alkaline electrolysis is a commonly used method for hydrogen production; however, its energy cost remains relatively high compared to other methods. This is due in part to the interfacial pH changes that occur during electrolysis. Interfacial pH changes refer to changes in pH at the interface between an electrode and the electrolyte solution. They are caused by the electrochemical reactions at both electrodes, which consume or produce hydroxide ions (OH⁻) from the electrolyte solution. The result is a significant change in the local pH at the electrode surface, which impacts both the energy consumption and the durability of electrolysers. One impact of interfacial pH changes is an increase in the overpotential required for hydrogen production. Overpotential is the difference between the theoretical potential required for a reaction to occur and the actual potential that must be applied to the electrodes. In water electrolysis, the overpotential is caused by a number of factors, including the mass transport of reactants and products to and from the electrodes, the kinetics of the electrochemical reactions, and the interfacial pH. A decrease in the interfacial pH at the anode surface under alkaline conditions increases the overpotential for hydrogen production, because the lower local pH makes it more difficult for the hydroxide ions to be oxidized. As a result, more energy is required for the process to occur. In addition to increasing the overpotential, interfacial pH changes can also degrade the electrodes, because the lower pH can make the electrode more susceptible to corrosion. The electrodes may then need to be replaced more frequently, which increases the overall cost of water electrolysis. 
The method presented in the paper addresses the issue of interfacial pH changes through a modified cell design that introduces electrode asymmetry. This design helps to mitigate the pH gradient at the anode/electrolyte interface, which reduces the overpotential and improves the energy efficiency of the electrolyser. The method was tested using a multivariate approach under both laboratory and industrial current density conditions, and the results were validated with numerical simulations. The results demonstrated a clear improvement (11.6%) in energy efficiency, an important contribution to the field of sustainable energy production. The findings have important implications for the development of cost-effective and sustainable hydrogen production methods: by mitigating interfacial pH changes, it is possible to improve the energy efficiency of alkaline electrolysis and make it a more competitive option for hydrogen production.
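The scale of the energy penalty from an interfacial pH excursion can be illustrated with the Nernst relation: each unit of local pH shift moves the equilibrium electrode potential by about 59 mV at room temperature. A small sketch, where the 2-unit pH excursion is a hypothetical example rather than a value from the paper:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol

def nernst_shift_mV(delta_pH, T=298.15):
    """Equilibrium-potential shift (mV) caused by a local pH change,
    2.303*R*T/F per pH unit (~59 mV at 25 C)."""
    return 1000.0 * math.log(10) * R * T / F * delta_pH

shift = nernst_shift_mV(2.0)  # hypothetical 2-unit interfacial pH drop
```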

Keywords: electrolyser, interfacial pH, numerical simulation, optimization, asymmetric cell

Procedia PDF Downloads 41
346 Challenges Encountered by Small Business Owners in Building Their Social Media Marketing Competency

Authors: Nilay Balkan

Abstract:

Introductory statement: The purpose of this study is to understand how small business owners develop social media marketing competency, the challenges they encounter in doing so, and the social media training needs of such businesses. These challenges affect the extent to which small business owners build effective social media knowledge and, in turn, their ability to implement effective social media marketing in their business practices. As a result, small businesses cannot fully benefit from social media, for example for customer relationship management or for building brand image, both of which would support their overall operations. This research is part one of a two-phased study. The first phase aims to establish the challenges small business owners face in building social media marketing competency and their specific training needs; phase two will then focus in more depth on the barriers and challenges emerging from phase one. Summary of methodology: Interviews were conducted with ten small business owners from various sectors, including fitness, tourism, and food and drink, all located in the central belt of Scotland, the area with the highest population and business density in the country. The interviews were in-depth and semi-structured, designed to be investigative and to understand the phenomena through the lived experience of the small business owners. Purposive sampling was used: small business owners fulfilling certain criteria were approached to take part. Key findings: The study found four ways in which small business owners develop their social media competency (informal methods, formal methods, learning through a network, and experimenting) and identified the challenges they face with each of these methods. 
Further, the study established four barriers impacting the development of social media marketing competency among the interviewed small business owners, and preliminary support needs emerged in the process. Concluding statement: The contribution of this study is to understand the challenges small business owners face when learning how to use social media for business purposes and to identify their training needs. This understanding can inform the development of specific and tailored support, and such tailored training can help small businesses build competency and progress to the next stage of their development, whether that is furthering their digital transformation or growing the business. The insights from this study can be used to support business competitiveness and help small businesses become more resilient. Moreover, since small businesses and entrepreneurs share characteristics such as limited resources and conflicting priorities, the findings may also support entrepreneurs in their social media marketing strategies.

Keywords: small business, marketing theory and applications, social media marketing, strategic management, digital competency, digitalisation, marketing research and strategy, entrepreneurship

Procedia PDF Downloads 60
345 System Analysis on Compact Heat Storage in the Built Environment

Authors: Wilko Planje, Remco Pollé, Frank van Buuren

Abstract:

An increased share of renewable energy sources in the built environment implies the use of energy buffers to match supply and demand and to prevent overloading existing grids. Compact heat storage systems based on thermochemical materials (TCM) are promising candidates for future installations as an alternative to regular thermal buffers, owing to their high energy density (1–2 GJ/m³). To determine the feasibility of TCM-based systems at the building level, several installation configurations are simulated and analyzed for different mixes of renewable energy sources (solar thermal, PV, wind, underground, air) for apartment and multi-storey buildings in the Dutch situation, and the capacity, volume and financial costs are calculated. The simulation includes options for current and future wind power (sea and land) and local roof-mounted PV or solar thermal systems. The compact thermal buffer and, optionally, an electric battery (typically 10 kWhe) form the local storage elements for energy matching and peak-shaving purposes. In addition, electrically driven heat pumps (air or ground source) can be included for efficient heat generation in power-to-heat operation. The total local installation provides space heating, domestic hot water and electricity for a specific case of low-energy apartments (annually 9 GJth + 8 GJe) in the year 2025; the energy balance is completed with grid-supplied non-renewable electricity. Taking into account the grid capacity (a permanent 1 kWe per household), the spatial requirements for the thermal buffer (< 2.5 m³ per household) and a desired minimum 90% share of renewable energy in each household’s total consumption, the wind-powered scenario results in acceptable sizes of compact thermal buffers, with an energy capacity of 4–5 GJth per household. This buffer is combined with a 10 kWhe battery and an air source heat pump system. 
Compact thermal buffers of less than 1 GJ (typically 0.5–1 m³) become possible when the installed wind power is increased five-fold. With a 15-fold increase in installed wind power, compact heat storage devices compete with 1,000 L water buffers. The conclusion is that compact heat storage systems can be of interest in the coming decades in combination with well-retrofitted, low-energy residences, based on current trends in installed renewable energy power.
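The buffer volumes quoted above follow directly from the TCM energy density: volume = required capacity / energy density. A quick check against the numbers in the abstract:

```python
def buffer_volume_m3(capacity_GJ, energy_density_GJ_per_m3):
    """Volume of a thermochemical buffer for a given thermal capacity."""
    return capacity_GJ / energy_density_GJ_per_m3

# 4-5 GJth per household at the upper TCM density bound of 2 GJ/m^3
low = buffer_volume_m3(4.0, 2.0)   # smallest stated buffer
high = buffer_volume_m3(5.0, 2.0)  # largest stated buffer
```

Both results sit within the stated < 2.5 m³ per household limit, which shows why the scenario relies on the upper end of the 1–2 GJ/m³ density range.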

Keywords: compact thermal storage, thermochemical material, built environment, renewable energy

Procedia PDF Downloads 216
344 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis is an important task in many areas of work. In any industry, it is crucial for maintenance, efficiency, safety and monetary costs. There are established ways to calculate reliability, unreliability, failure density and failure rate. This paper introduces another way of calculating reliability, using the R statistical software. R is a free software environment for statistical computing and graphics that compiles and runs on a wide variety of UNIX platforms, as well as Windows and macOS. The R programming environment is a widely used open-source system for statistical analysis and statistical programming. It includes thousands of functions implementing both standard and new statistical methods, and it does not limit the user to these functions alone. The program has several benefits over similar programs: it is free and, as open source, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of this work is the calculation of time to failure, or reliability, in a new way, using statistics. A further advantage of this calculation is that no technical details are needed: it can be applied to any component whose time to failure we need to know in order to schedule appropriate maintenance, maximize usage and minimize costs. In this case, the calculations were made for diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with higher-quality fans to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure, or until the end of the study (whichever came first), was recorded. The dataset consists of two variables: hours and status. 
Hours records how long each fan ran, and status records the event: 1 for failed, 0 for censored. Censored data represent cases that could not be followed to the end, so the fan might have failed or survived after observation stopped. Obtaining the result with R was easy and quick, and the program takes the censored data into account in the results, which is not so easy in a hand calculation. For the purpose of the paper, the results from the R program were compared to hand calculations in two variants: censored data treated as failures, and censored data treated as successes. In all three cases, the results differ significantly. If the user chooses R for further calculations, its handling of censored data will give more precise results than the hand calculation.
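In R itself, this analysis is typically done with survfit() from the survival package; the estimator it implements (Kaplan-Meier) can be sketched in plain Python to show exactly how censored observations enter the calculation. The fan data below are hypothetical, not the seventy-generator dataset:

```python
from itertools import groupby

def kaplan_meier(hours, status):
    """Kaplan-Meier reliability estimate with right-censoring.
    status: 1 = fan failed at that time, 0 = censored (study ended first)."""
    data = sorted(zip(hours, status))
    at_risk = len(data)
    surv, curve = 1.0, []
    for t, grp in groupby(data, key=lambda p: p[0]):
        grp = list(grp)
        failures = sum(s for _, s in grp)
        if failures:  # the survival curve only steps down at observed failures
            surv *= (at_risk - failures) / at_risk
            curve.append((t, surv))
        at_risk -= len(grp)  # censored units leave the risk set silently
    return curve

# hypothetical running hours: (hours, 1 = failed / 0 = censored)
fans = [(450, 1), (460, 0), (1150, 1), (1150, 0), (1600, 0)]
curve = kaplan_meier([h for h, _ in fans], [s for _, s in fans])
```

Note how the censored fan at 460 h still shrinks the risk set for the later failure at 1150 h; dropping it, or counting it as a failure, is exactly what distorts the two hand-calculation variants.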

Keywords: censored data, R statistical software, reliability analysis, time to failure

Procedia PDF Downloads 376
343 Survival of Byzantine Heritage in Gerace, Calabria

Authors: Marcus Papandrea

Abstract:

Gerace survives as one of the best examples of unspoiled Byzantine heritage in Calabria, and indeed the world, thanks to its strategic location. As the last western province of the Byzantine Empire, Calabria was not subject to the destruction or conversion of sites carried out by the Ottomans in the east or the Arabs in Sicily and North Africa. Situated ten kilometers inland atop a 500 m high table mountain, Gerace overlooks the Ionian coast and is a gateway to the rugged, wild mountain interior of the Calabrian peninsula. It is connected to the outside world only by a single winding and crumbling road and, unfortunately, faces serious economic and demographic decline. Largely because of this isolation, and despite its wealth and high density of Byzantine monuments, Gerace has remained understudied and under-recognized in a country that boasts the most UNESCO sites in the world. In 1995, the Patriarch of the Eastern Orthodox Church, Bartholomew I, visited Gerace. He re-opened and blessed the ancient Byzantine church of San Giovanni Crisostomo, reviving Gerace’s cultural origins and links to Byzantium. 
This paper examines how these links have persisted over a millennium, from the community’s humble origins as a refuge for ascetic monks to its emergence as the “city of one hundred churches.” While little is documented about Gerace’s early history, this paper employs archaeological findings as well as hagiography to present valuable insight into this area, which became known as the “land of the saints.” By characterizing Gerace’s early Byzantine society and helping to understand its strong spiritual roots, this paper creates the basis necessary to understand the endurance of its Byzantine legacy and to appreciate its important cultural contributions to the Italian Renaissance as a hub of Greek literacy that attracted great humanists of the fourteenth and fifteenth centuries such as Barlaam of Seminara, Simone Autumano, Bessarion, and Athanasio Chalkeolopus. In bringing together these characters, this paper propels Gerace onto the world stage as an important cultural center in medieval Mediterranean history, one which facilitated cross-cultural interactions between Byzantine Greeks, Sicilian Arabs, Jews, and Normans. From this intersection developed a syncretism which led to modern-day Calabrian identity, culture and society, and which is perhaps most visible in some of Gerace’s last surviving monuments from this time. While emphasizing this unassuming town’s cultural importance and unique Byzantine heritage, this paper also highlights the criteria Gerace fulfills for inclusion in the World Heritage List.

Keywords: byzantine rite, greek rite, italo-greek, latinization

Procedia PDF Downloads 73
342 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization

Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford

Abstract:

The Urban Heat Island (UHI) effect is characterized by increased air temperatures in urban areas compared to the undeveloped rural surroundings. With urbanization and densification, UHI intensity increases, with negative impacts on livability, health and the economy. To reduce those effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the area available for development, non-trivial decisions are required regarding the buildings’ dimensions and their spatial distribution. We develop a framework for the optimization of urban design that jointly minimizes UHI intensity and the buildings’ energy consumption. First, the design constraints are defined according to spatial and population limits, to establish realistic boundaries applicable to real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate the UHI intensity and total building energy consumption outputs, respectively. These outputs change with a set of input variables describing urban morphology, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast in which a utility function quantifies the performance of each design candidate (e.g., minimizing a linear combination of UHI intensity and energy consumption) and a set of constraints must be met. Solving this optimization problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use direct optimization techniques and instead develop an indirect “black box” optimization algorithm. To this end, we develop a solution based on a stochastic optimization method known as the Cross-Entropy method (CEM). 
The CEM translates the deterministic optimization problem into an associated stochastic optimization problem which is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast population growth, a growing built area and the land made available by reclamation, urban planning decisions are of the utmost importance for the country; furthermore, its hot and humid climate raises concern about the impact of UHI. The problem presented is highly relevant to early urban design stages, and the objective of the framework is to guide decision makers and assist them in including and evaluating urban microclimate and energy aspects in the urban planning process.
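The CEM loop itself is simple: sample candidate designs from a parametric distribution, keep the elite fraction under the black-box utility, and refit the distribution to the elite. A minimal sketch with a toy quadratic standing in for the UWG/EnergyPlus utility; all parameter values are illustrative, not those of the paper:

```python
import random
import statistics

def cem_minimize(f, dim, n_samples=200, n_elite=20, iters=40, seed=1):
    """Cross-Entropy method: iteratively refit a Gaussian to the elite samples."""
    rng = random.Random(seed)
    mu, sigma = [0.0] * dim, [2.0] * dim
    for _ in range(iters):
        samples = [[rng.gauss(m, s) for m, s in zip(mu, sigma)]
                   for _ in range(n_samples)]
        samples.sort(key=f)            # one black-box evaluation per candidate
        elite = samples[:n_elite]      # keep the best fraction
        mu = [statistics.mean(e[d] for e in elite) for d in range(dim)]
        sigma = [statistics.pstdev([e[d] for e in elite]) + 1e-9
                 for d in range(dim)]  # distribution concentrates over iterations
    return mu

# toy utility standing in for the UWG/EnergyPlus simulation chain
best = cem_minimize(lambda x: (x[0] - 1.5) ** 2 + (x[1] + 0.5) ** 2, dim=2)
```

In the paper's setting, each call to f would run the UWG and EnergyPlus models for one candidate urban morphology, which is why only sample-based (rather than gradient-based) optimization is feasible.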

Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator

Procedia PDF Downloads 108
341 Standardized Testing of Filter Systems regarding Their Separation Efficiency in Terms of Allergenic Particles and Airborne Germs

Authors: Johannes Mertl

Abstract:

The air around us contains various particles. Besides typical inorganic dust such as soot and ash, particles originating from animals, microorganisms or plants also float through the air: so-called bioaerosols. Bioaerosols comprise a broad spectrum of particles of different sizes, including fungi, bacteria, viruses, spores, and the tree, flower and grass pollen that is of high relevance for allergy sufferers. Depending on the environmental climate and the season, these allergenic particles can be found in enormous numbers in the air and are inhaled by humans via the respiratory tract, with the potential to cause inflammatory diseases of the airways such as asthma or allergic rhinitis. As a consequence, the air filter systems of ventilation and air conditioning devices must meet very high standards to prevent, or at least reduce, the number of allergens and airborne germs entering indoor air. Yet filter systems are currently classified for their separation rates using well-defined mineral test dusts, while no appropriately standardized test methods for bioaerosols exist. Separation rates determined for mineral test particles of a certain size cannot simply be transferred to bioaerosols, as the separation efficiency for particularly fine and respirable particles (< 10 microns) depends not only on their shape and diameter but also on their density and physicochemical properties. For this reason, the OFI developed a test method which enables filters and filter media to be tested directly for their separation rates on bioaerosols, as well as a classification of filters. Besides allergens from intact or fragmented tree or grass pollen, allergenic proteins bound to particulates, allergenic fungal spores (e.g., Cladosporium cladosporioides) or bacteria can be used to classify filters by their separation rates. 
Allergens passing through the filter can then be detected by highly sensitive immunological assays (ELISA) or, in the case of fungal spores, by microbiological methods, which allow the detection of even a single spore passing the filter. The laboratory-scale test procedure was furthermore validated against real-life situations by upscaling to air conditioning devices, which showed close agreement in separation rates. Additionally, a clinical study with allergy sufferers was performed to verify the analytical results. Several different air conditioning filters from the car industry have been tested, showing significant differences in their separation rates.

Keywords: airborne germs, allergens, classification of filters, fine dust

Procedia PDF Downloads 224
340 Restoring Ecosystem Balance in Arid Regions: A Case Study of a Royal Nature Reserve in the Kingdom of Saudi Arabia

Authors: Talal Alharigi, Kawther Alshlash, Mariska Weijerman

Abstract:

The government of Saudi Arabia has developed an ambitious “Vision 2030”, which includes a Green Initiative (the planting of 10 billion trees) and the establishment of seven Royal Reserves as protected areas comprising 13% of the total land area. The main objective of the reserves is to restore ecosystem balance and reconnect people with nature. Two of the royal reserves, the Imam Abdulaziz bin Mohammed Royal Reserve and the King Khalid Royal Reserve, are managed by the Imam Abdulaziz bin Mohammed Royal Reserve Development Authority. The authority has developed a management plan to enhance the habitat through seed dispersal and the planting of 10 million trees, and to restock wildlife that was once abundant in these arid ecosystems (e.g., oryx, Nubian ibex, gazelles, red-necked ostrich). Expectations are that with the restoration of the native vegetation, soil condition and natural hydrologic processes will improve and lead to further enhancement of the vegetation and, over time, an increase in the biodiversity of flora and fauna. To evaluate how well the management strategies meet these expectations, a comprehensive monitoring and evaluation program was developed. Its main objectives are to (1) monitor the status and trends of indicator species, (2) improve desert ecosystem understanding, (3) assess the effects of human activities, and (4) provide science-based management recommendations. Using a stratified random survey design, a diverse suite of survey methods will be implemented, including belt and quadrat transects, camera traps, GPS tracking devices, and drones. Data will be gathered on biotic parameters (plant and animal diversity, density, and distribution) and abiotic parameters (humidity, temperature, precipitation, wind, air and soil quality, vibrations, and noise levels) to meet the goals of the monitoring program. 
This case study provides a detailed overview of the management plan and monitoring program of the two royal reserves and outlines the types of data gathered, which can be made available for future research projects.

Keywords: camera traps, desert ecosystem, enhancement, GPS tracking, management evaluation, monitoring, planting, restocking, restoration

Procedia PDF Downloads 85
339 Static Charge Control Plan for High-Density Electronics Centers

Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda

Abstract:

Ensuring a safe environment for sensitive electronics boards in places with severe size limitations poses two major difficulties: the control of charge accumulation in floating floors and the prevention of excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions to prevent them. An experiment was made in the control room of a Cherenkov telescope, where six racks of 2x1x1 m size with independent cooling units are located. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate cable handling and maintenance. The tests were made by measuring the contact voltage acquired by a person walking along the room wearing footwear of different qualities. In addition, we took measurements of the voltage accumulated on a person in other situations, such as running or repeatedly sitting down on and standing up from an office chair. The voltages were recorded in real time with an electrostatic voltmeter and dedicated control software. Peak voltages as high as 5 kV were measured at ambient humidity above 30%, which falls within class 3A of the HBM standard. To complete the results, we repeated the experiment in different spaces with alternative floor types, such as synthetic and earthenware floors, obtaining peak voltages much lower than those measured on the floating synthetic floor. The grounding quality achieved with this kind of floor can hardly beat that typically obtained with standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent overheating of the boards probably contributed significantly to the charge accumulated in the room.
During the assessment of the quality of static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is the fact that electrostatic voltmeters might provide different values depending on the humidity conditions and the quality of the ground resistance. In addition, the use of certified antistatic footwear might mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe that this can be helpful not only to qualify the static charge control in a laboratory but also to assess any procedure oriented to minimize the risk of electrostatic discharge events.
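The HBM classification invoked above can be illustrated with a short sketch. The class boundaries and the 1.5 kOhm body resistance below follow the common HBM component-classification convention, not anything stated in the paper, so treat the exact thresholds as assumptions:

```python
# Hedged sketch: map a measured peak contact voltage to an HBM withstand
# class (boundary values assumed from the usual HBM classification
# convention) and estimate the peak discharge current through the
# standard 1.5 kOhm human-body resistance.

HBM_CLASSES = [  # (upper voltage bound in volts, class label)
    (125, "0A"), (250, "0B"), (500, "1A"), (1000, "1B"),
    (2000, "1C"), (4000, "2"), (8000, "3A"), (float("inf"), "3B"),
]

R_BODY_OHMS = 1500.0  # standard HBM discharge resistance

def hbm_class(peak_voltage: float) -> str:
    """Return the HBM class whose voltage range contains peak_voltage."""
    for upper, label in HBM_CLASSES:
        if peak_voltage <= upper:
            return label
    return "3B"

def peak_current_amps(peak_voltage: float) -> float:
    """Peak HBM discharge current assuming the 1.5 kOhm body model."""
    return peak_voltage / R_BODY_OHMS

if __name__ == "__main__":
    v = 5000.0  # the 5 kV peak reported in the experiment
    print(hbm_class(v))
    print(round(peak_current_amps(v), 2))
```

On this convention, the 5 kV peak reported above falls in the 4-8 kV band (class 3A) and corresponds to a peak discharge current of roughly 3.3 A.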

Keywords: electrostatics, ESD protocols, HBM, static charge control

Procedia PDF Downloads 104
338 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and to support flood modelling and prediction. One study showed that, even when using lumped models for flood forecasting, a proper gauge network can significantly improve the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. In this study, the well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November – February) for the period 1975–2008. Three semivariogram models (spherical, Gaussian and exponential) were used, and their performances were compared. Cross-validation was applied to compute the errors, and the results showed that the exponential model is the best-fitting semivariogram. The proposed method yielded an optimal network of 64 rain gauges with the minimum estimated variance; 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's ability to provide quality data. Therefore, two different cases were considered in this study.
In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the calculated estimated variance; the second case explored relocating all 84 existing stations to new locations to determine the optimal configuration. The relocations in both cases showed that the new optimal locations reduced the estimated variance, proving that location plays an important role in determining the optimal network.
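The pairing of an objective function with simulated annealing described above can be sketched as follows. This toy substitutes a simple distance-based proxy for the kriging estimated variance and uses random synthetic coordinates rather than the Johor network, so it only illustrates the search mechanics, not the authors' geostatistical objective:

```python
# Hedged sketch (not the authors' implementation): simulated annealing to
# select a subset of gauge sites. The objective here is a stand-in proxy --
# the mean squared distance from grid points to their nearest selected
# gauge -- in place of the full variance-reduction kriging objective.
import math
import random

def mean_nearest_sq_dist(grid, gauges):
    """Proxy objective: average squared distance to the closest gauge."""
    total = 0.0
    for gx, gy in grid:
        total += min((gx - sx) ** 2 + (gy - sy) ** 2 for sx, sy in gauges)
    return total / len(grid)

def anneal_subset(candidates, k, grid, steps=2000, t0=1.0, seed=0):
    """Pick k of the candidate sites via simulated annealing."""
    rng = random.Random(seed)
    current = rng.sample(candidates, k)
    cost = mean_nearest_sq_dist(grid, current)
    best, best_cost = list(current), cost
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-9  # linear cooling schedule
        # Propose swapping one selected site for an unselected one.
        out_idx = rng.randrange(k)
        unused = [c for c in candidates if c not in current]
        proposal = list(current)
        proposal[out_idx] = rng.choice(unused)
        new_cost = mean_nearest_sq_dist(grid, proposal)
        # Accept improvements always, worse moves with Boltzmann probability.
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / temp):
            current, cost = proposal, new_cost
            if cost < best_cost:
                best, best_cost = list(current), cost
    return best, best_cost

if __name__ == "__main__":
    rng = random.Random(42)
    sites = [(rng.random(), rng.random()) for _ in range(84)]  # 84 candidates
    grid = [(x / 10.0, y / 10.0) for x in range(10) for y in range(10)]
    chosen, cost = anneal_subset(sites, 64, grid)  # keep 64, as in the study
    print(len(chosen), round(cost, 4))
```

In the real problem, `mean_nearest_sq_dist` would be replaced by the kriging estimation variance computed from the fitted exponential semivariogram.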

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 273
337 Polar Nanoregions in Lead-Free Relaxor Ceramics: Unveiling through Impedance Spectroscopy

Authors: Mohammed Mesrar, Hamza El Malki, Hamza Mesrar

Abstract:

In this study, ceramics of (1-x)(Na0.5Bi0.5)TiO3–x(K0.5Bi0.5)TiO3 were synthesized through a conventional calcination process (solid-state method) at 1000°C for 4 hours, with x(%) values ranging from 0.0 to 100. Room-temperature XRD patterns confirmed the phase formation of the samples. The Rietveld refinement method was employed to verify the morphotropic phase boundary (MPB) at x(%) = 16-20. We investigated the average crystallite size and lattice strain using Scherrer's formula and Williamson-Hall (W-H) analysis. SEM image analyses provided additional evidence of the impact of doping on structural growth at low temperatures. Relaxation times extracted from Z″(f) and M″(f) spectra for x(%) = 0.0, 12, 16, 20, and 30 followed the Arrhenius law, revealing the presence of three distinct relaxation mechanisms with varying activation energies. The shoulder response in M″(f) indirectly indicated the existence of highly polarizable entities in the samples, serving as a signature of polar nanoregions (PNRs) within the grains.
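The Arrhenius analysis mentioned above, tau(T) = tau0·exp(Ea/(kB·T)), reduces to a straight-line fit of ln(tau) against 1/T, whose slope gives the activation energy. A minimal sketch with synthetic relaxation times (not the measured Z″(f)/M″(f) data):

```python
# Hedged sketch: extract an activation energy from relaxation times via the
# Arrhenius law, tau(T) = tau0 * exp(Ea / (kB * T)). The data below are
# synthetic, generated from known parameters, not the paper's spectra.
import math

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_fit(temps_k, taus_s):
    """Least-squares line of ln(tau) vs 1/T; returns (Ea in eV, tau0 in s)."""
    xs = [1.0 / t for t in temps_k]
    ys = [math.log(tau) for tau in taus_s]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar
    return slope * KB_EV, math.exp(intercept)  # Ea = slope * kB

if __name__ == "__main__":
    # Synthetic relaxation times generated with Ea = 0.40 eV, tau0 = 1e-12 s.
    temps = [400.0, 450.0, 500.0, 550.0]
    taus = [1e-12 * math.exp(0.40 / (KB_EV * t)) for t in temps]
    ea, tau0 = arrhenius_fit(temps, taus)
    print(round(ea, 3), tau0)  # recovers the generating Ea of ~0.40 eV
```

In practice, tau at each temperature is read off as 1/(2·pi·f_peak) from the Z″(f) or M″(f) peak frequency before fitting.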

Keywords: (1-x)(Na0.5Bi0.5)TiO3 x(K0.5 Bi0.5)TiO3, Rietveld refinement, Scanning electron microscopy (SEM), Williamson-Hall plots, charge density distribution, dielectric properties

Procedia PDF Downloads 23
336 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis

Authors: Danni Cheng

Abstract:

Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell cancer (HNSCC) survival, but the evidence for causality was inconsistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients stratified by human papillomavirus (HPV) status using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipid traits (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A, and apolipoprotein B) were also derived from the GLGC data. Genetic associations of these SNPs with cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) patients and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse-variance weighted hazard ratio (HR IVW), 2.64 [95% CI, 1.28-5.43]; P = 0.01) but better OS in HPV-positive OPSCC patients (HR IVW, 0.11 [95% CI, 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS in both total HNSCC (HR IVW, 4.17 [95% CI, 1.06-16.36]; P = 0.04) and non-HPV-driven HNSCC patients (HR IVW, 7.33 [95% CI, 1.63-32.97]; P = 0.01).
Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW, 1.56 [95% CI, 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition were associated with worse OS in total and non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
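The inverse-variance weighted estimator named above combines per-SNP Wald ratios into one pooled effect. A minimal fixed-effect sketch; the betas and standard errors below are invented for illustration, not GLGC/VOYAGER summary statistics:

```python
# Hedged sketch: fixed-effect inverse-variance weighted (IVW) Mendelian
# randomization estimate from per-SNP Wald ratios. Inputs are illustrative
# numbers only.
import math

def ivw_estimate(beta_exp, beta_out, se_out):
    """Combine per-SNP Wald ratios with inverse-variance weights.

    beta_exp: SNP effects on the exposure (e.g., LDL-C).
    beta_out: SNP effects on the outcome (e.g., log hazard of death).
    se_out:   standard errors of beta_out.
    Returns (pooled log-HR, its standard error).
    """
    ratios = [bo / be for be, bo in zip(beta_exp, beta_out)]     # Wald ratios
    ses = [so / abs(be) for be, so in zip(beta_exp, se_out)]     # 1st-order SEs
    weights = [1.0 / s ** 2 for s in ses]
    pooled = sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

if __name__ == "__main__":
    beta_exp = [0.10, 0.08, 0.12]   # hypothetical SNP -> LDL-C effects
    beta_out = [0.05, 0.03, 0.07]   # hypothetical SNP -> outcome (log-HR)
    se_out = [0.02, 0.02, 0.03]
    log_hr, se = ivw_estimate(beta_exp, beta_out, se_out)
    print(round(math.exp(log_hr), 2))  # pooled hazard ratio per unit exposure
```

Exponentiating the pooled log-HR gives a hazard ratio on the same scale as the HR IVW values reported in the abstract.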

Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin

Procedia PDF Downloads 71
335 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models.
In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.
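The core building block shared by the FNO variants compared above is the spectral convolution: transform the field to Fourier space, retain a truncated set of low-frequency modes, multiply by learned complex weights, and transform back. A 1D NumPy toy of that single operation (the authors' IU-FNO is 3D and adds implicit recurrence and a U-Net path on top):

```python
# Hedged sketch of the FNO spectral convolution in 1D: FFT, truncate to
# the lowest n_modes, multiply by (here untrained) complex weights, inverse
# FFT. A toy illustration, not the authors' IU-FNO code.
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """One Fourier layer: mix only the lowest n_modes FFT modes.

    u:       real field sampled on a uniform periodic grid, shape (n,).
    weights: complex array of shape (n_modes,), the learned filter.
    """
    u_hat = np.fft.rfft(u)                          # forward transform
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = u_hat[:n_modes] * weights   # truncation + mode mixing
    return np.fft.irfft(out_hat, n=u.shape[0])      # back to physical space

if __name__ == "__main__":
    n, n_modes = 64, 8
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    u = np.sin(x) + 0.3 * np.sin(5 * x) + 0.1 * np.sin(20 * x)
    w = np.ones(n_modes, dtype=complex)   # identity-like filter for the demo
    v = spectral_conv_1d(u, w, n_modes)
    # The sin(20x) component sits above the kept modes and is filtered out.
    print(v.shape)
```

The mode truncation is what makes the operator resolution-independent and cheap; in a trained FNO the `weights` are learned parameters, one complex matrix per retained mode.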

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 43
334 Zoledronic Acid with Neoadjuvant Chemotherapy in Advanced Breast Cancer Prospective Study 2011–2014

Authors: S. Sakhri

Abstract:

Background: Zoledronic acid (ZA) has an established place in the treatment of malignant tumors with a predilection for the skeleton (in particular, bone metastasis). Although the main target of zoledronic acid is the osteoclast, preclinical data suggest that it may also have an antitumor effect on cells other than osteoclasts, including tumor cells. Antitumor activity has been demonstrated, including inhibition of tumor cell growth, induction of tumor cell apoptosis, inhibition of tumor cell adhesion and invasion, and anti-angiogenic effects. Methods: From 2012 to 2014, 438 patients meeting the inclusion criteria were enrolled in this prospective study conducted over a 4-year period. Of all patients (N=438), 432 received neoadjuvant chemotherapy with zoledronic acid. The primary end point was the pathologic complete response (pCR) in advanced-stage breast cancer. The secondary end points were clinical response according to RECIST criteria, bone density before and at the end of chemotherapy in women with locally advanced breast cancer, toxicity, and overall survival estimated with the Kaplan-Meier method and the log-rank test. Results: The objective response rate was 97% after cycle 4 (C4), with 3% stabilization, and 99.3% after cycle 8 (C8), with 0.7% stabilization. The clinical complete response was 28% after C4 and 46.8% after C8; the pathologic complete response rate was 40.13% according to the Sataloff classification. We observed that the pCR rate was highest in the HER2 group (luminal HER2 and HER2) and lowest in the triple-negative group as classified by Sataloff. The pCR was significantly higher in the 35-50 years age group, at 53.17%.
Patients over 50 years ranked second, at 27.7%, and the rate was lowest in young women under 35 years, with a pCR of 19%, although this was not statistically significant. The pCR also favored the postmenopausal group, at 51.4%, versus 48.55% for non-menopausal women. The average overall survival was also significantly longer in the (luminal HER2, HER2) subgroup compared with the triple-negative group: 47.18 months in the luminal group vs. 38.95 months in the triple-negative group. We also observed a difference in quality of life between C1, at the patient's admission, and after C8: at C1 we found an increase in general signs and a deterioration in the psychological state, whereas after C8 these general signs and the mental status improved, up to 12 and 24 months. Conclusion: The results of this study suggest that the addition of ZA to neoadjuvant chemotherapy has a potential anti-cancer benefit in (luminal HER2, HER2) patients compared with triple-negative patients, with or without menopause.
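The Kaplan-Meier estimator named among the secondary end points can be sketched in a few lines. The follow-up times below are illustrative, not the study's data:

```python
# Hedged sketch: Kaplan-Meier product-limit survival estimator on made-up
# follow-up data (months, event flag). Censored subjects at a tied time are
# counted as at risk at that time, the usual convention.
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each death time; events: 1 = death, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        ties = sum(1 for tt, e in data if tt == t)
        if deaths > 0:
            surv *= 1.0 - deaths / n_at_risk   # product-limit update
            curve.append((t, surv))
        n_at_risk -= ties                      # drop everyone leaving at t
        i += ties
    return curve

if __name__ == "__main__":
    months = [6, 12, 12, 18, 24, 30, 36, 40]   # illustrative follow-up
    dead =   [1,  1,  0,  1,  0,  1,  0,  0]   # 1 = death, 0 = censored
    for t, s in kaplan_meier(months, dead):
        print(t, round(s, 3))
```

Comparing two such curves (e.g., HER2-positive vs. triple-negative subgroups) is then done with the log-rank test mentioned in the abstract.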

Keywords: HER2+, HR+, breast cancer, tyrosine kinase

Procedia PDF Downloads 189
333 Understanding the Reasons for Flooding in Chennai and Strategies for Making It Flood Resilient

Authors: Nivedhitha Venkatakrishnan

Abstract:

Flooding in urban areas in India has become a recurring phenomenon and a nightmare for most cities, a consequence of man-made disruption resulting in disaster. City planning in India falls short of withstanding hydro-generated disasters. This has become a barrier and a challenge in the development process driven by urbanization, high population density, expanding informal settlements, and environmental degradation from uncollected and untreated waste that flows into natural drains and water bodies; this has disrupted natural hazard-protection mechanisms such as drainage channels, wetlands, and floodplains. The magnitude and impact of the mishap were high because of the failure of the development policies, strategies, and plans that the city had adopted. In the current scenario, cities are becoming the home of the future, with economic diversification bringing more investment into cities, especially in urban infrastructure, planning, and design. The uncertain urban future of these low-elevation coastal zones faces unprecedented risk and threat. The study focuses on three major pillars of resilience: recover, resist, and restore. This process of getting ready to handle the situation bridges the gap between disaster response management and risk reduction, and requires a paradigm shift. The study involved qualitative research and a system design approach (framework). The initial stage involved mapping the urban water morphology with respect to spatial growth, which gave an insight into the water bodies that went missing over the years during the process of urbanization. The major finding of the study was that broken links in the traditional water-harvesting network were a major cause of this man-made disaster. The research conceptualized a sponge-city framework that would guide growth through institutional frameworks at different levels.
The next stage focused on understanding the implementation process at each step to ensure the paradigm shift, demonstrating the concepts at a neighborhood level: where, how, and what the functions and benefits of each component are. The design decisions were quantified in terms of rainwater harvesting and surface runoff: how much water is collected, and how it could be collected, stored, and reused. The study closes with recommendations for water mitigation spaces that would revive the traditional harvesting network.
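The rainwater-harvest and runoff quantification mentioned above is commonly approximated with the volumetric form of the rational formula, V = C·i·A (runoff coefficient x rainfall depth x catchment area). A minimal sketch; the coefficients (0.9 for impervious roofs, 0.2 for sponge-type green cover) are illustrative assumptions, not figures from the study:

```python
# Hedged sketch: event runoff/harvest volume via V = C * i * A, with
# illustrative runoff coefficients (not values from the study).
def runoff_volume_m3(rain_mm, area_m2, runoff_coeff):
    """Runoff volume in cubic metres for one rainfall event."""
    return runoff_coeff * (rain_mm / 1000.0) * area_m2

if __name__ == "__main__":
    rain = 50.0  # mm of rainfall in one storm event (assumed)
    roof = runoff_volume_m3(rain, 5000.0, 0.9)    # impervious roof catchment
    green = runoff_volume_m3(rain, 5000.0, 0.2)   # sponge-city green cover
    print(roof, green)  # roofs shed far more water than permeable cover
```

The gap between the two figures is the volume a sponge-city design can intercept, store, and reuse instead of sending it to the drains.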

Keywords: flooding, man-made disaster, resilient city, traditional harvesting network, waterbodies

Procedia PDF Downloads 120
332 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space

Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson

Abstract:

Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid-attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity and resulting MRI features of GBMs complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), we explored a multi-compartmental MRI signal equation that takes into account tissue compartments and their associated volumes, with input from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas were comprised of vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model for generating simulated T2/FLAIR MR images. Individual compartments' T2 values in the signal equation were taken either from the literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from the simulated images.
T2 values based on the simulated images were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema, and the ROI T2 values were compared to values reported in the literature. The expanding scheme of extracellular space had T2 values similar to the literature-derived values. The static scheme of extracellular space produced much lower T2 values, and no matter what T2 was assigned to edema, the intensities did not come close to literature values. Expanding the extracellular space is therefore necessary to achieve simulated edema intensities commensurate with acquired MRIs.
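The multi-compartmental signal equation described above reduces, per voxel, to a volume-weighted sum of exponential decays at echo time TE. A minimal sketch of that signal model and a two-echo mono-exponential T2 estimate (the compartment volumes and T2 values below are assumed for illustration, not taken from the paper):

```python
# Hedged sketch: volume-weighted multi-compartment T2 signal,
# S(TE) = sum_i v_i * exp(-TE / T2_i), and a two-echo apparent-T2 estimate.
# Compartment values are illustrative assumptions.
import math

def signal(te_ms, compartments):
    """Voxel signal at echo time TE; compartments: (volume_fraction, T2_ms)."""
    return sum(v * math.exp(-te_ms / t2) for v, t2 in compartments)

def t2_from_two_echoes(te1, s1, te2, s2):
    """Mono-exponential T2 from two echoes: T2 = (TE2 - TE1) / ln(S1/S2)."""
    return (te2 - te1) / math.log(s1 / s2)

if __name__ == "__main__":
    # Hypothetical edema-dominated voxel: 60% edema (long T2), 40% tissue.
    voxel = [(0.6, 1000.0), (0.4, 80.0)]
    te1, te2 = 30.0, 100.0
    s1, s2 = signal(te1, voxel), signal(te2, voxel)
    apparent_t2 = t2_from_two_echoes(te1, s1, te2, s2)
    print(round(apparent_t2, 1))  # effective T2 lies between the two pools
```

Varying the edema volume fraction in such a model is what distinguishes the expanding-ECS scheme (which lengthens the apparent T2 toward literature edema values) from the static scheme.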

Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling

Procedia PDF Downloads 209