Search results for: larvae density
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3593

353 The Scientific Study of the Relationship Between Physicochemical and Microstructural Properties of Ultrafiltered Cheese: Protein Modification and Membrane Separation

Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh

Abstract:

The loss of curd cohesiveness and syneresis are two common problems in the ultrafiltered cheese industry. In this study, a modified cheese was developed using membrane technology and protein modification, and its properties were compared with a control sample. To decrease the lactose content and adjust the protein, acidity, dry matter, and milk minerals, a combination of ultrafiltration, nanofiltration, and reverse osmosis technologies was employed. For protein modification, a two-stage chemical and enzymatic reaction was carried out before and after ultrafiltration. The physicochemical and microstructural properties of the modified ultrafiltered cheese were then compared with those of the control. Results showed that the modified protein significantly enhanced the functional properties of the final cheese (p < 0.05), even though its protein content was 50% lower than that of the control. The modified cheese showed 21 ± 0.70, 18 ± 1.10, and 25 ± 1.65% higher hardness, cohesiveness, and water-holding capacity values, respectively, than the control sample. This behavior could be explained by the more developed microstructure of the gel network. Furthermore, chemical-enzymatic modification of the milk protein induced a significant change in the network parameters of the final cheese: the indices of network linkage strength, network linkage density, and time scale of junctions were 10.34 ± 0.52, 68.50 ± 2.10, and 82.21 ± 3.85% higher than in the control sample, whereas the distance between adjacent linkages was 16.77 ± 1.10% lower. These results were supported by the textural analysis. A non-linear viscoelastic study showed a triangular waveform stress for the modified-protein cheese, while the control sample showed a rectangular waveform stress, which suggests better sliceability of the modified cheese. Moreover, to study the shelf life of the products, the acidity as well as the mold and yeast populations were monitored over 120 days.
It is worth mentioning that the lactose content of the modified cheese was adjusted to 2.5% before fermentation, while that of the control was 4.5%. The control sample showed a shelf life of 8 weeks, while the modified cheese kept for 18 weeks in the refrigerator. Over 18 weeks, the acidity of the modified and control samples increased from 82 ± 1.50 to 94 ± 2.20 °D and from 88 ± 1.64 to 194 ± 5.10 °D, respectively. The mold and yeast populations over time followed a semicircular shape model (R² = 0.92, adjusted R² = 0.89, RMSE = 1.25). Furthermore, the mold and yeast counts and their growth rate in the modified cheese were lower than those of the control; this result could be explained by the shortage of energy sources for the microorganisms in the modified cheese. The lactose content of the modified sample was below 0.2 ± 0.05% at the end of fermentation, versus 3.7 ± 0.68% in the control sample.
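The semicircular shape model mentioned above can be fitted with an ordinary least-squares routine: for the parameterization y = a·sqrt(1 − ((t − c)/r)²), the squared counts are quadratic in time, so a polynomial fit suffices. The sketch below uses entirely hypothetical storage-time and count values, since the study's raw data are not given in the abstract.

```python
import numpy as np

# Hypothetical storage time (weeks) vs. mold/yeast counts (log CFU/g);
# illustrative values only, not the study's data.
t = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 18], dtype=float)
y = np.array([1.7, 2.9, 3.5, 3.8, 4.0, 4.0, 3.8, 3.4, 2.9, 1.8])

# y = a * sqrt(1 - ((t - c) / r)^2)  implies  y^2 is quadratic in t,
# so the semicircular trend can be fitted with a plain polynomial fit.
p2, p1, p0 = np.polyfit(t, y ** 2, 2)            # y^2 ~ p2*t^2 + p1*t + p0
pred = np.sqrt(np.clip(p2 * t ** 2 + p1 * t + p0, 0.0, None))

# Goodness of fit (R^2), the statistic the abstract reports for the real data.
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

A concave (rise-then-fall) population curve yields a negative leading coefficient, and the R² computed this way plays the same role as the R² = 0.92 quoted above.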

Keywords: non-linear viscoelastic, protein modification, semicircular shape model, ultrafiltered cheese

Procedia PDF Downloads 54
352 TNF-Alpha and MDA Levels in Hearts of Cholesterol-Fed Rats Supplemented with Extra Virgin Olive Oil or Sunflower Oil, in Either Commercial or Modified Forms

Authors: Ageliki I. Katsarou, Andriana C. Kaliora, Antonia Chiou, Apostolos Papalois, Nick Kalogeropoulos, Nikolaos K. Andrikopoulos

Abstract:

Oxidative stress is a major mechanism underlying CVDs, while inflammation, a process intertwined with oxidative stress, is also linked to CVDs. Extra virgin olive oil (EVOO) is widely known to play a pivotal role in CVD prevention and reduction. However, in most studies, olive oil constituents are evaluated individually rather than as part of the native food, so potential synergistic effects driving EVOO's beneficial properties may be underestimated. In this study, the lipidic and polar phenolic fractions of EVOO were evaluated for their effect on inflammatory (TNF-alpha) and oxidation (malondialdehyde/MDA) markers in cholesterol-fed rats. To that end, oils with distinct lipidic profiles and polar phenolic contents were used. Wistar rats were fed either a high-cholesterol diet (HCD) or an HCD supplemented with oils, either commercially available, i.e., EVOO and sunflower oil (SO), or modified in their polar phenol content, i.e., phenolics-deprived EVOO (EVOOd) and SO enriched with the EVOO phenolics (SOe). After 9 weeks of dietary intervention, heart and blood samples were collected. The HCD induced dyslipidemia, shown by increases in serum total cholesterol, low-density lipoprotein cholesterol (LDL-c), and triacylglycerols. Heart tissue was affected by the dyslipidemia: oxidation was indicated by an increase in MDA in cholesterol-fed rats, and inflammation by an increase in TNF-alpha. In both cases, this augmentation was attenuated in the EVOO and SOe diets. With respect to oxidation, enriching SO with the EVOO phenolics brought its lipid peroxidation down to the level of EVOO-fed rats, suggesting that phenolic compounds may act as antioxidant agents in rat heart. A possible mechanism underlying this activity is a protective effect of phenolics on the mitochondrial membrane against oxidative damage. This was further supported by the EVOO/EVOOd comparison, with the former presenting lower heart MDA content.
As for heart inflammation, phenolics naturally present in EVOO, as well as phenolics chemically added to SO, quenched heart TNF-alpha levels in cholesterol-fed rats. TNF-alpha may have played a causative role in inducing oxidative stress, while the opposite may also have occurred, setting up a vicious cycle. Overall, diet supplementation with EVOO or SOe attenuated the hypercholesterolemia-induced increases in MDA and TNF-alpha in Wistar rat hearts. This is attributed to phenolic compounds, whether naturally present in olive oil or added as fortificants to seed oil.

Keywords: extra virgin olive oil, hypercholesterolemic rats, MDA, polar phenolics, TNF-alpha

Procedia PDF Downloads 473
351 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks

Authors: Shin-Pin Tseng

Abstract:

Numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques have attracted attention because they can provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded by a fixed in-phase cross-correlation (IPCC) value. To address this problem, a new SAC FO-CDMA network using a partial M-sequence (PMS) code is presented in this study. Because the proposed PMS code is derived from the M-sequence code, a system using it can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance overall network performance. For system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, comprising (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler that broadcasts the transmitted optical signals to the input port of each optical receiver; the parameter N of the PMS code is the code length. The proposed SAC network uses superluminescent diodes (SLDs) as light sources, which substantially reduces system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders configured according to the assigned PMS codewords. Each optical receiver includes a 1 × 2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the analysis, some assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same.
Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' is equally probable. Subsequently, taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network improves performance by about 25% over networks using other codes at a BER of 10^-9, because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes is a promising candidate for next-generation optical networks.
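The fixed-IPCC limitation that motivates the PMS code can be demonstrated with a generic M-sequence construction (this is an illustration of the underlying property, not the paper's PMS code itself): cyclic shifts of one M-sequence serve as user codewords, and every pair of distinct codewords overlaps in exactly (N+1)/4 chip positions.

```python
def m_sequence(taps, nbits):
    """One period (2**nbits - 1 chips) of an m-sequence from a Fibonacci LFSR.
    `taps` are 1-indexed feedback positions, e.g. [3, 1] for x^3 + x + 1."""
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def cyclic_shift(seq, k):
    return seq[k:] + seq[:k]

def ipcc(a, b):
    # In-phase cross-correlation: chip positions where both codewords are 1.
    return sum(x & y for x, y in zip(a, b))

# Codewords for different users are cyclic shifts of one m-sequence (N = 7).
base = m_sequence([3, 1], 3)
codes = [cyclic_shift(base, k) for k in range(7)]
```

For N = 7 every distinct pair gives IPCC = 2 = (N+1)/4, i.e. the cross-correlation is fixed regardless of which two users interfere, which is exactly the property the proposed PMS construction sets out to improve on.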

Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG

Procedia PDF Downloads 356
350 The Relationship between Elderly People with Depression and Built Environment Factors

Authors: Hung-Chun Lin, Tzu-Yuan Chao

Abstract:

As population aging has become an inevitable global trend, improving the well-being of elderly people in urban areas has become a challenging task for urban planners. Recent studies of the aging trend have also expanded to explore the relationship between the built environment and the mental condition of elderly people. These studies have shown that even though the built environment may not play a decisive role in mental health, it can have a positive impact on individual mental health by promoting social linkages and social networks among older adults. A great deal of research has examined the impact of built environment attributes on depression in the elderly; however, most of it was conducted in Western countries. Little attention has been paid to Asian cities with comparatively high-density, mixed-use urban contexts, such as those in Taiwan, regarding how built environment attributes relate to depression in elderly people. Hence, more empirical cross-disciplinary studies are needed to explore the possible impacts of Asian urban characteristics on older residents' mental condition. This paper focuses on Tainan City, the fourth-largest metropolis in Taiwan. We first analyze data from the National Health Insurance Research Database to pinpoint the empirical study area where the most elderly patients (aged over 65) with depressive disorders reside. Secondly, we explore the relationship between specific built environment attributes drawn from previous studies and elderly individuals who suffer from depression, under different socio-cultural and networking circumstances. The research methods adopted in this study include a questionnaire and database analysis, and the results will be examined through correlation analysis.
In addition, by reviewing the built environment factors that Western research has used to evaluate the relationship between the built environment and older individuals with depressive disorders, a set of local evaluative indicators of the built environment for future studies will be proposed as well. To move closer to developing age-friendly cities and improving the well-being of the elderly in Taiwan, the findings of this paper provide empirical results that draw planners' attention to how the built environment makes the elderly feel and encourage them to reconsider the relationship between the two. Furthermore, as an interdisciplinary study, the results are expected to inform the procedures for drawing up urban or city plans from a different point of view.

Keywords: built environment, depression, elderly, Tainan

Procedia PDF Downloads 100
349 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions

Authors: M. Tarik Boyraz, M. Bilge Imer

Abstract:

Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in manufacturing industrial gas turbine blades. With a carefully designed microstructure and suitable alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate the mechanical properties of IN 738 LC alloy solely from simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (the size, fraction, and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and the grain size) needs to be optimized by heat treatment to improve the high-temperature mechanical properties; this process can be performed at different soaking temperatures, soaking times, and cooling rates. In this work, microstructural evolution studies were performed experimentally at various heat treatment conditions, and these findings were used as input for further simulation studies: the operation time, soaking temperature, and cooling rate from the experimental heat treatment procedures served as microstructural simulation inputs. The simulation results were compared with the size, fraction, and frequency of the γ′ and carbide phases and the grain size obtained by SEM (EDS module and mapping), EPMA (WDS module), and optical microscopy before and after heat treatment. After iterative comparison of experimental findings and simulations, an offset was determined to reconcile the experimental and theoretical findings. Thereby, it was possible to estimate the final microstructure without needing to carry out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was then used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution, and grain boundary strengthening contributions of the microstructure.
Creep rate was calculated as a function of stress, temperature, and microstructural factors such as dislocation density, precipitate size, and the inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the heat treatment conditions that best achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC based entirely on simulations.
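The additive strengthening model described above can be sketched as follows. The functional forms are textbook approximations (precipitate cutting ~ sqrt(f·r), solid-solution ~ c^(2/3), Hall-Petch for grain boundaries), and every coefficient is a placeholder, not a fitted IN 738 LC value from the study.

```python
import math

def precipitation_strengthening(f, r, k=3.0e3):
    # Weakly coupled pair-cutting term ~ k * sqrt(f * r);
    # f: gamma-prime volume fraction, r: mean precipitate radius (um).
    return k * math.sqrt(f * r)

def solid_solution_strengthening(concentrations, coeffs):
    # Sum of c_i^(2/3)-type contributions from the alloying elements.
    return sum(k * c ** (2.0 / 3.0) for c, k in zip(concentrations, coeffs))

def hall_petch(d, sigma0=200.0, k_hp=750.0):
    # Grain-boundary term: sigma0 + k / sqrt(d), d: grain size (um).
    return sigma0 + k_hp / math.sqrt(d)

def yield_stress(f, r, concentrations, coeffs, d):
    # Yield stress as the sum of the three strengthening contributions,
    # as in the abstract's description.
    return (precipitation_strengthening(f, r)
            + solid_solution_strengthening(concentrations, coeffs)
            + hall_petch(d))

# Illustrative call with placeholder microstructure values (MPa-scale output).
sigma_y = yield_stress(f=0.45, r=0.05,
                       concentrations=[0.10, 0.05], coeffs=[120.0, 90.0],
                       d=50.0)
```

With this structure, the microstructure predicted by the heat-treatment simulation (f, r, d) maps directly onto an estimated yield stress, which is the coupling the abstract describes.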

Keywords: heat treatment, IN738LC, simulations, super-alloys

Procedia PDF Downloads 227
348 Hematologic Inflammatory Markers and Inflammation-Related Hepatokines in Pediatric Obesity

Authors: Mustafa Metin Donma, Orkide Donma

Abstract:

Obesity in children draws particular attention because it may threaten the individual's future life through the many chronic diseases it can lead to. Most of these diseases, including obesity itself, are related to inflammation; for this reason, inflammation-related parameters gain importance. Within this context, complete blood cell counts and the ratios or indices derived from them have recently gained ground as inflammatory markers. So far, mostly adipokines have been investigated within the field of obesity. The liver, however, is at the center of the metabolic pathway network, and metabolic inflammation is closely associated with cellular dysfunction. In this study, hematologic inflammatory markers and two major hepatokines (cytokines produced predominantly by the liver), fibroblast growth factor-21 (FGF-21) and fetuin A, were investigated in pediatric obesity. Two groups were constituted from seventy-six obese children based on World Health Organization criteria. Group 1 was composed of children whose age- and sex-adjusted body mass index (BMI) percentiles were between 95 and 99; Group 2 consisted of children above the 99th percentile. The former and the latter groups were defined as obese (OB) and morbidly obese (MO), respectively. Anthropometric measurements of the children were performed, and informed consent forms and the approval of the institutional ethics committee were obtained. Blood cell counts and ratios were determined by an automated hematology analyzer, the related ratios and indices were calculated, and statistical evaluation of the data was performed with the SPSS program. There was no statistically significant difference between the groups in terms of the neutrophil-to-lymphocyte ratio, the monocyte-to-high-density-lipoprotein-cholesterol ratio, or the platelet-to-lymphocyte ratio.
Mean platelet volume and platelet distribution width values were decreased (p < 0.05), while total platelet count, red cell distribution width (RDW), and systemic immune inflammation index values were increased (p < 0.01) in the MO group. Both hepatokines were increased in the same group, although the increases were not statistically significant. In this group, a strong correlation was also found between FGF-21 and RDW when controlling for age, hematocrit, iron, and ferritin (r = 0.425; p < 0.01). In conclusion, the association found in the MO group between RDW, a hematologic inflammatory marker, and FGF-21, an inflammation-related hepatokine, is an important finding discriminating between OB and MO children. This association is even more powerful when controlled for age and iron-related parameters.
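The controlled correlation reported above (r = 0.425 when controlling for age, hematocrit, iron, and ferritin) is a partial correlation: each variable is regressed on the covariates and the residuals are correlated. A minimal sketch on synthetic data (all values below are invented for illustration, not the study's measurements):

```python
import numpy as np

def partial_corr(x, y, covars):
    """Pearson correlation of x and y after regressing out the columns of
    `covars` (a 2-D array) from each via ordinary least squares."""
    Z = np.column_stack([np.ones(len(x)), covars])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Synthetic illustration: fgf21 and rdw share a component beyond age/iron.
rng = np.random.default_rng(0)
n = 40
age = rng.uniform(6, 12, n)
iron = rng.uniform(40, 120, n)
shared = rng.normal(size=n)                     # common "inflammation" signal
fgf21 = 0.5 * age + 0.8 * shared + rng.normal(scale=0.3, size=n)
rdw = 0.02 * iron + 0.8 * shared + rng.normal(scale=0.3, size=n)

r = partial_corr(fgf21, rdw, np.column_stack([age, iron]))
```

Because the covariate-driven variance is removed before correlating, r here reflects only the shared component, mirroring how the study's r strengthened once age and iron-related parameters were controlled.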

Keywords: childhood obesity, fetuin A, fibroblast growth factor-21, hematologic markers, red cell distribution width

Procedia PDF Downloads 169
347 The First Import of Yellow Fever Cases in China and Its Revealing Suggestions for the Control and Prevention of Imported Emerging Diseases

Authors: Chao Li, Lei Zhou, Ruiqi Ren, Dan Li, Yali Wang, Daxin Ni, Zijian Feng, Qun Li

Abstract:

Background: In 2016, yellow fever was detected in China for the first time, soon after the yellow fever epidemic occurred in Angola. Upon discovery, China promptly issued a national control and prevention protocol and strengthened surveillance of passengers and vectors. In this study, a descriptive analysis was conducted to summarize China's experience in responding to this imported epidemic, in the hope of informing the prevention and control of yellow fever and other similar imported infectious diseases in the future. Methods: The imported cases were discovered and reported by the General Administration of Quality Supervision, Inspection and Quarantine (AQSIQ) and several hospitals. Each clinically diagnosed yellow fever case was confirmed by real-time reverse transcriptase polymerase chain reaction (RT-PCR). Data on the imported yellow fever cases were collected by local Centers for Disease Control and Prevention (CDC) through field investigations soon after the reports were received. Results: A total of 11 cases imported from Angola were reported in China during Angola's yellow fever outbreak. Six cases were discovered by the AQSIQ, among which two with mild symptoms declared themselves at the time of entry. Except for one death, the remaining 10 cases all recovered after timely and proper treatment. All cases were Chinese nationals living in Luanda, the capital of Angola. Eight of the 11 (73%) were retailers from Fuqing City in Fujian Province, and the other three were laborers sent by companies. Ten cases sought medical treatment in Luanda after onset, of whom eight visited the same local Chinese medicine hospital (China Railway Four Bureau Hospital). Among the 11 cases, only one had received an effective vaccination. Emergency surveillance of mosquito density found only 14 water containers positive around the residences of three cases, and the Breteau Index was 15.
Conclusions: An effective response was mounted to control and prevent an outbreak of yellow fever in China after the imported cases were discovered. However, although the shared origin of the Chinese community in Angola provided easy access for disease detection, information sharing, health education, and vaccination against yellow fever, these advantages were overlooked in previous disease prevention efforts. Moreover, the fact that only one case had received an effective vaccination reveals the inadequate capacity of immunization services in China. These findings provide suggestions to improve China's capacity to deal not only with yellow fever but also with other similar imported diseases.
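The Breteau Index cited in the results is the number of larvae-positive containers per 100 houses inspected. A minimal sketch of the arithmetic; note that the number of houses inspected is not given in the abstract, so the figure below is an assumption chosen only to be consistent with the reported values:

```python
def breteau_index(positive_containers, houses_inspected):
    # Breteau Index = larvae-positive containers per 100 inspected houses.
    return 100.0 * positive_containers / houses_inspected

# The abstract reports 14 positive containers and a Breteau Index of 15,
# consistent with roughly 93 houses inspected (an inferred, hypothetical figure).
bi = breteau_index(14, 93)
```

An index above 5 is commonly treated as indicating transmission risk for Aedes-borne diseases, which is why this figure appears in the emergency vector surveillance.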

Keywords: yellow fever, first import, China, suggestion

Procedia PDF Downloads 168
346 Improving Alkaline Water Electrolysis by Using an Asymmetrical Electrode Cell Design

Authors: Gabriel Wosiak, Felipe Staciaki, Eryka Nobrega, Ernesto Pereira

Abstract:

Hydrogen is an energy carrier with potential applications in various industries. Alkaline electrolysis is a commonly used method for hydrogen production; however, its energy cost remains relatively high compared with other methods, due in part to interfacial pH changes that occur during electrolysis. Interfacial pH changes refer to changes in pH at the interface between an electrode and the electrolyte solution. They are caused by the electrochemical reactions at both electrodes, which consume or produce hydroxide ions (OH-) from the electrolyte solution. This results in a substantial change in the local pH at the electrode surface, which has several impacts on the energy consumption and durability of electrolysers. One impact of interfacial pH changes is an increase in the overpotential required for hydrogen production. The overpotential is the difference between the theoretical potential required for a reaction to occur and the actual potential applied to the electrodes. In water electrolysis, the overpotential is caused by a number of factors, including the mass transport of reactants and products to and from the electrodes, the kinetics of the electrochemical reactions, and the interfacial pH. A drop in the interfacial pH at the anode surface under alkaline conditions can increase the overpotential, because the lower local pH makes it more difficult for the hydroxide ions to be oxidized; as a result, more energy is required for the process to occur. In addition to increasing the overpotential, interfacial pH changes can also degrade the electrodes, because the lower pH makes the electrode more susceptible to corrosion. The electrodes may then need to be replaced more frequently, which increases the overall cost of water electrolysis.
The method presented in this paper addresses the issue of interfacial pH changes by introducing electrode asymmetry into the cell design. This design mitigates the pH gradient at the anode/electrolyte interface, which reduces the overpotential and improves the energy efficiency of the electrolyser. The method was tested using a multivariate approach under both laboratory and industrial current density conditions, and the results were validated with numerical simulations. The results demonstrated a clear improvement (11.6%) in energy efficiency, an important contribution to the field of sustainable energy production. These findings have important implications for the development of cost-effective and sustainable hydrogen production: by mitigating interfacial pH changes, it is possible to improve the energy efficiency of alkaline electrolysis and make it a more competitive option for hydrogen production.
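The energy penalty of an interfacial pH shift can be estimated from the Nernst equation: the equilibrium potential of the hydroxide oxidation (oxygen evolution) couple moves by roughly 59 mV per pH unit at 25 °C. The sketch below is illustrative only; the two-unit pH drop is a hypothetical value, not one measured in the paper.

```python
# Nernstian estimate of how a local pH drop at the anode raises the
# potential needed for hydroxide oxidation (25 C; values illustrative).
R, T, F = 8.314, 298.15, 96485.0
nernst_slope = 2.303 * R * T / F          # ~0.0592 V per pH unit

def oer_equilibrium_potential(ph, e0_she=1.229):
    # E = E0 - slope * pH vs. SHE for the O2/H2O couple.
    return e0_she - nernst_slope * ph

bulk = oer_equilibrium_potential(14.0)     # bulk alkaline electrolyte
local = oer_equilibrium_potential(12.0)    # assumed interfacial pH drop of 2
extra_potential = local - bulk             # ~0.118 V of extra driving force
```

Even a modest interfacial pH gradient thus adds on the order of a hundred millivolts to the cell voltage, which is the loss the asymmetrical electrode design aims to suppress.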

Keywords: electrolyser, interfacial pH, numerical simulation, optimization, asymmetric cell

Procedia PDF Downloads 45
345 Challenges Encountered by Small Business Owners in Building Their Social Media Marketing Competency

Authors: Nilay Balkan

Abstract:

Introductory statement: The purpose of this study is to understand how small business owners develop social media marketing competency, the challenges they encounter in doing so, and the social media training needs of such businesses. These challenges affect the extent to which small business owners build effective social media knowledge and, in turn, their ability to implement effective social media marketing in their business practices. As a result, small businesses cannot fully benefit from social media, for example through better customer relationship management or a stronger brand image, which would support their overall operations. This research is part one of a two-phase study. The first phase aims to establish the challenges small business owners face in building social media marketing competency and their specific training needs. Phase two will then focus in more depth on the barriers and challenges emerging from phase one. Summary of methodology: In-depth, semi-structured interviews were conducted with ten small business owners from various sectors, including fitness, tourism, food, and drink. These businesses were located in the central belt of Scotland, the area with the highest population and business density in Scotland. The interviews were investigative, aiming to understand the phenomena through the lived experience of the small business owners. Purposive sampling was used: small business owners fulfilling certain criteria were approached to take part in the interviews. Key findings: The study found four ways in which small business owners develop their social media competency (informal methods, formal methods, learning through a network, and experimenting) and identified the challenges they face with each of these methods.
Further, the study established four barriers to the development of social media marketing competency among the interviewed small business owners, from which preliminary support needs have also emerged. Concluding statement: The contribution of this study is to understand the challenges small business owners face when learning how to use social media for business purposes and to identify their training needs. This understanding can inform the development of specific and tailored support, which in turn can help small businesses build competency and progress to the next stage of their development, whether furthering their digital transformation or growing their business. The insights from this study can be used to support business competitiveness and help small businesses become more resilient. Moreover, since small businesses and entrepreneurs share similar characteristics, such as limited resources and conflicting priorities, the findings may also support entrepreneurs in their social media marketing strategies.

Keywords: small business, marketing theory and applications, social media marketing, strategic management, digital competency, digitalisation, marketing research and strategy, entrepreneurship

Procedia PDF Downloads 64
344 System Analysis on Compact Heat Storage in the Built Environment

Authors: Wilko Planje, Remco Pollé, Frank van Buuren

Abstract:

An increased share of renewable energy sources in the built environment implies the use of energy buffers to match supply and demand and to prevent overloads of existing grids. Compact heat storage systems based on thermochemical materials (TCM) are promising candidates for future installations as an alternative to regular thermal buffers, owing to their high energy density (1-2 GJ/m³). To determine the feasibility of TCM-based systems at the building level, several installation configurations are simulated and analyzed for different mixes of renewable energy sources (solar thermal, PV, wind, underground, air) for multi-storey apartment buildings in the Dutch situation, and the capacity, volume, and financial costs are calculated. The simulation includes options for current and future wind power (at sea and on land) and local roof-mounted PV or solar thermal systems. The compact thermal buffer and, optionally, an electric battery (typically 10 kWhe) form the local storage elements for energy matching and peak-shaving purposes. In addition, electrically driven heat pumps (air or ground source) can be included for efficient heat generation in power-to-heat scenarios. The total local installation provides space heating, domestic hot water, and electricity for a specific case of low-energy apartments (annually 9 GJth + 8 GJe) in the year 2025; the energy balance is completed with grid-supplied non-renewable electricity. Taking into account the grid capacity (a permanent 1 kWe per household), the spatial requirements for the thermal buffer (< 2.5 m³ per household), and a desired minimum 90% share of renewable energy in each household's total consumption, the wind-powered scenario results in acceptable sizes of compact thermal buffers, with an energy capacity of 4-5 GJth per household. This buffer is combined with a 10 kWhe battery and an air-source heat pump system.
Compact thermal buffers of less than 1 GJ (typically 0.5-1 m³) become possible when the installed wind power is increased by a factor of five; with a 15-fold increase in installed wind power, compact heat storage devices compete with 1,000 L water buffers. The conclusion is that compact heat storage systems can be of interest in the coming decades, in combination with well-retrofitted low-energy residences, given the current trends in installed renewable energy capacity.
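The sizing figures above follow from simple volume = capacity / energy-density arithmetic, and the water-buffer comparison from sensible heat storage (m·c·ΔT). The sketch below reproduces this with the abstract's numbers; the 40 K water temperature swing is an assumption for illustration.

```python
def buffer_volume_m3(capacity_gj, energy_density_gj_per_m3):
    # Required TCM buffer volume for a given storage capacity.
    return capacity_gj / energy_density_gj_per_m3

# A 4.5 GJ_th buffer at the upper TCM energy density of 2 GJ/m3 stays within
# the < 2.5 m3 per household spatial constraint quoted in the abstract.
vol = buffer_volume_m3(4.5, 2.0)            # 2.25 m3

# For comparison, a 1000 L water buffer over an assumed 40 K swing stores
# only m * c * dT of sensible heat:
water_gj = 1000 * 4186 * 40 / 1e9           # ~0.17 GJ
```

This is why a sub-1 GJ TCM buffer (0.5-1 m³) still stores several times what a 1,000 L water tank can, and only a large surplus of wind power makes the water buffer competitive.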

Keywords: compact thermal storage, thermochemical material, built environment, renewable energy

Procedia PDF Downloads 221
343 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis is a very important task in many areas of work. In any industry, it is crucial for maintenance, efficiency, safety, and monetary costs. There are established ways to calculate reliability, unreliability, failure density, and failure rate; this paper introduces another way of calculating reliability, using the R statistical software. R is a free software environment for statistical computing and graphics that compiles and runs on a wide variety of UNIX platforms, as well as Windows and macOS. The R programming environment is a widely used open-source system for statistical analysis and statistical programming, and it includes thousands of functions implementing both standard and new statistical methods, without limiting the user to those functions alone. The program has several benefits over similar programs: it is free and, as open source, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of this work is the calculation of time to failure, or reliability, in a new way, using statistics. A further advantage of this approach is that no technical details are needed, and it can be applied to any part whose time to failure we need to know in order to schedule appropriate maintenance while maximizing usage and minimizing costs. Here, the calculations were made for diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans; the ultimate goal was to decide whether or not to replace the working fans with a higher-quality fan to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure, or until the end of the study (whichever came first), was recorded. The dataset consists of two variables: hours and status.
Hours show the time of each fan working and status shows the event: 1- failed, 0- censored data. Censored data represent cases when we cannot track the specific case, so it could fail or success. Gaining the result by using R was easy and quick. The program will take into consideration censored data and include this into the results. This is not so easy in hand calculation. For the purpose of the paper results from R program have been compared to hand calculations in two different cases: censored data taken as a failure and censored data taken as a success. In all three cases, results are significantly different. If user decides to use the R for further calculations, it will give more precise results with work on censored data than the hand calculation.
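A minimal sketch of the censoring-aware calculation the abstract delegates to R, written here in plain Python: the Kaplan-Meier estimator removes censored fans from the risk set without counting them as failures. The hours/status values below are invented for illustration and are not the study's 70-fan dataset.

```python
# Kaplan-Meier estimator: survival probability with right-censored data.
# status: 1 = failed, 0 = censored (fan still running at last observation).

def kaplan_meier(hours, status):
    """Return a list of (time, survival) pairs at each observed failure time."""
    events = sorted(zip(hours, status))
    n_at_risk = len(events)
    survival = 1.0
    curve = []
    i = 0
    while i < len(events):
        t = events[i][0]
        deaths = sum(1 for h, s in events if h == t and s == 1)
        removed = sum(1 for h, s in events if h == t)  # failures + censorings at t
        if deaths > 0:
            survival *= (n_at_risk - deaths) / n_at_risk
            curve.append((t, survival))
        n_at_risk -= removed
        i += removed
    return curve

# made-up demo data: running hours and event status for ten fans
hours  = [450, 460, 1150, 1150, 1560, 1600, 1660, 1850, 1850, 2030]
status = [1,   0,   1,    0,    0,    1,    0,    0,    0,    1]
curve = kaplan_meier(hours, status)
for t, s in curve:
    print(f"{t} h: S(t) = {s:.4f}")
```

Treating the censored fans as failures (or as successes) instead, as in the hand calculations the paper compares against, shifts every step of this curve, which is why the three results differ.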

Keywords: censored data, R statistical software, reliability analysis, time to failure

Procedia PDF Downloads 379
342 Survival of Byzantine Heritage in Gerace, Calabria

Authors: Marcus Papandrea

Abstract:

Gerace survives as one of the best examples of unspoiled Byzantine heritage in Calabria and the world due to its strategic location. As the last western province of the Byzantine Empire, Calabria was not subject to the destruction or conversion of sites carried out by the Ottomans in the east or the Arabs in Sicily and North Africa. Situated ten kilometers inland atop a 500 m high table mountain, Gerace overlooks the Ionian coast and is a gateway to the rugged and wild mountain interior of the Calabrian peninsula. It is connected to the outside world only by a single winding and crumbling road and, unfortunately, faces serious economic and demographic decline. Largely due to this isolation, Gerace has remained understudied and under-recognized, despite its wealth and high density of Byzantine monuments, in a country that boasts the most UNESCO sites in the world. In 1995, the Patriarch of the Eastern Orthodox Church, Bartholomew I, visited Gerace. He re-opened and blessed the ancient Byzantine church of San Giovanni Crisostomo, reviving Gerace’s cultural origins and links to Byzantium. This paper examines how these links have persisted over a millennium, from the community’s humble origins as a refuge for ascetic monks to its becoming the “city of one hundred churches.” While little is documented or written about Gerace’s early history, this paper employs archaeological findings as well as hagiography to present valuable insight into this area, which became known as the “land of the saints.” By characterizing Gerace’s early Byzantine society and helping to understand its strong spiritual roots, this paper creates the basis necessary to understand the endurance of its Byzantine legacy and to appreciate its important cultural contributions to the Italian Renaissance as a hub of Greek literacy, one which attracted great humanists of the fourteenth and fifteenth centuries such as Barlaam of Seminara, Simone Autumano, Bessarion, and Athanasio Chalkeolopus. In bringing together these figures, this paper propels Gerace onto the world stage as an important cultural center in medieval Mediterranean history, one which facilitated cross-cultural interactions between Byzantine Greeks, Sicilian Arabs, Jews, and Normans. From this intersection developed a syncretism which shaped modern-day Calabrian identity, culture and society, and which is perhaps most visible in some of Gerace’s last surviving monuments from this time. While emphasizing this unassuming town’s cultural importance and unique Byzantine heritage, this paper also highlights the criteria Gerace fulfills for inclusion in the World Heritage List.

Keywords: byzantine rite, greek rite, italo-greek, latinization

Procedia PDF Downloads 78
341 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization

Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford

Abstract:

The Urban Heat Island (UHI) effect is characterized by increased air temperatures in urban areas compared to the undeveloped rural surroundings. With urbanization and densification, UHI intensity increases, bringing negative impacts on livability, health and the economy. In order to reduce those effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the availability of area for development, non-trivial decisions regarding building dimensions and their spatial distribution are required. We develop a framework for the optimization of urban design that jointly minimizes UHI intensity and building energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would apply to real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total building energy consumption, respectively. Those outputs change based on a set of variable inputs related to urban morphology, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast in which a utility function quantifies the performance of each design candidate (e.g., minimizing a linear combination of UHI intensity and energy consumption) and a set of constraints is imposed. Solving this optimization problem is difficult, since there is no simple analytic form representing the UWG and EnergyPlus models. We therefore cannot use direct optimization techniques and instead develop an indirect “black box” optimization algorithm based on a stochastic optimization technique known as the cross-entropy method (CEM). The CEM translates the deterministic optimization problem into an associated stochastic optimization problem which is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast growth in population and built area, and to land availability generated by land reclamation, urban planning decisions are of the utmost importance for the country. Furthermore, its hot and humid climate raises concern about the impact of UHI. The problem presented is highly relevant to early urban design stages, and the objective of the framework is to guide decision makers and assist them in including and evaluating urban microclimate and energy aspects in the process of urban planning.
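The black-box loop described above can be sketched as follows. This is an illustrative cross-entropy method minimizing a toy quadratic utility; the quadratic stands in for the UWG/EnergyPlus pipeline, which has no analytic form, and all variable names and target values are invented for the demo.

```python
import random

# Cross-entropy method (CEM): sample candidates from a Gaussian, keep an
# elite fraction ranked by the (black-box) utility, refit the Gaussian to
# the elites, and repeat until the distribution concentrates on a minimum.

def utility(x):
    # Stand-in black box: penalizes deviation from an (unknown) optimum.
    building_height, canyon_width = x
    return (building_height - 30.0) ** 2 + 2.0 * (canyon_width - 12.0) ** 2

def cem_minimize(f, mu, sigma, n_samples=200, elite_frac=0.1, iters=50, seed=1):
    rng = random.Random(seed)
    n_elite = max(2, int(n_samples * elite_frac))
    for _ in range(iters):
        samples = [[rng.gauss(m, s) for m, s in zip(mu, sigma)]
                   for _ in range(n_samples)]
        samples.sort(key=f)                      # rank by utility (lower = better)
        elites = samples[:n_elite]
        mu = [sum(e[d] for e in elites) / n_elite for d in range(len(mu))]
        sigma = [max(1e-6,
                     (sum((e[d] - mu[d]) ** 2 for e in elites) / n_elite) ** 0.5)
                 for d in range(len(mu))]
    return mu

best = cem_minimize(utility, mu=[50.0, 5.0], sigma=[20.0, 10.0])
print(best)
```

Because each iteration only needs utility *values* for sampled candidates, the same loop works when `utility` is a call into UWG/EnergyPlus rather than a formula, which is the point of treating the problem as a black box.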

Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator

Procedia PDF Downloads 109
340 Standardized Testing of Filter Systems regarding Their Separation Efficiency in Terms of Allergenic Particles and Airborne Germs

Authors: Johannes Mertl

Abstract:

Our surrounding air contains various particles. Besides typical representatives of inorganic dust, such as soot and ash, particles originating from animals, microorganisms or plants also float through the air: so-called bioaerosols. The group of bioaerosols consists of a broad spectrum of particles of different sizes, including fungi, bacteria, viruses, spores, and tree, flower and grass pollen of high relevance for allergy sufferers. Depending on the climate and the season, these allergenic particles can be found in enormous numbers in the air and are inhaled by humans via the respiratory tract, with a potential for inflammatory diseases of the airways, such as asthma or allergic rhinitis. As a consequence, air filter systems of ventilation and air conditioning devices are required to meet very high standards to prevent, or at least lower, the number of allergens and airborne germs entering the indoor air. Still, filter systems are classified for their separation rates merely using well-defined mineral test dust, while no appropriate, sufficiently standardized test methods for bioaerosols exist. However, separation rates determined for mineral test particles of a certain size cannot simply be transferred to bioaerosols, as the separation efficiency of particularly fine and respirable particles (< 10 microns) depends not only on their shape and particle diameter but also on their density and physicochemical properties. For this reason, the OFI developed a test method which directly enables testing of filters and filter media for their separation rates on bioaerosols, as well as a classification of filters. Besides allergens from intact or fractured tree and grass pollen, allergenic proteins bound to particulates, as well as allergenic fungal spores (e.g. Cladosporium cladosporioides) or bacteria, can be used to classify filters regarding their separation rates. Allergens passing through the filter can then be detected by highly sensitive immunological assays (ELISA) or, in the case of fungal spores, by microbiological methods, which allow for the detection of even a single spore passing the filter. The test procedure, which is carried out at laboratory scale, was furthermore validated for its ability to reflect real-life situations by upscaling to air conditioning devices, which showed great conformity in terms of separation rates. Additionally, a clinical study with allergy sufferers was performed to verify the analytical results. Several different air conditioning filters from the car industry have been tested, showing significant differences in their separation rates.

Keywords: airborne germs, allergens, classification of filters, fine dust

Procedia PDF Downloads 226
339 Restoring Ecosystem Balance in Arid Regions: A Case Study of a Royal Nature Reserve in the Kingdom of Saudi Arabia

Authors: Talal Alharigi, Kawther Alshlash, Mariska Weijerman

Abstract:

The government of Saudi Arabia has developed an ambitious “Vision 2030”, which includes a Green Initiative (i.e., the planting of 10 billion trees) and the establishment of seven Royal Reserves as protected areas that comprise 13% of the total land area. The main objective of the reserves is to restore ecosystem balance and reconnect people with nature. Two royal reserves are managed by the Imam Abdulaziz bin Mohammed Royal Reserve Development Authority: the Imam Abdulaziz bin Mohammed Royal Reserve and the King Khalid Royal Reserve. The authority has developed a management plan to enhance the habitat through seed dispersal and the planting of 10 million trees, and to restock wildlife that was once abundant in these arid ecosystems (e.g., oryx, Nubian ibex, gazelles, red-necked ostrich). Expectations are that with the restoration of native vegetation, soil condition and natural hydrologic processes will improve, leading to further enhancement of vegetation and, over time, an increase in the biodiversity of flora and fauna. To evaluate how well the management strategies meet these expectations, a comprehensive monitoring and evaluation program was developed. The main objectives of this program are to (1) monitor the status and trends of indicator species, (2) improve desert ecosystem understanding, (3) assess the effects of human activities, and (4) provide science-based management recommendations. Using a stratified random survey design, a diverse suite of survey methods will be implemented, including belt and quadrant transects, camera traps, GPS tracking devices, and drones. Data will be gathered on biotic parameters (plant and animal diversity, density, and distribution) and abiotic parameters (humidity, temperature, precipitation, wind, air and soil quality, vibrations, and noise levels) to meet the goals of the monitoring program.
This case study intends to provide a detailed overview of the management plan and monitoring program of two royal reserves and outlines the types of data gathered which can be made available for future research projects.

Keywords: camera traps, desert ecosystem, enhancement, GPS tracking, management evaluation, monitoring, planting, restocking, restoration

Procedia PDF Downloads 91
338 Static Charge Control Plan for High-Density Electronics Centers

Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda

Abstract:

Ensuring a safe environment for sensitive electronics boards in places with tight size limitations poses two major difficulties: the control of charge accumulation in floating floors and the prevention of excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions to prevent them. An experiment was conducted in the control room of a Cherenkov telescope, where six racks of 2x1x1 m size with independent cooling units are located. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate the handling of cables and maintenance. The tests were made by measuring the contact voltage acquired by a person walking along the room in footwear of different qualities. In addition, we measured the voltage accumulated on a person in other situations, such as running or sitting down on and standing up from an office chair. The voltages were recorded in real time with an electrostatic voltmeter and dedicated control software. Peak voltages as high as 5 kV were measured at ambient humidity above 30%, which is within the range of class 3A according to the HBM standard. To complete the results, we repeated the experiment in other spaces with alternative floor types, such as synthetic and earthenware floors, obtaining peak voltages much lower than those measured with the floating synthetic floor. The grounding quality achieved with floating floors can hardly match that typically encountered in standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent overheating of the boards probably contributed significantly to the charge accumulated in the room. During the assessment of the quality of static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is that electrostatic voltmeters may provide different values depending on humidity conditions and ground resistance quality. In addition, the use of certified antistatic footwear might mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe this can be helpful not only for qualifying static charge control in a laboratory but also for assessing any procedure oriented toward minimizing the risk of electrostatic discharge events.

Keywords: electrostatics, ESD protocols, HBM, static charge control

Procedia PDF Downloads 106
337 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and to support flood modelling and prediction. One study showed that, even when using lumped models for flood forecasting, a proper gauge network can significantly improve the results. The existing rainfall network in Johor must therefore be optimized and redesigned in order to meet the level of accuracy required by rainfall data users. The well-known geostatistical variance-reduction method, combined with simulated annealing, was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure is not only dependent on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor was optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November to February) for the period 1975 to 2008. Three different semivariogram models, Spherical, Gaussian and Exponential, were used and their performances compared. Cross-validation was applied to compute the errors, and the results showed that the exponential model is the best semivariogram. It was found that the proposed method yielded a network of 64 rain gauges with the minimum estimated variance; 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to network performance in providing quality data. Therefore, two different cases were considered in this study. In the first case, the removed stations were optimally relocated to new locations to investigate their influence on the estimated variance; the second case explored relocating all 84 existing stations to new locations to determine the optimal positions. The relocations in both cases reduced the estimated variance, proving that location plays an important role in determining the optimal network.
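The three semivariogram models compared above have standard closed forms in lag distance h with nugget c0, partial sill c, and range a. A minimal sketch, with synthetic empirical points rather than the Johor data, shows the kind of goodness-of-fit comparison the cross-validation step performs:

```python
import math

# Standard semivariogram models: gamma(h) for nugget c0, sill c, range a.

def spherical(h, c0, c, a):
    if h >= a:
        return c0 + c
    return c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)

def gaussian(h, c0, c, a):
    return c0 + c * (1.0 - math.exp(-(h / a) ** 2))

def exponential(h, c0, c, a):
    return c0 + c * (1.0 - math.exp(-h / a))

def sse(model, pairs, c0, c, a):
    """Sum of squared errors of a model against empirical (h, gamma) pairs."""
    return sum((model(h, c0, c, a) - g) ** 2 for h, g in pairs)

# Synthetic "empirical" semivariogram generated from an exponential model,
# so the exponential fit should win -- mirroring the study's finding.
pairs = [(h, exponential(h, 0.1, 1.0, 25.0)) for h in range(5, 80, 5)]
errors = {m.__name__: sse(m, pairs, 0.1, 1.0, 25.0)
          for m in (spherical, gaussian, exponential)}
best = min(errors, key=errors.get)
print(best, errors)
```

In the real workflow the parameters c0, c, a are fitted per model before comparison, and the winning model feeds the kriging variance that simulated annealing then minimizes over gauge locations.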

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 274
336 Polar Nanoregions in Lead-Free Relaxor Ceramics: Unveiling through Impedance Spectroscopy

Authors: Mohammed Mesrar, Hamza El Malki, Hamza Mesrar

Abstract:

In this study, ceramics of (1-x)(Na0.5Bi0.5)TiO3 x(K0.5 Bi0.5)TiO3 were synthesized through a conventional calcination process (solid-state method) at 1000°C for 4 hours, with x(%) values ranging from 0.0 to 100. Room temperature XRD patterns confirmed the phase formation of the samples. The Rietveld refinement method was employed to verify the morphotropic phase boundary (MPB) at x(%)=16-20. We investigated the average crystallite size and lattice strain using Scherrer's formula and Williamson-Hall (W-H) analysis. SEM image analyses provided additional evidence of the impact of doping on structural growth under low temperatures. Relaxation time extracted from Z″(f) and M″(f) spectra for x(%) = 0.0, 12, 16, 20, and 30 followed the Arrhenius law, revealing the presence of three distinct relaxation mechanisms with varying activation energies. The shoulder response in M″(f) indirectly indicated the existence of highly polarizable entities in the samples, serving as a signature of polar nanoregions (PNRs) within the grains.
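The Arrhenius analysis used above to extract activation energies from the relaxation times can be sketched as a linear fit of ln(τ) against 1/T, whose slope is Eₐ/k_B. The temperatures, τ₀, and Eₐ below are invented demo values, not the paper's fitted results.

```python
import math

# Arrhenius law for thermally activated relaxation:
#   tau(T) = tau0 * exp(Ea / (kB * T))
# so ln(tau) is linear in 1/T with slope Ea/kB and intercept ln(tau0).

KB_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_fit(temps_K, taus):
    """Least-squares fit of ln(tau) vs 1/T; returns (Ea in eV, tau0 in s)."""
    xs = [1.0 / t for t in temps_K]
    ys = [math.log(tau) for tau in taus]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope * KB_EV, math.exp(my - slope * mx)

# Synthetic relaxation times generated from assumed parameters.
temps = [400.0, 450.0, 500.0, 550.0, 600.0]
true_ea, true_tau0 = 0.85, 1e-13
taus = [true_tau0 * math.exp(true_ea / (KB_EV * T)) for T in temps]
ea, tau0 = arrhenius_fit(temps, taus)
print(f"Ea = {ea:.3f} eV")
```

Repeating this fit on the relaxation times extracted from Z″(f) and from M″(f) at each composition is what yields the three distinct activation energies reported.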

Keywords: (1-x)(Na0.5Bi0.5)TiO3 x(K0.5 Bi0.5)TiO3, Rietveld refinement, Scanning electron microscopy (SEM), Williamson-Hall plots, charge density distribution, dielectric properties

Procedia PDF Downloads 31
335 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis

Authors: Danni Cheng

Abstract:

Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell cancer (HNSCC) survival, but the causality was not consistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients stratified by human papillomavirus (HPV) status, using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipids (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A and apolipoprotein B) were also derived from the GLGC data. Genetic associations of these SNPs with cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted (IVW) method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse-variance weighted hazard ratio (HR IVW), 2.64 [95% CI, 1.28-5.43]; P = 0.01) but better OS in HPV-positive OPSCC patients (HR IVW, 0.11 [95% CI, 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS in both total HNSCC (HR IVW, 4.17 [95% CI, 1.06-16.36]; P = 0.04) and non-HPV-driven HNSCC patients (HR IVW, 7.33 [95% CI, 1.63-32.97]; P = 0.01). Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW, 1.56 [95% CI, 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition was associated with worse OS in total and non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
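The inverse-variance weighted pooling used above combines per-SNP ratio estimates (SNP-outcome effect divided by SNP-exposure effect), each weighted by the precision of the outcome association; on a log-hazard scale the pooled estimate exponentiates to a hazard ratio. A minimal sketch, with invented numbers rather than the VOYAGER data:

```python
import math

# Two-sample MR, inverse-variance weighted (IVW) estimator:
#   ratio_j  = beta_outcome_j / beta_exposure_j
#   weight_j = beta_exposure_j^2 / se_outcome_j^2
#   beta_IVW = sum(w_j * ratio_j) / sum(w_j),  se_IVW = sqrt(1 / sum(w_j))

def ivw_estimate(beta_exp, beta_out, se_out):
    """Return (pooled log-hazard effect per unit exposure, standard error)."""
    w = [bx ** 2 / s ** 2 for bx, s in zip(beta_exp, se_out)]
    ratio = [by / bx for bx, by in zip(beta_exp, beta_out)]
    beta_ivw = sum(wi * ri for wi, ri in zip(w, ratio)) / sum(w)
    se_ivw = (1.0 / sum(w)) ** 0.5
    return beta_ivw, se_ivw

# Invented per-SNP summary statistics for four instruments.
beta_exp = [0.12, 0.08, 0.15, 0.10]   # SNP -> LDL-C associations
beta_out = [0.10, 0.07, 0.13, 0.08]   # SNP -> log(hazard) associations
se_out   = [0.04, 0.05, 0.03, 0.04]   # SEs of the outcome associations

beta, se = ivw_estimate(beta_exp, beta_out, se_out)
print(f"HR = {math.exp(beta):.2f} "
      f"(95% CI {math.exp(beta - 1.96 * se):.2f}-{math.exp(beta + 1.96 * se):.2f})")
```

This fixed-effect form assumes no directional pleiotropy; sensitivity analyses such as MR-Egger relax that assumption.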

Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin

Procedia PDF Downloads 75
334 Implicit U-Net Enhanced Fourier Neural Operator for Long-Term Dynamics Prediction in Turbulence

Authors: Zhijie Li, Wenhui Peng, Zelong Yuan, Jianchun Wang

Abstract:

Turbulence is a complex phenomenon that plays a crucial role in various fields, such as engineering, atmospheric science, and fluid dynamics. Predicting and understanding its behavior over long time scales have been challenging tasks. Traditional methods, such as large-eddy simulation (LES), have provided valuable insights but are computationally expensive. In the past few years, machine learning methods have experienced rapid development, leading to significant improvements in computational speed. However, ensuring stable and accurate long-term predictions remains a challenging task for these methods. In this study, we introduce the implicit U-net enhanced Fourier neural operator (IU-FNO) as a solution for stable and efficient long-term predictions of the nonlinear dynamics in three-dimensional (3D) turbulence. The IU-FNO model combines implicit recurrent Fourier layers to deepen the network and incorporates the U-Net architecture to accurately capture small-scale flow structures. We evaluate the performance of the IU-FNO model through extensive large-eddy simulations of three types of 3D turbulence: forced homogeneous isotropic turbulence (HIT), a temporally evolving turbulent mixing layer, and decaying homogeneous isotropic turbulence. The results demonstrate that the IU-FNO model outperforms other FNO-based models, including vanilla FNO, implicit FNO (IFNO), and U-net enhanced FNO (U-FNO), as well as the dynamic Smagorinsky model (DSM), in predicting various turbulence statistics. Specifically, the IU-FNO model exhibits improved accuracy in predicting the velocity spectrum, probability density functions (PDFs) of vorticity and velocity increments, and instantaneous spatial structures of the flow field. Furthermore, the IU-FNO model addresses the stability issues encountered in long-term predictions, which were limitations of previous FNO models.
In addition to its superior performance, the IU-FNO model offers faster computational speed compared to traditional large-eddy simulations using the DSM model. It also demonstrates generalization capabilities to higher Taylor-Reynolds numbers and unseen flow regimes, such as decaying turbulence. Overall, the IU-FNO model presents a promising approach for long-term dynamics prediction in 3D turbulence, providing improved accuracy, stability, and computational efficiency compared to existing methods.

Keywords: data-driven, Fourier neural operator, large eddy simulation, fluid dynamics

Procedia PDF Downloads 48
333 Zoledronic Acid with Neoadjuvant Chemotherapy in Advanced Breast Cancer Prospective Study 2011–2014

Authors: S. Sakhri

Abstract:

Background: The use of zoledronic acid (ZA) has an established place in the treatment of malignant tumors with a predilection for the skeleton (in particular, bone metastasis). Although the main target of zoledronic acid is osteoclasts, preclinical data suggest that it may have an antitumor effect on cells other than osteoclasts, including tumor cells. Antitumor activity has been demonstrated, including inhibition of tumor cell growth, induction of tumor cell apoptosis, inhibition of tumor cell adhesion and invasion, and anti-angiogenic effects. Methods: From 2012 to 2014, 438 patients meeting the inclusion criteria were included in this prospective study over a four-year period. Of all patients (N=438), 432 received neoadjuvant chemotherapy with zoledronic acid. The primary end point was the pathologic complete response (pCR) in advanced-stage breast cancer. The secondary end points were clinical response according to RECIST criteria, bone density before and at the end of chemotherapy in women with locally advanced breast cancer, toxicity, and overall survival estimated using the Kaplan-Meier method and the log-rank test. Results: The objective response rate was 97% after cycle 4 (C4), with 3% stabilization, and 99.3% after C8, with 0.7% stabilization. The clinical complete response rate was 28% after C4 and 46.8% after C8, and the pathologic complete response rate was 40.13% according to the Sataloff classification. The pathologic complete response rate was highest in the HER2 group (luminal HER2 and HER2) and lowest in the triple-negative group as classified by Sataloff. We found that pCR was highest, at 53.17%, in the 35-50 year age group; patients over 50 years ranked second with 27.7%, and women under 35 years had the lowest pCR at 19%, though the differences were not statistically significant. The pCR also favored the menopausal group, at 51.4%, versus 48.55% for non-menopausal women. Mean overall survival was also significantly longer in the luminal HER2/HER2 subgroup than in the triple-negative group: 47.18 months in the luminal group vs. 38.95 months in the triple-negative group. A difference in quality of life was observed between cycle 1 (C1), at patient admission, and C8: general signs increased and psychological state deteriorated at C1, whereas after C8 these general signs and mental status improved, up to 12 and 24 months. Conclusion: The results of this study suggest that the addition of ZA to neoadjuvant chemotherapy has potential anti-cancer benefit in luminal HER2 and HER2 patients compared with triple-negative patients, regardless of menopausal status.

Keywords: HER2+, RH+, breast cancer, tyrosine kinase

Procedia PDF Downloads 190
332 Understanding the Reasons for Flooding in Chennai and Strategies for Making It Flood Resilient

Authors: Nivedhitha Venkatakrishnan

Abstract:

Flooding in urban areas in India has become a recurring phenomenon and a nightmare for most cities, a consequence of man-made disruption resulting in disaster. City planning in India falls short of withstanding hydrological disasters. This has become a barrier and a challenge to the development driven by urbanization: high population density, expanding informal settlements, and environmental degradation from uncollected and untreated waste flowing into natural drains and water bodies have disrupted natural hazard-protection mechanisms such as drainage channels, wetlands and floodplains. The magnitude and impact of the disaster were high because of the failure of the development policies, strategies and plans the city had adopted. In the current scenario, cities are becoming the home of the future, with economic diversification bringing more investment into cities, especially in the domains of urban infrastructure, planning and design. The uncertain urban future of these low-elevation coastal zones faces unprecedented risk and threat. The study focuses on three major pillars of resilience: recover, resist and restore. Preparing to handle such situations bridges the gap between disaster response management and risk reduction, and requires a paradigm shift. The study involved qualitative research and a system design approach (framework). The initial stages involved mapping the urban water morphology with respect to spatial growth, which gave insight into the water bodies that have gone missing over the years in the process of urbanization. The major finding of the study was that missing links in the traditional water harvesting network were a major factor in a man-made disaster. The research conceptualized a sponge city framework which would guide growth through institutional frameworks at different levels. The next stage concerned the implementation process at various stages to ensure the paradigm shift, demonstrating the concepts at a neighborhood level: where and how each component works, and what its functions and benefits are. Design decisions were quantified in terms of rainwater harvesting and surface runoff: how much water is collected, and how it could be collected, stored and reused. The study closes with recommendations for water mitigation spaces that will revive the traditional harvesting network.
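The neighborhood-scale quantification described above can be sketched with the standard harvest-volume relation: collectable volume ≈ catchment area × rainfall depth × runoff coefficient. The surfaces, coefficients and rainfall depth below are assumed demo values, not Chennai data.

```python
# Rainwater harvesting potential at a neighborhood scale.
# Volume (m^3) = area (m^2) * rainfall (m) * runoff coefficient,
# where the coefficient is the fraction of rain that becomes runoff.

def harvest_volume_m3(area_m2, rainfall_mm, runoff_coeff):
    """Collectable volume in cubic metres for one rainfall depth."""
    return area_m2 * (rainfall_mm / 1000.0) * runoff_coeff

# (name, catchment area in m^2, assumed runoff coefficient)
surfaces = [
    ("rooftops",        12000.0, 0.85),
    ("paved streets",    8000.0, 0.70),
    ("green/open land",  5000.0, 0.15),
]
annual_rain_mm = 1400.0  # assumed annual rainfall for the demo

total = sum(harvest_volume_m3(a, annual_rain_mm, c) for _, a, c in surfaces)
for name, a, c in surfaces:
    print(f"{name:15s}: {harvest_volume_m3(a, annual_rain_mm, c):8.0f} m3/yr")
print(f"total potential : {total:8.0f} m3/yr")
```

The same relation, run per storm rather than per year, gives the surface runoff a mitigation space must detain, which is how harvesting capacity and flood relief trade off in a sponge-city design.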

Keywords: flooding, man made disaster, resilient city, traditional harvesting network, waterbodies

Procedia PDF Downloads 123
331 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space

Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson

Abstract:

Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema, however there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, MRIs do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity and resulting MRI features of GBMs complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), a multi-compartmental MRI signal equation which takes into account tissue compartments and their associated volumes with input coming from a mathematical model of glioma growth that incorporates edema formation was explored. The reasonableness of two possible extracellular space schema was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas were comprised of vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model for generating simulated T2/FLAIR MR images. Individual compartments’ T2 values in the signal equation were either from literature or estimated and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from simulated images. 
T2 values based on simulated images were evaluated for regions of interest (ROIs) in normal appearing white matter, tumor, and peripheral edema, and the ROI T2 values were compared to T2 values reported in the literature. The expanding scheme of extracellular space yielded T2 values similar to the literature values. The static scheme of extracellular space yielded much lower T2 values; no matter what T2 was assigned to the edema compartment, the intensities did not approach literature values. Expanding the extracellular space is therefore necessary to achieve simulated edema intensities commensurate with acquired MRIs.
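The multi-compartmental signal idea described above can be illustrated with a minimal sketch. This is not the authors' signal equation; the compartment names, volume fractions, T2 values, and echo times below are illustrative assumptions, showing only how shifting voxel volume into a long-T2 edema compartment raises the apparent T2:

```python
import math

# Hypothetical compartment T2 values in ms; the edema T2 is the quantity the
# abstract sweeps over a wide range (200-9200 ms). All numbers here are
# illustrative assumptions, not values from the study.
COMPARTMENT_T2 = {"cells": 70.0, "vessels": 200.0, "edema": 1000.0}

def signal(volume_fractions, te):
    """Multi-compartment T2 signal: volume-weighted sum of exponential decays."""
    return sum(v * math.exp(-te / COMPARTMENT_T2[name])
               for name, v in volume_fractions.items())

def effective_t2(volume_fractions, te1=30.0, te2=100.0):
    """Apparent T2 from a two-echo mono-exponential fit of the summed signal."""
    s1, s2 = signal(volume_fractions, te1), signal(volume_fractions, te2)
    return (te2 - te1) / math.log(s1 / s2)

# An expanding extracellular space shifts volume toward the long-T2 edema
# compartment, which raises the apparent T2 of the voxel.
static = {"cells": 0.80, "vessels": 0.05, "edema": 0.15}
expanded = {"cells": 0.50, "vessels": 0.05, "edema": 0.45}
```

Under this toy model, the expanded scheme yields a higher apparent T2 than the static one, mirroring the abstract's conclusion that an expanding extracellular space is needed to reach literature-level edema intensities.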

Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling

Procedia PDF Downloads 211
330 Sustainable Wood Harvesting from Juniperus procera Trees Managed under a Participatory Forest Management Scheme in Ethiopia

Authors: Mindaye Teshome, Evaldo Muñoz Braz, Carlos M. M. Eleto Torres, Patricia Mattos

Abstract:

Sustainable forest management planning requires up-to-date information on the structure, standing volume, biomass, and growth rate of trees in a given forest. This kind of information is lacking for many forests in Ethiopia. The objective of this study was to quantify the population structure, diameter growth rate, and standing volume of wood of Juniperus procera trees in the Chilimo forest. A total of 163 sample plots were set up in the forest to collect the relevant vegetation data. Growth ring measurements were conducted on stem disc samples collected from 12 J. procera trees. Diameter and height measurements were recorded from a total of 1399 individual trees with dbh ≥ 2 cm. The growth rate, maximum current and mean annual increments, minimum logging diameter, and cutting cycle were estimated, and alternative cutting cycles were established. Using these data, the harvestable volume of wood was projected by combining four minimum logging diameters with five cutting cycles, following the stand table projection method. The results show that J. procera trees have an average density of 183 stems ha⁻¹, a total basal area of 12.1 m² ha⁻¹, and a standing volume of 98.9 m³ ha⁻¹. The mean annual diameter growth ranges between 0.50 and 0.65 cm year⁻¹, with an overall mean of 0.59 cm year⁻¹. The population of J. procera trees follows a reverse J-shaped diameter distribution. The maximum current annual increment in volume (CAI) occurred at around 49 years, when trees reached 30 cm in diameter. Trees showed the maximum mean annual increment in volume (MAI) at around 91 years, at a diameter of 50 cm. The simulation analysis revealed that a 40 cm minimum logging diameter (MLD) combined with a 15-year cutting cycle is the best option, showing the largest harvestable volume of wood, the largest volume increments, and a 35% recovery of the initially harvested volume.
It is concluded that the forest is well stocked and holds a large harvestable volume of wood from J. procera trees. This will enable the country to partly meet the national wood demand through domestic production. The use of the current population structure together with diameter growth data from tree ring analysis enables a reliable prediction of the harvestable volume of wood. The developed model provides an indication of the productivity of the J. procera tree population and enables policymakers to develop specific management criteria for wood harvesting.
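The stand table projection logic described above can be sketched in a few lines. This is a simplified illustration rather than the authors' model: only the 0.59 cm year⁻¹ mean increment and the 40 cm MLD / 15-year cutting cycle come from the abstract, and the sample stem diameters are made up:

```python
# Simplified sketch of the stand table projection idea: every stem's diameter
# is advanced by the mean annual increment over one cutting cycle, and stems
# at or above the minimum logging diameter (MLD) become harvestable.
MEAN_INCREMENT = 0.59  # cm/year, overall mean reported in the abstract

def project_diameters(dbh_list, years, increment=MEAN_INCREMENT):
    """Advance each stem's dbh (cm) by the mean annual diameter increment."""
    return [d + increment * years for d in dbh_list]

def harvestable(dbh_list, mld):
    """Count stems meeting or exceeding the minimum logging diameter."""
    return sum(1 for d in dbh_list if d >= mld)

stand = [12.0, 25.0, 33.0, 38.5, 41.0]       # illustrative dbh values in cm
future = project_diameters(stand, years=15)   # one 15-year cutting cycle
```

With a 40 cm MLD, only one of the illustrative stems is harvestable today, while three reach the threshold after one 15-year cycle; a volume equation per diameter class would turn these counts into harvestable volume.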

Keywords: logging, growth model, cutting cycle, minimum logging diameter

Procedia PDF Downloads 61
329 Mitigation of Indoor Human Exposure to Traffic-Related Fine Particulate Matter (PM₂.₅)

Authors: Ruchi Sharma, Rajasekhar Balasubramanian

Abstract:

Motor vehicles emit a number of air pollutants, among which fine particulate matter (PM₂.₅) is of major concern in cities with high population density due to its negative impacts on air quality and human health. Typically, people spend more than 80% of their time indoors. Consequently, human exposure to traffic-related PM₂.₅ in indoor environments has received considerable attention. Most of the public residential buildings in tropical countries are designed for natural ventilation where indoor air quality tends to be strongly affected by the migration of air pollutants of outdoor origin. However, most of the previously reported traffic-related PM₂.₅ exposure assessment studies relied on ambient PM₂.₅ concentrations and thus, the health impact of traffic-related PM₂.₅ on occupants in naturally ventilated buildings remains largely unknown. Therefore, a systematic field study was conducted to assess indoor human exposure to traffic-related PM₂.₅ with and without mitigation measures in a typical naturally ventilated residential apartment situated near a road carrying a large volume of traffic. Three PM₂.₅ exposure scenarios were simulated in this study, i.e., Case 1: keeping all windows open with a ceiling fan on as per the usual practice, Case 2: keeping all windows fully closed as a mitigation measure, and Case 3: keeping all windows fully closed with the operation of a portable indoor air cleaner as an additional mitigation measure. The indoor to outdoor (I/O) ratios for PM₂.₅ mass concentrations were assessed and the effectiveness of using the indoor air cleaner was quantified. Additionally, potential human health risk based on the bioavailable fraction of toxic trace elements was also estimated for the three cases in order to identify a suitable mitigation measure for reducing PM₂.₅ exposure indoors. 
Traffic-related PM₂.₅ levels indoors exceeded the air quality guidelines (12 µg/m³) in Case 1, i.e., under natural ventilation conditions due to advective flow of outdoor air into the indoor environment. However, while using the indoor air cleaner, a significant reduction (p < 0.05) in the PM₂.₅ exposure levels was noticed indoors. Specifically, the effectiveness of the air cleaner in terms of reducing indoor PM₂.₅ exposure was estimated to be about 74%. Moreover, potential human health risk assessment also indicated a substantial reduction in potential health risk while using the air cleaner. This is the first study of its kind that evaluated the indoor human exposure to traffic-related PM₂.₅ and identified a suitable exposure mitigation measure that can be implemented in densely populated cities to realize health benefits.
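The two quantities reported above, the indoor-to-outdoor (I/O) ratio and the air cleaner effectiveness, reduce to simple ratios. A minimal sketch follows; the concentrations in the example are hypothetical, chosen only so that the effectiveness comes out at the reported 74%:

```python
def io_ratio(indoor, outdoor):
    """Indoor/outdoor PM2.5 mass concentration ratio (both in ug/m3)."""
    return indoor / outdoor

def cleaner_effectiveness(c_without, c_with):
    """Percent reduction in indoor PM2.5 attributable to the air cleaner,
    comparing concentrations without (Case 2) and with (Case 3) the cleaner."""
    return 100.0 * (1.0 - c_with / c_without)

# Hypothetical example: 20.0 ug/m3 indoors without the cleaner, 5.2 with it,
# giving a 74% effectiveness as reported in the abstract.
effectiveness = cleaner_effectiveness(20.0, 5.2)
```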

Keywords: fine particulate matter, indoor air cleaner, potential human health risk, vehicular emissions

Procedia PDF Downloads 103
328 Dealing with the Spaces: Ultra Conservative Approach from Childhood to Adulthood

Authors: Maryam Firouzmandi, Moosa Miri

Abstract:

Common reasons for early tooth loss are trauma, extraction due to caries or periodontal disease, and congenital absence. The space remaining after tooth loss may cause functional and esthetic problems, so restorative dentists should attempt to manage these spaces using conservative methods. The goal is to restore the lost esthetics and function and to prevent phonetic, self-esteem and personality problems as well as tongue habits. Preserving alveolar bone is also of great importance during the growth stage. Purpose: When deciding how to manage a missing tooth space, implants are contraindicated until the completion of dentoalveolar development. Even in adulthood, an implant might not be indicated because of systemic or periodontal problems or biological and economic issues. In this article, alternative conservative restorative methods of space maintenance are discussed. Essix retainers are made chair-side as easily as forming a custom bleaching tray, with some modifications. They are esthetically acceptable and inexpensive. These temporaries provide support for the lips but cannot be used during function. Mini-screw-supported temporaries are another option for maintaining the space, especially when there is a time lag between the end of orthodontic treatment and the definitive restoration. Two techniques will be presented for this kind of restoration: a denture tooth pontic or a composite crown. The benefits are alveolar bone preservation and physiologic pressure on the alveolar ridge that increases its density; such temporaries can even be retained until the completion of the definitive treatment. Bonded fixed partial dentures include the Maryland bridge, fiber-reinforced composite bridge, resin-bonded bridge, and ceramic bonded bridge. These types of bridges are recommended for use after the pubertal growth spurt, and a recent meta-analysis considered their clinical success similar to that of conventional FDPs and implant-supported crowns.
However, they have several advantages that are going to be discussed by presenting some clinical examples. Practical instruction on how to construct an FRC bridge and a novel chair-side Maryland bridge will be given by means of clinical cases. Clinical relevance: minimally invasive options should always be considered and destruction of healthy enamel and dentin during the preparation phase should be avoided as much as possible.

Keywords: tooth missing, fiber-reinforced composite, Maryland, Essix retainers, screw-retained restoration

Procedia PDF Downloads 174
327 Influence of Controlled Retting on the Quality of the Hemp Fibres Harvested at the Seed Maturity by Using a Designed Lab-Scale Pilot Unit

Authors: Brahim Mazian, Anne Bergeret, Jean-Charles Benezet, Sandrine Bayle, Luc Malhautier

Abstract:

Hemp fibers are increasingly used as reinforcements in polymer matrix composites due to their competitive performance (low density, good mechanical properties and biodegradability) compared to conventional fibres such as glass fibers. However, the huge variation in their biochemical, physical and mechanical properties limits the use of these natural fibres in structural applications where high consistency and homogeneity are required. In the hemp industry, a traditional process termed field retting is commonly used to facilitate the extraction and separation of stem fibers. This retting treatment consists of spreading the stems out on the ground for a duration ranging from a few days to several weeks. Microorganisms (fungi and bacteria) grow on the stem surface and produce enzymes that degrade the pectic substances in the middle lamellae surrounding the fibers. The operation depends on the weather and is currently carried out very empirically in the fields, resulting in large variability in hemp fiber quality (mechanical properties, color, morphology, chemical composition…). If controlled, however, retting could promote good properties in hemp fibers and hence in hemp fiber reinforced composites. The present study therefore investigates the influence of controlled retting, within a designed environmental chamber (lab-scale pilot unit), on the quality of hemp fibres harvested at the seed maturity growth stage. Various assessments were applied directly to the fibers: color observations, morphological (optical microscope) and surface (ESEM) analyses, biochemical (gravimetry) analysis, spectrocolorimetric measurements (pectin content), thermogravimetric analysis (TGA) and tensile testing. The results reveal that controlled retting leads to a rapid change of color from yellow to dark grey due to the development of microbial communities (fungi and bacteria) at the stem surface.
An increase in the thermal stability of the fibres, due to the removal of non-cellulosic components over the course of retting, is also observed. Separation of bast fiber bundles into elementary fibers occurred, with an evolution of chemical composition (degradation of pectins) and a rapid decrease in tensile properties (from 380 MPa to 170 MPa after 3 weeks) due to the accelerated retting process. The influence of controlled retting on the properties of the biocomposite material (PP/hemp fibers) is under investigation.

Keywords: controlled retting, hemp fibre, mechanical properties, thermal stability

Procedia PDF Downloads 130
326 Application of Multilinear Regression Analysis for Prediction of Synthetic Shear Wave Velocity Logs in Upper Assam Basin

Authors: Triveni Gogoi, Rima Chatterjee

Abstract:

Shear wave velocity (Vs) estimation is an important step in seismic exploration and in the characterization of a hydrocarbon reservoir. Several methods exist for predicting S-wave velocity when a recorded S-wave log is not available, but all of them are empirical mathematical models. The most common approach is to estimate shear wave velocity from P-wave velocity using Castagna's equation, whose constants vary with lithology and geological setting. In this study, multiple regression analysis was used to estimate S-wave velocity. The EMERGE module of the Hampson-Russell software was used to generate the S-wave logs. Both single-attribute and multi-attribute analyses were carried out to generate synthetic S-wave logs in the Upper Assam basin. The Upper Assam basin, situated in northeastern India, is one of the country's most important petroleum provinces. The present study was carried out using four wells in the study area; S-wave velocity was available for three of them. The main objective of the present study is to predict shear wave velocity for wells where S-wave velocity information is not available. The three wells with S-wave velocity were first used to test the reliability of the method, and the generated S-wave log was compared with the actual S-wave log. Single-attribute analysis was carried out for these three wells within the depth range 1700-2100 m, which corresponds to the Oligocene Barail Group, the main target zone of this study and the primary producing reservoir of the basin. The software generated a list of attributes with varying degrees of correlation, and the attribute with the highest correlation was selected for the single-attribute analysis. Crossplots between the attributes show the scatter of the data points about the line of best fit.
The final result of the analysis was compared with the available S-wave log and shows a good visual fit, with a correlation of 72%. Next, multi-attribute analysis was carried out for the same data using all the wells within the same analysis window. A high correlation of 85% was observed between the output log from the analysis and the recorded S-wave log. The close fit between the synthetic and recorded S-wave logs validates the reliability of the method. For further authentication, the generated S-wave logs from the wells were tied to the seismic data and correlated. A synthetic shear wave log was then generated for well M2, where no S-wave log is available, and it shows a good correlation with the seismic data. Neutron porosity, density, acoustic impedance (AI) and P-wave velocity proved to be the most significant variables in this statistical method for S-wave generation. Multilinear regression can thus be considered a reliable technique for generating shear wave velocity logs in this study area.
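The core of the multi-attribute step is an ordinary least-squares multilinear fit of Vs against several log attributes. The sketch below solves the normal equations directly with naive Gaussian elimination; it is not the EMERGE workflow, and the attribute values are invented purely to show the mechanics:

```python
def fit_multilinear(X, y):
    """Least-squares fit of y = b0 + b1*x1 + ... via the normal equations,
    solved with Gaussian elimination (fine for a handful of attributes)."""
    rows = [[1.0] + list(x) for x in X]  # prepend intercept column
    n = len(rows[0])
    # Build A = X'X and b = X'y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(n)]
    # Forward elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

def predict(coeffs, x):
    """Evaluate the fitted regression at one attribute vector."""
    return coeffs[0] + sum(c * xi for c, xi in zip(coeffs[1:], x))

# Toy data, exactly y = 2 + 3*x1 - x2; in practice the columns would be
# attributes such as P-wave velocity, density, neutron porosity and AI.
X = [(1.0, 1.0), (2.0, 0.0), (0.0, 3.0), (4.0, 1.0), (2.0, 2.0)]
y = [4.0, 8.0, -1.0, 13.0, 6.0]
coeffs = fit_multilinear(X, y)
```

Because the toy data are exactly linear, the fit recovers the generating coefficients; with real well-log attributes the residual scatter is what the 72% and 85% correlations quantify.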

Keywords: Castagna's equation, multi linear regression, multi attribute analysis, shear wave logs

Procedia PDF Downloads 201
325 An Audit on the Role of Sentinel Node Biopsy in High-Risk Ductal Carcinoma in Situ and Intracystic Papillary Carcinoma

Authors: M. Sulieman, H. Arabiyat, H. Ali, K. Potiszil, I. Abbas, R. English, P. King, I. Brown, P. Drew

Abstract:

Introduction: The incidence of breast ductal carcinoma in situ (DCIS) has been increasing; it currently represents up to 20-25% of all breast carcinomas. Some aspects of DCIS management are still controversial, mainly due to the heterogeneity of its clinical presentation and of its biological and pathological characteristics. In DCIS, a histological diagnosis obtained preoperatively carries a risk of sampling error, with invasive cancer subsequently diagnosed in some cases. A mammographic extent of more than 4–5 cm and the presence of architectural distortion, focal asymmetric density or a mass on mammography are proven risk factors for preoperative histological understaging. Intracystic papillary cancer (IPC) is a rare form of breast carcinoma. Although previously likened to DCIS, it has been shown to present histologically with invasion of the basement membrane and even metastasis. Sentinel lymph node biopsy (SLNB) carries a risk of associated comorbidity that should be considered when planning surgery for DCIS and IPC. Objectives: The aim of this audit was to better define a 'high-risk' group of patients with a preoperative diagnosis of non-invasive cancer undergoing breast conserving surgery (BCS) who would benefit from sentinel node biopsy. Method: Retrospective data collection for all patients with ductal carcinoma in situ over 5 years; 636 patients were identified and, after exclusion criteria were applied, 394 were included. High risk was defined as extensive micro-calcification >40 mm or any mass-forming DCIS. IPC: a Winpath search for the term 'papillary carcinoma' in any breast specimen over a 5-year period; 29 patients were included in this group. Results: DCIS: 188 patients were deemed high risk due to >40 mm calcification or mass-forming disease (radiological or palpable); 61% of those had a mastectomy and 32% BCS. Overall, 38% of the high-risk group had invasive disease. Of the high-risk DCIS patients, 85% had an SLNB: 80% at the time of surgery and 5% at a second operation.
Of the BCS patients, 42% had an SLNB at the time of surgery and 13% (8 patients) at a second operation. Fifteen patients (7.9%) in the high-risk group had a positive SLNB; 11 had a mastectomy and 4 had BCS. IPC: the provisional diagnosis of encysted papillary carcinoma was upgraded to an invasive carcinoma on final histology in around a third of cases. This may have implications when deciding whether to offer sentinel node removal at the time of therapeutic surgery. Conclusions: We have defined a 'high-risk' group of patients with a preoperative diagnosis of non-invasive cancer undergoing BCS who would benefit from SLNB at the time of surgery. In patients with high-risk features, the risk of invasive disease is up to 40%, but the risk of nodal involvement is approximately 8%, while the risk of morbidity from SLNB, particularly lymphedema, is up to about 5%.

Keywords: breast ductal carcinoma in Situ (DCIS), intracystic papillary carcinoma (IPC), sentinel node biopsy (SLNB), high-risk, non-invasive, cancer disease

Procedia PDF Downloads 83
324 Prediction of Sound Transmission Through Framed Façade Systems

Authors: Fangliang Chen, Yihe Huang, Tejav Deganyar, Anselm Boehm, Hamid Batoul

Abstract:

With growing population density and further urbanization, the average noise level in cities is increasing. Excessive noise is not only annoying but also has a negative impact on human health. To deal with increasing city noise, environmental regulations set higher standards for acoustic comfort in buildings by requiring mitigation of noise transmission from the building envelope exterior to the interior. Framed window, door and façade systems are the leading choice for modern fenestration construction, offering demonstrated weathering reliability, environmental efficiency, and ease of installation. The overall sound insulation of such systems depends on both the glass and the frame. Glass usually covers the majority of the exposed surface and is therefore the main path of sound energy transmission. Frames in modern façade systems are becoming slimmer for aesthetic reasons and contribute only a small percentage of the exposed surface. Nevertheless, because far less mass lies across the path, frames can provide substantial transmission paths for sound and thus become the limiting factor in the acoustic performance of the whole system. Various methodologies and numerical programs can accurately predict the acoustic performance of either glasses or frames. However, due to the vast difference in size and dimension between frame and glass in the same system, there is no satisfactory theoretical approach or affordable simulation tool in current practice to assess the overall acoustic performance of a whole façade system. For this reason, laboratory testing turns out to be the only reliable source. However, laboratory testing is time-consuming and costly, and different laboratories may report slightly different results owing to variations in test chambers, sample mounting, and test operation, which significantly constrains the early-phase design of framed façade systems.
To address this dilemma, this study provides an effective analytical methodology for predicting the acoustic performance of framed façade systems, based on a large body of acoustic test results on glasses, frames, and whole façade systems consisting of both. Further test results validate that the current model accurately predicts the overall sound transmission loss of a framed system as long as the acoustic behavior of the frame is available. Though the presented methodology was developed mainly from façade systems with aluminum frames, it can easily be extended to systems with frames of other materials such as steel, PVC or wood.
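The abstract does not state the authors' formula, but a common starting point for combining glass and frame contributions is the standard area-weighted composite transmission loss: convert each element's TL to a transmission coefficient, average by area, and convert back. A minimal sketch, with illustrative areas and single-number TL values:

```python
import math

def composite_tl(elements):
    """Area-weighted composite transmission loss in dB.

    elements: iterable of (area_m2, tl_db) pairs, e.g. glass and frame.
    Each TL is converted to a transmission coefficient tau = 10**(-TL/10),
    the coefficients are averaged by area, and the result converted back.
    """
    total_area = sum(area for area, _ in elements)
    tau = sum(area * 10.0 ** (-tl / 10.0) for area, tl in elements) / total_area
    return -10.0 * math.log10(tau)

# Illustrative system: 3.6 m2 of glass at TL 42 dB, 0.4 m2 of frame at 30 dB.
# The weaker frame drags the composite below the glass-only value even though
# it is only 10% of the area, which is the effect the abstract describes.
tl_system = composite_tl([(3.6, 42.0), (0.4, 30.0)])
```

A single element reproduces its own TL, and any weaker element pulls the composite strictly between the two values, so slim but acoustically weak frames can dominate the result.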

Keywords: city noise, building facades, sound mitigation, sound transmission loss, framed façade system

Procedia PDF Downloads 27