Search results for: city cost function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13548

9168 The Effect of Bilingualism on Prospective Memory

Authors: Aslı Yörük, Mevla Yahya, Banu Tavat

Abstract:

It is well established that bilinguals outperform monolinguals on executive function tasks. However, the effects of bilingualism on prospective memory (PM), which also requires executive functions, have not been investigated extensively. This study aimed to compare bilingual and monolingual participants' PM performance on focal and non-focal PM tasks. Considering that bilinguals have greater executive function abilities than monolinguals, we predicted that bilinguals’ PM performance would be higher than that of monolinguals on the non-focal PM task, which requires controlled monitoring processes. To investigate these predictions, we administered the focal and non-focal PM tasks and measured PM and ongoing task performance. Forty-eight Turkish-English bilinguals residing in North Macedonia and forty-eight Turkish monolinguals living in Turkey, aged 18-30, participated in the study. They were instructed to remember to respond to rarely appearing PM cues while engaged in an ongoing task, i.e., a spatial working memory task. The focality of the task was manipulated by giving different instructions for PM cues. In the focal PM task, participants were asked to remember to press the enter key whenever a particular target stimulus appeared in the working memory task; in the non-focal PM task, instead of responding to a specific target shape, participants were asked to remember to press the enter key whenever the background color of the working memory trials changed to a specific color (yellow). To analyze the data, we performed a 2 × 2 mixed factorial ANOVA with task (focal versus non-focal) as a within-subject variable and language group (bilinguals versus monolinguals) as a between-subject variable. The results showed no direct evidence for a bilingual advantage in PM: the groups did not differ in PM accuracy or ongoing task accuracy. However, bilinguals were faster overall in the ongoing task, although this effect was not specific to the focality of the PM cue. Moreover, the results showed a reversed effect of PM cue focality on ongoing task performance; that is, both bilinguals and monolinguals showed enhanced performance in the non-focal PM cue task. These findings raise skepticism about prevalent findings and theoretical explanations in the literature. Future studies should investigate possible alternative explanations.
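
For readers who wish to reproduce this kind of analysis, a minimal sketch of a 2 × 2 mixed factorial ANOVA in Python (using the pingouin package) is given below; the data frame, column names, and accuracy values are illustrative assumptions, not the study's data.

```python
# Minimal sketch of the 2 x 2 mixed factorial ANOVA design described above
# (task: within-subject factor, language group: between-subject factor).
# All values below are hypothetical.
import pandas as pd
import pingouin as pg

# Long-format data: one row per participant per task condition.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "group": ["bilingual"] * 6 + ["monolingual"] * 6,
    "task": ["focal", "non-focal"] * 6,
    "pm_accuracy": [0.92, 0.78, 0.88, 0.74, 0.90, 0.81,
                    0.89, 0.70, 0.86, 0.72, 0.91, 0.76],
})

# Mixed ANOVA: 'task' is the repeated (within) factor, 'group' the between factor.
aov = pg.mixed_anova(data=df, dv="pm_accuracy", within="task",
                     subject="subject", between="group")
print(aov.round(3))
```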

Keywords: bilingualism, executive functions, focality, prospective memory

Procedia PDF Downloads 106
9167 Developing City-Level Sustainability Indicators in the MENA Region with the Case of Benghazi and Amman

Authors: Serag El Hegazi

Abstract:

The development of an assessment methodological framework for local and institutional sustainability is a key factor for future development plans and visions. This paper develops an approach to local and institutional sustainability assessment (ALISA). ALISA is a methodological framework that assists in the clarification, formulation, preparation, selection, and ranking of key indicators to facilitate the assessment of the level of sustainability at the local and institutional levels in North African and Middle Eastern cities. Based on the literature review, this paper formulates the ALISA framework as a combination of the UNCSD (2001) theme indicators framework and the issue-based framework illustrated by McLaren (1996). The framework has been implemented to formulate, select, and prioritise key indicators that most directly reflect the issues of a case study at the local community and institutional level. At present, however, there is a lack of clear indicators and frameworks that can be applied successfully at the local and institutional levels in the MENA region, particularly in the cities of Benghazi and Amman, and this is an essential issue for estimating sustainable development. Therefore, a conceptual framework was developed and tested as a methodology to collect and classify data. The main goal is to develop the ALISA framework to formulate, choose, and prioritize key sustainability indicators, which can then guide the assessment process and improve decision- and policymaking towards the development of sustainable cities at the local and institutional level in the city of Benghazi. The conceptual and methodological framework supporting this research combines documentary evidence and analysed data from two case studies, including focus-group discussions, semi-structured interviews, and questionnaires, and reflects the approach required to develop a combined framework for deriving sustainability indicators. To achieve this and reach the aim of this paper, which is to develop a practical sustainability indicators framework that can be used as a tool at the local and institutional level, the following stages are applied to propose a set of local and institutional sustainability indicators: step one, clarification of issues; step two, formation of objectives and analysis of issues and boundaries; step three, preparation of indicators and a first list of proposed indicators; step four, indicator selection; step five, indicator rating and ranking.

Keywords: sustainability indicators, approach to local and institutional level, ALISA, policymakers

Procedia PDF Downloads 14
9166 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, Jose R. Pérez-Correa

Abstract:

Emamectin benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal EB doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as spray drying (SD) and ionic gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to EB. In addition, alginate (ALG) is a widely used polymer in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%), and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles during gastric (GD%) and intestinal (ID%) digestion. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer, and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed a lower EB release under gastric conditions while permitting a larger release during intestinal digestion. Two approaches were used to determine this: the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques maintained the integrity of EB at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs owing to a 60% reduction in EB relative to the optimal EB amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD, and ID showed 9.5%, 84.8%, and 2.6% lower values than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
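
As an illustration of the desirability approach (DA) mentioned above, the following sketch combines several microencapsulation responses into a single overall desirability using the standard Derringer-Suich transformations; the response values, bounds, and targets are invented for the example and are not the optimized formulations reported here.

```python
import numpy as np

def d_larger_is_better(y, low, target, s=1.0):
    """Desirability for responses to maximize (e.g., Y%, EE%, ID%)."""
    d = (y - low) / (target - low)
    return float(np.clip(d, 0.0, 1.0)) ** s

def d_smaller_is_better(y, target, high, s=1.0):
    """Desirability for responses to minimize (e.g., gastric release GD%)."""
    d = (high - y) / (high - target)
    return float(np.clip(d, 0.0, 1.0)) ** s

# Hypothetical responses for one candidate formulation.
responses = {
    "Y":  d_larger_is_better(72.0, low=40.0, target=90.0),    # yield %
    "EE": d_larger_is_better(85.0, low=50.0, target=95.0),    # encapsulation efficiency %
    "GD": d_smaller_is_better(8.0, target=5.0, high=30.0),    # gastric release %
    "ID": d_larger_is_better(60.0, low=20.0, target=80.0),    # intestinal release %
}

# Overall desirability = geometric mean of the individual desirabilities.
D = np.prod(list(responses.values())) ** (1.0 / len(responses))
print(f"overall desirability D = {D:.3f}")
```

In the desirability approach, the factor settings that maximize D over the fitted response-surface models would then be selected.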

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 128
9165 Arginase Enzyme Activity in Human Serum as a Marker of Cognitive Function: The Role of Inositol in Combination with Arginine Silicate

Authors: Katie Emerson, Sara Perez-Ojalvo, Jim Komorowski, Danielle Greenberg

Abstract:

The purpose of this study was to evaluate arginase activity levels in response to combinations of an inositol-stabilized arginine silicate (ASI; Nitrosigine®), L-arginine, and inositol. Arginine acts as a vasodilator that promotes increased blood flow, resulting in enhanced delivery of oxygen and nutrients to the brain and other tissues. ASI alone has been shown to improve performance on cognitive tasks. Arginase, found in human serum, catalyzes the conversion of arginine to ornithine and urea, completing the last step in the urea cycle. Decreasing arginase levels maintains arginine and results in increased nitric oxide production. This study aimed to determine the most effective combination of ASI, L-arginine, and inositol for minimizing arginase levels and therefore maximizing ASI’s effect on cognition. Serum was taken from untreated healthy donors by separation from clotted factors. Arginase activity of serum in the presence or absence of test products was determined (QuantiChrom™, DARG-100, Bioassay Systems, Hayward, CA). The remaining ultra-filtrated serum units were harvested and used as the source of the arginase enzyme. ASI alone or combined with varied levels of inositol was tested as follows: ASI + inositol at 0.25 g, 0.5 g, 0.75 g, or 1.00 g. L-arginine was also tested as a positive control. All tests elicited changes in arginase activity, demonstrating the efficacy of the method used. Adding L-arginine to serum from untreated subjects, with or without inositol, had only a mild effect. Adding inositol at all levels reduced arginase activity. Adding 0.5 g to the standardized amount of ASI led to the lowest arginase activity compared with the 0.25 g, 0.75 g, or 1.00 g doses of inositol or with L-arginine alone. The outcome of this study demonstrates an interaction between inositol and ASI on the activity of the enzyme arginase. We found that neither the maximum nor the minimum amount of inositol tested in this study led to maximal arginase inhibition. Since the inhibition of arginase activity is desirable for product formulations aiming to maintain arginine levels, the intermediate, most effective amount of inositol was deemed preferable. Subsequent studies suggest that this moderate level of inositol in combination with ASI leads to cognitive improvements, including in reaction time, executive function, and concentration.

Keywords: arginine, inositol, arginase, cognitive benefits

Procedia PDF Downloads 106
9164 Study of a Crude Oil Desalting Plant of the National Iranian South Oil Company in Gachsaran by Using Artificial Neural Networks

Authors: H. Kiani, S. Moradi, B. Soltani Soulgani, S. Mousavian

Abstract:

Desalting/dehydration plants (DDP) are often installed in crude oil production units in order to remove water-soluble salts from an oil stream. In order to optimize this process, the desalting unit should be modeled. In this research, an artificial neural network is used to model the efficiency of the desalting unit as a function of its input parameters. The results of this research show that the proposed model is in good agreement with experimental data.
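
The abstract does not specify the network architecture, so the sketch below only illustrates the general idea of fitting desalting efficiency as a function of operating inputs with a small feed-forward neural network; the feature names, data, and layer size are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical operating inputs: [temperature, wash-water ratio, demulsifier dose, voltage]
X = np.array([
    [55, 4.0, 10, 16], [60, 5.0, 12, 18], [65, 6.0, 14, 20],
    [70, 5.5, 11, 22], [75, 6.5, 15, 24], [80, 7.0, 13, 26],
], dtype=float)
# Hypothetical desalting efficiency (%) measured at each operating point
y = np.array([82.0, 85.5, 88.0, 90.5, 92.0, 93.5])

# Small feed-forward network with standardized inputs
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                 max_iter=5000, random_state=0),
)
model.fit(X, y)

# Predicted efficiency for a new (hypothetical) operating point
print(model.predict([[68, 6.0, 12, 21]]))
```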

Keywords: desalting unit, crude oil, neural networks, simulation, recovery, separation

Procedia PDF Downloads 440
9163 Stabilized Earth Roads Construction and Its Challenges

Authors: Mokhtar Nikgoo

Abstract:

Road definition and road construction: in the engineering literature, a road is defined as a means of communication between two different places by air, land, or sea. In this sense, all sea, land, and air routes are considered roads. Road construction is an operation to create a road on the ground between two points with a specified width, which includes works such as subgrade preparation, paving, and the placement of sign boards and traffic signs along the road. In this article, the stages of road construction are explained from beginning to end. Road construction is generally carried out for rural, urban, and inter-city roads, and given the special conditions of this field, the precision of engineers in design and calculation is very important. For example, if the design of a road does not pay enough attention to the way the road curves, there will undoubtedly be countless accidents. Likewise, road surface levelling, durability, and uniformity are among the issues that engineers must resolve according to the obstacles encountered.

Keywords: road construction, surveying, freeway, pavement, excavator

Procedia PDF Downloads 85
9162 Clay Mineralogy of Mukdadiya Formation in Shewasoor Area: Northeastern Kirkuk City, Iraq

Authors: Abbas R. Ali, Diana A. Bayiz

Abstract:

Fourteen mudstone samples were collected from the sedimentary succession of the Mukdadiya Formation (Late Miocene – Early Pliocene) in the Shewasoor area of northeastern Iraq. The samples were subjected to laboratory studies, including mineralogical analysis using the X-ray diffraction (XRD) technique, in order to identify both the clay and non-clay mineralogy of the Mukdadiya Formation. The non-clay minerals identified are quartz, feldspar, and carbonate minerals (calcite and dolomite). The clay minerals, identified by the major basal reflections of each mineral, are montmorillonite, kaolinite, palygorskite, chlorite, and illite. The origins of these minerals are also deduced.

Keywords: Mukdadiya Formation, mudstone, clay minerals, XRD, Shewasoor

Procedia PDF Downloads 342
9161 To Include or Not to Include: Resolving Ethical Concerns over the 20% High Quality Cassava Flour Inclusion in Wheat Flour Policy in Nigeria

Authors: Popoola I. Olayinka, Alamu E. Oladeji, B. Maziya-Dixon

Abstract:

Cassava, an indigenous crop grown locally by subsistence farmers in Nigeria, has the potential to bring economic benefits to the country. Consumption of bread and other confectioneries has been on the rise due to lifestyle changes among Nigerian consumers. However, wheat, the major ingredient in bread and confectionery production, does not thrive well under the Nigerian climate, hence the huge spending on wheat importation. To reduce this spending, the Federal Government of Nigeria intends to pass into law the mandatory inclusion of 20% high-quality cassava flour (HQCF) in wheat flour. While the proposed policy may reduce post-harvest losses of cassava and also increase food security and domestic agricultural productivity, there are downsides to the policy, which include a reduction in nutritional quality and low sensory appeal of cassava-wheat bread, reluctance of flour millers to use HQCF, and technology and processing challenges, among others. The policy thus presents an ethical dilemma which must be resolved for its successful implementation. While the inclusion of HQCF in wheat flour for bread and confectionery is a topic that may have been well addressed, resolving the ethical dilemma resulting from it has not received much attention. This paper attempts to resolve this dilemma using various approaches in food ethics (cost-benefit, utilitarian, deontological, and deliberative). The cost-benefit approach did not provide an adequate resolution of the dilemma, as all the costs and benefits of the policy could not be stated in quantitative terms. The utilitarian approach suggests that the policy delivers the greatest good to the greatest number, while the deontological approach suggests that the act (inclusion of HQCF in wheat flour) is right and hence the policy is not utterly wrong. The deliberative approach suggests a win-win situation through deliberation with the parties involved.

Keywords: HQCF, ethical dilemma, food security, composite flour, cassava bread

Procedia PDF Downloads 403
9160 The Instruction of Imagination: A Theory of Language as a Social Communication Technology

Authors: Daniel Dor

Abstract:

The research presents a new general theory of language as a socially constructed communication technology, designed by cultural evolution for a very specific function: the instruction of imagination. As opposed to all other systems of intentional communication, which provide materials for the interlocutors to experience, language allows speakers to instruct their interlocutors in the process of imagining the intended meaning, instead of experiencing it. It is thus the only system that bridges the experiential gaps between speakers. This is the key to its enormous success.

Keywords: experience, general theory of language, imagination, language as technology, social essence of language

Procedia PDF Downloads 579
9159 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon levels precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network model with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
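
The MATLAB implementation is not detailed in the abstract, so the following Python sketch only illustrates the general idea of a neural network with fixed random hidden weights and sigmoid activation, in which only the output layer is fitted by least squares; the predictors and radon values are simulated, not the Canadian or Swedish data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical predictors: [house age, foundation type code, ventilation rate, soil uranium index]
X = rng.normal(size=(200, 4))
# Hypothetical indoor radon concentrations (Bq/m^3) with a nonlinear dependence plus noise
y = 100 + 40 * np.tanh(X[:, 0]) + 25 * X[:, 3] ** 2 + rng.normal(scale=5, size=200)

# Hidden layer with fixed random weights (not trained), sigmoid activation
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = sigmoid(X @ W + b)

# Only the output weights are fitted, by ordinary least squares
design = np.column_stack([H, np.ones(len(H))])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

# In-sample fit as a simple accuracy measure
y_hat = design @ beta
print("R^2 =", 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2))
```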

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 91
9158 Developing Offshore Energy Grids in Norway as Capability Platforms

Authors: Vidar Hepsø

Abstract:

The energy and oil companies on the Norwegian continental shelf come from a situation in which each asset controls and manages its own energy supply (island mode) and are moving towards a situation in which the assets need to collaborate and coordinate energy use with others, due to the increased cost and scarcity of electric energy, sharing the energy that is provided. Currently, several areas are electrified either with an onshore grid cable or receive intermittent energy from offshore wind parks. While the onshore grid in Norway is well regulated, the offshore grid is still in the making, with several oil and gas electrification projects and offshore wind developments just started. The paper describes the shift in mindset that comes with operating this new offshore grid. This transition process heralds an increase in collaboration across boundaries and integration of energy management across companies, businesses, and technical disciplines, as well as engagement with stakeholders in the larger society. This transition is described as a function of the new challenges arising from the increased complexity of the energy mix (wind, oil/gas, hydrogen, and others) coupled with increased technical and organizational complexity in energy management. Organizational complexity denotes an increasing integration across boundaries, whether these boundaries are companies, vendors, professional disciplines, regulatory regimes and bodies, businesses, or the numerous societal stakeholders. New practices must be developed, legitimated, and institutionalized across these boundaries. Only parts of this complexity can be mitigated technically, e.g., by the use of batteries, mixed energy systems, and simulation/forecasting tools. Many challenges must be mitigated with legitimate and institutionalized governance practices on many levels. Offshore electrification supports Norway’s 2030 climate targets but is also controversial, since it exploits the larger society’s energy resources. This means that new systems and practices must not only be transparent for the industry and the authorities, but must also be acceptable and just for the larger society. The paper reports on ongoing work in Norway, drawing on participant observation and interviews in projects and with people working on offshore grid development in Norway. The first case presented is the development of an offshore floating wind farm connected to two offshore installations; the second case is an offshore grid development initiative providing six installations with electric energy via an onshore cable. The development of the offshore grid is analyzed using a capability platform framework that describes the technical, competence, work process, and governance capabilities under development in Norway. A capability platform is a ‘stack’ with the following layers: intelligent infrastructure; information and collaboration; knowledge sharing and analytics; and, finally, business operations. The need for better collaboration and energy forecasting tools/capabilities in this stack is given special attention in the two use cases presented.

Keywords: capability platform, electrification, carbon footprint, control rooms, energy forecasting, operational model

Procedia PDF Downloads 63
9157 Mathematical Modeling and Analysis of COVID-19 Pandemic

Authors: Thomas Wetere

Abstract:

Background: Coronavirus disease 2019 (COVID-19) is a severe infectious disease with highly transmissible variants, and it has become a major global public health threat. It has taken the lives of more than 4 million people so far. What makes the disease particularly challenging is that no specific effective treatment is available and its dynamics are not well researched or understood. Methodology: To end the global COVID-19 pandemic, implementation of multiple population-wide strategies, including vaccination, environmental factors, government action, testing, and contact tracing, is required. In this article, a new mathematical model incorporating both temperature and government action to study the dynamics of the COVID-19 pandemic has been developed and comprehensively analysed. The model considers eight stages of infection: susceptible (S), infected asymptomatic and undetected (IAU), infected asymptomatic and detected (IAD), infected symptomatic and undetected (ISU), infected symptomatic and detected (ISD), hospitalized or threatened (H), recovered (R), and dead (D). Results: The existence and non-negativity of the solution to the model are verified, and the basic reproduction number is calculated. Besides, stability conditions are checked, and finally, simulation results are compared with real data. The results demonstrate that effective government action will need to be combined with vaccination to end the ongoing COVID-19 pandemic. Conclusion: Vaccination and government action are crucial measures for controlling the COVID-19 pandemic. Besides, as the cost of vaccination might be high, we recommend optimal control to reduce the cost and the number of infected individuals. Moreover, in order to prevent the COVID-19 pandemic, the analysis of the model indicates that the government must strictly manage and carry out its COVID-19 policy. This, in turn, supports health campaigning and raises health literacy, which plays a role in controlling the rapid spread of the disease. We strongly believe that our study will play its own role in the current effort to control the pandemic.
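
The paper's equations are not reproduced in the abstract, so the sketch below only shows the structure of an eight-compartment ODE model using the stages named above; the transmission, splitting, and recovery parameters, and the assumption that only undetected cases transmit, are illustrative choices rather than the authors' estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the study's values)
beta = 0.35      # transmission rate, damped by government action g in [0, 1]
g = 0.4          # strength of government action
p_asym = 0.4     # fraction of infections that are asymptomatic
p_det = 0.3      # fraction of infections that are detected
sigma = 0.10     # rate of leaving the infectious compartments
h = 0.15         # fraction of symptomatic cases that are hospitalized
gamma_h = 0.07   # hospital discharge (recovery) rate
mu = 0.01        # hospital death rate

def rhs(t, x):
    S, IAU, IAD, ISU, ISD, H, R, D = x
    N = S + IAU + IAD + ISU + ISD + H + R
    # Toy assumption: only undetected cases transmit
    lam = beta * (1 - g) * (IAU + ISU) / N
    new_inf = lam * S
    dS   = -new_inf
    dIAU = p_asym * (1 - p_det) * new_inf - sigma * IAU
    dIAD = p_asym * p_det * new_inf - sigma * IAD
    dISU = (1 - p_asym) * (1 - p_det) * new_inf - sigma * ISU
    dISD = (1 - p_asym) * p_det * new_inf - sigma * ISD
    dH   = h * sigma * (ISU + ISD) - (gamma_h + mu) * H
    dR   = sigma * (IAU + IAD) + (1 - h) * sigma * (ISU + ISD) + gamma_h * H
    dD   = mu * H
    return [dS, dIAU, dIAD, dISU, dISD, dH, dR, dD]

x0 = [1e6 - 10, 5, 0, 5, 0, 0, 0, 0]          # initial state for a toy population
sol = solve_ivp(rhs, (0, 300), x0, dense_output=True)
print("peak hospitalized:", sol.y[5].max())
```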

Keywords: modeling, COVID-19, MCMC, stability

Procedia PDF Downloads 104
9156 A Brief Overview of Seven Churches in Van Province

Authors: Eylem Güzel, Soner Guler, Mustafa Gulen

Abstract:

Van province, which has a very rich historical heritage, is located in the eastern part of Turkey, between Lake Van and the Iranian border. The many civilizations that have prevailed in Van up to the present day have built numerous historical structures such as castles, mosques, churches, bridges, and baths. In 2011, a devastating earthquake of magnitude 7.2 Mw, with its epicenter in Tabanlı Village, occurred in Van, a large part of which lies in the first-degree earthquake zone. As a result of this earthquake, 644 people were killed, and many reinforced, unreinforced, and historical structures were badly damaged. Many of the historical structures damaged in this earthquake have since been restored. In this study, the damage observed in the Seven Churches (Yedi Kilise) after the 2011 Van earthquake is evaluated from architectural and civil engineering perspectives.

Keywords: earthquake, historical structures, Van province, church

Procedia PDF Downloads 538
9155 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis

Authors: Iman Farasat, Howard M. Salis

Abstract:

Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate and design genetic circuit behavior towards a desired behavior. While such models assume that each circuit component’s function is modular and independent, even small changes in a circuit (e.g., a new promoter, a change in transcription factor expression level, or even a new medium) can have significant effects on the circuit’s function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model’s accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C). Model predictions correctly accounted for how these 8 factors control a promoter’s transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter’s output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing fifteen 2-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions.
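
As a rough illustration of the statistical-thermodynamic view of transcriptional regulation described above, the sketch below computes promoter occupancy from Boltzmann weights built from binding free energies; the energies, copy numbers, and the number of non-specific genomic sites are illustrative assumptions, not values measured with the assay.

```python
import numpy as np

RT = 0.593  # kcal/mol at roughly 298 K

def promoter_activity(dG_rnap, dG_tf, n_rnap, n_tf, n_ns=4.6e6):
    """Probability that RNA polymerase occupies the promoter when a repressor
    competes for the same site. n_ns is the number of non-specific genomic sites.
    States considered: empty, RNAP-bound, repressor-bound (mutually exclusive)."""
    w_rnap = (n_rnap / n_ns) * np.exp(-dG_rnap / RT)
    w_tf = (n_tf / n_ns) * np.exp(-dG_tf / RT)
    return w_rnap / (1.0 + w_rnap + w_tf)

# Illustrative numbers: stronger repressor binding or more repressor copies lower activity
for n_tf in (0, 50, 500):
    p = promoter_activity(dG_rnap=-5.0, dG_tf=-12.0, n_rnap=2000, n_tf=n_tf)
    print(f"repressor copies = {n_tf:4d}  ->  relative transcription = {p:.3e}")
```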

Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement

Procedia PDF Downloads 469
9154 The Application of Pareto Local Search to the Single-Objective Quadratic Assignment Problem

Authors: Abdullah Alsheddy

Abstract:

This paper presents the employment of Pareto optimality as a strategy to help (single-objective) local search escape local optima. Instead of plain local search, Pareto local search is applied to solve the quadratic assignment problem, which is multi-objectivized by adding a helper objective. The additional objective is defined as a function of the primary one with augmented penalties that are dynamically updated.
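
A minimal sketch of the idea, Pareto local search on a two-objective version of the QAP in which the second objective is a helper built from the primary one with dynamically updated penalties, is given below; the random instance, the penalty scheme, and its update rule are illustrative assumptions, not the paper's exact design.

```python
import itertools, random
random.seed(0)

n = 6
F = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]  # flow matrix
D = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]  # distance matrix
penalty = [[0.0] * n for _ in range(n)]  # dynamically updated assignment penalties

def primary(p):
    """Primary QAP objective for permutation p (facility i placed at location p[i])."""
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def helper(p):
    """Helper objective: primary cost augmented with the current penalties."""
    return primary(p) + sum(penalty[i][p[i]] for i in range(n))

def objectives(p):
    return (primary(p), helper(p))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

start = tuple(range(n))
archive = {start: objectives(start)}     # non-dominated archive
for _ in range(100):
    p = random.choice(list(archive.keys()))
    # Explore the swap neighborhood, keeping every non-dominated neighbor
    for i, j in itertools.combinations(range(n), 2):
        q = list(p); q[i], q[j] = q[j], q[i]; q = tuple(q)
        fq = objectives(q)
        if not any(dominates(fa, fq) for fa in archive.values()):
            archive = {s: f for s, f in archive.items() if not dominates(fq, f)}
            archive[q] = fq
    # Dynamic penalty update: discourage the assignments of the current best solution
    best = min(archive, key=primary)
    for i in range(n):
        penalty[i][best[i]] += 0.1
    archive = {s: objectives(s) for s in archive}  # refresh helper values

print("best primary cost found:", min(primary(s) for s in archive))
```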

Keywords: Pareto optimization, multi-objectivization, quadratic assignment problem, local search

Procedia PDF Downloads 462
9153 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of a chimney, producing unwanted forces. Vortex-induced oscillation is one such excitation, and it can lead to the failure of chimneys. Therefore, vortex-induced oscillation of chimneys is of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over the decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively, very few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of experimental laboratory data are utilized for analyzing chimneys for vortex-induced forces. This calls for reliability analysis of the predicted responses of chimneys produced by the vortex shedding phenomenon. Although several works of literature exist on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, the reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are hence ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach. For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement of the chimney is determined. The reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a thickness of 0.3 m has been taken as an illustrative example. The terrain condition is assumed to be that corresponding to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
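
As a compact illustration of the final step, turning threshold-crossing probabilities into a fragility curve under a Gumbel type-I distribution of the annual mean wind velocity, the sketch below numerically integrates an assumed conditional crossing probability over the wind-speed distribution; the conditional response model and all parameter values are illustrative stand-ins for the paper's spectral-analysis results.

```python
import numpy as np

# Gumbel type-I (maximum) distribution of the annual mean wind velocity
mu, beta = 22.0, 3.0                 # illustrative location and scale (m/s)
v = np.linspace(5.0, 60.0, 2000)     # integration grid of wind speeds
z = (v - mu) / beta
pdf = np.exp(-(z + np.exp(-z))) / beta

def p_cross_given_v(v, threshold):
    """Assumed conditional probability that the tip displacement exceeds
    'threshold' (m) at mean wind speed v; this stands in for the spectral
    analysis plus the Vanmarcke double-barrier crossing computation."""
    sigma_tip = 0.002 * (v / 10.0) ** 2          # illustrative RMS tip response (m)
    return np.exp(-0.5 * (threshold / sigma_tip) ** 2)

thresholds = [0.05, 0.10, 0.20, 0.40]            # tip displacement levels (m)
for thr in thresholds:
    # Annual probability of crossing = integral of P(cross | v) * f(v) dv
    annual_p = np.trapz(p_cross_given_v(v, thr) * pdf, v)
    print(f"threshold {thr:.2f} m: annual crossing probability = {annual_p:.3e}, "
          f"reliability = {1 - annual_p:.5f}")
```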

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

Procedia PDF Downloads 155
9152 Characterization of a Putative Type 1 Toxin-Antitoxin System in Shigella Flexneri

Authors: David Sarpong, Waleed Khursheed, Ernest Danquah, Erin Murphy

Abstract:

Shigella is a pathogenic bacterium responsible for shigellosis, a severe diarrheal disease that claims the lives of immunocompromised individuals worldwide. To develop therapeutics against this disease, an understanding of the molecular mechanisms underlying the pathogen’s physiology is crucial. Small non-coding RNAs (sRNAs) have emerged as important regulators of bacterial physiology, including as components of toxin-antitoxin systems. In this study, we investigated the role of RyfA in S. flexneri physiology and virulence. RyfA, originally identified as an sRNA in Escherichia coli, is conserved within the Enterobacteriaceae family, including Shigella. Whereas two copies of ryfA are present in S. dysenteriae, all other Shigella species contain only one copy of the gene. Additionally, we identified a putative open reading frame within the RyfA transcript, suggesting that it may be a dual-functioning gene encoding a small protein in addition to its sRNA function. To study ryfA in vitro, we cloned the gene into an inducible plasmid and observed the effect on bacterial growth. Here, we report that RyfA production inhibits the growth of S. flexneri, and this inhibition is dependent on the contained open reading frame. In-silico analyses have revealed the presence of two divergently transcribed sRNAs, RyfB1 and RyfB2, which share nucleotide complementarity with RyfA and thus are predicted to function as anti-toxins. Our data demonstrate that RyfB2 has a stronger antitoxin effect than RyfB1. This regulatory pattern suggests a novel form of a toxin-antitoxin system in which the activity of a single toxin is inhibited to varying degrees by two sRNA antitoxins. Studies are ongoing to investigate the regulatory mechanism(s) of the antitoxin genes, as well as the downstream targets and mechanism of growth inhibition by the RyfA toxin. This study offers distinct insights into the regulatory mechanisms underlying Shigella physiology and may inform the development of new anti-Shigella therapeutics.

Keywords: sRNA, shigella, toxin-antitoxin, Type 1 toxin antitoxin

Procedia PDF Downloads 45
9151 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 images in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
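
A simplified sketch of the aggregation idea is shown below: per-cell plane-fit sums are added over 2x2 blocks at each level, a plane is fitted from the aggregated sums, and the slope is reported at the level of minimum residual variance. For brevity, the sketch selects one best level for the whole toy DEM rather than per location as the paper does, and the DEM itself is synthetic; edge handling is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
xx, yy = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float), indexing="ij")
dem = 0.05 * xx + 0.02 * yy + rng.normal(scale=0.3, size=(n, n))  # toy DEM: plane + noise

def block2(a):
    """Sum 2x2 non-overlapping blocks (the additive aggregation step)."""
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).sum(axis=(1, 3))

# Per-cell regression sums for the plane fit z = a + b*x + c*y
sums = {"n": np.ones_like(dem), "x": xx, "y": yy, "z": dem,
        "xx": xx * xx, "yy": yy * yy, "xy": xx * yy,
        "xz": xx * dem, "yz": yy * dem, "zz": dem * dem}

best_var, best_slope, best_size = None, None, None
for level in range(1, 6):                       # window sizes 2, 4, 8, 16, 32
    sums = {k: block2(v) for k, v in sums.items()}
    rows, cols = sums["n"].shape
    var = np.empty((rows, cols)); slope = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            # Normal equations built purely from the aggregated sums
            A = np.array([[sums["n"][i, j], sums["x"][i, j], sums["y"][i, j]],
                          [sums["x"][i, j], sums["xx"][i, j], sums["xy"][i, j]],
                          [sums["y"][i, j], sums["xy"][i, j], sums["yy"][i, j]]])
            rhs = np.array([sums["z"][i, j], sums["xz"][i, j], sums["yz"][i, j]])
            a, b, c = np.linalg.solve(A, rhs)
            rss = sums["zz"][i, j] - (a * rhs[0] + b * rhs[1] + c * rhs[2])
            var[i, j] = rss / sums["n"][i, j]
            slope[i, j] = np.hypot(b, c)
    mean_var = var.mean()
    if best_var is None or mean_var < best_var:
        best_var, best_slope, best_size = mean_var, slope, 2 ** level

print("selected window size:", best_size, " mean slope:", best_slope.mean())
```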

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 124
9150 Reverse Logistics End of Life Products Acquisition and Sorting

Authors: Badli Shah Mohd Yusoff, Khairur Rijal Jamaludin, Rozetta Dollah

Abstract:

The emergence of reverse logistics and product recovery management is an important concept in reconciling economic and environmental objectives through recapturing value from end-of-life product returns. End-of-life products contain valuable modules, parts, residues, and materials that can create value if recovered efficiently. The main objective of this study is to explore and develop a model to recover as much of the economic value as reasonably possible, finding the optimal return acquisition and sorting policy to meet demand and maximize profit over time. In this study, the benefit to the remanufacturer is the ability to forecast future demand for used products under uncertainty in the quantity and quality of returns. Formulated on the basis of a generic disassembly tree, the proposed model focuses on three reverse logistics activities, namely refurbishing, remanufacturing, and disposal, incorporating all plausible quality levels of the returns. A stricter sorting policy decreases the amount of product to be refurbished or remanufactured and increases the amount of discarded product. Numerical experiments were carried out to investigate the characteristics and behaviour of the proposed model, using a mathematical programming model implemented in Lingo 16.0 for medium-term planning of return acquisition, disassembly (refurbishing or remanufacturing), and disposal activities. Moreover, the model analyses a number of trade-off decisions for maximizing revenue from the collection of used products in reverse logistics services through the refurbish and remanufacture recovery options. The results showed that full utilization of the sorting process leads the system to acquire a smaller quantity at minimal overall cost. Further, sensitivity analysis provides a range of possible scenarios to consider in optimizing the overall cost of refurbished and remanufactured products.
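
To make the structure of such an acquisition-and-sorting model concrete, a small illustrative linear program is sketched below in Python/PuLP (the study itself used Lingo 16.0); the quality classes, allowed recovery routes, costs, revenues, supply, and demand figures are invented for the example.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

quality = ["high", "medium", "low"]
acq_cost = {"high": 8, "medium": 5, "low": 2}            # cost per acquired core
proc_cost = {"refurbish": 3, "remanufacture": 6, "dispose": 1}
revenue = {"refurbish": 20, "remanufacture": 15, "dispose": 0}
# Sorting policy: which routes each quality class may take
allowed = {"high": ["refurbish", "remanufacture"],
           "medium": ["remanufacture", "dispose"],
           "low": ["dispose"]}
supply = {"high": 40, "medium": 60, "low": 30}           # available returns
demand = {"refurbish": 30, "remanufacture": 50}          # demand for recovered products

prob = LpProblem("acquisition_and_sorting", LpMaximize)
x = {(q, r): LpVariable(f"x_{q}_{r}", lowBound=0)
     for q in quality for r in allowed[q]}

# Objective: profit = revenue - processing cost - acquisition cost
prob += lpSum(x[q, r] * (revenue[r] - proc_cost[r] - acq_cost[q])
              for q in quality for r in allowed[q])
# Cannot route more cores of a quality class than are acquired/available
for q in quality:
    prob += lpSum(x[q, r] for r in allowed[q]) <= supply[q]
# Recovered products cannot exceed market demand
for r in demand:
    prob += lpSum(x[q, r] for q in quality if r in allowed[q]) <= demand[r]

prob.solve()
for (q, r), var in x.items():
    print(q, "->", r, value(var))
print("profit:", value(prob.objective))
```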

Keywords: core acquisition, end of life, reverse logistics, quality uncertainty

Procedia PDF Downloads 298
9149 Full Analytical Procedure to Derive P-I Diagram of a Steel Beam under Blast Loading

Authors: L. Hamra, J. F. Demonceau, V. Denoël

Abstract:

The aim of this paper is to study a beam extracted from a frame and subjected to blast loading. The ductility demand depends on six dimensionless parameters: two related to the blast loading, two referring to the bending behavior of the beam, and two corresponding to the dynamic behavior of the rest of the structure. We develop a full analytical procedure that provides the ductility demand as a function of these six dimensionless parameters.

Keywords: analytical procedure, blast loading, membrane force, P-I diagram

Procedia PDF Downloads 420
9148 Analytical Modeling of Globular Protein-Ferritin in α-Helical Conformation: A White Noise Functional Approach

Authors: Vernie C. Convicto, Henry P. Aringa, Wilson I. Barredo

Abstract:

This study presents a conformational model of the helical structures of a globular protein, particularly ferritin, in the framework of the white noise path integral formulation, using associated Legendre functions, Bessel functions, and convolutions of Bessel and trigonometric functions as modulating functions. The model incorporates the chirality features of proteins and their helix-turn-helix sequence structural motif.

Keywords: globular protein, modulating function, white noise, winding probability

Procedia PDF Downloads 470
9147 An Assessment of Tai Chi Exercise on Cognitive Performance in Vietnamese Older Adults

Authors: Hung Manh Nguyen, Duong Dai Nguyen

Abstract:

Objective: To evaluate the effects of Tai Chi exercise on the cognitive performance of community-dwelling elderly people in Vinh city, Vietnam. Design: A randomized controlled trial. Participants: One hundred and two subjects were recruited. Intervention: Subjects were divided randomly into two groups. The Tai Chi group was assigned 6 months of Tai Chi training, while the control group was instructed to maintain its routine daily activities. Outcome measures: The Trail Making Test (TMT) was the primary outcome measure. Results: Participants in the Tai Chi group showed significant improvement in TMT part A, F(1, 71) = 78.37, p < .001, and in TMT part B, F(1, 71) = 175.00, p < .001, in comparison with the control group. Conclusion: Tai Chi is beneficial for improving the cognitive performance of the elderly.

Keywords: cognitive, elderly, Vietnam, Tai Chi

Procedia PDF Downloads 522
9146 Dynamics of Bacterial Contamination and Oral Health Risks Associated with Currency Notes and Coins Circulating in Kampala City

Authors: Abdul Walusansa

Abstract:

In this study, paper notes and coins were collected from the general public in Kampala City, at places where ready-to-eat food is served, in order to survey for bacterial contamination. The total bacterial counts and the load of potentially pathogenic organisms on the currency were determined. All isolated potential pathogens were also tested for antibiotic resistance against four of the most commonly prescribed antibiotics. 1. The bacterial counts on one hundred paper note samples ranged between 6 and 10,918 cm-2, with a median of 141 cm-2; according to these data, contamination was much higher than reported for credit cards and Australian notes, which are made of polymer. The bacterial counts on sixty coin samples ranged between 2 and 380 cm-2, much lower than on paper notes. 2. Coliforms (65.6%), E. coli (45.9%), S. aureus (41.7%), B. cereus (67.7%), and Salmonella (19.8%) were isolated from the one hundred paper notes. Coliforms (22.4%), E. coli (5.2%), S. aureus (24.1%), B. cereus (34.5%), and Salmonella (10.3%) were isolated from the sixty coin samples. These results suggest a higher rate of contamination with potential pathogens on paper notes than on coins. 3. Antibiotic resistance was common among the pathogens isolated from currency. Ampicillin resistance was found in 60% of Staphylococcus aureus isolates from currency, as well as in 76.6% of E. coli and 40% of Salmonella. Erythromycin resistance was detected in 56.6% of S. aureus and in 80.0% of E. coli. All the pathogens isolated were sensitive to norfloxacin; Salmonella and S. aureus were also sensitive to cefaclor. In this study, we also examined the antimicrobial capability of metal coins; coins collected from different countries were tested for the ability to inhibit the growth of E. sakazakii, S. aureus, E. coli, L. monocytogenes, and S. typhimurium. 1) E. sakazakii appeared very sensitive to metal coins, followed by S. aureus, whereas E. coli, L. monocytogenes, and S. typhimurium were more resistant to the metal coin samples. 2) Coins made of nickel-brass alloy and copper-nickel alloy showed a stronger antimicrobial effect than other metal coins, especially in inhibiting the growth of E. sakazakii and S. aureus; all the inhibition zones produced on nutrient agar were larger than 20.6 mm. Aluminium-bronze alloy showed weak antimicrobial activity against S. aureus and no effect on the other pathogens. Coins made of stainless steel likewise did not inhibit bacterial growth. 3) Surprisingly, US one-cent coins, which are made of 97.5% zinc and 2.5% copper, showed a significant antimicrobial capability; the average inhibition zone against these five pathogens was 45.5 mm.

Keywords: antibiotic sensitivity, bacteria, currency, coins, parasites

Procedia PDF Downloads 323
9145 Recycling, Reuse and Reintegration of Steel Plant Fines

Authors: R. K. Agrawal, Shiv Agrawal

Abstract:

Fines and micro-fines create fundamental respiratory problems. From mines to mills, steel plants generate a lot of pollutants. Legislation and government regulations are becoming stricter day by day, and each plant has to consider the recycling, reuse, and reintegration of the pollutants generated during steelmaking. This paper deals with experiments conducted at Bhilai Steel Plant and Real Ispat and Power Limited to reuse, recycle, and reintegrate some of the steelmaking process fines. Iron ore fines with binders have been agglomerated to be used as part of the charge for small furnaces. This will improve yield at nominal cost. Rolling mill fines have been recycled to increase the yield of sinter making, which will solve the problem of fines disposal. Huge savings on account of recycling will be achieved. Lime fines, after briquetting, are used along with prime lime. Lime fines have also been used as a binding material during the production of fly ash bricks; these fines serve as a low-cost binder. Experiments have been conducted with coke breeze and gas cleaning plant sludge, and as a result an anti-sloping compound has been developed for converter vessels. Dolo char and char generated during sponge iron production have been successfully used in power generation and brick making. Pellets have been made with ventilation dust and flue dust, and these samples have been tried as a coolant in the converter. Pellets have also been made from sinter plant electrostatic precipitator micro-fines with a liquid binder, and trials have been conducted to reuse these pellets in sinter making. Coke breeze from coke oven fines and mill scale, along with binders, was agglomerated and used in the furnace after attaining the required screening and reactivity index. These actions will bring social, economic, and environmental benefits.

Keywords: briquette, dolo char, electrostatic precipitator, pellet, sinter

Procedia PDF Downloads 386
9144 Association between a Serotonin Re-Uptake Transporter Gene Polymorphism and Mucosal Serotonin Level in Women Patients with Irritable Bowel Syndrome and Healthy Control: A Pilot Study from Northern India

Authors: Sunil Kumar, Uday C. Ghoshal

Abstract:

Background and aims: Serotonin (5-hydroxytryptamine, 5-HT) is an important factor in gut function, playing key roles in intestinal peristalsis and secretion and in sensory signaling in the brain-gut axis. Its removal from its sites of action is mediated by a specific protein called the serotonin reuptake transporter (SERT). Polymorphisms in the promoter region of the SERT gene affect transcriptional activity, resulting in altered 5-HT reuptake efficiency. Functional polymorphisms may underlie disturbances in gut function in individuals suffering from disorders such as irritable bowel syndrome (IBS). The aim of this study was to assess the potential association between SERT polymorphisms and the diarrhea-predominant IBS (D-IBS) phenotype. Subjects: A total of 36 northern Indian female patients and 55 northern Indian female healthy controls (HC) were genotyped. Methods: Leucocyte DNA of all subjects was analyzed by polymerase chain reaction-based techniques for SERT polymorphisms, specifically the insertion/deletion polymorphism in the promoter (SERT-P). Statistical analysis was performed to assess the association of SERT polymorphism alleles with the D-IBS phenotype. Results: The frequency distribution of the SERT-P genotypes was comparable between female patients with IBS and HC (p = 0.086). However, the frequency of the SERT-P deletion/deletion genotype was significantly higher in female patients with D-IBS compared to C-IBS and A-IBS [17/19 (89.5%) vs. 4/12 (33.3%) vs. 1/5 (20%), p = 0.001, respectively]. The mucosal level of serotonin was higher in D-IBS compared to C-IBS and A-IBS [median, range: 159.26, 98.78–212.1 vs. 110.4, 67.87–143.53 vs. 92.34, 78.8–166.3 pmol/mL, p = 0.001, respectively]. The mucosal level of serotonin was higher in female patients with IBS with the SERT-P deletion/deletion genotype compared to the deletion/insertion and insertion/insertion genotypes [157.65, 67.87–212.1 vs. 110.4, 78.1–143.32 vs. 100.5, 69.1–132.03 pmol/mL, p = 0.001, respectively]. Patients with D-IBS with the deletion/deletion genotype more often reported symptoms of abdominal pain and discomfort (p = 0.025) and bloating (p = 0.039). Symptom development following lactose ingestion was strongly associated with D-IBS and the SERT-P deletion/deletion genotype (p = 0.004). Conclusions: A significant association was observed between D-IBS and the SERT-P deletion/deletion genotype, suggesting that the serotonin transporter is a potential candidate gene for D-IBS in women.

Keywords: serotonin, SERT, inflammatory bowel disease, genetic polymorphism

Procedia PDF Downloads 331
9143 3D Estimation of Synaptic Vesicle Distributions in Serial Section Transmission Electron Microscopy

Authors: Mahdieh Khanmohammadi, Sune Darkner, Nicoletta Nava, Jens Randel Nyengaard, Jon Sporring

Abstract:

We study the effect of stress on the nervous system using two experimental groups of rats: sham rats and rats subjected to acute foot-shock stress. We investigate the synaptic vesicle density as a function of distance to the active zone in serial section transmission electron microscope images in two and three dimensions. By estimating the density in 2D and 3D, we compare the two groups of rats.

Keywords: stress, 3-dimensional synaptic vesicle density, image registration, bioinformatics

Procedia PDF Downloads 274
9142 New Hybrid Process for Converting Small Structural Parts from Metal to CFRP

Authors: Yannick Willemin

Abstract:

Carbon fibre-reinforced plastic (CFRP) offers outstanding value. However, like all materials, CFRP also has its challenges. Many forming processes are largely manual and hard to automate, making it challenging to control repeatability and reproducibility (R&R); they generate significant scrap and are too slow for high-series production; fibre costs are relatively high and subject to supply and cost fluctuations; the supply chain is fragmented; many forms of CFRP are not recyclable, and many materials have yet to be fully characterized for accurate simulation; shelf-life and out-life limitations add cost; continuous-fibre forms have design limitations; many materials are brittle; and small and/or thick parts are costly to produce and difficult to automate. A majority of small structural parts are metal due to the high cost of fabricating CFRP parts in this size class. The fact that the CFRP manufacturing processes that produce the highest-performance parts also tend to be the slowest and least automated is another reason CFRP parts are generally higher in cost than comparably performing metal parts, which are easier to produce. Fortunately, business is in the midst of a major manufacturing evolution, Industry 4.0, and one technology seeing rapid growth is additive manufacturing/3D printing, thanks to new processes and materials, plus an ability to harness Industry 4.0 tools. No longer limited to prototype parts, metal-additive technologies are used to produce tooling and mold components for high-volume manufacturing, and polymer-additive technologies can incorporate fibres to produce true composites and be used to produce end-use parts with high aesthetics, unmatched complexity, mass customization opportunities, and high mechanical performance. A new hybrid manufacturing process combines the best capabilities of additive technologies (high complexity, low energy usage and waste, 100% traceability, faster to market) and post-consolidation technologies (tight tolerances, high R&R, established materials and supply chains). The platform was developed by Zürich-based 9T Labs AG and is called Additive Fusion Technology (AFT). It consists of design software, which determines the optimal fibre layup and exports files back to check predicted performance, plus two pieces of equipment: a 3D printer, which lays up (near-)net-shape preforms using neat thermoplastic filaments and slit, roll-formed unidirectional carbon fibre-reinforced thermoplastic tapes, and a post-consolidation module, which consolidates and then shapes the preforms into final parts using a compact compression press fitted with a heating unit and matched metal molds. Matrices currently include PEKK, PEEK, PA12, and PPS, although nearly any high-quality commercial thermoplastic tapes and filaments can be used, and they are matched between filaments and tapes to assure excellent bonding. Since thermoplastics are used exclusively, larger assemblies can be produced by bonding or welding together smaller components, and end-of-life parts can be recycled. By combining compression molding with 3D printing, higher part quality with very low voids and excellent surface finish on the A and B sides can be produced. Tight tolerances (min. section thickness = 1.5 mm, min. section height = 0.6 mm, min. fibre radius = 1.5 mm) with high R&R can be cost-competitively held at production volumes of 100 to 10,000 parts/year on a single set of machines.

Keywords: additive manufacturing, composites, thermoplastic, hybrid manufacturing

Procedia PDF Downloads 92
9141 Inkjet Printed Silver Nanowire Network as Semi-Transparent Electrode for Organic Photovoltaic Devices

Authors: Donia Fredj, Marie Parmentier, Florence Archet, Olivier Margeat, Sadok Ben Dkhil, Jorg Ackerman

Abstract:

Transparent conductive electrodes (TCEs), or transparent electrodes (TEs), are a crucial part of many electronic and optoelectronic devices, such as touch panels, liquid crystal displays (LCDs), organic light-emitting diodes (OLEDs), solar cells, and transparent heaters. Indium tin oxide (ITO) is the most widely used transparent electrode owing to its excellent optoelectrical properties. However, its drawbacks, including the high cost of the material, the scarcity of indium, and its fragile nature, limit its application in large-scale flexible electronic devices. Flexibility, in particular, is becoming increasingly attractive, as flexible electrodes could open new applications that require transparent electrodes to be flexible, cheap, and compatible with large-scale manufacturing methods. Several alternatives to ITO have been developed so far, including metal nanowires, conjugated polymers, carbon nanotubes, and graphene, and have been extensively investigated as flexible and low-cost electrodes. Among them, silver nanowires (AgNWs) are one of the most promising alternatives to ITO thanks to their high electrical conductivity and desirable light transmittance. In recent years, inkjet printing has become a promising technique for large-scale printed flexible and stretchable electronics; however, inkjet printing of AgNWs still presents many challenges. In this study, a synthesis of stable AgNWs that could compete with ITO was developed, and this material was printed by inkjet technology directly onto a flexible substrate. We also analyzed the surface microstructure and the optical and electrical properties of the printed AgNW layers. Our further research focused on all-inkjet-printed organic modules with high efficiency.
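Illustrative note: candidate transparent electrodes such as AgNW networks are commonly benchmarked against ITO by combining sheet resistance and transmittance into a single figure of merit. The sketch below uses Haacke's figure of merit, phi = T^10 / R_sheet, a standard metric from the TCE literature rather than a result of this study, and the sample transmittance and sheet-resistance values are purely hypothetical.

    # Haacke figure of merit for transparent conductive electrodes: phi = T**10 / R_sheet.
    # Standard benchmark from the TCE literature (not taken from this abstract);
    # the sample values below are purely illustrative.

    def haacke_fom(transmittance: float, sheet_resistance_ohm_sq: float) -> float:
        """transmittance as a fraction (e.g. 0.90 at 550 nm), sheet resistance in ohm/sq."""
        return transmittance ** 10 / sheet_resistance_ohm_sq

    if __name__ == "__main__":
        candidates = {
            "ITO (typical)":         (0.90, 15.0),   # hypothetical values
            "AgNW network (dense)":  (0.88, 12.0),
            "AgNW network (sparse)": (0.95, 60.0),
        }
        for name, (T, Rs) in candidates.items():
            print(f"{name:22s}  T={T:.2f}  Rs={Rs:5.1f} ohm/sq  phi={haacke_fom(T, Rs):.5f} sq/ohm")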

Keywords: transparent electrodes, silver nanowires, inkjet printing, formulation of stable inks

Procedia PDF Downloads 215
9140 A Study of Life Expectancy in an Urban Set up of North-Eastern India under Dynamic Consideration Incorporating Cause Specific Mortality

Authors: Mompi Sharma, Labananda Choudhury, Anjana M. Saikia

Abstract:

Background: The period life table rests on the assumption that the mortality pattern observed in a given period will persist throughout the lives of the population. In practice, however, mortality rates continue to decline. If the rates of change of the probabilities of death are incorporated into a life table, the result is a dynamic life table. Although mortality has been declining in all parts of India, one may ask whether these declines have been more pronounced in an urban area of an underdeveloped region such as North-Eastern India. An attempt has therefore been made to study the mortality pattern and life expectancy under a dynamic scenario in Guwahati, the biggest city of North-Eastern India. Further, if the probabilities of death change, their constituent cause-specific probabilities may also change. Since cardiovascular disease (CVD) is the leading cause of death in Guwahati, an attempt has also been made to formulate dynamic cause-specific death ratios and probabilities of death due to CVD. Objectives: To construct a dynamic life table for Guwahati for the year 2011 based on the rates of change of probabilities of death over the previous 10 and 25 years (i.e., relative to 2001 and 1986), and to compute the corresponding dynamic cause-specific death ratio and probabilities of death due to CVD. Methodology and Data: The study uses the method proposed by Denton and Spencer (2011) to construct the dynamic life table for Guwahati. Death records for the years 1986, 2001, and 2011 were obtained from the Office of the Birth and Death, Guwahati Municipal Corporation. Population data were taken from the 2001 and 2011 censuses of India, while the population for 1986 was estimated. The cause-of-death ratio and probabilities of death due to CVD were computed for the same years and then extended to the dynamic setup for 2011 by considering the rates of change of those probabilities over the previous 10 and 25 years. Findings: The dynamic life expectancy at birth (LEB) for Guwahati is higher than the corresponding period-table value by 3.28 (5.65) years for males and 8.30 (6.37) years for females over the 10-year (25-year) horizon. Life expectancies under the dynamic framework are also higher than the usual life expectancies in all other age groups, which is plausible given the gradual decline in the probabilities of death over 1986-2011. A continuous decline is also observed in the death ratio due to CVD and in the cause-specific probabilities of death for both sexes. As a consequence, the dynamic cause-of-death probability due to CVD is lower than under the usual procedure. Conclusion: Since incorporating changing mortality rates into the period life table for Guwahati yields higher life expectancies and lower probabilities of death due to CVD, this approach arguably gives a more realistic picture of the mortality conditions prevailing in the city.
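Illustrative note: the sketch below conveys the general idea of a dynamic life table, namely letting the period probabilities of death continue to decline at their observed annual rates, in the spirit of Denton and Spencer (2011). It is not the authors' implementation; it uses a toy single-year table truncated after a few ages, and all q_x values are hypothetical rather than the Guwahati data.

    # Minimal sketch of a "dynamic" life expectancy calculation: period death probabilities q_x
    # are assumed to keep declining at the annual rates observed between two reference years,
    # so a person aged 0 in the base year faces the base-year q_k reduced by k further years
    # of decline. All numbers are hypothetical.
    import math

    def annual_decline_rates(qx_old, qx_new, years_apart):
        """Average annual rate of decline of each q_x between two period tables."""
        return [(math.log(q_old / q_new) / years_apart) if q_new < q_old else 0.0
                for q_old, q_new in zip(qx_old, qx_new)]

    def life_expectancy(qx):
        """Temporary life expectancy over the ages given (toy table is truncated for brevity)."""
        survivors, person_years = 1.0, 0.0
        for q in qx:
            deaths = survivors * q
            person_years += survivors - 0.5 * deaths   # deaths spread evenly over the year
            survivors -= deaths
        return person_years

    def dynamic_life_expectancy(qx_base, rates):
        """Dynamic variant: q at age k is the base-year value discounted by k more years of decline."""
        qx_dynamic = [q * math.exp(-r * k) for k, (q, r) in enumerate(zip(qx_base, rates))]
        return life_expectancy(qx_dynamic)

    if __name__ == "__main__":
        qx_2001 = [0.040, 0.004, 0.003, 0.003, 0.004]   # hypothetical single-year q_x, ages 0-4
        qx_2011 = [0.030, 0.003, 0.002, 0.002, 0.003]   # hypothetical
        r = annual_decline_rates(qx_2001, qx_2011, years_apart=10)
        print("period  LE:", round(life_expectancy(qx_2011), 3))
        print("dynamic LE:", round(dynamic_life_expectancy(qx_2011, r), 3))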

Keywords: cause specific death ratio, cause specific probabilities of death, dynamic, life expectancy

Procedia PDF Downloads 230
9139 Small and Medium-Sized Enterprises, Flash Flooding and Organisational Resilience Capacity: Qualitative Findings on Implications of the Catastrophic 2017 Flash Flood Event in Mandra, Greece

Authors: Antonis Skouloudis, Georgios Deligiannakis, Panagiotis Vouros, Konstantinos Evangelinos, Ioannis Nikolaou

Abstract:

On November 15th, 2017, a catastrophic flash flood devastated the city of Mandra in Central Greece, resulting in 24 fatalities and extensive damage to the built environment and infrastructure. It was Greece's deadliest and most destructive flood event of the past 40 years. In this paper, we examine the consequences of this event for small and medium-sized enterprises (SMEs) that were operating in Mandra during the flood and were affected by the floodwaters to varying extents. We conducted semi-structured interviews with the owner-managers of 45 SMEs located in the flood-inundated areas that are still active today, based on an interview guide spanning 27 topics. The topics covered the disaster experience of the business and its owner-manager, knowledge of and attitudes towards climate change and extreme weather, aspects of disaster preparedness, and related assistance needs. Our findings reveal that the vast majority of the affected businesses experienced heavy damage to equipment and infrastructure or total destruction, resulting in business interruption lasting from several weeks up to several months. Assistance from relatives or friends helped with the damage repairs and business recovery, while state compensation was deemed insufficient relative to the extent of the damage. Most interviewees identify flooding as one of the most critical risks, and many connect it with the climate crisis. However, they are either unwilling or unable to apply property-level prevention measures in their businesses, owing to cost considerations or complex and cumbersome bureaucratic processes. In all cases, the business owners are fully aware of the implications of the flood hazard and, since recovering from the event, have adopted basic mitigation measures and contingency plans for future flood events. Such plans include insurance contracts whenever possible (the vast majority of the affected SMEs were uninsured at the time of the 2017 event) as well as simple relocations of critical equipment within their premises. The study offers fruitful insights into latent drivers of and barriers to SMEs' resilience capacity to flash flooding. In this respect, findings such as ours, which highlight the tensions underpinning behavioural responses and experiences, can feed into a) bottom-up approaches for devising actionable and practical guidelines, manuals, and/or standards on business preparedness for flooding and, ultimately, b) policy-making for an enabling environment towards a flood-resilient SME sector.

Keywords: flash flood, small and medium-sized enterprises, organizational resilience capacity, disaster preparedness, qualitative study

Procedia PDF Downloads 129