Search results for: force estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3988


658 A Ten-Year Rabies Exposure and Death Surveillance Data Analysis in Tigray Region, Ethiopia, 2023

Authors: Woldegerima G. Medhin, Tadele Araya

Abstract:

Background: Rabies is an acute viral encephalitis that mainly affects carnivores and insectivores but can affect any mammal. The case fatality rate is 100% once clinical signs appear. Rabies has a worldwide distribution across the continental regions of Asia and Africa. Globally, rabies is responsible for more than 61000 human deaths annually; the estimated annual human rabies mortality in Asia and Africa exceeds 35172 and 21476, respectively. In Ethiopia, approximately 2900 people are estimated to die of rabies annually, of whom approximately 98 are in the Tigray region. The aim of this study is to describe, analyze trends in, and evaluate ten years of rabies data in Tigray, Ethiopia. Methods: We conducted a descriptive epidemiological study from 15-30 February 2023 of rabies exposure and death in humans by reviewing the health management information system reports of the Tigray Regional Health Bureau and dog vaccination coverage from 2013 to 2022. The case definitions were: suspected cases were those bitten by dogs displaying clinical signs consistent with rabies, and confirmed cases were deaths from rabies following exposure. Results: A total of 21031 dog bites, 375 reported rabies deaths, and 18222 human post-exposure treatments in the Tigray region were analyzed. Suspected rabies cases showed an increasing trend from 2013 to 2015 and from 2018 to 2019. The overall mortality rate was 19/1000 in Tigray. The majority of suspected patients (45%) were under 15 years old. The Agriculture Bureau of the Tigray Region estimates that about 12000 owned and 2500 stray dogs are present in the region, but yearly dog vaccination coverage remains low (50%). Conclusion: Rabies is a public health problem in the Tigray region. Vaccination of individually owned dogs is highly recommended, and the concerned sectors should eliminate stray dogs. The surveillance system should be strengthened to estimate the real magnitude of the problem and to launch preventive and control measures.

Keywords: rabies, virus, transmission, prevalence

Procedia PDF Downloads 41
657 Toxicity of PPCPs on Adapted Sludge Community

Authors: G. Amariei, K. Boltes, R. Rosal, P. Leton

Abstract:

Wastewater treatment plants (WWTPs) are expected to play an important role in the reduction of emerging contaminants, but they also provide an environment with potential for the development and/or spread of adaptation, as bacteria are continuously mixed with contaminants at sub-inhibitory concentrations. A review of the literature shows that little data is available on the use of adapted bacteria forming activated sludge communities for toxicity assessment, and only individual validations have been performed. Therefore, the aim of this work was to study the toxicity of triclosan (TCS) and ibuprofen (IBU), individually and in binary combination, on adapted activated sludge (AS). For this purpose, a battery of biomarkers involving oxidative stress and cytotoxicity responses was assessed: glutathione-S-transferase (GST), catalase (CAT), and cell viability with fluorescein diacetate (FDA). In addition, we compared the toxic effects on adapted bacteria with those on unadapted bacteria from previous research. The adapted AS came from three continuous-flow AS laboratory systems: two systems received IBU and TCS individually, while the third received the binary combination, for 14 days. After adaptation, each bacterial culture was exposed to IBU, TCS, or the combination for 12 h. The concentrations of IBU and TCS ranged over 0.5-4 mg/L and 0.012-0.1 mg/L, respectively. Batch toxicity experiments were performed using an Oxygraph system (Hansatech) to determine CAT enzyme activity based on quantification of the oxygen production rate. A fluorimetric technique was also applied, using a Fluoroskan Ascent FL (Thermo), to determine GST enzyme activity with monochlorobimane-GSH as substrate and to estimate viable cells in the sludge by fluorescence staining with FDA. For IBU-adapted sludge, CAT activity increased at low concentrations of IBU, TCS, and the mixture. However, with increasing concentration the behavior diverged: while IBU tended to stabilize CAT activity, TCS and the mixture decreased it. GST activity was significantly increased by TCS and the mixture; for IBU, no variation was observed. For TCS-adapted sludge, no significant variation in CAT activity was observed, and GST activity was significantly decreased by all contaminants. For mixture-adapted sludge, the behavior of CAT activity was similar to that of IBU-adapted sludge, while GST activity decreased at all concentrations of IBU and increased in the presence of TCS and of the mixture, respectively. These findings were consistent with the cell viability evaluation, which clearly showed a variation in sludge viability. Our results suggest that, compared with unadapted bacteria, adaptation plays a relevant role in toxicity behavior towards activated sludge communities.

Keywords: adapted sludge community, mixture, PPCPs, toxicity

Procedia PDF Downloads 375
656 Fast Estimation of Fractional Process Parameters in Rough Financial Models Using Artificial Intelligence

Authors: Dávid Kovács, Bálint Csanády, Dániel Boros, Iván Ivkovic, Lóránt Nagy, Dalma Tóth-Lakits, László Márkus, András Lukács

Abstract:

The modeling practice of financial instruments has seen significant change over the last decade due to the recognition of time-dependent and stochastically changing correlations among the market prices or the prices and market characteristics. To represent this phenomenon, the Stochastic Correlation Process (SCP) has come to the fore in the joint modeling of prices, offering a more nuanced description of their interdependence. This approach has allowed for the attainment of realistic tail dependencies, highlighting that prices tend to synchronize more during intense or volatile trading periods, resulting in stronger correlations. Evidence in statistical literature suggests that, similarly to the volatility, the SCP of certain stock prices follows rough paths, which can be described using fractional differential equations. However, estimating parameters for these equations often involves complex and computation-intensive algorithms, creating a necessity for alternative solutions. In this regard, the Fractional Ornstein-Uhlenbeck (fOU) process from the family of fractional processes offers a promising path. We can effectively describe the rough SCP by utilizing certain transformations of the fOU. We employed neural networks to understand the behavior of these processes. We had to develop a fast algorithm to generate a valid and suitably large sample from the appropriate process to train the network. With an extensive training set, the neural network can estimate the process parameters accurately and efficiently. Although the initial focus was the fOU, the resulting model displayed broader applicability, thus paving the way for further investigation of other processes in the realm of financial mathematics. The utility of SCP extends beyond its immediate application. It also serves as a springboard for a deeper exploration of fractional processes and for extending existing models that use ordinary Wiener processes to fractional scenarios. 
In essence, deploying both SCP and fractional processes in financial models provides new, more accurate ways to depict market dynamics.

Keywords: fractional Ornstein-Uhlenbeck process, fractional stochastic processes, Heston model, neural networks, stochastic correlation, stochastic differential equations, stochastic volatility

Procedia PDF Downloads 82
655 Effect of Aminoethoxyvinylglycine on Creasing in Sweet Orange

Authors: Zahoor Hussain

Abstract:

Creasing is a physiological disorder of the rind in sweet orange [Citrus sinensis (L.) Osbeck] fruit that causes serious economic losses in various countries of the world. The effects of different concentrations (0, 20, 40 and 60 mgL⁻¹) of the reversible ethylene inhibitor aminoethoxyvinylglycine (AVG), with 0.05% ‘Tween 20’ as a surfactant, applied at the fruit set, golf ball, or colour break stage, on creasing, the rheological properties of fruit and rind, and fruit quality of Washington Navel and Lane Late sweet orange were investigated. Creasing was substantially reduced and fruit quality improved with the exogenous application of AVG, depending on its concentration and stage of application, in both cultivars. The spray application of AVG (60 mgL⁻¹) at the golf ball stage was effective in reducing creasing (27.86% and 24.29%) compared to the control (52.14% and 51.53%) in cv. Washington Navel during 2011 and 2012, respectively. In cv. Lane Late, the lowest creasing was observed when AVG was applied at the fruit set stage (22.86%) compared to the control (51.43%) during 2012. In cv. Washington Navel, the AVG treatment (60 mgL⁻¹) was most effective in increasing fruit firmness (318.97 N) and rind hardness (25.94 N) when applied at the fruit set stage, whereas rind tensile strength was higher when AVG was applied at the golf ball stage (54.13 N). In cv. Lane Late, rind hardness (28.61 N) and rind tensile strength (78.82 N) were also higher when AVG was sprayed at the fruit set stage, while fruit compression force (369.68 N) was higher when AVG was applied at the golf ball stage. Similarly, the AVG treatment (60 mgL⁻¹) was most effective in improving fruit weight (281.00 and 298.50 g), fruit diameter (87.30 and 82.69 mm), rind thickness (5.56 and 5.38 mm) and total sugars (15.27 mg.100ml⁻¹) when applied at the golf ball stage in cvs. Washington Navel and Lane Late, respectively. Likewise, rind hardness (25.94 and 28.61 N), total antioxidants (45.30 and 46.48 mM trolox 100ml⁻¹), total sugars (13.64 and 15.27 mg.100ml⁻¹), citric acid (1.66 and 1.32 mg.100ml⁻¹), malic acid (0.36 and 0.63 mg.100ml⁻¹) and succinic acid (0.35 and 0.38 mg.100ml⁻¹) were higher when AVG was applied at the fruit set stage in both cultivars. In conclusion, exogenous application of AVG substantially reduces the incidence of creasing and improves the rheological properties of fruit and rind as well as fruit quality in Washington Navel and Lane Late sweet orange fruit.

Keywords: AVG, creasing, ethylene inhibitor, sweet orange

Procedia PDF Downloads 132
654 Development of Composition and Technology of Vincristine Nanoparticles Using High-Molecular Carbohydrates of Plant Origin

Authors: L. Ebralidze, A. Tsertsvadze, D. Berashvili, A. Bakuridze

Abstract:

Current cancer therapy strategies are based on surgery, radiotherapy and chemotherapy. The problems associated with chemotherapy are among the biggest challenges for clinical medicine. These include low specificity, a broad spectrum of side effects, toxicity, and the development of cellular resistance. Therefore, better anti-cancer drugs urgently need to be developed. In particular, in order to increase the efficiency of anti-cancer drugs and reduce their side effects, scientists are working on the formulation of nano-drugs. The objective of this study was to develop the composition and technology of vincristine nanoparticles using high-molecular carbohydrates of plant origin. Plant polysaccharides, specifically soy bean seed polysaccharides, flaxseed polysaccharides, citrus pectin, gum arabic, and sodium alginate, were used as the objects of study. Based on biopharmaceutical research, vincristine-containing nanoparticle formulations were prepared. High-energy emulsification and solvent evaporation methods were used for the preparation of the nanosystems. Polysorbate 80, polysorbate 60, sodium dodecyl sulfate, glycerol, and polyvinyl alcohol were used in the formulations as emulsifying agents and system stabilizers. The ratio of API to polysaccharides, as well as the type of stabilizing and emulsifying agents, strongly affects the particle size of the final product. The influence of the preparation technology and of the type and concentration of stabilizing agents on the properties of the nanoparticles was evaluated. In the next stage of the research, the nanosystems were characterized. Physicochemical characterization of the nanoparticles, including their size, shape, and distribution, was performed using atomic force microscopy and scanning electron microscopy. The present study explored the possibility of producing nanoparticles using plant polysaccharides. The optimal ratio of active pharmaceutical ingredient to plant polysaccharides and the best stabilizer and emulsifying agent were determined, and the average size range and shape of the nanoparticles were visualized by SEM.

Keywords: nanoparticles, target delivery, natural high molecule carbohydrates, surfactants

Procedia PDF Downloads 239
653 Disaggregate Travel Behavior and Transit Shift Analysis for a Transit Deficient Metropolitan City

Authors: Sultan Ahmad Azizi, Gaurang J. Joshi

Abstract:

Urban transportation has come into the limelight in recent times due to deteriorating travel quality. The economic growth of India has driven a significant rise in private vehicle ownership in cities, whereas public transport systems have largely been ignored in metropolitan cities. Even though there is latent demand for public transport systems such as organized bus services, most metropolitan cities have an unsustainably low share of public transport. Unfortunately, Indian metropolitan cities have failed to maintain a balance in the mode share of various travel modes in the absence of the timely introduction of mass transit systems of the required capacity and quality. As a result, personalized travel modes like two-wheelers have become the principal modes of travel, which causes significant environmental, safety, and health hazards to citizens. Of late, policy makers have realized the need to improve public transport systems in metro cities to sustain development. However, the challenge for transit planning authorities is to design a transit system that may attract people to switch from their existing, and rather convenient, modes of travel to transit, under the influence of household socio-economic characteristics and the given travel pattern. In this context, the fast-growing industrial city of Surat is taken up as a case for the study of the likely shift to bus transit. The deterioration of the public bus transport system after 1998 has led to tremendous growth in two-wheeler traffic on city roads, and the inadequate and poor service quality of the present bus transit has failed to attract riders and correct the mode-use balance in the city. Disaggregate travel behavior for trip generation and travel mode choice was studied for the West Adajan residential sector of the city. Mode-specific utility functions were calibrated in a multinomial logit framework for two-wheelers, cars, and auto rickshaws with respect to bus transit using SPSS. The estimation of the shift to bus transit indicates that on average 30% of auto rickshaw users and nearly 5% of two-wheeler users are likely to shift to bus transit if service quality is improved; car users, however, are not expected to shift to the bus transit system.
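The multinomial logit structure referred to above can be sketched as follows; the utility values below are purely hypothetical, not the calibrated coefficients from the study.

```python
import numpy as np

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities: P_i = exp(V_i) / sum_j exp(V_j)."""
    v = np.asarray(utilities, dtype=float)
    e = np.exp(v - v.max())  # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical systematic utilities for [bus, two-wheeler, car, auto rickshaw]
probs = mnl_probabilities([-0.2, 1.1, 0.4, -0.5])
```

In a calibrated model, each V_i would be a linear function of household socio-economic characteristics and trip attributes, and a predicted mode shift follows from recomputing the probabilities after changing the bus-transit utility (e.g., improved service quality).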

Keywords: bus transit, disaggregate travel behavior, mode choice behavior, public transport

Procedia PDF Downloads 231
652 Towards a Deconstructive Text: Beyond Language and the Politics of Absences in Samuel Beckett’s Waiting for Godot

Authors: Afia Shahid

Abstract:

The writing of Samuel Beckett is associated with meaning in meaninglessness and with the production of what he calls a ‘literature of the unword’. The casual escape from the world of words, in the form of silences and pauses in his play Waiting for Godot, urges one to question their existence and ultimately leads to an investigation of the theory behind their use in the play. This paper proposes that these absences (silence and pause) in Beckett’s play force us to think ‘beyond’ language. It asks how silence and pause in Beckett’s text speak for the emergence of the poststructuralist text. It aims to identify the significant features of the philosophy of deconstruction in Beckett’s play in order to demystify the hostile complicity between literature and philosophy. Within the interpretive paradigm of poststructuralism, this research focuses on the text as its research data. It attempts to delineate the relationship between poststructuralist theoretical concerns and Beckett’s text. Keeping in view the theoretical concerns of the poststructuralist theorist Jacques Derrida, the discussion is directed towards the notion of moving ‘beyond’ language into absences that aim to silence the existing discourse with the ‘radical irony’ of this anti-formal art, which contains its own denial and thus represents the idea of ceaseless questioning and radical contradiction in art and in any text. This article asks how Beckett’s text vibrates with loud silence and disrupts language to demonstrate the emptiness of words, thereby exploring the limitless void of absences. Beckett’s text resonates with silence and pause that are neither negation nor affirmation, but rather a poststructuralist suspension of reality that is ever changing with the undecidability of all meanings. Within the theoretical notion of Derrida’s différance, this study interprets silence and pause in Beckett’s art: they behave like Derrida’s différance and question their own existence in the text, deconstructing any definiteness and finality of reality to extend the undecidable threshold of the poststructuralists, which aims to evade the ‘labyrinth of language’.

Keywords: Différance, language, pause, poststructuralism, silence, text

Procedia PDF Downloads 177
651 Influence of Urban Microclimates on Human Perceptions and Behavioral Patterns: A Relational Context of Human Parameters in Urban Design

Authors: Naveed Mazhar

Abstract:

Our cities are known to have significant modifying effects on the local climate. The nature of these modifications depends on a range of physical variables, usually assessed at a wide range of spatial scales. Physical spatial dimensions, such as measured parameters of microclimates, and their significant influence on human sensations are known to have far-reaching effects on human thermal comfort and, by corollary, constitute a force that influences human perception. Less scholarship has thrown light on the subjective dimension, and the literature insufficiently demonstrates a relational approach between human behavior and how it is affected by urban microclimates. Besides identifying gaps in the most recent scholarship and providing future research opportunities, this study will help improve urban design guidelines and raise the framework standards of socially responsive urban design. It will help equip future professionals to ameliorate the effects of urban microclimates on participants’ perceptions, enabling more frequent use of outdoor urban spaces. It is understood that the physical parameters of an outdoor open space determine psychological human adaptations and are a measure of the degree to which people are willing to adapt to their surroundings. A large amount of research is available on urban microclimates; however, very few studies focus on elucidating the critical factors influencing human perceptions of microclimates in urban spatial configurations. Based on the most recent scholarship, this study evaluates the role urban microclimatic conditions play in the formation of human perceptions and, by extension, the behavioral patterns formed in outdoor open spaces. Furthermore, against the backdrop of the current scholarly literature, this study defines the socio-spatial interdependence of behavioral patterns with the built urban fabric and its resultant correlation with human perception. A comprehensive review and analysis of recent research conducted within the scope of the study will help frame gaps, issues, current research methods, and future research opportunities.

Keywords: urban design, urban microclimate, human perception, human behavioral patterns

Procedia PDF Downloads 269
650 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is a popular topic in the forecasting field. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective forecasting of renewable energy loads helps decision makers minimize the costs of electric utilities and power plants, and forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example by predicting wind and solar power. Historical load and weather information are the most important input variables in power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 at one-hour resolution; the models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out with these data for Turkish electricity markets via a deep neural network approach including the LSTM technique. 432 different models were created by varying layer count, cell count, and dropout. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance was compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results out of the 432 tested models were 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models compares favorably with results reported in the literature.
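A minimal sketch of the data preparation and error metrics described above (the LSTM itself would typically be built in a deep learning framework such as Keras or PyTorch; the lookback length here is an assumption, not the study's setting):

```python
import numpy as np

def make_windows(series, lookback):
    """Slice an hourly load series into (samples, lookback) inputs and next-hour targets."""
    series = np.asarray(series, dtype=float)
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

def mae(y_true, y_pred):
    """Mean Absolute Error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def mse(y_true, y_pred):
    """Mean Squared Error."""
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

# Toy hourly loads: 2 training windows of 3 hours each, each predicting the next hour
X, y = make_windows([10.0, 11.0, 12.0, 13.0, 14.0], lookback=3)
```

Each row of X would be fed to the network as one input sequence, and the model comparison over the 432 configurations would rank them by these two metrics on held-out data.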

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 232
649 Impact of Import Restriction on Rice Production in Nigeria

Authors: C. O. Igberi, M. U. Amadi

Abstract:

This research paper on the impact of import restriction on rice production in Nigeria aims to proffer valid solutions to the age-old problem of rice self-sufficiency through a better understanding of policy measures used in the past, in this case the effectiveness of the rice import restriction of the early 1990s. It tries to answer two questions: did the import restriction boost domestic rice production, and what are the macroeconomic factors determining Gross Domestic Rice Product (GDRP)? The research questions are investigated through literature and analytical frameworks, such that time series data on the GDRP, Gross Fixed Capital Formation (GFCF), average foreign rice producers’ prices (PPF), domestic producers’ prices (PPN), and the labour force (LABF) are collated for analysis (with an import restriction dummy variable, POL1). The research objectives/hypotheses are analysed using cointegration, Vector Error Correction Model (VECM), Impulse Response Function (IRF), and Granger Causality Test (GCT) methodologies. Results show that in the short-run error correction specification for GDRP, a one percent (1%) deviation away from the long-run equilibrium in a current quarter is corrected by only 0.14% in the subsequent quarter. Also, the rice import restriction policy had no significant effect on the GDRP in this period, although other findings show that the policy period did affect the PPN and LABF. The chosen variables are valid macroeconomic factors that explain the GDRP of Nigeria, as adduced from the IRF and GCT, and in the long run. Policy recommendations suggest that import restriction is not disqualified as a veritable tool for improving domestic rice production; rather, better enforcement procedures and strict adherence to the policy’s dictates are needed. Furthermore, accompanying policies which drive public and private capital investment and accumulation must be introduced. Also, the employment rate and labour substitution in the agricultural sector should not be drastically changed; rather, their welfare and efficiency should be improved.
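The short-run adjustment logic can be illustrated with a two-step Engle-Granger style sketch, a deliberate simplification of the paper's full VECM. The simulated series below are synthetic stand-ins, not the study's data, and the variable names are illustrative.

```python
import numpy as np

def ecm_adjustment_speed(y, x):
    """Two-step sketch: long-run OLS y = a + b*x gives the equilibrium error,
    then dy is regressed on the lagged error; that coefficient is the
    error-correction adjustment speed (negative means reversion to equilibrium)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    a, b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
    err = y - (a + b * x)                       # deviation from long-run equilibrium
    dy = np.diff(y)
    B = np.column_stack([np.ones(len(y) - 1), err[:-1]])
    alpha = np.linalg.lstsq(B, dy, rcond=None)[0][1]
    return alpha

# Synthetic cointegrated pair: y tracks the random walk x with a mean-reverting error,
# so the estimated adjustment speed should come out negative (roughly 0.5 - 1 = -0.5)
rng = np.random.default_rng(1)
x = np.cumsum(rng.standard_normal(500))
e = np.zeros(500)
for t in range(1, 500):
    e[t] = 0.5 * e[t - 1] + rng.standard_normal()
alpha = ecm_adjustment_speed(x + e, x)
```

A full VECM (e.g., `statsmodels.tsa.vector_ar.vecm.VECM`) additionally estimates lagged-difference terms and handles several variables jointly, as in the paper.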

Keywords: import restriction, gross domestic rice production, cointegration, VECM, Granger causality, impulse response function

Procedia PDF Downloads 177
648 Decommissioning of Nuclear Power Plants: The Current Position and Requirements

Authors: A. Stifi, S. Gentes

Abstract:

Undoubtedly, from a construction perspective, the use of explosives can remove a large facility such as a 40-storey building, which took almost 3 to 4 years to construct, in a few minutes. Usually, reconstruction or decommissioning, the last phase of the life cycle of any facility, is considered to be the shortest. However, this proves wrong in the case of nuclear power plants. Statistics show that in the last 30 years, the construction of a nuclear power plant took an average of 6 years, whereas it is estimated that the decommissioning of such plants may take a decade or more. This paper is about the decommissioning phase of nuclear power plants, which needs more attention and encouragement from research institutes as well as the nuclear industry. Currently, there are 437 nuclear power reactors in operation and 70 reactors under construction. Around 139 nuclear facilities have already been shut down and are in different decommissioning stages, and approximately 347 nuclear reactors will be in the decommissioning phase in the next 20 years (assuming an operating time of 40 years per reactor). These facts raise two questions: (1) How far are the nuclear and construction industries ready to face the challenges of decommissioning projects? (2) What is required for safe and reliable decommissioning project delivery? The decommissioning of nuclear facilities across the globe suffers severe time and budget overruns. The decommissioning processes are largely executed by manual labour, while regulations continue to change. In terms of research and development, some research projects and activities are being carried out in this area, but much more appears to be required. The near future of decommissioning can be improved through a sustainable development strategy in which all stakeholders agree to implement innovative technologies, especially for dismantling and decontamination processes, and to deliver reliable and safe decommissioning. The scope for technology transfer from other industries should be explored; for example, remotely operated robotic technologies used in the automobile and production industries to reduce time and improve efficiency and safety could be tried here. However, although innovative technologies are highly requested, they alone are not enough: the implementation of creative and innovative management methodologies should also be investigated and applied. Lean management, with its main concept of ‘elimination of waste within a process’, is a suitable example. Thus, cooperation between international organisations and related industries, together with knowledge-sharing, may serve as a key factor for successful decommissioning projects.

Keywords: decommissioning of nuclear facilities, innovative technology, innovative management, sustainable development

Procedia PDF Downloads 444
647 Transient Response of Elastic Structures Subjected to a Fluid Medium

Authors: Helnaz Soltani, J. N. Reddy

Abstract:

The presence of a fluid medium interacting with a structure can lead to failure of the structure. Since developing efficient computational models for fluid-structure interaction (FSI) problems has broad impact on realistic problems encountered in the aerospace, shipbuilding, and oil and gas industries, among others, there is an increasing need for methods to investigate the effect of the fluid domain on the structural response. A coupled finite element formulation of problems involving FSI is an accurate way to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach to study the transient response of structures interacting with a fluid medium. Since beams and plates are considered to be the fundamental elements of almost any structure, the developed method is applied to beam and plate benchmark problems in order to demonstrate its efficiency. The formulation is a combination of various structural theories and the solid-fluid interface boundary condition, which is used to represent the interaction between the solid and fluid regimes. Here, three different beam theories as well as three different plate theories are considered to model the solid medium, and the Navier-Stokes equations are used as the governing equations of the fluid domain. For each theory, a coupled set of equations is derived in which the element matrices of both regimes are calculated by Gaussian quadrature integration. The main feature of the proposed methodology is to model the fluid domain as an added mass: an external distributed force due to the presence of the fluid. We validate the accuracy of this formulation by means of numerical examples. Since the formulation presented in this study covers several theories from the literature, the applicability of the proposed approach is independent of structure geometry. The effect of varying parameters such as structure thickness ratio, fluid density, and immersion depth is studied using numerical simulations. The results indicate that the maximum vertical deflection of the structure is affected considerably by the presence of a fluid medium.
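The added-mass idea can be illustrated with a single-degree-of-freedom sketch (the numbers are illustrative, not from the paper's benchmarks): the fluid contributes extra inertia, which lowers the structure's natural frequency without changing its stiffness.

```python
import math

def natural_frequency(stiffness, structural_mass, added_mass=0.0):
    """omega = sqrt(k / (m_structure + m_added)); the fluid enters only as added mass."""
    return math.sqrt(stiffness / (structural_mass + added_mass))

omega_dry = natural_frequency(1000.0, 10.0)        # in vacuo
omega_wet = natural_frequency(1000.0, 10.0, 2.5)   # immersed: hypothetical added mass
```

In the finite element setting of the paper, the same effect appears at the matrix level: the added mass augments the structural mass matrix, shifting the transient response of the coupled system.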

Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response

Procedia PDF Downloads 540
646 Conflation Methodology Applied to Flood Recovery

Authors: Eva L. Suarez, Daniel E. Meeroff, Yan Yong

Abstract:

Current flooding risk modeling focuses on resilience, defined as the probability of recovery from a severe flooding event. However, the long-term damage to property and well-being caused by nuisance flooding and its long-term effects on communities are not typically included in risk assessments. An approach was developed to address the probability of recovering from a severe flooding event combined with the probability of community performance during a nuisance event. A consolidated model, namely the conflation flooding recovery (CFR) model, evaluates risk-coping mitigation strategies for communities based on the recovery time from catastrophic events, such as hurricanes or extreme surges, and from everyday nuisance flooding events. The CFR model assesses the variation contribution of each independent input and generates a weighted output that favors the distribution with minimum variation. This approach is especially useful if the input distributions have dissimilar variances. The CFR is defined as a single distribution resulting from the product of the individual probability density functions. The resulting conflated distribution resides between the parent distributions, and it infers the recovery time required by a community to return to basic functions, such as power, utilities, transportation, and civil order, after a flooding event. The CFR model is more accurate than averaging individual observations before calculating the mean and variance, or averaging the probabilities evaluated at the input values, which assigns the same weighted variation to each input distribution. The main disadvantage of these traditional methods is that the resulting measure of central tendency is exactly equal to the average of the input distributions' means, without the additional information provided by each individual distribution's variance.
When dealing with exponential distributions, such as resilience from severe flooding events and from nuisance flooding events, conflation results are equivalent to the weighted least squares method or best linear unbiased estimation. The combination of severe flooding risk with nuisance flooding improves flood risk management for highly populated coastal communities, such as in South Florida, USA, and provides a method to estimate community flood recovery time more accurately from two different sources: severe flooding events and nuisance flooding events.
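As a numerical sketch of the conflation idea described above (the normalized product of the input probability density functions), the snippet below conflates two exponential recovery-time distributions; the rate parameters are invented for illustration and are not figures from the study:

```python
import numpy as np

# Conflation: the normalized product of the input probability density
# functions. Both parents are exponential here, as in the severe/nuisance
# recovery setting; the assumed mean recovery times are illustrative only.
x = np.linspace(0.0, 30.0, 3001)                 # recovery time axis (days)
dx = x[1] - x[0]

def trapz(y):
    # simple trapezoidal rule on the fixed grid
    return float(((y[1:] + y[:-1]) * 0.5).sum() * dx)

lam_severe, lam_nuisance = 1 / 10.0, 1 / 3.0     # assumed means: 10 d and 3 d
f_severe = lam_severe * np.exp(-lam_severe * x)
f_nuisance = lam_nuisance * np.exp(-lam_nuisance * x)

product = f_severe * f_nuisance
conflated = product / trapz(product)             # normalize to integrate to 1

# For exponential parents the conflation is again exponential with the
# summed rate, so its mean (30/13 ~ 2.31 days here) lies below both parents.
mean_conflated = trapz(x * conflated)
print(round(mean_conflated, 2))
```

The conflated mean falling below both parent means illustrates how the conflation favors the lower-variance (nuisance) input.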

Keywords: community resilience, conflation, flood risk, nuisance flooding

Procedia PDF Downloads 61
645 Preliminary Evaluation of Decommissioning Wastes for the First Commercial Nuclear Power Reactor in South Korea

Authors: Kyomin Lee, Joohee Kim, Sangho Kang

Abstract:

The first commercial nuclear power reactor in South Korea, Kori Unit 1, a 587 MWe pressurized water reactor that started operation in 1978, was permanently shut down in June 2017 without an additional operating license extension. Kori Unit 1 is scheduled to become the first nuclear power unit in South Korea to enter the decommissioning phase. In this study, a preliminary evaluation of the decommissioning wastes for Kori Unit 1 was performed based on the following series of processes: first, the plant inventory was investigated based on various documents (i.e., equipment/component lists, construction records, general arrangement drawings). Second, the radiological conditions of systems, structures and components (SSCs) were established to estimate the amount of radioactive waste by waste classification. Third, the waste management strategies for Kori Unit 1, including waste packaging, were established. Fourth, the proper decontamination and dismantling (D&D) technologies were selected considering various factors. Finally, the amount of decommissioning waste by classification for Kori Unit 1 was estimated using the DeCAT program, which was developed by KEPCO-E&C for decommissioning cost estimation. The preliminary evaluation results have shown that the expected amounts of decommissioning wastes were less than about 2% and 8% of the total wastes generated (i.e., the sum of clean wastes and radwastes) before and after waste processing, respectively, and it was found that the majority of contaminated material was carbon or alloy steel and stainless steel. In addition, within the range of available information, the results of the evaluation were compared with results from various decommissioning experience data and international/national decommissioning studies. The comparison showed that the radioactive waste amounts from the Kori Unit 1 decommissioning were much less than those from the plants decommissioned in the U.S.
and were comparable to those from the plants in Europe. This result comes from the difference in disposal cost and clearance criteria (i.e., free release level) between the U.S. and other countries. The preliminary evaluation performed using the methodology established in this study will be useful as important input in establishing the decommissioning plan, including the decommissioning schedule and a waste management strategy covering the transportation, packaging, handling, and disposal of radioactive wastes.

Keywords: characterization, classification, decommissioning, decontamination and dismantling, Kori 1, radioactive waste

Procedia PDF Downloads 184
644 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique, and second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as Ubidecarenone or Ubiquinone-10, and Vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, by applying the first derivative, Q and E were alternately determined, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curve is linear over the concentration ranges of 10-60 and 5.6-70 µg/mL for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges of 0-100 µg/mL and 0-556 µg/mL respectively, was prepared in ethanol. The absorption spectra of the training sets were recorded in the spectral region of 230-300 nm. A gradient descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and a combined pharmaceutical tablet with excellent recoveries.
The ANN method was superior to the derivative technique, as the former determined both drugs under non-linear experimental conditions. It also offers rapidity and high accuracy while saving effort and cost, and its application requires no specialist analyst. Although the ANN technique needs a large training set, it is the method of choice in the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
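The zero-crossing principle behind the first method can be sketched numerically. The snippet below uses two synthetic Gaussian absorption bands as stand-ins for Q and E; all band positions, widths and concentrations are invented for illustration and do not reproduce the real spectra:

```python
import numpy as np

# Zero-crossing first-derivative sketch with two overlapping synthetic
# Gaussian bands standing in for Q and E (all parameters hypothetical).
wl = np.linspace(220.0, 320.0, 1001)             # wavelength grid (nm)

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

eps_q = band(275.0, 12.0)    # unit-concentration spectrum of "Q"
eps_e = band(250.0, 15.0)    # unit-concentration spectrum of "E"

# Beer's law: a mixture spectrum is a concentration-weighted sum.
c_q, c_e = 30.0, 40.0                            # hypothetical concentrations
mixture = c_q * eps_q + c_e * eps_e

d1_mix = np.gradient(mixture, wl)                # first-derivative spectra
d1_q = np.gradient(eps_q, wl)
d1_e = np.gradient(eps_e, wl)

# At a zero-crossing of E's derivative (its band maximum, ~250 nm), the
# mixture's D1 amplitude depends on Q alone, giving a linear calibration.
mask = (wl > 240.0) & (wl < 260.0)
idx = np.where(mask)[0][np.argmin(np.abs(d1_e[mask]))]
c_q_est = d1_mix[idx] / d1_q[idx]
print(round(c_q_est, 1))                         # recovers c_q = 30.0
```

Repeating the same read-out at a zero-crossing of Q's derivative would recover c_e, which is why each analyte can be determined at the crossing of the other.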

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 421
643 Partial Least Square Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.
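The way PLS builds latent components from correlated predictors can be sketched with a bare-bones PLS1 (NIPALS) implementation on synthetic data; the data-generating choices below (two latent factors, 200 predictors, 30 observations) are illustrative, not the study's data:

```python
import numpy as np

# Bare-bones PLS1 via NIPALS: extract score vectors with maximal
# covariance with y, deflate, and assemble regression coefficients.
def pls1_nipals(X, y, n_components):
    Xk, yk = X - X.mean(axis=0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w = w / np.linalg.norm(w)         # weight direction of max covariance
        t = Xk @ w                        # component scores
        p = Xk.T @ t / (t @ t)            # X loadings
        qk = (yk @ t) / (t @ t)           # y loading
        Xk = Xk - np.outer(t, p)          # deflate X ...
        yk = yk - qk * t                  # ... and y, then repeat
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q   # coefficients for centered X

# p >> n with strong inter-variable correlation: 200 predictors driven by
# two latent factors, 30 observations (all values synthetic).
rng = np.random.default_rng(0)
n, p = 30, 200
latent = rng.normal(size=(n, 2))
X = latent @ rng.normal(size=(2, p)) + 0.01 * rng.normal(size=(n, p))
y = latent @ np.array([1.5, -2.0])

B = pls1_nipals(X, y, n_components=2)
y_hat = (X - X.mean(axis=0)) @ B + y.mean()
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(r2, 3))    # close to 1: two components capture the signal
```

Ordinary least squares is not even defined here (200 coefficients from 30 observations), which is the failure mode the abstract attributes to OLS; PLS sidesteps it by regressing on a handful of latent components.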

Keywords: partial least square regression, genetics data, negative filter factors, high dimensional data, high correlated data

Procedia PDF Downloads 9
642 Formulating a Definition of Hate Speech: From Divergence to Convergence

Authors: Avitus A. Agbor

Abstract:

Numerous incidents, ranging from trivial to catastrophic, come to mind when one reflects on hate. The victims of these incidents belong to specific identifiable groups within communities. These experiences evoke discussions on Islamophobia, xenophobia, homophobia, anti-Semitism, racism, ethnic hatred, atheism, and other brutal forms of bigotry. Common to all of these is an invisible but potent force that drives them: hatred. Such hatred is usually fueled by a profound degree of intolerance (of diversity) and the zeal to impose on others beliefs and practices which the perpetrators consider to be the conventional norm. More importantly, the perpetuation of these hateful acts is the unfortunate outcome of an overplay of invectives and hate speech which, to a great extent, cannot be divorced from hate. From a legal perspective, acknowledging the existence of an undeniable link between hate speech and hate is quite easy. However, both within and outside legal scholarship, the notion of “hate speech” remains a conundrum: a phrase more easily explained through experience than by propounding a watertight definition that captures its entire essence and nature. The problem is further compounded by a few factors: first, within the international human rights framework, the notion of hate speech is not used. In limiting the right to freedom of expression, the ICCPR simply excludes specific kinds of speech (but does not refer to them as hate speech). Regional human rights instruments are not so different, except for subsequent developments in the European Union, in which the notion has been carefully delineated and a much clearer picture of what constitutes hate speech is now provided. The legal architecture in domestic legal systems clearly shows differences in approach and regulation, making the matter more difficult still: in short, what may be hate speech in one legal system may very well be acceptable speech in another.
Lastly, the cornucopia of academic voices on the issue of hate speech exudes this divergence. Yet, in the absence of a well-formulated and universally acceptable definition, it is important to consider how hate speech can be defined. Taking an evidence-based approach, this research looks into the issue of defining hate speech in legal scholarship and into how and why such a formulation is of critical importance in the prohibition and prosecution of hate speech.

Keywords: hate speech, international human rights law, international criminal law, freedom of expression

Procedia PDF Downloads 39
641 The Experiences and Needs of Fathers of Children With Cancer in Coping With the Child's Illness

Authors: Karina Lõbus, Silver Muld, Kadri Kööp, Mare Tupits

Abstract:

Aim: The aim of the research is to describe the experiences and needs of fathers of children with cancer in coping with the child's disease. Background: Today, about 80% of children diagnosed with a malignancy in developed countries survive. Despite the positive statistics, recovery is not always certain, and treatment is often very intensive and long-term. Cancer affects an increasing share of the population, which raises the demand for quality care, but the nature of the expected care is currently unclear. This topic is important for the development of professional practice, as nurses report that their knowledge of how to deal with the relatives of a patient with a difficult diagnosis is limited, and they would therefore like additional information to help them handle such situations. Design: Qualitative, empirical, descriptive research. Method: The data were collected through semi-structured interviews and analysed by the inductive content analysis method. Interviews were conducted during autumn 2020. Four subjects participated in the research. Results and Conclusions: The thesis revealed that fathers had different experiences and needs in dealing with the child's illness. Fathers' experiences of coping with the child's disease encompassed experiences with information, social relationships, healthcare, changes in personal health and experiences regarding the child. Regarding information, the respondents pointed out bad experiences with the availability of information and the ability to convey the necessary information. Experiences regarding social relationships included experiences with relatives and strangers. Regarding healthcare, fathers mentioned experiences related to the child's health and healthcare professionals. In regard to personal health, fathers pointed out negative changes in their mental and physical health. In relation to the child, the subjects revealed experiences regarding changed values, way of life and raising the child.
According to the research, fathers’ needs in relation to dealing with child's cancer included material, social, and spiritual needs. In regard to material needs, fathers pointed out the need for state assistance and the needs related to the surrounding environment. The needs concerning social belonging involved needs for a driving force and involvement in the treatment process. Regarding spiritual needs, fathers expressed mixed feelings towards the need for outside and professional help.

Keywords: father, coping, cancer, child, experience, need

Procedia PDF Downloads 107
640 Migrantional Entrepreneurship: Ethnography of a Journey That Changes Lives and the Territory

Authors: Francesca Alemanno

Abstract:

As a complex socio-spatial phenomenon, migration is a practice that also contains a strong imaginative component with respect to the place that, through displacement, a person wants to reach. Every migrant undertakes the journey with an image in mind of the displacement about to be made, of its implications and, finally, of the place or city in which they were going to, or would have liked to, land. Often, however, the imaginary built before departure does not fully correspond to the reality of landing; this discrepancy, which can be more or less wide, plays an important role in the relationship that is established with the territory and, therefore, in the evolution of the city itself. In this sense, the clash that occurs between the imagined and the real is one of the factors that can contribute to making a migrant's entry into a new territory critical. Starting from this perspective, the experiences of people who come from a migratory context and who, over time, manage to create a bond with the land of reception are taken into account as stories of resistance, as they are necessarily charged with a force capable of driving difficult and articulated processes of change. The phenomenon of migrant entrepreneurship considered here plays a very important role because it highlights the stories of many people who have managed to build such a close bond with the new territory of arrival that they can imagine, and then realize, the construction of their own business. The margin of contrast between the imagined city and the one that will be inhabited is observed through the narratives of those who, through the realization of their business projects, have acted directly on the reality in which they landed.
The margin of contrast that exists between the imagined city and the one actually inhabited, together with the implications that this may have on real life, has been observed and analyzed through a period of fieldwork, practicing ethnography, through the narratives of people who find themselves living in a new city as a result of a migration path, and has been contextualized with the support of semi-structured interviews and field notes. At the theoretical level, the research is inserted into a constructionist framework, particularly suited to detect and analyze processes of change, construction of the imaginary and its own modification, being able to capture the consequent repercussions of this process on the conceptual, emotional and practical level.

Keywords: entrepreneurship, imagination, migration, resistance

Procedia PDF Downloads 128
639 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application

Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier

Abstract:

Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and go up on their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms enabling objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC₃,ₖ), typical errors expressed as coefficients of variation (CV), and Bland-Altman methods to assess biases and precision. Reliability was assessed using ICC₃,₁ and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥0.878), with precision plots showing good agreement and precision. CV ranged from 0% (repetitions) to 33.3% (fatigue index) and was, on average, better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥0.949, CV ≤5.6%). Conclusion: These results confirm the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectivise CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.
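The ICC(3,1) statistic used for the reliability analysis above comes from a two-way ANOVA decomposition of a subjects-by-trials ratings table. A minimal sketch follows; the repeat measurements below are made up for illustration and are not study data:

```python
import numpy as np

# ICC(3,1): two-way mixed effects, consistency, single measurement.
def icc_3_1(ratings):
    n, k = ratings.shape                     # subjects x trials (or raters)
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)         # per-subject means
    col_means = ratings.mean(axis=0)         # per-trial means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)              # between-subjects mean square
    ms_err = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Hypothetical repeat counts (e.g. CRT repetitions over three occasions).
data = np.array([[9.0, 10.0, 9.0],
                 [14.0, 15.0, 15.0],
                 [20.0, 19.0, 21.0],
                 [11.0, 11.0, 12.0],
                 [17.0, 18.0, 17.0]])
print(round(icc_3_1(data), 3))
```

Because between-subject differences dominate the trial-to-trial noise in this toy table, the ICC comes out close to 1, the same qualitative pattern the abstract reports.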

Keywords: calf raise test, mobile application, validity, reliability

Procedia PDF Downloads 141
638 Analysis of Structural and Photocatalytical Properties of Anatase, Rutile and Mixed Phase TiO2 Films Deposited by Pulsed-Direct Current and Radio Frequency Magnetron Co-Sputtering

Authors: S. Varnagiris, M. Urbonavicius, S. Tuckute, M. Lelis, K. Bockute

Abstract:

Amongst many water purification techniques, TiO₂ photocatalysis is recognized as one of the most promising sustainable methods. It is known that for photocatalytical applications anatase is the most suitable TiO₂ phase; however, a heterojunction of anatase/rutile phases could improve the photocatalytical activity of TiO₂ even further. Despite the relative simplicity of TiO₂, different synthesis methods lead to highly dispersed crystal phases and photocatalytic activities in the corresponding samples. Accordingly, suggestions and investigations of various innovative methods of TiO₂ synthesis are still needed. In this work, the structural and photocatalytical properties of TiO₂ films deposited by the unconventional method of simultaneous co-sputtering from two magnetrons powered by pulsed direct current (pDC) and radio frequency (RF) power sources with negative bias voltage have been studied. More specifically, TiO₂ film thickness, microstructure, surface roughness, crystal structure, optical transmittance and photocatalytical properties were investigated by profilometer, scanning electron microscope, atomic force microscope, X-ray diffractometer and UV-Vis spectrophotometer, respectively. The proposed unconventional two-magnetron co-sputtering based TiO₂ film formation method showed very promising results for crystalline TiO₂ film formation while keeping process temperatures below 100 °C. XRD analysis revealed that, by using the proper combination of power source type and bias voltage, various TiO₂ phases (amorphous, anatase, rutile or their mixture) can be synthesized selectively. Moreover, a strong dependency between power source type and surface roughness, as well as between the bias voltage and the band gap value of the TiO₂ films, was observed. Interestingly, the TiO₂ films deposited by two-magnetron co-sputtering without bias voltage had one of the highest band gap values among the investigated films, yet their photocatalytic activity was superior to that of all other samples.
It is suggested that this is due to the dominating nanocrystalline anatase phase with various exposed surfaces, including the most photocatalytically active {001}.

Keywords: films, magnetron co-sputtering, photocatalysis, TiO₂

Procedia PDF Downloads 96
637 Formulation and Evaluation of Curcumin-Zn (II) Microparticulate Drug Delivery System for Antimalarial Activity

Authors: M. R. Aher, R. B. Laware, G. S. Asane, B. S. Kuchekar

Abstract:

Objective: Studies have shown that a new combination therapy of artemisinin derivatives and curcumin is unique, with potential advantages over known ACTs. In the present study, an attempt was made to prepare a microparticulate drug delivery system of the curcumin-Zn complex and evaluate it in combination with artemether for antimalarial activity. Material and method: The curcumin-Zn complex was prepared and encapsulated using sodium alginate. The microparticles thus obtained were further coated with various enteric polymers at different coating thicknesses to control the release. Microparticles were evaluated for encapsulation efficiency, drug loading and in vitro drug release. Roentgenographic studies were conducted in rabbits with a BaSO4-tagged formulation. The optimized formulation was screened for antimalarial activity using the P. berghei-infected mice survival test and % parasitemia inhibition, alone (three oral doses of 5 mg/day) and in combination with artemether (i.p. 500, 1000 and 1500 µg). Curcumin-Zn(II) was estimated in serum after oral administration to rats by spectrofluorometry. Result: Microparticles coated with cellulose acetate phthalate showed the most satisfactory and controlled release, with a time of 479 min for 60% drug release. X-ray images taken at different time intervals confirmed the retention of the formulation in the GI tract. Estimation of curcumin in serum by spectrofluorometry showed that the drug concentration is maintained in the blood for a longer time, with a tmax of 6 hours. The survival time (40 days post treatment) of mice infected with P. berghei was compared with survival after treatment with either the Curcumin-Zn(II) microparticles-artemether combination, the curcumin-Zn complex or artemether. Oral administration of Curcumin-Zn(II)-artemether prolonged the survival of P. berghei-infected mice. All the mice treated with Curcumin-Zn(II) microparticles (5 mg/day) plus artemether (1000 µg) survived for more than 40 days and recovered with no detectable parasitemia.
Administration of the Curcumin-Zn(II)-artemether combination reduced the parasitemia in mice by more than 90% compared with that in control mice for the first 3 days after treatment. Conclusion: The antimalarial activity of the curcumin-Zn-artemether combination was more pronounced than monotherapy. A single dose of 1000 µg of artemether in the curcumin-Zn combination gives complete protection in P. berghei-infected mice. This may reduce the chances of drug resistance in malaria management.

Keywords: formulation, microparticulate drug delivery, antimalarial, pharmaceutics

Procedia PDF Downloads 369
636 The Role of Climate-Smart Agriculture in the Contribution of Small-Scale Farming towards Ensuring Food Security in South Africa

Authors: Victor O. Abegunde, Melusi Sibanda

Abstract:

There is a need for a great deal of attention on small-scale agriculture for livelihood and food security because of the expanding global population. Small-scale agriculture has been identified as a major driving force of agricultural and rural development. However, the high dependence of the sector on natural and climatic resources has made small-scale farmers highly vulnerable to the adverse impacts of climate change, thereby necessitating the adoption of practices or concepts that help absorb shocks from changes in climatic conditions. This study examines the strategic position of small-scale farming in South African agriculture and in ensuring food security in the country, the vulnerability of small-scale agriculture to climate change, and the potential of the concept of climate-smart agriculture to tackle the challenge of climate change. The study carried out a systematic review of peer-reviewed literature on small-scale agriculture, climate change, food security and climate-smart agriculture, employing the realist review method. Findings revealed that increased productivity in the small-scale agricultural sector has great potential to improve the food security of households in South Africa and to reduce dependence on food purchases in a context of high food price inflation. Findings, however, also revealed that climate change affects small-scale subsistence farmers in terms of productivity, food security and family income, with the impact on smallholder livelihoods falling into three major groups: biological processes, environmental and physical processes, and impacts on health. Analysis of the literature consistently showed that climate-smart agriculture integrates the benefits of adaptation and resilience to climate change, mitigation, and food security. As a result, farming households adopting climate-smart agriculture will be better off than their counterparts who do not.
This study concludes that climate-smart agriculture could be a very good bridge linking the small-scale agricultural sector with agricultural productivity and development, which could bring about the much-needed food security.

Keywords: climate change, climate-smart agriculture, food security, small-scale

Procedia PDF Downloads 204
635 Estimation of Biomedical Waste Generated in a Tertiary Care Hospital in New Delhi

Authors: Priyanka Sharma, Manoj Jais, Poonam Gupta, Suraiya K. Ansari, Ravinder Kaur

Abstract:

Introduction: As much as health care is necessary for the population, so is the management of the biomedical waste produced. Biomedical waste is a wide terminology used for the waste material produced during the diagnosis, treatment or immunization of human beings and animals, in research, or in the production or testing of biological products. Biomedical waste management is a chain of processes from the point of generation of biomedical waste to its final disposal in the correct and proper way assigned for that particular type of waste. Any deviation from these processes leads to improper disposal of biomedical waste, which is itself a major health hazard. Proper segregation of biomedical waste is the key to biomedical waste management. Improper disposal of BMW can cause sharp injuries, which may lead to HIV, hepatitis B virus and hepatitis C virus infections. Therefore, proper disposal of BMW is of utmost importance. Health care establishments segregate biomedical waste and dispose of it as per the biomedical waste management rules in India. Objectives: This study was done to observe the current trends of biomedical waste generated in a tertiary care hospital in Delhi. Methodology: Biomedical waste management rounds were conducted in the hospital wards. Relevant details were collected and analysed, and the sites with maximum biomedical waste generation were identified. All the data were cross-checked with the common collection site. Results: The total amount of waste generated in the hospital from January to December 2014 was 639,547 kg, of which 70.5% was general (non-hazardous) waste and the remaining 29.5% was BMW, which consisted of highly infectious waste (12.2%), disposable plastic waste (16.3%) and sharps (1%). The sites producing the maximum quantity of biomedical waste were the Obstetrics and Gynaecology wards, with a total biomedical waste production of 45.8%, followed by the Paediatrics, Surgery and Medicine wards with 21.2%, 4.6% and 4.3%, respectively.
The maximum average biomedical waste generated was by the Obstetrics and Gynaecology ward with 0.7 kg/bed/day, followed by the Paediatrics, Surgery and Medicine wards with 0.29, 0.28 and 0.18 kg/bed/day, respectively. Conclusions: Hospitals should pay attention to the sites which produce a large amount of BMW to avoid improper segregation of biomedical waste. Also, induction and refresher training programs on biomedical waste management should be conducted to avoid improper management of biomedical waste. Healthcare workers should be made aware of the risks of poor biomedical waste management.
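One plausible reading of how a kg/bed/day rate is derived from the reported totals can be sketched as follows; the bed count below is an assumption chosen for illustration (the abstract does not report ward sizes):

```python
# Deriving a kg/bed/day rate from annual totals. The percentages come from
# the reported results; the bed count is hypothetical, not study data.
total_waste_kg = 639_547                 # all hospital waste in 2014
bmw_kg = total_waste_kg * 0.295          # BMW was 29.5% of the total
obg_bmw_kg = bmw_kg * 0.458              # O&G wards produced 45.8% of BMW
obg_beds = 340                           # assumed bed count (illustrative)
rate = obg_bmw_kg / (obg_beds * 365)
print(round(rate, 2))                    # kg/bed/day
```

With the assumed bed count, the rate lands near the 0.7 kg/bed/day the study reports for Obstetrics and Gynaecology; a different bed count would scale the rate proportionally.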

Keywords: biomedical waste, biomedical waste management, hospital-tertiary care, New Delhi

Procedia PDF Downloads 218
634 Extreme Heat and Workforce Health in Southern Nevada

Authors: Erick R. Bandala, Kebret Kebede, Nicole Johnson, Rebecca Murray, Destiny Green, John Mejia, Polioptro Martinez-Austria

Abstract:

Summer temperature data from Clark County was collected and used to estimate two different heat-related indexes: the heat index (HI) and the excess heat factor (EHF). These two indexes were used jointly with data on heat-related deaths in Clark County to assess the effect of extreme heat on the exposed population. The trends of the heat indexes were then analyzed for the 2007-2016 decade, and the correlation between heat wave episodes and the number of heat-related deaths in the area was estimated. The HI showed that this value has increased significantly in June, July, and August over the last ten years. The same trend was found for the EHF, which showed a clear increase in the severity and number of these events per year. The number of heat wave episodes increased from 1.4 per year during the 1980-2016 period to 1.66 per year during the 2007-2016 period. However, a different trend was found for heat-wave-event duration, which decreased from an average of 20.4 days during the trans-decadal period (1980-2016) to 18.1 days during the most recent decade (2007-2016). The number of heat-related deaths was also found to increase from 2007 to 2016, with 2016 having the highest number of heat-related deaths. Both the HI and the number of deaths showed a normal-like distribution for June, July, and August, with peak values reached in late July and early August. The average maximum HI values correlated better with the number of deaths registered in Clark County than the EHF, probably because HI uses the maximum temperature and humidity in its estimation, whereas EHF uses the average medium temperature. However, it is worth testing the EHF in the study zone because it has been reported to fit properly in the case of heat-related morbidity. For the overall period, 437 heat-related deaths were registered in Clark County, with 20% of the deaths occurring in June, 52% in July, 18% in August, and the remaining 10% in the other months of the year.
The most vulnerable subpopulation was people over 50 years old, who accounted for 76% of the registered heat-related deaths. Most of these cases were associated with pre-existing heart disease. The second most vulnerable subpopulation was young adults (20-50), who accounted for 23% of the heat-related deaths. These deaths were associated with alcohol/illegal drug intoxication.
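The HI referred to above is commonly computed with the Rothfusz regression used by the U.S. National Weather Service. A minimal sketch follows; the temperature and humidity values are illustrative, and the NWS low- and high-humidity adjustment terms are omitted for brevity:

```python
# Rothfusz regression for the heat index (NWS formulation); the low- and
# high-humidity adjustment terms are omitted in this sketch.
def heat_index_f(t_f, rh):
    """Heat index in degrees F from air temperature (F) and relative humidity (%)."""
    return (-42.379 + 2.04901523 * t_f + 10.14333127 * rh
            - 0.22475541 * t_f * rh - 6.83783e-3 * t_f ** 2
            - 5.481717e-2 * rh ** 2 + 1.22874e-3 * t_f ** 2 * rh
            + 8.5282e-4 * t_f * rh ** 2 - 1.99e-6 * t_f ** 2 * rh ** 2)

# A 95 degrees F afternoon at 30% relative humidity (illustrative values):
print(round(heat_index_f(95.0, 30.0), 1))
```

Because the regression combines maximum temperature with humidity, high-humidity days push the HI well above the air temperature, which is consistent with the abstract's note on why HI tracked mortality better than the temperature-only EHF.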

Keywords: heat, health, hazards, workforce

Procedia PDF Downloads 75
633 The Effects of Billboard Content and Visible Distance on Driver Behavior

Authors: Arsalan Hassan Pour, Mansoureh Jeihani, Samira Ahangari

Abstract:

Distracted driving has been one of the most persistent concerns surrounding our daily use of vehicles since the invention of the automobile. While much attention has recently been given to cell-phone-related distraction, commercial billboards along roads are also candidates for drivers' visual and cognitive distraction, as they may take drivers’ eyes off the road and their minds off the driving task to see, perceive, and think about the billboard’s content. Using a driving simulator and a head-mounted eye-tracking system, speed change, acceleration, deceleration, throttle response, collision, lane-changing, and offset-from-lane-center data, along with gaze fixation duration and frequency data, were collected in this study. Some 92 participants from a fairly diverse sociodemographic background drove on a simulated freeway in the Baltimore, Maryland area and were exposed to three different billboards to investigate the effects of billboards on driver behavior. Participants glanced at the billboards several times with different frequencies, the maximum of which occurred on the billboard with the highest cognitive load. About 74% of the participants did not look at billboards for more than two seconds at each glance, except for the billboard with a short visible area. Analysis of variance (ANOVA) was performed to find variations in driving behavior across the invisible, readable, and post-billboard areas. The results show a slight difference in speed, throttle, brake, steering velocity, and lane changing among the different areas. Brake force and deviation from the center of the lane increased in the readable area in comparison with the visible area, and speed increased right after each billboard. The results indicated that billboards have a significant effect on driving performance and visual attention, depending on their content and visibility status.
Generalized linear model (GLM) analysis showed no association of participants’ age or driving experience with gaze duration. However, the billboard's visible distance, the participant's gender, and the billboard content had a significant effect on gaze duration.
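The ANOVA comparison described above can be illustrated with a minimal hand-computed F statistic. This is a generic one-way ANOVA sketch on hypothetical speed samples for the three billboard zones; it is not the study's actual data or statistical-software output.

```python
# Hedged sketch: one-way ANOVA F statistic computed by hand, of the kind
# used to compare a driving measure (e.g. speed) across billboard zones.
# All sample values below are hypothetical.

def anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g)/len(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares (df = n - k)
    ss_within = sum(sum((x - sum(g)/len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical mean speeds (mph) in the three zones
invisible = [64.1, 65.0, 63.8, 64.5]
readable  = [61.2, 60.8, 62.0, 61.5]
post      = [65.3, 66.1, 64.9, 65.6]
f = anova_f([invisible, readable, post])
```

With these clearly separated samples the F statistic is large (well above any conventional critical value), while two groups with identical values give F = 0.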

Keywords: ANOVA, billboards, distracted driving, drivers' behavior, driving simulator, eye-tracking system, GLM

Procedia PDF Downloads 99
632 Experimental Study on Different Load Operation and Rapid Load-change Characteristics of Pulverized Coal Combustion with Self-preheating Technology

Authors: Hongliang Ding, Ziqu Ouyang

Abstract:

Given the basic national condition that China's energy structure is dominated by coal, realizing deep and flexible peak shaving of boilers in pulverized coal power plants, and thereby maximizing the consumption of renewable energy in the power grid, is of great significance for ensuring China's energy security and scientifically achieving the goals of carbon peaking and carbon neutrality. Using the promising self-preheating combustion technology, which has the potential for broad-load regulation and rapid response to load changes, this study investigated the different-load operation and rapid load-change characteristics of pulverized coal combustion. Four effective load-stabilization bases were proposed according to preheating temperature, coal gas composition (calorific value), combustion temperature (spatial mean temperature and mean-square temperature fluctuation coefficient), and flue gas emissions (CO and NOx concentrations), on the basis of which load-change rates were calculated to assess the load response characteristics. Owing to the improvement of the physicochemical properties of pulverized coal after preheating, stable ignition and combustion conditions could be obtained even at a low load of 25%, with a combustion efficiency of over 97.5%, and NOx emission reached its lowest at 50% load, with a concentration of 50.97 mg/Nm3 (@6% O2). Additionally, the load ramp-up stage displayed higher load-change rates than the load ramp-down stage, with maximum rates of 3.30 %/min and 3.01 %/min, respectively. Furthermore, the driving force formed by a large load step was conducive to an increase in the load-change rate. The rates based on the preheating indicator attained the highest value of 3.30 %/min, while the rates based on the combustion indicator peaked at 2.71 %/min.
In comparison, the combustion indicator accurately described the system’s combustion state and load changes, whereas the preheating indicator was easier to acquire and gave a higher load-change rate; hence, the appropriate evaluation strategy should depend on the actual situation. This study verified a feasible method for deep and flexible peak shaving of coal-fired power units, further providing basic data and technical support for future engineering applications.
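The load-change rates quoted above are average rates over a ramp between two steady loads. A minimal sketch of that calculation, with illustrative numbers rather than the authors' measured series:

```python
# Hedged sketch: average load-change rate in % of full load per minute,
# as used to compare ramp-up and ramp-down response. Values are illustrative.

def load_change_rate(load_start, load_end, minutes):
    """Average load-change rate (%/min) between two steady loads given in %."""
    return abs(load_end - load_start) / minutes

# e.g. ramping from 50% to 75% load in about 7.6 minutes gives roughly
# 3.3 %/min, the order of the maximum ramp-up rate reported in the abstract
rate_up = load_change_rate(50.0, 75.0, 7.6)
```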

Keywords: clean coal combustion, load-change rate, peak shaving, self-preheating

Procedia PDF Downloads 45
631 Tale of Massive Distressed Migration from Rural to Urban Areas: A Study of Mumbai City

Authors: Vidya Yadav

Abstract:

Migration is the demographic process that links rural to urban areas, generating or spurring the growth of cities. Evidence shows the role of the city as a site of production processes; it views the city as a centre of power and a centre of change. It has been observed that not only professionals want to settle in urban areas; rural labourers also come to cities for employment. These are people compelled to migrate to metropolises because of the lack of employment opportunities in their place of residence. However, the cities also fail to provide adequate employment because of limited job creation and capital-intensive industrialization, so these masses of incoming migrants are forced to take up whatever employment is available to them, particularly in urban informal activities. Ultimately, with such informal jobs, they are compelled to stay in slum areas, another form of deprived housing colony. The paper seeks to examine the evidence of poverty-induced migration from rural to urban areas (particularly to the urban agglomeration). The present paper utilizes the rich census migration data (D-Series) of 1991-2001. Results show that Mumbai remains the most attractive destination for migrants, who come mainly from the major states of Uttar Pradesh, Bihar, West Bengal, Jharkhand, Odisha, and Rajasthan. Migration is male-dominated and related mostly to employment for males and to marriage for females. The occupational absorption of migrants who moved for employment is cross-classified with educational status. Results show that illiterate males are primarily engaged in low-grade production and processing work, while illiterate females are engaged in service sectors; these are actually very low-grade services in India's urban informal sector, such as maid servants, domestic help, hawkers, vendors, or vegetable sellers.
Among those with higher educational levels, a small percentage of males and females were absorbed into professional or clerical work, though this percentage increased over the period 1991-2001.

Keywords: informal, job, migration, urban

Procedia PDF Downloads 256
630 Optimal Tetra-Allele Cross Designs Including Specific Combining Ability Effects

Authors: Mohd Harun, Cini Varghese, Eldho Varghese, Seema Jaggi

Abstract:

Hybridization crosses play a vital role in breeding experiments to evaluate the combining abilities of individual parental lines or crosses for the creation of lines with desirable qualities. There are various ways of obtaining progenies and further studying the combining ability effects of the lines taken in a breeding programme. Some of the most common methods are the diallel or two-way cross, the triallel or three-way cross, and the tetra-allele or four-way cross. These techniques help breeders improve quantitative traits of economic as well as nutritional importance in crops and animals. Amongst these methods, the tetra-allele cross provides extra information in terms of higher specific combining ability (sca) effects, and the hybrids thus produced exhibit individual as well as population buffering mechanisms because of their broad genetic base. Most common commercial hybrids in corn are either three-way or four-way cross hybrids. The tetra-allele cross has emerged as the most practical and acceptable scheme for the production of slaughter pigs with fast growth rate, good feed efficiency, and carcass quality, and tetra-allele crosses are widely used for exploiting heterosis in commercial silkworm production. Experimental designs involving tetra-allele crosses have been studied extensively in the literature, and the optimality of such designs has also been considered a researchable issue. In practical situations, it is advisable to include sca effects in the model, as this information is needed by the breeder to improve economically and nutritionally important quantitative traits. Thus, a model that provides information regarding specific traits by utilizing sca effects along with general combining ability (gca) effects may help breeders deal with the problem of various stresses. In this paper, a model for experimental designs involving tetra-allele crosses that incorporates both gca and sca effects has been defined.
Optimality aspects of such designs have been discussed with sca effects incorporated in the model. Orthogonality conditions have been derived for block designs, ensuring that contrasts among the gca effects can be estimated, after eliminating the nuisance factors, independently of the sca effects. A user-friendly SAS macro and a web solution (webPTC) have been developed for the generation and analysis of such designs.
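To make the combinatorics of four-way crosses concrete, the following hedged sketch enumerates the distinct tetra-allele crosses obtainable from a set of parental lines: each set of four lines admits exactly three distinct pairings into two single crosses, so p lines give 3 × C(p, 4) crosses. The function name and line labels are illustrative and are not part of the authors' SAS macro or webPTC.

```python
# Hedged sketch: enumerating distinct tetra-allele (four-way) crosses
# (A x B) x (C x D) from a set of inbred lines. Each 4-line subset yields
# the 3 distinct ways of splitting it into two unordered pairs.

from itertools import combinations

def tetra_allele_crosses(lines):
    """All distinct four-way crosses as ((a, b), (c, d)) pairs of single crosses."""
    crosses = []
    for a, b, c, d in combinations(lines, 4):
        # the three distinct pairings of {a, b, c, d} into two pairs
        crosses.append(((a, b), (c, d)))
        crosses.append(((a, c), (b, d)))
        crosses.append(((a, d), (b, c)))
    return crosses

crosses = tetra_allele_crosses(range(1, 7))   # p = 6 lines
# 3 * C(6, 4) = 3 * 15 = 45 distinct tetra-allele crosses
```

A complete design would then assign gca effects to the four lines and an sca effect to each cross; the enumeration above is just the index set such a model is built over.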

Keywords: general combining ability, optimality, specific combining ability, tetra-allele cross, webPTC

Procedia PDF Downloads 105
629 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure

Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff

Abstract:

Alzheimer's disease (AD) is the most common form of dementia, and although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years, the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in mouse studies, which has led to the hypothesis that interstitial fluid (ISF), in the basement membranes in the walls of cerebral arteries, provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net reverse flow of ISF inside the blood vessel wall relative to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism for net reverse flow has not yet been identified. Here, we aim to address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) by using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF and, therefore, solute drainage, based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating a reverse flow and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics.
Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.
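The basement-membrane layer described above is a thin-film flow problem. As a rough, hedged illustration of the scales involved, the textbook plane-Poiseuille flux between parallel plates can be computed as below; this is not the authors' analytical model, and all parameter values are illustrative.

```python
# Hedged sketch: thin-film (lubrication) estimate of pressure-driven flow
# in a fluid layer between parallel plates, as a stand-in for a basement
# membrane of the order of 100 nm thick. Textbook result, illustrative values.

def plane_poiseuille_flux(h, mu, dpdx):
    """Volume flux per unit width (m^2/s) for gap height h (m),
    viscosity mu (Pa s), and pressure gradient dpdx (Pa/m)."""
    return -h**3 * dpdx / (12.0 * mu)

# Illustrative numbers: ~100 nm gap, ISF viscosity close to water,
# pressure gradient of 1 kPa per mm (1e6 Pa/m) driving flow in +x
q = plane_poiseuille_flux(h=100e-9, mu=1.0e-3, dpdx=-1.0e6)
```

The cubic dependence on gap height is the point of interest here: small changes in membrane thickness or stiffness-driven gap changes alter the attainable flux dramatically.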

Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics

Procedia PDF Downloads 501