Search results for: linearly constrained minimum variance
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3566

356 Investigation of a New Approach "AGM" to Solve Complicated Nonlinear Partial Differential Equations in All Engineering Fields and Basic Science

Authors: Mohammadreza Akbari, Pooya Soleimani Besheli, Reza Khalili, Davood Domiri Danji

Abstract:

This work addresses the accuracy, capability, and power of a new approach to solving complicated nonlinear partial differential equations. Our purpose is to enhance the ability to solve such nonlinear differential equations, in basic science, engineering, and similar fields, with a simple and innovative approach. Most engineering systems behave nonlinearly in practice, and solving these problems analytically (rather than numerically) is difficult, complex, and sometimes impossible; some problems, such as fluid and gas waves, cannot even be solved numerically because no boundary conditions are available. Accordingly, this work presents an innovative approach, named Akbari-Ganji's Method (AGM), that can solve sets of coupled nonlinear differential equations (ODEs, PDEs) with high accuracy and a simple solution, as demonstrated by comparing the obtained solutions with those of a numerical method (fourth-order Runge-Kutta). Because of its coding system, AGM could prove valuable to researchers, professors, and students worldwide: with this software, complicated linear and nonlinear partial differential equations can be solved analytically, removing much of the difficulty of solving nonlinear differential equations. The advantages and abilities of this method (AGM) are as follows: (a) Nonlinear differential equations (ODEs, PDEs) are directly solvable by this method. (b) In most cases, AGM can solve equations without any nondimensionalization procedure, for any number of boundary or initial conditions. (c) AGM is always convergent for the given boundary or initial conditions. (d) Exponential, trigonometric, and logarithmic terms in the nonlinear differential equation require no Taylor expansion, which yields high solution precision. (e) AGM is very flexible in its coding system and can easily solve a variety of nonlinear differential equations with high, acceptable accuracy. (f) An important advantage of this method is the analytical solution, with high accuracy, of partial differential equations such as vibration in solids and waves in water and gas, using the minimum initial and boundary conditions needed to solve the problem. (g) It is important to present a general and simple approach for solving most highly nonlinear differential-equation problems in the engineering sciences, especially civil engineering, and to compare the output with a numerical method (fourth-order Runge-Kutta) and exact solutions.

Keywords: new approach, AGM, sets of coupled nonlinear differential equation, exact solutions, numerical

Procedia PDF Downloads 450
355 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models

Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble

Abstract:

Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design usually involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, and external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
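The multi-stage idea can be illustrated with a deliberately simplified model. The kinetic parameters, the single "flux split" variable, and the grid search below are invented for illustration; the abstract's actual method optimises a full flux balance model via dynamic programming, not this toy search.

```python
# Toy two-stage batch fermentation: in each stage, a fixed fraction of
# Michaelis-Menten substrate uptake goes to growth, the rest to product.
# We search over the stage splits and the switch time to maximise
# volumetric productivity (product formed / batch time).
def simulate(split1, split2, t_switch, t_end=20.0, dt=0.01):
    X, S, P = 0.05, 10.0, 0.0  # biomass, substrate, product (g/L), invented
    t = 0.0
    while t < t_end and S > 1e-6:
        split = split1 if t < t_switch else split2
        v = 1.0 * S / (S + 0.5) * X        # Michaelis-Menten uptake rate
        X += 0.4 * split * v * dt           # growth from the growth fraction
        P += 0.9 * (1 - split) * v * dt     # product from the remainder
        S -= v * dt
        t += dt
    return P / t  # volumetric productivity (g/L/h)

# Grid search over two-stage strategies (includes single-stage cases
# where split1 == split2, so two stages can only do at least as well).
best = max(
    (simulate(s1, s2, ts), s1, s2, ts)
    for s1 in (0.25, 0.5, 0.75, 1.0)
    for s2 in (0.0, 0.25, 0.5)
    for ts in (2.0, 4.0, 6.0)
)
```

The grid deliberately contains constant-split strategies, so comparing `best[0]` against any single-stage run shows the (non-negative) gain from allowing one flux switch.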

Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate

Procedia PDF Downloads 203
354 Impact of Informal Institutions on Development: Analyzing the Socio-Legal Equilibrium of Relational Contracts in India

Authors: Shubhangi Roy

Abstract:

Relational contracts (informal understandings not enforceable by law) are a common feature of most economies. However, their dominance is higher in developing countries, and such informality of economic sectors is often correlated with lower economic growth. The aim of this paper is to investigate whether informal arrangements, i.e., relational contracts, are a cause or a symptom of lower levels of economic and/or institutional development. The methodology involves an initial survey of 150 test subjects in Northern India. The subjects are all members of occupations in which they transact frequently, ensuring uniformity in transaction volume; however, they are from varied socio-economic backgrounds, ensuring sufficient variance in transaction values and allowing us to understand the relationship between the amount of money involved and the method of transaction used, if any. The questions are both quantitative and qualitative, aiming to observe behavior as well as the motivation behind it. An overarching similarity observed across all subjects' responses is that in an economy like India, with pervasive corruption and delayed litigation, economic participants have created alternative social sanctions to deal with non-performers. In a society that functions predominantly on caste, class and gender classifications, these sanctions can in fact be more cumbersome for a potential rule-breaker than the legal ramifications. Informality, therefore, is a symptom of weak formal regulatory enforcement and dispute settlement mechanisms. Additionally, the study bifurcates such informal arrangements into two separate systems: a) those that exist in addition to, and augment, a legal framework, creating an efficient socio-legal equilibrium; or b) those in conflict with the legal system in place. This categorization is an important step in regulating informal arrangements.
Instead of considering the entire gamut of such arrangements as counter-development, it helps decision-makers understand when to dismantle informal systems (the latter) and when to pivot around them (the former). The paper hypothesizes that social arrangements that support formal legal frameworks allow for cheaper enforcement of regulations, with a lower enforcement-cost burden on the state. On the other hand, norms that contradict legal rules undermine the formal framework: in the presence of such norms, infringing the law has no impact on the reputation of the business or individual beyond the punishment imposed under the law. This is especially exacerbated in the Indian legal system, where enforcement of penalties for non-performance of contracts is weak. In such a situation, individuals adhere more strictly to the social norm than to the legal norms, which greatly undermines the role of regulations. The paper concludes with recommendations that allow policy-makers and legal systems to encourage the former category of informal arrangements while discouraging norms that undermine legitimate policy objectives. Through this investigation, we expand our understanding of the tools of market development beyond regulations, allowing academics and policymakers to harness social norms for less disruptive and more lasting growth.

Keywords: distribution of income, emerging economies, relational contracts, sample survey, social norms

Procedia PDF Downloads 157
353 Development of Special Education in Moldova: Paradoxes of Inclusion

Authors: Liya Kalinnikova Magnusson

Abstract:

This ongoing research investigation focuses on the origins of special education for children with disabilities in Moldova and its development towards inclusion. The research is coordinated with related research on inclusion in Ukraine and other countries. The research interest in these issues in Moldova has several causes. The first is the intensive deconstruction of special education institutions in Moldova since 1989: enrolment fell from 11,400 students in 1989 to 5,800 students in 1996, corresponding to 1% of all school-age Moldovan learners. Although a large number of students were integrated into regular schools, and the dynamics of this process were uneven across the country (by contrast, exclusion rose in Trans-Dniester on the border of Moldova), the scale of the change was evident, and traditional special educational provision was in steady decline. The second reason is tied to the transitional challenges Moldova met under the pressure of economic liberalisation, which led the country to poverty. Deinstitutionalization of the entire state system took place in a situation of economic polarization of society. The level of social benefits diminished dramatically, increasing inequality. The most vulnerable, in terms of household income, were families with many children, children with disabilities, and children with health problems: every third child belonged to the poorest part of the population, and in 2000-2001, 87.4% of all families with children had incomes below the minimum wage.
Based on these considerations, the research question addresses the particular patterns of the origins of special education and its development towards inclusion in Moldova from 1980 to the present: what is the pattern of special education origins, and what are the particular arrangements of special education development towards inclusion against inequality? This is a qualitative study: relevant peer-reviewed sources connected to the research question, together with national documents of educational reforms towards inclusion, both retrospective and contemporary, were analysed using a content analysis approach. The study also utilises long-term statistics compiled by the respective international agencies through regular monitoring of the implementation of educational reforms. The main findings were organised into three broad themes: adoption of the Soviet pattern of special education, the ‘endemic stress’ of breaking the pattern, and ‘paradoxes of resolution’.

Keywords: special education, statistics, educational reforms, inclusion, children with disabilities, content analysis

Procedia PDF Downloads 153
352 DC Bus Voltage Ripple Control of Photovoltaic Inverter in Low Voltage Ride-Through Operation

Authors: Afshin Kadri

Abstract:

The use of Renewable Energy Resources (RES) as distributed generation (DG) units in distribution systems is growing. Connecting these generation units to existing AC distribution systems changes the structure and some operational aspects of these grids. Most RES require power-electronic interfaces for connection to AC systems, consisting of at least one DC/AC conversion unit. Nowadays, grid-connected inverters must be able to support the grid under voltage sag conditions. Two curves define this requirement: the magnitude of the reactive current component as a function of the voltage drop, and the minimum time for which the inverter must remain connected to the grid. This feature is named low-voltage ride-through (LVRT). Implementing this feature causes problems in the operation of the inverter, including an increased amplitude of high-frequency components in the injected current and, for inverters connected to photovoltaic panels, operation away from the maximum power point. An important phenomenon under these conditions is ripple in the DC bus voltage, which affects the operation of the inverter both directly and indirectly. Losses in the DC bus capacitors, which are electrolytic, raise their temperature and shorten their lifespan. In addition, if the inverter is connected directly to the photovoltaic panels and must perform maximum power point tracking, these ripples cause oscillations around the operating point and reduce the generated energy. The traditional method of eliminating these ripples is a bidirectional converter on the DC bus, working as a buck-boost converter that transfers the ripples to its own DC bus. Although this eliminates the ripples in the main DC bus, it cannot solve the reliability problem, because it still uses an electrolytic capacitor in its DC bus.
In this work, a control method is proposed that uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripples by injecting unbalanced currents into the grid. The proposed method works on the basis of constant power control: in addition to supporting the grid voltage amplitude, it stabilizes the grid frequency by injecting active power. The proposed method can also eliminate the DC bus ripples during deep voltage drops, which would otherwise push the reference current amplitude above the nominal current of the inverter. In these conditions, the amplitude of the injected current for the faulty phases is kept at the nominal value, and its phase, together with the phases and amplitudes of the other phases, is adjusted so that the ripples in the DC bus are eliminated, although the generated power decreases.
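The link between unbalanced current injection and ripple elimination can be shown numerically. With a sag on one phase, balanced currents produce a double-frequency oscillation in the instantaneous power (which appears as DC bus ripple), while shaping each phase current in proportion to the instantaneous phase voltages keeps the power constant. This is a textbook constant-power illustration with made-up per-unit values, not the fourth-leg control law proposed in the paper.

```python
import math

def phases(t, sag_a=0.5):
    """Three-phase voltages (per unit) with a sag on phase a."""
    w = 2 * math.pi * 50
    return [sag_a * math.cos(w * t),
            math.cos(w * t - 2 * math.pi / 3),
            math.cos(w * t + 2 * math.pi / 3)]

P_ref = 1.0
ts = [k * 1e-4 for k in range(200)]  # one 50 Hz cycle

# Balanced currents that ignore the sag -> 2*omega power oscillation.
p_bal = [sum(v * i for v, i in zip(phases(t), phases(t, sag_a=1.0)))
         for t in ts]

# Unbalanced currents shaped as i_k = P_ref * v_k / sum(v^2)
# -> instantaneous power is identically P_ref.
def p_const(t):
    v = phases(t)
    norm = sum(x * x for x in v)
    return sum(vk * (P_ref * vk / norm) for vk in v)

p_cp = [p_const(t) for t in ts]
ripple_bal = max(p_bal) - min(p_bal)  # large double-frequency ripple
ripple_cp = max(p_cp) - min(p_cp)     # essentially zero
```

With the 50% sag above, the balanced-current power swings between 1.0 and 1.5 p.u., while the voltage-proportional currents hold it flat at 1.0 p.u., at the cost of unbalanced current amplitudes.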

Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter

Procedia PDF Downloads 63
351 Sustainable Treatment of Vegetable Oil Industry Wastewaters by Xanthomonas campestris

Authors: Bojana Ž. Bajić, Siniša N. Dodić, Vladimir S. Puškaš, Jelena M. Dodić

Abstract:

Increasing industrialization, as a response to the demands of the consumer society, greatly exploits resources and generates large amounts of waste effluents in addition to the desired product. It is therefore a priority to implement technologies with maximum utilization of raw materials and energy, minimum generation of waste effluents, and/or their recycling (secondary use). Considering the process conditions and the nature of the raw materials used by the vegetable oil industry, its wastewaters can be used as substrates for biotechnological production, which requires large amounts of water. In this way, the waste effluents of one branch of industry become raw materials for another, which produces a new product, while wastewater pollution and the associated negative environmental impacts are reduced. Vegetable oil production generates wastewaters during the rinsing of oils and fats; these contain mainly fatty acid pollutants. Large amounts of waste effluents are generated especially in the processes of degumming, deacidification, deodorization and neutralization, throughout the whole year and in amounts proportional to the capacity of vegetable oil production. There are no known alternative applications for these wastewaters as raw materials for the production of marketable products. Since the literature contains no data on a potential negative impact of fatty acids on the metabolism of the bacterium Xanthomonas campestris, these wastewaters were considered potential raw materials for the biotechnological production of xanthan. In this research, vegetable oil industry wastewaters were used as the basis of cultivation media for xanthan production with Xanthomonas campestris ATCC 13951. The biosynthesis of xanthan on these media was examined to gain insight into the possibility of using such wastewaters in the aforementioned biotechnological process; it was also important to experimentally confirm the absence of substances with an inhibitory effect on the metabolism of the production microorganism. Xanthan content, rheological parameters of the cultivation media, carbon conversion into xanthan, and conversions of the nutrients most significant for biosynthesis (carbon, nitrogen and phosphorus sources) were determined as indicators of the success of biosynthesis. The results show that biotechnological production of the biopolymer xanthan by Xanthomonas campestris on cultivation media based on vegetable oil industry wastewaters simultaneously provides environmental preservation and economic benefits, offering a sustainable solution to the problem of wastewater treatment.

Keywords: biotechnology, sustainable bioprocess, vegetable oil industry wastewaters, Xanthomonas campestris

Procedia PDF Downloads 144
350 Teaching Accounting through Critical Accounting Research: The Origin and Its Relevance to the South African Curriculum

Authors: Rosy Makeresemese Qhosola

Abstract:

South Africa has maintained the effort to uphold the guiding principles of its constitution, such as equity, social justice, peace, freedom and hope, to mention but a few. These principles form the basis for the legislation and policies that guide all fields and departments of government. Education is one of those fields and is expected to abide by these principles as outlined in its policies. Accordingly, education policies and legislation outline their intention to ensure the development of students’ clear critical thinking capacity as well as their creative capacities, by creating learning contexts and opportunities that accommodate effective, learner-centred teaching and learning strategies compatible with the prescripts of the country's democratic constitution. The paper aims to explore and analyze the progress of conventional accounting in terms of its adherence to the effective use of the principles of good teaching, as per policy expectations in South Africa. The progress is traced by comparing conventional accounting to Critical Accounting Research (CAR), highlighting the history of accounting as intended in the South African curriculum and in CAR. The Critical Accounting Research framework is used as a lens and mode of teaching in this paper, since it can create a space for optimal learning of accounting, marked by the use of more learner-centred methods of teaching. The Curriculum of South Africa likewise emphasises learner-centred methods of teaching that encourage an active and critical approach to learning, rather than rote and uncritical learning of given truths. The study seeks to maintain that conventional accounting is in contrast with the principles of good teaching as per South African policy expectations.
The paper further maintains that moving beyond conventional accounting, and adhering to the effective use of good teaching, becomes possible when CAR forms the basis of teaching. Data are generated through Participatory Action Research, in which meetings, dialogues and discussions are conducted with focus groups consisting of lecturers, students, subject heads, coordinators, NGOs and departmental officials. The results are analysed through Critical Discourse Analysis, since it allows the texts produced by participants to be examined. The study concludes that any teacher who aspires to achieve in the teaching and learning of accounting should first meet the minimum requirements stated in NQF level 4, which form the basic principles of good teaching and are in line with Critical Accounting Research.

Keywords: critical accounting research, critical discourse analysis, participatory action research, principles of good teaching

Procedia PDF Downloads 293
349 Analysis of the Statistical Characterization of Significant Wave Data Exceedances for Designing Offshore Structures

Authors: Rui Teixeira, Alan O’Connor, Maria Nogal

Abstract:

The statistical theory of extreme events is a topic of growing interest in all fields of science and engineering. The economic and environmental changes currently experienced by the world emphasize the importance of dealing with extreme occurrences with improved accuracy. When it comes to the design of offshore structures, particularly offshore wind turbines, efficiently characterizing extreme events is of major relevance. Extreme events are commonly characterized by extreme value theory. As an alternative, accurate modeling of the tails of statistical distributions and characterization of low-occurrence events can be achieved with the Peak-Over-Threshold (POT) methodology, which allows for a more refined fit of the statistical distribution by truncating the data at a predefined threshold u. For mathematically approximating the tail of the empirical statistical distribution, the Generalised Pareto distribution is widely used. However, in the case of exceedances of significant wave data (H_s), the two-parameter Weibull distribution and the Exponential distribution, the latter being a specific case of the Generalised Pareto distribution, are frequently used as alternatives. The Generalised Pareto distribution, despite the existence of practical cases where it is applied, is not universally recognized as the adequate model for exceedances over a threshold u; references that treat it as a secondary solution in the case of significant wave data can be identified in the literature. In this framework, the current study tackles the discussion of the application of statistical models to characterize exceedances of wave data. Comparisons of the Generalised Pareto, the two-parameter Weibull and the Exponential distribution are presented for different values of the threshold u.
Real wave data obtained from four buoys along the Irish coast were used in the comparative analysis. Results show that the application of statistical distributions to characterize significant wave data needs to be addressed carefully: in each particular case, one of the statistical models mentioned fits the data better than the others, and different results are obtained depending on the value of the threshold u. Other variables of the fit, such as the number of points and the estimation of the model parameters, are analyzed and the respective conclusions drawn. Some guidelines on the application of the POT method are presented. Modeling the tail of the distributions proves to be, in the present case, a highly non-linear task which, due to its growing importance, should be addressed carefully for an efficient estimation of very low occurrence events.
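The POT idea can be sketched in a few lines for the Exponential special case (Generalised Pareto with shape parameter equal to zero), whose maximum-likelihood scale is simply the mean exceedance. The wave heights below are invented illustrative values, not the Irish buoy data used in the study.

```python
# Peak-Over-Threshold sketch: keep only exceedances over a threshold u,
# then fit the Exponential tail model (GPD with shape xi = 0) by MLE.
hs = [1.2, 0.8, 2.5, 3.1, 0.9, 2.8, 4.0, 1.1, 3.6, 2.2]  # Hs samples (m), invented

def pot_exponential_fit(data, u):
    """Return the exceedances over u and the MLE Exponential scale."""
    exceedances = [x - u for x in data if x > u]
    if not exceedances:
        raise ValueError("threshold u leaves no exceedances")
    scale = sum(exceedances) / len(exceedances)  # MLE scale = mean exceedance
    return exceedances, scale

exc, scale = pot_exponential_fit(hs, u=2.0)
# Under the fitted model: P(Hs > u + y | Hs > u) = exp(-y / scale)
```

In practice one would repeat this fit for several thresholds u (and for the Generalised Pareto and two-parameter Weibull models, e.g. via `scipy.stats`) and compare the resulting tail estimates, which is exactly the sensitivity the abstract describes.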

Keywords: extreme events, offshore structures, peak-over-threshold, significant wave data

Procedia PDF Downloads 261
348 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength

Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph

Abstract:

Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft and binder yarns or tows woven together into a preform, and the mechanical performance of the composite material is largely controlled by the properties of this preform. As a result, the bulk of recent textile research has focused on the design of high-strength preform architectures, while studies looking at optimisation of the weaving process have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters involved in causing this yarn damage are not fully understood, although recent studies indicate that poor yarn tension control may be an influencing factor: as tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified, and a study is needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study quantifies the variation of yarn tension throughout weaving and seeks to link the impact of tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving using a hand-held tension meter. Sections of the same yarns were subsequently cut from the loom and tensile tested, and the tensile strengths of pristine and tensioned yarns were compared to determine the induced weaving damage.
Yarns from bobbins at the rear of the creel were under the least amount of tension (0.5-2.0N) compared to yarns positioned at the front of the creel (1.5-3.5N). This increase in tension has been linked to the sharp turn in the yarn path between bobbins at the front of the creel and creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the greater tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a decreased tension (3.0-4.5N) compared to the lower shed yarns (4.0-5.5N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the greater tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variance of mechanical properties across the woven preform and ultimately the final composite part. The outcome from this study highlights the need for optimised yarn tension control during preform manufacture to minimize yarn-induced weaving damage.

Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage

Procedia PDF Downloads 222
347 Assessment of Environmental Mercury Contamination from an Old Mercury Processing Plant 'Thor Chemicals' in Cato Ridge, KwaZulu-Natal, South Africa

Authors: Yohana Fessehazion

Abstract:

Mercury is a prominent example of a heavy metal contaminant in the environment, and it has been extensively investigated for its potential health risk to humans and other organisms. In South Africa, massive mercury contamination occurred in the 1980s, when an England-based mercury reclamation plant relocated to Cato Ridge, KwaZulu-Natal Province, and discharged mercury waste into the Mngceweni River. This discharge resulted in mercury concentrations that exceeded acceptable levels in the Mngceweni River, the Umgeni River, and the hair of nearby villagers. The issue raised the alarm, and over the years several environmental assessments reported the dire environmental crisis resulting from Thor Chemicals (now known as Metallica Chemicals) and urged the immediate removal of the roughly 3,000 tons of mercury waste stored in the factory's storage facility for over two decades. Recently, the theft of some containers of toxic material from the Thor Chemicals warehouse, and the subsequent fire that ravaged the facility, put the factory further in the spotlight and escalated the urgency of removing the deadly mercury waste left behind. This project aims to investigate the mercury contamination leaking from the old Thor Chemicals mercury processing plant. The focus will be on sediments, water, terrestrial plants, and aquatic weeds, such as the prominent water hyacinth, in the nearby water systems of the Mngceweni River, the Umgeni River, and the Inanda Dam, used as bio-indicators and phytoremediators of mercury pollution. Samples will be collected in spring, around October, when conditions favour the microbial activity that methylates mercury incorporated in sediments and when some aquatic weeds, particularly water hyacinth, are blooming. Samples of soil, sediment, water, terrestrial plants, and aquatic weeds will be collected per sampling site from the point of source (Thor Chemicals), the Mngceweni River, the Umgeni River, and the Inanda Dam.
One-way analysis of variance (ANOVA) tests will be conducted to determine any significant differences in Hg concentration among the sampling sites, followed by a Least Significant Difference post hoc test to determine whether mercury contamination varies with distance from the source of pollution. Flow injection atomic spectrometry (FIAS) will also be used to compare mercury sequestration between different plant tissues (roots and stems). Principal component analysis is also envisaged to determine the relationship between the source of mercury pollution and each of the sampling points (the Umgeni and Mngceweni Rivers and the Inanda Dam). All Hg values will be expressed in µg/L or µg/g in order to compare the results with previous studies and regulatory standards. Sediments are expected to have relatively higher levels of Hg than soils, and aquatic macrophytes such as water hyacinth are expected to accumulate higher concentrations of mercury than terrestrial plants and crops.
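As a sketch of the planned analysis, the one-way ANOVA F statistic can be computed directly from its definition. The mercury readings below are invented for illustration only; they are not measured data from the study sites.

```python
# One-way ANOVA from scratch: F = (between-group mean square) /
# (within-group mean square). Large F => group means differ more than
# within-group scatter would explain.
def one_way_anova_f(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical Hg concentrations (ug/g) at three distances from the plant.
source = [4.1, 3.8, 4.4]   # near the point of source
river = [2.0, 2.3, 1.9]    # downstream river
dam = [0.9, 1.1, 1.0]      # Inanda Dam
f = one_way_anova_f([source, river, dam])
```

In practice this would be done with `scipy.stats.f_oneway` (which also returns the p-value), followed by the LSD pairwise comparisons the abstract mentions.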

Keywords: mercury, phytoremediation, Thor chemicals, water hyacinth

Procedia PDF Downloads 205
346 Comparison of Traditional and Green Building Designs in Egypt: Energy Saving

Authors: Hala M. Abdel Mageed, Ahmed I. Omar, Shady H. E. Abdel Aleem

Abstract:

This paper describes in detail a commercial green building that has been designed and constructed in Marsa Matrouh, Egypt. The balance between homebuilding and a sustainable environment has been taken into consideration in the design and construction of this building. The building consists of one floor with a height of 3 m and an area of 2810 m2, while the envelope area is 1400 m2. The building construction fulfills the natural ventilation requirements. Glass curtain walls make up about 50% of the building, and the window area is 300 m2. The glazing uses 6 mm greenish-gray tinted tempered glass as the outer lite, 6 mm safety glass as the inner lite, and 16 mm thick dehydrated air spaces. A visible light transmission of 50%, a solar factor of 0.26, a shading coefficient of 0.67 and a thermal insulation U-value of 1.3 W/m2.K are implemented to realize the performance requirements. Optimum electrical distribution for the lighting system, air conditioning and other electrical loads has been carried out. The power and quantity of each lamp type and the energy consumption of the lighting system are investigated. The design of the air-conditioning system is based on summer and winter outdoor conditions; ventilated and air-conditioned spaces and fresh air rates are determined. Variable Refrigerant Flow (VRF) is the air-conditioning system used in this building: the VRF outdoor units are located on the roof and connected to indoor units through refrigerant piping, and the indoor units are distributed in all building zones through ducts and air outlets to ensure efficient air distribution. The green building's energy consumption is evaluated month by month over one year and compared with the energy that would be consumed under non-green conditions, using the Hourly Analysis Program (HAP) model. The comparison shows that the total energy consumed per year in the green building is about 1,103,221 kWh, while the non-green consumption is about 1,692,057 kWh.
In other words, the green building total annual energy cost is reduced from 136,581 $ to 89,051 $. This means that, the energy saving and consequently the money-saving of this green construction is about 35%. In addition, 13 points are awarded by applying one of the most popular worldwide green energy certification programs (Leadership in Energy and Environmental Design “LEED”) as a rating system for the green construction. It is concluded that this green building ensures sustainability, saves energy and offers an optimum energy performance with minimum cost.
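As a quick arithmetic check, the figures quoted above imply the stated saving of about 35% for both energy and cost. This is a minimal sketch using only the numbers from the abstract:

```python
# Values reported in the abstract (kWh per year and annual cost in $).
green_kwh, nongreen_kwh = 1_103_221, 1_692_057
green_cost, nongreen_cost = 89_051, 136_581

energy_saving = 1 - green_kwh / nongreen_kwh   # fraction of energy saved
cost_saving = 1 - green_cost / nongreen_cost   # fraction of cost saved

print(f"energy saving: {energy_saving:.1%}")   # ≈ 34.8%
print(f"cost saving:   {cost_saving:.1%}")     # ≈ 34.8%
```

Both ratios come out near 34.8%, consistent with the "about 35%" claimed.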

Keywords: energy consumption, energy saving, green building, leadership in energy and environmental design, sustainability

Procedia PDF Downloads 291
345 Intraspecific Biochemical Diversity of Dalmatian Pyrethrum Across the Different Bioclimatic Regions of Its Natural Distribution Area

Authors: Martina Grdiša, Filip Varga, Nina Jeran, Ante Turudić, Zlatko Šatović

Abstract:

Dalmatian pyrethrum (Tanacetum cinerariifolium (Trevir.) Sch. Bip.) is a plant species that occurs naturally in the eastern Mediterranean. It is of immense economic importance as it synthesizes and accumulates the phytochemical compound pyrethrin. Pyrethrin consists of several monoterpene esters (pyrethrin I and II, cinerin I and II and jasmolin I and II), which have insecticidal and repellent activity through their synergistic action. In this study, 15 natural Dalmatian pyrethrum populations were sampled along their natural range in Croatia, Bosnia and Herzegovina and Montenegro to characterize and compare their pyrethrin profiles and to define the bioclimatic factors associated with the accumulation of each pyrethrin compound. Pyrethrins were extracted from the dried flower heads of Dalmatian pyrethrum using ultrasound-assisted extraction, and the amount of each compound was quantified using high-performance liquid chromatography coupled with DAD-UV/VIS detection. The biochemical data were subjected to analysis of variance, correlation analysis and multivariate analysis. Quantitative variability within and among populations was found, with population P15 Vranjske Njive, Podgorica having the significantly highest pyrethrin I content (66.47% of total pyrethrin content), while the highest levels of total pyrethrin were found in P14 Budva (1.27% of dry flower weight; DW), followed by P08 Korčula (1.15% DW). Based on the environmental conditions at the sampling sites of the populations, five bioclimatic groups were distinguished, referred to as A, B, C, D, and E, each with a distinct chemical profile. The first group (A) consisted of the northern Adriatic population P01 Vrbnik, Krk and the coastal central Adriatic population P06 Sevid, and generally differed significantly from the other bioclimatic groups in its higher average jasmolin II values (2.13% of total pyrethrin).
The second group (B) consisted of two central Adriatic island populations (P02 Telašćica, Dugi otok and P03 Žman, Dugi otok), while the remaining central Adriatic island populations were grouped in bioclimatic group C, which was characterized by the significantly highest average pyrethrin II (48.52% of total pyrethrin) and cinerin II (5.31% DW) content. The south Adriatic inland populations P10 Srđ and P11 Trebinje (Bosnia and Herzegovina) and the populations from Montenegro (P12 Grahovo, P13 Lovćen, P14 Budva and P15 Vranjske Njive, Podgorica) formed bioclimatic group E. This bioclimatic group was characterized by the highest average values for pyrethrin I (53.07% of total pyrethrin), total pyrethrin content (1.06% DW) and the ratio of pyrethrin I to pyrethrin II (1.85). Slightly lower (although not significantly so) values for the latter traits were detected in bioclimatic group D (the southern Adriatic island populations P07 Vis, P08 Korčula and P09 Mljet). A weak but significant correlation was found between the levels of some pyrethrin compounds and bioclimatic variables (e.g., BIO03 Isothermality and BIO04 Temperature Seasonality), which explains part of the variability observed in the populations studied. This suggests an interconnection between bioclimatic variables and biochemical profiles, either through the selection of adapted genotypes or through the ability of the species to alter the expression of biochemical traits in response to environmental changes.

Keywords: biopesticides, biochemical variability, pyrethrin, Tanacetum cinerariifolium

Procedia PDF Downloads 143
344 Modeling Spatio-Temporal Variation in Rainfall Using a Hierarchical Bayesian Regression Model

Authors: Sabyasachi Mukhopadhyay, Joseph Ogutu, Gundula Bartzke, Hans-Peter Piepho

Abstract:

Rainfall is a critical component of climate, governing vegetation growth and production and forage availability and quality for herbivores. However, reliable rainfall measurements are not always available, making it necessary to predict rainfall values for particular locations through time. Predicting rainfall in space and time can be a complex and challenging task, especially where the rain gauge network is sparse and measurements are not recorded consistently for all rain gauges, leading to many missing values. Here, we develop a flexible Bayesian model for predicting rainfall in space and time and apply it to Narok County, situated in southwestern Kenya, using data collected at 23 rain gauges from 1965 to 2015. Narok County encompasses the Maasai Mara ecosystem, the northern-most section of the Mara-Serengeti ecosystem, famous for its diverse and abundant large mammal populations and the spectacular migration of enormous herds of wildebeest, zebra and Thomson's gazelle. The model incorporates geographical and meteorological predictor variables, including elevation, distance to Lake Victoria and minimum temperature. We assess the efficiency of the model by comparing it empirically with the established Gaussian process, kriging, simple linear and Bayesian linear models. We use the model to predict total monthly rainfall and its standard error for all 5 × 5 km grid cells in Narok County. Using the Monte Carlo integration method, we estimate seasonal and annual rainfall and their standard errors for 29 sub-regions in Narok. Finally, we use the predicted rainfall to predict large herbivore biomass in the Maasai Mara ecosystem on a 5 × 5 km grid for both the wet and dry seasons. We show that herbivore biomass increases with rainfall in both seasons. The model can handle data from a sparse network of observations with many missing values and performs at least as well as, or better than, four established and widely used models on the Narok data set.
The model produces rainfall predictions consistent with expectations and in good agreement with blended station and satellite rainfall values. The predictions are precise enough for most practical purposes. The model is very general and applicable to other variables besides rainfall.
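The abstract does not give the model's equations, so as an illustration only, the sketch below fits a minimal Bayesian linear regression with a conjugate Gaussian prior on synthetic data, using the kinds of covariates listed (elevation, distance to Lake Victoria, minimum temperature). The covariate ranges and coefficients are invented; the full model would add spatio-temporal random effects.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n),                 # intercept
                     rng.uniform(1500, 2500, n), # elevation (m), hypothetical
                     rng.uniform(10, 200, n),    # distance to lake (km)
                     rng.uniform(8, 16, n)])     # min temperature (°C)
beta_true = np.array([50.0, 0.02, -0.1, 1.5])    # invented coefficients
y = X @ beta_true + rng.normal(0, 5.0, n)        # synthetic monthly rainfall (mm)

sigma2, tau2 = 25.0, 1e4                         # noise and prior variances
# Conjugate posterior N(mu, Sigma) with Sigma = (X'X/sigma2 + I/tau2)^-1
Sigma = np.linalg.inv(X.T @ X / sigma2 + np.eye(4) / tau2)
mu = Sigma @ X.T @ y / sigma2

# Predict rainfall (mean and standard error) for a new grid cell.
x_new = np.array([1.0, 2000.0, 80.0, 12.0])
pred_mean = x_new @ mu
pred_sd = np.sqrt(x_new @ Sigma @ x_new + sigma2)
print(pred_mean, pred_sd)
```

This mirrors the "predicted value plus standard error" output the abstract describes, not the authors' hierarchical model itself.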

Keywords: non-stationary covariance function, gaussian process, ungulate biomass, MCMC, maasai mara ecosystem

Procedia PDF Downloads 287
343 Improving Student Retention: Enhancing the First Year Experience through Group Work, Research and Presentation Workshops

Authors: Eric Bates

Abstract:

Higher education is recognised as being of critical importance in Ireland and has been linked as a vital factor to national well-being. Statistics show that Ireland has one of the highest rates of higher education participation in Europe. However, student retention and progression, especially in Institutes of Technology, is becoming an issue as rates of non-completion rise. Both within Ireland and across Europe, student retention is seen as a key performance indicator for higher education, and with these increasing rates the Irish higher education system needs to be flexible and adapt to the situation it now faces. The author is Programme Chair on a Level 6 full-time undergraduate programme, and experience to date has shown that first year undergraduate students take some time to identify themselves as a group within the setting of a higher education institute. Despite being part of a distinct class on a specific programme, some individuals can feel isolated as they take their first steps into higher education. Such feelings can contribute to students eventually dropping out. This paper reports on an ongoing initiative that aims to accelerate the bonding experience of a distinct group of first year undergraduates on a programme with a high rate of non-completion. This research sought to engage the students in dynamic interactions with their peers to quickly evolve a sense of group coherence. Two separate modules – a Research module and a Communications module – delivered by the researcher were linked across two semesters. Students were allocated into random groups, and each group was given a topic to research. There were six topics – essentially the six sub-headings of the DIT Graduate Attribute Statement. The research took place in a computer lab, and students also used the library. The output was a document that formed part of the submission for the Research module.
In the second semester, the groups then had to present their findings, with each student speaking for a minimum amount of time. Presentation workshops formed part of that module, and students were given the opportunity to practice their presentation skills. The presentations were video recorded to enable feedback. Although this was a small-scale study, preliminary results found a strong sense of coherence among this particular cohort, and feedback from the students was very positive. Other findings indicate that spreading the initiative across two semesters may have been an inhibitor. Future challenges include spreading such initiatives college-wide and indeed sector-wide.

Keywords: first year experience, student retention, group work, presentation workshops

Procedia PDF Downloads 222
342 Contextual Factors of Innovation for Improving Commercial Banks' Performance in Nigeria

Authors: Tomola Obamuyi

Abstract:

The banking system in Nigeria adopted innovative banking with the aim of enhancing financial inclusion, making financial services readily and cheaply available to the majority of the people, and contributing to the efficiency of the financial system. Some of the innovative services include Automatic Teller Machines (ATMs), National Electronic Fund Transfer (NEFT), Point of Sale (PoS), internet (Web) banking, Mobile Money payment (MMO), Real-Time Gross Settlement (RTGS) and agent banking, among others. The introduction of these payment systems is expected to increase bank efficiency and customer satisfaction, culminating in better performance for the commercial banks. However, opinions differ on the possible effects of the various innovative payment systems on the performance of commercial banks in the country. Thus, this study empirically determines how commercial banks use innovation to gain competitive advantage in the specific context of Nigeria's finance and business. The study also analyses the effects of financial innovation on the performance of commercial banks when different periods of analysis are considered. The study employed secondary data from 2009 to 2018, the period that witnessed aggressive innovation in the financial sector of the country. The Vector Autoregression (VAR) estimation technique was used to forecast the relative variance contribution of each random innovation to the variables in the VAR, to examine the effect of a standard deviation shock to one of the innovations on current and future values through the impulse response, and to determine the causal relationships between the variables (VAR Granger causality test). The study also employed Multi-Criteria Decision Making (MCDM) to rank the innovations against the performance criteria of Return on Assets (ROA) and Return on Equity (ROE). The entropy method of MCDM was used to determine which of the performance criteria better reflects the contributions of the various innovations in the banking sector.
On the other hand, the Range of Values (ROV) method was used to rank the contributions of the seven innovations to performance. The analysis covered the medium term (five years) and the long run (ten years) of innovations in the sector. The impulse response functions derived from the VAR system indicated that the response of ROA to the values of cheque, NEFT and POS transactions was positive and significant in the periods of analysis. The paper also confirmed with the entropy and range of values methods that, in the long run, both CHEQUE and MMO performed best, while NEFT was next in performance. The paper concluded that commercial banks would enhance their performance by continuously improving the services provided through Cheques, National Electronic Fund Transfer and Point of Sale, since these instruments have long-run effects on their performance. This will increase the confidence of the populace and encourage more usage/patronage of these services. The banking sector will in turn experience better performance, which will improve the economy of the country.
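The entropy weighting step mentioned above can be sketched as follows. The decision matrix here is synthetic, not the paper's data; rows stand for innovations (e.g. ATM, NEFT, PoS) and columns for the ROA and ROE criteria:

```python
import numpy as np

# Synthetic decision matrix: rows = innovations, columns = criteria (ROA, ROE).
D = np.array([[2.1, 11.0],
              [1.8, 14.0],
              [2.5,  9.0]])

P = D / D.sum(axis=0)                  # normalise each criterion column
k = 1.0 / np.log(D.shape[0])
E = -k * (P * np.log(P)).sum(axis=0)   # entropy of each criterion
w = (1 - E) / (1 - E).sum()            # more divergence -> larger weight
print(w)                               # weights for (ROA, ROE)
```

The criterion whose values differ most across alternatives gets the larger weight, which is how the entropy method decides which performance criterion "better reflects" the innovations' contributions.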

Keywords: Bank performance, financial innovation, multi-criteria decision making, vector autoregression

Procedia PDF Downloads 107
341 Synthesis, Physicochemical Characterization and Study of the Antimicrobial Activity of Chlorobutanol

Authors: N. Hadhoum, B. Guerfi, T. M. Sider, Z. Yassa, T. Djerboua, M. Boursouti, M. Mamou, F. Z. Hadjadj Aoul, L. R. Mekacher

Abstract:

Introduction and objectives: Chlorobutanol is a raw material mainly used as an antiseptic and antimicrobial preservative in injectable and ophthalmic preparations. The main objective of our study was the synthesis and evaluation of the antimicrobial activity of chlorobutanol hemihydrate. Material and methods: Chlorobutanol was synthesized by the nucleophilic addition of chloroform to acetone and identified by infrared absorption using a Spectrum One FTIR spectrometer, melting point, scanning electron microscopy and colorimetric reactions. The assay of the chlorobutanol active substance was carried out by determining the degradation products of chlorobutanol in a basic solution. The chlorobutanol obtained was subjected to bacteriological tests in order to study its antimicrobial activity. The antibacterial activity was evaluated against strains such as Escherichia coli (ATCC 25922), Staphylococcus aureus (ATCC 25923) and Pseudomonas aeruginosa (ATCC: American Type Culture Collection). The antifungal activity was evaluated against human pathogenic fungal strains, such as Candida albicans and Aspergillus niger, provided by the parasitology laboratory of the Hospital of Tizi-Ouzou, Algeria. Results and discussion: Chlorobutanol was obtained in an acceptable yield. The characterization tests of the product obtained showed a white and crystalline appearance (confirmed by scanning electron microscopy), solubilities (in water, ethanol and glycerol) and a melting temperature in accordance with the requirements of the European Pharmacopoeia. The colorimetric reactions indicated the presence of a trihalogenated carbon and an alcohol function. The spectral identification (IR) showed the presence of the characteristic chlorobutanol peaks and confirmed the structure of the latter. The microbiological study revealed an antimicrobial effect on all strains tested (Staphylococcus aureus (MIC = 1250 µg/ml), E. coli (MIC = 1250 µg/ml), Pseudomonas aeruginosa (MIC = 1250 µg/ml), Candida albicans (MIC = 2500 µg/ml), Aspergillus niger (MIC = 2500 µg/ml)), with MIC values close to literature data. Conclusion: Overall, the synthesized chlorobutanol satisfied the requirements of the European Pharmacopoeia and possesses antibacterial and antifungal activity; nevertheless, the purification step needs particular attention in order to eliminate as many impurities as possible.

Keywords: antimicrobial agent, bacterial and fungal strains, chlorobutanol, MIC, minimum inhibitory concentration

Procedia PDF Downloads 158
340 Phonological Processing and Its Role in Pseudo-Word Decoding in Children Learning to Read Kannada Language between 5.6 to 8.6 Years

Authors: Vangmayee. V. Subban, Somashekara H. S, Shwetha Prabhu, Jayashree S. Bhat

Abstract:

Introduction and need: Phonological processing is critical in learning to read alphabetic and non-alphabetic languages. However, its role in learning to read Kannada, an alphasyllabary, is equivocal. The literature has focused on the developmental role of phonological awareness in reading. To the best of the authors' knowledge, the roles of phonological memory and phonological naming have not been addressed in the alphasyllabary Kannada. Therefore, there is a need to evaluate the comprehensive role of phonological processing skills in Kannada word decoding during the early years of schooling. Aim and objectives: The present study aimed to explore phonological processing abilities and their role in learning to decode pseudowords in children learning to read the Kannada language during the initial years of formal schooling, between 5.6 and 8.6 years. Method: In this cross-sectional study, 60 typically developing Kannada-speaking children, 20 each from Grade I, Grade II, and Grade III in the age ranges 5.6 to 6.6 years, 6.7 to 7.6 years and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. Phonological processing abilities were assessed using an assessment tool specifically developed to address the objectives of the present research. The assessment tool was content validated by subject experts and had good inter- and intra-subject reliability. Phonological awareness was assessed at the syllable level using syllable segmentation, blending, and syllable stripping at initial, medial and final positions. Phonological memory was assessed using a pseudoword repetition task, and phonological naming was assessed using rapid automatized naming of objects. Both the phonological awareness and phonological memory measures were scored for accuracy of response, whereas Rapid Automatized Naming (RAN) was scored for total naming speed.
Results: The mean score comparison using one-way ANOVA revealed a significant difference (p ≤ 0.05) between the groups on all the measures of phonological awareness, pseudoword repetition, rapid automatized naming, and pseudoword reading. Subsequent post-hoc grade-wise comparison using the Bonferroni test revealed significant differences (p ≤ 0.05) between each of the grades for all the tasks, except (p ≥ 0.05) for syllable blending, syllable stripping, and pseudoword repetition between Grade II and Grade III. Pearson correlations revealed highly significant positive correlations (p = 0.000) between all the variables except phonological naming, which had significant negative correlations. However, the correlation coefficients were higher for the phonological awareness measures than for the others. Hence, phonological awareness was chosen as the first independent variable to enter the hierarchical regression equation, followed by rapid automatized naming and, finally, pseudoword repetition. The regression analysis revealed syllable awareness as the single most significant predictor of pseudoword reading, explaining a unique variance of 74%, and there was no significant change in R² when RAN and pseudoword repetition were added subsequently to the regression equation. Conclusion: The present study concluded that syllable awareness matures completely by Grade II, whereas phonological memory and phonological naming continue to develop beyond Grade III. Amongst the phonological processing skills, phonological awareness, especially syllable awareness, is more crucial for word decoding than phonological memory and naming during the initial years of schooling.
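The hierarchical (sequential) regression procedure described above — entering syllable awareness first, then RAN, then pseudoword repetition, and inspecting the change in R² at each step — can be sketched with synthetic data. The numbers below are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
awareness = rng.normal(0, 1, n)
ran = rng.normal(0, 1, n)
repetition = rng.normal(0, 1, n)
# Synthetic outcome in which awareness dominates, mimicking the reported pattern.
reading = 0.9 * awareness + rng.normal(0, 0.4, n)

def r_squared(y, *predictors):
    """Ordinary least squares R^2 for y regressed on the given predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r1 = r_squared(reading, awareness)                   # step 1: awareness only
r2 = r_squared(reading, awareness, ran)              # step 2: + RAN
r3 = r_squared(reading, awareness, ran, repetition)  # step 3: + repetition
print(r1, r2 - r1, r3 - r2)   # step-1 R^2 and the (small) increments
```

Because the later predictors add little beyond the first, the R² increments stay near zero — the same "no significant change in R²" pattern the abstract reports.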

Keywords: phonological awareness, phonological memory, phonological naming, phonological processing, pseudo-word decoding

Procedia PDF Downloads 169
339 The Application of Animal Welfare Certification System for Farm Animal in South Korea

Authors: Ahlyum Mun, Ji-Young Moon, Moon-Seok Yoon, Dong-Jin Baek, Doo-Seok Seo, Oun-Kyong Moon

Abstract:

There is growing public concern over the standards of farm animal welfare, together with higher standards of food safety. In addition, the recent low incidence of Avian Influenza in laying hens among certified farms is receiving attention. In this study, we introduce the animal welfare systems covering the rearing, transport and slaughter of farm animals in South Korea. The concept of animal welfare farm certification is based on ensuring the five freedoms of animals. Animal welfare is also achieved by monitoring the condition of the environment, including shelter and resting areas, feeding and water, and the care of animal health. The certification of farm animal welfare is handled by the Animal Protection & Welfare Division of the Animal and Plant Quarantine Agency (APQA). Following the full amendment of the Animal Protection Law in 2011, the animal welfare farm certification program has been implemented since 2012. The certification system has expanded to cover laying hen, swine, broiler, beef cattle, dairy cow, goat and duck farms. Livestock farmers who want to be certified must apply for certification at the APQA. Upon receipt of the application, the APQA notifies the applicant of the detailed schedule of the on-site examination after reviewing the documents, and conducts the on-site inspection according to the evaluation criteria of the welfare standard. If the on-site audit results meet the certification criteria, the APQA issues a certificate. The production process of certified farms is inspected at least once a year for follow-up management. As of 2017, a total of 145 farms had been certified (95 laying hen farms, 12 swine farms, 30 broiler farms and 8 dairy cow farms). In addition, animal welfare transportation vehicles and slaughterhouses have been designated since 2013, and currently 6 slaughterhouses have been certified.
The Animal Protection Law has been amended so that animal welfare certification marks can be affixed only to livestock products produced by animal welfare farms, transported in animal welfare vehicles and slaughtered at animal welfare slaughterhouses. This whole process, covering rearing, transportation and slaughter, completes the farm animal welfare system. The APQA established its second 5-year animal welfare plan (2014-2019), which includes setting a minimum standard of animal welfare applicable to all livestock farms, transportation vehicles and slaughterhouses. In accordance with this plan, we will promote the farm animal welfare policy in order to truly advance the Korean livestock industry.

Keywords: animal welfare, farm animal, certification system, South Korea

Procedia PDF Downloads 387
338 Ensuring Continuity in Subcutaneous Depot Medroxy Progesterone Acetate (DMPA-SC) Contraception Service Provision Using Effective Commodity Management Practices

Authors: Oluwaseun Adeleke, Samuel O. Ikani, Fidelis Edet, Anthony Nwala, Mopelola Raji, Simeon Christian Chukwu

Abstract:

Background: The Delivering Innovations in Selfcare (DISC) project aims to increase access to self-care options for women of reproductive age, starting with self-injected subcutaneous depot medroxyprogesterone acetate (DMPA-SC) contraception services. However, the project has faced challenges in ensuring the continuous availability of the commodity in health facilities. Although most states in the country rely on the Federal Ministry of Health for supplies, some are gradually funding the procurement of Family Planning (FP) commodities. This attempt is, however, often accompanied by procurement delays and purchases inadequate to meet demand. This dilemma was further exacerbated by the commencement of demand generation activities by the project in supported states, which sharply increased commodity utilization rates and resulted in receding stock and occasional service disruptions. Strategies: The project deployed various strategies to ensure the continuous availability of commodities. These include facilitating inter-facility transfers, tracking commodity utilization monthly, alerting the relevant authorities when stock levels reach a minimum, and supporting state-level procurement of DMPA-SC commodities through catalytic interventions. Results: Effective monitoring of commodity inventory at the facility level and strategic engagement with federal and state-level logistics units have proven successful in mitigating stock-outs of commodities. They have helped secure up to 13,000 units of DMPA-SC commodities from federal logistics units and enabled state units to prioritize supported sites. This has ensured the continuity of DMPA-SC services and an increasing trend in the practice of self-injection. Conclusion: A functional supply chain is crucial to achieving commodity security, and without it, health programs cannot succeed.
Stakeholder engagement, stock management and catalytic interventions have provided both short- and long-term measures to mitigate stock-outs and ensured a consistent supply of commodities to clients.
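The monthly stock-tracking rule described above can be sketched as a simple reorder check. This is a hypothetical illustration: the threshold of two months of cover and the helper names are assumptions, since the project's actual thresholds are not given in the abstract.

```python
def months_of_stock(stock_on_hand: int, avg_monthly_consumption: int) -> float:
    """Months of cover remaining at the current consumption rate."""
    if avg_monthly_consumption == 0:
        return float("inf")
    return stock_on_hand / avg_monthly_consumption

def needs_restock(stock_on_hand: int, avg_monthly_consumption: int,
                  min_months: float = 2.0) -> bool:
    """Flag a facility for resupply or inter-facility transfer
    once cover falls below the (hypothetical) minimum."""
    return months_of_stock(stock_on_hand, avg_monthly_consumption) < min_months

print(needs_restock(150, 100))  # True: only 1.5 months of DMPA-SC units left
print(needs_restock(400, 100))  # False: 4 months of cover
```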

Keywords: family planning, contraception, DMPA-SC, self-care, self-injection, commodities, stock-out

Procedia PDF Downloads 77
337 Private Coded Computation of Matrix Multiplication

Authors: Malihe Aliasgari, Yousef Nejatbakhsh

Abstract:

The era of Big Data and the immensity of real-life datasets compel computation tasks to be performed in a distributed fashion, where the data is dispersed among many servers that operate in parallel. However, massive parallelization leads to computational bottlenecks due to faulty servers and stragglers. Stragglers are a few slow or delay-prone processors that can bottleneck the entire computation, because one has to wait for all the parallel nodes to finish. The problem of straggling processors has been well studied in the context of distributed computing. Recently, it has been pointed out that, for the important case of linear functions, it is possible to improve over repetition strategies in terms of the tradeoff between performance and latency by carrying out linear precoding of the data prior to processing. The key idea is that, by employing suitable linear codes operating over fractions of the original data, a function may be completed as soon as a sufficient number of processors, depending on the minimum distance of the code, have completed their operations. Matrix-matrix multiplication on practically large data sets faces computational and memory-related difficulties, which makes it necessary to carry out such operations on distributed computing platforms. In this work, we study the problem of distributed matrix-matrix multiplication W = XY under storage constraints, i.e., when each server is allowed to store a fixed fraction of each of the matrices X and Y. This operation is a fundamental building block of many science and engineering fields, such as machine learning, image and signal processing, wireless communication and optimization. Both non-secure and secure matrix multiplication are studied.
We study the setup in which the identity of the matrix of interest should be kept private from the workers, and derive the recovery threshold of the colluding model, that is, the number of workers that need to complete their task before the master server can recover the product W. We also consider the problem of secure and private distributed matrix multiplication W = XY in which the matrix X is confidential, while the matrix Y is selected in a private manner from a library of public matrices. We present the best currently known trade-off between communication load and recovery threshold. In other words, we design an achievable PSGPD scheme for any arbitrary privacy level by concatenating a robust PIR scheme for arbitrary colluding workers and private databases with the proposed SGPD code, which provides a smaller computational complexity at the workers.
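The underlying straggler-coding idea can be illustrated with the standard (non-secure, non-private) polynomial-code construction for W = XY; this is the well-known baseline scheme, not the paper's PSGPD code. X is split into m row-blocks and Y into n column-blocks, each worker evaluates the encoded blocks at its own point, and any m·n completed workers suffice to recover W, so the remaining workers may straggle:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 2                            # row-blocks of X, column-blocks of Y
X = rng.normal(size=(4, 3))
Y = rng.normal(size=(3, 4))
Xb = np.split(X, m, axis=0)            # each worker stores a 1/m fraction of X
Yb = np.split(Y, n, axis=1)            # and a 1/n fraction of Y

workers = 6                            # recovery threshold is m*n = 4
zs = np.arange(1, workers + 1, dtype=float)
results = {}
for t, z in enumerate(zs):
    # Worker t evaluates the encoded polynomials and multiplies once.
    Xenc = sum(Xb[i] * z**i for i in range(m))
    Yenc = sum(Yb[j] * z**(m * j) for j in range(n))
    results[t] = Xenc @ Yenc           # degree m*n - 1 polynomial in z

# Suppose workers 0 and 3 straggle: recover from any m*n = 4 of the rest.
fast = [1, 2, 4, 5]
V = np.vander(zs[fast], m * n, increasing=True)   # Vandermonde interpolation
stacked = np.stack([results[t].ravel() for t in fast])
coeffs = np.linalg.solve(V, stacked)              # polynomial coefficients
# Coefficient of z^(i + m*j) is block (i, j) of W = X @ Y.
blocks = [[coeffs[i + m * j].reshape(2, 2) for j in range(n)] for i in range(m)]
W = np.block(blocks)
assert np.allclose(W, X @ Y)
print("recovered W from 4 of 6 workers")
```

The secure/private schemes discussed in the abstract build on this evaluation-and-interpolation structure by adding random masking terms, which raises the recovery threshold.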

Keywords: coded distributed computation, private information retrieval, secret sharing, stragglers

Procedia PDF Downloads 111
336 Effect of Malnutrition at Admission on Length of Hospital Stay among Adult Surgical Patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia: Prospective Cohort Study, 2022

Authors: Yoseph Halala Handiso, Zewdi Gebregziabher

Abstract:

Background: Malnutrition in hospitalized patients remains a major public health problem in both developed and developing countries. Despite the fact that malnourished patients are more prone to longer hospital stays, there is limited data regarding the magnitude of malnutrition and its effect on length of stay among surgical patients in Ethiopia, while nutritional assessment is also often a neglected component of health service practice. Objective: This study aimed to assess the prevalence of malnutrition at admission and its effect on the length of hospital stay among adult surgical patients in Wolaita Sodo University Comprehensive Specialized Hospital, South Ethiopia, 2022. Methods: A facility-based prospective cohort study was conducted among 398 adult surgical patients admitted to the hospital. Participants were chosen using a convenience sampling technique. The Subjective Global Assessment (SGA) was used to determine, within 48 hours of admission, the nutritional status of patients with a minimum stay of 24 hours. Data were collected using the Open Data Kit (ODK) version 2022.3.3 software, while Stata version 14.1 software was employed for statistical analysis. The Cox regression model was used to determine the effect of malnutrition on the length of hospital stay (LOS) after adjusting for several potential confounders recorded at admission. The adjusted hazard ratio (AHR) with a 95% confidence interval was used to show the effect of malnutrition. Results: The prevalence of hospital malnutrition at admission was 64.32% (95% CI: 59%-69%) according to the SGA classification. Adult surgical patients who were malnourished at admission had a higher median LOS (12 days; 95% CI: 11-13) than well-nourished patients (8 days; 95% CI: 8-9); that is, patients who were malnourished at admission had a reduced chance of discharge with improvement (prolonged LOS) (AHR: 0.37, 95% CI: 0.29-0.47) compared with well-nourished patients.
The presence of comorbidity (AHR: 0.68, 95% CI: 0.50-0.90), polymedication (AHR: 0.69, 95% CI: 0.55-0.86), and a history of admission (AHR: 0.70, 95% CI: 0.55-0.87) within the previous five years were found to be significant covariates of the length of hospital stay (LOS). Conclusion: The magnitude of hospital malnutrition at admission was found to be high. Malnourished patients at admission had a higher risk of prolonged length of hospital stay than well-nourished patients. The presence of comorbidity, polymedication, and a history of admission were found to be significant covariates of LOS. All stakeholders should give attention to reducing the magnitude of malnutrition and its covariates in order to reduce the burden of prolonged LOS.

Keywords: effect of malnutrition, length of hospital stay, surgical patients, Ethiopia

Procedia PDF Downloads 53
335 Assessing the Severity of Traffic Related Air Pollution in South-East London to School Pupils

Authors: Ho Yin Wickson Cheung, Liora Malki-Epshtein

Abstract:

Outdoor air pollution presents a significant challenge for public health globally, especially in urban areas, with road traffic acting as the primary contributor to air pollution. Several studies have documented the antagonistic relation between traffic-related air pollution (TRAP) and the impact on health, especially to the vulnerable group of population, particularly young pupils. Generally, TRAP could cause damage to their brain, restricting the ability of children to learn and, more importantly, causing detrimental respiratory issues in later life. Butlittle is known about the specific exposure of children at school during the school day and the impact this may have on their overall exposure to pollution at a crucial time in their development. This project has set out to examine the air quality across primary schools in South-East London and assesses the variability of data found based on their geographic location and surroundings. Nitrogen dioxide, PM contaminants, and carbon dioxide were collected with diffusion tubes and portable monitoring equipment for eight schools across three local areas, that are Greenwich, Lewisham, and Tower Hamlets. This study first examines the geographical features of the schools surrounding (E.g., coverage of urban road structure and green infrastructure), then utilize three different methods to capture pollutants data. Moreover, comparing the obtained results with existing data from monitoring stations to understand the differences in air quality before and during the pandemic. Furthermore, most studies in this field have unfortunately neglected human exposure to pollutants and calculated based on values from fixed monitoring stations. Therefore, this paper introduces an alternative approach by calculating human exposure to air pollution from real-time data obtained when commuting within related areas (Driving routes and field walking). 
It was found that schools located very close to motorways do not necessarily suffer the highest levels of air pollution contaminants; schools near the most congested routes can experience equally poor air quality. Monitored results also indicate that annual air pollution values decreased slightly during the pandemic; however, the majority of the data still exceed the WHO guidelines. Finally, total exposure to NO2 while commuting was calculated for the two selected routes: 21,730 and 28,378.32 μg/m³ for route 1, and 30,672 and 16,473 μg/m³ for route 2. The variance may be due to differences in traffic volume and requires further research. NO2 exposure during commuting was plotted at detailed timesteps, showing that peaks usually occurred while commuting, which consolidates the initial assumption about the severity of TRAP. To conclude, this paper contributes significantly to the understanding of air quality across schools in London through the new approach of capturing exposure along driving routes, confirming the severity of air pollution and underlining the need for policymakers to consider environmental sustainability in decision making to protect society's future pillars.
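The real-time exposure calculation described above can be sketched as follows. This is an illustrative Python sketch, not the authors' code: it approximates total exposure as the time-integral of measured NO2 concentration along a route, using purely hypothetical readings.

```python
def total_exposure(timestamps_min, concentrations_ugm3):
    """Trapezoidal time-integral of concentration (result in ug.min/m^3)."""
    total = 0.0
    for i in range(1, len(timestamps_min)):
        dt = timestamps_min[i] - timestamps_min[i - 1]
        mean_c = 0.5 * (concentrations_ugm3[i] + concentrations_ugm3[i - 1])
        total += dt * mean_c
    return total

# Hypothetical real-time NO2 readings along a driving route
t = [0, 5, 10, 15, 20]               # minutes into the commute
c = [40.0, 55.0, 90.0, 60.0, 45.0]   # NO2 concentration, ug/m^3

print(total_exposure(t, c))
```

Plotting the integrand against time, as the paper does, makes the commuting peaks visible directly.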

Keywords: air pollution, schools, pupils, congestion

Procedia PDF Downloads 109
334 Analysis of Delays during Initial Phase of Construction Projects and Mitigation Measures

Authors: Sunaitan Al Mutairi

Abstract:

A perfect start is a key factor for completing a project on time. This study examined the effects of delayed mobilization of resources during the initial phases of a project. The paper identifies and categorizes the delays occurring during the initial construction phase and presents their root cause analysis, with corrective and control measures, for Kuwait Oil Company oil and gas projects. A considerable percentage of the delays identified during project execution (from contract award to the end of the defects liability period) was attributed to mobilization and preliminary-activity delays. Data analysis demonstrated a significant increase in average project delay during the last five years compared with the previous period. Contractors experienced delays and issues during the initial phase, which resulted in slippages that progressively increased, leading to time and cost overruns. Delays not mitigated in time during the initial phase had a very high impact on project completion. Delay data for the past five years were analyzed using trend charts, scatter plots, process maps, box plots, the relative importance index, and Pareto charts. Construction of any project inside the Gathering Centers involves complex management of the workforce, materials, plant, machinery, new technologies, etc. Delay affects project completion and compromises the quality, schedule, and budget of project deliverables. Projects whose initial-phase and start-up construction activities were executed as planned showed only minor slippage in completion. These projects also benefited from a good working relationship between client and contractor, resulting in better project execution and management; in particular, contractors who were proactive in execution experienced minimal or no delays during the initial and construction periods.
Hence, a perfect start during the initial construction phase has a positive influence on project success. This paper studies each type of delay, with real examples supported by statistical results, and suggests mitigation measures. A detailed analysis was carried out with all stakeholders, based on the impact and occurrence of delays, to arrive at a practical and effective outcome for mitigating them. The key to improvement is proper control measures and periodic evaluation and audit to ensure that the mitigation measures are implemented. The focus of this research is to reduce the delays encountered during the initial construction phase of the project life cycle.
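One of the statistical tools listed above, the relative importance index (RII), can be sketched in Python. A common form is RII = ΣW / (A × N), where W are respondent ratings on a 1-to-A scale and N is the number of respondents; the delay causes and ratings below are invented for illustration and are not the study's data.

```python
def rii(ratings, scale_max=5):
    """Relative importance index: sum of ratings over the maximum possible."""
    return sum(ratings) / (scale_max * len(ratings))

# Hypothetical survey ratings (1 = least important, 5 = most important)
delay_causes = {
    "late mobilization of resources": [5, 4, 5, 4, 5],
    "permit approval delays":         [3, 4, 3, 3, 4],
    "material delivery delays":       [4, 4, 5, 3, 4],
}

# Rank causes by RII, most important first (the basis of a Pareto-style ranking)
ranked = sorted(delay_causes, key=lambda k: rii(delay_causes[k]), reverse=True)
for cause in ranked:
    print(f"{cause}: RII = {rii(delay_causes[cause]):.2f}")
```

Causes with the highest RII are the natural targets for the control measures the paper recommends.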

Keywords: construction activities delays, delay analysis for construction projects, mobilization delays, oil & gas projects delays

Procedia PDF Downloads 309
333 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints is inconsistent across bridge design codes, and only a few studies have addressed this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm, and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first model serves as a reference: it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. Nonlinear time history analysis is performed using artificial ground motion sets with peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g. Soil-structure interaction and P-Δ effects are also included in the analysis. Component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared; the system fragility curves are then obtained by combining the component fragility curves. Our results show that in the component fragility analysis, the reference bridge model exhibits a more severe vulnerability than the more sophisticated bridge models for all damage states.
In the system fragility analysis, the reference curves show a smaller damage probability at lower PGA levels for the first three damage states but a higher fragility than the other curves at larger PGA levels; for the fourth damage state, the reference curve has the smallest vulnerability. In both the component and system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. Nevertheless, the bridge model with the maximum clearance still produces the minimum pounding force effect.
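Fragility curves of the kind described above are commonly expressed as a lognormal cumulative distribution of PGA. A minimal sketch with invented parameters (the paper's actual curves are not reproduced here):

```python
import math

def fragility(pga_g, theta, beta):
    """P(damage state reached | PGA), lognormal form:
    Phi(ln(PGA / theta) / beta), theta = median capacity (g),
    beta = logarithmic standard deviation."""
    z = math.log(pga_g / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Hypothetical parameters for a 'moderate damage' state
theta, beta = 0.45, 0.5
for pga in [0.1, 0.3, 0.45, 0.7, 1.0]:
    print(f"PGA = {pga:.2f} g -> P(damage) = {fragility(pga, theta, beta):.3f}")
```

By construction, the probability is 0.5 at the median capacity and rises monotonically with PGA, which is the shape compared across the five bridge models in the study.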

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 425
332 The Connection Between the Semiotic Theatrical System and the Aesthetic Perception

Authors: Păcurar Diana Istina

Abstract:

The indissoluble link between aesthetics and semiotics, and the harmonization and semiotic understanding of the interactions between the viewer and the object viewed, form the basis of a practical demonstration of the importance of aesthetic perception within a theatre performance. The design of a theatre performance comprises several structures: some are art forms from the outset (i.e., the text), while others are simple, common objects (e.g., scenographic elements) which, when brought together, can trigger a certain aesthetic perception. The team involved in the performance delivers to the audience a series of auditory and visual signs with which the audience interacts. It is necessary to explain some notions about the physiological basis of how different types of stimuli are transformed at the level of the cerebral hemispheres. The cortex, considered the superior integration centre for external and internal stimuli, permanently processes the information received; yet even when that information arrives at a constant rate, the generated response is individualized and conditioned by a number of factors. Each changing situation represents a new opportunity for the viewer to cope with, developing feelings of varying intensity that influence the generation of meanings and, therefore, the management of interactions. In this sense, aesthetic perception depends on detecting the "correctness" of signs whose forms are associated with an aesthetic property; correctness and aesthetic properties can take positive or negative values. Evaluating the emotions that generate judgement, and implicitly aesthetic perception, whether visual or auditory, involves integrating three areas of interest: valence, arousal, and context control.
In this context, superior human cognitive processes (memory, interpretation, learning, the attribution of meanings, etc.) help trigger the mechanism of anticipation and, no less importantly, the identification of error. This ability to locate a short circuit in a series of successive events is fundamental to the formation of an aesthetic perception. Our main purpose in this research is to investigate the possible conditions under which aesthetic perception, and its minimum content, are generated by all these structures and, in particular, by interactions with forms not commonly considered aesthetic. To demonstrate the quantitative and qualitative importance of the categories of signs used to construct a code for reading a given message, and to emphasize the importance of the order in which these indices are used, we structured a mathematical analysis centred on the percentage of signs used in a theatre performance.
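The kind of quantitative tally the analysis rests on, the percentage share of each category of sign used in a performance, can be sketched as follows; the categories and counts below are purely illustrative, not the study's data.

```python
from collections import Counter

# Hypothetical inventory of signs observed in one performance, by category
signs = (["verbal"] * 120 + ["gestural"] * 45
         + ["scenographic"] * 60 + ["auditory"] * 75)

counts = Counter(signs)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {100 * n / total:.1f}%")
```

The resulting shares give a simple quantitative profile of the reading code a performance offers its audience.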

Keywords: semiology, aesthetics, theatre semiotics, theatre performance, structure, aesthetic perception

Procedia PDF Downloads 78
331 South African Multiple Deprivation-Concentration Index Quantiles Differentiated by Components of Success and Impediment to Tuberculosis Control Programme Using Mathematical Modelling in Rural O. R. Tambo District Health Facilities

Authors: Ntandazo Dlatu, Benjamin Longo-Mbenza, Andre Renzaho, Ruffin Appalata, Yolande Yvonne Valeria Matoumona Mavoungou, Mbenza Ben Longo, Kenneth Ekoru, Blaise Makoso, Gedeon Longo Longo

Abstract:

Background: The gap between the complexities of integrating tuberculosis (TB)/HIV control and evidence-based knowledge motivated this study. Its objective was to explore correlations between national TB management guidelines, multiple deprivation indices, quantiles, and the components and levels of the TB control programme, using mathematical modelling, in rural O.R. Tambo District health facilities, South Africa. Methods: The study used mixed secondary data analysis and cross-sectional analysis for the period 2009-2013 across O.R. Tambo District, Eastern Cape, South Africa, applying univariate/bivariate analysis, linear multiple regression models, and multivariate discriminant analysis. Health inequality indicators and components of impediment to the TB control programme were evaluated. Results: In total, 62,400 TB notification records were analyzed for the period 2009-2013. There were significant negative correlations between financial year expenditure (r = -0.894; P = 0.041), seropositive HIV status (r = -0.979; P = 0.004), population density (r = -0.881; P = 0.048) and the number of TB defaulters among all TB cases. Unsuccessful control of the TB management programme was shown through correlations between the numbers of new smear-positive PTB cases, new smear-positive TB defaulters, TB failures among all TB cases, the pulmonary TB case-finding index, and the deprivation-concentration-dispersion index. Successful TB programme control was shown through significant negative associations between declining numbers of deaths from HIV-TB co-infection, TB deaths among all TB cases, and the SMIAD gradient/deprivation-concentration-dispersion index. The multivariate linear model was summarized by an unadjusted R of 96%, an adjusted R² of 95%, a standard error of the estimate of 0.110, an R² change of 0.959, and a significance of the variance change of P = 0.004, predicting TB defaulters among all TB cases with the equation y = 8.558 - 0.979 × (number of HIV-seropositive cases).
After adjusting for confounding factors (the PTB case-finding index, new smear-positive TB defaulters, TB deaths among all TB cases, TB defaulters among all TB cases, and TB failures among all TB cases), HIV-TB deaths and new smear-positive PTB cases were identified by discriminant analysis as the most important, significant, and independent indicators discriminating the most deprived deprivation quintile from quintiles 2-5. Conclusion: Eliminating poverty-related conditions such as overcrowding, lack of sanitation, and the environment of the highest HIV burden might end the TB threat in O.R. Tambo District, Eastern Cape, South Africa. Furthermore, an ongoing, adequately budgeted, comprehensive, holistic, and collaborative initiative towards the Sustainable Development Goals (SDGs) is necessary for the complete elimination of TB in the impoverished O.R. Tambo District.
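The reported univariate model can be sketched directly from the coefficients quoted above; only the intercept and slope come from the abstract, while the input values below are invented for illustration.

```python
def predict_tb_defaulters(hiv_seropositive):
    """Reported regression: y = 8.558 - 0.979 * x,
    x = number of HIV-seropositive cases (units as in the study)."""
    return 8.558 - 0.979 * hiv_seropositive

# Hypothetical inputs showing the negative association the study reports
for x in [0, 2, 5]:
    print(f"x = {x}: predicted defaulters = {predict_tb_defaulters(x):.3f}")
```

The negative slope reproduces the study's key finding: higher HIV-seropositive counts were associated with fewer TB defaulters.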

Keywords: tuberculosis, HIV/AIDS, success, failure, control program, health inequalities, South Africa

Procedia PDF Downloads 154
330 Application of Forensic Entomology to Estimate the Post Mortem Interval

Authors: Meriem Taleb, Ghania Tail, Fatma Zohra Kara, Brahim Djedouani, T. Moussa

Abstract:

Forensic entomology has grown immensely as a discipline over the past thirty years. Its main purpose is to establish the post mortem interval, or PMI. From three days after death, insect evidence is often the most accurate, and sometimes the only, method of determining the time elapsed since death. This work presents the estimation of the PMI in an experiment testing the reliability of the accumulated degree days (ADD) method, and the application of this method to a real case. The study was conducted at the Laboratory of Entomology of the National Institute for Criminalistics and Criminology of the National Gendarmerie, Algeria. The domestic rabbit Oryctolagus cuniculus L. was selected as the animal model and was killed on 8 July 2012. Larvae were collected and reared to adulthood. The oviposition time was estimated by summing the average daily temperatures minus the minimum development temperature (which is specific to each species); the day on which the required sum is reached corresponds to the oviposition day. Weather data were obtained from the nearest meteorological station. After rearing was complete, three species emerged: Lucilia sericata, Chrysomya albiceps, and Sarcophaga africa. Chrysomya albiceps requires an accumulation of 186 °C; the adults emerged on 22 July 2012, and a value of 193.4 °C was reached on 9 August 2012. Lucilia sericata requires an accumulation of 207 °C; the adults emerged on 23 July 2012, and a value of 211.35 °C was reached on 9 August 2012. It should also be considered that oviposition may occur more than 12 hours after death. Thus, the obtained PMI agrees with the actual time of death. We illustrate the use of this method in the investigation of a decaying human body found on 3 March 2015 in Bechar, in the south-west of the Algerian desert. Maggots were collected and sent to the Laboratory of Entomology.
Lucilia sericata adults were identified on 24 March 2015 after emergence. A sum of 211.6 °C was reached on 1 March 2015, which corresponds to the estimated day of oviposition. Therefore, the estimated date of death is 1 March 2015 ± 24 hours. The PMI estimated by the accumulated degree days (ADD) method appears to be very precise. Entomological evidence should always be used in homicide investigations when the time of death cannot be determined by other methods.
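The accumulated degree-day back-calculation described above can be sketched as follows. This is a hedged illustration, not the laboratory's software: it walks backwards from the emergence date, accumulating (daily mean temperature − base temperature) until the species' required ADD is reached; that day is the estimated oviposition day. The 207 °C requirement for Lucilia sericata is taken from the abstract, while the temperature series and base temperature are hypothetical.

```python
def estimate_oviposition_index(daily_mean_temps, base_temp, required_add):
    """Return how many days before emergence the ADD requirement is met."""
    accumulated = 0.0
    # iterate from the day of emergence backwards in time
    for days_back, temp in enumerate(reversed(daily_mean_temps), start=1):
        accumulated += max(temp - base_temp, 0.0)  # cold days add nothing
        if accumulated >= required_add:
            return days_back
    return None  # not enough thermal accumulation in the record

# Hypothetical daily mean temperatures (deg C), oldest first, ending at emergence
temps = [26.0, 27.5, 25.0, 28.0, 29.5, 30.0, 28.5, 27.0, 26.5, 28.0,
         29.0, 30.5, 28.0, 27.5, 26.0]
base = 10.0        # illustrative base (minimum development) temperature
required = 207.0   # ADD reported above for Lucilia sericata

print(estimate_oviposition_index(temps, base, required))
```

Subtracting the returned number of days from the emergence date gives the estimated oviposition day, and hence a lower bound on the PMI.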

Keywords: forensic entomology, accumulated degree days, postmortem interval, diptera, Algeria

Procedia PDF Downloads 288
329 Farmers Willingness to Pay for Irrigated Maize Production in Rural Kenya

Authors: Dennis Otieno, Lilian Kirimi, Nicholas Odhiambo, Hillary Bii

Abstract:

Kenya is considered a middle-income country, yet it often does not meet household food security needs, especially in its northern and south-eastern parts, and approximately half of the population lives below the poverty line (CIA, 2012). Agriculture is the largest sector in the country, employing 80% of the population, who are thereby directly dependent on the sufficiency of its output. This makes efficient, easily accessible, and cheap agricultural practices important for improving food security. Maize is the prime staple food commodity in Kenya and represents a substantial share of people's nutritional intake. This study is based on questionnaire interviews, key informant interviews, and focus group discussions involving 220 small-scale Kenyan maize farmers. The study sites were Lower Kuja, Bunyala, Nandi, Lower Nzoia, Perkerra, Mwea, Bura, Hola, and Galana Kulalu. The questionnaire captured the farmers' use and perceived importance of irrigation services and irrigated maize production. Viability was evaluated using four indices, all of which were positive, with the NPV giving positive cash flows within at most 21 years for one season's output. The mean willingness to pay was KES 3,082, and willingness to pay increased with increasing irrigation premiums. The economic value of water was found to be greater than the willingness to pay, implying that irrigated maize production is sustainable. Farmers stated that viability was influenced by high output levels, good produce quality, choice of crop, availability of sufficient water, and enforcement; the last two factors had a positive influence, while the others had a negative effect on the viability of irrigated maize. A regression was run on the correlation between willingness to pay for irrigated maize production and scheme- and plot-level factors.
According to Liebig's law of the minimum and expansion path theory, farmers who already use inputs such as animal manure, hired labour, and chemical fertilizer should also have a demand for improved seeds. The regression showed that premiums and high yields have a positive effect on willingness to pay, while produce quality, efficient fertilizer use, and crop season have a negative effect.
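The NPV component of the viability assessment can be sketched as follows; all monetary figures (KES) and the discount rate are illustrative assumptions, not values from the study.

```python
def npv(rate, cash_flows):
    """Net present value; cash_flows[0] occurs at time 0, the rest one period apart."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical irrigated-maize investment: an up-front cost followed by
# ten seasons of net income, discounted at an assumed 10% per season
initial_investment = -50_000   # scheme connection and land preparation
seasonal_net_income = 9_000    # net income per season
flows = [initial_investment] + [seasonal_net_income] * 10

result = npv(0.10, flows)
print(f"NPV = KES {result:,.2f} ({'viable' if result > 0 else 'not viable'})")
```

A positive NPV under the chosen discount rate is the criterion the study uses, alongside the other three indices, to judge irrigated maize production viable.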

Keywords: maize, food security, profits, sustainability, willingness to pay

Procedia PDF Downloads 212
328 Inner and Outer School Contextual Factors Associated with Poor Performance of Grade 12 Students: A Case Study of an Underperforming High School in Mpumalanga, South Africa

Authors: Victoria L. Nkosi, Parvaneh Farhangpour

Abstract:

A Grade 12 certificate is often perceived as a passport to tertiary education and the minimum requirement for entering the world of work. In spite of its importance, many students in South Africa do not reach this milestone, and it is important to find out why so many still fail despite the transformation of the education system in the post-apartheid era. Given the complexity of education and its context, this study adopted a case study design to examine one historically underperforming high school in Bushbuckridge, Mpumalanga Province, South Africa, in 2013. The aim was to gain an understanding of the inner and outer school contextual factors associated with the high failure rate among Grade 12 students. Government documents and reports were consulted to identify factors in the district and the village surrounding the school, and a student survey was conducted to identify school, home, and student factors. A randomly sampled half of the Grade 12 student population (53 students) participated in the survey, and the quantitative data were analyzed using descriptive statistical methods. The findings showed that a host of factors is at play. The school is located in a village within a municipality that has been one of the three poorest in South Africa and has the lowest Grade 12 pass rate in Mpumalanga Province. Moreover, over half of the students come from single-parent families, 43% of parents are unemployed, and the majority have a low level of education. In addition, most families (83%) lack basic study materials such as a dictionary, books, tables, and chairs. A significant proportion of students (70%) are over-aged (19 years or older), and close to half (49%) are grade repeaters. The school itself lacks essential resources, namely computers, science laboratories, a library, and sufficient furniture and textbooks. Moreover, teaching and learning are negatively affected by teachers' occasional absenteeism, inadequate lesson preparation, and poor communication skills.
Overall, the continuously low performance of students in this school mirrors a vicious circle of multiple negative conditions present within and outside the school. The complexity of the factors associated with the underperformance of Grade 12 students calls for a multi-dimensional intervention by government and stakeholders. One important intervention should be the placement of over-aged students and grade repeaters in suitable educational institutions, for the benefit of the other students.

Keywords: inner context, outer context, over-aged students, vicious cycle

Procedia PDF Downloads 195
327 Fracture Behaviour of Functionally Graded Materials Using Graded Finite Elements

Authors: Mohamad Molavi Nojumi, Xiaodong Wang

Abstract:

In this research, the fracture behaviour of linear elastic, isotropic functionally graded materials (FGMs) is investigated using a modified finite element method (FEM). FGMs are advantageous because they enhance the bonding strength of two incompatible materials and reduce residual and thermal stresses. Ceramic/metal composites are a main type of FGM; since ceramics are brittle, there is a high possibility of cracks arising during fabrication or in-service loading, and damage analysis is therefore necessary for a safe and efficient design. FEM is a powerful numerical tool for analyzing complicated problems and is used here to investigate the fracture behaviour of FGMs. An accurate 9-node biquadratic quadrilateral graded element is proposed, in which the variation of material properties is accounted for at the element level. The stiffness matrix of the graded elements is obtained using the principle of minimum potential energy. The use of graded elements avoids the artificial sudden jump in material properties that arises when FGMs are modelled with traditional finite elements. Numerical results are verified against existing solutions. Various numerical simulations are carried out to model stationary crack problems in nonhomogeneous plates, with the material variation taken perpendicular or parallel to the crack line. Two special functions, linear and exponential, are used to model the material gradient, as these are the forms most discussed in the literature, and various crack lengths are considered. A major difference between the fracture behaviour of FGMs and that of homogeneous materials stems from the loss of material symmetry: for example, when the material gradation direction is normal to the crack line, even pure mode I loading produces coupled mode I and mode II fracture, originating from the shear induced in the model.
Therefore, the material variation must be modelled properly to capture the fracture behaviour of FGMs, especially when the material gradient index is high. Fracture properties such as the mode I and mode II stress intensity factors (SIFs), energy release rates, and field variables near the crack tip are investigated and compared with results obtained using conventional homogeneous elements. It is shown that graded elements provide higher accuracy with less effort than conventional homogeneous elements.
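The graded-element idea, evaluating the material gradient inside the element rather than assigning one constant property per element, can be illustrated with a deliberately simplified one-dimensional analogue (not the paper's 9-node 2-D element): a two-node bar whose Young's modulus varies within the element, integrated by Gauss quadrature. All numerical values are illustrative.

```python
import math

def graded_bar_stiffness(x1, x2, area, modulus_of):
    """2x2 stiffness matrix of a bar element with spatially varying E(x),
    k = integral of E(x) * A * B^2 dx, evaluated by 2-point Gauss quadrature."""
    length = x2 - x1
    b = 1.0 / length  # constant strain-displacement value for linear shape fns
    gauss = [(-1 / math.sqrt(3), 1.0), (1 / math.sqrt(3), 1.0)]  # (xi, weight)
    k = 0.0
    for xi, w in gauss:
        x = 0.5 * (x1 + x2) + 0.5 * length * xi  # map to physical coordinate
        k += w * modulus_of(x) * area * b * b * (0.5 * length)  # Jacobian = L/2
    return [[k, -k], [-k, k]]

# Exponential gradation E(x) = E0 * exp(a*x), one of the two forms cited above
E0, a = 200e9, 0.5
K = graded_bar_stiffness(0.0, 1.0, 1.0e-4, lambda x: E0 * math.exp(a * x))
print(K[0][0])
```

The sampled modulus at each Gauss point is what lets the gradient enter at the element level; a conventional homogeneous element would instead use a single E for the whole element, producing the property jumps the paper avoids.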

Keywords: finite element, fracture mechanics, functionally graded materials, graded element

Procedia PDF Downloads 165