Search results for: particle size distribution (PSD)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10800

1290 Effects of Cash Transfers Mitigation Impacts in the Face of Socioeconomic External Shocks: Evidence from Egypt

Authors: Basma Yassa

Abstract:

Evidence on cash transfers’ effectiveness in mitigating the impacts of macro and idiosyncratic shocks has been mixed and is mostly concentrated in Latin America, Sub-Saharan Africa, and South Asia, with very limited evidence from the MENA region. Yet conditional cash transfer schemes have been used continually, especially in Egypt, as the main social protection tool in response to recent socioeconomic crises and macro shocks. We use two panel datasets and one cross-sectional dataset to estimate the effectiveness of cash transfers as a shock-mitigation mechanism in the Egyptian context. In this paper, the results from the different models (a panel fixed effects model and a regression discontinuity design (RDD) model) confirm that micro and macro shocks lead to a significant decline in several household-level welfare outcomes and that Takaful cash transfers have a significant positive impact in mitigating these negative shock impacts, especially on households’ debt incidence, debt levels, and asset ownership, but not necessarily on food and non-food expenditure levels. The results indicate large, significant effects: the incidence of household debt fell by up to 12.4 percent and debt size by approximately 18 percent among Takaful beneficiaries compared to non-beneficiaries. Similar evidence is found for asset ownership, as the RDD model shows significant positive effects on total and productive asset ownership, although it failed to detect positive impacts on per capita food and non-food expenditures. Further extensions are in progress to compare these results with difference-in-differences (DID) estimates using the nationally representative ELMPS panel data (2018/2024 rounds). Finally, our initial analysis suggests that conditional cash transfers are effective in buffering negative shock impacts on certain welfare indicators even after the successive macroeconomic shocks of 2022 and 2023 in the Egyptian context.
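
As a rough illustration of the estimation strategy described above, the following sketch fits a two-way fixed effects regression of a household welfare outcome on a shock indicator interacted with Takaful receipt; the data file and variable names (hh_id, year, shock, takaful, debt_incidence) are hypothetical placeholders, not the authors' actual specification.

```python
# Minimal sketch of a panel fixed-effects shock-mitigation regression
# (hypothetical variable names; assumes a long-format household panel).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("household_panel.csv")  # columns: hh_id, year, debt_incidence, shock, takaful

# Household and year fixed effects via dummy absorption (C(...));
# the shock x takaful interaction captures the mitigation effect.
model = smf.ols(
    "debt_incidence ~ shock * takaful + C(hh_id) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["hh_id"]})

print(model.params[["shock", "takaful", "shock:takaful"]])
```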

Keywords: cash transfers, fixed effects, household welfare, household debt, micro shocks, regression discontinuity design

Procedia PDF Downloads 46
1289 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations

Authors: K. Al Ammari, B. G. Clarke

Abstract:

Soil improvement using the vibro stone column technique consists of two main parts: (1) the installed load-bearing columns of well-compacted, coarse-grained material and (2) the improvements to the surrounding soil due to vibro compaction. Extensive research work has been carried out over the last 20 years to understand the improvement in composite foundation performance due to the second part mentioned above. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or have considered these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still being used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study was performed using PLAXIS 2D AE to develop an axisymmetric model of a single stone column reinforced foundation and to quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by the lateral expansion, so the updated-mesh advanced option was used in the analysis. In this analysis, different degrees of stone column lateral expansion were simulated and numerically analyzed, and then the changes in the stress state, stiffness, settlement performance and bearing capacity were quantified. It was found that application of radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to the undrained conditions starts to dissipate immediately after the column installation is finished, allowing the horizontal stress to relax. Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouragingly, the results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground, so a design method should include this significant effect of the applied lateral displacement during stone column installation in simulation and numerical analysis.

Keywords: bearing capacity, design, installation, numerical analysis, settlement, stone column

Procedia PDF Downloads 374
1288 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on the line while it is in operation. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V, 50 Hz home network. The method is validated through simulations in MATLAB. The fault location method uses electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In the first step, the arc fault model is inserted at several positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generation at different distances. In the second step, a fault map trace is created using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line’s mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the discrete Fourier transform (FFT) of currents and voltages as well as the fault distance value. These parameters are then substituted into the Kirchhoff equations. In the third step, the same procedure described previously to calculate the signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear; in this step the fault distance is unknown. The iterative calculation from the Kirchhoff equations with stepped variations of the fault distance yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing simulated currents with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of the work are presented.
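
As a very rough sketch of the final localisation step only, and not of the paper's Kirchhoff-derived signature coefficients, the snippet below intersects a "measured" signature value (step 2) with the same signature evaluated over hypothetical fault distances (step 3); the signature function and all numerical values are placeholders.

```python
# Hypothetical illustration of locating a fault at the intersection of
# the measured-signature value (step 2) and the swept-distance curve (step 3).
import numpy as np

line_length = 49.0                                  # m, as in the studied indoor line
distances = np.linspace(0.5, line_length, 500)      # hypothetical fault positions

def signature(d):
    # Placeholder for the Kirchhoff-based signature coefficient; in the real
    # method this comes from the FFT of the currents/voltages at both line ends.
    return 1.2e-3 * d + 4.0e-3

measured = signature(18.0) + 1e-5 * np.random.randn()   # pretend measurement for a fault at 18 m
swept = signature(distances)                             # signature over hypothetical distances

estimate = distances[np.argmin(np.abs(swept - measured))]
print(f"Estimated fault distance: {estimate:.1f} m")
```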

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 137
1287 Quercetin Nanoparticles and Their Hypoglycemic Effect in a CD1 Mouse Model with Type 2 Diabetes Induced by Streptozotocin and a High-Fat and High-Sugar Diet

Authors: Adriana Garcia-Gurrola, Carlos Adrian Peña Natividad, Ana Laura Martinez Martinez, Alberto Abraham Escobar Puentes, Estefania Ochoa Ruiz, Aracely Serrano Medina, Abraham Wall Medrano, Simon Yobanny Reyes Lopez

Abstract:

Type 2 diabetes mellitus (T2DM) is a metabolic disease characterized by elevated blood glucose levels. Quercetin is a natural flavonoid with a hypoglycemic effect, but reported data are inconsistent, mainly due to the structural instability and low solubility of quercetin. Nanoencapsulation is a distinct strategy to overcome these intrinsic limitations. Therefore, this work aims to develop a quercetin nano-formulation based on biopolymeric starch nanoparticles to enhance the release and hypoglycemic effect of quercetin in a T2DM-induced mouse model. Starch-quercetin nanoparticles were synthesized using high-intensity ultrasonication, and structural and colloidal properties were determined by FTIR and DLS. For in vivo studies, male CD1 mice (n=25) were divided into five groups (n=5 each). T2DM was induced using a high-fat and high-sugar diet for 32 weeks and streptozotocin injection. Group 1 consisted of healthy mice fed a normal diet with water ad libitum; Group 2 consisted of diabetic mice treated with saline solution; Group 3 of diabetic mice treated with glibenclamide; Group 4 of diabetic mice treated with empty nanoparticles; and Group 5 of diabetic mice treated with quercetin nanoparticles. Quercetin nanoparticles had a hydrodynamic size of 232 ± 88.45 nm, a PDI of 0.310 ± 0.04, and a zeta potential of -4 ± 0.85 mV. The encapsulation efficiency of the nanoparticles was 58 ± 3.33%. No significant differences (p > 0.05) were observed in biochemical parameters (lipids, insulin, and C-peptide). Groups 3 and 5 showed a similar hypoglycemic effect, but quercetin nanoparticles showed a longer-lasting effect. Histopathological studies reveal that the T2DM mouse groups showed degenerated and fatty liver tissue; however, the group treated with quercetin nanoparticles showed liver tissue similar to that of the healthy group. These results demonstrate that quercetin nano-formulations based on starch nanoparticles are effective alternatives with hypoglycemic effects.

Keywords: quercetin, type 2 diabetes mellitus, in vivo study, nanoparticles

Procedia PDF Downloads 34
1286 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) built on low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. For scene classification, scenes contain scattered objects that differ in size, category, layout, number and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit the object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, which produces dense local activations. By analyzing the performance of the different CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable since the objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and these representations are then merged into a single vector using a post-processing step called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear support vector machine (SVM), which is a simple yet efficient way to classify the scene categories. Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the accuracy from 74.03% to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which demonstrates that the representation can be applied to other visual recognition tasks.
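
The sketch below illustrates, under simplifying assumptions, the encoding pipeline described in the abstract: local descriptors are aggregated into a Fisher Vector (first- and second-order statistics under a diagonal GMM), power- and L2-normalised, and classified with a linear SVM; the random arrays stand in for real multi-scale CNN activations.

```python
# Hypothetical Fisher Vector encoding of local CNN descriptors followed by a linear SVM.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
D, K = 64, 8                       # descriptor dimension, number of GMM components
gmm = GaussianMixture(n_components=K, covariance_type="diag").fit(rng.normal(size=(5000, D)))

def fisher_vector(X):
    """First- and second-order FV statistics for one image's descriptors X (N x D)."""
    q = gmm.predict_proba(X)                                 # soft assignments (N x K)
    mu, sig = gmm.means_, np.sqrt(gmm.covariances_)
    parts = []
    for k in range(K):
        diff = (X - mu[k]) / sig[k]
        parts.append(q[:, [k]].T @ diff / X.shape[0])          # 1st-order statistics
        parts.append(q[:, [k]].T @ (diff**2 - 1) / X.shape[0]) # 2nd-order statistics
    fv = np.concatenate([p.ravel() for p in parts])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))                     # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)                   # L2 normalisation

# Placeholder "images": each is a bag of local descriptors from one scale.
X_train = np.stack([fisher_vector(rng.normal(size=(200, D))) for _ in range(40)])
y_train = rng.integers(0, 2, size=40)
clf = LinearSVC().fit(X_train, y_train)                        # scene classifier
```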

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 331
1285 Influence of Smoking on Fine and Ultrafine Air Pollution PM in Their Pulmonary Genetic and Epigenetic Toxicity

Authors: Y. Landkocz, C. Lepers, P.J. Martin, B. Fougère, F. Roy Saint-Georges, A. Verdin, F. Cazier, F. Ledoux, D. Courcot, F. Sichel, P. Gosset, P. Shirali, S. Billet

Abstract:

In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and fine particles as carcinogenic to humans. Causal relationships exist between elevated ambient levels of airborne particles and increases in mortality and morbidity, including pulmonary diseases such as lung cancer. However, due to the double complexity of physicochemical Particulate Matter (PM) properties and tumor mechanistic processes, the mechanisms of action remain not fully elucidated. Furthermore, because of several properties common to air pollution PM and tobacco smoke, such as the same route of exposure and similar chemical composition, potential mechanisms of synergy could exist; smoking could therefore be an aggravating factor of the particles’ toxicity. In order to identify some mechanisms of action of particles according to their size, two PM samples were collected, PM0.03-2.5 and PM0.33-2.5, in the urban-industrial area of Dunkerque. The overall cytotoxicity of the fine particles was determined on human bronchial cells (BEAS-2B). The toxicological study then focused on the metabolic activation of the organic compounds coated onto PM and on some genetic and epigenetic changes induced in a co-culture model of BEAS-2B cells and alveolar macrophages isolated from bronchoalveolar lavages performed in smokers and non-smokers. The results showed (i) the contribution of the ultrafine fraction of atmospheric particles to genotoxic (e.g., DNA double-strand breaks) and epigenetic mechanisms (e.g., promoter methylation) involved in tumor processes, and (ii) the influence of smoking on the cellular response. Three main conclusions can be drawn. First, our results showed the ability of the particles to induce deleterious effects potentially involved in the initiation and promotion stages of carcinogenesis. Second, smoking affects the nature of the induced genotoxic effects. Finally, the in vitro cell model developed here, using bronchial epithelial cells and alveolar macrophages, takes into account quite realistically some of the cell interactions existing in the lung.

Keywords: air pollution, fine and ultrafine particles, genotoxic and epigenetic alterations, smoking

Procedia PDF Downloads 347
1284 Effect of Non-metallic Inclusion from the Continuous Casting Process on the Multi-Stage Forging Process and the Tensile Strength of the Bolt: Case Study

Authors: Tomasz Dubiel, Tadeusz Balawender, Miroslaw Osetek

Abstract:

The paper presents the influence of non-metallic inclusions on the multi-stage forging process and the mechanical properties of the dodecagon socket bolt used in the automotive industry. The detected metallurgical defect was so large that it directly influenced the mechanical properties of the bolt and resulted in failure to meet the requirements of the mechanical property class. In order to assess the defect, X-ray and metallographic examinations of the defective bolt were performed, revealing an exogenous non-metallic inclusion. The size of the defect on the cross-section was 0.531 mm in width and 1.523 mm in length; the defect was continuous along the entire axis of the bolt. In the analysis, an FEM simulation of the multi-stage forging process was designed, taking into account a non-metallic inclusion parallel to the sample axis, reflecting the studied case. The process of defect propagation due to material upset in the head area was analyzed, and the final forging stage, which shapes the dodecagonal socket and fills the flange area, was studied in particular. The defect was observed to significantly reduce the effective cross-section as it expanded perpendicular to the bolt axis. The mechanical properties of products with and without the defect were analyzed. In the first step, the hardness test confirmed that the value required for mechanical property class 8.8 was obtained for both bolt types. In the second step, the bolts were subjected to a static tensile test. The bolts without the defect gave a positive result, while all 10 bolts with the defect gave a negative result, achieving a tensile strength below the requirements. The tensile strength tests were confirmed by metallographic tests and by FEM simulation with the inclusion spread perpendicular to the axis in the head area. The bolts were damaged directly under the bolt head, which is inconsistent with the requirements of ISO 898-1. It has been shown that non-metallic inclusions oriented along the axis of the bolt can directly cause loss of functionality, and such defects should be detected even before assembly into the machine element.

Keywords: continuous casting, multi-stage forging, non-metallic inclusion, upset bolt head

Procedia PDF Downloads 155
1283 Knowledge Creation and Diffusion Dynamics under Stable and Turbulent Environment for Organizational Performance Optimization

Authors: Jessica Gu, Yu Chen

Abstract:

Knowledge Management (KM) is undoubtedly crucial to organizational value creation, learning, and adaptation. Although the rapidly growing KM domain has been fueled with full-fledged methodologies and technologies, studies on KM evolution that bridge organizational performance and adaptation to the organizational environment are still rarely attempted. In particular, the creation (or generation) and diffusion (or sharing/exchange) of knowledge are among an organization’s primary concerns from a problem-solving perspective; however, the optimal distribution of knowledge creation and diffusion efforts is still unknown to knowledge workers. This research proposes an agent-based model of knowledge creation and diffusion in an organization, aiming at elucidating how intertwining knowledge flows at the microscopic level lead to optimized organizational performance at the macroscopic level through evolution, and exploring what exogenous interventions by the policy maker and endogenous adjustments by the knowledge workers can better cope with different environmental conditions. With the developed model, a series of simulation experiments is conducted. Both long-term steady-state and time-dependent developmental results are obtained on organizational performance, network and structure, social interaction and learning among individuals, knowledge audit and stocktaking, and the likelihood of knowledge workers choosing knowledge creation versus diffusion. One of the interesting findings reveals a non-monotonic pattern of organizational performance under a turbulent environment and a monotonic pattern under a stable environment. Hence, whether the environmental condition is turbulent or stable, the most suitable exogenous KM policy and endogenous knowledge creation and diffusion adjustments can be identified for achieving optimized organizational performance. Additional influential variables are further discussed, and future work directions are finally elaborated. The proposed agent-based model generates evidence on how knowledge workers strategically allocate effort between knowledge creation and diffusion, how bottom-up interactions among individuals lead to emergent structure and optimized performance, and how environmental conditions bring challenges to the organization system. Meanwhile, it serves as a roadmap and offers macro and long-term insights to policy makers without interrupting real organizational operations, incurring huge overhead costs, or introducing undesired panic among employees.
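
A minimal, hypothetical sketch of such an agent-based model is given below; the payoffs, shock rule, and performance proxy are illustrative placeholders rather than the authors' specification.

```python
# Minimal, hypothetical agent-based model of knowledge creation vs. diffusion.
import random

class Worker:
    def __init__(self, p_create):
        self.knowledge = 1.0
        self.p_create = p_create                     # propensity to create rather than diffuse

    def step(self, peers):
        if random.random() < self.p_create:
            self.knowledge += 0.1                    # knowledge creation
        else:
            peer = random.choice(peers)
            peer.knowledge += 0.05 * self.knowledge  # knowledge diffusion to a random peer

def simulate(n_workers=50, steps=200, p_create=0.5, turbulence=0.0):
    workers = [Worker(p_create) for _ in range(n_workers)]
    for _ in range(steps):
        if turbulence and random.random() < turbulence:
            for w in workers:                        # environmental shock erodes knowledge
                w.knowledge *= 0.9
        for w in workers:
            w.step([p for p in workers if p is not w])
    return sum(w.knowledge for w in workers)         # organizational performance proxy

print(simulate(turbulence=0.0), simulate(turbulence=0.2))  # stable vs. turbulent environment
```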

Keywords: knowledge creation, knowledge diffusion, agent-based modeling, organizational performance, decision making evolution

Procedia PDF Downloads 241
1282 The Effect of Cooperative Learning on Academic Achievement of Grade Nine Students in Mathematics: The Case of Mettu Secondary and Preparatory School

Authors: Diriba Gemechu, Lamessa Abebe

Abstract:

The aim of this study was to examine the effect of the cooperative learning method on students’ academic achievement and achievement level, compared with the usual method, in teaching different topics of mathematics. The study also examines the perceptions of students towards cooperative learning. Cooperative learning is the instructional strategy in which pairs or small groups of students with different levels of ability work together to accomplish a shared goal. The aim of this cooperation is for students to maximize their own and each other’s learning, with members striving for joint benefit; the teacher’s role changes from ‘sage on the stage’ to ‘guide on the side’. Owing to these influential aspects, cooperative learning is among the most prevalent teaching-learning techniques in the modern world. Therefore, the study was conducted in order to examine the effect of cooperative learning on the academic achievement of grade 9 students in mathematics in the case of Mettu Secondary and Preparatory School. Two sections were randomly selected, with one section randomly serving as the experimental group and the other as the comparison group. The data gathering instruments were achievement tests and questionnaires. The STAD (Student Teams-Achievement Divisions) method of cooperative learning was provided to the experimental group, while the usual method was used in the comparison group. The experiment lasted for one semester. To determine the effect of cooperative learning on the students’ academic achievement, the significance of the difference between the scores of the groups was tested at the 0.05 level by applying a t-test, and the effect size was calculated to gauge the strength of the treatment. The students’ perceptions of the method were analyzed using percentages of the questionnaire responses. During data analysis, each group was divided into high and low achievers on the basis of their previous mathematics results. Data analysis revealed that the experimental and comparison groups were almost equal in mathematics at the beginning of the experiment, and that the experimental group scored significantly higher than the comparison group on the posttest. Additionally, the comparison of mean posttest scores of high achievers indicates a significant difference between the two groups; the same is true for the low-achieving students of both groups on the posttest. Hence, the results of the study indicate the effectiveness of the method for mathematics topics compared to the usual method of teaching.
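
For illustration, the snippet below runs the kind of two-sample t-test and effect-size (Cohen's d) calculation described above on made-up posttest scores; the numbers are placeholders, not the study's data.

```python
# Illustrative two-sample t-test and Cohen's d for posttest scores (placeholder data).
import numpy as np
from scipy import stats

experimental = np.array([68, 72, 75, 61, 80, 66, 74, 70, 77, 69])
comparison   = np.array([60, 58, 65, 55, 63, 59, 62, 61, 57, 64])

t, p = stats.ttest_ind(experimental, comparison, equal_var=False)  # Welch's t-test

pooled_sd = np.sqrt((experimental.var(ddof=1) + comparison.var(ddof=1)) / 2)
cohens_d = (experimental.mean() - comparison.mean()) / pooled_sd   # effect size

print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {cohens_d:.2f}")
```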

Keywords: academic achievement, comparison group, cooperative learning, experimental group

Procedia PDF Downloads 246
1281 Spatial Analysis of the Socio-Environmental Vulnerability in Medium-Sized Cities: Case Study of Municipality of Caraguatatuba SP-Brazil

Authors: Katia C. Bortoletto, Maria Isabel C. de Freitas, Rodrigo B. N. de Oliveira

Abstract:

Environmental vulnerability studies are essential for prioritizing actions to reduce disaster risk. The aim of this study is to analyze the socio-environmental vulnerability obtained through a census survey, followed by both a statistical analysis (PCA in IBM SPSS) and a spatial analysis in GIS (ArcGIS/ESRI), taking as a case study the Municipality of Caraguatatuba-SP, Brazil. In the analysis of the municipal development plan, emphasis was given to the Special Zone of Social Interest (ZEIS), the Urban Expansion Zone (ZEU) and the Environmental Protection Zone (ZPA). For the mapping of the social and environmental vulnerabilities of the study area, the exposure of people (criticality) and of the place (support capacity) to disaster risk was obtained from the 2010 Census of the Brazilian Institute of Geography and Statistics (IBGE). For criticality, the most influential variables were related to literate persons responsible for the household, literate persons aged 5 years or more, persons aged 60 years or more, and the income of the person responsible for the household. In the support capacity analysis, the predominant influences were good household infrastructure in districts with low population density and, conversely, the presence of neighborhoods with little urban infrastructure and inadequate housing. The results of the comparative analysis show that the areas in the high and very high vulnerability classes coincide with the ZEIS and ZPA zones, which include areas occupied by low-income population, presence of children and young people, irregular occupations, and land suitable for urbanization but underutilized. The presence of urban expansion zones (ZEU) in areas of high to very high socio-environmental vulnerability reflects inadequate use of urban land in relation to the spatial distribution of the population and the territorial infrastructure, which favors an increase in disaster risk. It can be concluded that the study allowed observation of the convergence between the vulnerability analysis and the areas classified in the urban zoning. The occupation of areas unsuitable for housing due to their risk characteristics was confirmed, leading to the conclusion that the methodologies applied are agile instruments to support actions for disaster risk reduction.
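
A minimal sketch of the PCA step on standardized census indicators per census sector is shown below; the file and indicator names are hypothetical placeholders for the criticality and support-capacity variables actually used.

```python
# Hypothetical PCA of standardized census indicators per census sector;
# the resulting component scores could then be mapped in a GIS.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

census = pd.read_csv("census_sectors.csv")          # one row per census sector (placeholder file)
indicators = ["literacy_head", "pct_over60", "income_head",
              "sewage_coverage", "water_supply", "pop_density"]

X = StandardScaler().fit_transform(census[indicators])
pca = PCA(n_components=2).fit(X)

scores = pca.transform(X)
census["vuln_pc1"], census["vuln_pc2"] = scores[:, 0], scores[:, 1]   # vulnerability scores
print(pca.explained_variance_ratio_)
```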

Keywords: socio-environmental vulnerability, urban zoning, disaster risk reduction, methodologies

Procedia PDF Downloads 298
1280 Study of Motion of Impurity Ions in Poly(Vinylidene Fluoride) from View Point of Microstructure of Polymer Solid

Authors: Yuichi Anada

Abstract:

The electrical properties of a polymer solid are characterized by dielectric relaxation phenomena. The complex permittivity depends strongly on the frequency of the external stimulation over the broad frequency range from 0.1 mHz to 10 GHz. The complex-permittivity dispersion gives us a lot of useful information about the molecular motion of polymers and the structure of polymer aggregates. However, the large dispersion of permittivity at low frequencies due to the DC conduction of impurity ions often obscures the dielectric relaxation in a polymer solid. In experimental investigations, many researchers have long tried to remove the DC conduction experimentally or analytically. Our laboratory, on the other hand, chose another way of research for this problem based on a reversal in thinking: to use the impurity ions responsible for the DC conduction as a probe to detect the motion of polymer molecules and to investigate the structure of polymer aggregates. In addition to the complex permittivity, the electric modulus and the conductivity relaxation time are strong tools for investigating the ionic motion in DC conduction. In the non-crystalline part of melt-crystallized polymers, free spaces of inhomogeneous size exist between crystallites. As the impurity ions exist in the non-crystalline part and move through these inhomogeneous free spaces, the motion of the ions reflects the microstructure of the non-crystalline part. The ionic motion of impurity ions in poly(vinylidene fluoride) (PVDF) is investigated in this study. The frequency dependence of the loss permittivity of PVDF shows the characteristic of direct current (DC) conduction below 1 kHz at 435 K. The electric modulus-frequency curve shows a dispersion with a single conductivity relaxation time, i.e., a Debye-type dispersion. The conductivity relaxation time analyzed from this curve is 0.00003 s at 435 K. From the plot of the conductivity relaxation time of PVDF together with other polymers against permittivity, it was found that there are two groups of polymers: one characterized by a small conductivity relaxation time and large permittivity, and the other by a large conductivity relaxation time and small permittivity.
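
For illustration, the snippet below computes the electric modulus M* = 1/ε* from a synthetic permittivity spectrum with a DC-conduction term and extracts the conductivity relaxation time from the M'' peak; the parameter values are placeholders, not the measured PVDF data.

```python
# Electric modulus M* = 1/eps* and conductivity relaxation time from the M'' peak
# (synthetic permittivity spectrum with a DC-conduction loss term as a stand-in).
import numpy as np

f = np.logspace(-1, 6, 400)                 # frequency sweep, Hz
omega = 2 * np.pi * f

eps0, eps_inf, sigma = 8.854e-12, 10.0, 3e-9          # vacuum permittivity, eps', DC conductivity
eps_complex = eps_inf - 1j * sigma / (omega * eps0)   # eps* = eps' - j*sigma/(omega*eps0)

M = 1.0 / eps_complex                       # electric modulus M* = M' + j M''
f_peak = f[np.argmax(M.imag)]               # frequency of the M'' maximum
tau_sigma = 1.0 / (2 * np.pi * f_peak)      # conductivity relaxation time
print(f"M'' peak at {f_peak:.3g} Hz  ->  tau_sigma = {tau_sigma:.3g} s")
```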

Keywords: conductivity relaxation time, electric modulus, ionic motion, permittivity, poly(vinylidene fluoride), DC conduction

Procedia PDF Downloads 170
1279 Time and Energy Saving Kitchen Layout

Authors: Poonam Magu, Kumud Khanna, Premavathy Seetharaman

Abstract:

The two important resources of any worker performing any type of work at any workplace are time and energy. These are important inputs of the worker and need to be utilised in the best possible manner. The kitchen is an important workplace where the homemaker performs many essential activities, and its layout should be so designed that optimum use of her resources can be achieved. Ideally, the shape of the kitchen, as determined by the physical space enclosed by the four walls, can be square, rectangular or irregular. But it is the shape of the arrangement of the counter that one normally refers to when talking of the layout of the kitchen. The arrangement can be along a single wall, along two opposite walls, L-shaped, U-shaped or even an island. A study was conducted in 50 kitchens belonging to middle income group families. These were DDA-built kitchens located in North, South, East and West Delhi. The study was conducted in three phases. In the first phase, 510 non-working homemakers were interviewed. Data related to the personal characteristics of the homemakers were collected, along with additional information about the kitchens: the size, shape, etc. The homemakers were also questioned about various aspects related to meal preparation: the people performing the task, the number of items cooked, the areas used for meal preparation, etc. In the second phase, a suitable technique, called the Path Process Chart, was designed for conducting time and motion study in the kitchen while a meal was being prepared. The final phase was carried out in 50 kitchens. The criterion for selection was that all items for a meal should be cooked at the same time. All the meals were cooked by the homemakers in their own kitchens, and the meal preparation was studied using the Path Process Chart technique. The data collected were analysed and conclusions drawn. It was found that of all the shapes, it was the kitchen with the L-shaped arrangement in which, on average, a homemaker spent the minimum time on meal preparation and also travelled the minimum distance. Thus, the average distance travelled in an L-shaped layout was 131.1 m as compared to 181.2 m in a U-shaped layout. Similarly, 48 minutes was the average time spent on meal preparation in an L-shaped layout as compared to 53 minutes in a U-shaped layout. Thus, the L-shaped layout was more time- and energy-saving than the U-shaped one.

Keywords: kitchen layout, meal preparation, path process chart technique, workplace

Procedia PDF Downloads 206
1278 Improving the Dielectric Strength of Transformer Oil for High Health Index: An FEM Based Approach Using Nanofluids

Authors: Fatima Khurshid, Noor Ul Ain, Syed Abdul Rehman Kashif, Zainab Riaz, Abdullah Usman Khan, Muhammad Imran

Abstract:

As the world is moving towards extra-high voltage (EHV) and ultra-high voltage (UHV) power systems, the performance requirements of power transformers are becoming crucial to system reliability and security. With transformers being an essential component of a power system, a low transformer health index poses greater risks for safe and reliable operation. Therefore, to meet the rising demands of the power system and transformer performance, researchers are being prompted to provide solutions for enhanced thermal and electrical properties of transformers. This paper proposes an approach to improve the health index of a transformer by using nanotechnology in conjunction with biodegradable oils. Vegetable oils can serve as potential dielectric fluid alternatives to conventional mineral oils, owing to their numerous inherent benefits, namely higher fire and flash points and being environment-friendly in nature. Moreover, the addition of nanoparticles to the dielectric fluid further serves to improve the dielectric strength of the insulation medium. In this research, using the finite element method (FEM) in the COMSOL Multiphysics environment and a 2D space dimension, three different oil samples have been modelled, and the electric field distribution is computed for each sample at various electric potentials, i.e., 90 kV, 100 kV, 150 kV, and 200 kV. Furthermore, each sample has been modified with the addition of nanoparticles of different radii (50 nm and 100 nm) and at different interparticle distances (5 mm and 10 mm), considered at an instant of time. The nanoparticles used are non-conductive and have been modelled as alumina (Al₂O₃). The geometry has been modelled according to IEC standard 60897, with a standard electrode gap distance of 25 mm. For an input supply voltage of 100 kV, the maximum electric field stresses obtained for the samples of synthetic vegetable oil, olive oil, and mineral oil are 5.08×10⁶ V/m, 5.11×10⁶ V/m and 5.62×10⁶ V/m, respectively. It is observed that, for the unmodified samples, the vegetable oils have a greater dielectric strength than the conventionally used mineral oil because of their higher flash points and higher values of relative permittivity. Also, for the modified samples, the addition of nanoparticles inhibits streamer propagation inside the dielectric medium and hence serves to improve the dielectric properties of the medium.

Keywords: dielectric strength, finite element method, health index, nanotechnology, streamer propagation

Procedia PDF Downloads 141
1277 Assessment of Incomplete Childhood Immunization Determinants in Ethiopia: A Nationwide Multilevel Study

Authors: Mastewal Endeshaw Getnet

Abstract:

Immunization is one of the most cost-effective and extensively adopted public health strategies for preventing child disability and mortality. The Expanded Program on Immunization (EPI) was launched in 1974 with the goal of providing life-saving vaccines to all children worldwide, building on the success of the global smallpox eradication program. According to a World Health Organization report, all countries should have achieved 90% vaccination coverage by 2020, yet many developing countries have still not achieved this goal. Ethiopia is one of Africa's developing countries. The Ethiopian Ministry of Health (MoH) launched the EPI program in 1980, with the goal of achieving 90% coverage among children under the age of 1 year by 1990. Among children aged 12-23 months, complete immunization coverage was 47% according to the Ethiopian Demographic and Health Survey (EDHS) 2019 report. The coverage varies by administrative region, ranging from 21% in the Afar region to 89% in the Amhara region of Ethiopia. Therefore, identifying risk factors for incomplete immunization among children is a key challenge, particularly in Ethiopia, which has large geographical diversity and a projected population of 119.96 million in 2022. Despite being critical and challenging, this issue remains open and has not yet been fully investigated. A few previous studies have recently assessed the determinants of incomplete childhood immunization; however, the majority were cross-sectional surveys that assessed only EPI coverage. Motivated by the above, this study focuses on investigating the determinants associated with incomplete immunization among Ethiopian children in order to help raise full immunization coverage. Moreover, we consider both individual immunization and service performance-related factors to investigate the determinants of incomplete immunization. Consequently, we adopt an ecological model in this study: individual and environmental factors are combined in the ecological model, which provides a multilevel framework for exploring the different determinants associated with health behaviors. The Ethiopian Demographic and Health Survey will be used as the source of data from 2021 to achieve the objective of this study. The findings of this study will be useful to the Ethiopian government and other public health institutes in improving childhood immunization coverage based on the identified risk determinants.

Keywords: incomplete immunization, children, Ethiopia, ecological model

Procedia PDF Downloads 41
1276 Beyond the “Breakdown” of Karman Vortex Street

Authors: Ajith Kumar S., Sankaran Namboothiri, Sankrish J., SarathKumar S., S. Anil Lal

Abstract:

A numerical analysis of flow over a heated circular cylinder is presented in this paper. The governing equations (the Navier-Stokes and energy equations within the Boussinesq approximation, together with the continuity equation) are solved using a hybrid FEM-FVM technique. The density gradient created by heating the cylinder induces a buoyancy force opposite to the direction of the acceleration due to gravity, g. In the present work, the flow direction and the direction of the buoyancy force are the same (vertical flow configuration), so that the buoyancy force accelerates the mean flow past the cylinder. The relative dominance of the buoyancy force over the inertia force is characterized by the Richardson number (Ri), which is one of the parameters governing the flow dynamics and heat transfer in this analysis. It is well known that above a certain value of the Reynolds number, Re (the ratio of inertia forces to viscous forces), unsteady von Karman vortices can be seen shedding behind the cylinder. These shedding wake patterns can be seriously altered by heating or cooling the cylinder. The non-dimensional shedding frequency, the Strouhal number, is found to increase as Ri increases, and the aerodynamic force coefficients CL and CD are observed to change their values. In the present vertical flow configuration, as Ri increases, the shedding frequency increases and then suddenly drops to zero at a critical value of the Richardson number; the unsteady vortices turn into steady standing recirculation bubbles behind the cylinder beyond this critical Richardson number. This phenomenon is well known in the literature as the "breakdown of the Karman vortex street". It is interesting to examine the flow structures on further increase in the Richardson number. On further heating of the cylinder surface, the size of the recirculation bubble decreases without losing its symmetry about the horizontal axis passing through the center of the cylinder, and the separation angle is found to decrease with Ri. Finally, we observed a second critical Richardson number, after which the flow is attached to the cylinder surface without any wake behind it. The flow structures are then symmetrical not only about the horizontal axis but also about the vertical axis passing through the center of the cylinder. At this stage, there is a single plume emanating from the rear stagnation point of the cylinder. We also observed that the transition of the plume is a strong function of the Richardson number.

Keywords: drag reduction, flow over circular cylinder, flow control, mixed convection flow, vortex shedding, vortex breakdown

Procedia PDF Downloads 404
1275 Early Age Behavior of Wind Turbine Gravity Foundations

Authors: Janet Modu, Jean-Francois Georgin, Laurent Briancon, Eric Antoinet

Abstract:

The current practice during the repowering phase of wind turbines is deconstruction of existing foundations and construction of new foundations to accept larger wind loads, or once the foundations have reached the end of their service lives. The ongoing research project FUI25 FEDRE (Fondations d’Eoliennes Durables et REpowering) therefore serves to propose scalable wind turbine foundation designs that allow reuse of the existing foundations. To undertake this research, numerical models and laboratory-scale models are currently being utilized and implemented in the GEOMAS laboratory at INSA Lyon, following instrumentation of a reference wind turbine situated in the northern part of France. Sensors placed within both the foundation and the underlying soil monitor the evolution of stresses from the foundation’s early age to stresses during service. The results from the instrumentation form the basis of validation for both the laboratory and numerical work conducted throughout the project. The study currently focuses on the effect of the coupled thermo-hydro-mechanical-chemical (THMC) mechanisms that induce stress during the early age of the reinforced concrete foundation, and on scale factor considerations in the replication of the reference wind turbine foundation at laboratory scale. Using THMC 3D models in the COMSOL Multiphysics software, the numerical analyses performed on both the laboratory-scale and the full-scale foundations simulate the thermal deformation, hydration, shrinkage (desiccation and autogenous) and creep so as to predict the initial damage caused by internal processes during concrete setting and hardening. Results show a prominent effect of early age properties on the damage potential in full-scale wind turbine foundations. However, a prediction of the damage potential at laboratory scale shows significant differences in early age stresses in comparison to the full-scale model, depending on the spatial position in the foundation. In addition to the well-known size effect phenomenon, these differences may contribute to inaccuracies encountered when predicting ultimate deformations of the on-site foundation using laboratory-scale models.

Keywords: cement hydration, early age behavior, reinforced concrete, shrinkage, THMC 3D models, wind turbines

Procedia PDF Downloads 175
1274 Evaluating the Characteristics of Paediatric Accidental Poisonings

Authors: Grace Fangmin Tan, Elaine Yiling Tay, Elizabeth Huiwen Tham, Andrea Wei Ching Yeo

Abstract:

Background: While accidental poisonings in children may seem unavoidable, knowledge of the circumstances surrounding such incidents and identification of risk factors are important for the development of secondary prevention strategies. Some risk factors include the age of the child, lack of adequate supervision and improper storage of substances. The aim of this study is to assess the risk factors and circumstances influencing outcomes in these children. Methodology: A retrospective medical record review of all accidental poisoning cases presenting to the Children’s Emergency at National University Hospital (NUH), Singapore between January 2014 and December 2015 was conducted. Information on demographics, poisoning circumstances and clinical outcomes was collected. Results: Ninety-nine of a total of 186 poisoning cases were accidental ingestions, with a mean age of 4.7 years (range 0.4 to 18.3 years). The gender distribution was roughly equal, with 52 (52.5%) females and 47 (47.5%) males. Seventy-nine (79.8%) ingestions were self-administered by the child, and in 20 cases (20.2%) the substance was administered erroneously by caregivers: in 12/20 (60.0%) the wrong drug dose was given, while in 8/20 (40.0%) the wrong substance was given. Self-administration was associated with presentation to the ED within 12 hours (p=0.027, OR 6.65, 95% CI 1.24-35.72). Notably, 94.9% of the cases involved substances kept within reach of the child. Sixty-nine (82.1%) had the substance kept in the original container, 3 (3.6%) in food containers, 8 (9.5%) in other containers and 4 (4.8%) without a container. Of the 50 cases with information on labelling, 40/50 (80.0%) were accurately labelled, 2/50 (4.0%) wrongly labelled, and 8/50 (16.0%) were unlabelled. Implicated substances included personal care products (11.1%), household cleaning products (3.0%), and different classes of drugs such as paracetamol (22.2%), antihistamines (17.2%) and sympathomimetics (8.1%). Children under 3 years of age were 4.8 times more likely to be poisoned by household substances than older children (p=0.009, 95% CI 1.48-15.77). Prehospital interventions were more likely to have been performed in poisonings with household substances (p=0.005, OR 6.12, 95% CI 1.73-21.68). Fifty-nine (59.6%) were asymptomatic, 34 (34.3%) had a Poisoning Severity Score (PSS) grade of 1 (minor) and 6 (6.1%) grade 2 (moderate). Older children were 9.3 times more likely to be symptomatic (p<0.001, 95% CI 3.15-27.25). Thirty (32%) required admission. Conclusion: A significant proportion of accidental poisoning cases were due to medication administration errors by caregivers, which should be preventable. Risk factors for accidental poisoning included lack of adequate caregiver supervision, improper labelling and young age of the child. There is an urgent need to improve caregiver counselling during medication dispensing, as well as to educate caregivers on basic child safety measures in the home, to prevent future accidental poisonings.
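
For illustration, the snippet below computes an odds ratio with a 95% confidence interval from a 2x2 exposure/outcome table using the standard log-OR approximation; the counts are made-up placeholders, not the study's data.

```python
# Odds ratio with 95% CI from a 2x2 table (Woolf/log-OR method); placeholder counts.
import math

# rows: exposed (e.g., age < 3 years) / unexposed; columns: outcome yes / no
a, b = 30, 20     # exposed with / without household-substance poisoning
c, d = 15, 34     # unexposed with / without household-substance poisoning

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f} (95% CI {lower:.2f}-{upper:.2f})")
```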

Keywords: accidental, caregiver, paediatrics, poisoning

Procedia PDF Downloads 211
1273 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
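
A minimal sketch of ordinary kriging of groundwater levels from a well subset is shown below, assuming the PyKrige package; the coordinates and levels are synthetic placeholders, and the variogram model is an arbitrary choice rather than the one fitted in the study.

```python
# Sketch of ordinary kriging of groundwater levels from a subset of wells (PyKrige assumed).
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(1)
n_wells = 18                                   # e.g., roughly a 15% subset of a 121-well network
x = rng.uniform(0, 10, n_wells)                # easting, km (synthetic)
y = rng.uniform(0, 10, n_wells)                # northing, km (synthetic)
gwl = 5 + 0.3 * x - 0.2 * y + rng.normal(0, 0.2, n_wells)   # groundwater level, m (synthetic)

ok = OrdinaryKriging(x, y, gwl, variogram_model="spherical", verbose=False)
gridx = np.linspace(0, 10, 50)
gridy = np.linspace(0, 10, 50)
z_pred, ss = ok.execute("grid", gridx, gridy)  # kriged surface and kriging variance

# Denser monitoring networks should, on average, shrink the kriging variance.
print(z_pred.shape, float(ss.mean()))
```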

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 58
1272 Characterization of Petrophysical Properties of Reservoirs in Bima Formation, Northeastern Nigeria: Implication for Hydrocarbon Exploration

Authors: Gabriel Efomeh Omolaiye, Jimoh Ajadi, Olatunji Seminu, Yusuf Ayoola Jimoh, Ubulom Daniel

Abstract:

Identification and characterization of the petrophysical properties of reservoirs in the Bima Formation were undertaken to understand their spatial distribution and impact on hydrocarbon saturation in this highly heterolithic siliciclastic sequence. The study was carried out using nine well logs from the Maiduguri and Baga/Lake sub-basins within the Borno Basin. The different log curves were combined to decipher the lithological heterogeneity of the serrated sand facies and to aid the geologic correlation of sand bodies within the sub-basins. Evaluation of the formation reveals largely undifferentiated to highly serrated and lenticular sand bodies, from which twelve reservoirs, named Bima Sand-1 to Bima Sand-12, were identified. The reservoir sand bodies are bifurcated by shale beds, which reduce their thicknesses variably from 0.61 to 6.1 m. The shale content in the sand bodies ranges from 11.00% (relatively clean) to 88.00% (high shale content). The formation also has variable porosity, with calculated total porosity ranging from as low as 10.00% to as high as 35.00%; similarly, effective porosity values span 2.00 to 24.00%. The irregular porosity values also account for the wide range of field-average permeability estimates computed for the formation, which range from 0.03 to 319.49 mD. Hydrocarbon saturation (Sh) in the thin lenticular sand bodies also varies from 40.00 to 78.00%. Hydrocarbon was encountered in three intervals in Ga-1, four intervals in Da-1, two intervals in Ar-1, and one interval in Ye-1. The Ga-1 well encountered a 30.78 m hydrocarbon column in 14 thin sand lobes in Bima Sand-1, with thicknesses from 0.60 m to 5.80 m and an average saturation of 51.00%, while Bima Sand-2 intercepted a 45.11 m hydrocarbon column in 12 thin sand lobes with an average saturation of 61.00%, and Bima Sand-9 has a 6.30 m column in 4 thin sand lobes. Da-1 has hydrocarbon in Bima Sand-8 (5.30 m, Sh of 58.00% in 5 sand lobes), Bima Sand-10 (13.50 m, Sh of 52.00% in 6 sand lobes), Bima Sand-11 (6.20 m, Sh of 58.00% in 2 sand lobes) and Bima Sand-12 (16.50 m, Sh of 66% in 6 sand lobes). In the Ar-1 well, hydrocarbon occurs in Bima Sand-3 (2.40 m column, Sh of 48% in a sand lobe) and Bima Sand-9 (6.0 m, Sh of 58% in a sand lobe). The Ye-1 well intersected only 0.5 m of hydrocarbon in Bima Sand-1, with 78% saturation. Although the Bima Formation has variable hydrocarbon saturation, mainly gas, in the Maiduguri and Baga/Lake sub-basins of the research area, its very thin, serrated sand beds, coupled in part with very low effective porosity and permeability, would pose a significant exploitation challenge. The sediments were deposited in a fluvio-lacustrine environment, resulting in very thinly laminated or serrated alternations of sand and shale bed lithofacies.
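
For illustration, the sketch below runs a generic log-based petrophysical calculation (linear gamma-ray shale volume, shale-corrected density porosity, and Archie water saturation); it is not necessarily the authors' workflow, and all readings and constants are hypothetical.

```python
# Generic log-based petrophysics for one depth sample (hypothetical readings and constants).
gr, gr_clean, gr_shale = 75.0, 30.0, 120.0      # gamma ray, API units
rhob, rho_matrix, rho_fluid = 2.45, 2.65, 1.0   # bulk/matrix/fluid density, g/cc
rt, rw = 20.0, 0.05                              # deep resistivity, formation water resistivity, ohm-m
a, m, n = 1.0, 2.0, 2.0                          # Archie constants

vsh = (gr - gr_clean) / (gr_shale - gr_clean)                  # shale volume (linear GR index)
phi_total = (rho_matrix - rhob) / (rho_matrix - rho_fluid)     # total (density) porosity
phi_eff = phi_total * (1 - vsh)                                # shale-corrected effective porosity
sw = ((a * rw) / (rt * phi_eff**m)) ** (1 / n)                 # Archie water saturation
sh = 1 - sw                                                    # hydrocarbon saturation

print(f"Vsh={vsh:.2f}  PHIT={phi_total:.2f}  PHIE={phi_eff:.2f}  Sw={sw:.2f}  Sh={sh:.2f}")
```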

Keywords: Bima, Chad Basin, fluvio-lacustrine, lithofacies, serrated sand

Procedia PDF Downloads 171
1271 Exploring the Impact of Domestic Credit Extension, Government Claims, Inflation, Exchange Rates, and Interest Rates on Manufacturing Output: A Financial Analysis.

Authors: Ojo Johnson Adelakun

Abstract:

This study explores the long-term relationships between manufacturing output (MO) and several economic determinants: the interest rate (IR), inflation rate (INF), exchange rate (EX), credit to the private sector (CPSM), and gross claims on the government sector (GCGS), using monthly data from March 1966 to December 2023. Employing advanced econometric techniques, including Fully Modified Ordinary Least Squares (FMOLS), Dynamic Ordinary Least Squares (DOLS), and Canonical Cointegrating Regression (CCR), the analysis provides several key insights. The findings reveal a positive association between interest rates and manufacturing output, which diverges from traditional economic theory predicting a negative correlation due to increased borrowing costs. This outcome is attributed to the financial resilience of large enterprises, which allows them to sustain investment in production despite higher interest rates. In addition, inflation demonstrates a positive relationship with manufacturing output, suggesting that stable inflation within target ranges creates a favourable environment for investment in productivity-enhancing technologies. Conversely, the exchange rate shows a negative relationship with manufacturing output, reflecting the adverse effects of currency depreciation on the cost of imported raw materials. The negative impact of CPSM underscores the importance of directing credit efficiently towards productive sectors rather than speculative ventures. Moreover, increased government borrowing appears to crowd out private sector credit, negatively affecting manufacturing output. Overall, the study highlights the need for a coordinated policy approach integrating monetary, fiscal, and financial sector strategies. Policymakers should account for the differential impacts of interest rates, inflation, exchange rates, and credit allocation on various sectors. Ensuring stable inflation, efficient credit distribution, and mitigated exchange rate volatility is critical for supporting manufacturing output and promoting sustainable economic growth. This research provides valuable insights into the economic dynamics influencing manufacturing output and offers policy recommendations tailored to South Africa’s economic context.
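
A minimal, hypothetical DOLS sketch is given below: the level of manufacturing output is regressed on the level of one determinant plus leads and lags of its first difference, with HAC standard errors; the file and variable names are placeholders, and FMOLS/CCR would require separate estimators.

```python
# Hypothetical DOLS sketch for one long-run relationship (placeholder data file and names).
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("macro_monthly.csv", parse_dates=["date"], index_col="date")
y = np.log(df["manufacturing_output"])
x = np.log(df["credit_private_sector"])

p = 2                                      # number of leads/lags of the first difference
dx = x.diff()
X = pd.DataFrame({"x": x})
for k in range(-p, p + 1):
    X[f"dx_{k}"] = dx.shift(-k)            # k < 0: lags of dx, k > 0: leads of dx

data = pd.concat([y.rename("y"), X], axis=1).dropna()
res = sm.OLS(data["y"], sm.add_constant(data.drop(columns="y"))).fit(
    cov_type="HAC", cov_kwds={"maxlags": 12}   # Newey-West standard errors
)
print(res.params["x"], res.bse["x"])       # long-run elasticity estimate and its std. error
```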

Keywords: domestic credit, government claims, financial variables, manufacturing output, financial analysis

Procedia PDF Downloads 18
1270 Revealing Thermal Degradation Characteristics of Distinctive Oligo- and Polysaccharides of Prebiotic Relevance

Authors: Attila Kiss, Erzsébet Némedi, Zoltán Naár

Abstract:

As natural prebiotic (non-digestible) carbohydrates stimulate the growth of the colon microflora and contribute to maintaining the health of the host, analytical studies aiming to reveal the chemical behavior of these beneficial food components have come to the forefront of interest. Food processing (especially baking) may lead to significant conversion of the parent compounds, hence it is of utmost importance to characterize the transformation patterns and the plausible decomposition products formed by thermal degradation. The relevance of this work is confirmed by the widespread use of these carbohydrates (fructo-oligosaccharides, cyclodextrins, raffinose and resistant starch) in the food industry, as more and more functional foodstuffs are being developed with prebiotics as bioactive components. Twelve different types of oligosaccharides were investigated in order to reveal their thermal degradation characteristics. Different carbohydrate derivatives (D-fructose and D-glucose oligomers and polymers) were exposed to elevated temperatures (150 °C, 170 °C, 190 °C, 210 °C, and 220 °C) for 10 min. An advanced HPLC method was developed and used to identify the decomposition products of the carbohydrates formed as a consequence of the thermal treatment. Gradient elution was applied with a binary solvent system (acetonitrile, water) through an amine-based carbohydrate column. Evaporative light scattering (ELS) proved suitable for the reliable detection of the UV/VIS-inactive carbohydrate degradation products. These experimental conditions and advanced techniques made it possible to survey all of the intermediates formed. Changes in oligomer distribution were established for all studied prebiotics throughout the thermal treatments. The obtained results indicate an increased extent of chain degradation of the carbohydrate moiety at elevated temperatures: a prevalence of oligomers with shorter chain length and even the formation of monomer sugars (D-glucose and D-fructose) could be observed at higher temperatures. Unique oligomer distributions, not described previously, were revealed for each specific carbohydrate studied, which might result in varied prebiotic activities. Resistant starches exhibited high stability under thermal treatment. The degradation process was modeled by a plausible reaction mechanism in which proton-catalyzed degradation and chain cleavage take place.

Keywords: prebiotics, thermal degradation, fructo-oligosaccharide, HPLC, ELS detection

Procedia PDF Downloads 305
1269 Factors Associated with Increase of Diabetic Foot Ulcers in Diabetic Patients in Nyahururu County Hospital

Authors: Daniel Wachira

Abstract:

The study aims to determine factors contributing to increasing rates of diabetic foot ulcers (DFU) among diabetes mellitus (DM) patients attending clinics at Nyahururu County Referral Hospital, Laikipia County. The study objectives are: to determine the demographic factors contributing to increased rates of DFU among DM patients; to determine the sociocultural factors contributing to increased rates of DFU among DM patients; and to determine the health facility factors contributing to increased rates of DFU among DM patients attending the DM clinic at Nyahururu County Referral Hospital, Laikipia County. The study will adopt a descriptive cross-sectional design, in which data are collected at a single time point without follow-up. This method is fast and inexpensive, avoids loss to follow-up, and allows associations between variables to be determined. The study population includes all DM patients with or without DFU. Simple random sampling, a probability sampling method, will be used. The study will employ researcher-administered questionnaires to collect the required information. The questionnaire was developed in consultation with research experts (the supervisor) to ensure reliability. It will be pre-tested by hand-delivering it to a sample equal to 10% of the sample size at J.M. Kariuki Memorial Hospital, Nyandarua County, collecting the duly filled copies, and refining errors to ensure it is valid for collecting data relevant to this study. Data collection will begin after approval of the project. Questionnaires will be administered only to participants who meet the selection criteria and agree to participate in the study, in order to collect key information with regard to the study objectives. Authority to conduct the study will be obtained from the National Commission for Science, Technology and Innovation, and permission will also be obtained from the Nyahururu County Referral Hospital administration. The purpose of the study will be explained to respondents in order to secure informed consent, no names will be written on the questionnaires, and all information will be treated with maximum confidentiality by not disclosing respondents' identities.

Keywords: diabetes, foot ulcer, social factors, hospital factors

Procedia PDF Downloads 16
1268 Assessing the Impact of Adopting Climate Smart Agriculture on Food Security and Multidimensional Poverty: Case of Rural Farm Households in the Central Rift Valley of Ethiopia

Authors: Hussien Ali, Mesfin Menza, Fitsum Hagos, Amare Haileslassie

Abstract:

Climate change has adverse effects on agricultural productivity and the natural resource base, negatively affecting the well-being of households and communities. The government and NGOs promote climate-smart agricultural (CSA) practices to help farmers adapt to and mitigate the negative effects of climate change. This study aims to identify widely available CSA practices and examine their impacts on the food security and multidimensional poverty of rural farm households in the Central Rift Valley, Ethiopia. Using a three-stage probability-proportional-to-size sampling procedure, the study randomly selected 278 households from two kebeles in each of four districts. Cross-sectional data for the 2020/21 cropping season were collected using a structured and pretested survey questionnaire. The food consumption score, dietary diversity score, food insecurity experience scale, and multidimensional poverty index were calculated to measure household welfare. A multinomial endogenous switching regression model was used to assess the average treatment effects of CSA on these outcome indicators for adopter and non-adopter households. The results indicate that the widely adopted CSA practices in the area are conservation agriculture, soil fertility management, crop diversification, and small-scale irrigation. Adopter households have, on average, significantly higher food consumption and dietary diversity scores and lower food insecurity scores than non-adopters. Moreover, adopter households have, on average, a lower deprivation score in multidimensional poverty than non-adopter households. Upscaling the adoption of CSA practices through the improvement of households’ implementation capacity and better information, technical advice, and innovative financing mechanisms is advised. Upscaling CSA practices can further promote the achievement of global goals such as the SDG 1, SDG 2, and SDG 13 targets, which aim to end poverty and hunger and to mitigate the adverse impacts of climate change, respectively.
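
As a rough illustration of how the household welfare indicators mentioned above can be computed from survey data, the snippet below calculates a food consumption score (a weighted sum of 7-day food-group consumption frequencies) and a simple dietary diversity score. The food-group weights follow the commonly used WFP convention and the column names are assumptions, not the study's actual instrument.

```python
# Illustrative computation of household welfare indicators from a survey
# (column names are assumed; weights follow the standard WFP FCS convention).
import pandas as pd

FCS_WEIGHTS = {          # days consumed in the last 7 days, per food group
    "staples": 2.0, "pulses": 3.0, "vegetables": 1.0, "fruits": 1.0,
    "meat_fish": 4.0, "milk": 4.0, "sugar": 0.5, "oil": 0.5,
}

def food_consumption_score(row):
    """Weighted sum of food-group frequencies, each capped at 7 days."""
    return sum(w * min(row[g], 7) for g, w in FCS_WEIGHTS.items())

def dietary_diversity_score(row):
    """Count of food groups consumed at least once in the recall period."""
    return sum(1 for g in FCS_WEIGHTS if row[g] > 0)

# hh = pd.read_csv("household_survey_2020_21.csv")   # hypothetical file
# hh["fcs"] = hh.apply(food_consumption_score, axis=1)
# hh["dds"] = hh.apply(dietary_diversity_score, axis=1)
# print(hh.groupby("csa_adopter")[["fcs", "dds"]].mean())  # adopters vs non-adopters
```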

Keywords: climate-smart agriculture, food security, multidimensional poverty, upscaling CSA, Ethiopia

Procedia PDF Downloads 90
1267 Effects of Mild Heat Treatment on the Physical and Microbial Quality of Salak Apricot Cultivar

Authors: Bengi Hakguder Taze, Sevcan Unluturk

Abstract:

Şalak apricot (Prunus armeniaca L., cv. Şalak) is a specific variety grown in Igdir, Turkey. The fruit has distinctive properties that distinguish it from other cultivars, such as its unique size, color, taste, and higher water content. Drying is a widely used method for the preservation of apricots; however, fresh consumption is preferred for Şalak apricot because of its low dry matter content. The high water content and climacteric nature of the fruit make it prone to rapid quality loss during storage. Hence, alternative processing methods are needed to extend the shelf life of the fresh produce. Mild heat (MH) treatment is of great interest as it can reduce the microbial load and inhibit enzymatic activity. Therefore, the aim of this study was to evaluate the impact of mild heat treatment on the natural microflora found on Şalak apricot surfaces and on some physical quality parameters of the fruit, such as color and firmness. For this purpose, apricot samples were treated at temperatures between 40 and 60 °C for periods ranging from 10 to 60 min in a temperature-controlled water bath. The natural flora on the fruit surfaces was examined using standard plating techniques both before and after treatment, and changes in the color and firmness of the fruit samples were also monitored. Control samples initially contained 7.5 ± 0.32 log CFU/g total aerobic plate count (TAPC), 5.8 ± 0.31 log CFU/g yeast and mold count (YMC), and 5.17 ± 0.22 log CFU/g coliforms. The highest log reductions in TAPC and YMC were 3.87-log and 5.8-log after treatments at 60 °C and 50 °C, respectively. Nevertheless, the fruit lost its characteristic aroma at temperatures above 50 °C; furthermore, large color changes (ΔE > 6) were observed and the firmness of the apricot samples was reduced under these conditions. On the other hand, MH treatment at 41 °C for 10 min resulted in 1.6-log and 0.91-log reductions in TAPC and YMC, respectively, with only slightly noticeable changes in color (ΔE < 3). In conclusion, the application of temperatures higher than 50 °C caused undesirable changes in the physical quality of Şalak apricots. Although higher microbial reductions were achieved at those temperatures, temperatures between 40 and 50 °C should be further investigated with respect to fruit quality parameters. Another strategy may be the use of high temperatures for short periods not exceeding 1-5 min. Finally, MH treatment combined with UV-C light irradiation can also be considered as a hurdle strategy for better inactivation results.
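
For readers who want to reproduce the two quantitative measures reported here, the sketch below computes a microbial log reduction from before/after counts (already in log CFU/g) and a CIE76 color difference ΔE from L*a*b* readings. The plate-count numbers come from the abstract; the L*a*b* values and function names are illustrative assumptions.

```python
# Sketch: microbial log reduction and CIE76 color difference (Delta E).
import math

def log_reduction(n_before_log, n_after_log):
    """Log reduction when counts are already expressed in log CFU/g."""
    return n_before_log - n_after_log

def delta_e(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) readings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

# Example with values from the abstract: TAPC drops from 7.5 to 5.9 log CFU/g
# after mild heating at 41 C for 10 min (a 1.6-log reduction).
print(round(log_reduction(7.5, 5.9), 2))                          # 1.6
# Hypothetical L*a*b* readings before and after treatment:
print(round(delta_e((55.0, 20.0, 40.0), (53.5, 19.0, 38.5)), 2))  # ~2.35 (< 3, slight change)
```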

Keywords: color, firmness, mild heat, natural flora, physical quality, şalak apricot

Procedia PDF Downloads 137
1266 Influence of Initial Curing Time, Water Content and Apparent Water Content on Geopolymer Modified Sludge Generated in Landslide Area

Authors: Minh Chien Vu, Tomoaki Satomi, Hiroshi Takahashi

Abstract:

Owing to their clay content and clay mineralogy, soft and highly compressible soils (sludge) lack the strength to support construction loads over a structure's service life and therefore constitute a major problem in geotechnical engineering projects. Geopolymer, a kind of inorganic polymer, is a promising material with a wide range of applications and a lower level of CO₂ emissions than conventional Portland cement. However, the feasibility of geopolymer for modifying soft and highly compressible soil has not received much attention, owing to the heat treatment required to activate the fly ash component and the high content of clay-size particles in sludge, which reduces the efficiency of the reaction. On the other hand, geopolymer-modified sludge can be affected by other important factors such as initial curing time, initial water content, and apparent water content. Therefore, this paper describes a different potential application of geopolymer: soil stabilization in landslide areas, improving the technical properties of sludge so that heavy machines can move on it. A sludge conditioning process is used to demonstrate the possibility of stabilizing sludge with fly ash-based geopolymer under ambient curing conditions (around 20 °C) in terms of failure strength, strain, and bulk density. Sludge conditioning is a process whereby sludge is treated with chemicals or by various other means to improve its dewatering characteristics before application in the construction area. The effects of initial curing time, water content, and apparent water content on the modification of sludge are the main focus of this study. Test results indicate that the initial curing time has potential for improving the failure strain and strength of modified sludge under the specific conditions of soft soil. The results further show that an initial water content of more than 50% of the total sludge mass leads to a significant decrease in the strength performance of geopolymer-based modified sludge. The optimum apparent water content of geopolymer-modified sludge is strongly influenced by the geopolymer content and the initial water content of the sludge. Solutions to minimize the effect of high initial water content will be considered in more depth in future work.
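
As a small numerical illustration of the water-content quantities discussed above, the sketch below computes an initial water content (water mass as a fraction of total sludge mass, the definition implied by the 50% threshold) and an apparent water content of the sludge-geopolymer mix. The definitions and the batch masses are assumptions for illustration, not the study's measured values.

```python
# Sketch: water-content bookkeeping for a sludge + fly-ash geopolymer mix.
# Definitions follow the abstract's usage (water as a fraction of total mass);
# all masses below are hypothetical.

def initial_water_content(m_water, m_sludge_total):
    """Water mass as a fraction of total sludge mass (the abstract's 50% threshold)."""
    return m_water / m_sludge_total

def apparent_water_content(m_water, m_sludge_total, m_geopolymer):
    """Water mass as a fraction of the total modified-sludge mix mass."""
    return m_water / (m_sludge_total + m_geopolymer)

m_water, m_sludge, m_geo = 550.0, 1000.0, 200.0   # grams, hypothetical batch
print(initial_water_content(m_water, m_sludge))           # 0.55 -> above the 50% threshold
print(apparent_water_content(m_water, m_sludge, m_geo))   # ~0.46 after adding the binder
```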

Keywords: landslide, sludge, fly ash, geopolymer, sludge conditioning

Procedia PDF Downloads 116
1265 The Problem of Suffering: Job, The Servant and Prophet of God

Authors: Barbara Pemberton

Abstract:

Now that people of all faiths are experiencing suffering due to many global issues, shared narratives may provide common ground in which true understanding of each other may take root. This paper considers the all-too-common problem of suffering and addresses how adherents of the three great monotheistic religions seek understanding, and the appropriate believer’s response, from the same story found within their respective sacred texts. Most scholars from each of these three traditions (Judaism, Christianity, and Islam) consider the writings of the Tanakh/Old Testament to at least contain divine revelation. While they may not agree on the extent of the revelation or the method of its delivery, they do share stories as well as a common desire to glean God’s message for God’s people from the pages of the text. One such shared story is that of Job, the servant of Yahweh, called Ayyub, the prophet of Allah, in the Qur’an. Job is described as a pious, righteous man who loses everything (family, possessions, and health) when his faith is tested. Three friends come to console him. Through it all, Job remains faithful to his God, who rewards him by restoring all that was lost. All three hermeneutic communities consider Job to be an archetype of human response to suffering, regarding Job’s response to his situation as exemplary. The story of Job addresses more than the problem of evil; at stake in the story is Job’s very relationship to his God. Some exegetes believe that Job was adapted into the Jewish milieu by a gifted redactor who used the original ancient tale as the “frame” for the biblical account (chapters 1, 2, and 42:7-17) and then enlarged the story with the complex center section of poetic dialogues, creating a work with numerous possible interpretations. Within the poetic center, Job goes so far as to question God, a response to which Jews relate, finding strength in dialogue and even in wrestling with God. Muslims only embrace the Job of the biblical narrative frame, as further identified through the Qur’an and the prophetic traditions, considering the center section an errant human addition not representative of a true prophet of Islam. The Qur’anic injunction against questioning God also renders the center theologically suspect. Christians also draw various responses from the story of Job. While many believers may agree with the Islamic perspective of God’s ultimate sovereignty, others would join their Jewish neighbors in questioning God, anticipating not answers but rather an awareness of his presence, with peace and hope becoming a reality experienced through the indwelling presence of God’s Holy Spirit. Related questions are as endless as the possible responses. This paper considers a few of the many Jewish, Christian, and Islamic insights from the ancient story, in the hope that adherents within each tradition will use it to better understand the other faiths’ approach to suffering.

Keywords: suffering, Job, Qur'an, tanakh

Procedia PDF Downloads 186
1264 Ionic Liquids as Substrates for Metal-Organic Framework Synthesis

Authors: Julian Mehler, Marcus Fischer, Martin Hartmann, Peter S. Schulz

Abstract:

During the last two decades, the synthesis of metal-organic frameworks (MOFs) has gained ever-increasing attention. Based on their pore size and shape as well as host-guest interactions, MOFs are of interest for numerous fields related to porous materials, such as catalysis and gas separation. Usually, MOF synthesis takes place in an organic solvent between room temperature and approximately 220 °C, with mixtures of polyfunctional organic linker molecules and metal precursors as substrates. Reactions at temperatures above the boiling point of the solvent, i.e. solvothermal reactions, are run in autoclaves or sealed glass vessels under autogenous pressure. A relatively new approach to the synthesis of MOFs is the so-called ionothermal synthesis route, which applies an ionic liquid as the solvent; the ionic liquid can serve as a structure-directing template and/or a charge-compensating agent in the final coordination polymer structure. Furthermore, this method often allows for less harsh reaction conditions than the solvothermal route. Here a variation of the ionothermal approach is reported in which the ionic liquid also serves as the organic linker source. Using 1-ethyl-3-methylimidazolium terephthalates ([EMIM][Hbdc] and [EMIM]₂[bdc]), the one-step synthesis of MIL-53(Al)/boehmite composites with interesting features is possible. The resulting material already forms at moderate temperatures (90-130 °C) and is stabilized in the usually unfavored ht-phase. Additionally, in contrast to already published procedures for MIL-53(Al) synthesis, no further activation at high temperatures is required. A full characterization of this novel composite material is provided, including XRD, solid-state NMR, elemental analysis, SEM, and sorption measurements, and its features are compared to those of MIL-53(Al) samples produced by the classical solvothermal route. Furthermore, the syntheses of the applied ionic liquids and salts are discussed. The influence of the degree of ionicity of the linker source [EMIM]x[H(2-x)bdc] on the crystal structure and the achievable synthesis temperature is investigated and gives insight into the role of the IL during synthesis. Aside from the synthesis of MIL-53 from EMIM terephthalates, the use of a phosphonium cation in this approach is discussed as well. Additionally, the employment of ILs in the preparation of other MOFs is presented briefly, including the ZIF-4 framework from the respective imidazolate ILs and chiral camphorate-based frameworks from their imidazolium precursors.

Keywords: ionic liquids, ionothermal synthesis, material synthesis, MIL-53, MOFs

Procedia PDF Downloads 208
1263 Research on Tight Sandstone Oil Accumulation Process of the Third Member of Shahejie Formation in Dongpu Depression, China

Authors: Hui Li, Xiongqi Pang

Abstract:

In recent years, tight oil has become a hot spot for unconventional oil and gas exploration and development worldwide. The Dongpu Depression is a typical hydrocarbon-rich basin in the southwest of the Bohai Bay Basin, in which tight sandstone oil and gas have been discovered in deep reservoirs, most of them buried deeper than 3500 m. The distribution and development characteristics of these deep tight sandstone reservoirs need to be studied. The main source rocks in the study area are the dark mudstone and shale of the middle and lower third sub-member of the Shahejie Formation. The total organic carbon (TOC) content of the source rock ranges from 0.08% to 11.54%, generally above 0.6%, and S1+S2 ranges from 0.04 to 72.93 mg/g, generally above 2 mg/g, so the source rocks can be evaluated as fair to good overall. The kerogen is predominantly type II1 and II2. Vitrinite reflectance (Ro) is mostly greater than 0.6%, indicating that the source rock has entered the hydrocarbon generation threshold. The physical properties of the reservoir are poor: most reservoirs have a porosity below 12% and a permeability below 1×10⁻³ μm². The rocks in this area show strong heterogeneity, and some areas developed sweet spots with high porosity and permeability. According to SEM, thin-section, and fluid-inclusion analyses, among other methods, the reservoir was affected by compaction and cementation during the early diagenesis stage (44-31 Ma). Diagenesis tightened the reservoirs in the Huzhuangji, Pucheng, and Weicheng areas, while porosity in the Machang, Qiaokou, and Wenliu areas remained above 12%. During stage A of middle diagenesis (31-17 Ma), reservoir porosity in the Machang, Pucheng, and Huzhuangji areas increased due to dissolution; after that, the source rock reached the oil generation window for the first phase of hydrocarbon charging (31-23 Ma), forming conventional oil accumulations in the Machang, Qiaokou, Wenliu, and Huzhuangji areas and unconventional tight reservoirs in the Pucheng and Weicheng areas. During stage B of middle diagenesis (17-7 Ma), reservoir porosity continued to decrease after dissolution, leaving the reservoirs generally tight. Since 7 Ma, a second phase of hydrocarbon charging has been in progress, and most of the pools charged and formed during this phase are tight sandstone oil reservoirs. In conclusion, tight sandstone oil formed in two patterns in the Dongpu Depression, which can be summarized as ‘densification first, then accumulation’ and ‘accumulation first, then densification’.
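
The porosity and permeability cut-offs quoted above lend themselves to a simple screening rule. The sketch below classifies reservoir samples as tight or conventional using the 12% porosity and 1×10⁻³ μm² (roughly 1 mD) thresholds cited in the abstract; the sample values themselves are made up for illustration.

```python
# Sketch: screening reservoir samples against the tight-sandstone cut-offs cited
# in the abstract (porosity < 12%, permeability < 1e-3 um^2, i.e. ~1 mD).

def classify_reservoir(porosity_pct, permeability_um2):
    """Return 'tight' if either property falls below its cut-off, else 'conventional'."""
    if porosity_pct < 12.0 or permeability_um2 < 1e-3:
        return "tight"
    return "conventional"

# Hypothetical samples: (area, porosity in %, permeability in um^2)
samples = [("Pucheng", 8.5, 4e-4), ("Machang", 14.2, 2.5e-3), ("Weicheng", 10.9, 9e-4)]
for area, phi, k in samples:
    print(area, classify_reservoir(phi, k))
# Pucheng tight, Machang conventional, Weicheng tight
```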

Keywords: accumulation process, diagenesis, dongpu depression, tight sandstone oil

Procedia PDF Downloads 116
1262 Tiebout and Crime: How Crime Affects the Income Tax Capacity

Authors: Nik Smits, Stijn Goeminne

Abstract:

Despite the extensive literature on the relation between crime and migration, not much is known about how crime affects the tax capacity of local communities. This paper empirically investigates whether the Flemish local income tax base yield is sensitive to changes in the local crime level. The underlying assumptions are threefold. In a Tiebout world, rational voters, holding the local government accountable for the safety of its citizens, move out when the local level of security diverges too far from what they want it to be (first assumption). If migration is due to crime, then the wealthier citizens are expected to move first (second assumption), because looking for a place elsewhere implies transaction costs, which wealthier citizens are more likely to be able to pay. As a consequence, the average income per capita, and thus the income distribution, will be affected, which in turn will influence the local income tax base yield (third assumption). A decreasing average income per capita, if not compensated by increased earnings of the citizens who stay or of new citizens entering the locality, must result in a decreasing local income tax base yield. In the absence of compensation from higher-level governments, decreasing local tax revenues could prove disastrous for a crime-ridden municipality. When communities do not succeed in reducing the number of offences, this can be the onset of a cumulative process of urban deterioration. A spatial panel data model containing several proxies for the local level of crime in 306 Flemish municipalities over the period 2000-2014 is used to test the relation between crime and the local income tax base yield. In addition to this direct relation, the underlying assumptions are investigated as well. Preliminary results show a modest but positive relation between local violent crime rates and the efflux of citizens, persistent up to a two-year lag. This positive effect is dampened by rising crime rates in neighboring municipalities. Changes in violent crime, and to a lesser extent in thefts and extortions, reduce the influx of citizens with a one-year lag. Again, this effect is diminished by external effects from neighboring municipalities, meaning that increasing crime rates in neighboring municipalities (especially violent crimes) have a positive effect on the local influx of citizens. Crime also has a depressing effect on the average income per capita within a municipality, whereas increasing crime rates in neighboring municipalities increase it. Notwithstanding these results, crime does not seem to significantly affect the local tax base yield, suggesting that the depressing effect of crime on the income base is compensated by a limited but wealthier influx of new citizens.
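
To make the estimation strategy concrete, the sketch below sets up a fixed-effects panel regression of the local income tax base yield on lagged local crime and the lagged average crime rate of neighbouring municipalities. The variable names, the neighbour averaging, the one-year lag, and the clustering choice are illustrative assumptions rather than the paper's exact specification.

```python
# Illustrative fixed-effects panel sketch (assumed column names, not the paper's data).
import pandas as pd
import statsmodels.formula.api as smf

# df columns assumed: municipality, year, tax_base_yield, violent_crime_rate,
# neighbor_crime_rate (average rate of adjacent municipalities).
def fit_crime_tax_model(df):
    df = df.sort_values(["municipality", "year"]).copy()
    g = df.groupby("municipality")
    df["crime_lag1"] = g["violent_crime_rate"].shift(1)
    df["neighbor_crime_lag1"] = g["neighbor_crime_rate"].shift(1)
    clean = df.dropna(subset=["crime_lag1", "neighbor_crime_lag1"])
    # Municipality and year fixed effects via dummies; errors clustered by municipality.
    model = smf.ols(
        "tax_base_yield ~ crime_lag1 + neighbor_crime_lag1 + C(municipality) + C(year)",
        data=clean,
    )
    groups = pd.factorize(clean["municipality"])[0]
    return model.fit(cov_type="cluster", cov_kwds={"groups": groups})

# res = fit_crime_tax_model(pd.read_csv("flemish_panel_2000_2014.csv"))  # hypothetical file
# print(res.params[["crime_lag1", "neighbor_crime_lag1"]])
```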

Keywords: crime, local taxes, migration, Tiebout mobility

Procedia PDF Downloads 307
1261 Detecting Tomato Flowers in Greenhouses Using Computer Vision

Authors: Dor Oppenheim, Yael Edan, Guy Shani

Abstract:

This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions, and different flower sizes. The algorithm is designed to be employed on a drone that flies through greenhouses to accomplish several tasks, such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers pollinated since the last visit to the row. The developed algorithm is designed to handle real-world difficulties in a greenhouse, which include varying lighting conditions, shadowing, and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on hue, saturation, and value is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various times of day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle, time of day, camera, and thresholding type were performed. Precision, recall, and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon yielded the best precision and recall. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best precision, recall, and F1 score: over all images, precision and recall averaged 74% and 75%, respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
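
A minimal version of the color-based segmentation step described above can be sketched with OpenCV: convert to HSV, keep pixels whose normalized hue falls in the reported 0.12-0.18 band (roughly 21-32 on OpenCV's 0-179 hue scale), clean the mask morphologically, and count connected components above a size threshold. The saturation/value floors and the minimum blob area are assumptions, not the paper's tuned values, and the sketch omits the adaptive global thresholding stage.

```python
# Sketch of HSV-based flower segmentation; only the hue band (0.12-0.18, i.e.
# ~21-32 on OpenCV's 0-179 scale) comes from the paper, the rest is illustrative.
import cv2
import numpy as np

def count_yellow_flowers(bgr_image, min_area=150):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Hue band from the paper; saturation/value floors are assumptions.
    lower = np.array([21, 80, 80], dtype=np.uint8)
    upper = np.array([32, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening/closing to remove noise and fill small holes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Count connected components larger than the minimum area.
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return sum(1 for i in range(1, n_labels) if stats[i, cv2.CC_STAT_AREA] >= min_area)

# image = cv2.imread("greenhouse_row.jpg")   # hypothetical image
# print(count_yellow_flowers(image))
```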

Keywords: agricultural engineering, image processing, computer vision, flower detection

Procedia PDF Downloads 329