Search results for: queue size distribution at a random epoch
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11846

1316 Response of Planktonic and Aggregated Bacterial Cells to Water Disinfection with Photodynamic Inactivation

Authors: Thayse Marques Passos, Brid Quilty, Mary Pryce

Abstract:

The interest in developing alternative techniques to obtain safe water, free from pathogens and hazardous substances, has grown in recent times. The photodynamic inactivation (PDI) of microorganisms is a promising, ecologically friendly, multi-target approach to water disinfection. It uses visible light as an energy source combined with a photosensitiser (PS) to transfer energy/electrons to a substrate or to molecular oxygen, generating reactive oxygen species that have cidal effects on cells. PDI has mainly been used in clinical studies, and investigation of its application to water disinfection is relatively recent. The majority of studies use planktonic cells. However, in their natural environments, bacteria quite often occur not as freely suspended (planktonic) cells but in cell aggregates that are either freely floating or attached to surfaces as biofilms. Microbes form aggregates and biofilms as a strategy to protect themselves from environmental stress. As aggregates, bacteria have better metabolic function, communicate more efficiently, and are more resistant to biocidal compounds than their planktonic forms. Among the bacteria able to form aggregates are members of the genus Pseudomonas, a very diverse group widely distributed in the environment. Pseudomonas species can form aggregates/biofilms in water and can cause particular problems in water distribution systems. The aim of this study was to evaluate the effectiveness of photodynamic inactivation in killing a range of planktonic cells, including Escherichia coli DSM 1103, Staphylococcus aureus DSM 799, Shigella sonnei DSM 5570, Salmonella enterica and Pseudomonas putida DSM 6125, and aggregating cells of Pseudomonas fluorescens DSM 50090 and Pseudomonas aeruginosa PAO1. The experiments were performed in glass Petri dishes containing the bacterial suspension and the photosensitiser, irradiated with a multi-LED source (wavelengths 430 nm and 660 nm) for different time intervals.
The responses of the cells were monitored using the pour plate technique and confocal microscopy. The study showed that bacteria belonging to the Pseudomonas group tend to be more tolerant to PDI. While E. coli, S. aureus, S. sonnei and S. enterica required a dosage ranging from 39.47 J/cm² to 59.21 J/cm² for a 5 log reduction, Pseudomonads needed a dosage ranging from 78.94 to 118.42 J/cm², with a higher dose required when the cells aggregated.
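The dosages above are radiant exposures: dose (J/cm²) is irradiance (W/cm²) multiplied by exposure time (s), and a "5 log reduction" is a 10⁵-fold drop in viable counts. A minimal sketch of both relations; the irradiance value in the test is illustrative, since the abstract does not report it.

```python
import math

def pdi_dose(irradiance_mw_cm2: float, seconds: float) -> float:
    """Delivered dose in J/cm^2 from irradiance in mW/cm^2 and exposure time in s."""
    return (irradiance_mw_cm2 / 1000.0) * seconds

def log_reduction(n0: float, n: float) -> float:
    """Log10 reduction in viable counts, e.g. 1e7 -> 1e2 CFU/mL is a 5 log kill."""
    return math.log10(n0 / n)
```

For example, an assumed 50 mW/cm² source run for about 789 s delivers roughly the 39.47 J/cm² lower bound reported for E. coli.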

Keywords: bacterial aggregation, photoinactivation, Pseudomonads, water disinfection

Procedia PDF Downloads 298
1315 Parents and Stakeholders’ Perspectives on Early Reading Intervention Implemented as a Curriculum for Children with Learning Disabilities

Authors: Bander Mohayya Alotaibi

Abstract:

Valuable partnerships between parents and teachers may develop positive and effective interactions between home and school, helping these stakeholders share information and resources regarding student academics during ongoing interactions. Thus, partnerships build a solid foundation for both families and schools to help children succeed in school. Parental involvement can be seen as an effective tool that can change homes and communities, not just school systems. Seeking parents' and stakeholders' attitudes toward learning and learners can help schools design a curriculum; subsequently, this information can be used to find ways to improve the academic performance of students, especially in low-performing schools. There may be conflicts when designing a curriculum, and curriculum design may raise educational expectations on all sides. There is a lack of research that targets the specific attitudes of parents toward specific concepts in curriculum content. More research is needed on the perspectives that parents of children with learning disabilities (LD) hold regarding early reading curricula. Parents' and stakeholders' perspectives on early reading intervention implemented as a curriculum for children with LD were studied through quantitative research. The purpose of this study is to understand stakeholders' and parents' perspectives on key concepts and essential early reading skills that impact the design of a curriculum serving as an intervention for early struggling readers with LD. Those concepts or stages include phonics, phonological awareness, and reading fluency, as well as strategies used at home by parents. A survey instrument was used to gather the data. Participants were recruited through 29 schools and districts in the metropolitan area of the northern part of Saudi Arabia. Participants were stakeholders, including parents of children with learning disabilities.
Data were collected by distributing a paper-and-pen survey to schools. Psychometric properties of the instrument were evaluated for validity and reliability; face validity, content validity, and construct validity, including an exploratory factor analysis, were used to shape and reevaluate the structure of the instrument. Multivariate analysis of variance (MANOVA) was used to find differences between the variables. The study reports the perspectives of stakeholders toward reading strategies, phonics, phonological awareness, and reading fluency. Suggestions and limitations are also discussed.
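A common statistic in the kind of reliability evaluation the abstract describes is Cronbach's alpha (the abstract does not name the exact reliability statistic used, so this choice is an assumption). A self-contained sketch:

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)

def _sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(responses):
    """responses: list of rows, one per respondent; columns are survey items."""
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # transpose to per-item columns
    item_var_sum = sum(_sample_var(col) for col in items)
    total_var = _sample_var([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

Perfectly correlated items yield alpha = 1; values around 0.7 or higher are conventionally taken as acceptable reliability.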

Keywords: stakeholders, learning disability, early reading, perspectives, parents, intervention, curriculum

Procedia PDF Downloads 156
1314 Copy Number Variants in Children with Non-Syndromic Congenital Heart Diseases from Mexico

Authors: Maria Lopez-Ibarra, Ana Velazquez-Wong, Lucelli Yañez-Gutierrez, Maria Araujo-Solis, Fabio Salamanca-Gomez, Alfonso Mendez-Tenorio, Haydeé Rosas-Vargas

Abstract:

Congenital heart diseases (CHD) are the most common congenital abnormalities. These conditions can occur either as an element of distinct chromosomal malformation syndromes or in non-syndromic forms. Their etiology is not fully understood. Genetic variants such as copy number variants have been associated with CHD. The aim of our study was to analyze these genomic variants in peripheral blood from Mexican children diagnosed with non-syndromic CHD. We included 16 children with atrial and ventricular septal defects and 5 healthy subjects without heart malformations as controls. To exclude the most common syndromic heart disease alteration, we performed a fluorescence in situ hybridization test for the 22q11.2 deletion, responsible for the congenital heart abnormalities associated with DiGeorge syndrome. Then, microarray-based comparative genomic hybridization was used to identify global copy number variants. Copy number variants were identified by comparing and analyzing our results against data from the main genetic variation databases. We identified copy number gains in three chromosomal regions in the pediatric patients, 4q13.2 (31.25%), 9q34.3 (25%) and 20q13.33 (50%), where several genes associated with cellular, biosynthetic, and metabolic processes are located: UGT2B15, UGT2B17, SNAPC4, SDCCAG3, PMPCA, INPP5E, C9orf163, NOTCH1, C20orf166, and SLCO4A1. In addition, after a hierarchical cluster analysis based on the fluorescence intensity ratios from the comparative genomic hybridization, two congenital heart disease groups were generated, corresponding to children with atrial or ventricular septal defects. Further analysis with a larger sample size is needed to corroborate these copy number variants as possible biomarkers to differentiate between heart abnormalities.
Interestingly, the 20q13.33 gain was present in 50% of children with these CHD, which could suggest that alterations in both coding and non-coding elements within this chromosomal region play an important role in distinct heart conditions.
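Gains such as those reported are called from aCGH log2 fluorescence intensity ratios (test vs. reference). A toy calling rule is sketched below; the thresholds are illustrative conventions (log2(3/2) for a one-copy gain in a diploid genome), not the thresholds used by this study, whose pipeline is not described at this level of detail.

```python
import math

# Illustrative copy-number call from a per-region log2 intensity ratio.
GAIN_THRESHOLD = math.log2(3 / 2)   # ~0.585: 3 copies vs. the normal 2

def call_cnv(log2_ratio: float) -> str:
    """Classify an aCGH log2 ratio as gain, loss, or copy-neutral."""
    if log2_ratio >= GAIN_THRESHOLD:
        return "gain"
    if log2_ratio <= -GAIN_THRESHOLD:
        return "loss"
    return "neutral"
```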

Keywords: aCGH, bioinformatics, congenital heart diseases, copy number variants, fluorescence in situ hybridization

Procedia PDF Downloads 293
1313 Youth and Employment: An Outlook on Challenges of Demographic Dividend

Authors: Vidya Yadav

Abstract:

India’s youth bulge is now sharpest in the critical 15-24 age group, even as its youngest and oldest age groups begin to narrow. The single-year age data released for the 2011 Census give the number of people at each year of age in the population. The data show that India’s working-age population (15-64 years) is now 63.4 percent of the total, as against just short of 60 percent in 2001. The numbers also show that the dependency ratio, the ratio of children (0-14) and the elderly (65 and above) to those of working age, has shrunk further to 0.55. Even as the Western world is ageing, these new numbers show that India’s population is still very young. As fertility falls faster in urban areas, rural India is younger than urban India: 51.73 percent of rural Indians are under the age of 24, compared with 45.9 percent of urban Indians. The percentage of the population under the age of 24 has dropped, but many demographers say this should not be interpreted as a sign that the youth bulge is shrinking. Rather, with declining fertility the number of infants and children falls first, and this is what we see in the numbers under age 24. Indeed, the figures show that the proportion of children in the 0-4 and 5-9 age groups fell in 2011 compared to 2001. For the first time, the percentage of children in the 10-14 age group has also fallen, as the effect of families having fewer children begins to be felt. The key issue of the present paper is whether this growing youth bulge has the right skills for the workforce. The study examines the youth population structure and the distribution of employment among youth in India during 2001-2011 across different industrial categories. It also analyzes the workforce participation rate of main and marginal workers, both male and female, in rural and urban India, utilizing census data from 2001-2011.
Results show that an unconscionable number of adolescents are working when they should be studying. In rural areas, large numbers of youths work as agricultural labourers. The study shows that most working youths are in the 15-19 age group. This is, in fact, the age of entry into higher education, but economic compulsion forces them to take up jobs, killing their dreams of higher skills or education. Youths are primarily engaged in low-paying irregular jobs, as clearly revealed by the census data on marginal workers, that is, those who get work for less than six months in a year. Large proportions of youths are involved in cultivation and household industry work.
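The dependency ratio quoted above is simple arithmetic: dependents (0-14 plus 65 and above) divided by the working-age population (15-64). The sketch below approximates it from the 63.4 percent working-age share; note the census figure of 0.55 is computed from actual single-year counts, so this share-based approximation comes out slightly higher (about 0.577).

```python
def dependency_ratio(working_age_share_pct: float) -> float:
    """Dependents per working-age person, given the working-age share in percent."""
    dependent_share = 100.0 - working_age_share_pct
    return dependent_share / working_age_share_pct
```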

Keywords: main, marginal, youth, work

Procedia PDF Downloads 293
1312 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that is not masked by an allele of a potential contributor; it is considered an artefact presumed to arise from miscopying or slippage during the PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the corresponding parent allele height. Analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and make much greater use of continuous peak-height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in clustering and classification, and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in a finite mixture model, however, is practically difficult even though the calculations are relatively simple.
Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of choosing the number of components. The Chinese restaurant process (CRP), the stick-breaking process and the Pólya urn scheme are frequently used representations of Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
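The CRP prior mentioned above can be sketched in a few lines: customer n joins an existing table with probability proportional to its occupancy, or opens a new table with probability proportional to the concentration parameter alpha. This is only the partition prior; the per-cluster linear regression component of the authors' mixture is omitted.

```python
import random

def crp_partition(n_customers: int, alpha: float, rng: random.Random):
    """Sample a partition of n_customers from a CRP with concentration alpha.
    Returns the list of table occupancy counts."""
    tables = []                                  # occupancy count per table
    for n in range(n_customers):
        weights = tables + [alpha]               # existing tables, then a new table
        total = n + alpha                        # sum of the weights above
        r = rng.random() * total
        acc = 0.0
        for i, w in enumerate(weights):
            acc += w
            if r < acc:
                break
        if i == len(tables):
            tables.append(1)                     # open a new table
        else:
            tables[i] += 1                       # join table i
    return tables
```

Larger alpha tends to produce more, smaller clusters; as n grows the expected number of tables grows like alpha * log(n).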

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 331
1311 Viability of EBT3 Film in Small Dimensions to Be Use for in-Vivo Dosimetry in Radiation Therapy

Authors: Abdul Qadir Jangda, Khadija Mariam, Usman Ahmed, Sharib Ahmed

Abstract:

The Gafchromic EBT3 film has the characteristics of high spatial resolution, weak energy dependence and near tissue equivalence, which make it viable for in-vivo dosimetry in external beam and brachytherapy applications. The aim of this study is to assess the smallest film dimension that may be feasible for use in in-vivo dosimetry. To evaluate viability, film sizes from 3 x 3 mm to 20 x 20 mm were calibrated with 6 MV photon and 6 MeV electron beams. The Gafchromic EBT3 film (Lot no. A05151201, Make: ISP) was cut into five different sizes in order to establish the relationship between absorbed dose and film dimensions. The film dimensions were 3 x 3, 5 x 5, 10 x 10, 15 x 15, and 20 x 20 mm. The films were irradiated on a Varian Clinac® 2100C linear accelerator for doses from 0 to 1000 cGy using a PTW solid water phantom. The irradiation was performed as per the clinical absolute dose rate calibration setup, i.e. 100 cm SAD, 5.0 cm depth and a 10 x 10 cm² field size for photons, and 100 cm SSD, 1.4 cm depth and a 15 x 15 cm² applicator for electrons. The irradiated films were scanned in landscape orientation after a post-development time of at least 48 hours. Film scanning was accomplished using an Epson Expression 10000 XL flatbed scanner, and quantitative analysis was carried out with the ImageJ freeware software. Results show that the dose variation across film dimensions from 3 x 3 mm to 20 x 20 mm is very small, with a maximum standard deviation in optical density of 0.0058 at a dose level of 3000 cGy; the standard deviation increases with dose level, so precautions must be taken when using small-dimension films for higher doses. Analysis shows that there is insignificant variation in the measured dose with a change in dimension of the EBT3 film.
The study concludes that film dimensions down to 3 x 3 mm can safely be used up to a dose level of 3000 cGy without the need to recalibrate for the particular dimension in use. However, for higher dose levels, the films may need to be calibrated for the particular dimension in use to achieve higher accuracy. It was also noticed that the crystalline structure of the film was damaged at the edges while cutting, which can contribute to an incorrect dose reading if the region of interest includes the damaged area of the film.
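The optical-density quantity whose standard deviation is reported above is conventionally the net optical density computed from scanner pixel values, netOD = log10(PV_unexposed / PV_exposed). A minimal sketch (reading the red channel, the usual choice for EBT3, is an assumption here):

```python
import math

def net_optical_density(pv_unexposed: float, pv_exposed: float) -> float:
    """Net OD from mean scanner pixel values of the film before and after irradiation."""
    return math.log10(pv_unexposed / pv_exposed)
```

A tenfold drop in transmitted pixel value corresponds to a net OD of exactly 1.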

Keywords: external beam radiotherapy, film calibration, film dosimetry, in-vivo dosimetry

Procedia PDF Downloads 497
1310 A Factor-Analytical Approach on Identities in Environmentally Significant Behavior

Authors: Alina M. Udall, Judith de Groot, Simon de Jong, Avi Shankar

Abstract:

There are many ways in which environmentally significant behavior can be explained. Dominant psychological theories, namely the theory of planned behavior, the norm-activation theory, its extension the value-belief-norm theory, and the theory of habit, leave large parts of environmentally significant behavior unexplained. A new and rapidly growing approach is to focus on how consumers' identities predict environmentally significant behavior. Identity may be relevant because consumers have many identities that are assumed to guide their behavior; we therefore assume that many identities will guide environmentally significant behavior. In reviewing the literature, over 200 identities have been studied, making it difficult to establish the key identities for explaining environmentally significant behavior. Therefore, this paper first aims to establish the key identities previously used to explain environmentally significant behavior. Second, the aim is to test which key identities explain environmentally significant behavior. To address these aims, an online survey study (n = 578) was conducted. First, the exploratory factor analysis reveals 15 identity factors, namely: environmentally concerned identity, anti-environmental self-identity, environmental place identity, connectedness with nature identity, green space visitor identity, active ethical identity, carbon off-setter identity, thoughtful self-identity, close community identity, anti-carbon off-setter identity, environmental group member identity, national identity, identification with developed countries, cyclist identity, and thoughtful organisation identity. Furthermore, to help researchers understand and operationalize the identities, the article provides theoretical definitions for each identity, in line with identity theory, social identity theory, and place identity theory.
Second, the hierarchical regression shows that only 10 factors significantly and uniquely explain variance in environmentally significant behavior. In order of predictive power, these are: environmentally concerned identity, anti-environmental self-identity, thoughtful self-identity, environmental group member identity, anti-carbon off-setter identity, carbon off-setter identity, connectedness with nature identity, national identity, and green space visitor identity. Together the identities explain over 60% of the variance in environmentally significant behavior, a large effect size. Based on this finding, the article presents a new theoretical framework showing the key identities explaining environmentally significant behavior, to help improve and align the field.
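The "over 60% of the variance" figure is an R² statistic. A minimal sketch of R² from observed values and model predictions (the stepwise entry of identity factors in the actual hierarchical regression is not reproduced here):

```python
# R^2 = 1 - SS_residual / SS_total: the share of outcome variance
# captured by a model's predictions.

def r_squared(observed, predicted):
    mean_y = sum(observed) / len(observed)
    ss_tot = sum((y - mean_y) ** 2 for y in observed)
    ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))
    return 1.0 - ss_res / ss_tot
```

Perfect predictions give R² = 1; a model that always predicts the mean gives R² = 0.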

Keywords: environmentally significant behavior, factor analysis, place identity, social identity

Procedia PDF Downloads 453
1309 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations

Authors: K. Al Ammari, B. G. Clarke

Abstract:

Soil improvement using the vibro stone column technique consists of two main parts: (1) the installed load-bearing columns of well-compacted, coarse-grained material and (2) the improvement of the surrounding soil due to vibro compaction. Extensive research has been carried out over the last 20 years to understand the improvement in composite foundation performance due to the second part. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or have considered these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study was performed using PLAXIS 2D AE to develop an axisymmetric model of a single stone column reinforced foundation and quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by lateral expansion, so the updated-mesh advanced option was used in the analysis. Different degrees of stone column lateral expansion were simulated and numerically analyzed, and the changes in stress state, stiffness, settlement performance and bearing capacity were quantified. It was found that applying radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to the undrained conditions starts to dissipate immediately after the column installation is finished, allowing the horizontal stress to relax.
Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouragingly, the results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground. A design method should therefore include this significant effect of the applied lateral displacement during stone column installation in simulation and numerical analysis.

Keywords: bearing capacity, design, installation, numerical analysis, settlement, stone column

Procedia PDF Downloads 376
1308 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents like fires. However, these devices do not allow series arc faults to be located on a line in operating mode. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V, 50 Hz home network. The method is validated through simulations using MATLAB. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the Discrete Fourier Transform of the currents and voltages as well as the fault distance. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure used to calculate the signature coefficients is employed, but this time considering hypothetical distances at which the fault could appear; in this step the fault distance is unknown.
The iterative calculation of the Kirchhoff equations over stepped variations of the hypothetical fault distance yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing simulated currents with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load, and 11 different arc fault positions are considered for the map trace generation. The complete simulation demonstrates the performance of the method, and perspectives for future work are presented.
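The final step above reduces to elementary geometry: once each curve is fitted by a linear trend, the fault distance is read off where the two lines cross. A minimal sketch, with lines given as (slope, intercept) pairs (a least-squares fit of the signature-coefficient curves would precede this in practice):

```python
def line_intersection(a1: float, b1: float, a2: float, b2: float):
    """Intersection of y = a1*x + b1 and y = a2*x + b2 (requires a1 != a2)."""
    x = (b2 - b1) / (a1 - a2)
    return x, a1 * x + b1
```

Here x plays the role of the estimated fault distance and y the common signature-coefficient value at that distance.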

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 138
1307 Shaped Crystal Growth of Fe-Ga and Fe-Al Alloy Plates by the Micro Pulling down Method

Authors: Kei Kamada, Rikito Murakami, Masahiko Ito, Mototaka Arakawa, Yasuhiro Shoji, Toshiyuki Ueno, Masao Yoshino, Akihiro Yamaji, Shunsuke Kurosawa, Yuui Yokota, Yuji Ohashi, Akira Yoshikawa

Abstract:

Energy harvesting techniques have been widely developed in recent years, due to high demand for power supplies for 'Internet of Things' devices such as wireless sensor nodes. In these applications, techniques for converting mechanical vibration energy into electrical energy using magnetostrictive materials have attracted attention. Among magnetostrictive materials, Fe-Ga and Fe-Al alloys are attractive due to figures of merit such as price, mechanical strength, and high magnetostriction constant. Up to now, bulk crystals of these alloys have been produced by the Bridgman-Stockbarger method or the Czochralski method, which can grow large bulk crystals up to 2-3 inches in diameter. However, non-uniformity of chemical composition along the crystal growth direction cannot be avoided, which results in non-uniformity of the magnetostriction constant and a reduced production yield. The micro-pulling-down (μ-PD) method has been developed as a shaped crystal growth technique. Our group has reported shaped crystal growth of oxide and fluoride single crystals with different shapes such as rods, plates, tubes, and thin fibers. Advantages of this method are low segregation, due to the high growth rate and small diffusion of melt at the solid-liquid interface, and small kerf loss, due to the near-net-shape crystal. In this presentation, we report the growth of shaped long plate crystals of Fe-Ga and Fe-Al alloys using the μ-PD method. Alloy crystals were grown by the μ-PD method using a calcium oxide crucible and an induction heating system under a nitrogen atmosphere. The bottom hole of the crucibles was 5 x 1 mm² in size. A <100> oriented iron-based alloy was used as a seed crystal. Alloy crystal plates of 5 x 1 x 320 mm³ were successfully grown. The results of crystal growth, chemical composition analysis, magnetostrictive properties and a prototype vibration energy harvester are reported.
Furthermore, continuous crystal growth using a powder supply system, intended to minimize the chemical composition non-uniformity along the growth direction, will be reported.

Keywords: crystal growth, micro-pulling-down method, Fe-Ga, Fe-Al

Procedia PDF Downloads 335
1306 Molecular Diagnosis of a Virus Associated with Red Tip Disease and Its Detection by Non Destructive Sensor in Pineapple (Ananas comosus)

Authors: A. K. Faizah, G. Vadamalai, S. K. Balasundram, W. L. Lim

Abstract:

Pineapple (Ananas comosus) is a common crop in tropical and subtropical areas of the world. Malaysia once ranked as one of the top three pineapple producers in the world, in the 1960s and early 1970s, after Hawaii and Brazil. Moreover, the National Agriculture Policy recognizes the pineapple crop as one of the priority commodities to be developed for domestic and international markets. However, the pineapple industry in Malaysia still faces numerous challenges, one of which is the management of disease and pests. Red tip disease of pineapple was first recognized about 20 years ago in a commercial pineapple stand located in Simpang Renggam, Johor, Peninsular Malaysia. Since its discovery, the causal agent of this disease has not been confirmed. The epidemiology of red tip disease is still not fully understood; nevertheless, the disease symptoms and the spread within the field seem to point toward a viral infection. A bioassay of nucleic acid extracted from red tip-affected pineapple was performed on Nicotiana tabacum cv. Coker by rubbing the extracted sap. Localised lesions were observed 3 weeks after inoculation. Negative staining of freshly inoculated Nicotiana tabacum cv. Coker showed the presence of membrane-bound spherical particles with an average diameter of 94.25 nm under the transmission electron microscope. The shape and size of the particles were similar to those of a tospovirus. SDS-PAGE analysis of partially purified virions from inoculated N. tabacum produced a strong and a faint protein band with molecular masses of approximately 29 kDa and 55 kDa. Partially purified virions from symptomatic pineapple leaves collected in the field showed bands with molecular masses of approximately 29 kDa, 39 kDa and 55 kDa. These bands may indicate the nucleocapsid protein of a tospovirus.
Furthermore, a handheld sensor, the GreenSeeker, was used to detect red tip symptoms on pineapple non-destructively based on spectral reflectance, measured as the Normalized Difference Vegetation Index (NDVI). Red tip severity was estimated and correlated with NDVI. Linear regression models were developed, calibrated, and tested in order to estimate red tip disease severity from NDVI. Results showed a strong positive relationship between red tip disease severity and NDVI (r = 0.84).
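NDVI is computed from near-infrared and red reflectance as (NIR - Red) / (NIR + Red), and the r = 0.84 figure is a Pearson correlation. A minimal sketch of both (reflectance inputs as fractions in [0, 1]):

```python
import math

def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Healthy, dense canopy pushes NDVI toward 1; stressed or sparse vegetation gives lower values, which is why severity and NDVI can be related by a simple linear model.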

Keywords: pineapple, diagnosis, virus, NDVI

Procedia PDF Downloads 794
1305 Effects of Cash Transfers Mitigation Impacts in the Face of Socioeconomic External Shocks: Evidence from Egypt

Authors: Basma Yassa

Abstract:

Evidence on the effectiveness of cash transfers in mitigating the impacts of macro and idiosyncratic shocks has been mixed and is mostly concentrated in Latin America, Sub-Saharan Africa, and South Asia, with very limited evidence from the MENA region. Yet conditional cash transfer schemes have been continually used, especially in Egypt, as the main social protection tool in response to the recent socioeconomic crises and macro shocks. We use two panel datasets and one cross-sectional dataset to estimate the effectiveness of cash transfers as a shock-mitigation mechanism in the Egyptian context. In this paper, the results from the different models (a panel fixed effects model and a regression discontinuity design (RDD) model) confirm that micro and macro shocks lead to significant declines in several household-level welfare outcomes, and that Takaful cash transfers have a significant positive impact in mitigating negative shock impacts, especially on households' debt incidence, debt levels, and asset ownership, but not necessarily on food and non-food expenditure levels. The results indicate large, significant effects, decreasing household debt incidence by up to 12.4 percent and lowering debt size by approximately 18 percent among Takaful beneficiaries compared to non-beneficiaries. Similar evidence is found for asset ownership, as the RDD model shows significant positive effects on total asset ownership and productive asset ownership, but the model failed to detect positive impacts on per capita food and non-food expenditures. Further extensions are in progress to compare these results with difference-in-differences estimates using the nationally representative ELMPS panel data (2018/2024 rounds). Finally, our initial analysis suggests that conditional cash transfers are effective in buffering negative shock impacts on certain welfare indicators even after the successive macroeconomic shocks of 2022 and 2023 in the Egyptian context.
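The sharp RDD estimator used above can be sketched as fitting a line on each side of the eligibility cutoff and taking the treatment effect as the jump between the two fitted lines at the cutoff. The Takaful proxy-means-test score and bandwidth selection are simplified away; this is illustrative only, not the authors' specification.

```python
def _ols(xs, ys):
    """Simple least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def rdd_effect(xs, ys, cutoff: float) -> float:
    """Sharp RDD: jump in the fitted outcome at the cutoff of the running variable."""
    below = [(x, y) for x, y in zip(xs, ys) if x < cutoff]
    above = [(x, y) for x, y in zip(xs, ys) if x >= cutoff]
    sb, ib = _ols([x for x, _ in below], [y for _, y in below])
    sa, ia = _ols([x for x, _ in above], [y for _, y in above])
    # Difference of the two lines' predictions evaluated at the cutoff
    return (sa * cutoff + ia) - (sb * cutoff + ib)
```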

Keywords: cash transfers, fixed effects, household welfare, household debt, micro shocks, regression discontinuity design

Procedia PDF Downloads 49
1304 Effect of Acid and Alkali Treatment on Physical and Surface Charge Properties of Clayey Soils

Authors: Nikhil John Kollannur, Dali Naidu Arnepalli

Abstract:

Most of the surface-related phenomena in fine-grained soils are attributed to their unique surface charge properties and specific surface area. The temporal variations in soil behavior can, to some extent, be credited to changes in these properties. Among the multitude of factors that affect the charge and surface area of clay minerals, the inherent system chemistry occupies the cardinal position, and the impact is most profound when the chemistry change is manifested in terms of the system pH. pH plays a significant role by modifying the edge charges of clay minerals and facilitating mineral dissolution. Hence, there is a need to address the variations in the physical and charge properties of fine-grained soils treated over a range of acidic as well as alkaline conditions. In the present study, three soils (two commercially procured and one natural) exhibiting distinct mineralogical compositions were subjected to different pH environments over a range of 2 to 13. The soil solutions, prepared at a definite liquid-to-solid ratio, were adjusted to the required pH value by adding measured quantities of 0.1 M HCl/0.1 M NaOH. The studies were conducted over interaction times varying from 1 to 96 hours. The treated soils were then analyzed for their physical properties in terms of specific surface area and particle size characteristics. Further, modifications in surface morphology were evaluated from scanning electron microscope (SEM) imaging, and changes in the surface charge properties were assessed in terms of zeta potential measurements. The studies show significant variations in total surface area, probably because of the dissolution of clay minerals; this observation is further substantiated by the morphological analysis with SEM imaging.
The zeta potential measurements on the soils indicate noticeable variation upon pH treatment, which is partially ascribed to modifications in the pH-dependent edge charges and partially to clay mineral dissolution. The results provide valuable insight into the role of pH in a clay-electrolyte system in surface-related phenomena such as species adsorption, fabric modification, etc.

Keywords: acid and alkali treatment, mineral dissolution, specific surface area, zeta potential

Procedia PDF Downloads 186
1303 Knowledge Creation and Diffusion Dynamics under Stable and Turbulent Environment for Organizational Performance Optimization

Authors: Jessica Gu, Yu Chen

Abstract:

Knowledge Management (KM) is undoubtedly crucial to organizational value creation, learning, and adaptation. Although the rapidly growing KM domain has been fueled with full-fledged methodologies and technologies, studies on KM evolution that bridge organizational performance and adaptation to the organizational environment are still rarely attempted. In particular, the creation (or generation) and diffusion (or sharing/exchange) of knowledge are the organization's primary concerns from a problem-solving perspective; however, the optimal distribution of knowledge creation and diffusion efforts is still unknown to knowledge workers. This research proposes an agent-based model of knowledge creation and diffusion in an organization, aiming to elucidate how intertwining knowledge flows at the microscopic level lead to optimized organizational performance at the macroscopic level through evolution, and to explore which exogenous interventions by the policy maker and endogenous adjustments by the knowledge workers can better cope with different environmental conditions. With the developed model, a series of simulation experiments is conducted. Both long-term steady-state and time-dependent developmental results are obtained on organizational performance, network and structure, social interaction and learning among individuals, knowledge audit and stocktaking, and the likelihood of knowledge workers choosing creation versus diffusion. One of the interesting findings reveals a non-monotonic effect on organizational performance under a turbulent environment but a monotonic effect under a stable environment. Hence, whether the environment is turbulent or stable, the most suitable exogenous KM policy and endogenous knowledge creation and diffusion choice adjustments can be identified for achieving optimized organizational performance.
Additional influential variables are further discussed, and future work directions are elaborated. The proposed agent-based model generates evidence on how knowledge workers strategically allocate effort between knowledge creation and diffusion, how bottom-up interactions among individuals lead to emergent structure and optimized performance, and how environmental conditions challenge the organizational system. Meanwhile, it serves as a roadmap and offers macro-level, long-term insights to policy makers without interrupting real organizational operations, incurring huge overhead costs, or introducing undesired panic among employees.
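A minimal toy version of such an agent-based model can be sketched as follows; the agent count, step count, update rules, and the creation probability are all assumptions for illustration, not the authors' model.

```python
import random

# Toy agent-based sketch: each knowledge worker either creates knowledge
# (slow, adds a fixed increment) or diffuses it (moves partway toward a
# random partner's larger stock).
random.seed(1)
N, STEPS, CREATE_PROB = 20, 100, 0.3
stock = [random.random() for _ in range(N)]   # initial knowledge stocks

for _ in range(STEPS):
    for i in range(N):
        if random.random() < CREATE_PROB:
            stock[i] += 0.05                  # knowledge creation
        else:
            j = random.randrange(N)           # random diffusion partner
            gap = stock[j] - stock[i]
            if gap > 0:
                stock[i] += 0.5 * gap         # diffusion closes half the gap

performance = sum(stock) / N                  # macroscopic performance proxy
print(round(performance, 3))
```

Sweeping CREATE_PROB in such a sketch is the simplest analogue of the paper's question of how workers should split effort between creation and diffusion.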

Keywords: knowledge creation, knowledge diffusion, agent-based modeling, organizational performance, decision making evolution

Procedia PDF Downloads 246
1302 Quercetin Nanoparticles and Their Hypoglycemic Effect in a CD1 Mouse Model with Type 2 Diabetes Induced by Streptozotocin and a High-Fat and High-Sugar Diet

Authors: Adriana Garcia-Gurrola, Carlos Adrian Peña Natividad, Ana Laura Martinez Martinez, Alberto Abraham Escobar Puentes, Estefania Ochoa Ruiz, Aracely Serrano Medina, Abraham Wall Medrano, Simon Yobanny Reyes Lopez

Abstract:

Type 2 diabetes mellitus (T2DM) is a metabolic disease characterized by elevated blood glucose levels. Quercetin is a natural flavonoid with a hypoglycemic effect, but reported data are inconsistent, mainly due to the structural instability and low solubility of quercetin. Nanoencapsulation is a distinct strategy to overcome these intrinsic limitations. Therefore, this work aims to develop a quercetin nano-formulation based on biopolymeric starch nanoparticles to enhance the release and hypoglycemic effect of quercetin in a T2DM-induced mouse model. Starch-quercetin nanoparticles were synthesized using high-intensity ultrasonication, and their structural and colloidal properties were determined by FTIR and DLS. For the in vivo studies, CD1 male mice (n=25) were divided into five groups (n=5). T2DM was induced using a high-fat and high-sugar diet for 32 weeks plus a streptozotocin injection. Group 1 consisted of healthy mice fed a normal diet with water ad libitum; Group 2, diabetic mice treated with saline solution; Group 3, diabetic mice treated with glibenclamide; Group 4, diabetic mice treated with empty nanoparticles; and Group 5, diabetic mice treated with quercetin nanoparticles. The quercetin nanoparticles had a hydrodynamic size of 232 ± 88.45 nm, a PDI of 0.310 ± 0.04, and a zeta potential of -4 ± 0.85 mV; the encapsulation efficiency was 58 ± 3.33%. No significant differences (p > 0.05) were observed in biochemical parameters (lipids, insulin, and C-peptide). Groups 3 and 5 showed a similar hypoglycemic effect, but the quercetin nanoparticles showed a longer-lasting effect. Histopathological studies revealed that the T2DM groups showed degenerated and fatty liver tissue, whereas the group treated with quercetin nanoparticles showed liver tissue similar to that of the healthy group. These results demonstrate that quercetin nano-formulations based on starch nanoparticles are effective alternatives with hypoglycemic effects.

Keywords: quercetin, type 2 diabetes mellitus, in vivo study, nanoparticles

Procedia PDF Downloads 41
1301 Object-Scene: Deep Convolutional Representation for Scene Classification

Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang

Abstract:

Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) over low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, objects are scattered with different sizes, categories, layouts, and numbers, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while considering object-centric and scene-centric information. First, to exploit object-centric and scene-centric information, two CNNs trained on the ImageNet and Places datasets, respectively, are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the CNNs at multiple scales, we find that each CNN works better in a different scale range; a scale-wise CNN adaptation is reasonable since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different amounts of features are extracted. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which indicates that the representation can be applied to other visual recognition tasks.
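The first- and second-order accumulation at the heart of the Fisher Vector can be sketched in a few lines; the GMM parameters and the random stand-ins for CNN activations below are assumptions for illustration, not the paper's trained models.

```python
import numpy as np

# Minimal Fisher Vector encoding under a diagonal GMM with fixed parameters.
rng = np.random.default_rng(0)
D, K, N = 4, 3, 100                 # feature dim, GMM components, local descriptors
feats = rng.normal(size=(N, D))     # stand-ins for dense CNN activations
mu = rng.normal(size=(K, D))        # GMM means
sigma = np.ones((K, D))             # diagonal standard deviations
w = np.full(K, 1.0 / K)            # mixture weights

# soft assignments (posteriors) under the diagonal Gaussian components
log_p = -0.5 * (((feats[:, None, :] - mu) / sigma) ** 2).sum(-1) + np.log(w)
gamma = np.exp(log_p - log_p.max(1, keepdims=True))
gamma /= gamma.sum(1, keepdims=True)

# accumulate first- and second-order differences per component
diff = (feats[:, None, :] - mu) / sigma
fv1 = (gamma[..., None] * diff).sum(0) / (N * np.sqrt(w)[:, None])
fv2 = (gamma[..., None] * (diff ** 2 - 1)).sum(0) / (N * np.sqrt(2 * w)[:, None])
fv = np.concatenate([fv1.ravel(), fv2.ravel()])
fv /= np.linalg.norm(fv)            # L2 normalization (per scale, in the paper)
print(fv.shape)                     # (2*K*D,) = (24,)
```

In the paper's pipeline one such vector would be computed per scale and the per-scale vectors normalized and pooled; here a single scale is shown.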

Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization

Procedia PDF Downloads 333
1300 Spatial Analysis of the Socio-Environmental Vulnerability in Medium-Sized Cities: Case Study of the Municipality of Caraguatatuba, SP, Brazil

Authors: Katia C. Bortoletto, Maria Isabel C. de Freitas, Rodrigo B. N. de Oliveira

Abstract:

Environmental vulnerability studies are essential to prioritize actions for disaster risk reduction. The aim of this study is to analyze the socio-environmental vulnerability obtained through a census survey, followed by both a statistical analysis (PCA in SPSS/IBM) and a spatial analysis in GIS (ArcGIS/ESRI), taking the Municipality of Caraguatatuba-SP, Brazil, as a case study. In the analysis of the municipal development plan, emphasis was given to the Special Zone of Social Interest (ZEIS), the Urban Expansion Zone (ZEU), and the Environmental Protection Zone (ZPA). For mapping the social and environmental vulnerabilities of the study area, the exposure of people (criticality) and of the place (support capacity) facing disaster risk was obtained from the 2010 Census of the Brazilian Institute of Geography and Statistics (IBGE). Considering criticality, the variables of greatest influence were related to literate persons responsible for the household, literate persons 5 or more years of age, persons 60 or more years of age, and the income of the person responsible for the household. In the support capacity analysis, the predominant influences were good household infrastructure in districts with low population density and, conversely, neighborhoods with little urban infrastructure and inadequate housing. The results of the comparative analysis show that the areas in the high and very high vulnerability classes cover the ZEIS and ZPA zones, which include areas occupied by low-income population, presence of children and young people, irregular occupations, and land suitable for urbanization but underutilized. The presence of urban expansion zones (ZEU) in areas of high to very high socio-environmental vulnerability reflects inadequate use of urban land relative to the spatial distribution of the population and the territorial infrastructure, which increases disaster risk.
It can be concluded that the study revealed the convergence between the vulnerability analysis and the areas classified in the urban zoning. The occupation of areas unsuitable for housing due to their risk characteristics was confirmed, leading to the conclusion that the methodologies applied are agile instruments to support disaster risk reduction actions.
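The PCA step used to condense census indicators into a vulnerability score can be sketched as follows; the indicator names, tract count, and toy data are hypothetical, not the IBGE census variables.

```python
import numpy as np

# Minimal PCA sketch: reduce several census indicators to a single
# first-component "vulnerability" score per census tract.
rng = np.random.default_rng(0)
# rows = census tracts; columns = e.g. literacy, income, age 60+, density (toy)
X = rng.normal(size=(200, 4))
Xc = X - X.mean(0)                              # center each indicator
cov = Xc.T @ Xc / (len(X) - 1)                  # covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)          # eigvals in ascending order
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()      # variance explained per component
index = Xc @ eigvecs[:, order[0]]               # first-component score per tract
print(explained.round(2), index.shape)
```

In the study's workflow these per-tract scores would then be joined to the tract polygons in GIS and classified into vulnerability classes.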

Keywords: socio-environmental vulnerability, urban zoning, reduction disasters risk, methodologies

Procedia PDF Downloads 299
1299 Influence of Smoking on Fine and Ultrafine Air Pollution PM in Their Pulmonary Genetic and Epigenetic Toxicity

Authors: Y. Landkocz, C. Lepers, P. J. Martin, B. Fougère, F. Roy Saint-Georges, A. Verdin, F. Cazier, F. Ledoux, D. Courcot, F. Sichel, P. Gosset, P. Shirali, S. Billet

Abstract:

In 2013, the International Agency for Research on Cancer (IARC) classified air pollution and fine particles as carcinogenic to humans. Causal relationships exist between elevated ambient levels of airborne particles and increased mortality and morbidity, including pulmonary diseases like lung cancer. However, due to the double complexity of the physicochemical properties of Particulate Matter (PM) and of tumor mechanistic processes, the mechanisms of action remain not fully elucidated. Furthermore, because air pollution PM and tobacco smoke share several properties, such as the route of exposure and chemical composition, potential mechanisms of synergy could exist; smoking could therefore aggravate particle toxicity. In order to identify mechanisms of action of particles according to their size, two PM samples were collected in the urban-industrial area of Dunkerque: PM0.03-2.5 and PM0.33-2.5. The overall cytotoxicity of the fine particles was determined on human bronchial cells (BEAS-2B). The toxicological study then focused on the metabolic activation of the organic compounds coated onto PM and on the genetic and epigenetic changes induced in a co-culture model of BEAS-2B cells and alveolar macrophages isolated from bronchoalveolar lavages performed on smokers and non-smokers. The results showed (i) the contribution of the ultrafine fraction of atmospheric particles to genotoxic (e.g., DNA double-strand breaks) and epigenetic mechanisms (e.g., promoter methylation) involved in tumor processes, and (ii) the influence of smoking on the cellular response. Three main conclusions can be drawn. First, our results showed the ability of the particles to induce deleterious effects potentially involved in the initiation and promotion stages of carcinogenesis. Second, smoking affects the nature of the induced genotoxic effects.
Finally, the in vitro cell model developed here, using bronchial epithelial cells and alveolar macrophages, takes into account quite realistically some of the cell interactions existing in the lung.

Keywords: air pollution, fine and ultrafine particles, genotoxic and epigenetic alterations, smoking

Procedia PDF Downloads 349
1298 Effect of Non-metallic Inclusion from the Continuous Casting Process on the Multi-Stage Forging Process and the Tensile Strength of the Bolt: Case Study

Authors: Tomasz Dubiel, Tadeusz Balawender, Miroslaw Osetek

Abstract:

The paper presents the influence of non-metallic inclusions on the multi-stage forging process and the mechanical properties of a dodecagon socket bolt used in the automotive industry. The detected metallurgical defect was so large that it directly influenced the mechanical properties of the bolt and resulted in failure to meet the requirements of the mechanical property class. In order to assess the defect, X-ray and metallographic examinations of the defective bolt were performed, revealing an exogenous non-metallic inclusion. On the cross-section, the defect measured 0.531 mm in width and 1.523 mm in length, and it was continuous along the entire axis of the bolt. In the analysis, an FEM simulation of the multi-stage forging process was designed, taking into account a non-metallic inclusion parallel to the sample axis, reflecting the studied case. The propagation of the defect due to material upset in the head area was analyzed, with particular attention to the final forging stage that shapes the dodecagonal socket and fills the flange area. The defect was observed to significantly reduce the effective cross-section as it expanded perpendicular to the bolt axis. The mechanical properties of products with and without the defect were then analyzed. First, a hardness test confirmed that both bolt types reached the value required for mechanical property class 8.8. Second, the bolts were subjected to a static tensile test. The bolts without the defect gave a positive result, while all 10 bolts with the defect gave a negative result, achieving a tensile strength below the requirements. The tensile strength tests were confirmed by metallographic examination and by the FEM simulation with the inclusion spread perpendicular to the axis in the head area. The bolts failed directly under the bolt head, which is inconsistent with the requirements of ISO 898-1.
It has been shown that non-metallic inclusions oriented along the bolt axis can directly cause loss of functionality, and such defects should be detected even before assembly into the machine element.

Keywords: continuous casting, multi-stage forging, non-metallic inclusion, upset bolt head

Procedia PDF Downloads 158
1297 The Effect of Cooperative Learning on Academic Achievement of Grade Nine Students in Mathematics: The Case of Mettu Secondary and Preparatory School

Authors: Diriba Gemechu, Lamessa Abebe

Abstract:

The aim of this study was to examine the effect of the cooperative learning method on students’ academic achievement and achievement level, compared with the usual method of teaching different topics of mathematics. The study also examines students’ perceptions of cooperative learning. Cooperative learning is an instructional strategy in which pairs or small groups of students with different ability levels work together to accomplish a shared goal; the aim is for students to maximize their own and each other’s learning, with members striving for joint benefit. The teacher’s role changes from ‘sage on the stage’ to ‘guide on the side’. Owing to these strengths, cooperative learning is among the most prevalent teaching-learning techniques worldwide. The study was therefore conducted to examine the effect of cooperative learning on the academic achievement of grade 9 students in mathematics in the case of Mettu secondary school. Two sample sections were randomly selected, one serving as the experimental group and the other as the comparison group. The data-gathering instruments were achievement tests and questionnaires. A treatment based on the STAD method of cooperative learning was provided to the experimental group, while the usual method was used in the comparison group; the experiment lasted one semester. To determine the effect of cooperative learning on students’ academic achievement, the significance of the difference between the scores of the groups was tested at the 0.05 level by applying a t-test, and the effect size was calculated to gauge the strength of the treatment. Students’ perceptions of the method were analyzed using percentages from the questionnaires. During data analysis, each group was divided into high and low achievers on the basis of their previous mathematics results. Data analysis revealed that the experimental and comparison groups were almost equal in mathematics at the beginning of the experiment.
The experimental group significantly outscored the comparison group on the posttest. Additionally, a comparison of the mean posttest scores of high achievers indicates a significant difference between the two groups, and the same holds for the low achievers of both groups. Hence, the results indicate the effectiveness of the method for mathematics topics compared with the usual method of teaching.
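The two-sample t-test and effect size mentioned above can be computed directly; the score vectors below are made up for illustration, not the study's posttest data.

```python
import numpy as np

# Pooled-variance two-sample t statistic and Cohen's d on toy posttest scores.
exp = np.array([72, 80, 68, 75, 85, 78, 82, 74, 79, 81], float)   # cooperative
comp = np.array([65, 70, 62, 68, 73, 66, 71, 64, 69, 67], float)  # usual method

n1, n2 = len(exp), len(comp)
s1, s2 = exp.var(ddof=1), comp.var(ddof=1)
sp = np.sqrt(((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2))  # pooled SD
t = (exp.mean() - comp.mean()) / (sp * np.sqrt(1 / n1 + 1 / n2))
d = (exp.mean() - comp.mean()) / sp                            # Cohen's d

print(round(t, 2), round(d, 2))
```

The t statistic is compared against the critical value at the 0.05 level (df = n1 + n2 - 2), and Cohen's d conveys the strength of the treatment independently of sample size.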

Keywords: academic achievement, comparison group, cooperative learning, experimental group

Procedia PDF Downloads 249
1296 Opportunities for Reducing Post-Harvest Losses of Cactus Pear (Opuntia Ficus-Indica) to Improve Small-Holder Farmers Income in Eastern Tigray, Northern Ethiopia: Value Chain Approach

Authors: Meron Zenaselase Rata, Euridice Leyequien Abarca

Abstract:

The production of major crops in Northern Ethiopia, especially the Tigray Region, is at subsistence level due to drought, erratic rainfall, and poor soil fertility. Since the cactus pear is a drought-resistant plant, it is considered a lifesaver fruit and a poverty-reduction strategy in the drought-affected areas of the region. Despite its contribution to household income and food security in the area, the cactus pear sub-sector experiences many constraints, and limited attention has been given to its post-harvest loss management. Therefore, this research was carried out to identify opportunities for reducing post-harvest losses and to recommend possible strategies to reduce them, thereby improving production and smallholders’ income. Both probability and non-probability sampling techniques were employed to collect the data. The Ganta Afeshum district was selected from Eastern Tigray, and two peasant associations (Buket and Golea) were purposively selected from the district for their potential in cactus pear production. Simple random sampling was employed to survey 30 households from each of the two peasant associations, using a semi-structured questionnaire as the data collection tool. In addition, 2 collectors, 2 wholesalers, 1 processor, 3 retailers, and 2 consumers were interviewed; two focus group discussions were held with 14 key farmers using a semi-structured checklist; and key informant interviews were conducted with governmental and non-governmental organizations to gather further information about cactus pear production, post-harvest losses, the strategies used to reduce them, and suggestions to improve post-harvest management. SPSS version 20 was used to enter and analyze the quantitative data, whereas MS Word was used to transcribe the qualitative data. The data were presented using frequency and descriptive tables and graphs.
The data analysis also used a chain map, correlations, a stakeholder matrix, and gross margins; mean comparisons between variables were made with ANOVA and t-tests. The analysis shows that the present cactus pear value chain involves main actors and supporters, but there is inadequate information flow and informal market linkage among the actors. Farmers’ gross margin is higher when they sell to the processor than to collectors. The most significant post-harvest loss in the value chain occurs at the producer level, followed by wholesalers and retailers; the maximum and minimum volumes of post-harvest loss at the producer level are 4,212 and 240 kg per season. The post-harvest losses were caused by farmers’ limited skills in on-farm management and harvesting, low market prices, limited market information, the absence of producer organizations, poor post-harvest handling, the absence of cold storage and collection centers, poor infrastructure, inadequate credit access, traditional transportation, the absence of quality control, illegal traders, inadequate research and extension services, and inappropriate packaging materials. Therefore, the recommendations include providing adequate practical training, forming producer organizations, and constructing collection centers.
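The gross-margin comparison between marketing channels can be illustrated with toy numbers; the prices and costs below are hypothetical, not the study's figures.

```python
# Gross margin = (selling price - cost) / selling price, per channel.
# Prices/costs are made-up illustrative values (per kg of cactus pear).
channels = {"processor": {"price": 8.0, "cost": 3.0},
            "collector": {"price": 5.0, "cost": 3.0}}

for name, c in channels.items():
    margin = (c["price"] - c["cost"]) / c["price"] * 100
    print(f"{name}: {margin:.1f}% gross margin")
```

At equal costs, the channel paying the higher farm-gate price yields the higher margin, which is the pattern the study reports for sales to the processor.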

Keywords: cactus pear, post-harvest losses, profit margin, value-chain

Procedia PDF Downloads 138
1295 Improving the Dielectric Strength of Transformer Oil for High Health Index: An FEM Based Approach Using Nanofluids

Authors: Fatima Khurshid, Noor Ul Ain, Syed Abdul Rehman Kashif, Zainab Riaz, Abdullah Usman Khan, Muhammad Imran

Abstract:

As the world moves towards extra-high voltage (EHV) and ultra-high voltage (UHV) power systems, the performance requirements of power transformers are becoming crucial to system reliability and security. With transformers being an essential component of a power system, a low transformer health index poses greater risks to safe and reliable operation. Therefore, to meet the rising demands on power system and transformer performance, researchers are prompted to provide solutions for enhanced thermal and electrical properties of transformers. This paper proposes an approach to improve the health index of a transformer by using nanotechnology in conjunction with biodegradable oils. Vegetable oils can serve as potential dielectric fluid alternatives to conventional mineral oils, owing to their numerous inherent benefits, namely higher fire and flash points and being environmentally friendly in nature. Moreover, the addition of nanoparticles to the dielectric fluid further improves the dielectric strength of the insulation medium. In this research, using the finite element method (FEM) in the COMSOL Multiphysics environment with a 2D geometry, three different oil samples were modelled, and the electric field distribution was computed for each sample at various electric potentials: 90 kV, 100 kV, 150 kV, and 200 kV. Furthermore, each sample was modified with the addition of nanoparticles of different radii (50 nm and 100 nm) and at different interparticle distances (5 mm and 10 mm), considering an instant of time. The nanoparticles used are non-conductive and were modelled as alumina (Al₂O₃). The geometry was modelled according to IEC standard 60897, with a standard electrode gap of 25 mm.
For an input supply voltage of 100 kV, the maximum electric field stresses obtained for the synthetic vegetable oil, olive oil, and mineral oil samples are 5.08×10⁶ V/m, 5.11×10⁶ V/m, and 5.62×10⁶ V/m, respectively. It is observed that, among the unmodified samples, vegetable oils have a greater dielectric strength than the conventionally used mineral oils because of their higher flash points and higher relative permittivity. Also, for the modified samples, the addition of nanoparticles inhibits streamer propagation inside the dielectric medium and hence improves the dielectric properties of the medium.
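As a rough sanity check on the FEM maxima quoted above, the uniform-field baseline for the 25 mm gap is E = V/d; the factor by which each FEM maximum exceeds it reflects the field non-uniformity of the electrode geometry. This is only a back-of-the-envelope comparison, not part of the authors' analysis.

```python
# Uniform-field baseline vs. the reported FEM maxima at 100 kV, 25 mm gap.
V = 100e3          # applied voltage, V
d = 25e-3          # electrode gap, m
E_uniform = V / d  # 4.0e6 V/m

for oil, E_max in [("vegetable", 5.08e6), ("olive", 5.11e6), ("mineral", 5.62e6)]:
    print(oil, round(E_max / E_uniform, 2))  # geometric field-enhancement factor
```

All three maxima sit 25-40% above the uniform-field estimate, consistent with a non-uniform gap rather than parallel plates.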

Keywords: dielectric strength, finite element method, health index, nanotechnology, streamer propagation

Procedia PDF Downloads 143
1294 Study of the Motion of Impurity Ions in Poly(Vinylidene Fluoride) from the Viewpoint of the Microstructure of the Polymer Solid

Authors: Yuichi Anada

Abstract:

The electrical properties of polymer solids are characterized by the dielectric relaxation phenomenon. The complex permittivity depends strongly on the frequency of external stimulation in the broad range from 0.1 mHz to 10 GHz, and the complex-permittivity dispersion gives a lot of useful information about the molecular motion of polymers and the structure of polymer aggregates. However, the large dispersion of permittivity at low frequencies due to the DC conduction of impurity ions often masks the dielectric relaxation in polymer solids. In experimental investigations, many researchers have long tried to remove the DC conduction experimentally or analytically. Our laboratory, by contrast, reverses this thinking: we use the impurity ions responsible for DC conduction as a probe to detect the motion of polymer molecules and to investigate the structure of polymer aggregates. In addition to the complex permittivity, the electric modulus and the conductivity relaxation time are strong tools for investigating ionic motion in DC conduction. In the non-crystalline part of melt-crystallized polymers, free spaces of inhomogeneous size exist between crystallites; as the impurity ions reside in the non-crystalline part and move through these inhomogeneous free spaces, their motion reflects the microstructure of the non-crystalline part. In this study, the motion of impurity ions in poly(vinylidene fluoride) (PVDF) is investigated. The frequency dependence of the loss permittivity of PVDF shows the characteristic of direct current (DC) conduction below 1 kHz at 435 K. The electric modulus-frequency curve shows a dispersion with a single conductivity relaxation time, i.e., a Debye-type dispersion; the conductivity relaxation time obtained from this curve is 3×10⁻⁵ s at 435 K.
From a plot of the conductivity relaxation time of PVDF and other polymers against permittivity, it was found that the polymers fall into two groups: one characterized by a small conductivity relaxation time and large permittivity, and the other by a large conductivity relaxation time and small permittivity.
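The electric-modulus analysis described above rests on M* = 1/ε*: for a medium with DC conduction, M'' shows a Debye-type peak at f = 1/(2πτ), with conductivity relaxation time τ = ε₀ε_s/σ. The sketch below uses the τ value quoted in the text but an assumed static permittivity, so the implied conductivity is illustrative only.

```python
import numpy as np

# Debye-type electric-modulus peak produced by pure DC conduction.
eps0 = 8.854e-12             # vacuum permittivity, F/m
eps_s = 10.0                 # assumed static relative permittivity
tau = 3e-5                   # conductivity relaxation time from the text, s
sigma = eps0 * eps_s / tau   # implied DC conductivity, S/m

f = np.logspace(0, 8, 1000)
w = 2 * np.pi * f
eps_complex = eps_s - 1j * sigma / (w * eps0)   # permittivity with DC conduction
M = 1.0 / eps_complex                            # electric modulus M* = 1/eps*
f_peak = f[np.argmax(M.imag)]                    # frequency of the M'' loss peak
print(f"{f_peak:.3g} Hz vs 1/(2*pi*tau) = {1/(2*np.pi*tau):.3g} Hz")
```

The permittivity itself diverges at low frequency (the masking effect the abstract describes), while the modulus turns the same DC conduction into a well-defined peak from which τ is read off.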

Keywords: conductivity relaxation time, electric modulus, ionic motion, permittivity, poly(vinylidene fluoride), DC conduction

Procedia PDF Downloads 173
1293 Time- and Energy-Saving Kitchen Layout

Authors: Poonam Magu, Kumud Khanna, Premavathy Seetharaman

Abstract:

The two important resources of any worker performing any type of work at any workplace are time and energy. These are important inputs of the worker and need to be utilised in the best possible manner. The kitchen is an important workplace where the homemaker performs many essential activities. Its layout should be so designed that optimum use of her resources can be achieved.Ideally, the shape of the kitchen, as determined by the physical space enclosed by the four walls, can be square, rectangular or irregular. But it is the shape of the arrangement of counter that one normally refers to while talking of the layout of the kitchen. The arrangement can be along a single wall, along two opposite walls, L shape, U shape or even island. A study was conducted in 50 kitchens belonging to middle income group families. These were DDA built kitchens located in North, South, East and West Delhi.The study was conducted in three phases. In the first phase, 510 non working homemakers were interviewed. The data related to personal characteristics of the homemakers was collected. Additional information was also collected regarding the kitchens-the size, shape , etc. The homemakers were also questioned about various aspects related to meal preparation-people performing the task, number of items cooked, areas used for meal preparation , etc. In the second phase, a suitable technique was designed for conducting time and motion study in the kitchen while the meal was being prepared. This technique was called Path Process Chart. The final phase was carried out in 50 kitchens. The criterion for selection was that all items for a meal should be cooked at the same time. All the meals were cooked by the homemakers in their own kitchens. The meal preparation was studied using the Path Process Chart technique. The data collected was analysed and conclusions drawn. 
It was found that, of all the shapes, it was the kitchen with the L-shaped arrangement in which, on average, a homemaker spent the minimum time on meal preparation and also travelled the minimum distance. The average distance travelled in an L-shaped layout was 131.1 m, compared to 181.2 m in a U-shaped layout; similarly, the average time spent on meal preparation was 48 minutes in the L-shaped layout compared to 53 minutes in the U-shaped layout. Thus, the L-shaped layout saved more time and energy than the U-shaped one.

Keywords: kitchen layout, meal preparation, path process chart technique, workplace

Procedia PDF Downloads 207
1292 Assessment of Incomplete Childhood Immunization Determinants in Ethiopia: A Nationwide Multilevel Study

Authors: Mastewal Endeshaw Getnet

Abstract:

Immunization is one of the most cost-effective and extensively adopted public health strategies for preventing child disability and mortality. The Expanded Program on Immunization (EPI) was launched in 1974 with the goal of providing life-saving vaccines to all children, building on the success of the global smallpox eradication program. According to a World Health Organization report, all countries should have achieved 90% vaccination coverage by 2020, yet many developing countries have still not reached this goal. Ethiopia, one of Africa's developing countries, launched the EPI program through its Ministry of Health (MoH) in 1980, with the goal of achieving 90% coverage among children under the age of 1 year by 1990. Among children aged 12-23 months, complete immunization coverage was 47% based on the Ethiopian Demographic and Health Survey (EDHS) 2019 report. Coverage varies by administrative region, ranging from 21% in the Afar region to 89% in the Amhara region. Identifying risk factors for incomplete immunization among children is therefore a key challenge, particularly in Ethiopia, which has large geographical diversity and a projected population of 119.96 million in 2022. Despite being a critical issue, it has not yet been fully investigated. A few previous studies have assessed the determinants of incomplete childhood immunization, but the majority were cross-sectional surveys that assessed only EPI coverage. Motivated by the above, this study focuses on investigating the determinants associated with incomplete immunization among Ethiopian children in order to help raise full immunization coverage. Moreover, we consider both individual immunization factors and service performance-related factors, and consequently adopt an ecological model in this study. 
The ecological model combines individual and environmental factors, providing a multilevel framework for exploring the different determinants associated with health behaviors. The Ethiopian Demographic and Health Survey of 2021 will be used as the source of data to achieve the objective of this study. The findings of this study will be useful to the Ethiopian government and other public health institutes in improving childhood immunization coverage based on the identified risk determinants.

Keywords: incomplete immunization, children, ethiopia, ecological model

Procedia PDF Downloads 45
1291 Evaluating the Characteristics of Paediatric Accidental Poisonings

Authors: Grace Fangmin Tan, Elaine Yiling Tay, Elizabeth Huiwen Tham, Andrea Wei Ching Yeo

Abstract:

Background: While accidental poisonings in children may seem unavoidable, knowledge of the circumstances surrounding such incidents and identification of risk factors are important in the development of secondary prevention strategies. Some risk factors include the age of the child, lack of adequate supervision and improper storage of substances. The aim of this study is to assess the risk factors and circumstances influencing outcomes in these children. Methodology: A retrospective medical record review of all accidental poisoning cases presenting to the Children’s Emergency at National University Hospital (NUH), Singapore between January 2014 and December 2015 was conducted. Information on demographics, poisoning circumstances and clinical outcomes was collected. Results: Ninety-nine of a total of 186 poisoning cases were accidental ingestions, with a mean age of 4.7 years (range 0.4 to 18.3 years). The gender distribution was nearly equal, with 52 (52.5%) females and 47 (47.5%) males. In 79 cases (79.8%) the substance was self-administered by the child, and in 20 cases (20.2%) it was administered erroneously by caregivers, 12/20 (60.0%) of whom gave the wrong drug dose while 8/20 (40.0%) gave the wrong substance. Self-administration was associated with presentation to the ED within 12 hours (p=0.027, OR 6.65, 95% CI 1.24-35.72). Notably, 94.9% of the cases involved substances kept within reach of the child. Sixty-nine (82.1%) had the substance kept in the original container, 3 (3.6%) in food containers, 8 (9.5%) in other containers and 4 (4.8%) without a container. Of the 50 cases with information on labelling, 40/50 (80.0%) were accurately labelled, 2/50 (4.0%) were wrongly labelled, and 8/50 (16.0%) were unlabelled. Implicated substances included personal care products (11.1%), household cleaning products (3.0%), and different classes of drugs such as paracetamol (22.2%), antihistamines (17.2%) and sympathomimetics (8.1%). 
Children < 3 years of age were 4.8 times more likely to be poisoned by household substances than children > 3 years of age (p=0.009, 95% CI 1.48-15.77). Prehospital interventions were more likely to have been performed in poisonings with household substances (p=0.005, OR 6.12, 95% CI 1.73-21.68). Fifty-nine (59.6%) were asymptomatic, 34 (34.3%) had a Poisoning Severity Score (PSS) grade of 1 (minor) and 6 (6.1%) a grade of 2 (moderate). Older children were 9.3 times more likely to be symptomatic (p<0.001, 95% CI 3.15-27.25). Thirty (32%) required admission. Conclusion: A significant proportion of accidental poisoning cases were due to medication administration errors by caregivers, which should be preventable. Risk factors for accidental poisoning included lack of adequate caregiver supervision, improper labelling and the young age of the child. There is an urgent need to improve caregiver counselling during medication dispensing as well as to educate caregivers on basic child safety measures in the home to prevent future accidental poisonings.
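The odds ratios and 95% confidence intervals quoted above follow the standard 2x2-table calculation (Woolf's log-odds method). As an illustrative sketch only, the following Python snippet uses hypothetical counts chosen to reproduce an OR of 4.8; they are not the study's raw data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf method) for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (not taken from the study):
# rows: household-substance poisoning yes/no; columns: age < 3 / age > 3
or_, lo, hi = odds_ratio_ci(18, 12, 10, 32)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

The confidence interval widens rapidly as any cell count shrinks, which is why the study's reported intervals (e.g. 1.24-35.72) are so broad despite significant p-values.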

Keywords: accidental, caregiver, paediatrics, poisoning

Procedia PDF Downloads 216
1290 Beyond the “Breakdown” of Karman Vortex Street

Authors: Ajith Kumar S., Sankaran Namboothiri, Sankrish J., SarathKumar S., S. Anil Lal

Abstract:

A numerical analysis of flow over a heated circular cylinder is presented in this paper. The governing equations, the Navier-Stokes and energy equations within the Boussinesq approximation along with the continuity equation, are solved using a hybrid FEM-FVM technique. The density gradient created by heating the cylinder induces a buoyancy force opposite to the direction of the acceleration due to gravity, g. In the present work, the flow direction and the direction of the buoyancy force are the same (vertical flow configuration), so that the buoyancy force accelerates the mean flow past the cylinder. The relative dominance of the buoyancy force over the inertia force is characterized by the Richardson number (Ri), which is one of the parameters governing the flow dynamics and heat transfer in this analysis. It is well known that above a certain value of the Reynolds number, Re (the ratio of inertia forces to viscous forces), unsteady von Karman vortices can be seen shedding behind the cylinder. The shedding wake patterns can be seriously altered by heating or cooling the cylinder. The non-dimensional shedding frequency, called the Strouhal number, is found to increase as Ri increases, and the aerodynamic force coefficients CL and CD are observed to change their values. In the present vertical configuration of flow over the cylinder, as Ri increases, the shedding frequency increases and then suddenly drops to zero at a critical value of the Richardson number. Beyond this critical Richardson number, the unsteady vortices turn into steady standing recirculation bubbles behind the cylinder. This phenomenon is well known in the literature as the "breakdown of the Karman vortex street". It is interesting to see the flow structures on a further increase in the Richardson number. On further heating of the cylinder surface, the size of the recirculation bubble decreases without losing its symmetry about the horizontal axis passing through the center of the cylinder. 
The separation angle is found to decrease with Ri. Finally, we observed a second critical Richardson number, after which the flow remains attached to the cylinder surface without any wake behind it. The flow structures are then symmetrical not only about the horizontal axis but also about the vertical axis passing through the center of the cylinder. At this stage, there is a "single plume" emanating from the rear stagnation point of the cylinder. We also observed that the transition of the plume is a strong function of the Richardson number.
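For reference, the dimensionless groups named above take their standard textbook forms (symbols here: D = cylinder diameter, U = free-stream velocity, f = shedding frequency, ν = kinematic viscosity, β = thermal expansion coefficient, ΔT = cylinder-to-stream temperature difference; the specific values used in the study are not restated here):

```latex
\mathrm{Re} = \frac{U D}{\nu}, \qquad
\mathrm{Ri} = \frac{\mathrm{Gr}}{\mathrm{Re}^{2}} = \frac{g \beta \,\Delta T\, D}{U^{2}}, \qquad
\mathrm{St} = \frac{f D}{U}
```

With this normalization, Ri = 0 recovers pure forced convection, and increasing Ri at fixed Re strengthens the buoyancy-driven acceleration of the near wake described in the abstract.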

Keywords: drag reduction, flow over circular cylinder, flow control, mixed convection flow, vortex shedding, vortex breakdown

Procedia PDF Downloads 405
1289 Early Age Behavior of Wind Turbine Gravity Foundations

Authors: Janet Modu, Jean-Francois Georgin, Laurent Briancon, Eric Antoinet

Abstract:

The current practice during the repowering phase of wind turbines is the deconstruction of existing foundations and the construction of new foundations to accept larger wind loads, or once the foundations have reached the end of their service lives. The ongoing research project FUI25 FEDRE (Fondations d’Eoliennes Durables et REpowering) therefore serves to propose scalable wind turbine foundation designs that allow reuse of the existing foundations. To undertake this research, numerical models and laboratory-scale models are currently being utilized and implemented in the GEOMAS laboratory at INSA Lyon, following instrumentation of a reference wind turbine situated in the northern part of France. Sensors placed within both the foundation and the underlying soil monitor the evolution of stresses from the foundation’s early age through its service life. The results from the instrumentation form the basis of validation for both the laboratory and numerical works conducted throughout the project. The study currently focuses on the effect of the coupled mechanisms (Thermal-Hydro-Mechanical-Chemical) that induce stress during the early age of the reinforced concrete foundation, and on scale factor considerations in the replication of the reference wind turbine foundation at laboratory scale. Using THMC 3D models in COMSOL Multiphysics software, the numerical analysis performed on both the laboratory-scale and full-scale foundations simulates thermal deformation, hydration, shrinkage (desiccation and autogenous) and creep so as to predict the initial damage caused by internal processes during concrete setting and hardening. Results show a prominent effect of early age properties on the damage potential in full-scale wind turbine foundations. However, a prediction of the damage potential at laboratory scale shows significant differences in early age stresses in comparison to the full-scale model, depending on the spatial position in the foundation. 
In addition to the well-known size effect phenomenon, these differences may contribute to inaccuracies encountered when predicting ultimate deformations of the on-site foundation using laboratory scale models.

Keywords: cement hydration, early age behavior, reinforced concrete, shrinkage, THMC 3D models, wind turbines

Procedia PDF Downloads 178
1288 Assessing the Influence of Station Density on Geostatistical Prediction of Groundwater Levels in a Semi-arid Watershed of Karnataka

Authors: Sakshi Dhumale, Madhushree C., Amba Shetty

Abstract:

The effect of station density on the geostatistical prediction of groundwater levels is of critical importance to ensure accurate and reliable predictions. Monitoring station density directly impacts the accuracy and reliability of geostatistical predictions by influencing the model's ability to capture localized variations and small-scale features in groundwater levels. This is particularly crucial in regions with complex hydrogeological conditions and significant spatial heterogeneity. Insufficient station density can result in larger prediction uncertainties, as the model may struggle to adequately represent the spatial variability and correlation patterns of the data. On the other hand, an optimal distribution of monitoring stations enables effective coverage of the study area and captures the spatial variability of groundwater levels more comprehensively. In this study, we investigate the effect of station density on the predictive performance of groundwater levels using the geostatistical technique of Ordinary Kriging. The research utilizes groundwater level data collected from 121 observation wells within the semi-arid Berambadi watershed, gathered over a six-year period (2010-2015) from the Indian Institute of Science (IISc), Bengaluru. The dataset is partitioned into seven subsets representing varying sampling densities, ranging from 15% (12 wells) to 100% (121 wells) of the total well network. The results obtained from different monitoring networks are compared against the existing groundwater monitoring network established by the Central Ground Water Board (CGWB). The findings of this study demonstrate that higher station densities significantly enhance the accuracy of geostatistical predictions for groundwater levels. The increased number of monitoring stations enables improved interpolation accuracy and captures finer-scale variations in groundwater levels. 
These results shed light on the relationship between station density and the geostatistical prediction of groundwater levels, emphasizing the importance of appropriate station densities to ensure accurate and reliable predictions. The insights gained from this study have practical implications for designing and optimizing monitoring networks, facilitating effective groundwater level assessments, and enabling sustainable management of groundwater resources.
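The Ordinary Kriging procedure described above can be sketched in a few lines. The following is a minimal, self-contained illustration with a synthetic 12-well subset (mirroring the study's 15% density case) and an assumed exponential variogram; the sill, range and nugget values are placeholders, not parameters fitted to the Berambadi data:

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=500.0, nugget=0.0):
    """Exponential variogram model (parameter values are illustrative)."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

def ordinary_kriging(xy, z, xy0, **vparams):
    """Predict z at location xy0 from observations (xy, z) via Ordinary Kriging."""
    n = len(z)
    # Pairwise distances between observation wells, then variogram values
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    gamma = exp_variogram(d, **vparams)
    # Bordered kriging system: last row/column enforce sum(weights) = 1
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=1), **vparams)
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)

# Synthetic wells (coordinates in m, water level in m) with a gentle trend:
gen = np.random.default_rng(0)
xy = gen.uniform(0, 1000, size=(12, 2))      # a 12-well "15% density" subset
z = 10 + 0.005 * xy[:, 0] + gen.normal(0, 0.2, 12)
print(ordinary_kriging(xy, z, np.array([500.0, 500.0]), sill=1.0, rng=500.0))
```

With a zero nugget the predictor is exact at the observation wells; as the subset density drops, target locations fall farther from any well, the right-hand-side variogram values approach the sill, and the kriging variance grows, which is the mechanism behind the accuracy loss reported above.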

Keywords: station density, geostatistical prediction, groundwater levels, monitoring networks, interpolation accuracy, spatial variability

Procedia PDF Downloads 66
1287 Characterization of Petrophysical Properties of Reservoirs in Bima Formation, Northeastern Nigeria: Implication for Hydrocarbon Exploration

Authors: Gabriel Efomeh Omolaiye, Jimoh Ajadi, Olatunji Seminu, Yusuf Ayoola Jimoh, Ubulom Daniel

Abstract:

Identification and characterization of the petrophysical properties of reservoirs in the Bima Formation were undertaken to understand their spatial distribution and impact on hydrocarbon saturation in this highly heterolithic siliciclastic sequence. The study was carried out using nine well logs from the Maiduguri and Baga/Lake sub-basins within the Borno Basin. The different log curves were combined to decipher the lithological heterogeneity of the serrated sand facies and to aid the geologic correlation of sand bodies within the sub-basins. Evaluation of the formation reveals largely undifferentiated to highly serrated and lenticular sand bodies, from which twelve reservoirs, named Bima Sand-1 to Bima Sand-12, were identified. The reservoir sand bodies are bifurcated by shale beds, which reduced their thicknesses variably from 0.61 to 6.1 m. The shale content in the sand bodies ranged from 11.00% (relatively clean) to as high as 88.00%. The formation also has variable porosity, with calculated total porosity ranging from as low as 10.00% to as high as 35.00%. Similarly, effective porosity values spanned 2.00% to 24.00%. The irregular porosity values also account for the wide range of field average permeability estimates computed for the formation, from 0.03 to 319.49 mD. Hydrocarbon saturation (Sh) in the thin lenticular sand bodies varied from 40.00% to 78.00%. Hydrocarbon was encountered in three intervals in Ga-1, four intervals in Da-1, two intervals in Ar-1, and one interval in Ye-1. The Ga-1 well encountered a 30.78 m hydrocarbon column in 14 thin sand lobes in Bima Sand-1, with thicknesses from 0.60 m to 5.80 m and an average saturation of 51.00%, while Bima Sand-2 intercepted a 45.11 m hydrocarbon column in 12 thin sand lobes with an average saturation of 61.00%, and Bima Sand-9 has a 6.30 m column in 4 thin sand lobes. 
Da-1 has hydrocarbon in Bima Sand-8 (5.30 m, Sh of 58.00% in 5 sand lobes), Bima Sand-10 (13.50 m, Sh of 52.00% in 6 sand lobes), Bima Sand-11 (6.20 m, Sh of 58.00% in 2 sand lobes) and Bima Sand-12 (16.50 m, Sh of 66.00% in 6 sand lobes). In the Ar-1 well, hydrocarbon occurs in Bima Sand-3 (2.40 m column, Sh of 48.00% in one sand lobe) and Bima Sand-9 (6.00 m, Sh of 58.00% in one sand lobe). The Ye-1 well intersected only a 0.5 m hydrocarbon column in Bima Sand-1, with 78% saturation. Although the Bima Formation has variable hydrocarbon saturation, mainly gas, in the Maiduguri and Baga/Lake sub-basins of the research area, its highly thin, serrated sand beds, coupled with very low effective porosity and permeability in part, would pose a significant exploitation challenge. The sediments were deposited in a fluvio-lacustrine environment, resulting in very thinly laminated or serrated alternations of sand and shale lithofacies.
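The log-derived quantities discussed above (shale volume, effective porosity, hydrocarbon saturation) are conventionally estimated with relations such as the linear gamma-ray index and Archie's equation. The sketch below uses those textbook forms only; the log readings, clean/shale baselines and Archie parameters are hypothetical placeholders, not the authors' actual workflow or data:

```python
import numpy as np

def vshale_linear(gr, gr_clean=20.0, gr_shale=120.0):
    """Linear gamma-ray index as a first-pass shale volume estimate."""
    return np.clip((gr - gr_clean) / (gr_shale - gr_clean), 0.0, 1.0)

def effective_porosity(phi_total, vsh, phi_shale=0.10):
    """Shale-corrected (effective) porosity."""
    return np.clip(phi_total - vsh * phi_shale, 0.0, None)

def archie_sw(rt, phi, rw=0.05, a=1.0, m=2.0, n=2.0):
    """Archie water saturation; hydrocarbon saturation is Sh = 1 - Sw."""
    return np.clip((a * rw / (rt * phi**m)) ** (1.0 / n), 0.0, 1.0)

# Two hypothetical depth samples: a cleaner sand and a shaly one
gr = np.array([35.0, 95.0])        # gamma ray, API units
phi_t = np.array([0.30, 0.22])     # total porosity, fraction
rt = np.array([20.0, 4.0])         # deep resistivity, ohm.m
vsh = vshale_linear(gr)
phi_e = effective_porosity(phi_t, vsh)
sh = 1.0 - archie_sw(rt, phi_e)
print(vsh, phi_e, sh)
```

The shaly sample illustrates the exploitation challenge noted in the abstract: high Vsh collapses the effective porosity, and the computed hydrocarbon saturation drops sharply even at moderate resistivity.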

Keywords: Bima, Chad Basin, fluvio-lacustrine, lithofacies, serrated sand

Procedia PDF Downloads 174