Search results for: interval type-2
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 859

79 Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method

Authors: Azam Fazelpour, Saeed Reza Dehghani, Vlastimil Masek, Yuri S. Muzychka

Abstract:

The study uses a novel image analysis based on thermal imaging to detect temperature contours created on a salt ice surface during transient phenomena. Thermal cameras detect objects by using their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes; the temperature starts to increase from the boundary of the ice towards its center. Thermal cameras are able to report temperature changes on the ice surface at every individual moment. Various contours, which show different temperature areas, appear in the ice surface picture captured by a thermal camera. Identifying the exact boundary of these contours is valuable for ice surface temperature analysis. Image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature is increasing throughout the ice surface, and some are selected at a specific time interval for processing. An image segmentation method is applied to the images to determine the contour areas. Color thermal images are used to exploit the main information. The red, green and blue elements of the color images are investigated to find the best contour boundaries. Image enhancement and noise removal algorithms are applied to obtain high-contrast, clear images. A novel edge detection algorithm based on differences in the color of the pixels is established to determine contour boundaries. In this method, the edges of the contours are obtained according to the properties of the red, blue and green image elements. The color image elements are assessed according to their information content: useful elements proceed to processing, and useless elements are removed from the process to reduce computation time. Neighboring pixels with close intensities are assigned to one contour, and differences in intensity determine the boundaries. The results are then verified by experimental tests. An experimental setup is built using ice samples and a thermal camera. To observe the ice contours with the thermal camera, the samples, initially at -20 °C, are placed in contact with a warmer surface. Pictures are captured for 20 seconds, and the method is applied to five images captured at time intervals of 5 seconds. The study shows the green image element carries no useful information; therefore, the boundary detection method is applied to the red and blue image elements. In this case study, the results indicate that the proposed algorithm shows the boundaries more effectively than other edge detection methods such as Sobel and Canny. Comparison between the contour detection of this method and the temperature analysis, which indicates the real boundaries, shows good agreement. This color image edge detection method is applicable to other similar cases according to their image properties.
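A minimal sketch of the channel-wise boundary detection described above might look as follows; the neighbor-difference threshold, the smoothing kernel and the file names are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def contour_edges(image_path, diff_threshold=12):
    """Mark pixels whose red/blue intensity differs strongly from neighbors."""
    img = cv2.imread(image_path)               # BGR color thermal image
    img = cv2.GaussianBlur(img, (5, 5), 0)     # noise-removal step
    b, g, r = cv2.split(img)                   # green is discarded (no useful info)
    edges = np.zeros(b.shape, dtype=np.uint8)
    for channel in (r, b):                     # only useful channels proceed
        c = channel.astype(np.int16)
        dx = np.abs(np.diff(c, axis=1))        # difference with right neighbor
        dy = np.abs(np.diff(c, axis=0))        # difference with lower neighbor
        edges[:, :-1] |= (dx > diff_threshold).astype(np.uint8)
        edges[:-1, :] |= (dy > diff_threshold).astype(np.uint8)
    return edges * 255

# edges = contour_edges("ice_frame_05s.png")   # hypothetical frame name
# cv2.imwrite("contours.png", edges)
```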

Keywords: color image processing, edge detection, ice contour boundary, salt ice, thermal image

Procedia PDF Downloads 315
78 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all seven amino acids, a sensitive and simple method was developed using the gas chromatography headspace technique with flame ionization detection. During development, poor reproducibility, retention time variation and bad peak shape were observed for the acetic acid peak, due to the reaction of acetic acid with the stationary phase (cyanopropyl dimethyl polysiloxane) of the column and the dissociation of acetic acid in water (when used as diluent) while applying the temperature gradient. Therefore, dimethyl sulfoxide was used as diluent to avoid these issues, whereas most published methods for acetic acid quantification by GC-HS use a derivatisation technique to protect acetic acid. As per the compendia, a risk-based approach was selected to determine the degree and extent of the validation process needed to assure the fitness of the procedure; therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using a DB-WAXetr column manufactured by Agilent (internal diameter: 530 µm; film thickness: 2.0 µm; length: 30 m). Helium in constant make-up mode at a constant flow of 6.0 mL/min was selected as carrier gas. The present method is simple, rapid and accurate, and is suitable for rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol and 100 ppm to 400 ppm for acetic acid, which covers the specification limits given in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for testing residual solvents in amino acid drug substances.
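As an illustration of the total error (accuracy profile) check described above, the sketch below computes a 95% β-expectation tolerance interval for recoveries at one concentration level and tests it against the ±30% acceptance limits. The replicate values are hypothetical, and this one-level formula is a simplified stand-in for the full validation design.

```python
import numpy as np
from scipy import stats

def beta_expectation_interval(recoveries_pct, beta=0.95):
    """95% beta-expectation tolerance interval for % recovery (one level)."""
    x = np.asarray(recoveries_pct, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    k = stats.t.ppf(1 - (1 - beta) / 2, n - 1) * np.sqrt(1 + 1 / n)
    return mean - k * sd, mean + k * sd

# Hypothetical replicate recoveries (%) at one level:
low, high = beta_expectation_interval([96.1, 102.4, 98.8, 101.2, 97.5, 99.9])
accepted = 70.0 <= low and high <= 130.0   # +/-30% acceptance limits
print(f"tolerance interval: [{low:.1f}%, {high:.1f}%]  within limits: {accepted}")
```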

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 150
77 Comparison of Quality of Life One Year after Bariatric Intervention: Systematic Review of the Literature with Bayesian Network Meta-Analysis

Authors: Piotr Tylec, Alicja Dudek, Grzegorz Torbicz, Magdalena Mizera, Natalia Gajewska, Michael Su, Tanawat Vongsurbchart, Tomasz Stefura, Magdalena Pisarska, Mateusz Rubinkiewicz, Piotr Malczak, Piotr Major, Michal Pedziwiatr

Abstract:

Introduction: Quality of life after bariatric surgery is an important factor when evaluating the final result of the treatment. Considering the vast surgical options, we tried to globally compare the available methods in terms of quality of life following the surgery. The aim of the study was to compare quality of life one year after bariatric intervention using network meta-analysis methods. Material and Methods: We performed a systematic review according to the PRISMA guidelines with Bayesian network meta-analysis. Inclusion criteria were: studies comparing at least two methods of weight loss treatment, of which at least one was surgical, and assessment of quality of life one year after surgery with validated questionnaires. The primary outcome was quality of life one year after the bariatric procedure. The following aspects of quality of life were analyzed: physical, emotional, general health, vitality, role physical, social, mental, and bodily pain. All questionnaire scores were standardized and pooled to a single scale. Lifestyle intervention was taken as the reference point. Results: An initial reference search yielded 5636 articles; 18 studies were evaluated. In the comparison of total quality of life scores, we observed that laparoscopic sleeve gastrectomy (LSG) (median (M): 3.606, 97.5% credible interval (CrI): 1.039; 6.191), laparoscopic Roux-en-Y gastric bypass (LRYGB) (M: 4.973, CrI: 2.627; 7.317) and open Roux-en-Y gastric bypass (RYGB) (M: 9.735, CrI: 6.708; 12.760) had better results than the other bariatric interventions relative to lifestyle intervention. In the analysis of the physical aspects of quality of life, we noticed better results for LSG (M: 3.348, CrI: 0.548; 6.147) and the LRYGB procedure (M: 5.070, CrI: 2.896; 7.208) than for the control intervention, and the worst results for open RYGB (M: -9.212, CrI: -11.610; -6.844). Analyzing emotional aspects, we found better results than the control intervention for LSG, LRYGB, open RYGB, and laparoscopic gastric plication. In general health, better results were found for LSG (M: 9.144, CrI: 4.704; 13.470), LRYGB (M: 6.451, CrI: 10.240; 13.830) and single-anastomosis gastric bypass (M: 8.671, CrI: 1.986; 15.310), and the worst results for open RYGB (M: -4.048, CrI: -7.984; -0.305). In the social and vitality aspects of quality of life, better results were observed for LSG and LRYGB than for the control intervention. We did not find any differences between bariatric interventions in the role physical, mental, and bodily pain aspects of quality of life. Conclusion: The network meta-analysis revealed that the best total quality of life scores one year after bariatric intervention were obtained after LSG, LRYGB, and open RYGB. In the physical and general health aspects, the worst quality of life was found after the open RYGB procedure. The other interventions did not significantly affect quality of life after one year compared to dietary intervention.
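To make the reported summaries concrete, the sketch below shows how a median and a 97.5% credible interval are read off posterior draws of a pooled effect relative to the lifestyle-intervention reference. The draws here are simulated placeholders, not the study's posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws of pooled quality-of-life effects (single
# common scale) versus the lifestyle-intervention reference arm; in the real
# analysis these would come from the Bayesian NMA sampler.
posterior = {
    "LSG":   rng.normal(3.6, 1.3, 20_000),
    "LRYGB": rng.normal(5.0, 1.2, 20_000),
}

for arm, draws in posterior.items():
    median = np.median(draws)
    lo, hi = np.percentile(draws, [1.25, 98.75])   # central 97.5% CrI
    print(f"{arm}: M={median:.3f}, 97.5% CrI: {lo:.3f}; {hi:.3f}")
```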

Keywords: bariatric surgery, network meta-analysis, quality of life, one year follow-up

Procedia PDF Downloads 159
76 Improvement of Activity of β-galactosidase from Kluyveromyces lactis via Immobilization on Polyethylenimine-Chitosan

Authors: Carlos A. C. G. Neto, Natan C. G. e Silva, Thaís de O. Costa, Luciana R. B. Gonçalves, Maria V. P. Rocha

Abstract:

β-galactosidases (E.C. 3.2.1.23) are enzymes that have attracted attention for catalyzing the hydrolysis of lactose and for producing galacto-oligosaccharides by favoring transgalactosylation reactions. When immobilized, these enzymes can have some of their characteristics substantially improved, and coating the support with multifunctional polymers is a promising alternative to enhance the stability of the biocatalysts, among which polyethylenimine (PEI) stands out. PEI is a flexible polymer that adapts to the structure of the enzyme, giving greater stability, especially to multimeric enzymes such as β-galactosidases; besides that, it protects them from environmental variations. The use of a chitosan support coated with PEI could improve the catalytic efficiency of β-galactosidase from Kluyveromyces lactis in the transgalactosylation reaction for the production of prebiotics such as lactulose, since this strain is more effective in the hydrolysis reaction. In this context, the aim of the present work was first to develop biocatalysts of β-galactosidase from K. lactis immobilized on chitosan coated with PEI, determining the immobilization parameters and their operational and thermal stability, and then to apply them in hydrolysis and transgalactosylation reactions to produce lactulose using whey as substrate. The immobilization of β-galactosidase on chitosan previously functionalized with 0.8% (v/v) glutaraldehyde and then coated with a 10% (w/v) PEI solution was evaluated using an enzymatic load of 10 mg of protein per gram of support. Subsequently, the hydrolysis and transgalactosylation reactions were conducted at 50 °C and 120 rpm for 20 minutes, using whey supplemented with fructose at a ratio of 1:2 lactose/fructose, totaling 200 g/L. Operational stability studies were performed under the same conditions for 10 cycles. Thermal stability of the biocatalysts was determined at 50 °C in 50 mM phosphate buffer, pH 6.6, with 0.1 mM MnCl2. The biocatalyst whose support was coated was named CHI_GLU_PEI_GAL, and the uncoated one was named CHI_GLU_GAL. Coating the support with PEI considerably improved the immobilization parameters: the immobilization yield increased from 56.53% to 97.45%, the biocatalyst activity from 38.93 U/g to 95.26 U/g, and the efficiency from 3.51% to 6.0% for uncoated and coated support, respectively. The biocatalyst CHI_GLU_PEI_GAL performed better than CHI_GLU_GAL in the hydrolysis of lactose and in the production of lactulose, converting 97.05% of the lactose within 5 min of reaction and producing 7.60 g/L of lactulose in the same time interval. The CHI_GLU_PEI_GAL biocatalyst was stable in the lactose hydrolysis reactions during the 10 cycles evaluated, still converting 73.45% of the lactose after the tenth cycle, and in lactulose production it was stable up to the fifth cycle evaluated, producing 10.95 g/L of lactulose. However, the thermal stability of the CHI_GLU_GAL biocatalyst was superior, with a half-life 6 times longer, probably because the enzyme was immobilized by covalent bonding, which is stronger than the adsorption in CHI_GLU_PEI_GAL. Therefore, the strategy of coating the support with PEI proved to be effective for the immobilization of β-galactosidase from K. lactis, considerably improving the immobilization parameters as well as the catalytic action of the enzyme. Besides that, the process can be economically viable due to the use of an industrial residue as substrate.
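For clarity, the immobilization parameters quoted above can be computed from a simple activity balance. The sketch below assumes the commonly used definitions (yield from the activity removed from solution, efficiency as expressed over immobilized activity); the offered and supernatant activities are hypothetical numbers chosen only to illustrate the arithmetic.

```python
def immobilization_parameters(offered_U, supernatant_U, observed_U_per_g, support_g):
    """Activity-balance definitions commonly used for immobilized enzymes."""
    immobilized_U = offered_U - supernatant_U       # activity that left solution
    yield_pct = 100.0 * immobilized_U / offered_U   # immobilization yield
    expressed_U = observed_U_per_g * support_g      # activity measured on support
    efficiency_pct = 100.0 * expressed_U / immobilized_U
    return yield_pct, observed_U_per_g, efficiency_pct

# Hypothetical activity balance for one immobilization batch (1 g support):
y, act, eff = immobilization_parameters(
    offered_U=1600.0, supernatant_U=40.0, observed_U_per_g=95.26, support_g=1.0)
print(f"yield={y:.2f}%  activity={act:.2f} U/g  efficiency={eff:.2f}%")
```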

Keywords: β-galactosidase, immobilization, kluyveromyces lactis, lactulose, polyethylenimine, transgalactosylation reaction, whey

Procedia PDF Downloads 112
75 Comparison of Two Transcranial Magnetic Stimulation Protocols on Spasticity in Multiple Sclerosis - Pilot Study of a Randomized and Blind Cross-over Clinical Trial

Authors: Amanda Cristina da Silva Reis, Bruno Paulino Venâncio, Cristina Theada Ferreira, Andrea Fialho do Prado, Lucimara Guedes dos Santos, Aline de Souza Gravatá, Larissa Lima Gonçalves, Isabella Aparecida Ferreira Moretto, João Carlos Ferrari Corrêa, Fernanda Ishida Corrêa

Abstract:

Objective: To compare two protocols of Transcranial Magnetic Stimulation (TMS) on quadriceps muscle spasticity in individuals diagnosed with Multiple Sclerosis (MS). Method: A clinical crossover study in which six adult individuals diagnosed with MS and spasticity in the lower limbs were randomized to receive one session each of high-frequency (≥5 Hz) and low-frequency (≤1 Hz) TMS over the motor cortex (M1) hotspot of the quadriceps muscle, with a one-week interval between the sessions. Spasticity was assessed with the Ashworth scale, and the latency (ms) of the motor evoked potential (MEP) and the central motor conduction time (CMCT) of the bilateral quadriceps muscles were analyzed. Assessments were performed before and after each intervention. The difference between groups was analyzed using the Friedman test, with a significance level of 0.05. Results: All statistical analyses were performed in SPSS Statistics version 26, with significance set at p<0.05, and normality was checked with the Shapiro-Wilk test. Parametric data are presented as mean and standard deviation; non-parametric variables as median and interquartile range; and categorical variables as frequency and percentage. There was no clinical change in quadriceps spasticity assessed with the Ashworth scale for the 1 Hz (p=0.813) and 5 Hz (p=0.232) protocols in either limb. For MEP latency, in the 5 Hz protocol there was no significant change on the side contralateral to the stimulus from pre- to post-treatment (p>0.05), while on the ipsilateral side latency decreased by 0.07 seconds (p<0.05); in the 1 Hz protocol, latency increased by 0.04 seconds (p<0.05) on the contralateral side and decreased by 0.04 seconds (p<0.05) on the ipsilateral side, with a significant difference between the contralateral (p=0.007) and ipsilateral (p=0.014) groups. For CMCT, in the 1 Hz protocol there was no change on either the contralateral (p>0.05) or the ipsilateral side (p>0.05); in the 5 Hz protocol there was a small decrease in conduction time on the contralateral side (p<0.05) and a decrease of 0.6 seconds on the ipsilateral side (p<0.05), with a significant difference between groups (p=0.019). Conclusion: A single high- or low-frequency session does not change spasticity, but with the low-frequency protocol latency increased on the stimulated side and decreased on the non-stimulated side, suggesting that inhibiting the motor cortex increases cortical excitability on the opposite side.

Keywords: multiple sclerosis, spasticity, motor evoked potential, transcranial magnetic stimulation

Procedia PDF Downloads 91
74 Immobilization of β-Galactosidase from Kluyveromyces Lactis on Polyethylenimine-Agarose for Production of Lactulose

Authors: Carlos A. C. G. Neto, Natan C. G. Silva, Thais O. Costa, Luciana R. B. Goncalves, Maria V. P. Rocha

Abstract:

Galactosidases are enzymes responsible for catalyzing lactose hydrolysis reactions and also for favoring transgalactosylation reactions for the production of prebiotics, among which lactulose stands out. When immobilized, these enzymes can have some of their characteristics substantially improved, and coating the support with multifunctional polymers during immobilization is a promising alternative to extend the useful life of the biocatalysts, for example, coating with polyethyleneimine (PEI). PEI is a flexible polymer that adapts to the structure of the enzyme, giving greater stability, especially to multimeric enzymes such as β-galactosidases, and also protects it from environmental variations such as pH and temperature. In addition, it can substantially improve the immobilization parameters and the efficiency of the enzymatic reactions. In this context, the aim of the present work was first to develop biocatalysts of β-galactosidase from Kluyveromyces lactis immobilized on PEI-coated agarose, determining the immobilization parameters and their operational and thermal stability, and then to apply them in the hydrolysis of lactose and the synthesis of lactulose, using whey as substrate. This immobilization strategy was chosen in order to improve the catalytic efficiency of the enzyme in the transgalactosylation reaction for the production of prebiotics, and there are few studies with β-galactosidase from this strain. The immobilization of β-galactosidase on agarose previously functionalized with 48% (w/v) glycidol and then coated with a 10% (w/v) PEI solution was evaluated using an enzymatic load of 10 mg of protein per gram of support. Subsequently, the hydrolysis and transgalactosylation reactions were conducted at 50 °C and 120 rpm for 20 minutes, using whey (66.7 g/L of lactose) supplemented with 133.3 g/L of fructose, a 1:2 lactose/fructose ratio. Operational stability studies were performed under the same conditions for 10 cycles. Thermal stability of the biocatalysts was determined at 50 °C in 50 mM phosphate buffer, pH 6.6, with 0.1 mM MnCl2. The biocatalysts whose supports were coated were named AGA_GLY_PEI_GAL, and those that were not coated were named AGA_GLY_GAL. Coating the support with PEI considerably improved the immobilization yield (2.6-fold), the biocatalyst activity (1.4-fold), and the efficiency (2.2-fold). The biocatalyst AGA_GLY_PEI_GAL performed better than AGA_GLY_GAL in the hydrolysis and transgalactosylation reactions, converting 88.92% of the lactose within 5 min of reaction (residual concentration of 5.24 g/L) and producing 13.90 g/L of lactulose in the same time interval. The AGA_GLY_PEI_GAL biocatalyst was stable during the 10 cycles evaluated, still converting approximately 80% of the lactose and producing 10.95 g/L of lactulose after the tenth cycle. However, the thermal stability of the AGA_GLY_GAL biocatalyst was superior, with a half-life 5 times longer, probably because the enzyme was immobilized by covalent bonding, which is stronger than the adsorption in AGA_GLY_PEI_GAL. Therefore, the strategy of coating the support with PEI proved to be effective for the immobilization of β-galactosidase from K. lactis, considerably improving the immobilization parameters as well as the enzyme-catalyzed reactions. In addition, the use of whey as raw material for lactulose production proved to be an industrially advantageous alternative.

Keywords: β-galactosidase, immobilization, lactulose, polyethylenimine, whey

Procedia PDF Downloads 119
73 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design

Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez

Abstract:

Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste means converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and protect the environment. Accordingly, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste produced by the coffee industry. The fact that SCGs have no economic value, are abundant, do not compete with agriculture and, especially, have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, conventional methods of oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and ensure a more environmentally friendly production. Under this perspective, the aim of this work was to statistically determine an efficient strategy for oil extraction with n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated as independent variables using a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. Validation of the model by analysis of variance (ANOVA) showed good adjustment of the results obtained for a 95% confidence interval, and the predicted vs. experimental values graph confirmed the satisfactory correlation of the model results. Besides, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 °C, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath for oil extraction as a more economical, green, and efficient alternative to the Soxhlet method.
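For readers unfamiliar with the design, the sketch below builds the coded design matrix of a rotatable 2³ CCRD (the layout the abstract describes) and shows how the second-order response surface would be fitted by least squares. The six center runs and the factor ordering are illustrative assumptions, since STATISTICA handled this step in the study.

```python
import numpy as np
from itertools import product

# Rotatable 2^3 central composite design: 8 factorial points, 6 axial points
# at alpha = (2^3)**0.25 ~ 1.682, plus center points (count assumed here).
alpha = (2 ** 3) ** 0.25
factorial = np.array(list(product([-1, 1], repeat=3)), dtype=float)
axial = np.vstack([alpha * row for sign in (-1, 1)
                   for row in sign * np.eye(3)])
center = np.zeros((6, 3))
X = np.vstack([factorial, axial, center])   # coded temperature, time, ratio

def quadratic_terms(X):
    """Full second-order model matrix: 1, x_i, x_i^2, x_i*x_j."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

# With the measured oil yields y (one per run), the response surface would be
# fitted by least squares:
# beta, *_ = np.linalg.lstsq(quadratic_terms(X), y, rcond=None)
```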

Keywords: coffee waste, optimization, oil yield, statistical planning

Procedia PDF Downloads 119
72 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube

Authors: Nirjhar Dhang, S. Vinay Kumar

Abstract:

Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation leads to a better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computerized tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of the concrete cube consists of a series of two-dimensional slices: a total of 49 slices is obtained from a 150 mm cube, giving a slice interval of approximately 3 mm. Since CT scanning is non-destructive, the same cube can be scanned and later tested in compression in a universal testing machine (UTM) to find its strength. The image processing and the extraction of mortar and aggregates from the CT scan slices are performed by programming in Python. The digital colour image consists of red, green and blue (RGB) pixels. The RGB image is converted to a black and white (BW) image, and the mesoscale constituents are identified by assigning values between 0 and 255. A pixel matrix is created for modeling mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale according to relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular or quadrilateral elements for plane stress or plane strain models are generated depending on the option chosen. Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated in ABAQUS by importing the input file. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids and the variation of ITZ layer thickness on load carrying capacity, stress-strain response and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in the concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.
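Since the slice processing is done in Python in this work, a minimal sketch of the described classification step may help. The exact threshold placement within the 0-255 range is our assumption; only the 0-9 labeling scheme comes from the abstract.

```python
import numpy as np
from PIL import Image

def classify_slice(path):
    """Normalize a CT slice to the 0-9 scale and map pixels to phases."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)  # RGB -> BW
    scale09 = np.round(gray / 255.0 * 9).astype(int)   # normalize to 0-9
    phase = np.full(scale09.shape, "mortar", dtype=object)   # 4-6 default
    phase[scale09 == 0] = "void"
    phase[(scale09 >= 1) & (scale09 <= 3)] = "boundary"      # ITZ region
    phase[scale09 >= 7] = "aggregate"
    return scale09, phase

# Hypothetical slice file; the pixel matrix feeds the mesh generator:
# scale09, phase = classify_slice("slice_01.png")
# np.savetxt("pixel_matrix.txt", scale09, fmt="%d")
```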

Keywords: concrete, image processing, plane strain, interfacial transition zone

Procedia PDF Downloads 241
71 Testing Nitrogen and Iron Based Compounds as an Environmentally Safer Alternative to Control Broadleaf Weeds in Turf

Authors: Simran Gill, Samuel Bartels

Abstract:

Turfgrass is an important component of urban and rural lawns and landscapes. However, broadleaf weeds such as dandelions (Taraxacum officinale) and white clovers (Trifolium repens) pose major challenges to the health and aesthetics of turfgrass fields. Chemical weed control methods, such as 2,4-D weedicides, have been widely deployed; however, their safety and environmental impacts are often debated. Alternative, environmentally friendly control methods have been considered, but experimental tests of their effectiveness have been limited. This study investigates the use and effectiveness of nitrogen and iron compounds as nutrient-management methods of weed control. The experiment had two phases. The first, conducted under controlled greenhouse conditions on a blend of cool-season turfgrasses grown in plastic containers (perennial ryegrass (Lolium perenne), Kentucky bluegrass (Poa pratensis) and creeping red fescue (Festuca rubra)), involved the application of nitrogen (urea and ammonium sulphate) and iron (chelated iron and iron sulphate) compounds and their combinations (urea × chelated iron, urea × iron sulphate, ammonium sulphate × chelated iron, ammonium sulphate × iron sulphate), contrasted with a chemical 2,4-D weedicide and a control (no application) treatment. There were three replicates of each treatment, resulting in a total of 30 treatment combinations. The parameters assessed during weekly data collection included a visual quality rating of weeds (scale of 0-9), number of leaves, longest leaf span, number of weeds, chlorophyll fluorescence of the grass, a visual quality rating of the grass (0-9), and the weight of dried grass clippings. The results of this experiment, conducted over 12 weeks with three applications at 4-week intervals, showed that the combination of ammonium sulphate and iron sulphate was the most effective at halting the growth and establishment of dandelions and clovers while also improving turf health. The second phase, which involved the ammonium sulphate × iron sulphate, weedicide, and control treatments, was conducted outdoors on already established perennial turf with weeds under natural field conditions. After 12 weeks of observation, the treatments were comparable in terms of weed control, but the ammonium sulphate × iron sulphate treatment fared much better in terms of the visual quality of the turf and other quality ratings. Preliminary results from these experiments thus suggest that nutrient management based on nitrogen and iron compounds could be a useful, environmentally friendly alternative for controlling broadleaf weeds and improving the health and quality of turfgrass.

Keywords: broadleaf weeds, nitrogen, iron, turfgrass

Procedia PDF Downloads 75
70 Heat Stress as a Risk Factor for Poor Maternal Health: Evidence from South India

Authors: Vidhya Venugopal, Rekha S.

Abstract:

Introduction: Climate change and the growing frequency of higher average temperatures and heat waves have detrimental health effects, especially for vulnerable groups with limited socioeconomic status (SES) or limited physiological capacity to adapt to or endure high temperatures. Little research has been conducted on the effects of heat stress on pregnant women and fetuses in tropical regions such as India. Very high ambient temperatures may worsen Adverse Pregnancy Outcomes (APOs) and are a major worry in the context of climate change. The relationship between rising temperatures and APOs must be better understood in order to design more effective interventions. Methodology: We conducted an observational cohort study involving 865 pregnant women in various districts of Tamil Nadu between 2014 and 2021. Physiological Heat Strain Indicators (HSI) such as morning and evening Core Body Temperature (CBT) and Urine Specific Gravity (USG) were monitored using an infrared thermometer and a refractometer, respectively. A validated, modified version of the HOTHAPS questionnaire was used to collect self-reported health symptoms. A follow-up was undertaken with the mothers to collect information regarding birth outcomes and APOs, such as spontaneous abortions, stillbirths, Preterm Birth (PTB), birth abnormalities, and Low Birth Weight (LBW). Major findings of the study: Ambient temperatures (mean WBGT °C) were substantially high (>28°C) for approximately 46% of women performing moderate daily life activities. Dehydration and heat-related complaints affected 82% versus 43% of these women, and 34% of women had USG >1.020, which is indicative of dehydration. Among APOs, spontaneous abortions had a prevalence of 2.2%; stillbirth/preterm birth/birth abnormalities, 2.2%; and low birth weight, 16.3%. With exposures to WBGT >28°C, the risk of miscarriage or spontaneous abortion rose approximately 2.7-fold (95% CI: 1.1-6.9). In addition, higher WBGT exposures were associated with a 1.4-fold increased risk of unfavorable birth outcomes (95% Confidence Interval [CI]: 1.02-1.09). The risk of spontaneous abortion was 2.8 times higher among women who conceived during the hotter months (February-September) compared to those who conceived in the cooler months (October-January) (95% CI: 1.04-7.4). The positive relationships between ambient heat and APOs found in this study call for extensive cohort studies exploring the underlying factors, to generate information enabling the formulation of policies that can effectively protect these women against excessive heat stress for enhanced maternal and fetal health.
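The risk estimates above are ratios with Wald-type confidence intervals; a minimal sketch of that computation is shown below with hypothetical 2×2 counts (not the study's raw data).

```python
import numpy as np

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio with a Wald 95% CI computed on the log scale."""
    r1 = exposed_cases / exposed_total
    r0 = unexposed_cases / unexposed_total
    rr = r1 / r0
    se = np.sqrt(1/exposed_cases - 1/exposed_total
                 + 1/unexposed_cases - 1/unexposed_total)
    lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se)
    return rr, lo, hi

# Hypothetical counts: cases among heat-exposed vs non-exposed pregnancies
rr, lo, hi = relative_risk(14, 400, 6, 465)
print(f"RR = {rr:.1f} (95% CI: {lo:.1f}-{hi:.1f})")
```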

Keywords: heat exposures, community, pregnant women, physiological strain, adverse outcome, interventions

Procedia PDF Downloads 84
69 Lifespan Assessment of the Fish Crossing System of Itaipu Power Plant (Brazil/Paraguay) Based on the Reaching of Its Sedimentological Equilibrium Computed by 3D Modeling and Churchill Trapping Efficiency

Authors: Anderson Braga Mendes, Wallington Felipe de Almeida, Cicero Medeiros da Silva

Abstract:

This study aimed to assess the lifespan of the fish transposition system of the Itaipu Power Plant (Brazil/Paraguay) by using 3D hydrodynamic modeling and the Churchill trapping efficiency in order to identify the sedimentological equilibrium configuration of the main pond of the Piracema Channel, which is part of a 10 km hydraulic circuit that enables fish migration from downstream to upstream of the Itaipu Dam (and vice-versa), overcoming a 120 m water drop. For that, bottom data from 2002 (its opening year) and 2015 were collected and analyzed, along with bed material at 12 stations, for the purpose of identifying their granulometric profiles. The Shields and Yalin-Karahan diagrams for initiation of motion of bed material were used to determine the critical bed shear stress for the sedimentological equilibrium state, based on the sort of sediment (grain size) to be found at the bottom once the balance is reached. Such granulometry was inferred by analyzing the coarser material (fine and medium sands) which flows into the pond and deposits in its backwater zone, adopting a range of diameters within the upper and lower limits of that sand stratification. The software Delft3D was used to compute the bed shear stress at each station under analysis. By modifying the input bathymetry of the main pond of the Piracema Channel so that the computed bed shear stresses at all stations fell simultaneously within the intervals of acceptable critical stresses, it was possible to foresee the bed configuration of the main pond when sedimentological equilibrium is reached. Under that condition, 97% of the whole pond capacity will be silted, and a shallow water course with depths ranging from 0.2 m to 1.5 m will be formed; in 2002, depths ranged from 2 m to 10 m. Outside that water path, the new bottom will be practically flat and covered by a layer of water 0.05 m thick. Thus, in the future the main pond of the Piracema Channel will no longer serve its purpose of providing a resting place for migrating fish species, and it may become an insurmountable barrier for medium- and large-sized specimens. Everything considered, it was estimated that its lifespan, from the year of its opening to the moment of the sedimentological equilibrium configuration, will be approximately 95 years, almost half the computed lifespan of the Itaipu Power Plant itself. However, it is worth mentioning that drawbacks concerning the silting of the main pond will start being noticed much earlier than that, owing to the reasons previously mentioned.
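The equilibrium criterion rests on a Shields-type critical bed shear stress. The sketch below shows the standard calculation for quartz sand; the dimensionless critical Shields parameter (~0.045) and the grain-size band are assumed for illustration rather than taken from the study's calibration.

```python
# Critical bed shear stress from the Shields parameter:
# tau_cr = theta_cr * (rho_s - rho_w) * g * d
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water, quartz sand (kg/m3), m/s2

def critical_shear_stress(d_m, theta_cr=0.045):
    """Critical bed shear stress in Pa for grain diameter d_m (meters)."""
    return theta_cr * (RHO_S - RHO_W) * G * d_m

# Fine-to-medium sand band (0.125 mm to 0.5 mm), assumed for illustration:
for d in (0.125e-3, 0.25e-3, 0.5e-3):
    print(f"d = {d*1e3:.3f} mm -> tau_cr = {critical_shear_stress(d):.3f} Pa")
```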

Keywords: 3D hydrodynamic modeling, Churchill trapping efficiency, fish crossing system, Itaipu power plant, lifespan, sedimentological equilibrium

Procedia PDF Downloads 234
68 Analysis of Latest Fitness Trends in India

Authors: Amita Rana

Abstract:

From ancient to modern times, the nature of fitness activities has varied, and we can choose any form of exercise that suits our particular need. Watchers of fitness trends say that the road to better health is paved with new possibilities, along with some old ones poised to make a comeback. Educated, certified and experienced fitness professionals; strength training; fitness programmes for older adults; exercise and weight loss; children and obesity; personal training; core training; group personal training; Zumba and other dance workouts; functional fitness; yoga; comprehensive health promotion programmes at the worksite; boot camp; outdoor activities; reaching new markets; spinning; sport-specific training; worker incentive programmes; wellness coaching; and physician referrals are among the fitness trends included in worldwide surveys. However, fitness trends in India could be the same or different; hence, the present paper attempts to analyze the latest fitness trends in India. A total of eighteen (18) surveys were shortlisted on the basis of their relevance to the topic of study and arranged in descending chronological order. Content analysis was performed on the collected data, and frequency and percentage were used to represent the data statistically. The analysis of recent fitness trends in India shows that yoga dominates the fitness activity list, followed by numerous other activities including running, Zumba and sh'bam, boot camp, boxing, kickboxing, cycling, swimming, TRX, ass-pocalypse, ballet, biking, bokwa fitness, dance-iso-bic, masala bhangra, outdoor activities, pilates, planks, push-ups, sofa workouts, stairs workouts, tabata training, and twerking. Body-weight, gym-specific and strength training, as well as high-intensity interval training, dominate the preferred workouts, followed by mixed workouts, cross-training workouts, express workouts, functional fitness, natural body movements, personalized training, and stay-at-home workouts. The general areas featured in the latest fitness trends in India demonstrate that fitness is making an impact on all sections of society, be it children, women, older adults, senior citizens, or worksite fitness; fitness is becoming the lifestyle of the masses. People exercise for weight loss, combine diet with exercise, prefer sweat-inducing workouts, and participate in group fitness activities and wellness programmes. Technology is another area with a high impact on people's lives: they use wearable technology for workout tracking and follow numerous mobile-friendly apps.

Keywords: fitness, India, survey, trend

Procedia PDF Downloads 314
67 Use of PACER Application as Physical Activity Assessment Tool: Results of a Reliability and Validity Study

Authors: Carine Platat, Fatima Qshadi, Ghofran Kayed, Nour Hussein, Amjad Jarrar, Habiba Ali

Abstract:

Nowadays, smartphones are very popular, offering a variety of easy-to-use and free applications, among which are step counters and fitness tests. The number of users is huge, making such applications a potentially efficient new strategy to encourage people to become more active. Nonetheless, data on their reliability and validity are very scarce, and when available, they are often negative and contradictory. Besides, weight status, which is likely to introduce a bias into physical activity assessment, has rarely been considered. Hence, the use of these applications as motivational tools, as assessment tools, and in research is questionable. PACER is a free step counter application; even though it is one of the best-rated free applications, it has never been tested for reliability and validity, and prior to any use of PACER, this remains to be investigated. The objective of this work is to investigate the reliability and validity of the smartphone application PACER in measuring the number of steps and in assessing cardiorespiratory fitness via the 6-minute walking test (6MWT). Twenty overweight or obese students (10 male and 10 female), aged between 18 and 25 years old, were recruited at the United Arab Emirates University. Reliability and validity were tested in real-life conditions and in controlled conditions using a treadmill. Test-retest experiments were done with PACER on two days separated by a week in real-life conditions (24 hours each time) and in controlled conditions (30 minutes on a treadmill at 3 km/h). Validity was tested against the OMRON pedometer under the same conditions. During the treadmill test, video was recorded, and step counts were compared between PACER, the pedometer, and the video. The validity of PACER in estimating cardiorespiratory fitness (VO2max) as part of the 6MWT was studied against the 20 m shuttle run test. Reliability was studied by calculating intraclass correlation coefficients (ICC) with 95% confidence intervals (95%CI) and by Bland-Altman plots. Validity was studied by calculating Spearman correlation coefficients (rho) and Bland-Altman plots. PACER reliability was good in both males and females in real-life conditions (p≤0.001) but only in females in controlled conditions (p=0.01). PACER was valid against the OMRON pedometer in males and females in real-life conditions (rho=0.94, p≤0.001; rho=0.64, p=0.01, respectively). In controlled conditions, PACER was not valid against the pedometer, but it was valid against video in females (rho=0.72, p≤0.001). PACER was valid against the shuttle run test for estimating VO2max in males and females (rho=0.66, p=0.01; rho=0.51, p=0.04). This study provides data on the reliability and validity of PACER in overweight or obese male and female young adults. Globally, PACER was shown to be reliable and valid in real-life conditions for counting steps and assessing fitness in overweight or obese males and females. This supports the use of PACER to assess and promote physical activity in clinical follow-up and community interventions.
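A minimal sketch of the agreement statistics used here (Bland-Altman bias and limits of agreement, plus Spearman's rho) is given below; the step counts are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement between two step counters."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical daily step counts from PACER and the OMRON pedometer:
pacer = [8123, 9540, 7210, 10011, 8894, 7655]
omron = [8050, 9602, 7388, 9875, 8790, 7801]
bias, lo, hi = bland_altman(pacer, omron)
rho, p = spearmanr(pacer, omron)
print(f"bias={bias:.0f} steps, LoA=[{lo:.0f}, {hi:.0f}], rho={rho:.2f} (p={p:.3f})")
```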

Keywords: smartphone application, pacer, reliability, validity, steps, fitness, physical activity

Procedia PDF Downloads 453
66 Investigating Seasonal Changes of Urban Land Cover with High Spatio-Temporal Resolution Satellite Data via Image Fusion

Authors: Hantian Wu, Bo Huang, Yuan Zeng

Abstract:

Divisions between wealthy and poor, private and public landscapes are propagated by the increasing economic inequality of cities. While these are the spatial reflections of larger social issues, urban design can at least employ spatial techniques that promote inclusive rather than exclusive, overlapping rather than segregated, interlinked rather than disconnected landscapes. Indeed, the type of edge or border between urban landscapes plays a critical role in the way the environment is perceived. China is experiencing rapid urbanization, which poses unpredictable environmental challenges. Urban green cover and water bodies are changing, which is highly relevant to residents' wealth and happiness; however, very limited knowledge and data on these rapid changes are available. In this regard, enhancing the monitoring of the urban landscape with high-frequency methods, evaluating and estimating the impacts of urban landscape changes, and understanding their driving forces can be a significant contribution to urban planning and research. High-resolution remote sensing data have been widely applied to urban management in China, and a 10-meter-resolution urban land use map of the entire country for 2018 has been published. However, such work focuses on large-scale, high-resolution land use and does not precisely capture the seasonal change of urban covers. High-resolution satellites have long revisit cycles (e.g., Landsat 8 requires 16 days for the same location), which cannot satisfy the requirement of monitoring urban landscape changes. On the other hand, aerial remote sensing and unmanned aerial vehicles (UAV) are limited by aviation regulations and cost and have hardly been widely applied in mega-cities. Moreover, these data are limited by climate and weather conditions (e.g., cloud, fog), which makes capturing spatial and temporal dynamics a standing challenge for the remote sensing community; in particular, during the rainy season no data are available even for Sentinel satellites with a 5-day revisit interval. Many natural events and/or human activities drive the changes in urban covers. This project aims to use high spatiotemporal fusion of remote sensing data to create short-cycle, high-resolution remote sensing data sets for exploring high-frequency urban cover changes. This research will enhance the long-term monitoring applicability of high spatiotemporal fusion of remote sensing data for the urban landscape, optimizing the management of landscape borders and promoting the inclusiveness of the urban landscape for all communities.
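As context for the fusion step, the sketch below shows the simplest possible blending rule: add the coarse-sensor temporal change to the fine-resolution base image. Operational fusion algorithms such as STARFM additionally weight spectrally similar neighboring pixels, so this is only a hedged baseline illustration, and the array names are ours.

```python
import numpy as np

def naive_temporal_fusion(fine_t1, coarse_t1, coarse_t2):
    """Predict a fine-resolution image at t2 from a fine image at t1 and
    coarse images at t1 and t2 (already resampled to the fine grid), by
    adding the coarse-scale temporal change to the fine base image."""
    return fine_t1 + (coarse_t2.astype(float) - coarse_t1.astype(float))

# Illustrative arrays standing in for co-registered reflectance rasters:
rng = np.random.default_rng(0)
fine_t1 = rng.random((100, 100))
coarse_t1 = fine_t1 + rng.normal(0, 0.01, (100, 100))   # coarse view at t1
coarse_t2 = coarse_t1 + 0.05                            # uniform seasonal change
pred_t2 = naive_temporal_fusion(fine_t1, coarse_t1, coarse_t2)
```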

Keywords: urban land cover changes, remote sensing, high spatiotemporal fusion, urban management

Procedia PDF Downloads 126
65 Qualitative Narrative Framework as Tool for Reduction of Stigma and Prejudice

Authors: Anastasia Schnitzer, Oliver Rehren

Abstract:

Mental health has become an increasingly important topic in society in recent years, not least due to the challenges posed by the coronavirus pandemic. Along with this, the public has become more aware that a lack of understanding and proper coping mechanisms may carry a notable risk of developing mental disorders. Yet there are still many biases against those affected, which are further connected to stigmatization and societal exclusion. One of the main strategies to combat these forms of prejudice and stigma is to induce intergroup contact. More specifically, the Intergroup Contact Theory states that engaging in certain types of contact with members of marginalized groups may be an effective way to improve attitudes towards these groups. However, due to persistent prejudice and stigmatization, affected individuals often do not dare to speak openly about their mental disorders, so intergroup contact often goes unnoticed. As a result, many people only experience conscious contact with individuals with a mental disorder through media. As an analogy to the Intergroup Contact Theory, the Parasocial Contact Hypothesis proposes that repeated exposure to positive media representations of outgroup members can reduce negative prejudices and attitudes towards that outgroup. While there is a growing body of research on the merit of this mechanism, measurements often consist only of 'positive' or 'negative' parasocial contact conditions (or examine the valence or quality of previous contact with the outgroup), while more specific conditions are neglected. The current study aims to tackle this shortcoming: by scrutinizing the potential of contemporary series as a narrative framework of high quality, we strive to elucidate more detailed aspects of beneficial parasocial contact for the sake of reducing prejudice and stigma towards individuals with mental disorders. Thus, a two-factorial between-subject online panel study with three measurement points was conducted (N = 95). Participants were randomly assigned to one of two groups and watched episodes of a series with a narrative framework of either high (Quality-TV) or low quality (Continental-TV), with a one-week interval between the episodes. Suitable series were determined with the help of a pretest. Prejudice and stigma towards people with mental disorders were measured at the beginning of the study, before and after each episode, and in a final follow-up one week after the last two episodes. Additionally, parasocial interaction (PSI), quality of contact (QoC), and transportation were measured several times. Based on these data, multivariate multilevel analyses were performed in R using the lavaan package. Latent growth models showed moderate to high increases in QoC and PSI as well as small to moderate decreases in stigma and prejudice over time. Multilevel path analysis with individual and group levels further revealed that a qualitative narrative framework leads to a higher-quality contact experience, which in turn leads to lower prejudice and stigma, with effects ranging from moderate to high.
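The growth modeling was done in R with lavaan; as a rough stand-in, a random-slope mixed model in Python's statsmodels captures the same linear change in stigma across measurement points. This is a simplified substitute, not the study's lavaan specification, and the data file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format panel data: one row per participant per measurement point.
df = pd.read_csv("panel_long.csv")   # columns: id, time, group, stigma

# Random intercept and slope per participant; the group x time term tests
# whether the Quality-TV condition shows a steeper decline in stigma.
model = smf.mixedlm("stigma ~ time * group", df,
                    groups=df["id"], re_formula="~time")
result = model.fit()
print(result.summary())
```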

Keywords: prejudice, quality of contact, parasocial contact, narrative framework

Procedia PDF Downloads 85
64 Gastro-Protective Actions of Melatonin and Murraya koenigii Leaf Extract Combination in Piroxicam Treated Male Wistar Rats

Authors: Syed Benazir Firdaus, Debosree Ghosh, Aindrila Chattyopadhyay, Kuladip Jana, Debasish Bandyopadhyay

Abstract:

The gastro-toxic effect of piroxicam, a classical non-steroidal anti-inflammatory drug (NSAID), has restricted its use in arthritis and similar diseases. The present study aims to find out whether a combination of melatonin and Murraya koenigii leaf extract can protect against piroxicam-induced ulcerative damage in rats. For this study, rats were divided into four groups of six animals each: a control group orally administered distilled water, a combination-only group, a piroxicam-treated group, and a combination pre-administered piroxicam-treated group. Melatonin at a dose of 20 mg/kg body weight and antioxidant-rich Murraya koenigii leaf extract at a dose of 50 mg/kg body weight were administered successively, at a 30-minute interval, one hour before oral administration of piroxicam at 30 mg/kg body weight to the Wistar rats of the combination pre-administered piroxicam-treated group. The rats of the combination-only group received both drugs without piroxicam, whereas the piroxicam-treated group received only piroxicam at 30 mg/kg body weight without any pre-treatment. Macroscopic examination, along with histopathological study of gastric tissue using haematoxylin-eosin and alcian blue staining, showed protection of the gastric mucosa in the combination pre-administered piroxicam-treated group. Biochemical determination of the adherent mucus content and Image J analysis of the collagen content of picro-sirius stained sections of rat gastric tissue also revealed the protective effects of the combination against piroxicam-mediated toxicity. The gelatinolytic activity induced by piroxicam was significantly reduced by pre-administration of the drugs, as shown by the gelatin zymography study of rat gastric tissue. The mean ulcer index determined from macroscopic study of the rat stomach fell to a minimum (0±0.00; mean ± standard error of mean, n=6), indicating the absence of ulcer spots upon pre-treatment with the combination. The gastro-friendly prostaglandin PGE2, which otherwise gets depleted on piroxicam treatment, was also well protected when the combination was pre-administered prior to piroxicam treatment. The requirement of the individual drugs at low doses in this combinatorial therapeutic approach will possibly minimize the cost of therapy and eliminate the pro-oxidant side effects possible with high doses of antioxidants. The beneficial activity of this combination therapy in the rat model raises the possibility that similar protective actions might be observed if it is adopted by patients consuming NSAIDs like piroxicam. However, the introduction of any such therapeutic approach is subject to future studies in humans.

Keywords: gastro-protective action, melatonin, Murraya koenigii leaf extract, piroxicam

Procedia PDF Downloads 308
63 Using Convolutional Neural Networks to Distinguish Different Sign Language Alphanumerics

Authors: Stephen L. Green, Alexander N. Gorban, Ivan Y. Tyukin

Abstract:

Within the past decade, using Convolutional Neural Networks (CNNs) to create deep learning systems capable of translating sign language into text has been a breakthrough in breaking the communication barrier for deaf-mute people. Conventional research on this subject has been concerned with training a network to recognize the fingerspelling gestures of a given language and produce the corresponding alphanumerics. One problem with the currently developing technology is that images are scarce, with little variation in the gestures presented to the recognition program, often skewed towards single skin tones and hand sizes, which makes a percentage of the population's fingerspelling harder to detect. Along with this, current gesture detection programs are only trained on one fingerspelling language, despite there being one hundred and forty-two known variants so far. All of this limits the traditional exploitation of current technologies such as CNNs, due to their large number of required parameters. This work presents a technology that aims to resolve this issue by combining a pretrained legacy AI system for a generic object recognition task with a corrector method to uptrain the legacy network. This is a computationally efficient procedure that does not require large volumes of data, even when covering a broad range of sign languages such as American Sign Language, British Sign Language and Chinese Sign Language (Pinyin). Implementing recent results on measure concentration, namely the stochastic separation theorem, the AI system is considered as an operator mapping an input from the set of images u ∈ U to an output in the set of predicted class labels q ∈ Q, identifying the alphanumeric that q represents and the language it comes from. These inputs and outputs, along with internal variables z ∈ Z, represent the system's current state, which implies a mapping that assigns an element x ∈ ℝⁿ to the triple (u, z, q). As all xi are i.i.d. vectors drawn from a product measure distribution, over a period of time the AI generates a large set of measurements xi, called S, that are grouped into two categories: the correct predictions M and the incorrect predictions Y. Once the network has made its predictions, a corrector can be applied by centering S and Y, i.e., subtracting their means. The data are then regularized by applying the Kaiser rule to the resulting eigenmatrix and whitened before being split into pairwise, positively correlated clusters. Each of these clusters produces a unique hyperplane, and if any element x falls outside the region bounded by these hyperplanes, it is reported as an error. As a result of this methodology, a self-correcting recognition process is created that can identify fingerspelling from a variety of sign languages and determine both the corresponding alphanumeric and the language the gesture originates from, which no other neural network has been able to replicate.
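The corrector construction lends itself to a compact linear-algebra sketch. The following simplified version (our assumption: a single cluster and a mean-difference hyperplane instead of the paper's pairwise-correlated clustering) shows the centering, Kaiser-rule truncation, and whitening steps on the measurement sets M and Y.

```python
import numpy as np

def fit_corrector(M, Y):
    """Build a linear error flag from correct (M) and erroneous (Y) samples."""
    X = np.vstack([M, Y])
    mu = X.mean(axis=0)                      # center the pooled measurements
    cov = np.cov(X - mu, rowvar=False)
    eigvals, V = np.linalg.eigh(cov)
    keep = eigvals > eigvals.mean()          # Kaiser rule on the eigenvalues
    W = V[:, keep] / np.sqrt(eigvals[keep])  # whitening projection
    Mw, Yw = (M - mu) @ W, (Y - mu) @ W
    normal = Yw.mean(axis=0) - Mw.mean(axis=0)           # separating direction
    threshold = 0.5 * (Yw.mean(axis=0) + Mw.mean(axis=0)) @ normal
    def is_error(x):
        return float((x - mu) @ W @ normal) > threshold  # beyond the hyperplane
    return is_error

# Illustrative use with synthetic measurement vectors:
# rng = np.random.default_rng(1)
# flag = fit_corrector(rng.normal(0, 1, (500, 20)), rng.normal(2, 1, (40, 20)))
```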

Keywords: convolutional neural networks, deep learning, shallow correctors, sign language

Procedia PDF Downloads 101
62 Air Pollution on Stroke in Shenzhen, China: A Time-Stratified Case Crossover Study Modified by Meteorological Variables

Authors: Lei Li, Ping Yin, Haneen Khreis

Abstract:

Stroke was the second leading cause of death and the third leading cause of death and disability combined worldwide in 2019. Given the significant role of environmental factors in stroke development and progression, it is essential to investigate the effect of air pollution on stroke occurrence while considering the modifying effects of meteorological variables. This study aimed to evaluate the association between short-term exposure to air pollution and the incidence of stroke subtypes in Shenzhen, China, and to explore the potential interactions of meteorological factors with air pollutants. The study analyzed data from January 1, 2006, to December 31, 2014, including 88,214 cases of ischemic stroke and 30,433 cases of hemorrhagic stroke among residents of Shenzhen. Using a time-stratified case-crossover design with conditional quasi-Poisson regression, the study estimated the percentage changes in stroke morbidity associated with short-term exposure to nitrogen dioxide (NO₂), sulfur dioxide (SO₂), particulate matter less than 10 μm in aerodynamic diameter (PM10), carbon monoxide (CO), and ozone (O₃). A five-day moving average of air pollution was applied to capture the cumulative effects of air pollution. The estimates were further stratified by sex, age, education level, and season. The additive interaction between air pollutants and meteorological variables was assessed by the relative excess risk due to interaction (RERI), and the multiplicative interaction by adding an interaction term to the main model. The study found that NO₂ was positively associated with ischemic stroke occurrence throughout the year and in the cold season (November through April), with a stronger effect observed among men. Each 10 μg/m³ increment in the five-day moving average of NO₂ was associated with a 2.38% (95% confidence interval: 1.36% to 3.41%) increase in the risk of ischemic stroke over the whole year and a 3.36% (2.04% to 4.69%) increase in the cold season. The harmful effect of CO on ischemic stroke was observed only in the cold season, with each 1 mg/m³ increment in the five-day moving average of CO increasing the risk by 12.34% (3.85% to 21.51%). There was no statistically significant additive interaction between individual air pollutants and temperature or relative humidity, as indicated by the RERI. The interaction term in the model showed a multiplicative antagonistic effect between NO₂ and temperature (p-value=0.0268). For hemorrhagic stroke, no evidence of an effect of any individual air pollutant was found in the whole population. However, the RERI indicated statistically significant additive and multiplicative interactions of temperature with the effects of PM10 and O₃ on hemorrhagic stroke onset; therefore, the non-significant conclusion should be interpreted with caution. The study suggests that environmental NO₂ and CO might increase the morbidity of ischemic stroke, particularly during the cold season. These findings could help inform policy decisions aimed at reducing air pollution levels to prevent stroke and other health conditions. Additionally, the study provides valuable insights into the interaction between air pollution and meteorological variables, which underscores the need for further research into the complex relationship between environmental factors and health.
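Two small formulas drive the reported quantities: the percentage change per exposure increment from a log-linear (quasi-)Poisson coefficient, and the RERI for additive interaction. A sketch with hypothetical inputs follows.

```python
import numpy as np

def pct_change_per_increment(beta, increment):
    """Percentage change in stroke counts for a given pollutant increment,
    from a (quasi-)Poisson log-linear coefficient beta per unit exposure."""
    return (np.exp(beta * increment) - 1.0) * 100.0

def reri(rr11, rr10, rr01):
    """Relative excess risk due to interaction on the additive scale."""
    return rr11 - rr10 - rr01 + 1.0

# Hypothetical coefficient per 1 ug/m3 of NO2, reported per 10 ug/m3:
print(f"{pct_change_per_increment(0.00235, 10):.2f}% per 10 ug/m3")
# Hypothetical rate ratios for joint and separate exposures:
print(f"RERI = {reri(1.8, 1.3, 1.4):.2f}")
```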

Keywords: air pollution, meteorological variables, interactive effect, seasonal pattern, stroke

Procedia PDF Downloads 89
61 Biostratigraphic Significance of Shaanxilithes ningqiangensis from the Tal Group (Cambrian), Nigalidhar Syncline, Lesser Himalaya, India and Its GC-MS Analysis

Authors: C. A. Sharma, Birendra P. Singh

Abstract:

We recovered 40 well-preserved ribbon-shaped, meandering specimens of S. ningqiangensis from the Earthy Dolomite Member (Krol Group) and from calcareous siltstone beds of the Earthy Siltstone Member (Tal Group), showing closely spaced annulations and lacking branching. The beginning and terminal points are indistinguishable. In certain cases, individual specimens are characterized by irregular, low-angle to high-angle sinuosity. The fossil has been variously described as a body fossil, an ichnofossil, and an alga. Detailed study of this enigmatic fossil is needed to resolve the long-standing controversy regarding its phylogenetic and stratigraphic placement, which will be an important contribution to the evolutionary history of metazoans. S. ningqiangensis has been known from the late Neoproterozoic (Ediacaran) of southern and central China (Sichuan, Shaanxi, Qinghai and Guizhou provinces and the Ningxia Hui Autonomous Region), from the Siberian Platform, and across the Pc/C boundary from the latest Neoproterozoic to the earliest Cambrian of northern India. Shaanxilithes is considered an Ediacaran organism that spans the Precambrian-Cambrian boundary, an interval marked by significant taphonomic and ecological transformations that include not only innovation but also probable extinction. All past well-constrained finds of S. ningqiangensis are restricted to the Ediacaran. However, owing to the new recoveries from the Nigalidhar Syncline, the stratigraphic status of the S. ningqiangensis-bearing Earthy Siltstone Member of the Shaliyan Formation of the Tal Group (Cambrian) is rendered uncertain, even though the overlying Chert Member in the adjoining Korgai Syncline has yielded definite early Cambrian acritarchs. The moot question is whether the Earthy Siltstone Member represents an Ediacaran or an early Cambrian age. It would be interesting to determine whether Shaanxilithes, so far known only from Ediacaran sequences, could transgress into the early Cambrian, in other words, whether it could withstand the Pc/C boundary event. GC-MS data show that the S. ningqiangensis structure is formed of hydrocarbon organic compounds filled with inorganic fillers such as silica, calcium and phosphorus. The structure is a mixture of organic compounds of high molecular weight, containing several saturated rings with hydrocarbon chains having an occasional isolated carbon-carbon double bond and containing, in addition, small amounts of nitrogen, sulfur and oxygen. The data also reveal the presence of nitrogen, either in peptide chains (amide/amine) or in chemical form (nitrates/nitrites). The formula weight and the C/H weight ratio are those expected for algae-derived organics, since algae produce fatty acids as well as other hydrocarbons such as carotenoids.

Keywords: GC-MS analysis, Lesser Himalaya, Pc/C boundary, Shaanxilithes

Procedia PDF Downloads 260
60 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is the most accessible means of communication, allowing us to quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type them, and likewise easier to listen to audio played from a device than to read output from a screen. Especially with robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design an "Audio-Visual Co-Data Processing Pipeline." This pipeline integrates automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. Many deep learning models exist for each of these modules, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input containing the target objects to be detected and the start and end times of the required interval of the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the natural language model Generative Pre-Trained Transformer-3 (GPT-3). Based on the summary, the relevant frames are extracted from the video, and the You Only Look Once (YOLO) object detection model detects objects in these frames. Frame numbers containing target objects (the objects specified in the speech command) are saved as text. Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. The project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats, achieved by including sample examples of each format in the prompt used by the GPT-3 model; based on user preference, a new command format can be added by including examples of that format in the prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded with this pipeline so that speech commands are given as input and the output is played from the device.
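
A minimal orchestration sketch of the pipeline is given below, with dummy stand-ins for the four model stages (QuartzNet ASR, GPT-3 summarization, YOLO detection, text-to-speech). All function names, signatures, and return shapes are illustrative assumptions, not the OpenVINO APIs.

```python
def speech_to_text(audio_path):
    # Stand-in for the QuartzNet ASR stage (hypothetical output).
    return "find the dog and the chair between second 2 and second 4"

def summarize_command(text):
    # Stand-in for the GPT-3 prompt-based summary stage.
    return {"labels": ["dog", "chair"], "start": 2.0, "end": 4.0}

def detect_objects(frame):
    # Stand-in for the YOLO stage; here each "frame" is already a list
    # of label names, so detection is a pass-through for illustration.
    return frame

def text_to_speech(text):
    # Stand-in for the TTS stage.
    print("[TTS]", text)

def run_pipeline(audio_path, video_frames, fps=25):
    command = summarize_command(speech_to_text(audio_path))
    first, last = int(command["start"] * fps), int(command["end"] * fps)
    # Keep frame indices where any target label from the command appears.
    hits = [i for i in range(first, min(last, len(video_frames)))
            if any(t in detect_objects(video_frames[i]) for t in command["labels"])]
    text_to_speech("Target objects found in frames: " + ", ".join(map(str, hits)))

# Toy video: 150 frames at 25 fps; dog and chair appear in frames 60-69.
frames = [["dog", "chair"] if 60 <= i < 70 else [] for i in range(150)]
run_pipeline("command.wav", frames)
```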

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 80
59 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate, dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that remains a real challenge for many scientists. Support vector machines (SVMs) are among the most used statistical learning algorithms based on kernels. The SVM is a non-parametric method, recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Data are rarely linearly separable; using a kernel, an SVM implicitly maps the data into a higher-dimensional space in which they become linearly separable, while performing all computations in the original space. This is one of the main reasons SVMs are well suited to high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVMs have also shown their potential to cope with uncertainty in data caused by noise and fluctuation, and they are computationally efficient compared with several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, SVM classification of ALS LiDAR data is proposed. First, connected component analysis is applied to cluster the point cloud. Second, the resulting clusters are fed to the SVM classifier. The radial basis function (RBF) kernel is used because of the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. To optimize the classification rates, parameter selection is explored: it consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation. The ALS data are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three classes of roof superstructures are considered, i.e., a total of 4 classes. The training and test sets were selected randomly several times. The results demonstrate that parameter selection can restrict the search to a narrow interval of (C, γ) that can be explored further, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision trees. The comparison shows the superiority of the SVM classifier with parameter selection for LiDAR data.
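
As a sketch of the tuning procedure described above, the snippet below runs a grid search over (C, γ) with 5-fold cross-validation for an RBF-kernel SVM using scikit-learn. The synthetic four-class features are a stand-in for the per-cluster descriptors derived from the ALS point cloud; the grid values are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Synthetic stand-in for per-cluster LiDAR features (4 classes).
X, y = make_classification(n_samples=400, n_features=8, n_classes=4,
                           n_informative=6, random_state=0)

# Grid search over (C, gamma) with 5-fold cross-validation,
# scoring by overall accuracy as in the study.
param_grid = {"C": [0.1, 1, 10, 100],
              "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("best (C, gamma):", search.best_params_)
print("best CV overall accuracy:", round(search.best_score_, 3))
```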

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 148
58 Reworking of the Anomalies in the Discounted Utility Model as a Combination of Cognitive Bias and Decrease in Impatience: Decision Making in Relation to Bounded Rationality and Emotional Factors in Intertemporal Choices

Authors: Roberta Martino, Viviana Ventre

Abstract:

Every day we face choices whose consequences are deferred in time. These are intertemporal choices, and they play an important role in the social, economic, and financial world. The Discounted Utility Model is the reference mathematical model for calculating the utility of intertemporal prospects. The discount rate is its main element, as it describes how the individual perceives the indeterminacy of subsequent periods. Empirical evidence has shown a discrepancy between the behavior predicted by the model and the actual choices made by decision makers. In particular, the term temporal inconsistency denotes choices that do not remain optimal with the passage of time. This phenomenon has been described with hyperbolic models of the discount rate, which, unlike the constant rate implied by the linear or exponential form assumed in the discounted utility model, varies over time. This paper explores the problem of inconsistency by tracing the decision-making process through the concept of impatience. The degree of impatience and the degree of decrease of impatience are two parameters that quantify the weight of emotional factors and cognitive limitations during the evaluation and selection of alternatives. Although the theory assumes perfectly rational decision makers, behavioral finance and cognitive psychology have shown that distortions and emotional influences inevitably affect the decision-making process. The degree to which impatience decreases is the focus of the first part of the study. By comparing preferences that are consistent and inconsistent over time, it was possible to verify that some anomalies in the discounted utility model result from the combination of cognitive bias and emotional factors. In particular: the delay effect and the interval effect are compared through the concept of misperception of time; starting from psychological considerations, a criterion is proposed for identifying the causes of the magnitude effect that considers differences in outcomes rather than their ratio; and the sign effect is analyzed by integrating the psychological aspects of loss aversion from Prospect Theory into the evaluation of prospects with negative outcomes. An experiment confirmed three findings: the greatest variation in the degree of decrease in impatience corresponds to shorter intervals close to the present; the greatest variation in the degree of impatience occurs for outcomes of lower magnitude; and the variation in the degree of impatience is greatest for negative outcomes. The experimental phase was implemented by constructing the hyperbolic factor from questionnaires designed for each anomaly. This work formalizes the underlying causes of the discrepancy between the discounted utility model and the empirical evidence of preference reversal.
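
A small numeric sketch of the temporal inconsistency discussed above: under exponential discounting, D(t) = e^(-rt), the preference between a smaller-sooner and a larger-later reward never reverses when both options are pushed back equally, whereas under hyperbolic discounting, D(t) = 1/(1 + kt), it can. All parameter and outcome values below are illustrative.

```python
import math

def exp_discount(t, r=0.05):
    return math.exp(-r * t)        # exponential: constant discount rate

def hyp_discount(t, k=0.05):
    return 1.0 / (1.0 + k * t)     # hyperbolic: rate declines with delay

small, t_small = 100.0, 0.0        # smaller-sooner outcome
large, t_large = 120.0, 5.0        # larger-later outcome

for delay in (0, 30):              # evaluate now, then with both pushed back
    for name, d in (("exponential", exp_discount), ("hyperbolic", hyp_discount)):
        v_small = small * d(t_small + delay)
        v_large = large * d(t_large + delay)
        choice = "sooner" if v_small > v_large else "later"
        print(f"delay={delay:>2} {name:>11}: prefers {choice}")
# Exponential prefers "sooner" in both rows; hyperbolic flips to "later"
# once both options are distant, i.e., a preference reversal.
```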

Keywords: decreasing impatience, discounted utility model, hyperbolic discount, hyperbolic factor, impatience

Procedia PDF Downloads 103
57 Using Inverted 4-D Seismic and Well Data to Characterise Reservoirs from Central Swamp Oil Field, Niger Delta

Authors: Emmanuel O. Ezim, Idowu A. Olayinka, Michael Oladunjoye, Izuchukwu I. Obiadi

Abstract:

Monitoring of reservoir properties prior to well placement and production is a requirement for optimized and efficient oil and gas production. This is usually done using well-log analyses and 3-D seismic, which are often prone to errors. However, 4-D (time-lapse) seismic, which incorporates several 3-D seismic surveys of the same field acquired with the same parameters and portrays the transient changes in the reservoir due to production effects over time, can be utilized because it generates better resolution. There is, however, a dearth of information on the applicability of this approach in the Niger Delta. This study was therefore designed to apply 4-D seismic, well-log and geologic data to the monitoring of reservoirs in the EK field of the Niger Delta, with the aim of locating bypassed accumulations and ensuring effective reservoir management. The field (EK) covers an area of about 1200 km² of early Miocene (18 Ma) age. Data covering two 4-D vintages acquired over a fifteen-year interval were obtained from oil companies operating in the field. The data were analyzed to determine the seismic structures, horizons, well-to-seismic tie (WST), and wavelets. Well logs and production history data from fifteen selected wells were also collected from the oil companies. Formation evaluation, petrophysical analysis and inversion, alongside geological data, were undertaken using Petrel, Shell-nDi, Techlog and Jason software. Well-to-seismic ties, formation evaluation and saturation monitoring using petrophysical and geological data were used to find bypassed hydrocarbon prospects. The seismic vintages were interpreted, and the amount of change in the reservoir was defined by the differences between the acoustic impedance (AI) inversions of the base and monitor surveys. AI rock properties were estimated from all the seismic amplitudes using controlled sparse-spike inversion, and the estimated rock properties were used to produce AI maps. The structural analysis showed the dominance of NW-SE trending rollover collapsed-crest anticlines in EK, with hydrocarbons trapped northwards. There were good ties in wells EK 27 and EK 39. The analyzed wavelets revealed consistent amplitude and phase for the WST, hence a good match between the inverted impedance and the well data. Evidence of large pay thickness, ranging from 2875 ms (11420 ft TVDSS) to about 2965 ms, was found around well EK 39, with good yield properties. The comparison between the base and monitor AI volumes and the generated AI maps revealed zones of untapped hydrocarbons and assisted in determining fluid movements. The inverted sections through EK 27 and EK 39 (within 3101 m - 3695 m) indicated depletion in the reservoirs. The present non-uniform gas-oil and oil-water contact movements extended from 3554 to 3575 m. The 4-D seismic approach led to better reservoir characterization, improved well placement and the location of deeper and bypassed hydrocarbon reservoirs.
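
As a toy illustration of the time-lapse differencing step described above, the sketch below subtracts a base-survey acoustic impedance volume from a monitor-survey volume and flags cells whose change exceeds a cutoff as candidate depletion (or bypassed) zones. The array shapes, values, and threshold are assumptions for illustration, not field parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (50, 50, 100)                    # inline, crossline, time samples
ai_base = 6000 + 200 * rng.standard_normal(shape)

# Synthetic monitor survey: a production effect raises AI in one zone.
ai_monitor = ai_base.copy()
ai_monitor[20:30, 20:30, 40:60] += 350

delta_ai = ai_monitor - ai_base          # 4-D impedance difference
threshold = 300                          # assumed cutoff for "real" change
changed = np.abs(delta_ai) > threshold   # flagged cells

print("cells flagged as changed:", int(changed.sum()))
print("mean delta-AI inside flagged zone:", round(float(delta_ai[changed].mean()), 1))
```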

Keywords: reservoir monitoring, 4-D seismic, well placements, petrophysical analysis, Niger Delta basin

Procedia PDF Downloads 117
56 Differentiating Third Instar Larvae of Three Species of Flies (Family: Sarcophagidae) of Potential Forensic Importance in Jamaica, Using Morphological Characteristics

Authors: Rochelle Daley, Eric Garraway, Catherine Murphy

Abstract:

Crime is a major problem in Jamaica, as is the high number of unsolved violent crimes. The introduction of forensic entomology into criminal investigations has the potential to decrease the number of unsolved violent crimes through the estimation of the post-mortem interval (PMI), or time since death. Though it has great potential, forensic entomology requires data from insects specific to a geographical location in order to be credibly applied in legal investigations. It is a relatively new area of study in the Caribbean, with multiple pioneer research opportunities. Of critical importance in forensic entomology is the ability to identify the species of interest. Larvae are commonly collected at crime scenes, so a means of rapid identification is crucial, and a low-cost method is critical in countries with a limited crime-fighting budget. Sarcophagids are among the most important colonisers of a carcass; however, they are difficult to distinguish morphologically because of their similarities, and there is a lack of research on the larvae of this family. This research contributes to closing that gap, having identified the larvae of three species from the family Sarcophagidae: Peckia nicasia, Peckia chrysostoma and Blaesoxipha plinthopyga, important agents in flesh decomposition. Adults of Sarcophagidae are also difficult to differentiate, often requiring study of the genitalia; the use of larvae in species identification is important in such cases. Adult sarcophagids were attracted using bottle traps baited with pig liver. These adults larviposited, and the larvae were collected and colonies (generations 2 and 3) reared at room temperature for morphological work (n=50). The posterior ends of the larvae, from segments 9 or 10, were removed and mounted posterior end upwards for study under a light microscope at magnification ×200 (posterior cavity and intersegmental spine bands) and ×640 (anterior and posterior spiracles). The remaining sections of the larvae were cleared in 10% KOH, and the cephalopharyngeal skeleton was dissected out and measured at different points. The cephalopharyngeal skeletons show observable differences in the shapes and sizes of the mouth hooks as well as in the length of the ventral cornua. The most notable differences between species are in the general shape of the anal segments and the shape of the posterior spiracles. Intersegmental spine bands become less pigmented and less visible as the larvae change instars; spine bands, along with the anterior spiracle, are therefore not recommended as features for species distinction. Larvae can potentially be used to distinguish sarcophagids to species level, based on observable differences in the anal segments and the cephalopharyngeal skeletons. However, this method of identification should be tested by comparing these morphological features with those of other Jamaican sarcophagids to further support this conclusion.

Keywords: 3rd instar larval morphology, forensic entomology, Jamaica, Sarcophagidae

Procedia PDF Downloads 147
55 Birth Weight, Weight Gain and Feeding Pattern as Predictors for the Onset of Obesity in School Children

Authors: Thimira Pasas P, Nirmala Priyadarshani M, Ishani R

Abstract:

Obesity is a global health issue. Early identification is essential in order to plan interventions and to reduce the worsening of obesity and its consequences for the individual's health. Childhood obesity is multifactorial, with both modifiable and unmodifiable risk factors: a genetically susceptible individual (unmodifiable), when placed in an obesogenic environment (modifiable), is likely to become obese. The present study was conducted to identify the age of onset of childhood obesity and the influence of modifiable risk factors for childhood obesity among school children living in a suburban area of Sri Lanka. The study population consisted of 11-12-year-old children attending government schools in the Piliyandala Educational Zone. A stratified random sampling method was used to select schools so that all three types of government schools were represented; owing to the prevailing pandemic situation, information from the last school medical inspection (2020 data) was used for this purpose. For each obese child identified, two non-obese children were selected as controls, using a systematic random sampling method with a sampling interval of 3. Data were collected using a validated, pre-tested self-administered questionnaire and the Child Health Development Record of the child; an introduction, with explanations and instructions for filling in the questionnaire, was carried out as a group activity before the questionnaire was distributed. A total of 130 children (66 males: 64 females) participated in the study. The results aligned with the hypothesis that the onset of childhood obesity lies within the first two years of life. The risk of obesity at 11-12 years of age was three times higher among females who underwent rapid weight gain during infancy. Consuming a mug of milk before breakfast also emerged as a risk factor, increasing the risk of obesity three-fold, especially among females. Proper monitoring is therefore needed to identify rapid weight gain, especially within the first two years of life. Identification of the confounding factors, proper awareness among mothers/guardians and effective interventions are needed to reduce the obesity risk among school children in the future.

Keywords: childhood obesity, school children, age of onset, weight gain, feeding pattern, activity level

Procedia PDF Downloads 141
54 Safety Profile of Human Papillomavirus Vaccines: A Post-Licensure Analysis of the Vaccine Adverse Events Reporting System, 2007-2017

Authors: Giulia Bonaldo, Alberto Vaccheri, Ottavio D'Annibali, Domenico Motola

Abstract:

Human papillomavirus (HPV) has been shown to cause different types of carcinomas, first of all cervical intraepithelial neoplasia. From the early 1980s to today, thanks first to preventive screening campaigns (Pap test) and subsequently to the introduction of HPV vaccines on the market, the number of new cases of cervical cancer has decreased significantly. Three HPV vaccines are currently approved: Cervarix® (HPV2 - virus types 16 and 18), Gardasil® (HPV4 - 6, 11, 16, 18) and Gardasil 9® (HPV9 - 6, 11, 16, 18, 31, 33, 45, 52, 58), all of which protect against the two high-risk types (16 and 18) mainly involved in cervical cancers. Although the remarkable effectiveness of these vaccines has been demonstrated, in recent years there have been many complaints about their risk-benefit profile, due to adverse events following immunization (AEFI). The purpose of this study is to support the ongoing discussion of the safety profile of HPV vaccines with real-life data derived from spontaneous reports of suspected AEFIs collected in the Vaccine Adverse Event Reporting System (VAERS). VAERS is a freely available national vaccine safety surveillance database of AEFIs, co-administered by the Centers for Disease Control and Prevention (CDC) and the Food and Drug Administration (FDA). We collected all reports between January 2007 and December 2017 related to HPV vaccines with a brand name (HPV2, HPV4, HPV9) or without (HPVX). A disproportionality analysis using the reporting odds ratio (ROR) with 95% confidence interval and p-value ≤ 0.05 was performed. Over the 10-year period, 54889 reports of AEFIs related to HPV vaccines, corresponding to 224863 vaccine-event pairs, were retrieved from VAERS. The highest number of reports related to Gardasil (n = 42244), followed by Gardasil 9 (7212) and Cervarix (3904); the brand name was not reported in 1529 cases. The two most frequently reported and statistically significant events for each vaccine were: dizziness (n = 5053), ROR = 1.28 (95% CI 1.24-1.31), and syncope (4808), ROR = 1.21 (1.17-1.25), for Gardasil; injection site pain (305), ROR = 1.40 (1.25-1.57), and injection site erythema (297), ROR = 1.88 (1.67-2.10), for Gardasil 9; and headache (672), ROR = 1.14 (1.06-1.23), and loss of consciousness (528), ROR = 1.71 (1.57-1.87), for Cervarix. In total, we collected 406 reports of death and 2461 cases of permanent disability over the ten-year period. Events consisting of incorrect vaccine storage or administration were not considered. The analysis showed that the most frequently reported events are non-serious and listed in the corresponding SmPCs. In addition, potential safety signals arose regarding less frequent and more severe AEFIs that deserve further investigation. This already happened with the European Medicines Agency (EMA) referral for postural orthostatic tachycardia syndrome (POTS) and complex regional pain syndrome (CRPS) associated with anti-papillomavirus vaccines.
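
For reference, a minimal sketch of the disproportionality measure used above: the reporting odds ratio from a 2×2 table of reports, with its 95% confidence interval (a signal is typically flagged when the lower bound exceeds 1). The counts passed in below are hypothetical, not the study's data.

```python
import math

def ror_with_ci(a, b, c, d):
    """a: reports of the event with the vaccine of interest
    b: reports of other events with the vaccine of interest
    c: reports of the event with all other products
    d: reports of other events with all other products"""
    ror = (a / b) / (c / d)
    # Standard error of log(ROR) from the 2x2 cell counts.
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo, hi = (math.exp(math.log(ror) + s * 1.96 * se) for s in (-1, 1))
    return ror, lo, hi

ror, lo, hi = ror_with_ci(5053, 180000, 60000, 2_700_000)  # hypothetical counts
print(f"ROR = {ror:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```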

Keywords: adverse drug reactions, pharmacovigilance, safety, vaccines

Procedia PDF Downloads 165
53 Enhancing Project Management Performance in Prefabricated Building Construction under Uncertainty: A Comprehensive Approach

Authors: Niyongabo Elyse

Abstract:

Prefabricated building construction is a pioneering approach that combines design, production, and assembly to attain energy efficiency, environmental sustainability, and economic feasibility. Despite continuous development of the industry in China, the low technical maturity of standardized design, factory production, and construction assembly introduces uncertainties affecting prefabricated component production and on-site assembly processes. This research focuses on enhancing project management performance under uncertainty to help enterprises navigate these challenges and optimize project resources. The study introduces a perspective on how uncertain factors influence the implementation of prefabricated building construction projects. It proposes a theoretical model considering project process management ability, adaptability to uncertain environments, and the collaboration ability of project participants. The impact of uncertain factors is demonstrated through case studies and quantitative analysis, revealing constraints on implementation time, cost, quality, and safety. To address uncertainties in prefabricated component production scheduling, a fuzzy model is presented that expresses processing times as interval values. The model utilizes a cooperative co-evolutionary algorithm (CCEA) to optimize scheduling, demonstrated through a real case study showing reduced project duration and minimized effects of processing time disturbances. Additionally, the research addresses on-site assembly construction scheduling, considering the relationship between task processing times and assigned resources. A multi-objective model with fuzzy activity durations is proposed, employing a hybrid cooperative co-evolutionary algorithm (HCCEA) to optimize project scheduling. Results from real case studies indicate improved project performance in terms of duration, cost, and resilience to processing time delays and resource changes. The study also introduces a multistage dynamic process control model that utilizes IoT technology for real-time monitoring during component production and construction assembly. This approach dynamically adjusts schedules when constraints arise, leading to enhanced project management performance, as demonstrated in a real prefabricated housing project. Key contributions include a fuzzy prefabricated component production scheduling model, a multi-objective, multi-mode, resource-constrained construction project scheduling model with fuzzy activity durations, a multi-stage dynamic process control model, and a cooperative co-evolutionary algorithm. The integrated mathematical model addresses the complexity of prefabricated building construction project management, providing a theoretical foundation for practical decision-making in the field.
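
As a toy illustration of the interval-valued processing times mentioned above (not the paper's CCEA model), the sketch below propagates duration intervals [lo, hi] through a two-line production plan with simple interval arithmetic to bound the makespan; the tasks and values are hypothetical.

```python
def iv_add(a, b):
    # Sum of two interval durations: add lower and upper bounds.
    return (a[0] + b[0], a[1] + b[1])

def iv_max(a, b):
    # Latest-finish of two intervals: max of lower and of upper bounds.
    return (max(a[0], b[0]), max(a[1], b[1]))

# Hypothetical component-production tasks, durations in hours.
line_a = [(2.0, 3.0), (4.0, 5.5)]   # sequential tasks on line A
line_b = [(3.5, 4.0), (1.0, 2.5)]   # sequential tasks on line B

finish_a = (0.0, 0.0)
for task in line_a:
    finish_a = iv_add(finish_a, task)
finish_b = (0.0, 0.0)
for task in line_b:
    finish_b = iv_add(finish_b, task)

# Assembly can start only after both lines finish.
makespan = iv_max(finish_a, finish_b)
print(f"makespan bounds: [{makespan[0]}, {makespan[1]}] hours")  # [6.0, 8.5]
```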

Keywords: prefabricated construction, project management performance, uncertainty, fuzzy scheduling

Procedia PDF Downloads 51
52 Identification and Understanding of Colloidal Destabilization Mechanisms in Geothermal Processes

Authors: Ines Raies, Eric Kohler, Marc Fleury, Béatrice Ledésert

Abstract:

In this work, the impact of clay minerals on the formation damage of sandstone reservoirs is studied to provide a better understanding of the problem of deep geothermal reservoir permeability reduction due to fine-particle dispersion and migration. In some situations, despite the presence of filters in the geothermal loop at the surface, particles smaller than the filter size (<1 µm) may, surprisingly, generate significant permeability reduction, affecting the overall long-term performance of the geothermal system. Our study is carried out on cores from a Triassic reservoir in the Paris Basin (Feigneux, 60 km northeast of Paris). To first identify the clays responsible for clogging, a mineralogical characterization of these natural samples was carried out by coupling X-ray diffraction (XRD), scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS). The results show that the studied stratigraphic interval contains mostly illite and chlorite particles. Moreover, the spatial arrangement of the clays in the rocks, as well as the morphology and size of the particles, suggests that illite is more easily mobilized by the flow in the pore network than chlorite. Based on these results, illite particles were prepared and used in core-flooding experiments in order to better understand the factors leading to the aggregation and deposition of this type of clay particle in geothermal reservoirs under various physicochemical and hydrodynamic conditions. First, the stability of illite suspensions under geothermal conditions was investigated using different characterization techniques, including dynamic light scattering (DLS) and scanning transmission electron microscopy (STEM). Various parameters, such as the hydrodynamic radius (around 100 nm) and the morphology and surface area of the aggregates, were measured. Then, core-flooding experiments were carried out using sand columns to mimic the permeability decline due to the injection of illite-containing fluids into sandstone reservoirs. In particular, the effects of ionic strength, temperature, particle concentration and flow rate of the injected fluid were investigated. When the ionic strength increases, a permeability decline of more than a factor of 2 can be observed for pore velocities representative of in-situ conditions. Further details of particle retention in the columns were obtained using magnetic resonance imaging and X-ray tomography, showing that the particle deposition is non-uniform along the column. It is clearly shown that very fine particles, as small as 100 nm, can generate significant permeability reduction under specific conditions in high-permeability porous media representative of the Triassic reservoirs of the Paris Basin. These retention mechanisms are explained within the general framework of DLVO theory.
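
For context, DLVO theory writes the total particle-surface interaction energy as the sum of an attractive van der Waals term and a repulsive electrical double-layer term; the sphere-plate, small-separation forms below are standard textbook expressions quoted as general background, not results from this study:

```latex
V_{\mathrm{tot}}(h) = V_{\mathrm{vdW}}(h) + V_{\mathrm{EDL}}(h),
\qquad
V_{\mathrm{vdW}}(h) \approx -\frac{A_H R}{6h},
\qquad
V_{\mathrm{EDL}}(h) \propto e^{-\kappa h}
```

Here h is the particle-surface separation, A_H the Hamaker constant, R the particle radius, and κ the inverse Debye length. Because κ grows with ionic strength, a more saline fluid screens the double-layer repulsion, lowers the energy barrier, and thereby favors the aggregation and deposition observed at high ionic strength.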

Keywords: geothermal energy, reinjection, clays, colloids, retention, porosity, permeability decline, clogging, characterization, XRD, SEM-EDS, STEM, DLS, NMR, core flooding experiments

Procedia PDF Downloads 178
51 Socio-Economic Factors Associated with the Nutritional Status of Children in Western Uganda

Authors: Baguma Daniel Kajura

Abstract:

The study explores the socio-economic, health-related and individual factors that influence the breastfeeding habits of mothers and their effect on the nutritional status of their infants in the Rwenzori region of Western Uganda. A cross-sectional research design was adopted, involving self-administered questionnaires, interview guides, and focused group discussion guides to assess the extent to which socio-demographic factors associated with breastfeeding practices influence child malnutrition. Using this design, data were collected from 276 of the 318 selected mother-infant pairs over a period of ten days. The sample size was calculated using the Kish Leslie formula for cross-sectional studies, N = Zα² × P(1 − P) / δ², where N is the estimated number of mother-infant pairs; P is the assumed true population prevalence of malnutrition among mother-infant pairs (P = 29.3%, so 1 − P = 70.7%); Zα is the standard normal deviate at the 95% confidence level, corresponding to 1.96; and δ is the absolute error between the estimated and true population prevalence of malnutrition, set at 5%. The calculated sample size was N = 1.96² × (0.293 × 0.707) / 0.05² = 318 mother-infant pairs. Demographic and socio-economic data for all mothers were entered into Microsoft Excel and then exported to STATA 14 (StataCorp, 2015). Anthropometric measurements were taken for all children by the researcher and trained assistants, who physically weighed the children; each child's age was obtained from the immunization card. Bivariate logistic regression analysis was used to assess the relationship between socio-demographic factors associated with breastfeeding practices and child malnutrition, and multivariable regression analysis was used to establish whether any true relationships exist between these factors as independent variables and child stunting and underweight as dependent variables. Descriptive statistics on the background characteristics of the mothers were generated; frequencies and means were computed and presented in frequency distribution tables, and the distribution of stunting and underweight among infants was then determined by socio-economic and demographic factors. Findings reveal that children of mothers who used milk substitutes besides breastfeeding are over two times more likely to be stunted than those whose mothers exclusively breastfed them; feeding children milk substitutes instead of breastmilk predisposes them to both stunting and underweight. Children of mothers between 18 and 34 years of age were less likely to be underweight, as were those who were breastfed over ten times a day. The study further reveals that 55% of the children were underweight and 49% were stunted. Of the underweight children, equal numbers (58/151, 38% each) were mildly or moderately underweight, and 23% (35/151) were severely underweight. Empowering community outreach programs by increasing knowledge of, and access to, services for the integrated management of child malnutrition is crucial to curbing child malnutrition in rural areas.
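
For reference, the Kish Leslie calculation quoted above can be reproduced directly with the abstract's own inputs:

```python
def kish_leslie_n(p, z=1.96, d=0.05):
    # N = Z^2 * P * (1 - P) / d^2 for a cross-sectional prevalence study.
    return z**2 * p * (1 - p) / d**2

n = kish_leslie_n(0.293)       # P = 29.3%, 95% confidence, 5% absolute error
print(round(n, 1))             # 318.3, i.e. the study's reported 318 pairs
```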

Keywords: infant and young child feeding, breastfeeding, child malnutrition, maternal health

Procedia PDF Downloads 24
50 Valorization of Surveillance Data and Assessment of the Sensitivity of a Surveillance System for an Infectious Disease Using a Capture-Recapture Model

Authors: Jean-Philippe Amat, Timothée Vergne, Aymeric Hans, Bénédicte Ferry, Pascal Hendrikx, Jackie Tapprest, Barbara Dufour, Agnès Leblond

Abstract:

The surveillance of infectious diseases is necessary to describe their occurrence and to help the planning, implementation and evaluation of risk mitigation activities. However, the exact number of detected cases may remain unknown when surveillance is based on serological tests, because identifying seroconversion may be difficult. Moreover, incomplete detection of cases or outbreaks is a recurrent issue in the field of disease surveillance. This study addresses these two issues. Using a viral animal disease as an example (equine viral arteritis), the goals were to establish suitable rules for identifying seroconversion, in order to estimate the number of cases and outbreaks detected by a surveillance system in France between 2006 and 2013, and to assess the sensitivity of this system by estimating the total number of outbreaks that occurred during this period (including unreported outbreaks) using a capture-recapture model. Data from horses that exhibited at least one positive serological result by viral neutralization test between 2006 and 2013 were used for the analysis (n=1,645). The data consisted of the annual antibody titers and the location of the subjects (towns). A consensus among multidisciplinary experts (specialists in the disease and its laboratory diagnosis, and epidemiologists) was reached to define seroconversion as a change in antibody titer from negative to at least 32, or as a three-fold or greater increase. The number of seroconversions was counted for each town and modeled using a unilist zero-truncated binomial (ZTB) capture-recapture model in R, with the number of horses tested in each infected town as the binomial denominator. Using the defined rules, 239 cases located in 177 towns (outbreaks) were identified from 2006 to 2013. The sensitivity of the surveillance system was then estimated as the ratio of the number of detected outbreaks to the total number of outbreaks (including unreported ones) estimated with the ZTB model. The total number of outbreaks was estimated at 215 (95% credible interval, CrI95%: 195-249) and the surveillance sensitivity at 82% (CrI95%: 71-91). The rules proposed for identifying seroconversion may serve future research; adjusted to the local environment, they could conceivably be applied in other countries with surveillance programs dedicated to this disease. More generally, defining ad hoc algorithms for interpreting antibody titers could be useful for other human and animal diseases and zoonoses when accurate information about the serological response in naturally infected subjects is lacking in the literature. This study shows how capture-recapture methods may help to estimate the sensitivity of an imperfect surveillance system and to valorize surveillance data. The sensitivity of the surveillance system for equine viral arteritis is relatively high, supporting its relevance for preventing disease spread.
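
As a simplified, single-parameter sketch of the unilist ZTB idea (not the authors' exact model), the snippet below fits a common seroconversion-detection probability from towns that entered the data only because they had at least one detection, then inflates the observed town count Horvitz-Thompson style to estimate the total number of outbreaks and the surveillance sensitivity. The per-town counts are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

n = np.array([10, 8, 12, 6, 9, 15, 7, 11])  # horses tested per detected town
k = np.array([ 2, 1,  3, 1, 1,  4, 1,  2])  # seroconversions per detected town

def neg_log_lik(p):
    # Zero-truncated binomial log-likelihood:
    # log P(K = k | K >= 1) = log pmf(k) - log(1 - P(K = 0)).
    ll = binom.logpmf(k, n, p) - np.log1p(-binom.pmf(0, n, p))
    return -ll.sum()

p_hat = minimize_scalar(neg_log_lik, bounds=(1e-4, 0.999), method="bounded").x

# Probability that a town with n tests would have been detected at all,
# then Horvitz-Thompson inflation of the observed town count.
detect_prob = 1 - (1 - p_hat) ** n
n_total = np.sum(1 / detect_prob)
sensitivity = len(n) / n_total

print(f"p_hat = {p_hat:.3f}, estimated total towns = {n_total:.1f}, "
      f"sensitivity = {sensitivity:.0%}")
```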

Keywords: Bayesian inference, capture-recapture, epidemiology, equine viral arteritis, infectious disease, seroconversion, surveillance

Procedia PDF Downloads 300