Search results for: annual and daily flow duration curve

482 Antiangiogenic and Pro-Apoptotic Properties of Shemamruthaa: An Herbal Preparation in Experimental Mammary Carcinoma-Bearing Rats and Breast Cancer Cell Line In vitro

Authors: Nandhakumar Elumalai, Purushothaman Ayyakannu, Sachidanandam T. Panchanatham

Abstract:

Background: Understanding the basic mechanisms and factors underlying tumor growth and invasion has gained attention in recent times. The processes of angiogenesis and apoptosis are known to play a vital role in various stages of cancer. Vascular endothelial growth factor (VEGF) is well established as one of the key regulators of tumor angiogenesis, while MMPs are known for their exclusive ability to degrade the ECM. Objective: The present study was designed to evaluate the pro-apoptotic and anti-angiogenic activity of the herbal formulation Shemamruthaa. The anticancer activity of Shemamruthaa was first tested in the breast cancer cell line MCF-7. Results of MTT, trypan blue and flow cytometric analyses of apoptosis suggested that Shemamruthaa can induce cytotoxicity in cancer cells in a concentration- and time-dependent manner and induce apoptosis. With these results, we further evaluated the anti-angiogenic and pro-apoptotic activities of Shemamruthaa in DMBA-induced mammary carcinoma in Sprague-Dawley rats. Mammary tumours were induced in 8-week-old Sprague-Dawley rats by gastric intubation of 25 mg DMBA in 1 ml olive oil. After a 90-day induction period, the rats were orally administered Shemamruthaa (400 mg/kg body weight) for 45 days. Treatment with the drug SM significantly modulated the expression of p53, MMP-2, MMP-3, MMP-9 and VEGF by means of its anti-angiogenic and protease-inhibiting activity. Conclusion: Based on these results, it may be concluded that the formulation Shemamruthaa, constituted of dried flowers of Hibiscus rosa-sinensis, fruits of Emblica officinalis, and honey, exhibits pronounced antiproliferative and apoptotic effects. This enhanced anticancer effect of Shemamruthaa might be attributed to the synergistic action of polyphenols such as flavonoids, tannins, alkaloids, glycosides, saponins, steroids, terpenoids, vitamin C, niacin, pyrogallol, hydroxymethylfurfural, trilinolein, and other compounds present in the formulation. Collectively, these results demonstrate that Shemamruthaa holds potential to be developed as a potent chemotherapeutic agent against mammary carcinoma.

Keywords: Shemamruthaa, flavonoids, MCF-7 cell line, mammary cancer

Procedia PDF Downloads 252
481 Recognizing Human Actions by Multi-Layer Growing Grid Architecture

Authors: Z. Gharaee

Abstract:

Recognizing actions performed by others is important in our daily lives since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of the motions involved in the performance, and we use our experience and concepts to make a correct recognition of the actions. Although building action concepts is a life-long process, which is repeated throughout life, we are very efficient in applying our learned concepts in analyzing motions and recognizing actions. Experiments on subjects observing the actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture is proposed using growing grid layers. The first-layer growing grid receives the pre-processed data of consecutive 3D postures of joint positions and applies some heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. The ordered vector representation layer receives the action pattern vectors to create time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization; this grid creates the clusters representing the actions. Finally, a one-layer neural network trained with the delta rule labels the action categories in the last layer. System performance has been evaluated in an experiment with the publicly available MSR-Action3D dataset, which contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, Pick Up and Throw. The growing grid architecture was trained on several random selections of the data, with the remainder reserved for generalization testing, using on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparative analysis of the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior to the SOM architecture in the action recognition task. The SOM architecture completes learning the same dataset of actions in around 150 epochs for each training of the first-layer SOM, while it takes 1200 epochs for each training of the second-layer SOM, and it achieves an average recognition accuracy of 90% on the generalization test data. In summary, using the growing grid network preserves the fundamental features of SOMs, such as the topographic organization of neurons, lateral interactions, and the abilities of unsupervised learning and of representing a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
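
To make the training procedure more concrete, the sketch below shows the competitive-learning step that both SOMs and growing grids share (best-matching unit plus Gaussian neighborhood update), together with a deliberately simplified growth step that inserts a new column next to the unit with the largest accumulated quantization error. It is a minimal illustration under these assumptions, not the authors' implementation; all names and parameters are placeholders.

```python
import numpy as np

def train_growing_grid(data, rows=4, cols=4, epochs=100, lr=0.1, sigma=1.0,
                       grow_every=20, max_cols=12, rng=np.random.default_rng(0)):
    """Minimal SOM-style training loop with a naive column-insertion (growth) step.

    data: (n_samples, n_features) array of pre-processed posture vectors.
    Returns the trained weight grid of shape (rows, n_cols, n_features).
    """
    w = rng.normal(size=(rows, cols, data.shape[1]))        # weight vectors of the grid
    qerr = np.zeros((rows, cols))                           # accumulated quantization error

    for epoch in range(epochs):
        for x in data:
            d = np.linalg.norm(w - x, axis=2)               # distance of x to every unit
            r, c = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit (BMU)
            qerr[r, c] += d[r, c]
            # Gaussian neighborhood pulls the BMU and its neighbors toward x
            rr, cc = np.meshgrid(np.arange(w.shape[0]), np.arange(w.shape[1]), indexing="ij")
            h = np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)

        # crude growth phase: insert a column next to the most "stressed" unit
        if (epoch + 1) % grow_every == 0 and w.shape[1] < max_cols:
            _, c_max = np.unravel_index(np.argmax(qerr), qerr.shape)
            new_col = 0.5 * (w[:, c_max] + w[:, min(c_max + 1, w.shape[1] - 1)])
            w = np.insert(w, c_max + 1, new_col, axis=1)
            qerr = np.zeros(w.shape[:2])
    return w

# toy usage: 200 random "posture" vectors with 60 features (e.g. 20 joints x 3D)
grid = train_growing_grid(np.random.default_rng(1).normal(size=(200, 60)))
print(grid.shape)
```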

Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance

Procedia PDF Downloads 157
480 Study of Polychlorinated Dibenzo-P-Dioxins and Dibenzofurans Dispersion in the Environment of a Municipal Solid Waste Incinerator

Authors: Gómez R. Marta, Martín M. Jesús María

Abstract:

The general aim of this paper is to identify the areas of highest concentration of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) around the incinerator through the use of dispersion models. Atmospheric dispersion models are useful tools for estimating and preventing the impact of emissions from a particular source on air quality. These models allow the different factors that influence air pollution to be considered: source characteristics, the topography of the receiving environment and weather conditions, in order to predict pollutant concentrations. PCDD/Fs, after their emission into the atmosphere, are deposited on water or land, near to or far from the emission source depending on the size of the associated particles and on the climatology. In this way, they are transferred and mobilized through environmental compartments. The modelling of PCDD/Fs was carried out with the following tools: the Atmospheric Dispersion Modelling System (ADMS) and Surfer. ADMS is a Gaussian-plume dispersion model used to model the air quality impact of industrial facilities, and Surfer is a surface-mapping program used to represent the dispersion of pollutants on a map. For the modelling of emissions, ADMS requires mainly the following input parameters: characterization of the emission sources (source type, height, diameter, temperature of the release, flow rate, etc.) and meteorological and topographical data (coordinate system). The study area was set at 5 km around the incinerator, and the population center nearest to the PCDD/F emission source is about 2.5 km away. Data were collected during one year (2013) for both the PCDD/F emissions of the incinerator and the meteorology of the study area. The study was carried out over the averaging periods that the legislation establishes; that is to say, the output parameters take into account the current legislation. Once all the data required by ADMS, described previously, were entered, the modelling was run in order to represent the spatial distribution of PCDD/F concentrations and the areas affected by them. In general, the dispersion plume is in the direction of the predominant winds (southwest and northeast). Total levels of PCDD/Fs usually found in air samples are <2 pg/m3 for remote rural areas, 2-15 pg/m3 in urban areas and 15-200 pg/m3 for areas near important sources, such as an incinerator. The results of the dispersion maps show that the maximum concentrations are of the order of 10⁻⁸ ng/m3, well below the values considered typical for areas close to an incinerator, as in this case.
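
Because ADMS is a Gaussian-plume model, the underlying calculation can be illustrated with the classic ground-reflected Gaussian plume equation for a continuous point source. The sketch below is a generic textbook formulation, not the ADMS code; the emission rate, effective stack height, wind speed and dispersion coefficients are placeholder values.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h_eff, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (units of q per m^3, e.g. ng/m^3 for q in ng/s).

    q: emission rate, u: wind speed (m/s), y/z: crosswind and vertical receptor position (m),
    h_eff: effective release height (m), sigma_y/sigma_z: dispersion coefficients (m)
    evaluated at the receptor's downwind distance (they depend on atmospheric stability).
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h_eff)**2 / (2 * sigma_z**2))
                + np.exp(-(z + h_eff)**2 / (2 * sigma_z**2)))   # image source = ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# illustrative numbers only: 1 ng/s PCDD/F source, 3 m/s wind, receptor ~2.5 km downwind
c = gaussian_plume(q=1.0, u=3.0, y=0.0, z=1.5, h_eff=30.0, sigma_y=180.0, sigma_z=90.0)
print(f"ground-level concentration ~ {c:.2e} ng/m3")
```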

Keywords: atmospheric dispersion, dioxin, furan, incinerator

Procedia PDF Downloads 217
479 A Case Report: The Role of Gut Directed Hypnotherapy in Resolution of Irritable Bowel Syndrome in a Medication Refractory Pediatric Male Patient

Authors: Alok Bapatla, Pamela Lutting, Mariastella Serrano

Abstract:

Background: Irritable Bowel Syndrome (IBS) is a functional gastrointestinal disorder characterized by abdominal pain associated with altered bowel habits in the absence of an underlying organic cause. Although the exact etiology of IBS is not fully understood, one of the leading theories postulates a pathology within the Brain-Gut Axis that leads to an overall increase in gastrointestinal sensitivity and adverse changes in gastrointestinal motility. Research and clinical practice have shown that Gut Directed Hypnotherapy (GDH) has a beneficial clinical role in improving Mind-Gut control and thereby comorbid conditions such as anxiety, abdominal pain, constipation, and diarrhea. Aims: This study presents a 17-year-old male with underlying anxiety and a one-year history of IBS, constipation-predominant subtype (IBS-C), who demonstrated impressive improvement of symptoms following GDH treatment after refractory trials of medications including bisacodyl, senna, docusate, magnesium citrate, lubiprostone, and linaclotide. Method: The patient was referred to a licensed clinical psychologist specializing in clinical hypnosis and cognitive-behavioral therapy (CBT), who implemented “The Standardized Hypnosis Protocol for IBS” developed by Dr. Olafur S. Palsson, Psy.D., at the University of North Carolina at Chapel Hill. The hypnotherapy protocol consisted of a total of seven weekly 45-minute sessions supplemented with a 20-minute audio recording to be listened to once daily. Outcome variables included the GAD-7, PHQ-9 and DCI-2, as well as self-ratings (ranging 0-10) for pain (intensity and frequency), emotional distress about IBS symptoms, and overall emotional distress. All variables were measured at intake, prior to administration of the hypnosis protocol, and at the conclusion of the hypnosis treatment. A retrospective IBS questionnaire (IBS Severity Scoring System) was also completed at the conclusion of the GDH treatment for pre- and post-treatment ratings of clinical symptoms. Results: The patient showed improvement in all outcome variables and self-ratings, including abdominal pain intensity, frequency of abdominal pain episodes, emotional distress relating to gut issues, depression, and anxiety. The IBS questionnaire showed a significant improvement, from a severity score of 400 (defined as severe) prior to the GDH intervention to 55 (defined as complete resolution) at four months after the last session. IBS questionnaire subset questions that showed a significant score improvement included abdominal pain intensity, days of pain experienced per 10 days, satisfaction with bowel habits, and overall interference with life caused by IBS symptoms. Conclusion: This case supports the existing research literature showing that GDH has a significantly beneficial role in improving symptoms in patients with IBS. Emphasis is placed on the numerical results of the IBS questionnaire scoring, which reflect a patient who initially suffered from severe IBS with failed response to multiple medications and who subsequently showed full and sustained resolution.

Keywords: pediatrics, constipation, irritable bowel syndrome, hypnotherapy, gut-directed hypnosis

Procedia PDF Downloads 198
478 Hydrodynamics and Hydro-acoustics of Fish Schools: Insights from Computational Models

Authors: Ji Zhou, Jung Hee Seo, Rajat Mittal

Abstract:

Fish move in groups for foraging, reproduction, predator protection, and hydrodynamic efficiency. Schooling's predator protection involves the "many eyes" theory, which increases predator detection probability in a group. Reduced visual signature in a group scales with school size, offering per-capita protection. The ‘confusion effect’ makes it hard for predators to target prey in a group. These benefits, however, all focus on vision-based sensing, overlooking sound-based detection. Fish, including predators, possess sophisticated sensory systems for pressure waves and underwater sound. The lateral line system detects acoustic waves, while otolith organs sense infrasound, and sharks use an auditory system for low-frequency sounds. Among sound generation mechanisms of fish, the mechanism of dipole sound relates to hydrodynamic pressure forces on the body surface of the fish and this pressure would be affected by group swimming. Thus, swimming within a group could affect this hydrodynamic noise signature of fish and possibly serve as an additional protection afforded by schooling, but none of the studies to date have explored this effect. BAUVs with fin-like propulsors could reduce acoustic noise without compromising performance, addressing issues of anthropogenic noise pollution in marine environments. Therefore, in this study, we used our in-house immersed-boundary method flow and acoustic solver, ViCar3D, to simulate fish schools consisting of four swimmers in the classic ‘diamond’ configuration and discussed the feasibility of yielding higher swimming efficiency and controlling far-field sound signature of the school. We examine the effects of the relative phase of fin flapping of the swimmers and the simulation results indicate that the phase of the fin flapping is a dominant factor in both thrust enhancement and the total sound radiated into the far-field by a group of swimmers. For fish in the “diamond” configuration, a suitable combination of the relative phase difference between pairs of leading fish and trailing fish can result in better swimming performance with significantly lower hydroacoustic noise.
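
A much reduced illustration of why the flapping phase matters acoustically: at a single flapping frequency the far-field pressures of several compact dipole-like sources add approximately linearly, so the phase offsets between swimmers control how much of the radiated sound cancels. The toy superposition below (not the ViCar3D solver, and with arbitrary equal amplitudes) only demonstrates this interference effect.

```python
import numpy as np

def farfield_rms(phases, amplitude=1.0, freq=2.0, n_samples=4000):
    """RMS of the summed far-field pressure of equal-strength tonal sources with given phases (rad)."""
    t = np.linspace(0.0, 1.0, n_samples)
    p = sum(amplitude * np.sin(2 * np.pi * freq * t + phi) for phi in phases)
    return np.sqrt(np.mean(p**2))

in_phase = farfield_rms([0.0, 0.0, 0.0, 0.0])           # all four swimmers flap together
anti_phase = farfield_rms([0.0, np.pi, 0.0, np.pi])     # leading/trailing pairs out of phase
print(f"in-phase RMS:   {in_phase:.2f}")
print(f"anti-phase RMS: {anti_phase:.2f}   (destructive interference)")
```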

Keywords: fish schooling, biopropulsion, hydrodynamics, hydroacoustics

Procedia PDF Downloads 61
477 Multi-Scale Damage Modelling for Microstructure Dependent Short Fiber Reinforced Composite Structure Design

Authors: Joseph Fitoussi, Mohammadali Shirinbayan, Abbas Tcharkhtchi

Abstract:

Due to material flow during processing, short fiber reinforced composites structures obtained by injection or compression molding generally present strong spatial microstructure variation. On the other hand, quasi-static, dynamic, and fatigue behavior of these materials are highly dependent on microstructure parameters such as fiber orientation distribution. Indeed, because of complex damage mechanisms, SFRC structures design is a key challenge for safety and reliability. In this paper, we propose a micromechanical model allowing prediction of damage behavior of real structures as a function of microstructure spatial distribution. To this aim, a statistical damage criterion including strain rate and fatigue effect at the local scale is introduced into a Mori and Tanaka model. A critical local damage state is identified, allowing fatigue life prediction. Moreover, the multi-scale model is coupled with an experimental intrinsic link between damage under monotonic loading and fatigue life in order to build an abacus giving Tsai-Wu failure criterion parameters as a function of microstructure and targeted fatigue life. On the other hand, the micromechanical damage model gives access to the evolution of the anisotropic stiffness tensor of SFRC submitted to complex thermomechanical loading, including quasi-static, dynamic, and cyclic loading with temperature and amplitude variations. Then, the latter is used to fill out microstructure dependent material cards in finite element analysis for design optimization in the case of complex loading history. The proposed methodology is illustrated in the case of a real automotive component made of sheet molding compound (PSA 3008 tailgate). The obtained results emphasize how the proposed micromechanical methodology opens a new path for the automotive industry to lighten vehicle bodies and thereby save energy and reduce gas emission.
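
As an illustration of the kind of failure envelope the proposed abacus would parameterize, the sketch below evaluates the plane-stress Tsai-Wu failure index from ply strength values; in the methodology above, those strength parameters would themselves depend on the local fiber orientation state and the targeted fatigue life. The strength and stress values used here are placeholders, not measured SMC data.

```python
import math

def tsai_wu_index(s1, s2, s6, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index; failure is predicted when the index >= 1.

    s1, s2, s6: in-plane stresses (longitudinal, transverse, shear), MPa.
    Xt, Xc / Yt, Yc: tensile and compressive strengths in the 1 and 2 directions, S: shear strength.
    """
    F1, F2 = 1.0 / Xt - 1.0 / Xc, 1.0 / Yt - 1.0 / Yc
    F11, F22, F66 = 1.0 / (Xt * Xc), 1.0 / (Yt * Yc), 1.0 / S**2
    F12 = -0.5 * math.sqrt(F11 * F22)   # common default choice for the interaction term
    return (F1 * s1 + F2 * s2
            + F11 * s1**2 + F22 * s2**2 + F66 * s6**2
            + 2.0 * F12 * s1 * s2)

# illustrative ply strengths (MPa) and a candidate stress state
print(tsai_wu_index(s1=120.0, s2=15.0, s6=30.0,
                    Xt=250.0, Xc=200.0, Yt=60.0, Yc=120.0, S=70.0))
```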

Keywords: short fiber reinforced composite, structural design, damage, micromechanical modelling, fatigue, strain rate effect

Procedia PDF Downloads 107
476 A Differential Scanning Calorimetric Study of Frozen Liquid Egg Yolk Thawed by Different Thawing Methods

Authors: Karina I. Hidas, Csaba Németh, Anna Visy, Judit Csonka, László Friedrich, Ildikó Cs. Nyulas-Zeke

Abstract:

Egg yolk is a popular ingredient in the food industry due to its gelling, emulsifying, colouring, and coagulating properties. Because of the heat sensitivity of its proteins, egg yolk can only be heat treated at low temperatures, so its shelf life, even with the addition of a preservative, is only a few weeks. Freezing can increase the shelf life of liquid egg yolk up to 1 year, but it undergoes gelling below -6 °C, which is an irreversible phenomenon. The degree of gelation depends on the time and temperature of freezing and is influenced by the thawing process. Therefore, in our experiment, we examined egg yolks thawed in different ways. In this study, unpasteurized, industrially broken, separated, and homogenized liquid egg yolk was used. Freshly produced samples were frozen in plastic containers at -18 °C in a laboratory freezer. Frozen storage was carried out for 90 days. Samples were analysed at day zero (unfrozen) and after frozen storage for 1, 7, 14, 30, 60 and 90 days. Samples were thawed in two ways (at 5 °C for 24 hours and at 30 °C for 3 hours) before testing. Calorimetric properties were examined by differential scanning calorimetry, where heat flow curves were recorded. Denaturation enthalpy values were calculated by fitting a linear baseline, and denaturation temperature values were evaluated. In addition, the dry matter content of the samples was measured by the oven method, with drying at 105 °C to constant weight. For statistical analysis, two-way ANOVA (α = 0.05) was employed, with thawing mode and freezing time as the fixed factors. Denaturation enthalpy values decreased from 1.1 to 0.47 by the end of the storage experiment, which represents a reduction of about 60%. The effect of freezing time on these values was significant; the enthalpy of samples stored frozen for only 1 day was already significantly reduced. However, the mode of thawing did not significantly affect the denaturation enthalpy of the samples, and no interaction was seen between the two factors. The denaturation temperature and dry matter content did not change significantly either during the freezing period or with the thawing mode. The results of our study show that slow freezing and frozen storage at -18 °C greatly reduce the amount of protein that can be denatured in egg yolk, indicating that the proteins have been subjected to aggregation, denaturation or other protein conversions regardless of how the samples were thawed.
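
For readers who want to reproduce the statistical design, the sketch below runs a two-way ANOVA with thawing mode and freezing time as fixed factors using statsmodels; the data frame is randomly generated for illustration only and does not contain the measured enthalpy values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# hypothetical measurements: 3 replicates per cell of (thawing mode x storage day)
days = [1, 7, 14, 30, 60, 90]
records = [
    {"thawing": mode, "day": d, "enthalpy": rng.normal(1.0 - 0.005 * d, 0.05)}
    for mode in ("5C_24h", "30C_3h") for d in days for _ in range(3)
]
df = pd.DataFrame(records)

# two-way ANOVA with interaction term, alpha = 0.05
model = smf.ols("enthalpy ~ C(thawing) * C(day)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```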

Keywords: denaturation enthalpy, differential scanning calorimetry, liquid egg yolk, slow freezing

Procedia PDF Downloads 129
475 Integrating Circular Economy Framework into Life Cycle Analysis: An Exploratory Study Applied to Geothermal Power Generation Technologies

Authors: Jingyi Li, Laurence Stamford, Alejandro Gallego-Schmid

Abstract:

Renewable electricity has become an indispensable contributor to achieving net-zero by the mid-century to tackle climate change. Unlike solar, wind, or hydro, geothermal was stagnant in its electricity production development for decades. However, with the significant breakthrough made in recent years, especially the implementation of enhanced geothermal systems (EGS) in various regions globally, geothermal electricity could play a pivotal role in alleviating greenhouse gas emissions. Life cycle assessment has been applied to analyze specific geothermal power generation technologies, which proposed suggestions to optimize its environmental performance. For instance, selecting a high heat gradient region enables a higher flow rate from the production well and extends the technical lifespan. Although such process-level improvements have been made, the significance of geothermal power generation technologies so far has not explicitly displayed its competitiveness on a broader horizon. Therefore, this review-based study integrates a circular economy framework into life cycle assessment, clarifying the underlying added values for geothermal power plants to complete the sustainability profile. The derived results have provided an enlarged platform to discuss geothermal power generation technologies: (i) recover the heat and electricity from the process to reduce the fossil fuel requirements; (ii) recycle the construction materials, such as copper, steel, and aluminum for future projects; (iii) extract the lithium ions from geothermal brine and make geothermal reservoir become a potential supplier of the lithium battery industry; (iv) repurpose the abandoned oil and gas wells to build geothermal power plants; (v) integrate geothermal energy with other available renewable energies (e.g., solar and wind) to provide heat and electricity as a hybrid system at different weather; (vi) rethink the fluids used in stimulation process (EGS only), replace water with CO2 to achieve negative emissions from the system. These results provided a new perspective to the researchers, investors, and policymakers to rethink the role of geothermal in the energy supply network.

Keywords: climate, renewable energy, R strategies, sustainability

Procedia PDF Downloads 137
474 Cyber-Victimization among Higher Education Students as Related to Academic and Personal Factors

Authors: T. Heiman, D. Olenik-Shemesh

Abstract:

Over the past decade, with the rapid growth of electronic communication, the internet and, in particular, social networking have become an inseparable part of people's daily lives. Along with their benefits, a new type of online aggression has emerged, defined as cyber-bullying, a form of interpersonal aggressive behavior that takes place through electronic means. Cyber-bullying is characterized by the repeated misuse of authority and power over time, using computers and cell phones to send insulting messages and hurtful pictures. Preliminary findings suggest that the prevalence of involvement in cyber-bullying among higher education students varies between 10 and 35%. To date, universities have been facing an uphill effort in trying to restrain online misbehavior. As no studies have examined the relationships between cyber-bullying involvement and personal aspects, or its impact on academic achievement and work functioning, the present study examined the nature of cyber-bullying involvement among 1,052 undergraduate students (mean age = 27.25, S.D. = 4.81; 66.2% female), the ways they cope with it, and the effects of social support, perceived self-efficacy, well-being, and body perception in relation to cyber-victimization. We assume that students in higher education are a vulnerable population at high risk of being cyber-victims. We hypothesize that social support might serve as a protective factor and moderate the relationships between the socio-emotional variables and the occurrence of cyber-victimization. The findings of this study present the relationships between cyber-victimization and the social-emotional aspects, which constitute risk and protective factors. After receiving approval from the Ethics Committee of the University, a Google Drive questionnaire was sent to a random sample of students studying in the various University study centers. Students' participation was voluntary, and they completed the five questionnaires anonymously: cyber-bullying, perceived self-efficacy, subjective well-being, social support and body perception. Results revealed that 11.6% of the students reported being cyber-victims during the last year. Examining the emotional and behavioral reactions to cyber-victimization revealed that females' emotional and behavioral reactions were significantly greater than males' (p < .001). Moreover, females reported significantly higher social support than males, males reported significantly lower social capability than females, and males' body perception was significantly more positive than females'. No gender differences were observed for the subjective well-being scale. Significant positive correlations were found between cyber-victimization and having fewer friends, lower grades, and work ineffectiveness (r = 0.37-0.40, p < 0.001). The results of the hierarchical regression indicated that cyber-victimization can be significantly predicted by lower social support, lower body perception, and gender (female), which together explained 5.6% of the variance (R² = 0.056, F(5,1047) = 12.47, p < 0.001). The findings deepen our understanding of students' involvement in cyber-bullying and present the relationships of the social-emotional and academic aspects with cyber-victimization among students. In view of our findings, higher education policy could help facilitate coping with cyber-bullying incidents, and student support units could develop intervention programs aimed at reducing cyber-bullying and its impacts.
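
The hierarchical regression reported above can be outlined as two nested OLS models: demographic predictors entered first, then the psychosocial predictors, with the change in R² quantifying their added explanatory power. The sketch below uses simulated data and hypothetical variable names, so it only illustrates the procedure, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1052

# simulated predictors (gender coded 1 = female) and outcome
gender = rng.integers(0, 2, n)
age = rng.normal(27, 5, n)
social_support = rng.normal(0, 1, n)
body_perception = rng.normal(0, 1, n)
victimization = (0.1 * gender - 0.15 * social_support
                 - 0.10 * body_perception + rng.normal(0, 1, n))

# step 1: demographics only
X1 = sm.add_constant(np.column_stack([gender, age]))
m1 = sm.OLS(victimization, X1).fit()

# step 2: add the psychosocial predictors
X2 = sm.add_constant(np.column_stack([gender, age, social_support, body_perception]))
m2 = sm.OLS(victimization, X2).fit()

print(f"R2 step 1 = {m1.rsquared:.3f}, R2 step 2 = {m2.rsquared:.3f}, "
      f"delta R2 = {m2.rsquared - m1.rsquared:.3f}")
```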

Keywords: academic and personal factors, cyber-victimization, social support, higher education

Procedia PDF Downloads 289
473 Non-Thermal Pulsed Plasma Discharge for Contaminants of Emerging Concern Removal in Water

Authors: Davide Palma, Dimitra Papagiannaki, Marco Minella, Manuel Lai, Rita Binetti, Claire Richard

Abstract:

Modern analytical technologies allow us to detect water contaminants at trace and ultra-trace concentrations, highlighting that a large number of organic compounds are not efficiently abated by most wastewater treatment facilities relying on biological processes; we usually refer to these micropollutants as contaminants of emerging concern (CECs). The availability of reliable and effective technologies, able to guarantee the high standards of water quality demanded by legislators worldwide, has therefore become a primary need. In this context, water plasma stands out among developing technologies as it is extremely effective in the abatement of numerous classes of pollutants, cost-effective, and environmentally friendly. In this work, a custom-built non-thermal pulsed plasma discharge generator was used to abate the concentration of selected CECs in water samples. Samples were treated in a 50 mL pyrex reactor using two different types of plasma discharge, occurring either at the surface of the treated solution or underwater, working with positive polarity. The distance between the tips of the electrodes determined where the discharge was formed: underwater when the distance was < 2 mm, at the water surface when the distance was > 2 mm. Peak voltage was in the 100-130 kV range, with typical current values of 20-40 A. The duration of the pulse was 500 ns, and the frequency of discharge could be manually set between 5 and 45 Hz. Treatment of a 100 µM diclofenac solution in MilliQ water, with a pulse frequency of 17 Hz, revealed that the surface discharge was more efficient in the degradation of diclofenac, which was no longer detectable after 6 minutes of treatment. Over 30 minutes were required to obtain the same result with the underwater discharge. These results are justified by the higher rate of H₂O₂ formation (21.80 µmol L⁻¹ min⁻¹ for the surface discharge against 1.20 µmol L⁻¹ min⁻¹ for the underwater discharge), the larger discharge volume and UV light emission, and the high rate of ozone and NOx production (up to 800 and 1400 ppb, respectively) observed when working with the surface discharge. The surface discharge was then used for the treatment of the three selected perfluoroalkyl compounds, namely perfluorooctanoic acid (PFOA), perfluorohexanoic acid (PFHxA), and perfluorooctanesulfonic acid (PFOS), both individually and in mixture, in ultrapure water and groundwater matrices with an initial concentration of 1 ppb. In both matrices, PFOS exhibited the best degradation, reaching complete removal after 30 min of treatment (degradation rate 0.107 min⁻¹ in ultrapure water and 0.0633 min⁻¹ in groundwater), while the degradation rates of PFOA and PFHxA were slower by around 65% and 80%, respectively. Total nitrogen (TN) measurements revealed levels up to 45 mg L⁻¹ h⁻¹ in water samples treated with the surface discharge, while in analogous samples treated with the underwater discharge the TN increase was 5 to 10 times lower. These results can be explained by the significant NOx concentrations (over 1400 ppb) measured above the functioning reactor operating with the surface discharge; rapid NOx hydrolysis led to nitrate accumulation in the solution, explaining the observed evolution of the TN values. Ionic chromatography measurements confirmed that the vast majority of the TN was in the form of nitrates. In conclusion, non-thermal pulsed plasma discharge, obtained with a custom-built generator, was proven to effectively degrade diclofenac in water matrices, confirming the potential interest of this technology for wastewater treatment.
The surface discharge was proven to be more effective in CECs removal due to the high rate of formation of H₂O₂, ozone, reactive radical species, and strong UV light emission. Furthermore, nitrates enriched water obtained after treatment could be an interesting added-value product to be used as fertilizer in agriculture. Acknowledgment: This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 765860.
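
Since the reported degradation rates are pseudo-first-order constants, a quick calculation converts them into half-lives via C(t) = C0·exp(−kt) and t½ = ln(2)/k. The sketch below is a minimal illustration that simply re-uses the PFOS rate constants quoted above.

```python
import math

# pseudo-first-order rate constants for PFOS reported above (min^-1)
rates = {"PFOS, ultrapure water": 0.107, "PFOS, groundwater": 0.0633}

for matrix, k in rates.items():
    half_life = math.log(2) / k          # t1/2 = ln(2) / k for first-order decay
    print(f"{matrix}: k = {k} min^-1 -> half-life ~ {half_life:.1f} min")
```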

Keywords: CECs removal, nitrogen fixation, non-thermal plasma, water treatment

Procedia PDF Downloads 121
472 Wind Resource Classification and Feasibility of Distributed Generation for Rural Community Utilization in North Central Nigeria

Authors: O. D. Ohijeagbon, Oluseyi O. Ajayi, M. Ogbonnaya, Ahmeh Attabo

Abstract:

This study analyzed the electricity generation potential from wind at seven sites spread across seven states of the North-Central region of Nigeria. Twenty-one years (1987 to 2007) of wind speed data at a height of 10 m were obtained from the Nigeria Meteorological Department, Oshodi. The data were subjected to different statistical tests and also compared with the two-parameter Weibull probability density function. The outcome shows that the monthly average wind speeds ranged between 2.2 m/s in November for Bida and 10.1 m/s in December for Jos. The yearly averages ranged between 2.1 m/s in 1987 for Bida and 11.8 m/s in 2002 for Jos. Also, the power density for each site was determined to range between 29.66 W/m2 for Bida and 864.96 W/m2 for Jos. The two parameters of the Weibull distribution, k and c, were found to range between 2.3 in Lokoja and 6.5 in Jos for k, while c ranged between 2.9 m/s in Bida and 9.9 m/s in Jos. These outcomes point to the fact that wind speeds at Jos, Minna, Ilorin, Makurdi and Abuja are compatible with the cut-in speeds of modern wind turbines and hence may be economically feasible for wind-to-electricity at and above the height of 10 m. The study further assessed the potential and economic viability of standalone wind generation systems for off-grid rural communities located in each of the studied sites. A specific electric load profile was developed to suit hypothetical communities, each consisting of 200 homes, a school and a community health center. An assessment of the design that will optimally meet the daily load demand with a loss of load probability (LOLP) of 0.01 was performed, considering two stand-alone applications: wind and diesel. The diesel standalone system (DSS) was taken as the basis of comparison since the experimental locations have no connection to a distribution network. The HOMER® software optimizing tool was utilized to determine the optimal combination of system components that will yield the lowest life cycle cost. Sequel to the analysis for rural community utilization, a distributed generation (DG) analysis that considered the possibility of generating wind power in the MW range, in order to take advantage of Nigeria’s tariff regime for embedded generation, was carried out for each site. The DG design incorporated each community of 200 homes, freely catered for and offset from the excess electrical energy generated above the minimum requirement, for sale to a nearby distribution grid. Wind DG systems were found suitable and viable for producing environmentally friendly energy, in terms of life cycle cost and levelised cost of producing energy, at Jos ($0.14/kWh), Minna ($0.12/kWh), Ilorin ($0.09/kWh), Makurdi ($0.09/kWh), and Abuja ($0.04/kWh) at a particular turbine hub height. These outputs reveal the value retrievable from the project after the breakeven point as a function of energy consumed. Based on the results, the study demonstrated that including renewable energy in the rural development plan will enhance fast upgrading of the rural communities.
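
For reference, the sketch below shows the standard relations used to turn fitted Weibull shape (k) and scale (c) parameters into a mean wind speed and a mean wind power density. The parameter values and the air density of 1.225 kg/m³ are illustrative assumptions, not results of the study.

```python
from math import gamma

def weibull_mean_speed(k, c):
    """Mean wind speed (m/s) of a Weibull distribution with shape k and scale c (m/s)."""
    return c * gamma(1.0 + 1.0 / k)

def weibull_power_density(k, c, rho=1.225):
    """Mean wind power density (W/m^2): 0.5 * rho * E[v^3], with E[v^3] = c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c**3 * gamma(1.0 + 3.0 / k)

# purely illustrative parameters, not the fitted values of any of the seven sites
k, c = 2.0, 6.0
print(f"mean speed ~ {weibull_mean_speed(k, c):.1f} m/s, "
      f"power density ~ {weibull_power_density(k, c):.0f} W/m^2")
```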

Keywords: wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, North-Central Nigeria

Procedia PDF Downloads 512
471 Utilising Indigenous Knowledge to Design Dykes in Malawi

Authors: Martin Kleynhans, Margot Soler, Gavin Quibell

Abstract:

Malawi is one of the world’s poorest nations and consequently, the design of flood risk management infrastructure comes with a different set of challenges. There is a lack of good quality hydromet data, both in spatial terms and in the quality thereof and the challenge in the design of flood risk management infrastructure is compounded by the fact that maintenance is almost completely non-existent and that solutions have to be simple to be effective. Solutions should not require any further resources to remain functional after completion, and they should be resilient. They also have to be cost effective. The Lower Shire Valley of Malawi suffers from frequent flood events. Various flood risk management interventions have been designed across the valley during the course of the Shire River Basin Management Project – Phase I, and due to the data poor environment, indigenous knowledge was relied upon to a great extent for hydrological and hydraulic model calibration and verification. However, indigenous knowledge comes with the caveat that it is ‘fuzzy’ and that it can be manipulated for political reasons. The experience in the Lower Shire valley suggests that indigenous knowledge is unlikely to invent a problem where none exists, but that flood depths and extents may be exaggerated to secure prioritization of the intervention. Indigenous knowledge relies on the memory of a community and cannot foresee events that exceed past experience, that could occur differently to those that have occurred in the past, or where flood management interventions change the flow regime. This complicates communication of planned interventions to local inhabitants. Indigenous knowledge is, for the most part, intuitive, but flooding can sometimes be counter intuitive, and the rural poor may have a lower trust of technology. Due to a near complete lack of maintenance of infrastructure, infrastructure has to be designed with no moving parts and no requirement for energy inputs. This precludes pumps, valves, flap gates and sophisticated warning systems. Designs of dykes during this project included ‘flood warning spillways’, that double up as pedestrian and animal crossing points, which provide warning of impending dangerous water levels behind dykes to residents before water levels that could cause a possible dyke failure are reached. Locally available materials and erosion protection using vegetation were used wherever possible to keep costs down.

Keywords: design of dykes in low-income countries, flood warning spillways, indigenous knowledge, Malawi

Procedia PDF Downloads 279
470 Development of Three-Dimensional Groundwater Model for Al-Corridor Well Field, Amman–Zarqa Basin

Authors: Moayyad Shawaqfah, Ibtehal Alqdah, Amjad Adaileh

Abstract:

The Corridor area (400 km2) lies to the north-east of Amman (60 km). It lies between 285-305 E longitude and 165-185 N latitude (according to the Palestine Grid). It has been subjected to exploitation of groundwater from eleven new wells since 1999, with a total discharge of 11 MCM in addition to the previous discharge rate from the well field of 14.7 MCM. Consequently, the aquifer balance is disturbed and a major decline in water level has occurred. Therefore, suitable groundwater resources management is required to overcome the problems of over-pumping and its effect on groundwater quality. The three-dimensional groundwater flow model Processing Modflow for Windows Pro (PMWIN PRO, 2003) has been used in order to calculate the groundwater budget and aquifer characteristics, and to predict the aquifer response under different stresses for the next 20 years (to 2035). The model was calibrated for steady-state conditions by trial-and-error calibration. The calibration was performed by matching observed and calculated initial heads for the year 2001. Drawdown data for the period 2001-2010 were used to calibrate the transient model by matching calculated with observed values; after that, the transient model was validated using the drawdown data for the period 2011-2014. The hydraulic conductivities of the Basalt-A7/B2 aquifer system range between 1.0 and 8.0 m/day. Low conductivity values were found in the north-western and south-western parts of the study area, high conductivity values were found at the north-western corner of the study area, and the average storage coefficient is about 0.025. The water balance for the Basalt and B2/A7 formations at steady-state conditions closes with a discrepancy of 0.003%. The major inflows come from Jebal Al Arab through the basalt and the limestone (B2/A7) aquifer (12.28 MCM/year) and from excess rainfall (about 0.68 MCM/a). The major outflows from the Basalt-B2/A7 aquifer system are toward the Azraq basin (about 5.03 MCM/year) and leakage to the A1/6 aquitard (7.89 MCM/year). Four scenarios have been performed to predict the aquifer system response under different conditions. Scenario no. 2 was found to be the best one; it indicates a reduction of the abstraction rates by 50%, from the current withdrawal rate (25.08 MCM/year) to 12.54 MCM/year. The maximum drawdowns were then decreased to reach about 7.67 and 8.38 m in the years 2025 and 2035, respectively.

Keywords: Amman/Zarqa Basin, Jordan, groundwater management, groundwater modeling, modflow

Procedia PDF Downloads 216
469 Lessons Learnt from Industry: Achieving Net Gain Outcomes for Biodiversity

Authors: Julia Baker

Abstract:

Development plays a major role in stopping biodiversity loss. But the ‘silo species’ protection of legislation (where certain species are protected while many are not) means that development can be ‘legally compliant’ and result in biodiversity loss. ‘Net Gain’ (NG) policies can help overcome this by making it an absolute requirement that development causes no overall loss of biodiversity and brings a benefit. However, offsetting biodiversity losses in one location with gains elsewhere is controversial because people suspect ‘offsetting’ to be an easy way for developers to buy their way out of conservation requirements. Yet the good practice principles (GPP) of offsetting provide several advantages over existing legislation for protecting biodiversity from development. This presentation describes the learning from implementing NG approaches based on GPP. It regards major upgrades of the UK’s transport networks, which involved removing vegetation in order to construct and safely operate new infrastructure. While low-lying habitats were retained, trees and other habitats disrupting the running or safety of transport networks could not. Consequently, achieving NG within the transport corridor was not possible and offsetting was required. The first ‘lessons learnt’ were on obtaining a commitment from business leaders to go beyond legislative requirements and deliver NG, and on the institutional change necessary to embed GPP within daily operations. These issues can only be addressed when the challenges that biodiversity poses for business are overcome. These challenges included: biodiversity cannot be measured easily unlike other sustainability factors like carbon and water that have metrics for target-setting and measuring progress; and, the mindset that biodiversity costs money and does not generate cash in return, which is the opposite of carbon or waste for example, where people can see how ‘sustainability’ actions save money. The challenges were overcome by presenting the GPP of NG as a cost-efficient solution to specific, critical risks facing the business that also boost industry recognition, and by using government-issued NG metrics to develop business-specific toolkits charting their NG progress whilst ensuring that NG decision-making was based on rich ecological data. An institutional change was best achieved by supporting, mentoring and training sustainability/environmental managers for these ‘frontline’ staff to embed GPP within the business. The second learning was from implementing the GPP where business partnered with local governments, wildlife groups and land owners to support their priorities for nature conservation, and where these partners had a say in decisions about where and how best to achieve NG. From this inclusive approach, offsetting contributed towards conservation priorities when all collaborated to manage trade-offs between: -Delivering ecologically equivalent offsets or compensating for losses of one type of biodiversity by providing another. -Achieving NG locally to the development whilst contributing towards national conservation priorities through landscape-level planning. -Not just protecting the extent and condition of existing biodiversity but ‘doing more’. -The multi-sector collaborations identified practical, workable solutions to ‘in perpetuity’. But key was strengthening linkages between biodiversity measures implemented for development and conservation work undertaken by local organizations so that developers support NG initiatives that really count.

Keywords: biodiversity offsetting, development, nature conservation planning, net gain

Procedia PDF Downloads 195
468 Partnering With Key Stakeholders for Successful Implementation of Inhaled Analgesia for Specific Emergency Department Presentations

Authors: Sarah Hazelwood, Janice Hay

Abstract:

Methoxyflurane is an inhaled analgesic administered via a disposable inhaler, which has been used in Australia for 40 years for the management of pain in children & adults. However, there is a lack of data for methoxyflurane as a frontline analgesic medication within the emergency department (ED). This study will investigate the usefulness of methoxyflurane in a private inner-city ED. The study concluded that the inclusion of all key stakeholders in the prescribing, administering & use of this new process led to comprehensive uptake & vastly positive outcomes for consumer & health professionals. Method: A 12-week prospective pilot study was completed utilizing patients presenting to the ED in pain (numeric pain rating score > 4) that fit the requirement of methoxyflurane use (as outlined in the Australian Prescriber information package). Nurses completed a formatted spreadsheet for each interaction where methoxyflurane was used. Patient demographics, day, time, initial numeric pain score, analgesic response time, the reason for use, staff concern (free text), & patient feedback (free text), & discharge time was documented. When clinical concern was raised, the researcher retrieved & reviewed patient notes. Results: 140 methoxyflurane inhalers were used. 60% of patients were 31 years of age & over (n=82) with 16% aged 70+. The gender split; 51% male: 49% female. Trauma-related pain (57%) saw the highest use of administration, with the evening hours (1500-2259) seeing the greatest numbers used (39%). Tuesday, Thursday & Sunday shared the highest daily use throughout the study. A minimum numerical pain score of 4/10 (n=13, 9%), with the ranges of 5 - 7/10 (moderate pain) being given by almost 50% of patients. Only 3 instances of pain scores increased post use of methoxyflurane (all other entries showed pain score < initial rating). Patients & staff noted obvious analgesic response within 3 minutes (n= 96, 81%, of administration). Nurses documented a change in patient vital signs for 4 of the 15 patient-related concerns; the remaining concerns were due to “gagging” on the taste, or “having a coughing episode”; one patient tried to leave the department before the procedure was attended (very euphoric state). Upon review of the staff concerns – no adverse events occurred & return to therapeutic vitals occurred within 10 minutes. Length of stay for patients was compared with similar presentations (such as dislocated shoulder or ankle fracture) & saw an average 40-minute decrease in time to discharge. Methoxyflurane treatment was rated “positively” by > 80% of patients – with remaining feedback related to mild & transient concerns. Staff similarly noted a positive response to methoxyflurane as an analgesic & as an added tool for frontline analgesic purposes. Conclusion: Methoxyflurane should be used on suitable patient presentations requiring immediate, short term pain relief. As a highly portable, non-narcotic avenue to treat pain this study showed obvious therapeutic benefit, positive feedback, & a shorter length of stay in the ED. By partnering with key stake holders, this study determined methoxyflurane use decreased work load, decreased wait time to analgesia, and increased patient satisfaction.

Keywords: analgesia, benefits, emergency, methoxyflurane

Procedia PDF Downloads 123
467 Characterization of WNK2 Role on Glioma Cells Vesicular Traffic

Authors: Viviane A. O. Silva, Angela M. Costa, Glaucia N. M. Hajj, Ana Preto, Aline Tansini, Martin Roffé, Peter Jordan, Rui M. Reis

Abstract:

Autophagy is a recycling and degradative system suggested to be a major cell death pathway in cancer cells. The autophagy pathway is interconnected with the endocytosis pathways, sharing the same ultimate lysosomal destination. Lysosomes are crucial regulators of cell homeostasis, responsible for downregulating receptor signalling and turnover. It seems highly likely that derailed endocytosis can make major contributions to several hallmarks of cancer. WNK2, a member of the WNK (with-no-lysine [K]) subfamily of protein kinases, has been found to be downregulated by promoter hypermethylation and has been proposed to act as a specific tumour-suppressor gene in brain tumors. Although some contradictory studies have indicated WNK2 as an autophagy modulator, its role in cancer cell death is largely unknown. There is also growing evidence for additional roles of WNK kinases in vesicular traffic. Aim: To evaluate the role of WNK2 in autophagy and endocytosis in the glioma context. Methods: Wild-type (wt) A172 cells (WNK2 promoter-methylated), and A172 cells transfected either with an empty vector (Ev) or with a WNK2 expression vector, were used to assess the cellular basal capacity to promote autophagy, through western blot and flow-cytometry analysis. Additionally, we evaluated the effect of WNK2 on general endocytic trafficking routes by immunofluorescence. Results: The re-expression of ectopic WNK2 did not interfere with the expression levels of the autophagy-related protein light chain 3 (LC3-II) and did not promote alteration of the mTOR signaling pathway when compared with Ev or wt A172 cells. However, the restoration of WNK2 resulted in a marked increase (from 8 to 92.4%) in the formation of acidic vesicular organelles (AVOs). Moreover, our results also suggest that WNK2-expressing cells show a delay in the uptake and internalization rate of the cholera toxin B and transferrin ligands. Conclusions: The restoration of WNK2 interferes with vesicular traffic during the endocytosis pathway and increases AVO formation. These results also suggest a role for WNK2 in growth factor receptor turnover, related to cell growth and homeostasis, and associate once more the silencing of WNK2 with the genesis of gliomas.

Keywords: autophagy, endocytosis, glioma, WNK2

Procedia PDF Downloads 370
466 Water Dumpflood into Multiple Low-Pressure Gas Reservoirs

Authors: S. Lertsakulpasuk, S. Athichanagorn

Abstract:

As depletion-drive gas reservoirs are abandoned when there is insufficient production rate due to pressure depletion, waterflooding has been proposed to increase the reservoir pressure in order to prolong gas production. Due to high cost, water injection may not be economically feasible. Water dumpflood into gas reservoirs is a new promising approach to increase gas recovery by maintaining reservoir pressure with much cheaper costs than conventional waterflooding. Thus, a simulation study of water dumpflood into multiple nearly abandoned or already abandoned thin-bedded gas reservoirs commonly found in the Gulf of Thailand was conducted to demonstrate the advantage of the proposed method and to determine the most suitable operational parameters for reservoirs having different system parameters. A reservoir simulation model consisting of several thin-layered depletion-drive gas reservoirs and an overlying aquifer was constructed in order to investigate the performance of the proposed method. Two producers were initially used to produce gas from the reservoirs. One of them was later converted to a dumpflood well after gas production rate started to decline due to continuous reduction in reservoir pressure. The dumpflood well was used to flow water from the aquifer to increase pressure of the gas reservoir in order to drive gas towards producer. Two main operational parameters which are wellhead pressure of producer and the time to start water dumpflood were investigated to optimize gas recovery for various systems having different gas reservoir dip angles, well spacings, aquifer sizes, and aquifer depths. This simulation study found that water dumpflood can increase gas recovery up to 12% of OGIP depending on operational conditions and system parameters. For the systems having a large aquifer and large distance between wells, it is best to start water dumpflood when the gas rate is still high since the long distance between the gas producer and dumpflood well helps delay water breakthrough at producer. As long as there is no early water breakthrough, the earlier the energy is supplied to the gas reservoirs, the better the gas recovery. On the other hand, for the systems having a small or moderate aquifer size and short distance between the two wells, performing water dumpflood when the rate is close to the economic rate is better because water is more likely to cause an early breakthrough when the distance is short. Water dumpflood into multiple nearly-depleted or depleted gas reservoirs is a novel study. The idea of using water dumpflood to increase gas recovery has been mentioned in the literature but has never been investigated. This detailed study will help a practicing engineer to understand the benefits of such method and can implement it with minimum cost and risk.

Keywords: dumpflood, increase gas recovery, low-pressure gas reservoir, multiple gas reservoirs

Procedia PDF Downloads 444
465 Achieving Household Electricity Saving Potential Through Behavioral Change

Authors: Lusi Susanti, Prima Fithri

Abstract:

The rapid growth of Indonesia's population is directly proportional to the energy needs of the country, but not all of the Indonesian population can enjoy access to electricity. Indonesia's electrification ratio is still around 80.1%, which means that approximately 19.9% of households in Indonesia have not yet been supplied with electrical energy. Household electricity consumption in Indonesia is generally still dominated by the urban public. In the city of Padang, West Sumatera, Indonesia, about 94.10% of households are customers of the government power utility (PLN). The most important aspect of the issue is the efficient use of energy by people, so user behavior in utilizing electricity becomes significant; however, a lasting solution must change users' habits if sustainable energy issues are to be addressed. This study attempts to identify the user behaviors and lifestyles that affect household electricity consumption and to evaluate the potential for energy saving. The behavior component is frequently underestimated or ignored in analyses of household electrical energy end use, partly because of its complexity. It is influenced by socio-demographic factors, culture, attitudes, aesthetic norms and comfort, as well as social and economic variables. An intensive questionnaire survey, in-depth interviews and statistical analysis were carried out to collect scientific evidence for behavioral-change instruments to reduce electricity consumption in the household sector. The questionnaire was developed to include five factors assumed to affect the electricity consumption pattern in the household sector: attitude, energy price, household income, knowledge and other determinants. The survey was carried out in Padang, West Sumatra Province, Indonesia. About 210 questionnaires were proportionally distributed to households in 11 districts of Padang. Stratified sampling was used to select respondents. The results show that household size, income, payment method and size of house are factors affecting electricity-saving behavior in the residential sector. Household expenses on electricity are strongly influenced by gender, type of job, level of education, size of house, income, payment method and level of installed power. These results provide scientific evidence for stakeholders on the potential of controlling electricity consumption and for the design of energy policy by the government in the residential sector.

Keywords: electricity, energy saving, household, behavior, policy

Procedia PDF Downloads 438
464 Evaluation of the CRISP-DM Business Understanding Step: An Approach for Assessing the Predictive Power of Regression versus Classification for the Quality Prediction of Hydraulic Test Results

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Digitalisation in production technology is a driver for the application of machine learning methods. Through the application of predictive quality, the great potential for saving necessary quality control can be exploited through the data-based prediction of product quality and states. However, the serial use of machine learning applications is often prevented by various problems. Fluctuations occur in real production data sets, which are reflected in trends and systematic shifts over time. To counteract these problems, data preprocessing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets to extract stable features. Successful process control of the target variables aims to centre the measured values around a mean and minimise variance. Competitive leaders claim to have mastered their processes. As a result, much of the real data has a relatively low variance. For the training of prediction models, the highest possible generalisability is required, which is at least made more difficult by this data availability. The implementation of a machine learning application can be interpreted as a production process. The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model with six phases that describes the life cycle of data science. As in any process, the costs to eliminate errors increase significantly with each advancing process phase. For the quality prediction of hydraulic test steps of directional control valves, the question arises in the initial phase whether a regression or a classification is more suitable. In the context of this work, the initial phase of the CRISP-DM, the business understanding, is critically compared for the use case at Bosch Rexroth with regard to regression and classification. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predict the quality characteristics of workpieces. Suitable methods for leakage volume flow regression and classification for inspection decision are applied. Impressively, classification is clearly superior to regression and achieves promising accuracies.
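
One minimal way to frame the regression-versus-classification question raised in the business-understanding phase is to train both model types on the same features and score each with its natural metric, as sketched below on synthetic data. The leakage threshold, features and models are assumptions for illustration, not Bosch Rexroth's actual data or pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import accuracy_score, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# synthetic process features and a leakage volume flow with low variance (stable process)
X = rng.normal(size=(n, 6))
leakage = 0.5 + 0.05 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(0, 0.02, n)
ok = (leakage < 0.55).astype(int)                  # hypothetical pass/fail threshold

X_tr, X_te, y_reg_tr, y_reg_te, y_cls_tr, y_cls_te = train_test_split(
    X, leakage, ok, test_size=0.3, random_state=0)

reg = RandomForestRegressor(random_state=0).fit(X_tr, y_reg_tr)
cls = RandomForestClassifier(random_state=0).fit(X_tr, y_cls_tr)

print(f"regression R2           = {r2_score(y_reg_te, reg.predict(X_te)):.3f}")
print(f"classification accuracy = {accuracy_score(y_cls_te, cls.predict(X_te)):.3f}")
```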

Keywords: classification, CRISP-DM, machine learning, predictive quality, regression

Procedia PDF Downloads 144
463 Compression and Air Storage Systems for Small Size CAES Plants: Design and Off-Design Analysis

Authors: Coriolano Salvini, Ambra Giovannelli

Abstract:

The use of renewable energy sources for electric power production leads to reduced CO2 emissions and contributes to improving domestic energy security. On the other hand, the intermittency and unpredictability of their availability pose relevant problems in fulfilling the load demand over time safely and cost-efficiently. Significant benefits in terms of grid system applications, end-use applications and renewable applications can be achieved by introducing energy storage systems. Among the currently available solutions, CAES (Compressed Air Energy Storage) shows favorable features. Small and medium size plants equipped with artificial air reservoirs can constitute an interesting option for efficient and cost-effective distributed energy storage. The present paper addresses the design and off-design analysis of the compression system of small size CAES plants suited to absorb electric power in the range of hundreds of kilowatts. The system of interest consists of an intercooled (and, where required, aftercooled) multi-stage reciprocating compressor and a man-made reservoir obtained by connecting large-diameter steel pipe sections. A specific methodology for the preliminary sizing and off-design modeling of the system has been developed. Since during the charging phase the absorbed electric power has to change over time according to the peculiar CAES requirements, and the pressure ratio increases continuously as the reservoir fills, the compressor has to work at variable mass flow rate. In order to ensure an appropriately wide range of operation, particular attention has been paid to the selection of the most suitable compressor capacity control device. Given the capacity regulation margin of the compressor and the actual level of charge of the reservoir, the proposed approach allows the instant-by-instant evaluation of the minimum and maximum electric power absorbable from the grid. The developed tool gives useful information to size the compression system appropriately and to manage it in the most effective way. Various cases characterized by different system requirements are analysed, and results are given and discussed in detail.
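For orientation, a minimal sketch of the kind of calculation involved is given below: the shaft power absorbed by an intercooled multi-stage compressor at a given reservoir pressure, using ideal-gas relations with equal stage pressure ratios. The stage count, efficiency and operating values are illustrative assumptions, not the sizing data of the paper.

```python
# Minimal sketch of the power absorbed by an intercooled multi-stage
# reciprocating compressor at a given reservoir pressure, using ideal-gas
# relations. Stage count, efficiency and operating values are illustrative
# assumptions, not the plant data of the study.

def compressor_power(m_dot, p_in, p_out, n_stages=3, t_in=293.15,
                     eta_is=0.82, cp=1005.0, gamma=1.4):
    """Shaft power [W] for n intercooled stages with equal pressure ratios,
    assuming intercooling back to the inlet temperature after each stage."""
    r_stage = (p_out / p_in) ** (1.0 / n_stages)
    dT_is = t_in * (r_stage ** ((gamma - 1.0) / gamma) - 1.0)
    return n_stages * m_dot * cp * dT_is / eta_is

# Example: 0.5 kg/s of air compressed from 1 bar to a reservoir at 60 bar
print(compressor_power(0.5, 1e5, 60e5) / 1e3, "kW")
```

As the reservoir pressure rises during charging, the same routine can be re-evaluated instant by instant to track how the absorbed power changes at fixed or varying mass flow rate.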

Keywords: artificial air storage reservoir, compressed air energy storage (CAES), compressor design, compression system management

Procedia PDF Downloads 229
462 In vitro Evaluation of Capsaicin Patches for Transdermal Drug Delivery

Authors: Alija Uzunovic, Sasa Pilipovic, Aida Sapcanin, Zahida Ademovic, Berina Pilipović

Abstract:

Capsaicin is a naturally occurring alkaloid obtained from the fruit of different Capsicum species. It has been employed topically to treat many conditions such as rheumatoid arthritis, osteoarthritis, cancer pain and nerve pain in diabetes. The high degree of pre-systemic metabolism of orally administered capsaicin and the short half-life of capsaicin after intravenous administration make topical application advantageous. In this study, we evaluated differences in the dissolution characteristics of a capsaicin patch 11 mg (purchased on the market) at different dissolution rotation speeds. The patch area is 308 cm² (22 cm x 14 cm; it contains 36 µg of capsaicin per square centimeter of adhesive). USP Apparatus 5 (Paddle over Disc) is used for transdermal patch testing. The dissolution study was conducted using USP Apparatus 5 (n=6) on an ERWEKA DT800 dissolution tester (paddle type) with the addition of a disc. A 9 cm² piece cut from the 308 cm² patch was placed against a disc (delivery side up), retained with a stainless-steel screen and exposed to 500 mL of phosphate buffer solution pH 7.4. All dissolution studies were carried out at 32 ± 0.5 °C and at different rotation speeds (50 ± 5, 100 ± 5 and 150 ± 5 rpm). Aliquots of 5 mL were withdrawn at various time intervals (1, 4, 8 and 12 hours) and replaced with 5 mL of dissolution medium. Withdrawn samples were appropriately diluted and analyzed by reversed-phase liquid chromatography (RP-LC). An RP-LC method has been developed, optimized and validated for the separation and quantitation of capsaicin in a transdermal patch. The method uses a ProntoSIL 120-3-C18 AQ 125 x 4.0 mm (3 μm) column maintained at 60 °C. The mobile phase consisted of acetonitrile:water (50:50 v/v), the flow rate was 0.9 mL/min, the injection volume 10 μL and the detection wavelength 222 nm. The RP-LC method is simple, sensitive and accurate and can be applied for fast (total chromatographic run time of 4.0 minutes) and simultaneous analysis of capsaicin and dihydrocapsaicin in a transdermal patch. According to the results obtained in this study, the relative difference in the dissolution rate of capsaicin after 12 hours increased with dissolution rotation speed (100 rpm vs 50 rpm: 84.9 ± 11.3%; 150 rpm vs 100 rpm: 39.8 ± 8.3%). Although several apparatus and procedures (USP Apparatus 5, 6, 7 and a paddle over extraction cell method) have been used to study the in vitro release characteristics of transdermal patches, USP Apparatus 5 (Paddle over Disc) could be considered a discriminatory test, able to point out the differences in the dissolution rate of capsaicin at different rotation speeds.
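The cumulative-release calculation implied by the sampling-and-replacement protocol above can be sketched as follows; the concentration values are invented placeholders, not the measured capsaicin data.

```python
# Minimal sketch of the standard correction for cumulative drug release when
# aliquots are withdrawn and replaced with fresh medium, as in the protocol
# above (500 mL vessel, 5 mL samples). Concentrations are placeholders.

V_VESSEL = 500.0   # mL of phosphate buffer pH 7.4
V_SAMPLE = 5.0     # mL withdrawn and replaced at each time point

concentrations = [0.8, 2.1, 3.4, 4.0]   # µg/mL at 1, 4, 8, 12 h (placeholders)

cumulative = []
withdrawn = 0.0
for c in concentrations:
    amount = c * V_VESSEL + withdrawn      # µg released so far, corrected
    cumulative.append(amount)
    withdrawn += c * V_SAMPLE              # drug removed with this aliquot
print(cumulative)
```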

Keywords: capsaicin, in vitro, patch, RP-LC, transdermal

Procedia PDF Downloads 227
461 Comparing the Gap Formation around Composite Restorations in Three Regions of Tooth Using Optical Coherence Tomography (OCT)

Authors: Rima Zakzouk, Yasushi Shimada, Yuan Zhou, Yasunori Sumi, Junji Tagami

Abstract:

Background and Purpose: Swept-source optical coherence tomography (OCT) is an interferometric imaging technique that has recently been used in cariology. In spite of progress made in adhesive dentistry, composite restorations still fail due to secondary caries, which occur due to environmental factors in the oral cavity. Therefore, a precise assessment of the effective marginal sealing of restorations is highly required. The aim of this study was to evaluate gap formation at the composite/cavity-wall interface, with or without phosphoric acid etching, using SS-OCT. Materials and Methods: Round tapered cavities (2×2 mm) were prepared at three locations, mid-coronal, cervical, and root, of bovine incisor teeth in two groups (SE and PA). While the self-etching adhesive (Clearfil SE Bond) was applied to both groups, Group PA was additionally pretreated with phosphoric acid etching (K-Etchant gel). Subsequently, both groups were restored with Estelite Flow Quick flowable composite resin. Following 5000 thermal cycles, three cross-sectional images were obtained from each cavity using OCT at a 1310-nm wavelength at 0°, 60° and 120°. Scanning was repeated after two months to monitor gap progress. The average percentage of gap length was then calculated using image analysis software, and the difference between the group means was statistically analyzed by t-test. Subsequently, the results were confirmed by sectioning and observing representative specimens under a confocal laser scanning microscope (CLSM). Results: Pretreatment with phosphoric acid etching (Group PA) led to significantly larger gaps in the mid-coronal and cervical cavities compared to the SE group, while in the root cavities no significant difference was observed between the groups. On the other hand, the gaps formed in root cavities were significantly larger than those in the mid-coronal and cervical cavities within the same group. This study investigated the effect of phosphoric acid on gap-length progression in composite restorations. In conclusion, phosphoric acid etching did not reduce gap formation in any of the tooth regions examined. Significance: The cervical region of the tooth was more prone to gap formation than the mid-coronal region, especially when a pre-etching treatment was added.
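The group comparison described above amounts to an unpaired t-test on the average gap-length percentages; a minimal sketch with invented placeholder values is shown below.

```python
# Minimal sketch of the statistical comparison described above: an unpaired
# t-test on average gap-length percentages for the SE and PA groups. The
# values are invented placeholders, not the measured OCT data.

from scipy import stats

gap_pct_se = [12.4, 9.8, 15.1, 11.0, 13.6, 10.2]   # % gap length, SE group
gap_pct_pa = [21.7, 18.9, 24.3, 20.1, 22.8, 19.5]  # % gap length, PA group

t_stat, p_value = stats.ttest_ind(gap_pct_se, gap_pct_pa)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant difference
```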

Keywords: image analysis, optical coherence tomography, phosphoric acid etching, self-etch adhesives

Procedia PDF Downloads 221
460 Formulation and Evaluation of Glimepiride (GMP)-Solid Nanodispersion and Nanodispersed Tablets

Authors: Ahmed. Abdel Bary, Omneya. Khowessah, Mojahed. al-jamrah

Abstract:

Introduction: A major challenge in the design of oral dosage forms lies in their poor bioavailability, most frequently caused by poor solubility and low permeability. The aim of this study was to develop a solid nanodispersed tablet formulation of glimepiride to enhance its solubility and bioavailability. Methodology: Solid nanodispersions of glimepiride (GMP) were prepared using two different ratios of two carriers, PEG 6000 and Pluronic F127, and two different techniques, solvent evaporation and fusion. A 2³ full factorial design was adopted to investigate the influence of the formulation variables on the properties of the prepared nanodispersions. The best formula of nanodispersed powder was formulated into tablets by direct compression. Differential scanning calorimetry (DSC) and Fourier-transform infrared (FTIR) analyses were conducted to characterize the thermal behavior and surface structure, respectively. The zeta potential and particle size of the prepared glimepiride nanodispersions were determined. The prepared solid nanodispersions and solid nanodispersed tablets of GMP were evaluated in terms of pre-compression and post-compression parameters, respectively. Results: The DSC and FTIR studies revealed no interaction between GMP and any of the excipients used. Based on the values of the different pre-compression parameters, the prepared solid nanodispersion powder blends showed poor to excellent flow properties. The values of the other evaluated pre-compression parameters of the prepared solid nanodispersions were within pharmacopoeial limits. The drug content of the prepared nanodispersions ranged from 89.6 ± 0.3% to 99.9 ± 0.5%, the particle size ranged from 111.5 nm to 492.3 nm, and the zeta potential (ζ) values of the prepared GMP solid nanodispersion formulae (F1-F8) ranged from -8.28 ± 3.62 mV to -78 ± 11.4 mV. The in vitro dissolution studies of the prepared solid nanodispersed tablets of GMP showed that the GMP-Pluronic F127 combination (F8) exhibited the best extent of drug release compared to the other formulations and to the marketed product. One-way ANOVA on the percentage of drug released from the prepared GMP nanodispersion formulae (F1-F8) after 20 and 60 minutes showed significant differences between the different GMP nanodispersed tablet formulae (P<0.05). Conclusion: Preparation of glimepiride as nanodispersed particles proved to be a promising tool for enhancing the poor solubility of glimepiride.
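A 2³ full factorial design of the kind described above simply enumerates all combinations of three two-level factors; the sketch below builds the eight runs, with the carrier ratios shown being assumed examples since the abstract does not state them.

```python
# Minimal sketch of a 2^3 full factorial design of the kind described above
# (carrier type, drug:carrier ratio, preparation technique). The factor labels
# follow the abstract; the specific ratios are assumed examples only.

from itertools import product

factors = {
    "carrier": ["PEG 6000", "Pluronic F127"],
    "ratio": ["1:1", "1:2"],                     # assumed example ratios
    "technique": ["solvent evaporation", "fusion"],
}

runs = list(product(*factors.values()))
for i, run in enumerate(runs, start=1):          # F1 ... F8
    print(f"F{i}:", dict(zip(factors, run)))
```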

Keywords: glimepiride, solid Nanodispersion, nanodispersed tablets, poorly water soluble drugs

Procedia PDF Downloads 488
459 Recent Developments in E-waste Management in India

Authors: Rajkumar Ghosh, Bhabani Prasad Mukhopadhay, Ananya Mukhopadhyay, Harendra Nath Bhattacharya

Abstract:

This study investigates the global issue of electronic waste (e-waste), focusing on its prevalence in India and other regions. E-waste has emerged as a significant worldwide problem, with India contributing a substantial share of annual e-waste generation. The primary sources of e-waste in India are computer equipment and mobile phones. Many developed nations use India as a dumping ground for their e-waste, with major contributions from the United States, China, Europe, Taiwan, South Korea, and Japan. The study identifies Maharashtra, Tamil Nadu, Mumbai, and Delhi as prominent contributors to India's e-waste crisis. This issue is contextualized within the broader framework of the United Nations' 2030 Agenda for Sustainable Development, which encompasses 17 Sustainable Development Goals (SDGs) and 169 associated targets to address poverty, environmental preservation, and universal prosperity. The study underscores the interconnectedness of e-waste management with several SDGs, including health, clean water, economic growth, sustainable cities, responsible consumption, and ocean conservation. Central Pollution Control Board (CPCB) data reveals that e-waste generation surpasses that of plastic waste, increasing annually at a rate of 31%. However, only 20% of electronic waste is recycled through organized and regulated methods in underdeveloped nations, and even in Europe efficient e-waste management stands at just 35%. E-waste pollution poses serious threats to soil, groundwater, and public health due to toxic components such as mercury, lead, bromine, and arsenic. Long-term exposure to these toxins, notably arsenic in microchips, has been linked to severe health issues, including cancer, neurological damage, and skin disorders. Lead exposure, particularly concerning for children, can result in brain damage, kidney problems, and blood disorders. The study highlights the problematic transboundary movement of e-waste, with approximately 352,474 metric tonnes of electronic waste illegally shipped from Europe to developing nations annually, mainly to Africa, including Nigeria, Ghana, and Tanzania. Effective e-waste management, underpinned by appropriate infrastructure, regulations, and policies, offers opportunities for job creation and aligns with the objectives of the 2030 Agenda for the SDGs, especially in the realms of decent work, economic growth, and responsible production and consumption. E-waste represents both hazardous pollutants and valuable secondary resources, making it a focal point for anthropogenic resource exploitation; the United Nations estimates that e-waste holds potential secondary raw materials worth around 55 billion Euros. The study also identifies numerous challenges in e-waste management, encompassing the sheer volume of e-waste, child labor, inadequate legislation, insufficient infrastructure, health concerns, lack of incentive schemes, limited awareness, e-waste imports, the high cost of establishing recycling plants, and more. To mitigate these issues, the study offers several solutions, such as providing tax incentives for scrap dealers, implementing reward and reprimand systems for e-waste management compliance, offering training on e-waste handling, promoting responsible e-waste disposal, advancing recycling technologies, regulating e-waste imports, and ensuring the safe disposal of domestic e-waste. Buy-back programs, which compensate customers in cash when they deposit unwanted digital products, are one such mechanism; the e-waste collected could include any portable electronic device, such as cell phones, computers and tablets. Addressing the e-waste predicament necessitates a multi-faceted approach involving government regulations, industry initiatives, public awareness campaigns, and international cooperation to minimize environmental and health repercussions while harnessing the economic potential of recycling and responsible management.

Keywords: e-waste management, sustainable development goal, e-waste disposal, recycling technology, buy-back policy

Procedia PDF Downloads 85
458 Fuzzy Time Series- Markov Chain Method for Corn and Soybean Price Forecasting in North Carolina Markets

Authors: Selin Guney, Andres Riquelme

Abstract:

One of the main purposes of optimal and efficient forecasts of agricultural commodity prices is to guide firms in economic decision making, such as planning business operations and marketing decisions. Governments are also beneficiaries and suppliers of agricultural price forecasts; they use this information to establish proper agricultural policy, and hence the forecasts affect social welfare, while systematic errors in forecasts could lead to a misallocation of scarce resources. Various empirical approaches, using different methodologies, have been applied to forecast commodity prices. The most commonly used approaches depend on classical time series models that assume the values of the response variables are precise, which is quite often not true in reality. Recently, this literature has largely evolved toward fuzzy time series models, which relax classical time series assumptions such as stationarity and large sample size requirements. In addition, the fuzzy modeling approach allows decision making with estimated values under incomplete information or uncertainty. A number of fuzzy time series models have been developed and implemented over the last decades; however, most of them are not appropriate for forecasting repeated and nonconsecutive transitions in the data. The modeling scheme used in this paper eliminates this problem by introducing a Markov modeling approach that takes into account both repeated and nonconsecutive transitions. The determination of the interval length is also crucial for forecast accuracy. The problem of determining the interval length arbitrarily is overcome by proposing a methodology that determines the proper interval length based on the distribution or the mean of the first differences of the series. The specific purpose of this paper is therefore to propose and investigate the potential of a new forecasting model that integrates this non-arbitrary determination of interval length with a fuzzy time series-Markov chain model. Moreover, the forecasting performance of the proposed integrated model is compared to different univariate time series models, and its superiority over competing methods in modelling and forecasting is demonstrated on the basis of forecast evaluation criteria. The application is to daily corn and soybean prices observed at three commercially important North Carolina markets: Candor, Cofield and Roaring River for corn, and Fayetteville, Cofield and Greenville City for soybeans. One main conclusion from this paper is that using fuzzy logic improves forecast performance and accuracy; the effectiveness and potential benefits of the proposed model are confirmed by small values of selection criteria such as MAPE. The paper concludes with a discussion of the implications of integrating fuzzy logic and non-arbitrary determination of the interval length for the reliability and accuracy of price forecasts. The empirical results represent a significant contribution to our understanding of the applicability of fuzzy modeling in commodity price forecasting.
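Two of the building blocks described above, choosing the interval length from the first differences of the series and estimating a Markov transition matrix between the resulting fuzzy intervals, are sketched below; the price series is a synthetic placeholder rather than the North Carolina market data, and the interval rule shown is one simple variant of the non-arbitrary approach proposed.

```python
# Minimal sketch: interval length from the mean absolute first difference of
# the series, fuzzification of prices into intervals, and a first-order Markov
# transition matrix between intervals. Prices are synthetic placeholders.

import numpy as np

prices = np.array([3.52, 3.55, 3.61, 3.58, 3.66, 3.70, 3.69, 3.75, 3.80, 3.78])

# Interval length from the mean of the absolute first differences
interval = np.abs(np.diff(prices)).mean()
edges = np.arange(prices.min(), prices.max() + interval, interval)
states = np.digitize(prices, edges) - 1        # assign each price to an interval

# First-order Markov transition matrix between intervals
n_states = states.max() + 1
T = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    T[a, b] += 1
row_sums = T.sum(axis=1, keepdims=True)
T = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)
print(np.round(T, 2))
```

The forecast for the next period would then combine the midpoint of the current interval with the transition probabilities in the corresponding row of the matrix, and accuracy would be scored with criteria such as MAPE.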

Keywords: commodity, forecast, fuzzy, Markov

Procedia PDF Downloads 217
457 The Effect of Filter Design and Face Velocity on Air Filter Performance

Authors: Iyad Al-Attar

Abstract:

Air filters installed in HVAC equipment and in gas turbines for power generation confront atmospheric contaminants of various concentrations while operating in different environments (tropical, coastal, hot). This leads to engine performance degradation, as contaminants deteriorate components and foul the compressor assembly. Compressor fouling is responsible for 70 to 85% of gas turbine performance degradation, leading to a reduction in power output and availability and an increase in heat rate and fuel consumption. Filter design must therefore take into account face velocities, pleat count and the corresponding surface area in order to verify the filter performance characteristics (efficiency and pressure drop). The experimental work undertaken in the current study examined two groups of four filters with different pleating densities, investigating the initial pressure drop response and fractional efficiencies. The pleating densities used in this study were 28, 30, 32 and 34 pleats per 100 mm for each pleated panel, measured at ten flow rates ranging from 500 to 5000 m³/h in increments of 500 m³/h. The current work has highlighted the underlying reasons behind the reduction in filter permeability due to the increase in face velocity and pleat density. The loss of effective filtration-media surface area is due to one or a combination of the following effects: pleat crowding, deflection of the entire pleated panel, pleat distortion at the corner of the pleat, and/or compression of the filtration medium. It is evident from the entire array of experiments that as particle size increases, the efficiency decreases until the most penetrating particle size (MPPS) is reached; beyond the MPPS, efficiency increases with particle size. The MPPS shifts to a smaller particle size as the face velocity increases, while pleating density and orientation do not have a pronounced effect on the MPPS. Throughout the study, an optimal pleat count satisfying both initial pressure drop and efficiency requirements did not necessarily exist. The work also suggests that a valid comparison of pleat densities should be based on the effective surface area that participates in the filtration action, not on the total surface area the pleat density provides.
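As a rough illustration of why pleat density and face velocity interact, the sketch below estimates the nominal media area of a pleated panel and the corresponding approach velocity through the media; the panel dimensions and pleat depth are assumed values, not the geometry of the filters tested.

```python
# Minimal sketch of the nominal filtration area provided by a pleated panel
# and the resulting media approach velocity, for the pleat densities tested
# above. Panel dimensions and pleat depth are illustrative assumptions; the
# calculation ignores pleat crowding and panel deflection, which the study
# shows reduce the effective area.

PANEL_WIDTH = 0.592    # m (assumed)
PANEL_HEIGHT = 0.592   # m (assumed)
PLEAT_DEPTH = 0.046    # m (assumed)

def media_velocity(flow_m3h, pleats_per_100mm):
    """Approach velocity through the media [m/s] at a given air flow rate."""
    n_pleats = pleats_per_100mm * PANEL_WIDTH * 1000 / 100
    area = 2 * n_pleats * PLEAT_DEPTH * PANEL_HEIGHT   # both faces of each pleat
    return (flow_m3h / 3600.0) / area

for density in (28, 30, 32, 34):
    print(density, "pleats/100 mm:", round(media_velocity(3000, density), 4), "m/s")
```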

Keywords: air filters, fractional efficiency, gas cleaning, glass fibre, HEPA filter, permeability, pressure drop

Procedia PDF Downloads 135
456 Investigation of the EEG Signal Parameters during Epileptic Seizure Phases in Consequence to the Application of External Healing Therapy on Subjects

Authors: Karan Sharma, Ajay Kumar

Abstract:

An epileptic seizure is a disorder in which electrical charge in the brain discharges abruptly, resulting in abnormal activity by the subject. About one percent of the world's population experiences epileptic seizures. Due to the abrupt flow of charge, EEG (electroencephalogram) waveforms change, and numerous spikes and sharp waves appear in the EEG signal. Detection of epileptic seizures by conventional methods is time-consuming, and many methods have been developed to detect them automatically. The initial part of this paper reviews techniques used to detect epileptic seizures automatically. Automatic detection is based on feature extraction and classification patterns; for better accuracy, decomposition of the signal is required before feature extraction. A number of parameters are calculated by researchers using different techniques, e.g. approximate entropy, sample entropy, fuzzy approximate entropy, intrinsic mode functions, cross-correlation, etc., to discriminate between a normal signal and an epileptic seizure signal. The main objective of this review is to present the variations in the EEG signals at both stages, (i) interictal (recorded between epileptic seizure attacks) and (ii) ictal (recorded during an epileptic seizure), using the most appropriate methods of analysis to provide better healthcare diagnosis. The paper then investigates the effects of a noninvasive healing therapy on subjects by studying their EEG signals using recent signal processing techniques. The study has been conducted with Reiki as the healing technique, considered beneficial for restoring balance in cases of body-mind alterations associated with an epileptic seizure. Reiki is practiced around the world and is recommended in different health services as a treatment approach. Reiki is an energy medicine, specifically a biofield therapy developed in Japan in the early 20th century. It is a system involving the laying on of hands to stimulate the body's natural energetic system. Earlier studies have shown an apparent connection between Reiki and the autonomous nervous system. The Reiki sessions are applied by an experienced therapist. EEG signals are measured at baseline, during the session and post intervention to bring about effective epileptic seizure control or its elimination altogether.
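One of the discriminative features listed above, sample entropy, can be sketched as follows; the parameters (m = 2, r = 0.2 times the standard deviation) and the synthetic test signal are illustrative choices, not the settings used in the studies reviewed.

```python
# Minimal sketch of sample entropy, one of the features listed above for
# discriminating normal from seizure EEG. The signal here is synthetic; real
# EEG epochs would be used in practice.

import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) with r = r_factor * std(x)."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n_templates = len(x) - m          # same template count for both lengths

    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(n_templates)])
        count = 0
        for i in range(n_templates):
            dist = np.max(np.abs(templates - templates[i]), axis=1)
            count += np.sum(dist <= r) - 1     # exclude the self-match
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 20 * np.pi, 500)) + 0.3 * rng.standard_normal(500)
print(sample_entropy(signal))
```

Lower sample entropy indicates a more regular signal; in the seizure-detection literature this regularity measure is typically computed per epoch and fed, with other features, to a classifier.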

Keywords: EEG signal, Reiki, time consuming, epileptic seizure

Procedia PDF Downloads 406
455 Gilgel Gibe III: Dam-Induced Displacement in Ethiopia and Kenya

Authors: Jonny Beirne

Abstract:

Hydropower developments have come to assume an important role within the Ethiopian government's overall development strategy for the country during the last ten years. The Gilgel Gibe III on the Omo river, due to become operational in September 2014, represents the most ambitious, and controversial, of these projects to date. Further aspects of the government's national development strategy include leasing vast areas of designated 'unused' land for large-scale commercial agricultural projects and 'voluntarily' villagizing scattered, semi-nomadic agro-pastoralist groups to centralized settlements so as to use land and water more efficiently and to better provide essential social services such as education and healthcare. The Lower Omo valley, along the Omo River, is one of the sites of this villagization programme as well as of these large-scale commercial agricultural projects, which are made possible owing to the regulation of the river's flow by Gibe III. Though the Ethiopian government cites many positive aspects of these agricultural and hydropower developments, serious regional and transnational effects are still expected, including on migration flows, in an area already characterized by increasing climatic vulnerability with attendant population movements and conflicts over scarce resources. The following paper is an attempt to track actual and anticipated migration flows resulting from the construction of Gibe III in the immediate vicinity of the dam, downstream in the Lower Omo Valley and across the border in Kenya around Lake Turkana. In the case of those displaced in the Lower Omo Valley, this will be considered in view of the distinction between voluntary villagization and forced resettlement. The research presented is not primary-source material. Instead, it is drawn from the reports and assessments of the Ethiopian government, rights-based groups, and academic researchers, as well as media articles. It is hoped that this will serve to draw greater attention to the issue and encourage further methodological research on the effects of dam construction (and associated large-scale irrigation schemes) on migration flows and on the ultimate experience of displacement and resettlement for environmental migrants in the region.

Keywords: forced displacement, voluntary resettlement, migration, human rights, human security, land grabs, dams, commercial agriculture, pastoralism, ecosystem modification, natural resource conflict, livelihoods, development

Procedia PDF Downloads 381
454 Overview of Environmental and Economic Theories of the Impact of Dams in Different Regions

Authors: Ariadne Katsouras, Andrea Chareunsy

Abstract:

The number of large hydroelectric dams in the world has increased from almost 6,000 in the 1950s to over 45,000 in 2000. Dams are often built to increase the economic development of a country, and this can occur in several ways. Large dams take many years to build, so the construction process employs many people for a long time, and the increased production and income can flow on into other sectors of the economy. Additionally, the provision of electricity can help raise people's living standards, and if the electricity is sold to another country, the money can be used to provide other public goods for the residents of the country that owns the dam. Dams are also built to control flooding and provide irrigation water, and most dams are of these types. This paper gives an overview of the environmental and economic theories of the impact of dams in different regions of the world. There is a difference in the degree of environmental and economic impacts due to the varying climates and varying social and political factors of the regions. Production of greenhouse gases from the dam's reservoir, for instance, tends to be higher in tropical areas than in Nordic environments. However, there are also common impacts due to the construction of the dam itself, such as flooding of land for the creation of the reservoir and displacement of local populations. Economically, the local population tends to benefit least from the construction of the dam. Additionally, if a foreign company owns the dam or the government subsidises the cost of electricity to businesses, then the funds from electricity production do not benefit the residents of the country the dam is built in. So, in the end, dams can benefit a country economically, but the varying factors related to their construction, and how these are dealt with, determine the level of benefit, if any. Some of the theories or practices used to evaluate the potential value of a dam include cost-benefit analysis, environmental impact assessments and regression analysis; systems analysis is also a useful method. While these approaches have value, they also have possible shortcomings. Cost-benefit analysis converts all costs and benefits to dollar values, which can be problematic. Environmental impact assessments, likewise, can be incomplete, especially if the assessment does not include feedback effects, that is, if it only considers the initial impact. Finally, regression analysis is dependent on the available data and again would not necessarily include feedbacks. Systems analysis is a method that allows more complex modelling of the environment and the economic system; it would allow a clearer picture of the impacts to emerge and can include a long time frame.
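The cost-benefit logic referred to above reduces, in its simplest form, to discounting streams of benefits and costs to a net present value; the sketch below uses entirely hypothetical figures and a hypothetical discount rate.

```python
# Minimal sketch of the cost-benefit logic mentioned above: discounting a
# stream of annual benefits against construction and environmental costs to a
# net present value. All figures and the discount rate are hypothetical.

def npv(cash_flows, rate):
    """Net present value of a list of (year, amount) cash flows."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

construction = [(y, -500e6 / 5) for y in range(5)]          # 5-year build
benefits = [(y, 80e6) for y in range(5, 45)]                # electricity sales
environmental = [(y, -10e6) for y in range(5, 45)]          # ongoing external costs

print(f"NPV: {npv(construction + benefits + environmental, 0.06) / 1e6:.1f} M$")
```

The well-known criticism summarized in the paragraph above is that any figure not easily expressed in dollars, such as displacement or ecosystem loss, either has to be monetized somehow or drops out of the calculation entirely.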

Keywords: comparison, economics, environment, hydroelectric dams

Procedia PDF Downloads 197
453 Population Dynamics of Cyprinid Fish Species (Mahseer: Tor Species) and Its Conservation in Yamuna River of Garhwal Region, India

Authors: Davendra Singh Malik

Abstract:

India is one of the mega-biodiversity countries in the world, contributing about 11.72% of global fish diversity. The Yamuna river is the longest tributary of the Ganga river system, providing natural habitat for the fish diversity of the Himalayan region of the Indian subcontinent. Several hydropower dams and barrages have been constructed at different locations on the major rivers of the Garhwal region. These dams pose a major ecological threat to the existing freshwater ecosystems by altering water flows, interrupting ecological connectivity, and fragmenting the habitats of native riverine fish species. Mahseer (Indian carp) of the genus Tor are large cyprinids endemic to continental Asia, popularly known as 'game or sport fishes'; they have continued to be decimated by the fragmentation of natural habitats caused by damming of the river flow and are categorized as threatened fishes of India. Twenty-four freshwater fish species were recorded from the Yamuna river. The present catch data reveal that mahseer (Tor tor and Tor putitora) contributed about 32.5%, 25.6% and 18.2% of the catch in the upper, middle and lower riverine stretches of the Yamuna, respectively. The dominant length range of mahseer in the catch composition was 360-450 mm. The CPUE (catch per unit effort) of mahseer also indicated a sharp decline in fish biomass and changes in growth pattern, sex ratio and maturity stages. Only 12.5-14.8% of female mahseer brooders showed mature phases during the breeding months, and the fecundity of mature female brooders ranged from 2,500 to 4,500 ova. The present status of the mahseer fishery is attributed to overexploitation in the Yamuna river. The mahseer population is shrinking continuously in the downstream reaches of the Yamuna due to the cumulative effects of various ecological stresses. A mahseer conservation programme has been implemented as 'in situ fish conservation' to enhance the viable population size of mahseer species and to restore the loss of mahseer germplasm in the Yamuna river of the Garhwal Himalayan region.
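The CPUE index referred to above is simply catch divided by fishing effort per river stretch; a minimal sketch with invented placeholder values follows.

```python
# Minimal sketch of the catch-per-unit-effort (CPUE) index referred to above,
# computed per river stretch. Catch weights and effort hours are invented
# placeholder values, not survey data from the Yamuna.

surveys = {
    "upper":  {"catch_kg": 46.0, "effort_hours": 120.0},
    "middle": {"catch_kg": 31.5, "effort_hours": 118.0},
    "lower":  {"catch_kg": 18.2, "effort_hours": 125.0},
}

for stretch, s in surveys.items():
    cpue = s["catch_kg"] / s["effort_hours"]   # kg per hour of fishing effort
    print(f"{stretch}: CPUE = {cpue:.3f} kg/h")
```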

Keywords: conservation practice, population dynamics, tor fish species, Yamuna River

Procedia PDF Downloads 255