Search results for: treatment methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21405

1845 Manufacturing and Calibration of Material Standards for Optical Microscopy in Industrial Environments

Authors: Alberto Mínguez-Martínez, Jesús De Vicente Y Oliva

Abstract:

The trend in industrial environments is toward the miniaturization of systems and materials and the fabrication of parts at the micro- and nano-scale. The problem arises when manufacturers want to study the quality of their production. This characteristic is becoming crucial due to the evolution of industry and the development of Industry 4.0. As Industry 4.0 is based on digital models of production and processes, having accurate measurements becomes essential. At this point, the field of metrology plays an important role, as it is a powerful tool to ensure more stable production, reducing scrap and the cost of non-conformities. The most widespread measuring instruments that allow us to carry out accurate measurements at these scales are optical microscopes, whether they are traditional, confocal, or focus variation microscopes, profile projectors, or any other similar measurement system. However, the accuracy of measurements depends on their traceability to the SI unit of length (the meter). Providing adequate traceability for 2D and 3D dimensional measurements at the micro- and nano-scale in industrial environments is a problem that is still being studied and does not have a unique answer. In addition, if commercial material standards for the micro- and nano-scale are considered, two main problems arise: on the one hand, those material standards that could be considered complete and very interesting do not provide traceability of dimensional measurements; on the other hand, their calibration is very expensive. This situation implies that these kinds of standards will not succeed in industrial environments and, as a result, manufacturers will work in the absence of traceability. To solve this problem in industrial environments, it becomes necessary to have material standards that are easy to use, agile, adaptable to different forms, cheap to manufacture and, of course, traceable to the definition of the meter by simple methods. By using these 'customized standards', it would be possible to adapt and design measuring procedures for each application, and manufacturers would work with some traceability. It is important to note that, although this traceability is clearly incomplete, this situation is preferable to working in the absence of it. Recently, the versatility and utility of laser technology and other additive manufacturing (AM) technologies for manufacturing customized material standards have been demonstrated. In this paper, the authors propose to manufacture a customized material standard using an ultraviolet laser system, together with a method to calibrate it. To conclude, the results of the calibration carried out in an accredited dimensional metrology laboratory are presented.
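
As an illustration of the traceability language used above, a calibration certificate from an accredited laboratory typically reports an expanded uncertainty following the GUM approach; the decomposition below is a generic sketch of that calculation, not the authors' specific uncertainty budget:

u_c(l) = \sqrt{\sum_{i=1}^{n} c_i^{2}\, u^{2}(x_i)}, \qquad U = k \cdot u_c(l) \quad (k = 2 \ \text{for a coverage probability of about } 95\%)

Here u(x_i) are the standard uncertainties of the individual contributions (reference instrument, repeatability, temperature, etc.) and c_i their sensitivity coefficients; traceability to the meter requires each contribution to be linked back to a national length standard.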

Keywords: industrial environment, material standards, optical measuring instrument, traceability

Procedia PDF Downloads 122
1844 Predicting the Effect of Vibro Stone Column Installation on Performance of Reinforced Foundations

Authors: K. Al Ammari, B. G. Clarke

Abstract:

Soil improvement using the vibro stone column technique consists of two main parts: (1) the installed load-bearing columns of well-compacted, coarse-grained material and (2) the improvement of the surrounding soil due to vibro compaction. Extensive research work has been carried out over the last 20 years to understand the improvement in composite foundation performance due to the second part mentioned above. Nevertheless, few of these studies have tried to quantify some of the key design parameters, namely the changes in the stiffness and stress state of the treated soil, or have considered these parameters in the design and calculation process. Consequently, empirical and conservative design methods are still being used by ground improvement companies, with a significant variety of results in engineering practice. A two-dimensional finite element study was performed using PLAXIS 2D AE to develop an axisymmetric model of a single stone column reinforced foundation and to quantify the effect of the vibro installation of this column in soft saturated clay. Settlement and bearing performance were studied as an essential part of the design and calculation of the stone column foundation. Particular attention was paid to the large deformation in the soft clay around the installed column caused by the lateral expansion, so the updated-mesh advanced option was used in the analysis. In this analysis, different degrees of stone column lateral expansion were simulated and numerically analyzed, and then the changes in the stress state, stiffness, settlement performance, and bearing capacity were quantified. It was found that the application of radial expansion produces a horizontal stress in the soft clay mass that gradually decreases as the distance from the stone column axis increases. The excess pore pressure due to the undrained conditions starts to dissipate immediately after the column installation is finished, allowing the horizontal stress to relax. Changes in the coefficient of lateral earth pressure K*, which is very important in representing the stress state, and the new stiffness distribution in the reinforced clay mass were estimated. More encouraging results showed that increasing the expansion during column installation has a noticeable effect on improving the bearing capacity and reducing the settlement of the reinforced ground. A design method should therefore include this significant effect of the applied lateral displacement during stone column installation in simulation and numerical analysis.
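
For context on the parameter K* discussed above, the coefficient of lateral earth pressure is the ratio of effective horizontal to effective vertical stress, and its at-rest value for a normally consolidated soil is commonly estimated with Jaky's formula; these are standard soil mechanics relations, not equations quoted from the paper:

K = \frac{\sigma'_h}{\sigma'_v}, \qquad K_0 = 1 - \sin\varphi'

Column installation pushes K in the surrounding clay above K_0; quantifying this increase as a function of the imposed radial expansion is exactly what the numerical study estimates.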

Keywords: bearing capacity, design, installation, numerical analysis, settlement, stone column

Procedia PDF Downloads 374
1843 Effect of Fermentation on the Bioavailability of Some Fruit Extracts

Authors: Kubra Ozkan, Osman Sagdic

Abstract:

To better understand the benefits of fresh and fermented fruits for human health, their behavior in human metabolism and their bioavailability must be known. In this study, brine with 10% salt content, sugar, and vinegar (5% acetic acid) was added to fruits (Prunus domestica L. and Prunus amygdalus Batsch) in different formulations. Samples were stored at 20±2˚C for 21 days for fermentation. The effects of in vitro digestion on the bioactive compounds in the fresh and fermented fruits (Prunus domestica L. and Prunus amygdalus Batsch) were determined. Total phenolic compounds, total flavonoid compounds, and antioxidant capacities were measured for post-gastric (PG), IN (with small intestinal absorbers), and OUT (without small intestinal absorbers) samples obtained from in vitro gastric and intestinal digestion. Bioactive compounds and antioxidant capacity were determined spectrophotometrically. Antioxidant capacity was tested by the CUPRAC method, the total phenolic content (TPC) was determined by the Folin-Ciocalteu method, and the total flavonoid content (TFC) was determined by the aluminium trichloride (AlCl3) method. While the antioxidant capacities of fresh Prunus domestica L. and Prunus amygdalus Batsch samples were 2.21±0.05 mg TEAC/g and 4.39±0.02 mg TEAC/g, these values for the fermented fruits were found to be 2.37±0.08 mg TEAC/g and 5.38±0.07 mg TEAC/g, respectively. While the total phenolic contents of the fresh fruits, namely Prunus domestica L. and Prunus amygdalus Batsch, were 0.51±0.01 mg GAE/g and 5.56±0.01 mg GAE/g, these values for the fermented fruits were found to be 0.52±0.01 mg GAE/g and 6.81±0.03 mg GAE/g, respectively. While the total flavonoid contents of fresh Prunus domestica L. and Prunus amygdalus Batsch samples were 0.19±0.01 mg CAE/g and 2.68±0.02 mg CAE/g, these values for the fermented fruits were found to be 0.20±0.01 mg CAE/g and 2.93±0.02 mg CAE/g, respectively. This study showed that the phenolic and flavonoid compounds and antioxidant capacities of the samples increased during the fermentation process. As a result of digestion, the amounts of bioactive components decreased in the stomach and intestinal environment. The bioavailability values of the phenolic compounds in fresh and fermented Prunus domestica L. fruits are 40.89% and 43.28%, respectively, and in fresh and fermented Prunus amygdalus Batsch fruits, 4.27% and 3.82%, respectively. The bioavailability values of the flavonoid compounds in fresh and fermented Prunus domestica L. fruits are 5.32% and 19.98%, respectively, and in fresh and fermented Prunus amygdalus Batsch fruits, 2.22% and 1.53%, respectively. The bioavailability values of the antioxidant capacity in fresh and fermented Prunus domestica L. fruits are 33.06% and 33.51%, respectively, and in fresh and fermented Prunus amygdalus Batsch fruits, 14.50% and 15.31%, respectively. The fermentation process decreased the bioavailability of Prunus amygdalus Batsch but increased that of Prunus domestica. When the two fruits are compared, the bioavailability of Prunus domestica is higher than that of Prunus amygdalus Batsch.
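
A note on the arithmetic: bioavailability here is presumably the fraction of a compound recovered in the IN (absorbed) digestion fraction relative to the undigested sample. The sketch below illustrates that calculation with the fresh Prunus domestica phenolic value from the abstract; the IN value is back-calculated for illustration and is not reported data:

def bioavailability_percent(in_fraction, undigested):
    # bioavailability (%) = absorbed (IN) amount / undigested amount * 100
    return 100.0 * in_fraction / undigested

tpc_fresh = 0.51   # mg GAE/g, undigested fresh P. domestica (from the abstract)
tpc_in = 0.2085    # mg GAE/g, hypothetical IN fraction
print(round(bioavailability_percent(tpc_in, tpc_fresh), 2))  # -> 40.88, close to the reported 40.89%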

Keywords: bioactivity, bioavailability, fermented, fruit, nutrition

Procedia PDF Downloads 161
1842 Optimal Capacitors Placement and Sizing Improvement Based on Voltage Reduction for Energy Efficiency

Authors: Zilaila Zakaria, Muhd Azri Abdul Razak, Muhammad Murtadha Othman, Mohd Ainor Yahya, Ismail Musirin, Mat Nasir Kari, Mohd Fazli Osman, Mohd Zaini Hassan, Baihaki Azraee

Abstract:

Energy efficiency can be realized by minimizing the power loss while a sufficient amount of energy is delivered in an electrical distribution system. In this report, a detailed analysis of the energy efficiency of an electrical distribution system was carried out with an implementation of optimal capacitor placement and sizing (OCPS). Particle swarm optimization (PSO) is used to determine the optimal location and sizing of the capacitors, whereby minimizing energy consumption and power losses improves the energy efficiency. In addition, a certain number of busbars or locations are identified in advance, before the PSO is performed to solve the OCPS problem. In this case study, candidate busbars or locations are pre-selected using techniques such as the power-loss-index (PLI). The PSO is designed to provide a new population with improved sizing and location of capacitors. The total cost of power losses, energy consumption, and capacitor installation are the components considered in the objective and fitness functions of the proposed optimization technique. The voltage magnitude limit, total harmonic distortion (THD) limit, power factor limit, and capacitor size limit are the constraints of the proposed optimization technique. In this research, the proposed methodologies implemented in the MATLAB® software transfer the information to, execute the three-phase unbalanced load flow solution in, and then collect the results from the three-phase unbalanced electrical distribution systems modeled in the SIMULINK® software. The effectiveness of the proposed methods for improving energy efficiency has been verified through several case studies, with results obtained from the IEEE 13-bus unbalanced electrical distribution test system and from a practical electrical distribution system model of the Sultan Salahuddin Abdul Aziz Shah (SSAAS) government building in Shah Alam, Selangor.
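
A minimal sketch of the PSO loop for capacitor sizing at pre-selected buses is given below; evaluate_cost is a hypothetical stand-in for the paper's three-phase load-flow evaluation of loss, energy, and installation cost, and all parameter values are illustrative only:

import numpy as np

rng = np.random.default_rng(0)
n_buses = 3          # candidate buses pre-selected, e.g., by the PLI
n_particles, n_iter = 30, 100
q_max = 1200.0       # kvar, capacitor size limit (constraint)

def evaluate_cost(q):
    # Placeholder for the load-flow-based objective: total cost of power
    # losses + energy consumption + capacitor installation, with the
    # voltage/THD/power-factor constraints handled as penalty terms.
    return np.sum((q - 400.0) ** 2) + 0.05 * np.sum(q)

x = rng.uniform(0, q_max, (n_particles, n_buses))   # capacitor sizes (kvar)
v = np.zeros_like(x)
pbest = x.copy()
pbest_f = np.array([evaluate_cost(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]

w, c1, c2 = 0.7, 1.5, 1.5    # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0, q_max)                     # enforce the size limit
    f = np.array([evaluate_cost(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print(gbest)   # best capacitor sizes found for the candidate buses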

Keywords: particle swarm optimization, pre-determination of capacitor locations, optimal capacitors placement and sizing, unbalanced electrical distribution system

Procedia PDF Downloads 434
1841 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows

Authors: Thomas Rowan, Mohammed Seaid

Abstract:

A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. This enables the user to solve a number of complex sediment-based problems, including (but not limited to) dam-break over an erodible bed, recirculation currents, and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model, mass exchange between the layers is allowed and, in contrast to previous models, both sediment and fluid are able to transfer between layers. In the current study, we use a two-step finite volume method to avoid the solution of the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step, the governing equations are rewritten in a non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second stage, the numerical fluxes are reconstructed in conservative form and are used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than other comparable finite volume methods, and it also exhibits good shock capturing. Most entrainment and deposition equations use a near-bed concentration factor, which leads to inaccuracies in both the near-bed concentration and the total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. An additional advantage of this multilayer approach is the variation (absent from single-layer models) in bottom-layer fluid velocity: this dramatically reduces erosion, which is often overestimated in simulations of this nature using single-layer flows. The model is used to simulate a standard dam break. In the dam-break simulation, as expected, the number of fluid layers used creates variation in the resultant bed profile, with more layers giving a larger deviation in fluid velocity. These results showed a marked variation in erosion profiles from standard models. Overall, the model provides new insight into the problems presented, at minimal computational cost.
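
To make the finite-volume setting concrete, here is a heavily simplified single-layer, fixed-bed, 1-D dam-break sketch using a Lax-Friedrichs flux; the paper's actual scheme is multi-layered with erosion, mass exchange, and a two-step characteristics/conservative update, none of which is reproduced here:

import numpy as np

g = 9.81
nx, L = 200, 10.0
dx = L / nx
x = (np.arange(nx) + 0.5) * dx

# Dam-break initial condition: still water with a step in depth
h = np.where(x < L / 2, 2.0, 1.0)
q = np.zeros(nx)               # discharge q = h*u

def flux(h, q):
    u = q / h
    return np.array([q, q * u + 0.5 * g * h ** 2])

t, t_end = 0.0, 0.5
while t < t_end:
    u = q / h
    dt = 0.4 * dx / np.max(np.abs(u) + np.sqrt(g * h))   # CFL condition
    U = np.array([h, q])
    F = flux(h, q)
    # Lax-Friedrichs numerical flux at the interior cell interfaces
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * dx / dt * (U[:, 1:] - U[:, :-1])
    U[:, 1:-1] -= dt / dx * (Fi[:, 1:] - Fi[:, :-1])     # conservative update
    h, q = U
    t += dt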

Keywords: erosion, finite volume method, sediment transport, shallow water equations

Procedia PDF Downloads 217
1840 A Double Ended AC Series Arc Fault Location Algorithm Based on Currents Estimation and a Fault Map Trace Generation

Authors: Edwin Calderon-Mendoza, Patrick Schweitzer, Serge Weber

Abstract:

Series arc faults appear frequently and unpredictably in low-voltage distribution systems. Many methods have been developed to detect this type of fault, and commercial protection devices such as AFCIs (arc fault circuit interrupters) have been used successfully in electrical networks to prevent damage and catastrophic incidents such as fires. However, these devices do not allow series arc faults to be located on the line in operating mode. This paper presents a location algorithm for series arc faults in a low-voltage indoor power line in an AC 230 V-50 Hz home network. The method is validated through simulations using the MATLAB software. The fault location method uses the electrical parameters (resistance, inductance, capacitance, and conductance) of a 49 m indoor power line. The mathematical model of a series arc fault is based on the analysis of the V-I characteristics of the arc and consists basically of two antiparallel diodes and DC voltage sources. In a first step, the arc fault model is inserted at several different positions along the line, which is modeled using lumped parameters. At both ends of the line, currents and voltages are recorded for each arc fault generated at a different distance. In the second step, a fault map trace is created using signature coefficients obtained from Kirchhoff equations, which allow a virtual decoupling of the line's mutual capacitance. Each signature coefficient, obtained from the subtraction of estimated currents, is calculated taking into account the discrete fast Fourier transform of the currents and voltages as well as the fault distance value. These parameters are then substituted into the Kirchhoff equations. In a third step, the same procedure described previously to calculate the signature coefficients is employed, but this time considering hypothetical fault distances at which the fault could appear; in this step the fault distance is unknown. The iterative calculation from the Kirchhoff equations, considering stepped variations of the fault distance, yields a curve with a linear trend. Finally, the fault distance is estimated at the intersection of the two curves obtained in steps 2 and 3. The series arc fault model is validated by comparing simulated currents with real recorded currents. The model of the complete circuit is obtained for a 49 m line with a resistive load. Also, 11 different arc fault positions are considered for the fault map trace generation. By carrying out the complete simulation, the performance of the method and the perspectives of this work are presented.
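
The antiparallel-diode arc model described above reduces, in its simplest idealization, to a voltage drop of fixed magnitude whose sign follows the line current; the sketch below illustrates that behavior for a sinusoidal current (the 30 V arc voltage is an illustrative figure, not a value from the paper):

import numpy as np

V_ARC = 30.0                     # assumed DC source magnitude of the arc model (V)
f, n = 50, 1000                  # 50 Hz mains, samples per period
t = np.linspace(0, 1 / f, n)
i_line = 10.0 * np.sin(2 * np.pi * f * t)   # line current (A), illustrative

# Two antiparallel diodes + DC sources: the arc drops +V_ARC when the
# current is positive and -V_ARC when it is negative, producing the
# characteristic flat-topped, distorted arc voltage waveform.
v_arc = V_ARC * np.sign(i_line)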

Keywords: indoor power line, fault location, fault map trace, series arc fault

Procedia PDF Downloads 137
1839 Challenges and Pitfalls of Nutrition Labeling Policy in Iran: A Policy Analysis

Authors: Sareh Edalati, Nasrin Omidvar, Arezoo Haghighian Roudsari, Delaram Ghodsi, Azizollaah Zargaran

Abstract:

Background and aim: Improving consumers' food choices and providing a healthy food environment are among the essential approaches by which governments can prevent non-communicable diseases and fulfill the sustainable development goals (SDGs). The present study aimed to provide an analysis of the nutrition labeling policy, one of the main components of a healthy food environment, to provide lessons for the country and for other low- and middle-income countries. Methods: Data were collected by reviewing documents and conducting semi-structured interviews with stakeholders. Respondents were selected through purposive and snowball sampling, which continued until data saturation. MAXQDA software was used to manage the data analysis. Deductive content analysis was used, applying the Kingdon multiple streams and policy triangle frameworks. Results: Iran is the first country in the Middle East and North Africa region to have implemented nutrition traffic light labeling. The implementation process has gone through two phases: voluntary and mandatory. In the voluntary phase, food manufacturers who chose to carry the labels received an honorary logo, which helped to gradually reduce food-sector resistance. After this phase, traffic light labeling became mandatory. Despite these efforts, there has been poor involvement of the media in public awareness and sensitization. Also, the inconsistency between the traffic light colors, which are based on food standard guidelines, and the healthy or unhealthy nature of some food products, such as olive oil and diet cola, as well as the absence of a comprehensive evaluation plan, were among the pitfalls and policy challenges identified. Conclusions: Strengthening governance through improved collaboration between health and non-health sectors in implementation, greater transparency and truthfulness of the nutrition traffic light labels, starting with real ingredient data, and the application of international and local scientific evidence in any further revision of the program are recommended. Also, developing public awareness campaigns and revising school curricula to improve students' skills in using nutrition labels should be strongly emphasized.

Keywords: nutrition labeling, policy analysis, food environment, Iran

Procedia PDF Downloads 192
1838 Equilibrium, Kinetic and Thermodynamic Studies of the Biosorption of Textile Dye (Yellow Bemacid) onto Brahea edulis

Authors: G. Henini, Y. Laidani, F. Souahi, A. Labbaci, S. Hanini

Abstract:

Environmental contamination is a major problem facing society today. Industrial, agricultural, and domestic wastes, due to the rapid development of technology, are discharged into several receivers. Generally, this discharge is directed to the nearest water sources, such as rivers, lakes, and seas. While the rates of development and waste production are not likely to diminish, efforts to control and dispose of wastes are appropriately rising. Wastewaters from textile industries represent a serious problem all over the world. They contain different types of synthetic dyes, which are known to be a major source of environmental pollution in terms of both the volume of dye discharged and the effluent composition. From an environmental point of view, the removal of synthetic dyes is of great concern. Among several chemical and physical methods, adsorption is a promising technique due to its ease of use and low cost compared to other applications in the process of discoloration, especially if the adsorbent is inexpensive and readily available. The focus of the present study was to assess the potential of Brahea edulis (BE) for the removal of the synthetic dye Yellow Bemacid (YB) from aqueous solutions. The results obtained here may transfer to other dyes with a similar chemical structure. Biosorption studies were carried out under various parameters such as adsorbent mass, pH, contact time, initial dye concentration, and temperature. The biosorption kinetic data of the material (BE) were tested by the pseudo-first-order and pseudo-second-order kinetic models. Thermodynamic parameters, including the Gibbs free energy ΔG, enthalpy ΔH, and entropy ΔS, revealed that the adsorption of YB on BE is feasible, spontaneous, and endothermic. The equilibrium data were analyzed using the Langmuir, Freundlich, Elovich, and Temkin isotherm models. The experimental results show that the percentage of biosorption increases with an increase in the biosorbent mass (0.25 g: 12 mg/g; 1.5 g: 47.44 mg/g). The maximum biosorption occurred at around a pH value of 2 for YB. The equilibrium uptake increased with an increase in the initial dye concentration in solution (Co = 120 mg/l; q = 35.97 mg/g). The biosorption kinetic data were properly fitted by the pseudo-second-order kinetic model. The best fit was obtained with the Langmuir model, with a high correlation coefficient (R² > 0.998) and a maximum monolayer adsorption capacity of 35.97 mg/g for YB.
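
For readers unfamiliar with the isotherm fitting step, the sketch below fits the Langmuir model q_e = q_max·K_L·C_e/(1 + K_L·C_e) to equilibrium data with SciPy; the data points are invented for illustration and are not the study's measurements:

import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q_max, kl):
    # q_e = q_max * K_L * C_e / (1 + K_L * C_e)
    return q_max * kl * ce / (1 + kl * ce)

# Hypothetical equilibrium data: Ce (mg/l) vs qe (mg/g)
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 120.0])
qe = np.array([8.1, 13.9, 21.4, 28.3, 33.0, 34.8])

(q_max, kl), _ = curve_fit(langmuir, ce, qe, p0=[30.0, 0.05])
print(f"q_max = {q_max:.2f} mg/g, K_L = {kl:.4f} l/mg")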

Keywords: adsorption, Brahea edulis, isotherm, yellow Bemacid

Procedia PDF Downloads 177
1837 The Impact of Streptococcus pneumoniae Colonization on Viral Bronchiolitis

Authors: K. Genise, S. Murthy

Abstract:

Introductory Statement: The results of this retrospective chart review suggest that the effects of bacterial colonization in critically ill children with viral bronchiolitis, currently unproven, are clinically insignificant. Background: Viral bronchiolitis is one of the most prevalent causes of illness requiring hospitalization among children worldwide and one of the most common reasons for admission to pediatric intensive care. It has been hypothesized that co-infection with bacteria results in more severe clinical outcomes. Conversely, the effects of bacterial colonization in critically ill patients with bronchiolitis are poorly defined. Current clinical management of colonized patients consists primarily of supportive therapies, with the role of antibiotics remaining controversial. Methods: A retrospective review of all critically ill children admitted to the BC Children's Hospital Pediatric Intensive Care Unit (PICU) from 2014-2017 with a diagnosis of bronchiolitis was performed. Routine testing in this time frame consisted of complete pathogen testing, including PCR for Streptococcus pneumoniae. Analyses were performed to determine the impact of bacterial colonization and antibiotic use on a primary outcome of PICU length of stay, with secondary outcomes of hospital length of stay and duration of ventilation. Results: There were 92 patients with complete pathogen testing performed during the assessed time frame. A comparison between children with detected Streptococcus pneumoniae (n=22) and those without (n=70) revealed no significant (p=0.20) difference in severity of illness on presentation as per Pediatric Risk of Mortality III scores (mean=3.0). Patients colonized with S. pneumoniae had significantly shorter PICU stays (p=0.002), hospital stays (p=0.0001), and durations of non-invasive ventilation (p=0.002). Multivariate analyses revealed that the effects on length of PICU stay and duration of ventilation do not persist after controlling for antibiotic use, presence of radiographic consolidation, age, and severity of illness (p=0.15, p=0.32). The relationship between colonization and duration of hospital stay persists after controlling for these variables (p=0.008). Conclusions: Children with viral bronchiolitis colonized with S. pneumoniae do not appear to have significantly different PICU lengths of stay or durations of ventilation compared to children who are not colonized. Colonized children appear to have shorter hospital stays. The results of this study suggest bacterial colonization is not associated with increased severity of presenting illness or negative clinical outcomes.
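
The covariate adjustment described above can be sketched as a regression model; the snippet below, with a tiny invented dataset and hypothetical column names, is one plausible way to reproduce such an analysis and is not the authors' actual code:

import pandas as pd
import statsmodels.formula.api as smf

# One row per patient; all values are fabricated for illustration.
df = pd.DataFrame({
    "picu_los": [2.1, 3.5, 1.0, 4.2, 2.8, 5.0, 1.7, 3.9],   # days
    "colonized": [1, 0, 1, 0, 1, 0, 0, 1],                  # S. pneumoniae PCR
    "antibiotics": [0, 1, 0, 1, 1, 1, 0, 0],
    "consolidation": [0, 1, 0, 1, 0, 1, 0, 0],              # radiographic
    "age_months": [4, 10, 2, 18, 7, 24, 3, 12],
    "prism3": [3, 4, 2, 6, 3, 7, 2, 5],                     # severity score
})

model = smf.ols(
    "picu_los ~ colonized + antibiotics + consolidation + age_months + prism3",
    data=df,
).fit()
print(model.summary())   # the p-value on 'colonized' tests the adjusted effect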

Keywords: bronchiolitis, colonization, critical care, pediatrics, pneumococcal, infection

Procedia PDF Downloads 515
1836 Urban River as Living Infrastructure: Tidal Flooding and Sea Level Rise in a Working Waterway in Hampton Roads, Virginia

Authors: William Luke Hamel

Abstract:

Urban flooding caused by tidal fluctuations and sea-level rise has been inadequately conceptualized by metrics of resilience and methods of flow modeling. While a great deal of research has been devoted to the effects of urbanization on pluvial flooding, the kind of tidal flooding experienced by locations like Hampton Roads, Virginia, has not been adequately conceptualized as a result of human factors such as urbanization and gray infrastructure. Resilience to sea-level rise and its associated flooding has been pioneered in the region with the 2015 Norfolk Resilience Plan from 100 Resilient Cities as well as the 2016 Norfolk Vision 2100 plan, which envisions different patterns of land use for the city. Urban resilience still conceptualizes the city as having the ability to maintain an equilibrium in the face of disruptions. This economic and social equilibrium relies on the Elizabeth River, narrowly conceptualized. Intentionally or accidentally, the river was made into a piece of infrastructure. Its development was meant to serve the docks, shipyards, naval yards, and port infrastructure that give the region so much of its economic life. Inasmuch as it functions to permit the movement of cargo; the raising and lowering of ships to be repaired, commissioned, or decommissioned; or the provisioning of military vessels, the river as infrastructure is functioning properly. The idea that the infrastructure is malfunctioning when high tides and sea-level rise create flooding is predicated on the idea that the infrastructure is truly a human creation and can be controlled. The natural flooding cycles of an urban river, combined with the action of climate change and sea-level rise, are only abnormal insofar as they encroach on the development that first encroached on the river. The urban political ecology of water provides the ability to view the river as an infrastructural extension of urban networks while also calling for its emancipation from stationarity and human control. Understanding the river and city as a hydrosocial territory or as a socio-natural system liberates both actors from the duality of the natural and the social while repositioning river flooding as a normal part of coexistence on a floodplain. This paper argues for the adoption of an urban political ecology lens in the analysis and governance of urban rivers like the Elizabeth River as a departure from the equilibrium-seeking and stability metrics of urban resilience.

Keywords: urban flooding, political ecology, Elizabeth River, Hampton Roads

Procedia PDF Downloads 169
1835 Comparison of Effectiveness When Ketamine Was Used as an Adjuvant in Intravenous Patient-Controlled Analgesia Used to Control Cancer Pain

Authors: Donghee Kang

Abstract:

Background: Cancer pain is very difficult to control, as the mechanisms of pain are varied and patients have several co-morbidities. The use of intravenous patient-controlled analgesia (IV-PCA) can effectively control both underlying pain and breakthrough pain. Ketamine is used in many pain patients due to its unique analgesic effect. In this study, we examined whether there was a difference in analgesic usage, degree of pain control, and side effects between patients who controlled pain with fentanyl-based IV-PCA and those who had Ketamine added for pain control. Methods: Among the patients referred to this department for cancer pain, IV-PCA was applied to those whose pain could not be controlled despite taking sufficient oral analgesics, or who had blood clotting disorders that made interventional procedures difficult; this patient group was targeted. For the IV-PCA, 3000 mcg of Fentanyl, 160 mg of Nefopam, and 0.3 mg of Ramosetrone were mixed with normal saline to a total volume of 100 ml. Group F used this IV-PCA as is, while group K additionally mixed in 250 mg of Ketamine, with normal saline to a total volume of 100 ml. The IV-PCA basal rate was 0.5 ml/h, the bolus was set to 1 ml per demand, and the lockout time was set to 15 minutes. If pain was not controlled after IV-PCA application, 500 mcg of Fentanyl was added, and if excessive sedation or breathing difficulties occurred, the infusion was stopped for one hour. The degree of daily pain control, analgesic usage, and side effects were then investigated over seven days of IV-PCA use. Results: There was no difference between the two groups in demographic data. Both groups had adequate pain control. Initial morphine milligram equivalents did not differ between the two groups, but the total amount of Fentanyl used over seven days differed significantly (p=0.014), with group F using more Fentanyl through the IV-PCA. In addition, the amount of sleeping pills used during the seven days was higher in group F (p<0.01). Overall, there was no difference in the frequency of side effects between the two groups, but nausea was more frequent in group F (p=0.031). Discussion: When the two groups were compared, pain control was good in both. This seems to be because fentanyl-based IV-PCA provided an adequate pain control effect. However, there was a significant difference in the total amount of opioid (Fentanyl) used, which is thought to reflect the opioid-sparing effect of Ketamine. Also, among the side effects, nausea was significantly less frequent, which is plausible because the amount of opioids used in the Ketamine group was smaller. The frequency of requests for sleeping pills was significantly lower in the group using Ketamine, so Ketamine also appears to have helped improve sleep quality. In conclusion, using Ketamine together with an opioid to control pain seems to have advantages. IV-PCA, which can be used effectively when other procedures are difficult, is more effective and safer when used together with Ketamine than with opioids alone.
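
The delivered opioid dose follows directly from the stated pump settings; the arithmetic below is a simple sketch of what those settings imply (it assumes every allowed demand dose is used, which real patients rarely do):

fentanyl_mcg, volume_ml = 3000, 100
conc = fentanyl_mcg / volume_ml          # 30 mcg fentanyl per ml of PCA mix

basal_ml_h, bolus_ml, lockout_min = 0.5, 1.0, 15
basal_mcg_h = basal_ml_h * conc          # 15 mcg/h continuous infusion
bolus_mcg = bolus_ml * conc              # 30 mcg per demand dose
max_boluses_h = 60 // lockout_min        # at most 4 demands per hour
max_mcg_h = basal_mcg_h + max_boluses_h * bolus_mcg
print(max_mcg_h)                         # 135 mcg/h worst-case delivery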

Keywords: cancer pain, intravenous patient-controlled analgesia, Ketamine, opioid

Procedia PDF Downloads 82
1834 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards

Authors: Hanna Schübel, Ivo Wallimann-Helmer

Abstract:

In order to reach the goal of the Paris Agreement not to overshoot 1.5°C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. Like all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments. As these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think that this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper, we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in meeting net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated and not be the target of NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions. At each level of emissions, different subjects are to be assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that emissions within one's fair share do not demand individual mitigation efforts. The same holds with regard to individuals and the baseline level of emissions necessary to appear in public in their societies without shame. Individuals are only under a duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies whose level of emissions required to appear in public without shame exceeds the individual fair share are under a duty to foster emission reductions and may not legitimately achieve them by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to NETs to achieve net zero emissions demands technology not affordable to individuals, there are also no full individual responsibilities to achieve net zero emissions. This is mainly a responsibility of societies as a whole.

Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility

Procedia PDF Downloads 120
1833 The Implementation of a Nurse-Driven Palliative Care Trigger Tool

Authors: Sawyer Spurry

Abstract:

Problem: Palliative care providers at an academic medical center in Maryland stated that medical intensive care unit (MICU) patients are often referred late in their hospital stay. The MICU has performed well below the hospital quality performance metric that 80% of patients who expire with expected outcomes should have received a palliative care consult within 48 hours of admission. Purpose: The purpose of this quality improvement (QI) project is to increase palliative care utilization in the MICU through the implementation of a nurse-driven palliative trigger tool to prompt specialty palliative care consults. Methods: MICU nursing staff and providers received education concerning the implications of underused palliative care services and the literature supporting the use of nurse-driven palliative care tools as a means of increasing utilization of palliative care. MICU-population-specific palliative trigger criteria (the Palliative Care Trigger Tool) were formulated by the QI implementation team, the palliative care team, and the patient care services department. Nursing staff were asked to assess patients daily for the presence of palliative triggers using the Palliative Care Trigger Tool and to present findings during bedside rounds. MICU providers were asked to consult palliative medicine given the presence of palliative triggers, following interdisciplinary rounds. Rates of palliative consult, given the presence of triggers, were collected via an electronic medical record data pull, de-identified, and recorded in the data collection tool. Preliminary Results: Over 140 MICU registered nurses were educated on the palliative trigger initiative, along with 8 nurse practitioners, 4 intensivists, 2 pulmonary critical care fellows, and 2 palliative medicine physicians. Over 200 patients were admitted to the MICU and screened for palliative triggers during the 15-week implementation period. Primary outcomes showed an increase in palliative care consult rates for patients presenting with triggers, a decreased mean time from admission to palliative consult, and increased recognition of unmet palliative care needs by MICU nurses and providers. Conclusions: The anticipated findings of this QI project suggest a positive correlation between utilizing palliative care trigger criteria and decreased time to palliative care consult. The direct outcomes of effective palliative care are decreased length of stay, healthcare costs, and moral distress, as well as improved symptom management and quality of life (QOL).

Keywords: palliative care, nursing, quality improvement, trigger tool

Procedia PDF Downloads 195
1832 Role of Total Neoadjuvant Therapy in Sphincter Preservation in Locally Advanced Rectal Cancer: A Case Series

Authors: Arpit Gite

Abstract:

Purpose: We evaluated the role of total neoadjuvant therapy in patients with locally advanced rectal cancer, giving chemoradiotherapy followed by consolidation chemotherapy (CRT-CNCT) and, after that, a watch-and-wait strategy. Methods: In this prospective case series, we evaluated the results of three locally advanced rectal cancers: two cases Stage II (cT3N0) and one case Stage III (cT4aN2). In all three patients, the growth was 4-6 cm from the anal verge. Patients were treated with chemoradiotherapy to a dose of 45 Gy in 25 fractions to the elective nodal regions (including inguinal nodes in case of anal canal involvement) and to the primary and mesorectum (Phase I), followed by 14.4 Gy in 8 fractions to the primary and mesorectum (Phase II), for a total dose of 59.4 Gy in 33 fractions, with concurrent chemotherapy (Tab Capecitabine 825 mg/m2 PO BD during radiation therapy). Six weeks after completion of chemoradiotherapy, six cycles of consolidative chemotherapy were advised using the CAPEOX regimen: Oxaliplatin 130 mg/m2 on day 1 and Capecitabine 1000 mg/m2 PO BD on days 1-14, repeated on a 21-day cycle. The primary endpoint is disease-free survival (DFS); the secondary endpoint is adverse events related to chemoradiotherapy. Radiation toxicity was assessed by RTOG criteria, and chemotherapy toxicity by the Common Terminology Criteria for Adverse Events (CTCAE) Version 5.0. Results: Six weeks after completion of chemoradiotherapy, PET-CT of all three patients showed a clinically complete response, and six cycles of consolidative chemotherapy were advised. After completion of consolidative chemotherapy, repeat PET-CT and sigmoidoscopy showed a complete response on PET-CT and no lesions on sigmoidoscopy in all three patients, who were then kept on watch and wait. Two patients had Grade 2 skin toxicity and 1 patient had Grade 1 skin toxicity; 2 patients had Grade 2 lower GI toxicity and 1 patient had Grade 1 lower GI toxicity, both according to RTOG criteria. Three patients had Grade 2 diarrhea due to capecitabine, and 1 patient had Grade 1 thrombocytopenia due to oxaliplatin, assessed by the Common Terminology Criteria for Adverse Events (CTCAE) Version 5.0. Conclusion: Sphincter preservation is possible with this regimen in patients who do not want to opt for surgery or in cases of low-lying rectal cancer.
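
Since both drugs are dosed by body surface area (BSA), the per-dose arithmetic can be sketched as below; the height and weight are hypothetical example values, and the Mosteller formula is one common BSA estimate, not necessarily the one used in the study:

import math

def bsa_mosteller(height_cm, weight_kg):
    # Mosteller formula: BSA (m^2) = sqrt(height * weight / 3600)
    return math.sqrt(height_cm * weight_kg / 3600.0)

bsa = bsa_mosteller(170, 70)                 # hypothetical patient: ~1.82 m^2
capecitabine_per_dose = 825 * bsa            # mg, PO BD with radiation
oxaliplatin_day1 = 130 * bsa                 # mg IV, day 1 of each CAPEOX cycle
print(round(bsa, 2), round(capecitabine_per_dose), round(oxaliplatin_day1))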

Keywords: locally advanced rectal cancer, sphincter preservation, chemoradiotherapy, consolidative chemotherapy

Procedia PDF Downloads 41
1831 Lung Tissue Damage under Diesel Exhaust Exposure: Modification of Proteins, Cells and Functions in Just 14 Days

Authors: Ieva Bruzauskaite, Jovile Raudoniute, Karina Poliakovaite, Danguole Zabulyte, Daiva Bironaite, Ruta Aldonyte

Abstract:

Introduction: Air pollution is a growing global problem which has been shown to be responsible for various adverse health outcomes. Immunotoxicity, such as dysregulated inflammation, has been proposed as one of the main mechanisms in air pollution-associated diseases. Chronic obstructive pulmonary disease (COPD) is among the major causes of morbidity and mortality worldwide and is characterized by persistent airflow limitation caused by small airways disease (obstructive bronchiolitis) and irreversible parenchymal destruction (emphysema). The exact pathways explaining air pollution-induced and -mediated disease states are still not clear. However, modern societies understand the dangers of polluted air, seek to mitigate its effects, and are in need of reliable biomarkers of air pollution. We hypothesise that post-translational modifications of structural proteins, e.g. citrullination, might be a good candidate biomarker. Thus, we designed this study, in which mice were exposed to diesel exhaust and the resulting protein modifications and inflammation in lung and other tissues were assessed. Materials and Methods: To assess the effects of diesel exhaust, an in vivo study was designed. Mice (n=10) were subjected to an everyday 2-hour exposure to diesel exhaust for 14 days. Control mice were treated the same way without diesel exhaust. The effects within lung and other tissues were assessed by immunohistochemistry of formalin-fixed and paraffin-embedded tissues. Levels of inflammation- and citrullination-related markers were investigated, and levels of parenchymal damage were also measured. Results: The in vivo study corroborates our own in vitro data and reveals a diesel-exhaust-initiated inflammatory shift and modulation of lung peptidyl arginine deiminase 4 (PAD4), a citrullination-associated enzyme. In addition, high levels of citrulline were observed in exposed lung tissue sections, co-localising with increased parenchymal destruction. Conclusions: Subacute exposure to diesel exhaust renders mouse lungs inflammatory and modifies certain structural proteins. Such structural changes of proteins may pave a pathway to loss or gain of function of the affected molecules and also propagate autoimmune processes within the lung and systemically.

Keywords: air pollution, citrullination, in vivo, lungs

Procedia PDF Downloads 156
1830 Reasons for Food Losses and Waste in Basic Production of Meat Sector in Poland

Authors: Sylwia Laba, Robert Laba, Krystian Szczepanski, Mikolaj Niedek, Anna Kaminska-Dworznicka

Abstract:

Meat and meat products are considered the food products with the most unfavorable effect on the environment, which requires rational management of these products and of the waste originating throughout the whole chain of manufacture, processing, transport, and trade of meat. From the economic and environmental viewpoints, it is important to limit food losses and waste across the whole meat sector. The basic production link includes obtaining raw meat, i.e., animal breeding, management, and transport of animals to the slaughterhouse. Food is any substance or product intended to be consumed by humans. For the needs of the present studies, it was determined when the raw material is considered a food: it is the moment when the animals are prepared for loading with the aim of being transported to a slaughterhouse and used for food purposes. The aim of the studies was to determine the reasons for loss generation in the basic production of the meat sector in Poland during the years 2017-2018. The studies on food losses and waste in basic production in the meat sector covered two areas: red meat, i.e., pork and beef, and poultry meat. The studies of basic production were conducted in the period March-May 2019 across the whole country on a representative sample of 278 farms, including 102 in pork production, 55 in beef production, and 121 in poultry meat production. The surveys were carried out using questionnaires by the PAPI (Paper & Pen Personal Interview) method; the pollsters conducted direct questionnaire interviews. The research results indicate that no losses were recorded during the preparation, loading, and transport of animals to the slaughterhouse in 33% of the visited farms. In the farms where losses were indicated, crushing and suffocation occurring during the production of pigs, beef cattle, and poultry were the main reasons for these losses, constituting about 40% of the reported reasons. The stress generated by loading and transport accounted for 16-17% (depending on the season of the year) of the reported loss causes. In the case of poultry production, an additional 10.7% of losses in 2017 were caused by inappropriate conditions of loading and transportation, compared with 11.8% in 2018. Diseases were one of the reasons for losses in pork and beef production (7% of the losses). The losses and waste generated during livestock production and in meat processing and trade cannot be managed or recovered; they have to be disposed of. It is, therefore, important to prevent and minimize the losses throughout the whole production chain. This is possible by introducing appropriate measures, connected mainly with appropriate conditions and methods of animal loading and transport.

Keywords: food losses, food waste, livestock production, meat sector

Procedia PDF Downloads 144
1829 Direct Phoenix Identification and Antimicrobial Susceptibility Testing from Positive Blood Culture Broths

Authors: Waad Al Saleemi, Badriya Al Adawi, Zaaima Al Jabri, Sahim Al Ghafri, Jalila Al Hadhramia

Abstract:

Objectives: Using standard laboratory methods, a positive blood culture requires a minimum of two days (two occasions of overnight incubation) to obtain a final identification (ID) and antimicrobial susceptibility testing (AST) report. In this study, we aimed to evaluate the accuracy and precision of identification and antimicrobial susceptibility testing by an alternative method (the direct method) that reduces the turnaround time by 24 hours. This method involves the direct inoculation of positive blood culture broths into the Phoenix system using serum separation tubes (SST). Method: This prospective study included monomicrobial positive blood cultures obtained from January 2022 to May 2023 at SQUH. Blood cultures containing a mixture of organisms, fungi, or anaerobic organisms were excluded from this study. The results of the new direct method under study were compared with those of the current standard method used in the lab. The accuracy and precision were evaluated for ID and AST using Clinical and Laboratory Standards Institute (CLSI) recommendations. The categorical agreement, essential agreement, and the rates of very major errors (VME), major errors (ME), and minor errors (mE) for both gram-negative and gram-positive bacteria were calculated. Passing criteria were set according to CLSI. Results: The results of ID and AST were available for a total of 158 isolates. Of 77 isolates of gram-negative bacteria, 71 (92%) were correctly identified at the species level. Of 70 isolates of gram-positive bacteria, 47 (67%) were correctly identified. For gram-negative bacteria, the essential agreement of the direct method was ≥92% compared to the standard method, while the categorical agreement was ≥91% for all tested antibiotics. The precision of ID and AST was noted to be 100% for all tested isolates. For gram-positive bacteria, the essential agreement was >93%, while the categorical agreement was >92% for all tested antibiotics except moxifloxacin. Several antibiotics were noted to have an unacceptably high rate of very major errors, including penicillin, cotrimoxazole, clindamycin, ciprofloxacin, and moxifloxacin. However, no error was observed in the results for vancomycin, linezolid, and daptomycin. Conclusion: The direct method of ID and AST for positive blood cultures using SST is reliable for gram-negative bacteria. It will significantly decrease the turnaround time and will facilitate antimicrobial stewardship.
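
For reference, the agreement and error-rate definitions used in such CLSI-style method comparisons can be computed as in the sketch below; the S/I/R calls are invented for illustration and are not the study's data:

# S/I/R calls from the reference (standard) and direct methods
reference = ["S", "S", "R", "R", "S", "I", "R", "S"]   # hypothetical
direct    = ["S", "R", "R", "S", "S", "S", "R", "S"]   # hypothetical

n = len(reference)
ca = sum(r == d for r, d in zip(reference, direct)) / n * 100   # categorical agreement

# VME: reference R called S (as a percentage of resistant isolates)
# ME:  reference S called R (as a percentage of susceptible isolates)
# mE:  any disagreement involving an I call (percentage of all isolates)
n_r = sum(r == "R" for r in reference)
n_s = sum(r == "S" for r in reference)
vme = sum(r == "R" and d == "S" for r, d in zip(reference, direct)) / n_r * 100
me = sum(r == "S" and d == "R" for r, d in zip(reference, direct)) / n_s * 100
mne = sum((r != d) and ("I" in (r, d)) for r, d in zip(reference, direct)) / n * 100

print(f"CA {ca:.1f}%  VME {vme:.1f}%  ME {me:.1f}%  mE {mne:.1f}%")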

Keywords: bloodstream infection, Oman, direct AST, blood culture, rapid identification, antimicrobial susceptibility, Phoenix, direct inoculation

Procedia PDF Downloads 64
1828 Development of an Asset Database to Enhance the Circular Business Models for the European Solar Industry: A Design Science Research Approach

Authors: Ässia Boukhatmi, Roger Nyffenegger

Abstract:

The expansion of solar energy as a means to address the climate crisis is undisputed, but the increasing number of new photovoltaic (PV) modules being put on the market is simultaneously leading to increased challenges in managing the growing waste stream. Many of the discarded modules are still fully functional but are often damaged by improper handling after disassembly or are not properly tested to be considered for a second life. In addition, the collection rate for dismantled PV modules in several European countries is only a fraction of previous projections, partly due to the increased number of illegal exports. The underlying problem behind these market imperfections is insufficient data exchange between the different actors along the PV value chain, as well as the limited traceability of PV panels during their lifetime. As part of the Horizon 2020 project CIRCUSOL, an asset database prototype was developed to tackle the described problems. In an iterative process applying the design science research methodology, different business models, as well as the technical implementation of the database, were established and evaluated. To explore the requirements of different stakeholders for the development of the database, surveys and in-depth interviews were conducted with various representatives of the solar industry. The proposed database prototype maps the entire value chain of PV modules, beginning with the digital product passport, which provides information about the materials and components contained in every module. Product-related information can then be expanded with performance data of existing installations. This information forms the basis for the application of data analysis methods to forecast the appropriate end-of-life strategy, as well as the circular economy potential of PV modules, even before they arrive at the recycling facility. The database prototype has already been enriched with data from different data sources along the value chain. From a business model perspective, the database offers opportunities both in the area of reuse and with regard to the certification of sustainable modules. Here, participating actors have the opportunity to differentiate their business and exploit new revenue streams. Future research can apply this approach to further industry and product sectors and validate the database prototype in a practical context; the approach can also serve as a basis for standardization efforts to strengthen the circular economy.
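
As a sketch of what a digital-product-passport record in such an asset database might contain, consider the data model below; all field names and the decision rule are hypothetical illustrations, not the CIRCUSOL schema:

from dataclasses import dataclass, field

@dataclass
class PerformanceRecord:
    year: int
    kwh_produced: float          # measured output of the installation

@dataclass
class ProductPassport:
    module_id: str               # unique identifier traced over the lifetime
    manufacturer: str
    materials: dict[str, float]  # material -> mass in kg (e.g., glass, silicon)
    components: list[str]        # e.g., cells, backsheet, junction box
    performance: list[PerformanceRecord] = field(default_factory=list)

    def end_of_life_hint(self) -> str:
        # Toy decision rule standing in for the data-driven forecast of
        # the appropriate end-of-life strategy (reuse vs. recycle).
        if self.performance and self.performance[-1].kwh_produced > 200.0:
            return "candidate for second life (reuse)"
        return "route to recycling"

panel = ProductPassport("PV-0001", "ExampleSolar", {"glass": 10.2, "silicon": 0.6},
                        ["cells", "backsheet", "junction box"])
panel.performance.append(PerformanceRecord(2023, 260.0))
print(panel.end_of_life_hint())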

Keywords: business model, circular economy, database, design science research, solar industry

Procedia PDF Downloads 128
1827 Experimental Study on Bending and Torsional Strength of Bulk Molding Compound Seat Back Frame Part

Authors: Hee Yong Kang, Hyeon Ho Shin, Jung Cheol Yoo, Il Taek Lee, Sung Mo Yang

Abstract:

Lightweight technology using composites is being developed for vehicle seat structures, and its design must meet safety requirements. According to the Federal Motor Vehicle Safety Standard (FMVSS) 207 seating systems test procedure, a rearward moment load is applied to the seat back frame structure for the safety evaluation of the vehicle seat. The seat back frame using composites is divided into three parts, following the manufacturing process: an upper frame and left and right side frames. When a rear moment load is applied to the seat back frame, the side frames receive a bending load and a torsional load at the same time and therefore carry the largest loads, so a strength test at the component level is required. In this study, a component test method based on the FMVSS 207 seating systems test procedure was proposed for the strength analysis under bending load and torsional load of an automotive Bulk Molding Compound (BMC) seat back side frame. Moreover, strength evaluation according to carbon band reinforcement was performed. The seat back side frame parts used in the test were manufactured from BMC composed of a vinyl ester matrix and short carbon fibers. Two kinds of parts, carbon-band-reinforced and non-reinforced, were then formed through a high-temperature compression molding process. In addition, the structure used in the component test was constructed by referring to FMVSS 207. The bending load and the torsional load were then applied through displacement control to perform the strength test for four load conditions. The results of each test are shown through the load-displacement curves of the specimens. The failure strength of the parts as affected by the carbon band reinforcement was analyzed. Additionally, the fracture characteristics of the parts in the four strength tests were evaluated, and the weak points of the seat back side frame structure were identified according to the test conditions. Through the bending and torsional strength test methods, we confirmed the strength and fracture characteristics of the BMC seat back side frame according to the carbon band reinforcement, and we proposed a component strength test method for a vehicle seat back frame that can meet FMVSS 207.
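
For orientation, the rearward moment load in the FMVSS 207 procedure is simply the applied force multiplied by its moment arm about the seating reference point; the relation below is generic, the 373 N·m value is the commonly cited FMVSS 207 seat back requirement rather than a figure from this paper, and the 0.4 m arm is an assumed example:

M = F \cdot d \quad\Rightarrow\quad F = \frac{M}{d} = \frac{373\ \mathrm{N\,m}}{0.4\ \mathrm{m}} \approx 933\ \mathrm{N}

In a displacement-controlled component test such as the one described, the load-cell force and the measured arm give the equivalent applied moment at each displacement step.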

Keywords: seat back frame, bending and torsional strength, BMC (Bulk Molding Compound), FMVSS 207 seating systems

Procedia PDF Downloads 210
1826 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays

Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir

Abstract:

Due to their various advantages compared to many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced performance of the MNs, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, in this work, the optimized design of dissolvable MNAs, as well as their manufacturing, is investigated. For this purpose, a mechanical model of a single MN, having the geometry of an obelisk, is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip, caused by the reaction force when penetrating the skin. Then, a multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is performed to obtain geometrical properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is to reach painless and effortless penetration into the skin while minimizing mechanical failure caused by the maximum stress occurring throughout the structure. Based on the obtained optimal design parameters, master (male) molds are then fabricated from PMMA using a mechanical micromachining process. This fabrication method is selected mainly due to its geometric capability, production speed, production cost, and the variety of materials that can be used. Then, to remove any chip residues, the master molds are cleaned ultrasonically. These fabricated master molds can then be used repeatedly to fabricate polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also be used to fabricate MNAs that include bioactive cargo. To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be precisely fabricated using the presented fabrication methodology and that the fabricated MNAs effectively pierce the skin without failure.
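
The NSGA-II setup can be sketched as below, assuming the pymoo library; the two closed-form objectives are crude placeholders for the finite element model used in the paper, and all bounds and proxy expressions are illustrative assumptions:

import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class MicroneedleProblem(ElementwiseProblem):
    def __init__(self):
        # x = [base width (um), apex angle (deg), fillet radius (um)]
        super().__init__(n_var=3, n_obj=2,
                         xl=np.array([100.0, 20.0, 5.0]),
                         xu=np.array([400.0, 60.0, 50.0]))

    def _evaluate(self, x, out, *args, **kwargs):
        width, apex, fillet = x
        f1 = apex * width / 1e3      # proxy for insertion force (painless penetration)
        f2 = 1e4 / (width * fillet)  # proxy for peak stress (mechanical failure)
        out["F"] = [f1, f2]

res = minimize(MicroneedleProblem(), NSGA2(pop_size=50),
               ("n_gen", 100), seed=1, verbose=False)
print(res.X, res.F)   # Pareto-optimal designs and their objective values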

Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis

Procedia PDF Downloads 113
1825 Multi-Size Continuous Particle Separation on a Dielectrophoresis-Based Microfluidics Chip

Authors: Arash Dalili, Hamed Tahmouressi, Mina Hoorfar

Abstract:

Advances in lab-on-a-chip (LOC) devices have led to significant advances in the manipulation, separation, and isolation of particles and cells. Among the different active and passive particle manipulation methods, dielectrophoresis (DEP) has been proven to be a versatile mechanism as it is label-free, cost-effective, simple to operate, and has high manipulation efficiency. DEP has been applied to a wide range of biological and environmental applications. A popular form of DEP device performs continuous manipulation of particles using co-planar slanted electrodes and a sheath flow that focuses the particles onto one side of the microchannel. When particles enter the DEP manipulation zone, the negative DEP (nDEP) force generated by the slanted electrodes deflects the particles laterally towards the opposite side of the microchannel. The lateral displacement of the particles depends on multiple parameters, including the geometry of the electrodes, the width, length, and height of the microchannel, the size of the particles, and the throughput. In this study, COMSOL Multiphysics® modeling is used alongside experimental studies to investigate the effect of the aforementioned parameters. The electric field between the electrodes and the induced DEP force on the particles are modeled in COMSOL Multiphysics®. The simulation model is used to show the effect of the DEP force on the particles and how the geometry of the electrodes (the width of the electrodes and the gap between them) plays a role in the manipulation of polystyrene microparticles. The simulation results show that increasing the electrode width up to a certain limit, which depends on the height of the channel, increases the induced DEP force. Also, decreasing the gap between the electrodes leads to a stronger DEP force. Based on these results, criteria for the fabrication of the electrodes were established, and soft lithography was used to fabricate interdigitated slanted electrodes and microchannels. Experiments were run to find the effect of the flow rate, geometrical parameters of the microchannel (length, width, and height), and the electrodes' angle on the displacement of 5 µm, 10 µm, and 15 µm polystyrene particles. An empirical equation was developed to predict the displacement of the particles under different conditions. It is shown that the displacement of the particles is greater for longer and shallower channels, lower flow rates, and larger particles, whereas the effect of the electrode angle on the displacement was negligible. Based on the results, we have developed an optimum design (in terms of efficiency and throughput) for three-size separation of particles.
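
For orientation, the classical point-dipole expression F_DEP = 2πr³ε_m·Re[K(ω)]·∇|E_rms|², with K the Clausius-Mossotti factor, can be evaluated directly; the sketch below does so for a polystyrene bead in an aqueous medium. The material values are typical textbook numbers and the field-gradient term is assumed to come from a FEM model; none of these are the study's reported parameters.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cm_factor(eps_p, sig_p, eps_m, sig_m, freq):
    """Clausius-Mossotti factor for a homogeneous sphere."""
    w = 2 * np.pi * freq
    ep = eps_p * EPS0 - 1j * sig_p / w   # complex particle permittivity
    em = eps_m * EPS0 - 1j * sig_m / w   # complex medium permittivity
    return (ep - em) / (ep + 2 * em)

# Polystyrene bead in a low-conductivity aqueous medium (typical values)
r = 5e-6                       # particle radius, m (10 um bead)
K = cm_factor(eps_p=2.55, sig_p=1e-3, eps_m=78, sig_m=1e-2, freq=1e6)
grad_E2 = 1e13                 # |grad(E_rms^2)|, V^2/m^3 (assumed from FEM)

F_dep = 2 * np.pi * r**3 * 78 * EPS0 * K.real * grad_E2
print(f"Re[K] = {K.real:.2f}, F_DEP = {F_dep:.2e} N")  # negative -> nDEP
```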

Keywords: COMSOL Multiphysics, dielectrophoresis, microfluidics, particle separation

Procedia PDF Downloads 186
1824 Facile Wick and Oil Flame Synthesis of High-Quality Hydrophilic Carbon Nano Onions for Flexible Binder-Free Supercapacitor

Authors: Debananda Mohapatra, Subramanya Badrayyana, Smrutiranjan Parida

Abstract:

Carbon nano-onions (CNOs) are spherical graphitic nanostructures composed of concentric shells of graphitic carbon and can be regarded as an intermediate state between fullerenes and graphite. These important members of the fullerene family, also known as multi-shelled fullerenes, can be envisioned as promising supercapacitor electrodes with high energy and power density, as their curvature provides ions easy access to the electrode-electrolyte interface. Despite this excellent electrochemical potential, reports on CNOs as electrodes remain sparse, owing to their limited availability and the lack of convenient methods for their high-yield preparation and purification. Keeping these pressing issues in mind, we present a facile, scalable, and straightforward flame synthesis method for pure and highly dispersible CNOs uncontaminated by any other forms of carbon; hence, a post-processing purification procedure is not necessary. To the best of our knowledge, this is the first report of an extremely simple, lightweight, inexpensive, flexible, free-standing pristine CNO electrode fabricated without any binder. A locally available, everyday cotton wipe was used to fabricate this electrode by a 'dipping and drying' process, providing outstanding stretchability and mechanical flexibility with strong adhesion between the CNOs and the porous wipe. The specific capacitance of 102 F/g, energy density of 3.5 Wh/kg, and power density of 1224 W/kg at a 20 mV/s scan rate are the highest values recorded and reported so far for a symmetrical two-electrode cell configuration with 1 M Na2SO4 electrolyte, indicating well-chosen synthesis conditions and an optimum pore size in agreement with the electrolyte ion size. This free-standing CNO electrode also showed excellent cyclic performance and stability, retaining 95% of its original capacity after 5000 charge-discharge cycles. Furthermore, this unique method not only affords a binder-free, free-standing electrode but also provides a general way of fabricating such multifunctional, promising CNO-based nanocomposites for potential device applications in flexible solar cells and lithium-ion batteries.
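
For reference, energy and power density follow from textbook relations for a symmetric two-electrode cell; the sketch below reproduces numbers of the reported order from the 102 F/g capacitance under an assumed voltage window and discharge time. Both the 0.5 V window and the 10 s discharge time are illustrative assumptions, not values stated in the abstract.

```python
def energy_density_wh_kg(c_f_per_g, window_v):
    """E = C*V^2/2, converted from J/g to Wh/kg (divide by 3.6)."""
    return 0.5 * c_f_per_g * window_v**2 / 3.6

def power_density_w_kg(e_wh_kg, discharge_s):
    """Average power as energy delivered over the discharge time."""
    return e_wh_kg * 3600.0 / discharge_s

e = energy_density_wh_kg(102.0, 0.5)   # ~3.5 Wh/kg for C = 102 F/g
p = power_density_w_kg(e, 10.0)        # ~1275 W/kg for a 10 s discharge
print(f"E = {e:.1f} Wh/kg, P = {p:.0f} W/kg")
```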

Keywords: binder-free, flame synthesis, flexible, carbon nano-onion

Procedia PDF Downloads 204
1823 Simultaneous Detection of Cd⁺², Fe⁺², Co⁺², and Pb⁺² Heavy Metal Ions by Stripping Voltammetry Using Polyvinyl Chloride Modified Glassy Carbon Electrode

Authors: Sai Snehitha Yadavalli, K. Sruthi, Swati Ghosh Acharyya

Abstract:

Heavy metal ions are toxic to humans and all living species when exposure is in large quantities or over long durations. Though Fe acts as a nutrient, it becomes toxic when the intake is large. These toxic heavy metal ions, when consumed through water, cause many disorders and harm all flora and fauna through biomagnification. In humans they cause innumerable diseases, ranging from skin and gastrointestinal to neurological disorders, and in higher quantities they even cause cancer. Detection of these toxic heavy metal ions in water is thus important. Traditionally, the detection of heavy metal ions in water has been done by techniques like Inductively Coupled Plasma Mass Spectrometry (ICP-MS) and Atomic Absorption Spectroscopy (AAS). Though these methods offer accurate quantitative analysis, they require expensive equipment and cannot be used for on-site measurements. Anodic stripping voltammetry is a good alternative, as the equipment is affordable and measurements can be made at river basins or lakes. In the current study, Square Wave Anodic Stripping Voltammetry (SWASV) was used to detect the heavy metal ions in water. The literature reports various electrode materials for the deposition of heavy metal ions, such as bismuth and polymers. The working electrode used in this study is a polyvinyl chloride (PVC) modified glassy carbon electrode (GCE), with an Ag/AgCl reference electrode and a platinum counter electrode. A Biologic SP 300 potentiostat was used for the experiments. Through this work, four heavy metal ions were successfully detected simultaneously, and the influence of modifying the GCE with PVC was studied in comparison with the unmodified GCE. The simultaneous detection of Cd⁺², Fe⁺², Co⁺², and Pb⁺² heavy metal ions was done using the PVC-modified GCE, prepared by drop casting 1 wt.% PVC dissolved in tetrahydrofuran (THF) onto the GCE. The concentration of each heavy metal ion was 0.2 mg/L, and the scan rate was 0.1 V/s. Detection parameters like pH, scan rate, temperature, and deposition time were optimized. PVC clearly increased the sensitivity and selectivity of detection, as the current values were higher and the peaks better defined for the PVC-modified GCE than for the unmodified GCE.
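
Quantification in SWASV typically rests on a linear calibration of baseline-corrected stripping peak current against concentration; the sketch below shows such a fit for a single ion using hypothetical standards (all values are illustrative, not the study's data).

```python
import numpy as np

# Hypothetical SWASV calibration: stripping peak current vs concentration
conc_mg_L = np.array([0.05, 0.10, 0.20, 0.40, 0.80])   # standards
peak_uA   = np.array([0.9, 1.8, 3.5, 7.1, 14.0])       # baseline-corrected

slope, intercept = np.polyfit(conc_mg_L, peak_uA, 1)   # linear fit
# The slope is the method's sensitivity in uA per mg/L.

# Quantify an unknown sample from its measured peak current
unknown_peak = 3.6
unknown_conc = (unknown_peak - intercept) / slope
print(f"Sensitivity: {slope:.1f} uA/(mg/L); "
      f"unknown ~ {unknown_conc:.2f} mg/L")
```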

Keywords: cadmium, cobalt, electrochemical sensing, glassy carbon electrodes, heavy metal ions, iron, lead, polyvinyl chloride, potentiostat, square wave anodic stripping voltammetry

Procedia PDF Downloads 103
1822 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics about economic growth have little in common in their preferences for institutional arrangements that solve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School's concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce, and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are for each side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part reviews the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance like markets, the rule of law, and federalism are an important part of economic growth. Part four delves into the more nuanced question of why a stagnant steady-state economy, or even an economy that de-grows, will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach requiring direct governmental control, a contrasting bottom-up approach is advanced here. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and provides an institutionalized discovery process for new solutions to the problem of ecological collective action, no matter whether one belongs to the Simons or the Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 183
1821 The Impact of Entrepreneurship Education on the Entrepreneurial Tendencies of Students: A Quasi-Experimental Design

Authors: Lamia Emam

Abstract:

The attractiveness of entrepreneurship education stems from its perceived value as a venue through which students can develop an entrepreneurial mindset, skill set, and practice, which may not necessarily lead to them starting a new business but could, more importantly, manifest as a life skill applicable to all types of organizations and career endeavors. This, in turn, raises important questions about what happens in our classrooms: our role as educators, the role of students, the center of learning, and the instructional approach, all of which eventually contribute to achieving the desired EE outcomes. With application to an undergraduate entrepreneurship course, Entrepreneurship as Practice, the current paper aims to explore the effect of entrepreneurship education on the development of students' general entrepreneurial tendencies. Towards that purpose, the researcher uses a pre-test and post-test quasi-experimental research design in which the Durham University General Enterprising Tendency Test (GET2) is administered to the same group of students before and after course delivery. As designed and delivered, the Entrepreneurship as Practice module is a highly applied and experiential course in which students are required to develop an idea for a start-up while practicing the entrepreneurship-related knowledge, mindset, and skills taught in class, both individually and in groups. The course is delivered using a combination of short lectures, readings, group discussions, case analysis, guest speakers, and, more importantly, a series of activities inspired by diverse methods for developing successful and innovative business ideas, including design thinking, lean startup, and business feasibility analysis. The instructional approach particularly aims at developing the students' critical thinking, reflective, analytical, and creativity-based problem-solving skills needed to launch one's own start-up. The analysis and interpretation of the experiment's outcomes incorporate the views of both the educator and the students. As presented, the study responds to the rising call for the application of experimental designs in entrepreneurship in general and EE in particular. While doing so, the paper presents an educator's perspective of EE to complement the dominant stream of research, which is constrained to the students' point of view. Finally, the study sheds light on EE in the MENA region, where the study is applied.
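
For a same-group pre-test/post-test design like this one, a paired-samples t-test is the standard comparison; the sketch below runs one on hypothetical GET2 scores (the scores are illustrative, not the study's data).

```python
import numpy as np
from scipy import stats

# Hypothetical GET2 total scores for the same students before and after
pre  = np.array([29, 31, 25, 34, 28, 30, 27, 33, 26, 32])
post = np.array([33, 34, 27, 36, 31, 35, 28, 37, 30, 34])

t, p = stats.ttest_rel(post, pre)                    # paired-samples t-test
diff = post - pre
d = diff.mean() / diff.std(ddof=1)                   # Cohen's d for pairs
print(f"t = {t:.2f}, p = {p:.4f}, Cohen's d = {d:.2f}")
```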

Keywords: entrepreneurship education, andragogy and heutagogy, scholarship of teaching and learning, experiment

Procedia PDF Downloads 127
1820 Effect of Lifestyle Modification for Two Years on Obesity and Metabolic Syndrome Components in Elementary Students: A Community-Based Trial

Authors: Bita Rabbani, Hossein Chiti, Faranak Sharifi, Saeedeh Mazloomzadeh

Abstract:

Background: Lifestyle modifications, especially improving nutritional patterns and increasing physical activity, are the most important factors in preventing obesity and metabolic syndrome in children and adolescents. For this purpose, the following interventional study was designed to investigate the effects of educational programs for students, as well as changes in diet and physical activity, on obesity and the components of metabolic syndrome. Methods: This study is part of an interventional research project conducted on all students of Sama schools in Zanjan and Abhar at three levels (elementary, middle, and high school), including 1000 individuals in Zanjan (intervention group) and 1000 individuals in Abhar (control group) in 2011. Interventions were based on educating students, teachers, and parents, changes in food services, and physical activity. We measured anthropometric indices, fasting blood sugar, lipid profiles, and blood pressure, and completed standard nutrition and physical activity questionnaires. Blood insulin levels were also measured in a random subset of students. Data analysis was done with SPSS software, version 16.0. Results: Overall, 589 individuals (252 male, 337 female) entered the intervention group, and 803 individuals (344 male, 459 female) entered the control group. After two years of intervention, mean waist circumference (63.8 ± 10.9) and diastolic BP (63.8 ± 10.4) were significantly lower, whereas mean systolic BP (101.0 ± 12.5), food score (25.0 ± 5.0), and drinking score (12.1 ± 2.3) were higher in the intervention group (p<0.001). Comparing the components of metabolic syndrome between the second year and the time of recruitment within the intervention group showed that although the numbers of overweight/obese individuals and of individuals with hypertriglyceridemia and high LDL increased, abdominal obesity, high BP, hyperglycemia, and insulin resistance decreased (p<0.001). In the control group, by contrast, the number of individuals with high BP increased significantly. Conclusion: The prevalence of abdominal obesity and hypertension, two major components of metabolic syndrome, is much higher in our study than in other regions of the country. However, interventions to modify diet and increase physical activity are effective in lowering their prevalence.
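
A comparison of prevalences between the two groups, such as high BP at follow-up, can be tested with a chi-square test of independence; the sketch below uses hypothetical counts that are merely consistent with the reported group sizes, not the study's actual data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table at follow-up (rows: group; cols: high BP yes / no)
table = np.array([[ 52, 537],    # intervention group (n = 589)
                  [104, 699]])   # control group (n = 803)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```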

Keywords: metabolic syndrome, obesity, lifestyle, nutrition, hypertension

Procedia PDF Downloads 67
1819 New Recombinant Netrin-a Protein of Lucilia Sericata Larvae by Bac to Bac Expression Vector System in Sf9 Insect Cell

Authors: Hamzeh Alipour, Masoumeh Bagheri, Abbasali Raz, Javad Dadgar Pakdel, Kourosh Azizi, Aboozar Soltani, Mohammad Djaefar Moemenbellah-Fard

Abstract:

Background: Maggot debridement therapy is an appropriate, effective, and controlled method of treating wounds using sterilized larvae of Lucilia sericata (L. sericata). Netrin-A is an enzyme of the laminin family secreted from the salivary glands of L. sericata, with a central role in neural regeneration and angiogenesis. This study aimed to produce a new recombinant Netrin-A protein of L. sericata larvae with the baculovirus expression vector system (BEVS) in Sf9 cells. Material and methods: In the first step, the gene structure was subjected to in silico studies, which included determination of antibacterial activity, prion formation risk, homology modeling, molecular docking analysis, and optimization of the recombinant protein. In the second step, the Netrin-A gene was cloned and amplified in the pTG19 vector. After digestion with BamHI and EcoRI restriction enzymes, it was cloned into the pFastBac HTA vector. It was then transformed into DH10Bac competent cells, and the recombinant bacmid was subsequently transfected into insect Sf9 cells. The expressed recombinant Netrin-A was purified on Ni-NTA agarose and evaluated using SDS-PAGE and western blotting. Finally, its concentration was calculated with the Bradford assay. Results: The bacmid vector structure carrying Netrin-A was successfully constructed and then expressed as Netrin-A protein in the Sf9 cell line. The molecular weight of this protein is 52 kDa, with 404 amino acids. The in silico studies predicted that recombinant LSNetrin-A has antibacterial activity and no prion formation risk. The molecule has a high binding affinity to Neogenin and a lower affinity to the DCC-specific receptors. The signal peptide is located between amino acids 24 and 25. The concentration of the Netrin-A recombinant protein was calculated to be 48.8 μg/ml, and it was confirmed that the gene characterized in our previous study encodes the L. sericata Netrin-A enzyme. Conclusions: The recombinant Netrin-A, a protein secreted in L. sericata salivary glands, was successfully generated. Because L. sericata larvae are used in larval therapy, the findings of the present study could be useful to researchers in future studies on wound healing.
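
The Bradford concentration reported here is typically read off a BSA standard curve; the sketch below fits a hypothetical curve and interpolates a sample absorbance. All readings are illustrative; only the ~48.8 μg/ml result comes from the study.

```python
import numpy as np

# Hypothetical Bradford standard curve: BSA standards vs absorbance at 595 nm
std_ug_ml = np.array([0, 25, 50, 100, 200])
a595      = np.array([0.00, 0.12, 0.24, 0.46, 0.88])

slope, intercept = np.polyfit(std_ug_ml, a595, 1)  # linear standard curve

# Interpolate the recombinant Netrin-A sample from its absorbance
sample_a595 = 0.225   # assumed reading
conc = (sample_a595 - intercept) / slope
print(f"Netrin-A ~ {conc:.1f} ug/ml")  # abstract reports 48.8 ug/ml
```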

Keywords: blowfly, BEVS, gene, immature insect, recombinant protein, Sf9

Procedia PDF Downloads 93
1818 Tailoring Piezoelectricity of PVDF Fibers with Voltage Polarity and Humidity in Electrospinning

Authors: Piotr K. Szewczyk, Arkadiusz Gradys, Sungkyun Kim, Luana Persano, Mateusz M. Marzec, Oleksander Kryshtal, Andrzej Bernasik, Sohini Kar-Narayan, Pawel Sajkiewicz, Urszula Stachewicz

Abstract:

Piezoelectric polymers have received great attention in smart textiles, wearables, and flexible electronics. Their potential applications range from devices that could operate without traditional power sources, through self-powering sensors, up to implantable biosensors. Semi-crystalline PVDF is often proposed as the main candidate for industrial-scale applications, as it exhibits exceptional energy harvesting efficiency compared to other polymers, combined with high mechanical strength and thermal stability. Many approaches have been proposed for obtaining PVDF rich in the desired β-phase, with electrical poling, thermal annealing, and mechanical stretching being the most prevalent. Electrospinning is a highly tunable technique that provides a one-step process for obtaining highly piezoelectric PVDF fibers without the need for post-treatment. In this study, the influence of voltage polarity and relative humidity on electrospun PVDF fibers was investigated, with the main focus on piezoelectric β-phase content and piezoelectric performance. The morphology and internal structure of the fibers were investigated using scanning (SEM) and transmission (TEM) electron microscopy. Fourier transform infrared spectroscopy (FTIR), wide-angle X-ray scattering (WAXS), and differential scanning calorimetry (DSC) were used to characterize the phase composition of the electrospun PVDF. Additionally, the surface chemistry was verified with X-ray photoelectron spectroscopy (XPS). The piezoelectric performance of individual electrospun PVDF fibers was measured using piezoresponse force microscopy (PFM), and the power output from meshes was analyzed via custom-built equipment. To prepare the solution for electrospinning, PVDF pellets were dissolved in a 1:1 dimethylacetamide and acetone mixture to achieve a 24% solution. Fibers were electrospun with a constant voltage of +/-15 kV applied to a stainless steel nozzle with an inner diameter of 0.8 mm, and the flow rate was kept constant at 6 ml h⁻¹. The electrospinning of PVDF was performed in an environmental chamber at T = 25°C and relative humidities of 30 and 60% for the PVDF30+/- and PVDF60+/- samples, respectively. The SEM and TEM analysis showed that fibers produced at the lower relative humidity of 30% (PVDF30+/-) had a smooth surface, in contrast to fibers obtained at 60% relative humidity (PVDF60+/-), which had wrinkled surfaces and internal voids. XPS results confirmed a lower fluorine content at the surface of PVDF- fibers obtained with negative voltage polarity compared to the PVDF+ fibers obtained with positive voltage polarity. The changes in surface composition measured with XPS were found to influence the piezoelectric performance of the obtained fibers, which was further confirmed by PFM as well as by a custom-built fiber-based piezoelectric generator. For the PVDF60+/- samples, humidity led to an increase of the β-phase content in the PVDF fibers, as confirmed by FTIR, WAXS, and DSC measurements, which showed almost two times higher concentrations of the β-phase. A combination of negative voltage polarity with high relative humidity led to the fibers with the highest β-phase content and the best piezoelectric performance of all investigated samples. This study outlines the possibility of producing electrospun PVDF fibers with tunable piezoelectric performance in a one-step electrospinning process by controlling relative humidity and voltage polarity.
Acknowledgment: This research was conducted within the funding from the Sonata Bis 5 project granted by the National Science Centre, No 2015/18/E/ST5/00230, and supported by the infrastructure at the International Centre of Electron Microscopy for Materials Science (IC-EM) at AGH University of Science and Technology. The PFM measurements were supported by an STSM Grant from COST Action CA17107.
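
The β-phase content determined from FTIR data is commonly estimated with the Beer-Lambert-based relation attributed to Gregorio and co-workers, using the absorbances of the α- and β-characteristic bands; the sketch below applies it to hypothetical readings (the absorbance values are illustrative, not the authors' data).

```python
# F(beta) = A_beta / ((K_beta/K_alpha) * A_alpha + A_beta), with
# K_beta/K_alpha ~ 1.26 for the 840 / 766 cm^-1 bands of PVDF.
def beta_fraction(a_alpha, a_beta, k_ratio=1.26):
    """Relative beta-phase fraction from band absorbances."""
    return a_beta / (k_ratio * a_alpha + a_beta)

# Hypothetical absorbance readings for the two humidity conditions
print(f"30% RH: F(beta) = {beta_fraction(0.42, 0.55):.2f}")
print(f"60% RH: F(beta) = {beta_fraction(0.20, 0.61):.2f}")
```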

Keywords: crystallinity, electrospinning, PVDF, voltage polarity

Procedia PDF Downloads 134
1817 Performance Management of Tangible Assets within the Balanced Scorecard and Interactive Business Decision Tools

Authors: Raymond K. Jonkers

Abstract:

The present study investigated approaches and techniques to enhance strategic management governance and decision making within the framework of a performance-based balanced scorecard. A review of best practices from strategic, program, process, and systems engineering management provided for a holistic approach toward effective outcome-based capability management. One technique, based on factorial experimental design methods, was used to develop an empirical model. This model predicts the degree of capability effectiveness as a function of controlled system input variables and their weightings. These variables represent business performance measures captured within a strategic balanced scorecard, and weighting them enhances the ability to quantify causal relationships within balanced scorecard strategy maps. The focus in this study was on the performance of tangible assets within the scorecard, rather than the traditional approach of assessing the performance of intangible assets such as knowledge and technology. Tangible assets are represented in this study as physical systems, which may be thought of as being aboard a ship or within a production facility. The measures assigned to these systems include project funding for upgrades against demand, system certifications achieved against those required, preventive-to-corrective maintenance ratios, and material support personnel capacity against that required for the respective systems. The resultant scorecard is viewed as complementary to the traditional balanced scorecard for program and performance management. The benefits from these scorecards are realized through the quantified state of operational capabilities or outcomes. These capabilities are also weighted in terms of priority for each distinct system measure, then aggregated and visualized in terms of the overall state of capabilities achieved. This study proposes the use of interactive controls within the scorecard as a technique to enhance the development of alternative solutions in decision making. These interactive controls include those for assigning capability priorities and for adjusting system performance measures, thus providing for what-if scenarios and options in strategic decision-making. In this holistic approach to capability management, several cross-functional processes were highlighted as relevant across the different management disciplines. In terms of assessing an organization's ability to adopt this approach, consideration was given to the P3M3 management maturity model.
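
As a minimal sketch of the aggregation described above, the snippet below combines normalized tangible-asset measures into a single capability-effectiveness score using assumed priority weights; the measure names and weightings are illustrative, not the study's model. A what-if scenario then amounts to editing a weight or a measure and recomputing.

```python
import numpy as np

# Hypothetical tangible-asset measures, each normalized to [0, 1]
measures = {
    "upgrade_funding_vs_demand": 0.70,
    "certifications_achieved":   0.90,
    "pm_to_cm_ratio":            0.60,
    "support_capacity_vs_reqd":  0.80,
}
weights = np.array([0.35, 0.15, 0.30, 0.20])  # assumed priorities, sum to 1

# Weighted aggregate: the quantified state of capability effectiveness
capability = np.dot(weights, list(measures.values()))
print(f"Aggregated capability effectiveness: {capability:.2f}")
```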

Keywords: management, systems, performance, scorecard

Procedia PDF Downloads 322
1816 The Critical Relevance of Credit and Debt Data in Household Food Security Analysis: The Risks of Ineffective Response Actions

Authors: Siddharth Krishnaswamy

Abstract:

Problem Statement: Currently, when analyzing household food security, the most commonly studied food access indicators are household income and expenditure. Larger studies do take into account other indices such as credit and employment, but these are baseline studies and by definition are conducted infrequently. Food security analysis of access is usually dedicated to analyzing income and expenditure indicators, and both of these indicators are notoriously inconsistent. Yet this data can very often end up being the basis on which household food access is calculated and, by extension, used for decision making. Objectives: This paper argues that along with income and expenditure, credit and debt information should be collected so that an accurate analysis of household food security, and in particular food access, can be determined. Because this information is not routinely collected and analyzed, the actual situation is often 'masked': a household's food access and food availability patterns may appear adequate mainly as a result of borrowing and may even reflect a long-term dependency (a debt cycle). In other words, such a household is, in reality, worse off than it appears, a fact masked by its performance on basic access indicators. Procedures/methodologies/approaches: Existing food security data sets collected in 2005 in Azerbaijan, in 2010 across Myanmar, and in 2014-15 across Uganda were used to support the theory that analyzing the income and expenditure of households together with data on credit and borrowing patterns yields an entirely different picture of household food access than analyzing income and expenditure alone. Furthermore, the data analyzed depict food consumption patterns across groups of households and relate these to the extent of dependency on credit, i.e., households borrowing money in order to meet food needs. Finally, response options based on analyzing only income and expenditure, and response options based on income, expenditure, credit, and borrowing, from the same geographical area of operation, are studied and discussed. Results: The purpose of this work was to see if existing methods of household food security analysis could be improved. It is hoped that food security analysts will collect household-level information on credit and debt and analyze it against income, expenditure, and consumption patterns. This will help determine whether a household's food access and availability depend on unsustainable strategies such as borrowing money for food or carrying sustained debts. Conclusions: The results clearly show how much relevant information is missing from food access analysis if household debt and borrowing are not analyzed along with the typical food access indicators, and the serious repercussions this has on programmatic responses and interventions.
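
A minimal sketch of the adjustment the paper argues for: the snippet below contrasts a raw expenditure-based access indicator with one that strips out credit-financed food spending, flagging households whose apparent access is masked by borrowing; all field names and values are hypothetical.

```python
import pandas as pd

# Hypothetical household records: monthly food expenditure and the portion
# of it financed by borrowing (field names are illustrative)
df = pd.DataFrame({
    "hh_id":         [1, 2, 3, 4],
    "food_exp":      [120, 95, 150, 80],   # reported food expenditure
    "food_borrowed": [  0, 40,  90,  5],   # paid with borrowed money
})

# Unadjusted indicator treats all expenditure as own resources
df["access_raw"] = df["food_exp"]
# Adjusted indicator strips out credit-financed spending
df["access_adj"] = df["food_exp"] - df["food_borrowed"]

# Flag households whose access drops sharply once borrowing is removed
masked = df[df["access_adj"] < 0.7 * df["access_raw"]]
print(masked[["hh_id", "access_raw", "access_adj"]])  # debt-masked HHs
```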

Keywords: analysis, food security indicators, response, resilience analysis

Procedia PDF Downloads 331