Search results for: transport parameters
1379 A Review: Role of Chromium in Broiler
Authors: Naveed Zahra, Zahid Kamran, Shakeel Ahmad
Abstract:
Heat stress is one of the most important environmental stressors challenging poultry production worldwide. Its detrimental effects reduce the productive performance of poultry and cause high incidences of mortality. Researchers have made efforts to prevent such damage to poultry production through dietary manipulation. Supplementation with chromium (Cr) might have positive effects on some aspects of blood parameters and broiler performance. Chromium, whose trivalent organic form Cr(III) is present in trace amounts in animal feed and water, has been found to be a key element in evading heat stress and thus cutting down the heavy expenditure on air conditioning in broiler sheds. Chromium, along with other essential minerals, is lost through increased excretion during heat stress, so its inclusion in broiler diets is practically mandatory in areas with hot climates. Chromium picolinate in broiler diets has been shown to increase growth rate, including muscle gain, with body fat reduction under environmental stress. Fat reduction is probably linked to the ability of chromium to increase the sensitivity of insulin receptors on tissues: the uptake of sugar from blood increases, which decreases the amount of glucose converted to fatty acids and stored in adipose tissue as triglycerides. Organic chromium has also been shown to increase lymphocyte proliferation rate and antioxidant levels. Thus, improved immune competency, muscle gain, and fat reduction, along with evasion of heat stress, are good indications for the fruitful inclusion of dietary chromium for broilers. This promising element may bring the much-needed break to the local poultry industry. The task now is to set the exact dose of the element in the diet that is useful yet not toxic to broilers. In conclusion, there is a growing body of evidence which suggests that chromium may be an essential trace element for livestock and poultry.
The nutritional requirement for chromium may vary with different species and physiological state within a species.
Keywords: broiler, chromium, heat stress, performance
Procedia PDF Downloads 290
1378 Study of the Hysteretic I-V Characteristics in a Polystyrene/ZnO-Nanorods Stack Layer
Authors: You-Lin Wu, Yi-Hsing Sung, Shih-Hung Lin, Jing-Jenn Lin
Abstract:
Performance improvement in optoelectronic devices such as solar cells and photodetectors has been reported when a polymer/ZnO nanorods stack is used. Resistance switching of polymer/ZnO nanocrystal (or nanorod) hybrids has also gained a lot of research interest recently. It has been reported that the high- and low-resistance states of a metal/insulator/metal (MIM) diode with a polystyrene (PS)/ZnO hybrid as the insulator layer can be switched by an applied bias after a high-voltage forming process, while the same device structure with merely a PS layer does not show any forming behavior. In this work, we investigated the current-voltage (I-V) characteristics of an MIM device with a PS/ZnO nanorods stack deposited on a fluorine-doped tin oxide (FTO) glass substrate. The ZnO nanorods were grown by a hydrothermal method using a mixture of zinc nitrate, hexamethylenetetramine, and DI water. Following that, a PS layer was deposited by spin coating. Finally, the device, with a structure of Ti/PS/ZnO nanorods/FTO, was completed by an e-gun evaporated Ti layer on top of the PS layer. An Agilent 4156C semiconductor parameter analyzer was then used to measure the I-V characteristics of the device by applying a linear ramp sweep voltage with a sweep sequence of 0V → 4V → 0V → 3V → 0V → 2V → 0V → 1V → 0V in both positive and negative directions. It is interesting to find that the I-V characteristics are bias dependent and hysteretic, indicating that the device with the Ti/PS/ZnO nanorods/FTO structure exhibits ferroelectricity. Our results also show that the maximum hysteresis loop height of the I-V characteristics, as well as the voltage at which the maximum hysteresis loop height of each scan occurs, increases with increasing maximum sweep voltage. It should be noted that, although ferroelectricity has been found in ZnO at its melting temperature (1975 °C) and in Li- or Co-doped ZnO, neither PS nor ZnO alone has ferroelectricity at room temperature.
Using the same structure but with a PS or ZnO layer only as the insulator does not give any hysteretic I-V characteristics. It is believed that a charge polarization layer is induced near the PS/ZnO nanorods stack interface, causing the ferroelectricity in the device with the Ti/PS/ZnO nanorods/FTO structure. Our results show that the PS/ZnO stack can find a potential application in a resistive switching memory device with an MIM structure.
Keywords: ferroelectricity, hysteresis, polystyrene, resistance switching, ZnO nanorods
Procedia PDF Downloads 312
1377 Determination of the Volatile Organic Compounds, Antioxidant and Antimicrobial Properties of Microwave-Assisted Green Extracted Ficus Carica Linn Leaves
Authors: Pelin Yilmaz, Gizemnur Yildiz Uysal, Elcin Demirhan, Belma Ozbek
Abstract:
The edible fig plant, Ficus carica Linn, belongs to the Moraceae family, and the leaves are mainly considered agricultural waste after harvesting. It has been demonstrated in the literature that fig leaves contain appealing constituents such as high levels of vitamins, fiber, amino acids, organic acids, and phenolic or flavonoid content. The extraction of these valuable products has gained importance. Microwave-assisted extraction (MAE) is a method that uses microwave energy to heat the solvent, thereby transferring the bioactive compounds from the sample to the solvent. The main advantage of MAE is the rapid extraction of bioactive compounds. In the present study, MAE was applied to extract the bioactive compounds from Ficus carica L. leaves, and the effects of microwave power (180-900 W), extraction time (60-180 s), and solvent-to-sample ratio (10-30 mL/g) on the antioxidant properties of the leaves were investigated. The volatile organic component profile was then determined at the specified extraction point. Additionally, antimicrobial studies were carried out to determine the minimum inhibitory concentration of the microwave-extracted leaves. As a result, according to the data obtained from the experimental studies, the highest antimicrobial properties were obtained under process parameters of 540 W, 180 s, and a 20 mL/g ratio. The volatile organic compound profile showed that isobergapten, which belongs to the furanocoumarin family exhibiting anticancer, antioxidant, and antimicrobial activity besides promoting bone health, was the main compound. Acknowledgments: This work has been supported by Yildiz Technical University Scientific Research Projects Coordination Unit under project number FBA-2021-4409. The authors would like to acknowledge the financial support from Tubitak 1515 - Frontier R&D Laboratory Support Programme.
Keywords: Ficus carica Linn leaves, volatile organic component, GC-MS, microwave extraction, isobergapten, antimicrobial
Procedia PDF Downloads 82
1376 Effects of Sintering Temperature on Microstructure and Mechanical Properties of Nanostructured Ni-17Cr Alloy
Authors: B. J. Babalola, M. B. Shongwe
Abstract:
The Spark Plasma Sintering technique is a novel processing method that produces limited grain growth and a highly dense variety of materials: alloys, superalloys, and carbides, to mention a few. However, initial particle size and spark plasma sintering parameters are factors that influence the grain growth and mechanical properties of sintered materials. Ni-Cr alloys are regarded as among the most promising alloys for aerospace turbine blades, owing to the fact that they meet the basic requirements of desirable mechanical strength at high temperatures and good resistance to oxidation. The conventional method of producing this alloy often results in excessive grain growth and porosity levels that are detrimental to its mechanical properties. The effect of sintering temperature on the microstructure and mechanical properties of a nanostructured Ni-17Cr alloy was therefore evaluated. Nickel and chromium powders were milled independently by high-energy ball milling for 30 hours at a milling speed of 400 rev/min and a ball-to-powder ratio (BPR) of 10:1. The milled powders were mixed to a composition of 83 wt% nickel and 17 wt% chromium. This mixture was sintered at temperatures of 800°C, 900°C, 1000°C, 1100°C, and 1200°C. Structural characteristics such as porosity, grain size, and fracture surface, together with hardness, were analyzed by scanning electron microscopy, X-ray diffraction, Archimedes densitometry, and micro-hardness testing. The results indicated an increase in the densification and hardness of the alloy as the temperature increased. The residual porosity of the alloy decreased with sintering temperature while, in contrast, the grain size increased. The study of the mechanical properties, including hardness and densification, shows that optimum properties were obtained at a sintering temperature of 1100°C.
The advantages of high sinterability of Ni-17Cr alloy using milled powders and microstructural details were discussed.
Keywords: densification, grain growth, milling, nanostructured materials, sintering temperature
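The densification numbers reported above come from Archimedes densitometry; a hedged sketch of that calculation is shown below. The Archimedes relation and the rule-of-mixtures theoretical density are standard, but the sample masses are invented for illustration and are not the study's measurements.

```python
# Hedged sketch: relative density and residual porosity of a sintered
# Ni-17Cr sample from Archimedes densitometry. Masses are illustrative.

def archimedes_density(m_dry_g, m_submerged_g, rho_fluid=0.9982):
    """Bulk density (g/cm^3) from dry and water-submerged masses (~20 C)."""
    return m_dry_g * rho_fluid / (m_dry_g - m_submerged_g)

def rule_of_mixtures_density(weight_fractions, densities):
    """Theoretical alloy density from component weight fractions."""
    return 1.0 / sum(w / rho for w, rho in zip(weight_fractions, densities))

# Ni-17Cr: 83 wt% Ni (8.908 g/cm^3), 17 wt% Cr (7.19 g/cm^3)
rho_theory = rule_of_mixtures_density([0.83, 0.17], [8.908, 7.19])
rho_bulk = archimedes_density(m_dry_g=10.00, m_submerged_g=8.80)  # invented
relative_density = rho_bulk / rho_theory
porosity = 1.0 - relative_density
print(f"theoretical {rho_theory:.2f} g/cm^3, relative density {relative_density:.1%}")
```

Residual porosity falling with sintering temperature, as the abstract reports, corresponds to `relative_density` approaching 1.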
Procedia PDF Downloads 403
1375 Realizing Teleportation Using Black-White Hole Capsule Constructed by Space-Time Microstrip Circuit Control
Authors: Mapatsakon Sarapat, Mongkol Ketwongsa, Somchat Sonasang, Preecha Yupapin
Abstract:
Preliminary tests have been designed and performed on a space-time control circuit using a two-level system circuit with a 4-5 cm diameter microstrip for realistic teleportation. The work begins by calculating the parameters that allow a circuit to use alternating current (AC) at a specified frequency as the input signal. A method that causes electrons to move along the circuit perimeter starting at the speed of light was found satisfactory on the basis of wave-particle duality. It is able to establish superluminal speed (faster than light) for the electron cloud in the middle of the circuit, creating a timeline and a propulsive force as well. The timeline is formed by the cancellation of time stretching and shrinking in the relativistic regime, in which absolute time has vanished. In fact, both black holes and white holes are created from time signals at the beginning, where the electrons travel close to the speed of light. They entangle together like a capsule until they reach the point where they collapse and cancel each other out, which is controlled by the frequency of the circuit. Therefore, this method can be applied to large-scale circuits such as potassium, from which the same approach can be applied to form a system to teleport living things. In fact, the black hole is a hibernation-system environment that allows living things to live and travel to the teleportation destination, which can be controlled in position and time relative to the speed of light. When the capsule reaches its destination, increasing the frequency makes the black holes and white holes cancel each other out into a balanced environment. Therefore, life can safely teleport to the destination. The same system must therefore exist at both the origin and the destination, which could form a network. Moreover, it can also be applied to space travel.
The designed system will be tested on a small scale using a microstrip circuit that we can create in the laboratory on a limited budget and that can be used in both wired and wireless systems.
Keywords: quantum teleportation, black-white hole, time, timeline, relativistic electronics
Procedia PDF Downloads 76
1374 Comparison of the Postoperative Analgesic Effects of Morphine, Paracetamol, and Ketorolac in Patient-Controlled Analgesia in the Patients Undergoing Open Cholecystectomy
Authors: Siamak Yaghoubi, Vahideh Rashtchi, Marzieh Khezri, Hamid Kayalha, Monadi Hamidfar
Abstract:
Background and objectives: Effective postoperative pain management in abdominal surgeries, which are painful procedures, plays an important role in reducing postoperative complications and increasing patient satisfaction. There are many techniques for pain control, one of which is Patient-Controlled Analgesia (PCA). The aim of this study was to compare the analgesic effects of morphine, paracetamol, and ketorolac in patients undergoing open cholecystectomy using the PCA method. Material and Methods: This randomized controlled trial was performed on 330 ASA (American Society of Anesthesiologists) I-II patients (three equal groups, n=110) who were scheduled for elective open cholecystectomy in Shahid Rajaee Hospital of Qazvin, Iran from August 2013 until September 2015. All patients were managed by general anesthesia with the TIVA (Total Intravenous Anesthesia) technique. The control group received morphine with a maximum dose of 0.02 mg/kg/h, the paracetamol group received paracetamol with a maximum dose of 1 mg/kg/h, and the ketorolac group received ketorolac with a maximum daily dose of 60 mg, all using the IV-PCA method. Pain score, nausea, hemodynamic variables (BP and HR), pruritus, arterial oxygen desaturation, and patient satisfaction were measured every two hours for 8 hours following the operation in all groups. Results: There were no significant differences in demographic data between the three groups. There was a statistically significant difference in the mean pain score at all times between the morphine and paracetamol, morphine and ketorolac, and paracetamol and ketorolac groups (P<0.001). Results indicated a reduction with time in the mean level of postoperative pain in all three groups. At all times, the mean level of pain in the ketorolac group was less than that in the other two groups (p<0.001).
Conclusion: According to the results of this study, ketorolac is more effective than morphine and paracetamol for postoperative pain control in patients undergoing open cholecystectomy using the PCA method.
Keywords: analgesia, cholecystectomy, ketorolac, morphine, paracetamol
Procedia PDF Downloads 198
1373 Comparison of Stereotactic Body Radiation Therapy Virtual Treatment Plans Obtained With Different Collimators in the Cyberknife System in Partial Breast Irradiation: A Retrospective Study
Authors: Öznur Saribaş, Sibel Kahraman Çetintaş
Abstract:
This study aimed to compare target volume and critical organ doses using CyberKnife (CK) in accelerated partial breast irradiation (APBI) in patients with early-stage breast cancer. Three different virtual plans were made, with the Iris, fixed, and multi-leaf (MLC) collimators, for 5 patients who received radiotherapy in the CyberKnife system. CyberKnife virtual plans were created with 6 Gy per day, totaling 30 Gy. Dosimetric parameters for the three collimators were analyzed according to the restrictions in the NSABP-39/RTOG 0413 protocol. The plans ensured that critical organs were protected and that the GTV received 95% of the prescribed dose. The prescribed dose was defined by a minimum 80% isodose curve. Homogeneity index (HI), conformity index (CI), treatment time (min), monitor units (MU), and doses received by critical organs were compared. Comparison of the plans showed a significant difference in treatment time and MU, but no significant difference in HI or CI. The V30 and V15 values of the ipsilateral breast were lowest with the MLC. There was no significant difference between Dmax values for the lung and heart; however, the mean MU and treatment time were lowest with the MLC. As a result, the target volume received the desired dose with each collimator. The contralateral breast and contralateral lung doses were lowest with the Iris, while the fixed collimator was found to be more suitable for cardiac doses, although these values did not differ significantly. The use of fixed collimators may cause difficulties in clinical applications due to the long treatment time. The choice of collimator in breast SBRT applications with CyberKnife may therefore vary depending on tumor size, proximity to critical organs, and tumor localization.
Keywords: APBI, CyberKnife, early stage breast cancer, radiotherapy
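For reference, one common (RTOG-style) formulation of the two plan-quality indices compared above is shown below; the abstract does not state which convention was used, so this formulation is an assumption.

```latex
% Hedged: RTOG-style definitions; other conventions (e.g. the Paddick CI) exist.
\[
\mathrm{HI} = \frac{D_{\max}}{D_{\mathrm{prescription}}},
\qquad
\mathrm{CI} = \frac{V_{\mathrm{RI}}}{\mathrm{TV}},
\]
% where $D_{\max}$ is the maximum plan dose, $V_{\mathrm{RI}}$ is the volume
% enclosed by the prescription isodose surface, and $\mathrm{TV}$ is the
% target volume; values of both indices near 1 indicate a homogeneous,
% conformal plan.
```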
Procedia PDF Downloads 121
1372 Electronics Thermal Management Driven Design of an IP65-Rated Motor Inverter
Authors: Sachin Kamble, Raghothama Anekal, Shivakumar Bhavi
Abstract:
Thermal management of electronic components packaged inside an IP65-rated enclosure is of prime importance in industrial applications. The electrical enclosure protects multiple board configurations, such as inverter, power, and controller board components, busbars, and various power-dissipating components, from harsh environments. Industrial environments often experience relatively warm ambient conditions, and the electronic components housed in the enclosure dissipate heat, so the enclosure and components require thermal management as well as reduction of the internal ambient temperature. A Design of Experiments based thermal simulation approach considering MOSFET arrangement, heat sink design, enclosure volume, copper and aluminum spreaders, power density, and printed circuit board (PCB) type was used to optimize the air temperature inside the IP65 enclosure and ensure a conducive operating temperature for the controller board and electronic components through the different modes of heat transfer, viz. conduction, natural convection, and radiation, using Ansys ICEPAK. MOSFETs in a parallel arrangement, an enclosure-molded heat sink with rectangular fins on both enclosures, a specific enclosure volume to satisfy the power density, a copper spreader to conduct heat to the enclosure, an optimized power density value, and selection of an aluminum-clad PCB, which improves heat transfer, were the contributors to achieving a conducive operating temperature inside the IP65-rated motor inverter enclosure. A reduction of 52 °C in internal ambient temperature was achieved inside the IP65 enclosure between the baseline and final design parameters, which met the operating temperature requirements of the electronic components inside the IP65-rated motor inverter.
Keywords: Ansys ICEPAK, aluminium clad PCB, IP 65 enclosure, motor inverter, thermal simulation
Procedia PDF Downloads 125
1371 Combined Tarsal Coalition Resection and Arthroereisis in Treatment of Symptomatic Rigid Flat Foot in Pediatric Population
Authors: Michael Zaidman, Naum Simanovsky
Abstract:
Introduction. Symptomatic tarsal coalition with rigid flat foot often demands an operative solution. An isolated coalition resection does not guarantee pain relief; correction of a co-existing foot deformity may be required. The objective of the study was to analyze the results of combining tarsal coalition resection and arthroereisis. Patients and methods. We retrospectively reviewed the medical records and radiographs of children operatively treated in our institution for symptomatic calcaneonavicular or talocalcaneal coalition between the years 2019 and 2022. Eight patients (twelve feet), 4 boys and 4 girls with a mean age of 11.2 years, were included in the study. In six patients (10 feet), calcaneonavicular coalition was diagnosed; two patients (two feet) had talocalcaneal coalition. To quantify the degree of foot deformity, we used the calcaneal pitch angle, lateral talar-first metatarsal (Meary's) angle, and talonavicular coverage angle. The clinical results were assessed using the American Orthopaedic Foot and Ankle Society (AOFAS) Ankle Hindfoot Score. Results. The mean follow-up was 28 months. The preoperative mean talonavicular coverage angle was 17.75° compared with a postoperative mean angle of 5.4°. The calcaneal pitch angle improved from a mean of 6.8° to 16.4°. The mean preoperative Meary's angle of -11.3° improved to a mean of 2.8°. The mean AOFAS score improved from 54.7 preoperatively to 93.1 points postoperatively. In nine of twelve feet, the overall clinical outcome judged by the AOFAS scale was excellent (90-100 points); in three feet, it was good (80-90 points). Six patients (ten feet) clearly improved their subtalar range of motion. Conclusion.
For symptomatic stiff or rigid flat feet associated with tarsal coalition, the combination of coalition resection and arthroereisis leads to normalization of radiographic parameters and clinical and functional improvement with good patient satisfaction, and is likely to be more effective than the isolated procedures.
Keywords: rigid flat foot, tarsal coalition resection, arthroereisis, outcome
Procedia PDF Downloads 65
1370 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies: it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (e.g., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to reduce the inaccuracies, weaknesses, and biases of any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
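The select-the-top-three-then-average idea described above can be sketched as follows. The toy "models" and validation data are invented stand-ins, not the study's actual classifiers or datasets.

```python
# Hedged sketch: rank candidate models on held-out data, keep the best k,
# and average their predicted probabilities (soft voting). All models and
# data below are illustrative placeholders.

def ensemble_predict(models, x):
    """Average the class-probability predictions of the selected models."""
    return sum(m(x) for m in models) / len(models)

def select_top_k(models, validation, k=3):
    """Rank candidate models by validation accuracy and keep the best k."""
    def accuracy(m):
        return sum((m(x) >= 0.5) == y for x, y in validation) / len(validation)
    return sorted(models, key=accuracy, reverse=True)[:k]

# Toy stand-ins for logistic regression / random forest / neural net outputs:
candidates = [lambda x: 0.9 if x > 5 else 0.2,   # "model A"
              lambda x: 0.8 if x > 4 else 0.1,   # "model B"
              lambda x: 0.6 if x > 6 else 0.4,   # "model C"
              lambda x: 0.5]                     # "model D" (uninformative)
validation = [(3, 0), (7, 1), (8, 1), (2, 0)]    # (feature, poor-air label)

top3 = select_top_k(candidates, validation, k=3)
print(ensemble_predict(top3, 7))
```

Averaging probabilities rather than hard labels is one common way to realize the "eliminate individual model biases" goal the abstract describes; periodic re-ranking on fresh data would implement the self-adjustment step.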
Procedia PDF Downloads 130
1369 Examining Statistical Monitoring Approach against Traditional Monitoring Techniques in Detecting Data Anomalies during Conduct of Clinical Trials
Authors: Sheikh Omar Sillah
Abstract:
Introduction: Monitoring is an important means of ensuring the smooth implementation and quality of clinical trials. For many years, traditional site monitoring approaches have been critical in detecting data errors but not optimal in identifying fabricated and implanted data, or non-random data distributions, which may significantly invalidate study results. The objective of this paper was to provide recommendations, based on best statistical monitoring practices, for detecting data-integrity issues suggestive of fabrication and implantation early in study conduct, to allow implementation of meaningful corrective and preventive actions. Methodology: Electronic bibliographic databases (Medline, Embase, PubMed, Scopus, and Web of Science) were used for the literature search, and both qualitative and quantitative studies were sought. Search results were uploaded into the EPPI-Reviewer software, and only publications written in English from 2012 onward were included in the review. Gray literature not considered to present reproducible methods was excluded. Results: A total of 18 peer-reviewed publications were included in the review. The publications demonstrated that traditional site monitoring techniques are not efficient in detecting data anomalies. By specifying project-specific parameters such as laboratory reference-range values, visit schedules, etc., with appropriate interactive data monitoring, statistical monitoring can offer study teams early signals of data anomalies. The review further revealed that statistical monitoring is useful for identifying unusual data patterns that might reveal issues affecting data integrity or potentially impacting study participants' safety. However, subjective measures may not be good candidates for statistical monitoring.
Conclusion: The statistical monitoring approach requires a combination of education, training, and experience sufficient to implement its principles in detecting data anomalies for the statistical aspects of a clinical trial.
Keywords: statistical monitoring, data anomalies, clinical trials, traditional monitoring
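One elementary example of the kind of centralized statistical check the review discusses is flagging sites whose aggregated values deviate strongly from the all-site distribution. The site data and threshold below are invented for illustration; real statistical monitoring uses far richer methods.

```python
# Hedged illustration: flag sites whose mean reported value (e.g. a lab
# parameter) lies far from the overall site mean. With only a handful of
# sites, attainable z-scores are modest, hence the ~1.5 threshold here.
import statistics

def flag_outlier_sites(site_means, z_threshold=1.5):
    """Return site ids whose mean is > z_threshold SDs from the overall mean."""
    values = list(site_means.values())
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [site for site, v in site_means.items()
            if sd > 0 and abs(v - mu) / sd > z_threshold]

site_means = {"site01": 7.1, "site02": 7.3, "site03": 7.0,
              "site04": 7.2, "site05": 9.8}   # site05 looks atypical
print(flag_outlier_sites(site_means))
```

In practice such flags are only signals for follow-up (source-data review, site audits), not proof of fabrication, which matches the review's framing of statistical monitoring as an early-warning tool.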
Procedia PDF Downloads 82
1368 Adsorptive Removal of Cd(II) Ions from Aqueous Systems by Wood Ash-Alginate Composite Beads
Authors: Tichaona Nharingo, Hope Tauya, Mambo Moyo
Abstract:
Wood ash has been demonstrated to have a favourable adsorption capacity for heavy metal ions but suffers from being difficult to separate and isolate from batch adsorption systems. Fabricating wood ash beads with alginate, a multifunctional-group, non-toxic carbohydrate, may improve the applicability of wood ash in environmental pollutant remediation. In this work, alginate-wood ash beads (AWAB) were fabricated and applied to the removal of cadmium ions from aqueous systems. The beads were characterized by FTIR, TGA/DSC, SEM-EDX, and their pHZPC before and after the adsorption of Cd(II) ions. Important adsorption parameters, i.e., pH, AWAB dosage, contact time, and ionic strength, were optimized, and the effect of the initial concentration of Cd(II) ions on the adsorption process was established. Adsorption kinetics, adsorption isotherms, the adsorption mechanism, and the application of AWAB to real water samples spiked with Cd(II) ions were ascertained. The composite adsorbent was characterized by a heterogeneous macroporous surface comprising metal oxides and multiple hydroxyl and carbonyl groups that were involved in electrostatic and Lewis acid-base interactions with the Cd(II) ions. The pseudo-second-order and Freundlich isotherm models best fitted the adsorption kinetics and isotherm data, respectively, suggesting a chemisorption process and surface heterogeneity. The presence of Pb(II) ions inhibited the adsorption of Cd(II) ions (reduced by 40%), attributed to competition for the adsorption sites. The Cd(II)-loaded beads could be regenerated using 0.1 M HCl and applied to four sorption-desorption cycles without significant loss of the initial adsorption capacity. The high maximum adsorption capacity, stability, selectivity, and reusability of AWAB make the adsorbent ideal for the removal of Cd(II) ions from real water samples.
Column-type adsorption experiments need to be explored to establish the potential of the adsorbent for removing Cd(II) ions in continuous-flow systems.
Keywords: adsorption, Cd(II) ions, regeneration, wastewater, wood ash-alginate beads
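The two linearized model fits the abstract reports as best-fitting, pseudo-second-order kinetics and the Freundlich isotherm, can be sketched as follows. All data values below are illustrative placeholders, not the study's measurements.

```python
# Minimal sketch: linearized pseudo-second-order kinetics and Freundlich
# isotherm fits. All numeric data are hypothetical, not the study's.
import math

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return b, my - b * mx

# Pseudo-second-order: t/qt = 1/(k2*qe^2) + t/qe, linear in t.
t = [5, 10, 20, 40, 60, 90]               # contact time, min (hypothetical)
qt = [3.1, 4.8, 6.5, 7.8, 8.3, 8.7]       # Cd(II) uptake, mg/g (hypothetical)
slope, icept = fit_line(t, [ti / qi for ti, qi in zip(t, qt)])
qe = 1 / slope                            # equilibrium capacity, mg/g
k2 = slope ** 2 / icept                   # rate constant, g/(mg*min)

# Freundlich: log q = log KF + (1/n)*log Ce, linear in log Ce.
Ce = [2.0, 5.0, 10.0, 20.0, 50.0]         # equilibrium conc., mg/L (hypothetical)
q = [4.0, 6.3, 8.1, 10.4, 14.9]           # capacity, mg/g (hypothetical)
inv_n, logKF = fit_line([math.log10(c) for c in Ce],
                        [math.log10(v) for v in q])
KF = 10 ** logKF
print(f"qe={qe:.2f} mg/g  k2={k2:.4f}  KF={KF:.2f}  1/n={inv_n:.2f}")
```

The slope and intercept of t/qt versus t give qe and k2; the log-log slope gives the Freundlich heterogeneity exponent 1/n (values between 0 and 1 indicate favourable adsorption).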
Procedia PDF Downloads 247
1367 Prioritizing Roads Safety Based on the Quasi-Induced Exposure Method and Utilization of the Analytical Hierarchy Process
Authors: Hamed Nafar, Sajad Rezaei, Hamid Behbahani
Abstract:
Safety analysis of roads through accident rates, one of the most widely used tools, derives from the direct exposure method, which is based on vehicle-kilometers traveled and vehicle-travel time. However, due to fundamental flaws in its theory, difficulties in accessing the required data (such as traffic volume and the distance and duration of trips), and various problems in determining exposure for specific time, place, and individual categories, an algorithm for prioritizing road safety is needed whose new exposure measure resolves the problems of the previous approaches. Applied efficiently, such a method may yield more realistic comparisons and be applicable to a wider range of time, place, and individual categories. Therefore, an algorithm was introduced to prioritize the safety of roads using the quasi-induced exposure method and the analytical hierarchy process. For this research, 11 provinces of Iran were chosen as case study locations. A rural accidents database was created for these provinces, the validity of the quasi-induced exposure method for Iran's accidents database was explored, and the involvement ratio for different characteristics of the drivers and the vehicles was measured. Results showed that the quasi-induced exposure method was valid in determining the real exposure in the provinces under study. Results also showed a significant difference between prioritization based on the new and the traditional approaches. This difference mostly stems from the way the quasi-induced exposure method determines exposure, the opinion of experts, and the quantity of accident data.
Overall, the results of this research showed that prioritization based on the new approach is more comprehensive and reliable than prioritization based on the traditional approach, which depends on various parameters including driver-vehicle characteristics.
Keywords: road safety, prioritizing, quasi-induced exposure, analytical hierarchy process
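The analytical hierarchy process step the algorithm relies on can be sketched as follows: deriving criterion weights from a pairwise comparison matrix (geometric-mean approximation of the principal eigenvector) and checking consistency. The 3x3 matrix and criterion names are hypothetical, not the study's.

```python
# Minimal AHP sketch: geometric-mean priority weights plus consistency ratio.
# The pairwise comparison matrix is illustrative, not the study's data.
import math

A = [[1.0, 3.0, 5.0],      # e.g. driver factors vs vehicle factors vs road class
     [1/3, 1.0, 2.0],      # (hypothetical criteria and judgments)
     [1/5, 1/2, 1.0]]

# Geometric mean of each row, normalized, approximates the principal eigenvector.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(gm) for g in gm]

# Consistency: lambda_max from A.w, CI = (lmax - n)/(n - 1), RI(n=3) = 0.58.
n = len(A)
Aw = [sum(a * w for a, w in zip(row, weights)) for row in A]
lmax = sum(awi / wi for awi, wi in zip(Aw, weights)) / n
CR = ((lmax - n) / (n - 1)) / 0.58
print([round(w, 3) for w in weights], round(CR, 4))
```

Weights sum to one; a consistency ratio below 0.1 is conventionally accepted before the weights are used for ranking.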
Procedia PDF Downloads 340
1366 Designed Purine Molecules and in-silico Evaluation of Aurora Kinase Inhibition in Breast Cancer
Authors: Pooja Kumari, Anandkumar Tengli
Abstract:
Aurora kinase is an enzyme whose overexpression leads to metastasis, making it extremely important for women's health in terms of prevention and treatment. In creating a targeted technique, the aim of this work is to design purine molecules that inhibit the aurora kinase enzyme and thereby help suppress breast cancer. Purine molecules attached to an amino acid in DNA block protein synthesis or halt the replication and metastasis caused by the aurora kinase enzyme. Various proteins related to the overexpression of aurora protein were docked with the purine molecules using the Biovia Drug Discovery software. Various parameters, such as the X-ray crystallographic structure, presence of a ligand, Ramachandran plot, and resolution, were taken into consideration when selecting the target protein. Molecules with higher negative binding scores were taken forward for simulation studies. According to the available research and computational analyses, purine compounds may be powerful enough to demonstrate a greater affinity for the aurora target. Although clinically effective now, purines were originally intended to fight breast cancer by inhibiting the aurora kinase enzyme. In the in-silico studies, purine compounds showed moderate to high potency compared to other molecules, and our survey of the literature revealed that purine molecules carry a lower risk of side effects. The research involves the design, synthesis, and identification of active purine molecules against breast cancer. Purines are structurally similar to the normal metabolites adenine and guanine and hence interfere/compete with protein synthesis and suppress the abnormal proliferation of cells/tissues.
As a result, purines target metastatic cells and halt kinase activity; purine derivatives bind DNA and aurora protein, which may arrest protein synthesis, inhibit replication, and stop the metastasis driven by the overexpressed aurora kinase enzyme.
Keywords: aurora kinases, in silico studies, medicinal chemistry, combination therapies, chronic cancer, clinical translation
Procedia PDF Downloads 87
1365 Assessing Future Offshore Wind Farms in the Gulf of Roses: Insights from Weather Research and Forecasting Model Version 4.2
Authors: Kurias George, Ildefonso Cuesta Romeo, Clara Salueña Pérez, Jordi Sole Olle
Abstract:
With the growing prevalence of wind energy, there is a need for modeling techniques to evaluate the impact of wind farms on meteorology and oceanography. This study presents an approach that utilizes the WRF (Weather Research and Forecasting) model with a Wind Farm Parametrization to simulate the dynamics around the Parc Tramuntana project, an offshore wind farm to be located near the Gulf of Roses off the coast of Barcelona, Catalonia. The model incorporates parameterizations for wind turbines, enabling a representation of the wind field and how it interacts with the infrastructure of the wind farm. Current results demonstrate that the model effectively captures variations in temperature, pressure, and wind speed and direction over time, along with their resulting effects on the power output of the wind farm. These findings are crucial for optimizing turbine placement and operation, thus improving the efficiency and sustainability of the wind farm. In addition to atmospheric interactions, this study delves into the wake effects between the turbines in the farm. A range of meteorological parameters was also considered to offer a comprehensive understanding of the farm's microclimate. The model was tested under different horizontal resolutions and farm layouts to scrutinize the wind farm's effects more closely. These experimental configurations allow a nuanced understanding of how turbine wakes interact with each other and with the broader atmospheric and oceanic conditions. This approach serves as a potent tool for stakeholders in renewable energy, environmental protection, and marine spatial planning, providing a range of information on the environmental and socio-economic impacts of offshore wind energy projects.
Keywords: weather research and forecasting, wind turbine wake effects, environmental impact, wind farm parametrization, sustainability analysis
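As a rough first-order companion to a WRF wind-farm parametrization, the velocity deficit behind a single turbine is often estimated with the Jensen (top-hat) wake model. The rotor diameter, thrust coefficient and wake decay constant below are illustrative assumptions, not the project's values.

```python
# Minimal Jensen wake sketch: top-hat deficit behind one turbine.
# Turbine parameters are illustrative assumptions.
import math

def jensen_wake_deficit(U0, x, D=120.0, Ct=0.8, k=0.05):
    """Velocity deficit (m/s) at downwind distance x (m) behind one turbine."""
    a = (1 - math.sqrt(1 - Ct)) / 2          # axial induction factor
    Dw = D + 2 * k * x                        # linearly expanding wake diameter
    return U0 * 2 * a * (D / Dw) ** 2

U0 = 10.0                                     # free-stream wind speed, m/s
for x in (500, 1000, 2000):
    print(x, "m:", round(U0 - jensen_wake_deficit(U0, x), 2), "m/s")
```

The deficit decays with downwind distance as the wake expands, which is the single-wake behaviour that mesoscale wind-farm parametrizations aggregate over many turbines.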
Procedia PDF Downloads 74
1364 The Source of Fibre and Roxazyme® G2 Interacted to Influence the Length of Villi in the Ileal Epithelium of Growing Pigs Fed Fibrous Maize-Soybean Diets
Authors: F. Fushai, M. Tekere, M. Masafu, F. Siebrits, A. Kanengoni, F. Nherera
Abstract:
The effects of dietary fibre source on the histomorphology of the ileal epithelium were examined in growing pigs fed high-fibre (242-250 g total dietary fibre kg-1 dry matter) diets fortified with Roxazyme® G2. The control was a standard, low-fibre (141 g total dietary fibre kg-1 dry matter) diet formulated from dehulled soybean (Glycine max), maize (Zea mays) meal and hominy chop. Five fibrous diets were evaluated in which fibre was increased by partial substitution of the grains in the control diet with maize cobs, soybean hulls, barley (Hordeum vulgare L.) brewer's grains, lucerne (Medicago sativa) hay or wheat (Triticum aestivum) bran. Each diet was duplicated, and 220 mg Roxazyme® G2 kg-1 dry matter was added to one of the mixtures. Seventy-two intact Large White X Landrace male pigs weighing 32 ± 5.6 kg were randomly allocated to the diets in a completely randomised design with a 6 (diet) X 2 (enzyme) factorial arrangement of treatments. The pigs were fed ad libitum for 10 weeks. Ileal tissue samples were taken at slaughter, at a point 50 cm above the ileal-caecal valve. Villi length and area, and crypt depth, were measured by computerised image analysis, and the villi length:crypt depth ratio was calculated. Neither the diet nor the supplemental enzyme cocktail alone affected (p>0.05) any of the measured parameters. A significant (p=0.016) diet X enzyme interaction was observed for villi length, whereby the enzyme reduced the villi length of pigs on the soy-hulls, standard and wheat bran diets, with the opposite effect in pigs on the maize cob, brewer's grain and lucerne diets. The results suggest fibre-source-dependent changes in the morphology of the ileal epithelium of pigs fed high-fibre, maize-soybean diets fortified with Roxazyme® G2.
Keywords: fibre, growing pigs, histomorphology, ileum, Roxazyme® G2
Procedia PDF Downloads 472
1363 Assessing Students’ Readiness for an Open and Distance Learning Higher Education Environment
Authors: Upasana G. Singh, Meera Gungea
Abstract:
Learning is no longer confined to the traditional classroom interaction of teacher and student. Many universities offer courses through the Open and Distance Learning (ODL) mode, attracting learners of diverse ages, genders, and professions, to name a few. The ODL mode has surfaced as one of the most sought-after modes of learning, allowing learners to invest in their educational growth without hampering their personal and professional commitments. This mode of learning, however, requires that those who choose it be prepared to undertake studies through such a medium. The purpose of this research is to assess whether students who join universities offering courses through the ODL mode are ready to embark on and study within such a framework. This study helps unveil the challenges students face in such an environment and thus contributes to developing a framework to ease adoption of and integration into the ODL environment. Prior to the implementation of e-learning, a readiness assessment is essential for any institution that wants to adopt any form of e-learning. Various e-learning readiness assessment models have been developed over the years. However, this study is based on a conceptual model for e-learning readiness assessment which is a ‘hybrid model’. This hybrid model consists of four main parameters: 1) technological readiness, 2) culture readiness, 3) content readiness, and 4) demographic factors, with four sub-areas, namely technology, innovation, people and self-development. The model also includes the attitudes of users towards the adoption of e-learning as an important aspect of assessing e-learning readiness. For this study, some factors and sub-factors of the hybrid model have been considered and adapted, together with the ‘Attitude’ component.
A questionnaire was designed based on the model; the target population was students enrolled in undergraduate and postgraduate courses at the Open University of Mauritius. Preliminary findings indicate that most (68%) learners have average knowledge of the ODL form of learning, even though most (72%) had no previous experience with ODL. Despite learning through ODL, 74% of learners preferred hard-copy learning material, and 48% found it difficult to read learning material on electronic devices.
Keywords: open learning, distance learning, student readiness, hybrid model
Procedia PDF Downloads 110
1362 Development of an Experiment for Impedance Measurement of Structured Sandwich Sheet Metals by Using a Full Factorial Multi-Stage Approach
Authors: Florian Vincent Haase, Adrian Dierl, Anna Henke, Ralf Woll, Ennes Sarradj
Abstract:
Structured sheet metals and structured sandwich sheet metals are three-dimensional, lightweight structures with increased stiffness which are used in the automotive industry. The impedance, a measure of a structure's resistance to vibration, will be determined for plain sheets, structured sheets, and structured sandwich sheets. The aim of this paper is to generate an experimental design that minimizes the cost and duration of the experiments. Design of experiments will be used to reduce the large number of single tests required to determine the correlation between the impedance and its influencing factors. Full and fractional factorials are applied in order to systematize and plan the experiments. Their major advantages are high-quality results from a relatively small number of trials and their ability to identify the most important influencing factors, including their specific interactions. The developed full factorial experimental design for the study of plain sheets includes three factor levels. In contrast to the study of plain sheets, the impedance analysis of structured sheets and structured sandwich sheets is split into three phases. The first phase consists of preliminary tests which identify relevant factor levels. These factor levels are subsequently employed in main tests, which have the objective of identifying complex relationships between the parameters and the reference variable. Post-tests can follow if additional study of factor levels or other factors is necessary. By using full and fractional factorial experimental designs, the required number of tests is reduced by half. In the context of this paper, the benefits of applying design of experiments are presented.
Furthermore, a multi-stage approach is shown to take into account unrealizable factor combinations and minimize experiments.
Keywords: structured sheet metals, structured sandwich sheet metals, impedance measurement, design of experiment
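The run-count arithmetic behind a three-level full factorial and its fractional reduction can be sketched as follows. Factor names and the defining relation are illustrative, and this particular regular fraction keeps one third of the runs rather than the halving quoted above; the principle (selecting runs via a defining relation) is the same.

```python
# Minimal design-of-experiments sketch: 3-level full factorial and a regular
# one-third fraction. Factor names are hypothetical placeholders.
from itertools import product

factors = {"sheet_thickness": [1, 2, 3],      # coded levels, 3 per factor
           "structure_height": [1, 2, 3],
           "excitation_point": [1, 2, 3]}

full = list(product(*factors.values()))        # 3^3 = 27 runs
# Regular fraction: keep runs satisfying a defining relation on the coded
# levels (here, level sum divisible by 3), retaining 1/3 of the runs.
fraction = [run for run in full if sum(run) % 3 == 0]

print(len(full), len(fraction))   # 27 9
```

The fraction still visits every level of every factor, which is why main effects remain estimable while the run count drops sharply.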
Procedia PDF Downloads 375
1361 A Molecular Dynamic Simulation Study to Explore Role of Chain Length in Predicting Useful Characteristic Properties of Commodity and Engineering Polymers
Authors: Lokesh Soni, Sushanta Kumar Sethi, Gaurav Manik
Abstract:
This work uses molecular simulations to create equilibrated structures of a range of commercially used polymers. Equilibrated structures generated for polyvinyl acetate (isotactic), polyvinyl alcohol (atactic), polystyrene, polyethylene, polyamide 66, polydimethylsiloxane, polycarbonate, polyethylene oxide, polyamide 12, natural rubber, polyurethane, polycarbonate (bisphenol-A) and polyethylene terephthalate are employed to estimate the chain length that correctly predicts the chain parameters and properties. Further, the equilibrated structures are used to predict properties such as density, solubility parameter, cohesive energy density, surface energy, and the Flory-Huggins interaction parameter. The simulated densities for polyvinyl acetate, polyvinyl alcohol, polystyrene, polypropylene, and polycarbonate are 1.15 g/cm3, 1.125 g/cm3, 1.02 g/cm3, 0.84 g/cm3 and 1.223 g/cm3, respectively, in good agreement with available literature estimates. However, the critical numbers of repeating units, or degrees of polymerization, beyond which the solubility parameter saturated were 15, 20, 25, 10 and 20, respectively. This indicates that such properties, which dictate the miscibility of two or more polymers in their blends, depend strongly on the chosen polymer and its characteristic properties. An attempt has been made to correlate these properties with polymer characteristics such as Kuhn length, free volume and the energy term, which play a vital role in predicting the mentioned properties. These results help us to screen and propose a useful library that research groups may use to estimate polymer properties from molecular simulations of chains with the predicted critical lengths.
The library should obviate the need for researchers to spend effort finding the critical chain length needed for simulating the mentioned polymer properties.
Keywords: Kuhn length, Flory-Huggins interaction parameter, cohesive energy density, free volume
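The saturation behaviour the abstract tracks, the solubility parameter levelling off beyond a critical degree of polymerization, can be sketched via the Hildebrand relation delta = sqrt(CED). The CED-versus-chain-length values below are invented to mimic that saturation, not simulation output, and the 1% convergence threshold is an assumed criterion.

```python
# Minimal sketch: Hildebrand solubility parameter from cohesive energy density,
# and a convergence criterion for the critical chain length. CED values are
# invented to mimic the reported saturation behaviour.
import math

def solubility_parameter(ced_j_per_m3):
    """delta in MPa^0.5 from CED in J/m^3 (sqrt of Pa, scaled to MPa^0.5)."""
    return math.sqrt(ced_j_per_m3) / 1000.0

# CED (J/m^3) vs degree of polymerization for a hypothetical polymer:
ced_by_n = {5: 2.9e8, 10: 3.4e8, 15: 3.6e8, 20: 3.65e8, 25: 3.66e8}
deltas = {n: solubility_parameter(c) for n, c in ced_by_n.items()}

# Declare "saturated" when delta changes by < 1% between successive lengths.
ns = sorted(deltas)
critical = next(n2 for n1, n2 in zip(ns, ns[1:])
                if abs(deltas[n2] - deltas[n1]) / deltas[n1] < 0.01)
print("critical DP:", critical, " delta:", round(deltas[critical], 2), "MPa^0.5")
```

This is the kind of per-polymer convergence check that yields the critical repeat-unit counts (15, 20, 25, 10, 20) quoted in the abstract.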
Procedia PDF Downloads 196
1360 The Control of Wall Thickness Tolerance during Pipe Purchase Stage Based on Reliability Approach
Authors: Weichao Yu, Kai Wen, Weihe Huang, Yang Yang, Jing Gong
Abstract:
Metal-loss corrosion is a major threat to the safety and integrity of gas pipelines, as it may result in burst failures with severe consequences, including enormous economic losses and personnel casualties. It is therefore important to ensure the integrity and efficiency of corroding pipelines, considering the wall thickness, which plays an important role in the failure probability of a corroding pipeline. In practice, the wall thickness is controlled during the pipe purchase stage. For example, the API SPEC 5L standard regulates the allowable tolerance of the wall thickness from the specified value during pipe purchase. The allowable wall thickness tolerance determines the wall thickness distribution characteristics, such as the mean value, standard deviation and distribution type. Taking the uncertainties of the input variables in the burst limit-state function into account, the reliability approach rather than the deterministic approach is used to evaluate the failure probability. Moreover, the cost of pipe purchase is influenced by the allowable wall thickness tolerance: stricter control of the wall thickness usually corresponds to a higher pipe purchase cost. Therefore, changing the wall thickness tolerance varies both the probability of a burst failure and the cost of the pipe. This paper describes an approach to optimize the wall thickness tolerance considering both the safety and economy of corroding pipelines. The corrosion burst limit-state function in Annex O of CSA Z662-7 is employed to evaluate the failure probability using the Monte Carlo simulation technique. By changing the allowable wall thickness tolerance, the parameters of the wall thickness distribution in the limit-state function are changed; using the reliability approach, the corresponding variations in the burst failure probability are shown.
On the other hand, changing the wall thickness tolerance leads to a change in pipe purchase cost. From the variations in failure probability and pipe cost caused by changing the wall thickness tolerance specification, the optimal allowable tolerance can be obtained and used to define pipe purchase specifications.
Keywords: allowable tolerance, corroding pipeline segment, operation cost, production cost, reliability approach
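The reliability step can be sketched with a Monte Carlo estimate of burst probability as the thickness tolerance widens the thickness scatter. A simplified Barlow-type burst expression with assumed metal loss stands in here for the CSA Z662 Annex O limit state, and all inputs (diameter, strength, defect depth, pressures) are illustrative.

```python
# Minimal reliability sketch: Monte Carlo burst probability vs wall-thickness
# tolerance. Simplified Barlow-type limit state; all inputs are illustrative,
# not the CSA Z662 Annex O model or the paper's data.
import random

random.seed(42)

def pof(wt_mean=10.0, wt_tol=0.5, n=100_000):
    """Probability that burst pressure falls below operating pressure."""
    D, sigma_u, d_frac, Pop = 508.0, 480.0, 0.3, 12.5   # mm, MPa, -, MPa
    fails = 0
    for _ in range(n):
        wt = random.gauss(wt_mean, wt_tol / 3)    # tolerance taken as ~3 sigma
        Pb = 2 * sigma_u * wt * (1 - d_frac) / D  # Barlow with 30% metal loss
        fails += Pb < Pop
    return fails / n

p_tight = pof(wt_tol=0.3)   # strict tolerance -> narrow thickness scatter
p_loose = pof(wt_tol=1.5)   # loose tolerance -> wide thickness scatter
print(p_tight, p_loose)
```

Widening the tolerance raises the failure probability; trading that increase against the lower purchase cost of loosely-toleranced pipe is the optimization the paper describes.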
Procedia PDF Downloads 397
1359 Management of Autoimmune Diseases with Ayurveda
Authors: Simmi Chopra
Abstract:
In the last few years, there has been a surge of autoimmune diseases, which have become almost epidemic all over the world. The reasons vary from stress, insufficient sleep, smoking, genetics, environmental pollution, and adulterated foods to a diet full of “the deadly white,” white sugar and white flour. Most people diagnosed with these diseases are given steroids, opioids, supplements, or elimination diets to manage their lives, but most continue suffering to varying degrees. On the other hand, Ayurveda can help manage autoimmune problems effectively. Ayurveda is a 5000-year-old holistic medical system from India with an individualistic approach, in which health problems are viewed through the lens of balancing body and mind and targeting the root cause of the problem. A combination of diet and lifestyle according to Ayurvedic principles, Ayurvedic herbal formulations and Ayurvedic therapies can help in the management of autoimmune and other chronic diseases. Panchkarma, an intensive six-week detox method, helps balance body and mind and has been very effective in managing autoimmune problems. The paper will introduce the basic concepts of Ayurveda and describe the terminologies doshas, agni and ama. The paper will discuss the importance of diet and lifestyle according to the individual’s imbalance in the three functional parameters, the doshas, which govern every aspect of our body and mind, our cells and tissues. The significance of agni, which can be correlated to digestive strength, and ama, which can be correlated to toxins formed in the body that lead to health problems, will be outlined. The Ayurvedic pathophysiology of autoimmune diseases will be discussed, with emphasis on rheumatoid arthritis, multiple sclerosis and psoriasis, along with their Ayurvedic management. As Ayurveda is an individualistic system, one protocol will not work for everyone.
Therefore, case studies with Ayurvedic protocols for the above autoimmune diseases will be presented. Conclusion: Ayurveda can help in managing as well as arresting the progression of autoimmune problems. Ayurveda, an ancient medical system, is needed today more than ever; it is a tried and tested holistic system that has been practiced for many generations in India.
Keywords: ayurveda, autoimmune, diseases, nutrition
Procedia PDF Downloads 68
1358 In vivo Antidiabetic and Antioxidant Potential of Pseudovaria macrophylla Extract
Authors: Aditya Arya, Hairin Taha, Ataul Karim Khan, Nayiar Shahid, Hapipah Mohd Ali, Mustafa Ali Mohd
Abstract:
This study investigated the antidiabetic and antioxidant potential of Pseuduvaria macrophylla bark extract in streptozotocin-nicotinamide induced type 2 diabetic rats. LCMS-QTOF and NMR experiments were done to determine the chemical composition of the methanolic bark extract. For the in vivo experiments, the STZ (60 mg/kg b.w., 15 min after 120 mg/kg nicotinamide, i.p.) induced diabetic rats were treated with the methanolic extract of Pseuduvaria macrophylla (200 and 400 mg/kg b.w.) or glibenclamide (2.5 mg/kg) as positive control. Biochemical parameters were assayed in the blood samples of all groups of rats. The pro-inflammatory cytokines, antioxidant status and plasma transforming growth factor beta-1 (TGF-β1) were evaluated. The histology of the pancreas was examined, and its insulin expression was observed by immunohistochemistry. In addition, the expression of glucose transporters (GLUT 1, 2 and 4) was assessed in pancreas tissue by western blot analysis. The outcomes of the study showed that the bark methanol extract of Pseuduvaria macrophylla normalized the elevated blood glucose levels and improved serum insulin and C-peptide levels, with a significant increase in the antioxidant enzyme reduced glutathione (GSH) and a decrease in the level of lipid peroxidation (LPO). Additionally, the extract markedly decreased the levels of serum pro-inflammatory cytokines and transforming growth factor beta-1 (TGF-β1). Histopathology analysis demonstrated that Pseuduvaria macrophylla has the potential to protect the pancreas of diabetic rats against peroxidation damage by downregulating oxidative stress and elevated hyperglycaemia. Furthermore, the expression of insulin protein, GLUT-1, GLUT-2 and GLUT-4 in pancreatic cells was enhanced. The findings of this study support the anti-diabetic claims of Pseuduvaria macrophylla bark.
Keywords: diabetes mellitus, Pseuduvaria macrophylla, alkaloids, caffeic acid
Procedia PDF Downloads 358
1357 Biopotential of Introduced False Indigo and Albizia’s Weevils in Host Plant Control and Duration of Its Development Stages in Southern Regions of Panonian Basin
Authors: Renata Gagić-Serdar, Miroslava Markovic, Ljubinko Rakonjac, Aleksandar Lučić
Abstract:
The paper presents the results of entomological experimental studies of the biological, ecological, and bionomic performance of the introduced monophagous false indigo and albizia weevils, Acanthoscelides pallidipennis Motschulsky and Bruchidius terrenus (Sharp) (Coleoptera: Chrysomelidae: Bruchinae), such as their seasonal adaptation to the phenological phases of the aggressive invasive host plants Amorpha fruticosa L. and Albizia julibrissin (Fabales: Fabaceae) on the territory of the Republic of Serbia, with special attention to assessing and monitoring the newly formed and detected interspecies relations between autochthonous parasitic wasps of the fauna (Hymenoptera: Chalcidoidea) and the seed weevil beetles. During 15 years (2006-2021), at approximately 30 localities, data were analysed for the observed experimental host plants from statistically significant samples. After intensive investigation, the trophic status of genera from the chalcidoid families Pteromalidae and Eulophidae was identified. The recorded seed pest species of A. fruticosa and A. julibrissin, plants introduced to Serbia and planted as ornamental trees, were also subjected to various laboratory and field tests during this period, with the goal of collecting data on the duration of each developmental stage of the seed beetles. Field generations in different stages were also monitored by continuous collection and dissection of infested seed. The established host plant-seed predator linkage was observed in correlation with different environmental parameters, especially water level fluctuations in bank corridor formation stands and riparian cultures.
Keywords: amorpha, albizia, chalcidoid wasp, invasiveness, weevils
Procedia PDF Downloads 96
1356 Hydrological Analysis for Urban Water Management
Authors: Ranjit Kumar Sahu, Ramakar Jha
Abstract:
Urban Water Management is the practice of managing freshwater, waste water, and storm water as components of a basin-wide management plan. It builds on existing water supply and sanitation considerations within an urban settlement by incorporating urban water management within the scope of the entire river basin. The pervasive problems generated by urban development prompted the present work to study the spatial extent of urbanization in the Golden Triangle of Odisha connecting the cities of Bhubaneswar (20.2700° N, 85.8400° E), Puri (19.8106° N, 85.8314° E) and Konark (19.9000° N, 86.1200° E), and the patterns of periodic changes in urban development (systematic/random), in order to develop future plans for (i) urbanization promotion areas, and (ii) urbanization control areas. Using remote sensing with USGS (U.S. Geological Survey) Landsat 8 maps, supervised classification of the urban sprawl was carried out for 1980-2014, specifically after 2000. This work presents the following: (i) time series analysis of hydrological data (groundwater and rainfall), (ii) application of SWMM (Storm Water Management Model) and other soft computing techniques for urban water management, and (iii) uncertainty analysis of model parameters (urban sprawl and correlation analysis). The outcome of the study shows drastic growth in urbanization and depletion of groundwater levels in the area, which is discussed briefly. Other related outcomes, such as a declining rainfall trend and a rise in sand mining in the local vicinity, are also discussed.
Research of this kind will (i) improve water supply and consumption efficiency, (ii) upgrade drinking water quality and waste water treatment, (iii) increase the economic efficiency of services to sustain operations and investments for water, waste water, and storm water management, and (iv) engage communities to reflect their needs and knowledge for water management.
Keywords: Storm Water Management Model (SWMM), uncertainty analysis, urban sprawl, land use change
Procedia PDF Downloads 428
1355 Operating Parameters and Costs Assessments of a Real Fishery Wastewater Effluent Treated by Electrocoagulation Process
Authors: Mirian Graciella Dalla Porta, Humberto Jorge José, Danielle de Bem Luiz, Regina de F. P. M. Moreira
Abstract:
Similar to most processing industries, fish processing produces large volumes of wastewater, which contains mainly organic contaminants, salts and oils dispersed therein. Different processes have been used for the treatment of fishery wastewaters, but the most commonly used are chemical coagulation and flotation. These techniques are well known, but sometimes the characteristics of the treated effluent do not comply with legal discharge standards. Electrocoagulation (EC) is an electrochemical process that can be used to treat wastewaters in terms of both organic matter and nutrient removal. The process is based on the use of sacrificial electrodes, such as aluminum, iron or zinc, that are oxidized to produce metal ions that coagulate and react with organic matter and nutrients in the wastewater. While EC processes are effective for treating several types of wastewater, applications have been limited due to high energy demands and high current densities. Generally, the EC process can be performed without additional chemicals or pre-treatment, but the costs should be reduced for EC processes to become more applicable. In this work, we studied the treatment of a real wastewater from the fishmeal industry by the electrocoagulation process. Removal efficiencies for chemical oxygen demand (COD), total organic carbon (TOC), turbidity, and phosphorus and nitrogen concentration were determined as a function of the operating conditions, such as pH, current density and operating time. The optimum operating conditions were determined to be an operating time of 10 minutes, a current density of 100 A.m-2, and an initial pH of 4.0. COD, TOC, phosphorus concentration and turbidity removal efficiencies at the optimum conditions were higher than 90% for the aluminum electrode. Operating costs at the optimum conditions were calculated as US$ 0.37/m3 (US$ 0.038/kg COD) for the Al electrode.
These results demonstrate that the EC process is a promising technology for removing nutrients from fishery wastewaters, combining high nutrient removal efficiency with low energy requirements.
Keywords: electrocoagulation, fish, food industry, wastewater
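The operating-cost arithmetic behind figures like the quoted US$ 0.37/m3 (energy plus Faradaic electrode consumption per treated volume) can be sketched as follows. The cell voltage, electrode area, treated volume and unit prices are assumptions; only the current density and operating time follow the reported optimum.

```python
# Minimal EC operating-cost sketch: electrical energy plus Al dissolved by
# Faraday's law, per cubic metre treated. Voltage, area, volume and prices
# are illustrative assumptions, not the study's values.
F = 96485.0                      # C/mol, Faraday constant

def ec_cost(U=5.0, j=100.0, area=0.05, t_min=10.0, vol_m3=0.005,
            price_kwh=0.10, price_kg_al=2.0):
    I = j * area                               # current, A (j in A/m^2)
    t = t_min * 60.0                           # operating time, s
    energy_kwh = U * I * t / 3.6e6             # electrical energy, kWh
    al_kg = I * t * 0.02698 / (3 * F)          # Al dissolved (z=3), kg
    return (energy_kwh * price_kwh + al_kg * price_kg_al) / vol_m3

print(round(ec_cost(), 3), "US$/m3")
```

With these assumed inputs the result lands in the same order of magnitude as the reported cost, illustrating why electrode consumption and energy are the two terms that dominate EC operating cost.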
Procedia PDF Downloads 251
1354 Nonstationary Modeling of Extreme Precipitation in the Wei River Basin, China
Authors: Yiyuan Tao
Abstract:
Under the impact of global warming together with the intensification of human activities, hydrological regimes may be altered, and the traditional stationarity assumption may no longer be satisfied. However, most current design standards for water infrastructure are still based on the hypothesis of stationarity, which may result in severe biases. Many critical impacts of climate on ecosystems, society, and the economy are controlled by extreme events rather than mean values. Therefore, it is of great significance to identify the non-stationarity of precipitation extremes and to model precipitation extremes in a nonstationary framework. The Wei River Basin (WRB), located in a continental monsoon climate zone in China, is selected as a case study. Six extreme precipitation indices were employed to investigate the changing patterns and stationarity of precipitation extremes in the WRB. To identify whether precipitation extremes are stationary, the Mann-Kendall trend test and the Pettitt test, which examines the occurrence of abrupt changes, are adopted. Extreme precipitation index series are fitted with non-stationary distributions selected from six widely used distribution functions (Gumbel, lognormal, Weibull, gamma, generalized gamma and exponential) by means of the time-varying-moments framework of generalized additive models for location, scale and shape (GAMLSS), in which the distribution parameters are defined as functions of time.
The results indicate that: (1) the trends were not significant for the whole WRB, but significant positive/negative trends were still observed at some stations; abrupt changes in consecutive wet days (CWD) mainly occurred in 1985, and the assumption of stationarity is invalid for some stations; (2) for the nonstationary extreme precipitation index series with significant positive/negative trends, the GAMLSS models capture the temporal variations of the indices well and perform better than the stationary model. Finally, the differences between the quantiles of the nonstationary and stationary models are analyzed, which highlights the importance of nonstationary modeling of precipitation extremes in the WRB.
Keywords: extreme precipitation, GAMLSS, non-stationary, Wei River Basin
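The Mann-Kendall trend test applied to the index series can be sketched as follows (S statistic with the normal approximation, no tie correction); the input series is synthetic, not a WRB index.

```python
# Minimal Mann-Kendall sketch: S statistic and normal-approximation Z score,
# without the tie-variance correction. The series is synthetic.
import math

def mann_kendall(x):
    n = len(x)
    s = sum((xj > xi) - (xj < xi)
            for i, xi in enumerate(x) for xj in x[i + 1:])
    var = n * (n - 1) * (2 * n + 5) / 18
    z = (s - (s > 0) + (s < 0)) / math.sqrt(var) if s != 0 else 0.0
    return s, z   # |z| > 1.96 -> significant trend at the 5% level

rising = [1, 2, 1, 3, 4, 3, 5, 6, 5, 7, 8, 9]   # synthetic upward series
s, z = mann_kendall(rising)
print("S =", s, " Z =", round(z, 2))
```

A significant Z here is the signal that would route a station's index series to the nonstationary GAMLSS fit rather than the stationary one.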
Procedia PDF Downloads 125
1353 Optimization of Sol-Gel Copper Oxide Layers for Field-Effect Transistors
Authors: Tomas Vincze, Michal Micjan, Milan Pavuk, Martin Weis
Abstract:
In recent years, alternative materials have gained attention as replacements for polycrystalline and amorphous silicon, the standard for low-requirement devices in which silicon is unnecessarily costly. For this reason, metal oxides are envisioned as new materials for low-requirement applications such as sensors, solar cells, energy storage devices, and field-effect transistors. The most common method of layer growth is sputtering; however, it is a high-cost fabrication method, and a more industry-suitable alternative is the sol-gel method. Within this group of materials, many oxides exhibit semiconductor-like behavior with mobility sufficiently high for transistor applications, and the sol-gel method is a cost-effective deposition technique for such semiconductor-based devices. Copper oxides, p-type semiconductors with free-charge mobility up to 1 cm²/V·s, are suitable replacements for poly-Si or a-Si:H devices. However, to reach the potential of silicon devices, fine-tuning of material properties is needed. Here we focus on optimizing the electrical parameters of copper oxide-based field-effect transistors by modifying the precursor solvent (usually 2-methoxyethanol). To achieve good solubility and high-quality films, a better solvent is required; since almost no solvent offers both a high dielectric constant and a high boiling point, an alternative approach using blended solvents was proposed. By mixing isopropyl alcohol (IPA) and 2-methoxyethanol (2ME), the precursor reached better solubility. The quality of the layers fabricated from the mixed solutions was evaluated with respect to surface morphology and electrical properties. The IPA:2ME mixture reached its optimum at a weight ratio of 1:3, for which the cupric oxide layers had the highest crystallinity and the highest effective charge mobility.
Keywords: copper oxide, field-effect transistor, semiconductor, sol-gel method
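The abstract does not state how the effective charge mobility was extracted; a common approach for FETs is the square-law fit to the saturation-regime transfer curve. A sketch under that assumption, with hypothetical device parameters (W, L, Ci) rather than the authors' actual devices:

```python
import math

def saturation_mobility(vg, id_sat, w, l, ci):
    """Estimate field-effect mobility (cm^2/V.s) from a saturation-regime
    transfer curve using the square-law model
        Id = (W / 2L) * mu * Ci * (Vg - Vt)^2,
    so sqrt(Id) is linear in Vg with slope sqrt(W * mu * Ci / (2L)).
    Units: w, l in cm; ci in F/cm^2; currents in A; voltages in V.
    """
    y = [math.sqrt(abs(i)) for i in id_sat]
    n = len(vg)
    mx, my = sum(vg) / n, sum(y) / n
    # least-squares slope of sqrt(Id) versus Vg
    slope = sum((vx - mx) * (vy - my) for vx, vy in zip(vg, y)) / \
            sum((vx - mx) ** 2 for vx in vg)
    return 2 * l / (w * ci) * slope ** 2

# synthetic ideal square-law data (hypothetical device: W/L = 100, Ci = 10 nF/cm^2)
W, L, CI = 0.1, 0.001, 1e-8
vg = [1.0, 2.0, 3.0, 4.0, 5.0]
ids = [(W / (2 * L)) * 1.0 * CI * v ** 2 for v in vg]
mu = saturation_mobility(vg, ids, W, L, CI)  # recovers mu = 1.0 cm^2/V.s
```

For a real p-type CuO device the gate sweep and drain current are negative; taking the absolute value of Id, as above, keeps the fit valid.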
Procedia PDF Downloads 137
1352 Scheduling Building Projects: The Chronographical Modeling Concept
Authors: Adel Francis
Abstract:
Most scheduling methods and software apply critical path logic: they schedule activities, apply constraints between these activities, and try to optimize and level the allocated resources. The extensive use of this logic produces a complex and error-prone network that is hard to present, follow, and update. Planning and managing building projects should tackle the coordination of works and the management of limited spaces, traffic, and supplies. Activities cannot be performed without available resources, and resources cannot be used beyond the capacity of workplaces; otherwise, workspace congestion will negatively affect the flow of works. The objective of space planning is to link the spatial and temporal aspects, promote efficient use of the site, define optimal site occupancy rates, and ensure suitable rotation of the workforce through the different spaces. Chronographic scheduling modelling belongs to this category and models construction operations as well as their processes, logical constraints, and association and organizational models, which helps to better illustrate the schedule information through multiple flexible approaches. The model defines three categories of areas (punctual, surface, and linear) and several layers (space creation, systems, closing off space, finishing, and reduction of space). Chronographical modelling is a more complete communication method, able to alternate from one visual approach to another by manipulating graphics via a set of parameters and their associated values. Each approach can help to schedule a certain project type or specialty. Visual communication can also be improved through layering, sheeting, juxtaposition, alterations, and permutations, allowing for groupings, hierarchies, and classification of project information.
In this way, graphic representation becomes a living, transformable image, showing valuable information in a clear and comprehensible manner, simplifying site management while using the visual space as efficiently as possible.
Keywords: building projects, chronographic modelling, CPM, critical path, precedence diagram, scheduling
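The critical path logic that the abstract takes as its starting point can be sketched as a forward and backward pass over an activity network. A minimal CPM sketch with hypothetical activity data (the input must be listed in topological order):

```python
def cpm(activities):
    """Critical Path Method sketch.

    activities: {name: (duration, [predecessor names])}, listed in
    topological order. Returns (earliest_start, latest_start, critical),
    where critical is the set of zero-total-float activities.
    """
    # forward pass: earliest start = max earliest finish over predecessors
    es = {}
    for name, (dur, preds) in activities.items():
        es[name] = max((es[p] + activities[p][0] for p in preds), default=0)
    end = max(es[n] + activities[n][0] for n in activities)

    # backward pass: latest finish = min latest start over successors
    lf = {n: end for n in activities}
    for name in reversed(list(activities)):
        for p in activities[name][1]:
            lf[p] = min(lf[p], lf[name] - activities[name][0])
    ls = {n: lf[n] - activities[n][0] for n in activities}

    critical = {n for n in activities if es[n] == ls[n]}
    return es, ls, critical

# hypothetical four-activity network: A precedes B and C, which precede D
es, ls, critical = cpm({
    "A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"]),
})
# critical == {'A', 'C', 'D'}: the longest chain A -> C -> D drives the schedule
```

The resource levelling and workspace constraints discussed above are exactly what this bare logic omits: it sees only durations and precedence, not space capacity.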
Procedia PDF Downloads 157
1351 Evaluation of the Reliability of a Flood Control System Based on Uncertainty of Flood Discharge, Case Study: Wulan River, Central Java, Indonesia
Authors: Anik Sarminingsih, Krishna V. Pradana
Abstract:
The failure of a flood control system can be caused by various factors, for example when the uncertainty of the design flood is not considered and the capacity of the system is exceeded. Uncertainty is recognized as a serious issue in hydrological studies and is influenced by many factors, from the reading of water elevation and rainfall data to the selection of the method of analysis. In hydrological modeling, the selected models and parameters corresponding to the watershed conditions should be evaluated with a hydraulic model of the river as a drainage channel. River cross-section capacity is the first line of defense in assessing the reliability of a flood control system, and the reliability of the river capacity describes the potential magnitude of flood risk. The case study in this research is the Wulan River in Central Java. This river floods almost every year despite flood control efforts such as levees, a floodway, and a diversion; the affected areas include several sub-districts, mainly in Kabupaten Kudus and Kabupaten Demak. The first step is a frequency analysis of discharge observations from the Klambu weir, which has a time series covering 1951-2013. The frequency analysis is performed using several distribution models: the Gumbel, normal, log-normal, Pearson Type III, and Log-Pearson distributions. The standard deviations of the fitted models overlap, so the maximum flood discharge for lower return periods may exceed the average discharge for larger return periods. The next step is a hydraulic analysis to evaluate the reliability of the river capacity for the flood discharges obtained from the several methods. The design flood discharge of the flood control system is selected as the result of the method closest to the bankfull capacity of the river.
Keywords: design flood, hydrological model, reliability, uncertainty, Wulan river
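One of the distributions listed above, the Gumbel (EV1), admits a compact closed-form quantile. A sketch of a method-of-moments fit and return-period design flood, using hypothetical annual-maximum discharges rather than the Klambu weir record:

```python
import math

def gumbel_quantile(series, t_years):
    """Design flood for return period T from a Gumbel (EV1) fit by the
    method of moments:
        beta = sqrt(6) * s / pi,   mu = xbar - 0.5772 * beta,
        x_T  = mu - beta * ln(-ln(1 - 1/T)).
    """
    n = len(series)
    xbar = sum(series) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in series) / (n - 1))
    beta = math.sqrt(6) * s / math.pi
    mu = xbar - 0.5772 * beta
    return mu - beta * math.log(-math.log(1 - 1 / t_years))

# hypothetical annual-maximum discharges (m^3/s), not the Klambu data
data = [420.0, 510.0, 380.0, 620.0, 450.0, 540.0, 700.0, 480.0, 530.0, 610.0]
q10, q100 = gumbel_quantile(data, 10), gumbel_quantile(data, 100)
```

Comparing quantiles such as q10 and q100 across the candidate distributions, and against bankfull capacity, is the kind of check the selection step above describes.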
Procedia PDF Downloads 295
1350 A Strategy for Oil Production Placement Zones Based on Maximum Closeness
Authors: Waldir Roque, Gustavo Oliveira, Moises Santos, Tatiana Simoes
Abstract:
Increasing the oil recovery factor of a reservoir has long been a concern of the oil industry. Usually, production placement zones are defined after analysis of geological and petrophysical parameters, with rock porosity, permeability, and oil saturation being of fundamental importance. In this context, the determination of hydraulic flow units (HFUs) is an important step in reservoir characterization, since it may identify regions of the reservoir with similar petrophysical and fluid-flow properties and, in particular, support the placement of production zones that favor the tracing of directional wells. An HFU is defined as a representative volume of the total reservoir rock in which petrophysical and fluid-flow properties are internally consistent and predictably distinct from those of other reservoir rocks. Technically, an HFU is characterized as a rock region whose flow zone indicator (FZI) points lie on a straight line of unit slope. The goal of this paper is to provide a trustworthy indication of oil production placement zones for the best-fit HFUs. The FZI cloud of points can be obtained from the reservoir quality index (RQI), a function of effective porosity and permeability. Using log and core data, the HFUs are identified; through the discrete rock type (DRT) classification, sets of connected cell clusters are found, and by means of a graph centrality metric, the maximum closeness (MaxC) cell is obtained for each cluster. Taking the MaxC cells as production zones, an extensive analysis based on several oil recovery factor and cumulative oil production simulations was carried out for the SPE Model 2 and UNISIM-I-D synthetic fields, the latter built from public data of the actual Namorado Field, Campos Basin, Brazil.
The results show that the MaxC strategy is technically feasible and very reliable for identifying high-performance production placement zones.
Keywords: hydraulic flow unit, maximum closeness centrality, oil production simulation, production placement zone
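The two quantities the abstract combines can be sketched directly: the FZI from the standard Amaefule-type definition (RQI = 0.0314·sqrt(k/φ) with k in mD, FZI = RQI divided by the pore-to-matrix ratio φ/(1-φ)), and the MaxC cell as the cluster cell with minimum total graph distance. The toy cluster below is illustrative, not reservoir data:

```python
import math
from collections import deque

def fzi(k_md, phi):
    """Flow zone indicator from permeability (mD) and effective porosity
    (fraction): RQI = 0.0314*sqrt(k/phi), phi_z = phi/(1-phi), FZI = RQI/phi_z."""
    rqi = 0.0314 * math.sqrt(k_md / phi)
    return rqi / (phi / (1 - phi))

def max_closeness_cell(adj):
    """Cell of a connected cluster with maximum closeness centrality,
    i.e. minimum total BFS distance to all other cells.
    adj: {cell: [neighbour cells]}."""
    def total_distance(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return sum(dist.values())
    return min(adj, key=total_distance)

# cells with equal FZI (within tolerance) would fall in the same HFU/DRT class
quality = fzi(100.0, 0.2)  # about 2.81 for k = 100 mD, phi = 0.2

# a 5-cell linear cluster: the middle cell has maximum closeness
cluster = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
best = max_closeness_cell(cluster)  # -> 3
```

On a real DRT cluster the adjacency would come from face-connected grid cells; the BFS distance then approximates how centrally a candidate well cell sits within its flow unit.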
Procedia PDF Downloads 331