Search results for: graphics processing units
932 Effect of Pretreatment and Drying Method on Selected Quality Parameters of Dried Bell Pepper
Authors: Toyosi Yewande Tunde-Akintunde, Grace Oluwatoyin Ogunlakin, Bosede Folake Olanipekun
Abstract:
Peppers are excellent sources of nutrients, but their high moisture content makes them susceptible to spoilage. Drying, a common processing method, results in a reduction of these nutrients in the final product. Pre-treatment of pepper before drying can be used to reduce the level of nutrient degradation. Thus, this study investigated the effect of pre-treatment (hot water blanching and soaking in brine-sodium chloride) and drying methods (oven, microwave and sun) on selected quality parameters (proximate composition, capsaicin, reducing sugar and phenolic content, pH, total solids (TS), titratable acidity (TA), water absorption capacity (WAC) and colour) of pepper. The protein and moisture content values ranged from 9.09 to 10.23% and 5.63 to 8.48%, respectively. Sun-dried samples had the highest values, while oven-dried samples had the lowest. Brine-treated samples had higher protein but lower moisture content than blanched samples. Capsaicin, reducing sugar and phenolic content values ranged from 0.68 to 0.87 mg/dm3; 3.18 to 3.79 µg/ml; and 40.67 to 84.01 mg GAE/100 g d.m., respectively. The sun-dried samples had the highest values, while the lowest values were from microwave-dried samples. The brine-treated samples had higher capsaicin values, while the blanched samples had higher reducing sugar and phenolic contents. The values of L, a* and b* for the dried pepper varied from 58.76 to 63.13; 7.09 to 7.34; and 11.79 to 12.36, respectively. Oven-dried samples had the lowest values for a*, while their L values were the highest. The L and a* values for brine-treated samples were higher than those of blanched samples. The pre-treatments and drying methods considered resulted in different values of the quality parameters, which indicates that drying and pre-treatment have an effect on the quality of the final dried pepper samples.
Keywords: Bell pepper, microwave drying, oven drying, quality, sun drying
Procedia PDF Downloads 345
931 An Analysis on Clustering Based Gene Selection and Classification for Gene Expression Data
Authors: K. Sathishkumar, V. Thiagarasu
Abstract:
Due to recent advances in DNA microarray technology, it is now feasible to obtain gene expression profiles of tissue samples at relatively low cost. Many scientists around the world take advantage of this gene profiling to characterize complex biological circumstances and diseases. Microarray techniques used in genome-wide gene expression and genome mutation analysis help scientists and physicians to understand pathophysiological mechanisms, to make diagnoses and prognoses, and to choose treatment plans. DNA microarray technology has now made it possible to simultaneously monitor the expression levels of thousands of genes during important biological processes and across collections of related samples. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenges of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. A first step toward addressing this challenge is the use of clustering techniques, which are essential in the data mining process for revealing natural structures and identifying interesting patterns in the underlying data. This work presents an analysis of several clustering algorithms proposed to deal with gene expression data effectively. Existing algorithms such as the Support Vector Machine (SVM), the K-means algorithm and evolutionary algorithms are analyzed thoroughly to identify their advantages and limitations. A performance evaluation of the existing algorithms is carried out to determine the best approach. In order to improve the classification performance of the best approach in terms of accuracy, convergence behavior and processing time, a hybrid clustering-based optimization approach has been proposed.
Keywords: microarray technology, gene expression data, clustering, gene selection
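As a minimal illustration of the clustering step discussed above (a baseline sketch only, not the authors' hybrid optimization approach; the expression matrix, cluster count and seed are assumptions for demonstration), genes can be grouped by expression pattern with K-means:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical expression matrix: rows = genes, columns = tissue samples.
rng = np.random.default_rng(0)
expression = rng.normal(size=(500, 40))

# Standardise each gene profile so clustering reflects expression patterns,
# not absolute magnitudes.
profiles = StandardScaler().fit_transform(expression)

# Group genes into k co-expression clusters.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(profiles)
for cluster_id in range(8):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    print(f"cluster {cluster_id}: {len(members)} genes")
```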
Procedia PDF Downloads 323
930 Remote Observation of Environmental Parameters on the Surface of the Maricunga Salt Flat, Atacama Region, Chile
Authors: Lican Guzmán, José Manuel Lattus, Mariana Cervetto, Mauricio Calderón
Abstract:
Today, the estimation of the effects produced by climate change in high Andean wetland environments faces big challenges. This study provides an analysis, by remote sensing, of how some environmental aspects have evolved on the Maricunga salt flat over the last 30 years, divided into the summer and winter seasons, and of whether global warming is conditioning these changes. The first step toward this goal was the compilation of geological, hydrological, and morphometric antecedents to ensure an adequate contextualization of its environmental parameters. After this, software processing and analysis of Landsat 5, 7 and 8 satellite imagery were required to obtain the vegetation, water, surface temperature, and soil moisture indexes (NDVI, NDWI, LST, and SMI) in order to see how their spatial-temporal conditions have evolved in the area of study during recent decades. Results show a tendency of regular increase in surface temperature and availability of water during both seasons, but with slight drought periods during summer. The soil moisture factor behaves as a constant during the dry season and tends to increase during wintertime. Vegetation analysis shows an areal and quality increase of its surface sustained through time that is consistent with the increase of water supply and temperature in the basin mentioned before. Roughly, the effects of climate change can be described as positive for the Maricunga salt flat; however, the lack of exact correlation in the dates of the imagery available for remote sensing analysis could be a factor leading to misinterpretation of the results.
Keywords: global warming, geology, GIS, Atacama Desert, Salar de Maricunga, environmental geology, NDVI, SMI, LST, NDWI, Landsat
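Of the indexes named above, NDVI is the simplest to reproduce; a minimal sketch, assuming reflectance-calibrated Landsat bands (NIR is band 5 on Landsat 8 and band 4 on Landsat 5/7; red is band 4 and band 3, respectively):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from reflectance arrays."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)

# Hypothetical reflectance rasters standing in for Landsat 8 bands B5 and B4.
nir = np.array([[0.40, 0.35], [0.10, 0.08]])
red = np.array([[0.08, 0.07], [0.09, 0.07]])
print(ndvi(nir, red))  # values near +0.7 indicate vegetation; near 0, bare soil
```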
Procedia PDF Downloads 81
929 Development of Surface Modification Technology for Control Element Drive Mechanism Nozzle and Fatigue Enhancement of Ni-Based Alloys
Authors: Auezhan Amanov, Inho Cho, Young-Sik Pyun
Abstract:
The control element drive mechanism (CEDM) nozzle is welded onto the reactor vessel and currently uses Alloy 690 material. The top of the reactor is equipped with about 100 CEDM nozzles with an internal diameter of about 70 mm. Relatively large inlet/outlet nozzles are present as two outlet nozzles and four inlet nozzles on the reactor wall. The inner diameter of the nozzle is vulnerable to stress corrosion cracking (SCC), and in order to solve this problem, an ultrasonic nanocrystal surface modification (UNSM) treatment is performed on the inner diameter of the nozzle and the weld surface. The ultimate goal is to improve the service life of parts by applying compressive residual stress and suppressing primary water stress corrosion cracking (PWSCC). The main purpose is to design and fabricate a UNSM treatment device for processing the internal diameter of CEDM nozzles and inlet/outlet nozzles. In order to develop the system, basic technology such as UNSM tooling is developed, and the mechanical properties and fatigue performance of the reactor nozzle material, made of Ni-based alloys, before and after UNSM treatment are compared and evaluated using specimens. The inner diameter of the nozzle was treated by the newly developed UNSM device under optimized treatment parameters. It was found that the mechanical properties and fatigue performance of the nozzle were improved in comparison with the untreated nozzle, which may be attributed to the increase in hardness and the induced compressive residual stress.
Keywords: control element drive mechanism nozzle, fatigue, Ni-based alloy, ultrasonic nanocrystal surface modification, UNSM
Procedia PDF Downloads 111
928 Dairy Wastewater Treatment by Electrochemical and Catalytic Method
Authors: Basanti Ekka, Talis Juhna
Abstract:
Dairy industrial effluents originating from typical processing activities are composed of various organic and inorganic constituents, including proteins, fats, inorganic salts, antibiotics, detergents, sanitizers, pathogenic viruses, bacteria, etc. These contaminants are harmful not only to human beings but also to aquatic flora and fauna. Because they comprise such broad classes of contaminants, the specific targeted removal methods available in the literature are not viable solutions on the industrial scale. Therefore, in this ongoing research, a series of coagulation, electrochemical, and catalytic methods will be employed. The bulk coagulation and electrochemical methods can wash off most of the contaminants, but some of the harmful chemicals may slip through; therefore, catalysts designed and synthesized for this purpose will be employed for the removal of the targeted chemicals. In the context of Latvian dairy industries, work is currently in progress on the characterization of dairy effluents by total organic carbon (TOC), Inductively Coupled Plasma Mass Spectrometry (ICP-MS)/Inductively Coupled Plasma Optical Emission Spectrometry (ICP-OES), High-Performance Liquid Chromatography (HPLC), Gas Chromatography-Mass Spectrometry (GC-MS), and Mass Spectrometry. After careful evaluation of the dairy effluents, a cost-effective natural coagulant will be employed prior to advanced electrochemical technology such as electrocoagulation and electro-oxidation as a secondary treatment process. Finally, graphene oxide (GO) based hybrid materials will be used for post-treatment of dairy wastewater, as graphene oxide has been widely applied in various fields such as environmental remediation and energy production due to the presence of various oxygen-containing groups. Modified GO will be used as a catalyst for the removal of the remaining contaminants after the electrochemical process.
Keywords: catalysis, dairy wastewater, electrochemical method, graphene oxide
Procedia PDF Downloads 144
927 Poly (3,4-Ethylenedioxythiophene) Prepared by Vapor Phase Polymerization for Stimuli-Responsive Ion-Exchange Drug Delivery
Authors: M. Naveed Yasin, Robert Brooke, Andrew Chan, Geoffrey I. N. Waterhouse, Drew Evans, Darren Svirskis, Ilva D. Rupenthal
Abstract:
Poly(3,4-ethylenedioxythiophene) (PEDOT) is a robust conducting polymer (CP) exhibiting high conductivity and environmental stability. It can be synthesized by chemical, electrochemical or vapour phase polymerization (VPP). Dexamethasone sodium phosphate (dexP) is an anionic drug molecule which has previously been loaded onto PEDOT as a dopant via electrochemical polymerisation; however, this technique requires conductive surfaces from which polymerization is initiated. On the other hand, VPP produces highly organized, biocompatible CP structures, and polymerization can be achieved on a range of surfaces with a relatively straightforward scale-up process. Following VPP of PEDOT, dexP can be loaded and subsequently released via ion-exchange. This study aimed at preparing and characterising both non-porous and porous VPP PEDOT structures, including examining drug loading and release via ion-exchange. Porous PEDOT structures were prepared by first depositing a sacrificial polystyrene (PS) colloidal template on a substrate, heat-curing this deposition and then spin-coating it with the oxidant solution (iron tosylate) at 1500 rpm for 20 s. VPP of both porous and non-porous PEDOT was achieved by exposing the samples to monomer vapours in a vacuum oven at 40 mbar and 40 °C for 3 hrs. Non-porous structures were prepared similarly on the same substrate but without any sacrificial template. Surface morphology, composition and electrochemical behaviour were then characterized by atomic force microscopy (AFM) and scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), and cyclic voltammetry (CV), respectively. Drug loading was achieved by 50 CV cycles in a 0.1 M dexP aqueous solution. For drug release, each sample was exposed to 20 mL of phosphate buffered saline (PBS) placed in a water bath operating at 37 °C and 100 rpm. The film was stimulated (continuous pulse of ±1 V at 0.5 Hz for 17 min) while immersed in PBS. Samples were collected at 1, 2, 6, 23, 24, 26 and 27 hrs and analysed for dexP by high-performance liquid chromatography (HPLC, Agilent 1200 series). AFM and SEM revealed the honeycomb nature of the prepared porous structures. XPS data showed the elemental composition of the dexP-loaded film surface, which corresponded well with that of PEDOT, and also showed that approximately one dexP molecule was present per three EDOT monomer units. The reproducible electroactive nature was shown by several cycles of reduction and oxidation via CV. Drug release revealed successful drug loading via ion-exchange, with stimulated porous and non-porous structures exhibiting a proof-of-concept burst release upon application of an electrical stimulus. A similar drug release pattern was observed for porous and non-porous structures, without any statistically significant difference, possibly due to the thin nature of these structures. To our knowledge, this is the first report to explore the potential of VPP-prepared PEDOT for stimuli-responsive drug delivery via ion-exchange. The produced porous structures were ordered and highly porous, as indicated by AFM and SEM, and exhibited good electroactivity, as shown by CV. Future work will investigate porous structures as nano-reservoirs to increase drug loading while sealing these structures to minimize spontaneous drug leakage.
Keywords: PEDOT for ion-exchange drug delivery, stimuli-responsive drug delivery, template based porous PEDOT structures, vapour phase polymerization of PEDOT
Procedia PDF Downloads 231
926 Functionalized Nanoporous Ceramic Membranes for Electrodialysis Treatment of Harsh Wastewater
Authors: Emily Rabe, Stephanie Candelaria, Rachel Malone, Olivia Lenz, Greg Newbloom
Abstract:
Electrodialysis (ED) is a well-developed technology for ion removal in a variety of applications. However, many industries generate harsh wastewater streams that are incompatible with traditional ion exchange membranes. Membrion® has developed novel ceramic-based ion exchange membranes (IEMs) offering several advantages over traditional polymer membranes: high performance at low pH, chemical resistance to oxidizers, and a rigid structure that minimizes swelling. These membranes are synthesized with our patented silane-based sol-gel techniques. The pore size, shape, and network structure are engineered through a molecular self-assembly process where thermodynamic driving forces are used to direct where and how pores form. Either cationic or anionic groups can be added within the membrane nanopore structure to create cation- and anion-exchange membranes. The ceramic IEMs are produced on a roll-to-roll manufacturing line with low-temperature processing. Membrane performance testing is conducted using in-house permselectivity, area-specific resistance, and ED stack testing setups. Ceramic-based IEMs show comparable performance to traditional IEMs and offer some unique advantages. Long exposure to highly acidic solutions has a negligible impact on ED performance. Additionally, we have observed stable performance in the presence of strong oxidizing agents such as hydrogen peroxide. This stability is expected, as the ceramic backbone of these materials is already in a fully oxidized state. These data suggest that ceramic membranes, made using sol-gel chemistry, could be an ideal solution for acidic and/or oxidizing wastewater streams from processes such as semiconductor manufacturing and mining.
Keywords: ion exchange, membrane, silane chemistry, nanostructure, wastewater
Procedia PDF Downloads 86
925 Navigating the Case-Based Learning Multimodal Learning Environment: A Qualitative Study Across the First-Year Medical Students
Authors: Bhavani Veasuvalingam
Abstract:
Case-based learning (CBL) is a popular instructional method aimed at bridging theory and clinical practice. This study aims to explore how a CBL mixed-modality curriculum influences students' learning styles and the strategies that support their learning. An explanatory sequential mixed-methods study was employed. In the initial phase, the 44-item Felderman's Index of Learning Styles (ILS) questionnaire was administered to year one medical students (n=142), recruited by convenience sampling, to describe their preferred learning styles. The qualitative phase utilised three focus group discussions (FGDs) to explore in depth the multimodal learning styles exhibited by the students. The ILS analysis showed that most students preferred a combination of learning styles, namely reflective, sensing, visual and sequential, i.e. the RSVISeq style (24.64%). The frequency of learning preferences from processing to understanding was well balanced: sequential-global domain (66.2%), sensing-intuitive (59.86%), active-reflective (57%), and visual-verbal (51.41%). The qualitative data yielded three major themes, namely Theme 1: CBL mixed modalities navigate learners' learning styles; Theme 2: Multimodal learners' active learning strategies support learning; and Theme 3: CBL modalities facilitate turning theory into clinical knowledge. Both the quantitative and qualitative findings strongly indicate the multimodal learning style of the year one medical students. Medical students utilise multimodal learning styles to attain clinical knowledge when learning with CBL mixed modalities. Educators' awareness of multimodal learning styles is crucial for delivering CBL mixed modalities effectively, with strategic pedagogical support for students to engage with and learn through CBL in bridging theoretical knowledge into clinical practice.
Keywords: case-based learning, learning style, medical students, learning
Procedia PDF Downloads 95
924 Formation of the Investment Portfolio of Intangible Assets with a Wide Pairwise Comparison Matrix Application
Authors: Gulnara Galeeva
Abstract:
The Analytic Hierarchy Process is widely used in economic and financial studies, including the formation of investment portfolios. In this study, a generalized method of obtaining a vector of priorities is examined for the case where separate pairwise comparisons of the expert opinion are presented as a set of several equal evaluations on a ratio scale. The author claims that this method allows solving an important and up-to-date problem in decision-making theory: excluding vagueness and ambiguity from the expert opinion. The study describes the authentic wide pairwise comparison matrix. Its application to the formation of an efficient investment portfolio of intangible assets for a small business enterprise with limited funding is considered. The proposed method has been successfully tested on the practical example of a functioning dental clinic. The result of the study confirms that the wide pairwise comparison matrix can be used as a simple and reliable method for forming an enterprise's investment policy. Moreover, a comparison between the method based on the wide pairwise comparison matrix and the classical Analytic Hierarchy Process was conducted. The results of the comparative analysis confirm the correctness of the method based on the wide matrix. The application of a wide pairwise comparison matrix also allows wide use of statistical methods of experimental data processing for obtaining the vector of priorities. The new method is accessible to non-specialist users, and its application gives about the same accuracy as the classical Analytic Hierarchy Process. Financial directors of small and medium business enterprises get an opportunity to solve the problem of company investments without resorting to the services of analytical agencies specializing in such studies.
Keywords: analytic hierarchy process, decision processes, investment portfolio, intangible assets
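The abstract does not spell out the wide-matrix construction itself, so the sketch below shows only the classical AHP priority step it generalizes, with a hypothetical 3x3 reciprocal comparison matrix for three intangible assets:

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix: A[i, j] expresses how
# strongly asset i is preferred over asset j on Saaty's ratio scale.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# The principal right eigenvector gives the priority (weight) vector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Saaty's consistency index: CI = (lambda_max - n) / (n - 1).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
print("priorities:", np.round(w, 3), "CI:", round(ci, 4))
```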
Procedia PDF Downloads 266
923 Metabolic Profiling of Populus trichocarpa Family 1 UDP-Glycosyltransferases
Authors: Patricia M. B. Saint-Vincent, Anna Furches, Stephanie Galanie, Erica Teixeira Prates, Piet Jones, Nancy Engle, David Kainer, Wellington Muchero, Daniel Jacobson, Timothy J. Tschaplinski
Abstract:
Uridine diphosphate-glycosyltransferases (UGTs) are enzymes that catalyze sugar transfer to a variety of plant metabolites. UGT substrates, which include plant secondary metabolites involved in lignification, demonstrate new activities and incorporation when glycosylated. Knowledge of UGT function, substrate specificity, and enzyme products is important for plant engineering efforts, especially those related to increasing plant biomass through lignification. UGTs in Populus trichocarpa, a biofuel feedstock and model woody plant, were selected from a pool of gene candidates using rapid prioritization strategies. A functional genomics workflow, consisting of a metabolite genome-wide association study (mGWAS), expression of synthetic codon-optimized genes, and high-throughput biochemical assays with mass spectrometry-based analysis, was developed for determining the substrates and products of previously uncharacterized enzymes. A total of 40 UGTs from P. trichocarpa were profiled, and the biochemical assay results were compared to predicted mGWAS connections. Assay results confirmed seven of 11 leaf mGWAS associations and demonstrated varying levels of substrate specificity among candidate UGTs. P. trichocarpa UGT substrate processing confirms the role of these newly characterized enzymes in lignan, flavonoid, and phytohormone metabolism, with potential implications for cell wall biosynthesis, nitrogen uptake, and biotic and abiotic stress responses.
Keywords: Populus, metabolite-gene associations, GWAS, bio feedstocks, glycosyltransferase
Procedia PDF Downloads 114
922 Hydrometallurgical Processing of a Nigerian Chalcopyrite Ore
Authors: Alafara A. Baba, Kuranga I. Ayinla, Folahan A. Adekola, Rafiu B. Bale
Abstract:
Due to the increasing demand for and diverse applications of copper oxide (as a pigment in ceramics, in cuprammonium hydroxide solution for rayon, as a p-type semiconductor, in dry cell battery production, and for the safe disposal of hazardous materials), the hydrometallurgical operations involving leaching, solvent extraction and precipitation for the recovery of copper and the production of high-grade copper oxide from a Nigerian chalcopyrite ore in chloride media have been examined. For a given set of experimental parameters with respect to acid concentration, reaction temperature and particle size, the leaching investigation showed that ore dissolution increases with increasing acid concentration and temperature and with decreasing particle diameter at moderate stirring. The kinetic data were analyzed and found to follow a diffusion-controlled mechanism. At optimal conditions, the extent of ore dissolution reached 94.3%. The recovery of the total copper from the hydrochloric acid-leached chalcopyrite ore was undertaken by solvent extraction and precipitation techniques, prior to the beneficiation of the purified solution as copper oxide. The leach liquor was first purified by precipitation of total iron and manganese using Ca(OH)2, with H2O2 as oxidizer, at pH 3.5 and 4.25, respectively. An extraction efficiency of 97.3% of the total copper was obtained with 0.2 mol/L dithizone in kerosene at 25±2 ºC within 40 minutes, from which ≈98% of the Cu in the loaded organic phase was successfully stripped by 0.1 mol/L HCl solution. The beneficiation of the recovered pure copper solution was carried out by crystallization through alkali addition, followed by calcination at 600 ºC to obtain high-grade copper oxide (tenorite, CuO: 05-0661). Finally, a simple hydrometallurgical scheme for the operational extraction procedure, amenable to industrial utilization and economic sustainability, is provided.
Keywords: chalcopyrite ore, Nigeria, copper, copper oxide, solvent extraction
Procedia PDF Downloads 394
921 Hybrid Energy System for the German Mining Industry: An Optimized Model
Authors: Kateryna Zharan, Jan C. Bongaerts
Abstract:
In recent years, the economic attractiveness of renewable energy (RE) for the mining industry, especially for off-grid mines, together with the negative environmental impact of fossil energy, has stimulated the use of RE for mining needs. Given that remote-area mines have higher energy expenses than mines connected to a grid, the integration of RE may give a mine economic benefits. The literature review reveals a lack of business models for adopting RE at mines. The main aim of this paper is to develop an optimized model of RE integration into the German mining industry (GMI). With around 800 million tonnes of resources extracted annually, Germany is included in the list of the 15 major mining countries in the world. Accordingly, the mining potential of Germany is evaluated in this paper as a prospective market for RE implementation. The GMI has been classified in order to establish the location of resources, the quantity and types of mines, the amount of extracted resources, and the access of the mines to energy resources. Additionally, weather conditions have been analyzed in order to determine where wind and solar generation technologies can be integrated into a mine with the highest efficiency. Despite the fact that the electricity demand of the GMI is almost completely covered by grid connections, a hybrid energy system (HES) based on a mix of RE and fossil energy is developed to demonstrate environmental and economic benefits. The HES for the GMI combines wind turbines, solar PV, batteries and diesel generation. The model has been calculated using the HOMER software. Furthermore, the demonstrated HES contains a forecasting model that predicts solar and wind generation in advance. The main results of the HES, such as CO2 emission reduction, are estimated in order to make mining processing more environmentally friendly.
Keywords: diesel generation, German mining industry, hybrid energy system, hybrid optimization model for electric renewables, optimized model, renewable energy
Procedia PDF Downloads 344
920 Upflow Anaerobic Sludge Blanket Reactor Followed by Dissolved Air Flotation Treating Municipal Sewage
Authors: Priscila Ribeiro dos Santos, Luiz Antonio Daniel
Abstract:
Inadequate access to clean water and sanitation has become one of the most widespread problems affecting people throughout the developing world, leading to an unceasing need for low-cost and sustainable wastewater treatment systems. The UASB technology has been widely employed as a suitable and economical option for the treatment of sewage in developing countries; it involves low initial investment, low energy requirements, low operation and maintenance costs, high loading capacity, short hydraulic retention times, long solids retention times and low sludge production. The dissolved air flotation process, in turn, is a good option for the post-treatment of anaerobic effluents, being capable of producing high-quality effluents in terms of total suspended solids, chemical oxygen demand, phosphorus, and even pathogens. This work presents an evaluation and monitoring, over a period of 6 months, of one compact full-scale system with this configuration, UASB reactors followed by dissolved air flotation units (DAF), operating in Brazil. It proved to be a successful treatment system, and the subject is of relevance since dissolved air flotation treating UASB reactor effluents is not widely covered in the literature. The study covered the removal and behavior of several variables, such as turbidity, total suspended solids (TSS), chemical oxygen demand (COD), Escherichia coli, total coliforms and Clostridium perfringens. The physicochemical variables were analyzed according to the protocols established by the Standard Methods for the Examination of Water and Wastewater. The microbiological variables Escherichia coli and total coliforms were determined by the "pour plate" technique with Chromocult Coliform Agar (Merck Cat. No. 1.10426) serving as the culture medium, while Clostridium perfringens was analyzed by the membrane filtration technique, with m-CP Agar (Oxoid Ltd., England) serving as the culture medium. Approximately 74% of the total COD was removed in the UASB reactor, and the complementary removal achieved during the flotation process resulted in 88% COD removal from the raw sewage; thus, the initial COD concentration of 729 mg.L-1 decreased to 87 mg.L-1. In terms of particulate COD, the overall removal efficiency for the whole system was about 94%, decreasing from 375 mg.L-1 in raw sewage to 29 mg.L-1 in the final effluent. The UASB reactor removed on average 77% of the TSS from raw sewage, while the dissolved air flotation process did not work as expected, removing only 30% of the TSS from the anaerobic effluent. The final effluent presented an average TSS concentration of 38 mg.L-1. The turbidity was significantly reduced, leading to an overall removal efficiency of 80% and a final turbidity of 28 NTU. The treated effluent still presented a high concentration of fecal pollution indicators (E. coli, total coliforms, and Clostridium perfringens), showing that the system did not perform well in removing pathogens. Clostridium perfringens was the organism that underwent the highest removal by the treatment system. The results can be considered satisfactory for the physicochemical variables, taking into account the simplicity of the system; nevertheless, post-treatment is necessary to improve the microbiological quality of the final effluent.
Keywords: dissolved air flotation, municipal sewage, UASB reactor, treatment
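The removal efficiencies quoted above follow from the standard definition; as a quick check against the reported COD figures:

```latex
E \;=\; \frac{C_{\text{in}} - C_{\text{out}}}{C_{\text{in}}} \times 100\%
  \;=\; \frac{729 - 87}{729} \times 100\% \;\approx\; 88\%
```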
Procedia PDF Downloads 331
919 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound
Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo
Abstract:
Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The diagnostic echocardiographic finding of MR is straightforward due to the well-known clinical evidence. In determining MR severity, quantification of sonographic findings is useful for clinical decision-making. Clinically, the vena contracta is a standard for MR evaluation. The vena contracta is the point in a blood stream where the diameter of the stream is least and the velocity is maximal. Quantifying the vena contracta, i.e. the vena contracta width (VCW) at the mitral valve, provides a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough; the result depends strongly on operator experience. Therefore, this study proposed an automatic method to quantify the VCW to evaluate MR severity. In color Doppler ultrasound, the VCW can be observed where blood flows toward the probe, appearing as a red or yellow area whose brightness represents the flow rate. In the experiment, colors were first transformed into HSV (hue, saturation and value) to align closely with the way human vision perceives red and yellow. Using an ellipse to fit the high-flow-rate area in the left atrium, the angle between the mitral valve and the ultrasound probe was calculated to obtain the vertical shortest diameter as the VCW. Taking the manual measurement as the standard, the method achieved differences of only 0.02 cm (0.38 vs. 0.36) to 0.03 cm (0.42 vs. 0.45). The results showed that the proposed automatic VCW extraction can be efficient and accurate for clinical use. The process also has the potential to reduce intra- or inter-observer variability when measuring subtle distances.
Keywords: mitral regurgitation, vena contracta, color doppler, image processing
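A minimal sketch of the described pipeline using OpenCV; the hue thresholds, pixel scale and probe angle below are illustrative assumptions, not the paper's calibrated values, and a synthetic red blob stands in for a Doppler frame:

```python
import cv2
import numpy as np

# Synthetic BGR "Doppler frame" with a red high-flow jet for demonstration.
frame = np.zeros((200, 200, 3), dtype=np.uint8)
cv2.ellipse(frame, (100, 100), (12, 40), 15, 0, 360, (0, 0, 255), -1)

# Transform to HSV and keep red/yellow high-flow pixels (assumed thresholds).
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (0, 80, 80), (35, 255, 255))

# Fit an ellipse to the largest high-flow region.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
jet = max(contours, key=cv2.contourArea)
(cx, cy), axes, angle = cv2.fitEllipse(jet)
d_minor = min(axes)  # shortest ellipse diameter, in pixels

# Project the short axis onto the direction normal to the valve plane; the
# probe angle and pixel-to-cm scale are hypothetical calibration values.
probe_angle_deg, px_to_cm = 20.0, 0.01
vcw_cm = d_minor * np.cos(np.radians(probe_angle_deg)) * px_to_cm
print(f"estimated VCW: {vcw_cm:.2f} cm")
```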
Procedia PDF Downloads 370
918 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features
Authors: Bushra Zafar, Usman Qamar
Abstract:
Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful information from a variety of databases; they provide supervised learning in the form of classification, designing models that describe vital data classes, with the structure of the classifier based on the class attribute. Classification efficiency and accuracy are often influenced to a great extent by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly complicates its quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. Machine learning relies on feature selection as a pre-processing step, which selects a small subset from the full set of features by reducing the search space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of a given data sample by searching for a small set of important features which may yield good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is used, with an external classifier for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. A sample dataset has been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy on different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of prediction of different diseases.
Keywords: data mining, genetic algorithm, KNN algorithms, wrapper-based feature selection
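A compact sketch of such a wrapper: a genetic algorithm searches over binary feature masks while an external classifier (here a stand-in KNN, scored by cross-validation) supplies the fitness. The dataset, population size and mutation rate are assumptions for illustration, not the paper's configuration:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X, y = load_breast_cancer(return_X_y=True)
n_features, pop_size, n_gen = X.shape[1], 20, 15

def fitness(mask):
    """Cross-validated accuracy of the external classifier on the masked features."""
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Random initial population of binary chromosomes (True = feature kept).
pop = rng.integers(0, 2, size=(pop_size, n_features)).astype(bool)
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # Tournament selection, one-point crossover, bit-flip mutation.
    parents = pop[np.array([max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                            for _ in range(pop_size)])]
    cut = rng.integers(1, n_features, size=pop_size // 2)
    children = parents.copy()
    for k, c in enumerate(cut):
        children[2*k, c:], children[2*k+1, c:] = parents[2*k+1, c:], parents[2*k, c:]
    children ^= rng.random(children.shape) < 0.02   # mutation
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best), "accuracy:", fitness(best))
```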
Procedia PDF Downloads 316
917 Water Management Scheme: Panacea to Development Using Nigeria's University of Ibadan Water Supply Scheme as a Case Study
Authors: Sunday Olufemi Adesogan
Abstract:
The supply of potable water is, at the least, a very important index of national development. Water tariffs depend on the treatment cost, which carries the highest percentage of the total operating cost in any water supply scheme. In order to keep water tariffs as low as possible, treatment costs have to be minimized. The University of Ibadan, Nigeria, water supply scheme consists of a treatment plant with three distribution stations (Amina Way, Kurumi and Lander) and two raw water supply sources (Awba dam and Eleyele dam). An operational study of the scheme was carried out to ascertain the efficiency of the supply of potable water on the campus and to justify the need for water supply schemes in tertiary institutions. The study involved regular collection, processing and analysis of periodic operational data. Data collected include supply readings (daily water production) and consumer meter readings for a period of 22 months (October 2013 - July 2015), as well as the operating hours of both plants and personnel. Applying the required mathematical equations, the total loss was determined for the distribution system and translated into monetary terms. The adequacy of the operational functions was also determined. The study revealed that a water supply scheme is justified in tertiary institutions. It was also found that approximately 10.7 million Nigerian naira (
Keywords: development, panacea, supply, water
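The loss computation described amounts to a supply-minus-consumption water balance translated into monetary terms; a minimal sketch with entirely hypothetical figures (the study's actual totals and tariff are not reproduced here):

```python
# Hypothetical figures only: distribution (non-revenue) loss is the gap
# between water produced and water metered at consumers.
produced_m3 = 1_200_000        # total production over the study period (assumed)
metered_m3 = 950_000           # total consumer-metered consumption (assumed)
tariff_ngn_per_m3 = 150.0      # hypothetical tariff in naira per cubic metre

loss_m3 = produced_m3 - metered_m3
loss_pct = 100.0 * loss_m3 / produced_m3
loss_ngn = loss_m3 * tariff_ngn_per_m3
print(f"loss: {loss_m3} m^3 ({loss_pct:.1f}%), value: NGN {loss_ngn:,.0f}")
```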
Procedia PDF Downloads 209
916 Simplified INS\GPS Integration Algorithm in Land Vehicle Navigation
Authors: Othman Maklouf, Abdunnaser Tresh
Abstract:
Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such applications. GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. An inertial navigation system (INS) uses inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost Micro-Electro-Mechanical-System (MEMS) inertial sensors now makes it feasible to develop an INS using an inertial measurement unit (IMU). An INS has unbounded error growth, since the error accumulates at each step. Usually, GPS and INS are integrated in a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demand for lower-cost, smaller, seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially that of MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, i.e. to use a partial IMU (ParIMU) configuration. For land vehicle use, the most important gyroscope is the vertical gyro that senses the heading of the vehicle, together with two horizontal accelerometers for determining its velocity. This paper presents a field experiment with a low-cost strapdown ParIMU\GPS combination, with data post-processing for the determination of the 2-D components of position (trajectory), velocity and heading. In the present approach, we have neglected Earth rotation and gravity variations, because of the poor gyroscope sensitivities of our low-cost IMU and because of the relatively small area of the trajectory.
Keywords: GPS, IMU, Kalman filter, materials engineering
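A minimal sketch of the 2-D dead-reckoning core of such a ParIMU (heading integrated from the vertical gyro, velocity and position from the two horizontal accelerometers), neglecting Earth rotation and gravity variations as the paper does; the signals below are synthetic:

```python
import numpy as np

def dead_reckon(gyro_z, accel_body, dt, heading0=0.0):
    """2-D dead reckoning from one vertical gyro (yaw rate, rad/s) and two
    horizontal accelerometers (body-frame ax, ay in m/s^2)."""
    heading = heading0
    vel = np.zeros(2)          # velocity in the navigation frame
    pos = np.zeros(2)
    track = [pos.copy()]
    for wz, (ax, ay) in zip(gyro_z, accel_body):
        heading += wz * dt                              # yaw rate -> heading
        c, s = np.cos(heading), np.sin(heading)
        acc_nav = np.array([c*ax - s*ay, s*ax + c*ay])  # body -> nav frame
        vel += acc_nav * dt                             # integrate acceleration
        pos = pos + vel * dt                            # integrate velocity
        track.append(pos.copy())
    return np.array(track), heading

# Hypothetical 10 s at 100 Hz: constant forward acceleration, no turning.
n, dt = 1000, 0.01
track, hdg = dead_reckon(np.zeros(n), [(0.5, 0.0)] * n, dt)
print("final position (m):", np.round(track[-1], 2))  # ~ (25, 0): x = a t^2 / 2
```

In the loosely coupled scheme described, GPS position fixes would periodically correct the drift of exactly this integration, typically through a Kalman filter.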
Procedia PDF Downloads 422
915 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications
Authors: Arijit Saha, Hassan Kassem, Leo Hoening
Abstract:
Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context, since it can be applied almost everywhere in the world. To reduce the cost of wind turbines and to make them more competitive, simulations are very important, since experiments are often too costly, if possible at all. A wind turbine in a vast open area experiences turbulence generated by the atmosphere, so it was of utmost interest for this research to generate that turbulence in the computational simulation domain through various inlet turbulence generation methods, such as the precursor cyclic method and Kaimal Spectrum Exponential Coherence (KSEC). To be able to validate computational fluid dynamics simulations of wind turbines against experimental data, it is crucial to set up the conditions in the simulation as close to reality as possible. This present work therefore aims at investigating the turbulent inflow strategy and boundary conditions of KSEC and providing a comparative analysis alongside the precursor cyclic method for Large Eddy Simulation within the context of wind energy applications. For the generation of the turbulent box through the KSEC method, the constrained data were first collected from an auxiliary channel flow, and further processing was performed with the open-source tool PyconTurb, whereas for the precursor cyclic method, the data from the auxiliary channel alone were sufficient. The functionality of these methods was studied through various statistical properties, such as variance and turbulence intensity, with respect to different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, it was found necessary to verify the obtained data against a DNS case setup to confirm its applicability to real-field CFD simulations.
Keywords: inlet turbulence generation, CFD, precursor cyclic, KSEC, large eddy simulation, PyconTurb
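For reference, the Kaimal spectrum underlying the KSEC method is commonly written in its IEC 61400-1 form (an assumption here, since the exact parametrisation used in the study is not spelled out), for velocity component k with standard deviation sigma_k, integral length scale L_k and hub-height wind speed V_hub:

```latex
\frac{f\,S_k(f)}{\sigma_k^{2}} \;=\; \frac{4\,f L_k / V_{\mathrm{hub}}}{\left(1 + 6\,f L_k / V_{\mathrm{hub}}\right)^{5/3}}
```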
Procedia PDF Downloads 96
914 Structural Protein-Protein Interactions Network of Breast Cancer Lung and Brain Metastasis Corroborates Conformational Changes of Proteins Lead to Different Signaling
Authors: Farideh Halakou, Emel Sen, Attila Gursoy, Ozlem Keskin
Abstract:
Protein–protein interactions (PPIs) mediate major biological processes in living cells. Studying PPIs as networks and analyzing the network properties contribute to the identification of genes and proteins associated with diseases. In this study, we have created the sub-networks of brain and lung metastasis from the primary tumor in breast cancer. To do so, we used seed genes known to cause metastasis and produced their interactions through a network-topology-based prioritization method named GUILDify. In order to have experimental support for the sub-networks, we further curated them using the STRING database. We proceeded by modeling structures for the interactions lacking complex forms in the Protein Data Bank (PDB). The functional enrichment analysis shows that KEGG pathways associated with the immune system and infectious diseases, particularly the chemokine signaling pathway, are important for lung metastasis. On the other hand, pathways related to genetic information processing are more involved in brain metastasis. The structural analyses of the sub-networks vividly demonstrated their difference in terms of the specific interfaces used in lung and brain metastasis. Furthermore, the topological analysis identified genes such as RPL5, MMP2, CCR5 and DPP4, which are already known to be associated with lung or brain metastasis. Additionally, we found 6 and 9 putative genes that are specific to lung and brain metastasis, respectively. Our analysis suggests that variations in the genes and pathways contributing to these different breast metastasis types may arise due to changes in the tissue microenvironment. To show the benefits of using structural PPI networks instead of the traditional node-and-edge presentation, we inspect two case studies showing the mutual exclusiveness of interactions and the effects of mutations on protein conformation, which lead to different signaling.
Keywords: breast cancer, metastasis, PPI networks, protein conformational changes
Procedia PDF Downloads 244
913 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing
Authors: Jonathan Martino, Kristof Harri
Abstract:
In some cases, the increase in computing resources makes simulation methods more affordable. The increase in processing speed also allows real-time analysis or even faster test analysis, offering a real tool for test prediction and design process optimization. Vibration tests are no exception to this trend. So-called 'virtual vibration testing' offers solutions, among others, to study the influence of specific loads, to better anticipate the boundary conditions between the exciter and the structure under test, and to study the influence of small changes in the structure under test. This article will first present virtual vibration test modeling, with a main focus on the shaker model, and will afterwards present the experimental parameter determination. The classical way of modeling a shaker is to consider it as a simple mechanical structure augmented by an electrical circuit that makes the shaker move. The shaker is modeled as a two- or three-degrees-of-freedom lumped-parameter model, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities are reduced to global parameters which are estimated through experiments. Different experiments are carried out in order to design an easy and practical method for the identification of the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis is also carried out to extract the modal parameters of the shaker and to combine them with the electrical measurements. Finally, this article concludes with an experimental validation of the model.
Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration
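A minimal single-DOF version of such an electromechanical lumped-parameter model in state-space form (all parameter values below are illustrative assumptions, not identified shaker data):

```python
import numpy as np
from scipy import signal

# Single-DOF electromechanical shaker sketch:
#   m x'' + c x' + k x = kf * i          (mechanics, coil force kf*i)
#   L i'  + R i + ke x' = V              (coil circuit with back-EMF ke*x')
m, c, k = 0.5, 20.0, 2.0e4        # moving mass [kg], damping, suspension stiffness
L, R = 1.0e-3, 2.0                # coil inductance [H] and resistance [ohm]
kf = 15.0                         # force constant [N/A]
ke = 15.0                         # back-EMF constant [V*s/m] (= kf in SI units)

# States: [x, v, i]; input: amplifier voltage; output: armature displacement.
A = np.array([[0.0,    1.0,   0.0],
              [-k/m,  -c/m,  kf/m],
              [0.0,  -ke/L,  -R/L]])
B = np.array([[0.0], [0.0], [1.0/L]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)
t, x = signal.step(sys)           # displacement response to a 1 V voltage step
print("steady-state displacement [m]:", x[-1])
```

The identification task described in the article then amounts to estimating the global parameters m, c, k, L, R, kf and ke from measured responses.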
Procedia PDF Downloads 270
912 A Sustainable Approach for Waste Management: Automotive Waste Transformation into High Value Titanium Nitride Ceramic
Authors: Mohannad Mayyas, Farshid Pahlevani, Veena Sahajwalla
Abstract:
Automotive shredder residue (ASR) is an industrial waste generated during the recycling process of end-of-life vehicles. The large and increasing production volumes of ASR and its hazardous content have raised concerns worldwide, leading some countries to impose more restrictions on ASR disposal and encouraging researchers to find efficient solutions for ASR processing. Although a great deal of research work has been carried out, all proposed solutions, to our knowledge, remain commercially and technically unproven. While the volume of waste materials continues to increase, the production of materials from new sustainable sources has become of great importance. Advanced ceramic materials such as nitrides, carbides and borides are widely used in a variety of applications. Among these ceramics, a great deal of attention has recently been paid to titanium nitride (TiN) owing to its unique characteristics. In our study, we propose a new sustainable approach to ASR management in which TiN nanoparticles, with an ideal particle size ranging from 200 to 315 nm, can be synthesized as a by-product. In this approach, TiN is thermally synthesized by nitriding a pressed mixture of automotive shredder residue (ASR) incorporated with titanium oxide (TiO2). Results indicated that TiO2 influences and catalyses the degradation reactions of ASR and helps to achieve fast and full decomposition. In addition, the process resulted in titanium nitride (TiN) ceramic with several unique structures (porous nanostructured, polycrystalline, micro-spherical and nano-sized structures) that were simply obtained by tuning the ratio of TiO2 to ASR, and a product with an appreciable TiN content of around 85% was achieved after only one hour of nitridation at 1550 °C.
Keywords: automotive shredder residue, nano-ceramics, waste treatment, titanium nitride, thermal conversion
Procedia PDF Downloads 295
911 The Istrian Istrovenetian-Croatian Bilingual Corpus
Authors: Nada Poropat Jeletic, Gordana Hrzica
Abstract:
Bilingual conversational corpora represent a meaningful and the most comprehensive data source for investigating genuine contact phenomena in non-monitored bilingual speech production. They can be particularly useful for bilingual research, since some features of bilingual interaction can hardly be accessed with more traditional methodologies (e.g., elicitation tasks). The method of language sampling provides the resources for describing language interaction in a bilingual community and/or in bilingual situations (e.g. code-switching, the amount of each language used, the number of languages used, etc.). To capture these phenomena in genuine communication situations, such sampling should be as close as possible to spontaneous communication. Bilingual spoken corpus design is methodologically demanding. Therefore, this paper aims at describing the methodological challenges that apply to the design of the Istrian Istrovenetian-Croatian Bilingual Corpus, a conversational corpus. Croatian is the first official language of the officially bilingual Croatian-Italian Istria County, while Istrovenetian is a diatopic subvariety of Venetian, a long-lasting lingua franca in the Istrian peninsula, the mother tongue of the members of the Italian National Community in Istria, and the primary code of informal everyday communication among the Istrian Italophone population. Within the CLARIN infrastructure, TalkBank is being used, as it provides relevant procedures for designing and analyzing bilingual corpora. Furthermore, it offers public availability, which allows for easy replication of studies and cumulative progress as a research community builds up around the corpus, while the tools developed within the field of corpus linguistics enable easy retrieval and analysis of information. The method of language sampling employed is kept at the level of spontaneous communication in order to maximise the naturalness of the collected conversational data. All speakers have provided written informed consent in which they agree to be recorded at a random point within the period of one month after signing the consent. Participants are administered a background questionnaire providing information about their socioeconomic status and the exposure to and usage of each language in the participants' social networks. Recorded data are being transcribed, phonologically adapted to a standardized orthographic form, coded and segmented (speech streams are segmented into communication units based on syntactic criteria), and marked following the CHAT transcription system and its associated CLAN suite of programmes within the TalkBank toolkit. The corpus currently consists of transcribed sound recordings of 36 bilingual speakers, while the target is to publish the whole corpus by the end of 2020 by sampling spontaneous conversations among approximately 100 speakers from all the bilingual areas of Istria to ensure representativeness (the participants are being recruited across three generations of native bilingual speakers in all the bilingual areas of the peninsula). Conversational corpora are still rare in TalkBank, so the Corpus will contribute to BilingBank as a highly relevant and scientifically reliable resource for an internationally established and active research community. The research on communities with societal bilingualism will contribute to the growing body of research on bilingualism and multilingualism, especially regarding topics of language dominance, language attrition and loss, interference, code-switching, etc.
Keywords: conversational corpora, bilingual corpora, code-switching, language sampling, corpus design methodology
Procedia PDF Downloads 145
910 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection
Authors: Yulan Wu
Abstract:
With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, whose effectiveness diminishes when applied to identifying fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news items to their specific domain in advance, domain-specific classifiers may judge fake news more accurately. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings is used to extract the comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests are conducted on existing widely used datasets, and the experimental results demonstrate that this method is able to improve detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, reducing the time consumed by tagging without sacrificing detection accuracy.
Keywords: fake news, deep learning, natural language processing, multiple domains
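The framework's exact architecture is not given here, but the general idea of pairing an unsupervised, soft domain embedding with a downstream classifier can be sketched as follows (a toy corpus, with NMF topic weights standing in for the discovered domain embeddings):

```python
import numpy as np
import scipy.sparse as sp
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy corpus with labels (1 = fake); real work would use benchmark datasets.
texts = ["miracle cure for diabetes found in garlic",
         "central bank raises interest rates by 25 basis points",
         "aliens secretly control the stock market",
         "new vaccine trial reports 90 percent efficacy"]
labels = np.array([1, 0, 1, 0])

# Unsupervised "domain" embedding: topic weights from NMF over TF-IDF.
# The soft assignment lets one article belong to several domains at once.
tfidf = TfidfVectorizer().fit_transform(texts)
domain_emb = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(tfidf)

# Concatenate the domain embedding with the text features and classify.
features = sp.hstack([tfidf, sp.csr_matrix(domain_emb)])
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.predict(features))
```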
Procedia PDF Downloads 97
909 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength
Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos
Abstract:
Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance. In this way, statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skew data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skew) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained from consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb failure criterion. Therefore, based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables
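A sketch of the statistical workflow, with synthetic friction-angle data standing in for the Brasilia measurements and a deliberately simplified infinite-slope safety factor FS = tan(phi)/tan(beta); note that scipy's burr is the Burr Type III family, which is equivalent to the Dagum distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical friction-angle measurements [degrees], mildly skewed on purpose.
phi_deg = rng.normal(29.0, 3.0, 200) * rng.uniform(0.9, 1.15, 200)

# Fit a normal and a Burr III (Dagum) distribution to the same sample.
norm_params = stats.norm.fit(phi_deg)
burr_params = stats.burr.fit(phi_deg)

# Infinite, dry, cohesionless slope of inclination beta: FS = tan(phi)/tan(beta).
beta = np.radians(25.0)

def p_failure(dist, params, n=200_000):
    """Monte Carlo estimate of P(FS < 1) under the fitted distribution."""
    phi = np.radians(dist.rvs(*params, size=n, random_state=rng))
    fs = np.tan(phi) / np.tan(beta)
    return np.mean(fs < 1.0)

print("P(failure), normal fit:", p_failure(stats.norm, norm_params))
print("P(failure), Dagum/Burr III fit:", p_failure(stats.burr, burr_params))
```

The gap between the two estimates illustrates the paper's point: the choice of distribution for skew data materially changes the computed failure probability.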
Procedia PDF Downloads 338
908 Chronology and Developments in Inventory Control Best Practices for FMCG Sector
Authors: Roopa Singh, Anurag Singh, Ajay
Abstract:
Agriculture contributes a major share to the national economy of India. A major portion of the Indian economy (about 70%) depends upon agriculture, as it forms the main source of income. About 43% of India's geographical area is used for agricultural activity, which involves 65-75% of the total population of India. The given work deals with fast-moving consumer goods (FMCG) industries and their inventories, which use agricultural produce as the raw material or input for their final products. Since the beginning of inventory practices, many developments have taken place, which can be categorised into three phases based on a review of various works. The first phase is related to the development and utilization of the Economic Order Quantity (EOQ) model and methods for optimizing costs and profits. The second phase deals with inventory optimization methods, with the purpose of balancing capital investment constraints and service level goals. The third and most recent phase has merged inventory control with electrical control theory. Maintaining inventory is considered negative, as a large amount of capital is blocked, especially in mechanical and electrical industries. But the case is different in food processing and agro-based industries and their inventories, due to the cyclic variation in the cost of raw materials of such industries, which is the reason for the selection of these industries in the present work. The application of electrical control theory to inventory control makes decision-making highly instantaneous for FMCG industries without loss of their projected profits, a loss which occurred earlier during the first and second phases, mainly due to late implementation of decisions. The work also replaces various inventory and work-in-progress (WIP) related errors with their monetary values, so that the decision-making is fully target-oriented.
Keywords: control theory, inventory control, manufacturing sector, EOQ, feedback, FMCG sector
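The EOQ model mentioned under the first phase reduces to the classical Wilson formula; a worked instance with assumed figures (annual demand D, ordering cost per order S, holding cost per unit per year H):

```latex
Q^{*} = \sqrt{\frac{2DS}{H}},
\qquad\text{e.g. } D = 12000,\; S = 50,\; H = 2.4
\;\Rightarrow\; Q^{*} = \sqrt{\frac{2 \times 12000 \times 50}{2.4}} = \sqrt{500000} \approx 707 \text{ units.}
```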
Procedia PDF Downloads 353
907 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), examined across types of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. Using various loss functions (Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss) and various optimizers (Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam), we obtained significant performance changes. Choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power; the same subset is used across all training and testing for each model, so that performance on unseen data can be compared fairly across models. The best CNN (AlexNet), with the appropriate loss function and optimizer, achieves more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, the paper also reports parameter counts and mean average error rates, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet is less complex than the other CNN models, it proves to be very efficient. A practical anti-spoofing system should use a small amount of memory and run very fast with high anti-spoofing performance; for our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to the final model.Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
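A minimal sketch of this kind of loss/optimizer sweep, using built-in Keras identifiers on a toy two-class (live vs. spoof) network; the study's actual architectures (AlexNet, VGGNet, ResNet), data, and its Center Loss implementation are not reproduced here, and Center Loss is omitted because Keras has no built-in for it.

```python
import tensorflow as tf

def build_model(input_shape=(64, 64, 1)):
    # Toy stand-in for the study's CNNs; two output classes: live vs. spoof.
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])

# Built-in Keras loss/optimizer names corresponding to those in the abstract.
losses = ["categorical_crossentropy", "hinge", "cosine_similarity"]
optimizers = ["adam", "sgd", "rmsprop", "adadelta", "adagrad", "nadam"]

for loss in losses:
    for opt in optimizers:
        model = build_model()
        model.compile(optimizer=opt, loss=loss, metrics=["accuracy"])
        # model.fit(x_train, y_train, validation_data=(x_val, y_val), ...)
```

Rebuilding the model inside the loop ensures each loss/optimizer pair trains from a fresh initialization, which is what makes the comparison meaningful.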
Procedia PDF Downloads 136906 Nutritional Advantages of Millet (Panicum miliaceum L.) and Opportunities for Its Processing as Value-Added Foods
Authors: Fatima Majeed Almonajim
Abstract:
Panicum miliaceum L. is a plant of the grass family (Gramineae). Millets are regarded worldwide as a significant grain, yet they remain little exploited. Millet grain is abundant in nutrients and health-beneficial phenolic compounds, making it suitable as food and feed. The plant has received considerable attention for its high content of phenolic compounds, low glycemic index, presence of unsaturated fats, and lack of gluten, properties that are beneficial to human health and make it effective in treating celiac disease and diabetes, lowering blood lipids (cholesterol), and preventing tumors. Moreover, the plant requires little water to grow, a property worth considering. This study provides an overview of the nutritional and health benefits of millet types grown in two regions, Iraq and Iran, aiming to compare the effect of climate on the components of millet. Millet samples collected from both the Babylon (Iraqi) and Isfahan (Iranian) types were extracted, and after HPTLC, the resulting patterns of the two samples were compared. The Iranian millet showed more terpenoid compounds than the Iraqi millet and therefore takes priority over the Iraqi millet for increasing the human body's immunity. On the other hand, in terms of essential amino acids, the Iraqi millet has more nutritional value than the Iranian millet. Given the higher amount of histidine in the Iranian millet, combined with the lack of gluten reported in previous studies, we conclude that adding millet to the diet of children, especially those with irritable bowel syndrome, can be considered beneficial. Millet can therefore be used as a component of children's foods, such as dried dairy products.Keywords: HPTLC, phytochemicals, specialty foods, Panicum miliaceum L., nutrition
Procedia PDF Downloads 95905 Combined Analysis of Land Use Change and Natural Flow Path in Flood Analysis
Authors: Nowbuth Manta Devi, Rasmally Mohammed Hussein
Abstract:
Flooding is one of the most devastating climate impacts that many countries face. Many causes have been associated with the intensity of floods recorded over time: unplanned development, the low carrying capacity of drains, clogged drains, construction on flood plains, and the increasing intensity of rainfall events. While a combination of these causes can certainly aggravate flood conditions, in many cases increasing drainage capacity has not reduced flood risk to the expected level. The present study analyzed the extent to which land use contributes to aggravating the impacts of flooding in a city. Satellite images were analyzed over a period of 20 years at five-year intervals. Both unsupervised and supervised classification methods were used with the image processing module of ArcGIS. The unsupervised classification was first compared to the basemap available in ArcGIS to get an initial overview of the results; these results also guided on-site data collection for the supervised classification. The island of Mauritius is small, and there are large variations in land use over small areas, both within built-up areas and in agricultural zones involving food crops. Larger plots of agricultural land under sugar cane plantation are relatively easy to identify; however, the growth stage and health of the plants vary, and this had to be verified during ground truthing. The results show that although land use changed as expected over the 20-year span, the change was not significant enough to cause a major increase in flood risk levels. A digital elevation model was analyzed for further understanding. It could be noted that, over time, development tampered with natural flow paths in addition to increasing the impermeable areas. This situation results in backwater flows, hence increasing flood risks.Keywords: climate change, flood, natural flow paths, small islands
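For readers unfamiliar with unsupervised classification, the sketch below shows the core idea (clustering pixels by their band values) in open-source form with placeholder data; the study itself used the image processing module of ArcGIS, and the cluster count and array shapes here are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder for a real multispectral scene: (rows, cols, bands).
# In practice this would be read from satellite imagery, e.g. with rasterio.
bands = np.random.rand(200, 200, 4)

pixels = bands.reshape(-1, bands.shape[-1])  # one feature row per pixel
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)

# Per-pixel class map; the clusters must still be labelled (water, built-up,
# sugar cane, food crops, ...) by inspection or ground truthing, which is
# exactly the step the on-site data collection supports.
class_map = kmeans.labels_.reshape(bands.shape[:2])
```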
Procedia PDF Downloads 8904 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria
Authors: Isaac Kayode Ogunlade
Abstract:
Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ is designed around a PIC18F4550 microcontroller communicating with a Personal Computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device, using an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) to evaluate its performance. Both devices (standard and designed) were operated for 180 days under the same atmospheric conditions to collect data (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), the coefficient of determination (R²), and Mean Percentage Error (MPE) were used as standard metrics to evaluate the models' precipitation predictions. The results show that the developed device has an efficiency of 96% and is compatible with personal computers and laptops. The simulation results for the acquired data show that ANN precipitation (rainfall) predictions for two months (May and June 2017) had a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.Keywords: data acquisition system, design device, weather development, predict precipitation and (FUTA) standard device
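The four evaluation metrics named above are straightforward to compute; a self-contained sketch follows, with invented rainfall values purely for demonstration.

```python
import numpy as np

def evaluation_metrics(observed, predicted):
    """RMSE, MAE, R^2, and MPE for a set of predictions."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = observed - predicted
    rmse = np.sqrt(np.mean(err ** 2))
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((observed - observed.mean()) ** 2)
    mpe = 100.0 * np.mean(err / observed)  # signed, so systematic bias shows
    return {"RMSE": rmse, "MAE": mae, "R2": r2, "MPE": mpe}

# Illustrative (invented) daily rainfall values in mm:
print(evaluation_metrics([12.0, 0.5, 3.2, 8.1], [10.5, 0.7, 2.9, 9.0]))
```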
Procedia PDF Downloads 92903 Removal of Pb²⁺ from Waste Water Using Nano Silica Spheres Synthesized on CaCO₃ as a Template: Equilibrium and Thermodynamic Studies
Authors: Milton Manyangadze, Joseph Govha, T. Bala Narsaiah, Ch. Shilpa Chakra
Abstract:
The availability of and access to fresh water is today a serious global challenge. This is a direct result of factors such as rapid industrialization and industrial growth, persistent droughts in some parts of the world, especially sub-Saharan Africa, and population growth. Growth of the chemical processing industry has also increased the levels of pollutants, including heavy metals, in our water bodies. Heavy metals are dangerous to both human and aquatic life and have been linked to several diseases, mainly because they are highly toxic, bioaccumulative, and non-biodegradable. Lead, for example, has been linked to a number of health problems, including damage to vital body systems such as the nervous and reproductive systems, as well as the kidneys. Against this background, the removal of the toxic heavy metal Pb²⁺ from waste water was investigated using nano silica hollow spheres (NSHS) as the adsorbent. NSHS were synthesized in a three-stage process: CaCO₃ nanoparticles were first prepared as a template; the formed oxide particles were then treated with Na₂SiO₃ to give a nanocomposite; finally, the template was destroyed with 2.0 M HCl to give NSHS. The nanoparticles were characterized using analytical techniques such as XRD, SEM, and TGA. For the adsorption process, both thermodynamic and equilibrium studies were carried out. The Gibbs free energy, enthalpy, and entropy of the adsorption process were determined, revealing that the adsorption process is both endothermic and spontaneous. Equilibrium studies tested the Langmuir and Freundlich isotherms, and the results showed that the Langmuir model best describes the adsorption equilibrium.Keywords: characterization, endothermic, equilibrium studies, Freundlich, Langmuir, nanoparticles, thermodynamic studies
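As a hedged illustration of the equilibrium analysis, the sketch below fits both isotherms to invented data by nonlinear least squares; the concentrations and uptakes are not the paper's, and the spontaneity conclusion in the paper would additionally rest on ΔG = -RT ln K from the thermodynamic study.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed equilibrium data: Ce (mg/L) and uptake qe (mg/g); illustrative only.
Ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([8.0, 13.0, 18.0, 22.0, 24.0])

def langmuir(Ce, qmax, KL):
    # Monolayer adsorption: qe = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    # Empirical multilayer model: qe = KF * Ce^(1/n)
    return KF * Ce ** (1.0 / n)

lang_p, _ = curve_fit(langmuir, Ce, qe, p0=[25.0, 0.1])
freu_p, _ = curve_fit(freundlich, Ce, qe, p0=[5.0, 2.0])
print(f"Langmuir:   qmax={lang_p[0]:.2f} mg/g, KL={lang_p[1]:.3f} L/mg")
print(f"Freundlich: KF={freu_p[0]:.2f}, n={freu_p[1]:.2f}")
```

Comparing the residuals (or R² values) of the two fits is the usual basis for a conclusion like the one reported, that the Langmuir model describes the equilibrium best.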
Procedia PDF Downloads 215