Search results for: Processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3550

880 Hybrid Energy System for the German Mining Industry: An Optimized Model

Authors: Kateryna Zharan, Jan C. Bongaerts

Abstract:

In recent years, the economic attractiveness of renewable energy (RE) for the mining industry, especially for off-grid mines, together with the negative environmental impact of fossil energy, has stimulated the use of RE for mining needs. Since remote-area mines have higher energy expenses than mines connected to a grid, integrating RE may bring a mine economic benefits. The literature review reveals a lack of business models for adopting RE at mines. The main aim of this paper is to develop an optimized model of RE integration into the German mining industry (GMI). With around 800 million tonnes of resources extracted annually, Germany ranks among the 15 major mining countries in the world. Accordingly, the mining potential of Germany is evaluated in this paper as a prospective market for RE implementation. The GMI has been classified in order to determine the location of resources, the quantity and types of mines, the amount of extracted resources, and the access of mines to energy resources. Additionally, weather conditions have been analyzed in order to identify where wind and solar generation technologies can be integrated into a mine with the highest efficiency. Although the electricity demand of the GMI is almost completely covered by grid connections, a hybrid energy system (HES) based on a mix of RE and fossil energy is developed in order to demonstrate environmental and economic benefits. The HES for the GMI combines wind turbines, solar PV, batteries and diesel generation. The model has been calculated using the HOMER software. Furthermore, the demonstrated HES contains a forecasting model that predicts solar and wind generation in advance. The main result of the HES, the reduction in CO2 emissions, is estimated in order to make mining processing more environmentally friendly.
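
HOMER itself is proprietary, but the dispatch logic such a hybrid model rests on can be sketched in a few lines. The toy hourly dispatch below for a PV/wind/battery/diesel hybrid is a hypothetical illustration only: the battery capacity, load profile and the 0.7 kg CO2/kWh diesel emission factor are invented placeholders, not figures from the paper.

```python
# Toy hourly dispatch for a PV/wind/battery/diesel hybrid (all figures hypothetical).
def dispatch(load, pv, wind, cap=50.0, soc=25.0, ef=0.7):
    """Energy in kWh; ef = assumed kg CO2 per kWh of diesel generation."""
    diesel = 0.0
    for l, p, w in zip(load, pv, wind):
        net = l - p - w                 # residual demand after renewables
        if net <= 0:
            soc = min(cap, soc - net)   # charge battery with the surplus
        elif soc >= net:
            soc -= net                  # battery covers the deficit
        else:
            diesel += net - soc         # diesel covers the remainder
            soc = 0.0
    return diesel, diesel * ef          # diesel energy used and CO2 emitted

# One illustrative day: constant 10 kWh load, 4 kWh PV and 3 kWh wind per hour.
d, co2 = dispatch([10.0] * 24, [4.0] * 24, [3.0] * 24)
```

Swapping renewable profiles in and out of such a loop is, in miniature, how the CO2 reduction of the hybrid versus a diesel-only baseline is estimated.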

Keywords: diesel generation, German mining industry, hybrid energy system, hybrid optimization model for electric renewables, optimized model, renewable energy

Procedia PDF Downloads 312
879 A Framework to Analyze Project Management Cognitive Process Using MNE-Python

Authors: Tulio Sulbaran, Krishna Kisi

Abstract:

Significant research has been done in project management aiming to understand and improve the processes, methodologies, and outcomes of managing projects across various industries and domains. However, project management research into the cognitive processes underlying decision-making, problem-solving, and information processing is limited. This scarcity may be due to several factors, such as the interdisciplinary nature of the topic, practical constraints, lack of awareness of the opportunities, complexity, and the absence of an analysis framework that project management researchers can use. Therefore, the objective of this paper is to present a comprehensive yet simple framework utilizing MNE-Python to investigate the cognitive processes involved in project management tasks. MNE-Python was selected because it is a powerful Python library for analyzing brain activity data from magnetoencephalography (MEG) and electroencephalography (EEG) experiments. The methodology used in this research was the qualitative method for building conceptual frameworks for phenomena linked to multidisciplinary bodies of knowledge. The resulting framework is organized into several key stages: importing raw fNIRS data, preprocessing and visualization, analysis and statistical testing, and interpretation of findings. The intellectual merit of this work lies in bridging the gap between neuroscience and project management research by providing a framework for studying the cognitive processes underlying project management tasks. The proposed framework holds promise for advancing our understanding of how project managers navigate complex environments, make strategic decisions, and optimize project outcomes. The broad impact of this work is that the insights gained from this research can inform the development of cognitive interventions and training programs to enhance project management performance and decision-making efficacy, with the potential to improve project success rates and optimize resource allocation.
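
As a rough sketch of the framework's stages, the NumPy stand-in below mimics them on synthetic data; in the actual framework, MNE-Python data structures and functions would replace each step. The array shapes, the 0.5 activation shift, and the t-critical threshold are invented for the demo.

```python
import numpy as np

# Stand-in for the framework's stages: 1) import raw fNIRS data,
# 2) preprocess, 3) analyze and statistically test, 4) interpret.
rng = np.random.default_rng(0)
raw = rng.normal(0.0, 1.0, (20, 100)) + 0.5     # 20 "epochs" x 100 samples

baseline = raw[:, :10].mean(axis=1, keepdims=True)
corrected = raw - baseline                       # step 2: baseline correction
evoked = corrected.mean(axis=0)                  # step 3a: averaged response

means = raw.mean(axis=1)                         # step 3b: one-sample t-test
t_stat = means.mean() / (means.std(ddof=1) / np.sqrt(len(means)))
significant = bool(abs(t_stat) > 2.09)           # ~t-critical, df=19, alpha=0.05
```

In MNE-Python the same flow would use `Raw` objects, filtering methods and `Epochs` averaging in place of these array operations.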

Keywords: analysis, framework, MNE-Python, project management

Procedia PDF Downloads 2
878 Quantitative Evaluation of Mitral Regurgitation by Using Color Doppler Ultrasound

Authors: Shang-Yu Chiang, Yu-Shan Tsai, Shih-Hsien Sung, Chung-Ming Lo

Abstract:

Mitral regurgitation (MR) is a heart disorder in which the mitral valve does not close properly when the heart pumps out blood. MR is the most common form of valvular heart disease in the adult population. The diagnostic echocardiographic finding of MR is straightforward due to well-known clinical evidence. In determining MR severity, quantification of sonographic findings is useful for clinical decision making. Clinically, the vena contracta is a standard for MR evaluation. The vena contracta is the point in a blood stream where the diameter of the stream is smallest and the velocity is at its maximum. Its quantification, i.e. the vena contracta width (VCW) at the mitral valve, provides a numeric measurement for severity assessment. However, manually delineating the VCW may not be accurate enough; the result depends heavily on operator experience. Therefore, this study proposed an automatic method to quantify the VCW to evaluate MR severity. In color Doppler ultrasound, the VCW can be observed where blood flows toward the probe, appearing as a red or yellow area whose brightness represents the flow rate. In the experiment, colors were first transformed into HSV (hue, saturation and value) to align closely with the way human vision perceives red and yellow. By fitting an ellipse to the high-flow-rate area in the left atrium, the angle between the mitral valve and the ultrasound probe was calculated to obtain the vertical shortest diameter as the VCW. Taking the manual measurement as the standard, the method achieved differences of only 0.02 (0.38 vs. 0.36) to 0.03 (0.42 vs. 0.45) cm. The results showed that the proposed automatic VCW extraction can be efficient and accurate for clinical use. The process also has the potential to reduce intra- or inter-observer variability in measuring subtle distances.
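
The two image-processing steps described, HSV-based detection of the high-flow jet and reading the shortest vertical diameter off the fitted ellipse, can be sketched as follows. The color thresholds and the closed-form ellipse extent are hypothetical simplifications of the authors' method, not their implementation.

```python
import colorsys, math

# Hypothetical HSV threshold for "red/yellow" high-flow pixels in the Doppler map.
def is_high_flow(r, g, b, hue_max=0.17):
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return (h <= hue_max or h >= 0.95) and s > 0.4 and v > 0.4

# Vertical extent (diameter) of an ellipse with semi-axes a >= b rotated by
# angle_rad; reading this along the probe axis stands in for the VCW measurement.
def vcw_cm(semi_major, semi_minor, angle_rad):
    return 2 * math.sqrt((semi_major * math.sin(angle_rad)) ** 2 +
                         (semi_minor * math.cos(angle_rad)) ** 2)
```

For an unrotated jet the vertical extent reduces to the minor diameter, e.g. `vcw_cm(1.0, 0.2, 0.0)` gives 0.4 cm.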

Keywords: mitral regurgitation, vena contracta, color Doppler, image processing

Procedia PDF Downloads 344
877 A Novel Heuristic for Analysis of Large Datasets by Selecting Wrapper-Based Features

Authors: Bushra Zafar, Usman Qamar

Abstract:

Large sample sizes and high dimensionality undermine the effectiveness of conventional data mining methodologies. Data mining techniques are important tools for extracting useful knowledge from a variety of databases; they provide supervised learning in the form of classification to design models that describe vital data classes, with the structure of the classifier based on the class attribute. Classification efficiency and accuracy are often greatly influenced by noisy and undesirable features in real application data sets. The inherent nature of a data set greatly complicates its quality analysis and leaves quite few practical approaches to use. To our knowledge, we present for the first time an approach for investigating the structure and quality of datasets by providing a targeted analysis of the localization of noisy and irrelevant features. In machine learning, feature selection is a key pre-processing step that selects a subset of features, reducing the search space according to a certain evaluation criterion. The primary objective of this study is to trim down the scope of the given data sample by searching for a small set of important features which may result in good classification performance. For this purpose, a heuristic for wrapper-based feature selection using a genetic algorithm is proposed, with an external classifier used for discriminative feature selection. Features are selected based on their number of occurrences in the chosen chromosomes. Sample datasets have been used to demonstrate the proposed idea effectively. The proposed method improved the average accuracy across different datasets to about 95%. Experimental results illustrate that the proposed algorithm increases the accuracy of predicting different diseases.
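
A minimal sketch of the wrapper idea, with feature masks playing the role of chromosomes and features kept by their number of occurrences in the best-scoring masks. For brevity it scores a random population with a leave-one-out 1-NN classifier instead of running a full genetic algorithm (no crossover or mutation), and the dataset is synthetic.

```python
import random

# Fitness of a feature mask: leave-one-out accuracy of a 1-NN classifier.
def one_nn_accuracy(X, y, mask):
    feats = [j for j, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    correct = 0
    for i in range(len(X)):
        dists = [(sum((X[i][j] - X[k][j]) ** 2 for j in feats), k)
                 for k in range(len(X)) if k != i]
        correct += y[min(dists)[1]] == y[i]
    return correct / len(X)

# Keep features that occur in a majority of the top-scoring "chromosomes".
def select_features(X, y, n_pop=30, top=10, seed=1):
    rng = random.Random(seed)
    d = len(X[0])
    pop = [[rng.randint(0, 1) for _ in range(d)] for _ in range(n_pop)]
    best = sorted(pop, key=lambda m: one_nn_accuracy(X, y, m), reverse=True)[:top]
    counts = [sum(m[j] for m in best) for j in range(d)]
    return [j for j in range(d) if counts[j] > top / 2]

# Synthetic data: feature 0 tracks the class label, feature 1 is pure noise.
data_rng = random.Random(0)
y = [i % 2 for i in range(30)]
X = [[yi + data_rng.gauss(0, 0.1), data_rng.gauss(0, 1)] for yi in y]
selected = select_features(X, y)
```

On this toy data the informative feature 0 dominates the top chromosomes, so it survives the occurrence vote while the noise feature tends not to.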

Keywords: data mining, genetic algorithm, KNN algorithm, wrapper-based feature selection

Procedia PDF Downloads 289
876 Water Management Scheme: Panacea to Development Using Nigeria’s University of Ibadan Water Supply Scheme as a Case Study

Authors: Sunday Olufemi Adesogan

Abstract:

The supply of potable water is a very important index of national development. Water tariffs depend on the treatment cost, which carries the highest percentage of the total operation cost in any water supply scheme. In order to keep water tariffs as low as possible, treatment costs have to be minimized. The University of Ibadan, Nigeria, water supply scheme consists of a treatment plant with three distribution stations (Amina Way, Kurumi and Lander) and two raw water supply sources (Awba dam and Eleyele dam). An operational study of the scheme was carried out to ascertain the efficiency of the supply of potable water on the campus and to justify the need for water supply schemes in tertiary institutions. The study involved regular collection, processing and analysis of periodic operational data. The data collected included supply readings (daily water production) and consumers' metered readings for a period of 22 months (October 2013 - July 2015), as well as the operating hours of both the plants and the personnel. Applying the required mathematical equations, the total loss in the distribution system was determined and translated into monetary terms. The adequacy of the operational functions was also assessed. The study revealed that water supply schemes are justified in tertiary institutions. It was also found that approximately 10.7 million Nigerian naira (N) was lost to leakages during the 22-month study period, and that the system's storage capacity is no longer adequate, especially for peak water production. The capacity of the system as a whole is insufficient for the present university population, and the existing water supply system is not being operated in an optimal manner, especially due to personnel, power and system ageing constraints.
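
The core loss computation described, production minus metered consumption translated into monetary terms, reduces to a few lines; the volumes and tariff below are illustrative placeholders, not the study's data.

```python
# Unaccounted-for water = production - metered consumption, priced at a tariff.
def water_loss(produced_m3, billed_m3, tariff_naira_per_m3):
    loss_m3 = produced_m3 - billed_m3
    pct = 100 * loss_m3 / produced_m3
    return loss_m3, pct, loss_m3 * tariff_naira_per_m3

# Illustrative figures only: 1,000,000 m3 produced, 820,000 m3 billed, N60/m3.
loss, pct, cost = water_loss(1_000_000, 820_000, 60.0)
```
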

Keywords: development, panacea, supply, water

Procedia PDF Downloads 181
875 Simplified INS/GPS Integration Algorithm in Land Vehicle Navigation

Authors: Othman Maklouf, Abdunnaser Tresh

Abstract:

Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such applications. GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial navigation (INS) uses inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost Micro-Electro-Mechanical-System (MEMS) inertial sensors now makes it feasible to develop an INS using an inertial measurement unit (IMU). An INS suffers unbounded error growth, since errors accumulate at each step. Usually, GPS and INS are integrated in a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demand for lower-cost, smaller, seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially that of MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, that is, to use a partial IMU (ParIMU) configuration. For land vehicle use, the most important gyroscope is the vertical gyro that senses the heading of the vehicle, together with two horizontal accelerometers for determining its velocity. This paper presents a field experiment with a low-cost strapdown ParIMU/GPS combination, with data post-processing for the determination of the 2-D components of position (trajectory), velocity and heading. In the present approach, earth rotation and gravity variations are neglected because of the poor gyroscope sensitivities of our low-cost IMU and the relatively small area of the trajectory.
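
The loosely coupled correction loop can be illustrated with a one-dimensional constant-velocity Kalman filter in which INS-style predictions are corrected by GPS position fixes. All noise covariances are hypothetical, and the GPS fix is noiseless for the demo; the paper's 2-D implementation is richer than this sketch.

```python
import numpy as np

# Standard Kalman measurement update.
def kf_update(x, P, z, H, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)                 # correct state with innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])       # state: [position, velocity]
Q = np.diag([0.01, 0.01])                   # process noise (hypothetical)
R = np.array([[4.0]])                       # GPS position noise (hypothetical)
H = np.array([[1.0, 0.0]])                  # GPS observes position only

x = np.array([0.0, 1.0])
P = np.eye(2)
truth = 0.0
for _ in range(20):                         # 20 s at a true 1 m/s
    truth += 1.0 * dt
    x = F @ x                               # INS-style prediction
    P = F @ P @ F.T + Q
    x, P = kf_update(x, P, np.array([truth]), H, R)
```
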

Keywords: GPS, IMU, Kalman filter, materials engineering

Procedia PDF Downloads 392
874 Numerical Investigation of Turbulent Inflow Strategy in Wind Energy Applications

Authors: Arijit Saha, Hassan Kassem, Leo Hoening

Abstract:

Ongoing climate change demands the increasing use of renewable energies. Wind energy plays an important role in this context, since it can be applied almost anywhere in the world. To reduce the costs of wind turbines and make them more competitive, simulations are very important, since experiments are often too costly, if possible at all. A wind turbine on a vast open area experiences turbulence generated by the atmosphere, so it was of central interest in this research to generate that turbulence in the computational simulation domain through inlet turbulence generation methods such as the precursor cyclic method and the Kaimal Spectrum Exponential Coherence (KSEC) method. To be able to validate computational fluid dynamics simulations of wind turbines against experimental data, it is crucial to set up the conditions in the simulation as close to reality as possible. The present work therefore aims at investigating the turbulent inflow strategy and boundary conditions of KSEC and providing a comparative analysis alongside the precursor cyclic method for Large Eddy Simulation in the context of wind energy applications. To generate the turbulent box with the KSEC method, constrained data were first collected from an auxiliary channel flow and then processed with the open-source tool PyconTurb, whereas for the precursor cyclic method the data from the auxiliary channel alone were sufficient. The behaviour of these methods was studied through statistical properties such as variance and turbulence intensity with respect to different bulk Reynolds numbers, and a conclusion was drawn on the feasibility of the KSEC method. Furthermore, it was found necessary to verify the obtained data against a DNS case setup to establish their applicability to real-field CFD simulations.
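
A KSEC-style generator prescribes one-point spectra of the Kaimal type. The sketch below evaluates the Kaimal form as standardized in IEC 61400-1 (the sigma, L and U values are illustrative, not the study's settings) and checks the sanity condition that the spectrum integrates back to the prescribed variance.

```python
import numpy as np

# Kaimal-type velocity spectrum (IEC 61400-1 form):
# S(f) = sigma^2 * (4 L / U) / (1 + 6 f L / U)^(5/3)
def kaimal(f, sigma, L, U):
    return sigma**2 * (4 * L / U) / (1 + 6 * f * L / U) ** (5 / 3)

sigma, L, U = 1.5, 340.2, 10.0          # std dev [m/s], length scale [m], mean wind [m/s]
f = np.linspace(1e-4, 50.0, 200_000)    # frequency grid [Hz]
S = kaimal(f, sigma, L, U)

# Trapezoid-rule integral of S over f should recover ~sigma^2.
var = float(np.sum((S[1:] + S[:-1]) / 2 * np.diff(f)))
```

This integral-to-variance check is a convenient first validation of any synthetic inflow box before comparing turbulence intensity against the precursor data.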

Keywords: inlet turbulence generation, CFD, precursor cyclic, KSEC, large eddy simulation, PyconTurb

Procedia PDF Downloads 60
873 Structural Protein-Protein Interactions Network of Breast Cancer Lung and Brain Metastasis Corroborates Conformational Changes of Proteins Lead to Different Signaling

Authors: Farideh Halakou, Emel Sen, Attila Gursoy, Ozlem Keskin

Abstract:

Protein–protein interactions (PPIs) mediate major biological processes in living cells. Studying PPIs as networks and analyzing the network properties contributes to the identification of genes and proteins associated with diseases. In this study, we have created the sub-networks of brain and lung metastasis from the primary tumor in breast cancer. To do so, we used seed genes known to cause metastasis and produced their interactions through a network-topology-based prioritization method named GUILDify. In order to have experimental support for the sub-networks, we further curated them using the STRING database. We proceeded by modeling structures for the interactions lacking complex forms in the Protein Data Bank (PDB). The functional enrichment analysis shows that KEGG pathways associated with the immune system and infectious diseases, particularly the chemokine signaling pathway, are important for lung metastasis. On the other hand, pathways related to genetic information processing are more involved in brain metastasis. The structural analyses of the sub-networks vividly demonstrated their difference in terms of the specific interfaces used in lung and brain metastasis. Furthermore, the topological analysis identified genes such as RPL5, MMP2, CCR5 and DPP4, which are already known to be associated with lung or brain metastasis. Additionally, we found 6 and 9 putative genes that are specific to lung and brain metastasis, respectively. Our analysis suggests that variations in the genes and pathways contributing to these different breast metastasis types may arise from changes in the tissue microenvironment. To show the benefits of using structural PPI networks instead of the traditional node-and-edge representation, we inspect two case studies showing the mutual exclusiveness of interactions and the effects of mutations on protein conformation that lead to different signaling.
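
Topological prioritization of the kind described can be illustrated by ranking proteins by degree in a toy sub-network. The edges below are invented for the demo; only the gene names are taken from the abstract.

```python
# Toy PPI sub-network: rank proteins by degree to flag hub genes.
# Edge list is invented; GENE_X stands in for a putative metastasis gene.
edges = [("MMP2", "CCR5"), ("MMP2", "RPL5"), ("MMP2", "DPP4"),
         ("CCR5", "DPP4"), ("RPL5", "GENE_X")]

degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1

hub = max(degree, key=degree.get)   # highest-degree node = candidate hub
```

A structural PPI network extends this picture by also recording which interface each edge uses, so mutually exclusive interactions sharing one interface can be detected.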

Keywords: breast cancer, metastasis, PPI networks, protein conformational changes

Procedia PDF Downloads 203
872 Experimental and Modal Determination of the State-Space Model Parameters of a Uni-Axial Shaker System for Virtual Vibration Testing

Authors: Jonathan Martino, Kristof Harri

Abstract:

In some cases, the increase in computing resources makes simulation methods more affordable. The increase in processing speed also allows real-time analysis or even faster test analysis, offering a real tool for test prediction and design process optimization. Vibration tests are no exception to this trend. So-called 'virtual vibration testing' offers, among other things, solutions to study the influence of specific loads, to better anticipate the boundary conditions between the exciter and the structure under test, and to study the influence of small changes in the structure under test. This article first presents a virtual vibration test model, with a main focus on the shaker model, and afterwards presents the experimental determination of its parameters. The classical way of modeling a shaker is to consider it as a simple mechanical structure augmented by the electrical circuit that makes the shaker move. The shaker is modeled as a two- or three-degrees-of-freedom lumped-parameter model, while the electrical circuit takes the coil impedance and the dynamic back-electromotive force into account. The establishment of the equations of this model, describing the dynamics of the shaker, is presented in this article and is strongly related to the internal physical quantities of the shaker. Those quantities are reduced to global parameters which are estimated through experiments. Different experiments are carried out in order to design an easy and practical method for the identification of the shaker parameters, leading to a fully functional shaker model. An experimental modal analysis is also carried out to extract the modal parameters of the shaker and to combine them with the electrical measurements. Finally, the article concludes with an experimental validation of the model.
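
A minimal state-space sketch of such a lumped-parameter shaker model, reduced here to a single mechanical degree of freedom coupled to the coil circuit through the force factor Bl. All parameter values are hypothetical placeholders for the global parameters the paper estimates experimentally.

```python
import numpy as np

# Electrodynamic shaker, states [x, v, i]:
#   m*x'' = -k*x - c*v + Bl*i        (mechanical side)
#   L*i'  = u - R*i - Bl*v           (coil side, u = amplifier voltage)
m, c, k = 0.5, 20.0, 1.0e5          # moving mass, damping, suspension stiffness
R, L, Bl = 2.0, 1e-3, 15.0          # coil resistance, inductance, force factor

A = np.array([[0.0,      1.0,      0.0],
              [-k / m,  -c / m,  Bl / m],
              [0.0,   -Bl / L,  -R / L]])
B = np.array([[0.0], [0.0], [1.0 / L]])
C = np.array([[0.0, 1.0, 0.0]])     # output: armature velocity

eigs = np.linalg.eigvals(A)
stable = bool(np.all(eigs.real < 0))  # passive system: all poles in left half-plane
```

The experimentally identified global parameters would replace these placeholders, and a second mechanical degree of freedom adds two more states in the same pattern.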

Keywords: lumped parameters model, shaker modeling, shaker parameters, state-space, virtual vibration

Procedia PDF Downloads 246
871 A Sustainable Approach for Waste Management: Automotive Waste Transformation into High Value Titanium Nitride Ceramic

Authors: Mohannad Mayyas, Farshid Pahlevani, Veena Sahajwalla

Abstract:

Automotive shredder residue (ASR) is an industrial waste generated during the recycling of end-of-life vehicles. The large and increasing production volumes of ASR and its hazardous content have raised concerns worldwide, leading some countries to impose more restrictions on ASR disposal and encouraging researchers to find efficient solutions for ASR processing. Although a great deal of research work has been carried out, all proposed solutions remain, to our knowledge, commercially and technically unproven. While the volume of waste materials continues to increase, the production of materials from new sustainable sources has become of great importance. Advanced ceramic materials such as nitrides, carbides and borides are widely used in a variety of applications. Among these ceramics, a great deal of attention has recently been paid to titanium nitride (TiN) owing to its unique characteristics. In our study, we propose a new sustainable approach to ASR management in which TiN nanoparticles with particle sizes ranging from 200 to 315 nm can be synthesized as a by-product. In this approach, TiN is thermally synthesized by nitriding a pressed mixture of ASR incorporated with titanium oxide (TiO2). Results indicated that TiO2 influences and catalyses the degradation reactions of ASR and helps to achieve fast and full decomposition. In addition, the process yielded titanium nitride ceramic with several distinctive structures (porous nanostructured, polycrystalline, micro-spherical and nano-sized structures) that were obtained simply by tuning the ratio of TiO2 to ASR, and a product with an appreciable TiN content of around 85% was achieved after only one hour of nitridation at 1550 °C.

Keywords: automotive shredder residue, nano-ceramics, waste treatment, titanium nitride, thermal conversion

Procedia PDF Downloads 263
870 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat to multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, whose effectiveness diminishes when applied to identify fake news across multiple domains. To solve this problem, approaches based on domain labels have been proposed: by assigning news to its specific area in advance, a detector for the corresponding field may judge fake news more accurately. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional vector capturing domain embeddings is generated for each news text. Subsequently, a feature extraction module utilizing the unsupervisedly discovered domain embeddings extracts the comprehensive features of the news. Finally, a classifier is employed to determine the authenticity of the news. To verify the proposed framework, tests are conducted on existing widely used datasets, and the experimental results demonstrate that this method improves detection performance for fake news across multiple domains. Moreover, even on datasets that lack domain labels, this method can still effectively transfer domain knowledge, which can reduce the time consumed by tagging without sacrificing detection accuracy.
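
The domain-embedding step can be caricatured as mapping each text to a vector of affinities to topic word groups, which then feed the downstream classifier as extra features. The word groups and the sample headline below are invented for illustration; the actual framework discovers the embeddings unsupervisedly rather than using fixed vocabularies.

```python
# Invented topic word groups standing in for unsupervisedly discovered domains.
domains = {"politics": {"election", "senate", "vote"},
           "health":   {"vaccine", "virus", "doctor"},
           "economy":  {"market", "inflation", "stock"}}

# Map a text to a low-dimensional vector of affinities to each domain, so a
# multi-domain story keeps partial membership in every relevant domain.
def domain_embedding(text):
    words = set(text.lower().split())
    return [len(words & domains[k]) / len(domains[k]) for k in sorted(domains)]

emb = domain_embedding("Senate vote on vaccine market rules")
```

Note the sample headline touches all three domains at once, which is exactly the information a single hard domain label would discard.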

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 47
869 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Our ability to solve complex engineering problems is directly related to the processing capacity of computers, which allow numerical algorithms to be run quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance, and statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems assume that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skew data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skew) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Based on this analysis, it is possible to explicitly derive the failure probability when the friction angle is considered a random variable, and to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
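
The failure probability the abstract derives analytically can also be approximated by Monte Carlo sampling of the friction angle. The sketch below assumes an infinite, dry, cohesionless slope, so FS = tan(phi)/tan(slope angle); the normal-distribution parameters are illustrative, not the Brasilia data, and a Dagum sampler would slot in the same way.

```python
import math, random

# Monte Carlo estimate of P(FS < 1) with the friction angle as a random variable.
def failure_probability(phi_mean_deg, phi_std_deg, slope_deg, n=200_000, seed=7):
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        phi = math.radians(rng.gauss(phi_mean_deg, phi_std_deg))
        fs = math.tan(phi) / math.tan(math.radians(slope_deg))  # infinite-slope FS
        fails += fs < 1.0
    return fails / n

# Illustrative case: phi ~ N(30, 2) degrees on a 28-degree slope,
# so failure occurs whenever phi < 28 (about one standard deviation below).
pf = failure_probability(30.0, 2.0, 28.0)
```

Since tan is monotonic, this case reduces analytically to P(phi < 28°) ≈ 0.159, which the simulation reproduces.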

Keywords: statistical slope stability analysis, skew distributions, probability of failure, functions of random variables

Procedia PDF Downloads 302
868 Chronology and Developments in Inventory Control Best Practices for the FMCG Sector

Authors: Roopa Singh, Anurag Singh, Ajay

Abstract:

Agriculture contributes a major share to the national economy of India: about 70% of the Indian economy depends upon agriculture, which forms the main source of income. About 43% of India's geographical area is used for agricultural activity, involving 65-75% of the total population of India. The present work deals with fast-moving consumer goods (FMCG) industries whose inventories use agricultural produce as raw material or input for their final products. Since the beginning of inventory practice, developments have occurred that, based on a review of various works, can be categorised into three phases. The first phase is related to the development and utilization of the Economic Order Quantity (EOQ) model and methods for optimizing costs and profits. The second phase deals with inventory optimization methods aimed at balancing capital investment constraints and service level goals. The third and most recent phase has merged inventory control with electrical control theory. Holding inventory is usually considered negative, as a large amount of capital is blocked, especially in mechanical and electrical industries. The case is different for food processing and agro-based industries, whose inventories are subject to cyclic variation in the cost of raw materials; this is the reason for selecting these industries in the present work. The application of control theory to inventory control makes decision-making highly instantaneous for FMCG industries without the loss of proposed profits that occurred during the first and second phases, mainly due to late implementation of decisions. The work also replaces various inventory and work-in-progress (WIP) related errors with their monetary values, so that decision-making is fully target-oriented.
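
The first phase's EOQ model is the classical square-root formula balancing ordering and holding costs; the demand and cost figures below are illustrative, not drawn from the paper.

```python
import math

# Economic Order Quantity: order size minimizing ordering + holding cost.
# EOQ = sqrt(2 * D * S / H), with D = annual demand (units), S = cost per
# order, H = holding cost per unit per year.
def eoq(annual_demand, order_cost, holding_cost_per_unit):
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

q = eoq(annual_demand=10_000, order_cost=50.0, holding_cost_per_unit=2.5)
```

For these illustrative inputs the optimum is about 632 units per order; the later phases replace this static optimum with feedback-controlled reorder decisions.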

Keywords: control theory, inventory control, manufacturing sector, EOQ, feedback, FMCG sector

Procedia PDF Downloads 330
867 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing

Authors: Yehjune Heo

Abstract:

As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using convolutional neural networks (CNNs), examined across types of loss functions and optimizers. The CNNs used in this paper are AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that the same LivDet subset is used across all training and testing for each model; this way, we can compare performance, in terms of generalization to unseen data, across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer yields more than a 3% performance gain over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports the parameter counts and mean average error rates of the high-accuracy models, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has lower complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, were applied to our final model.
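
Why the loss choice matters can be seen numerically: cross-entropy and hinge loss penalize the same borderline prediction very differently. The sketch below uses the textbook definitions of two of the losses named above, not the paper's training code.

```python
import math

# Binary cross-entropy: y in {0, 1}, p = predicted probability of class 1.
def cross_entropy(p, y):
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# Hinge loss: y in {-1, +1}, score = raw classifier margin.
def hinge(score, y):
    return max(0.0, 1.0 - y * score)

ce = cross_entropy(0.6, 1)   # mildly confident, correct: modest gradient signal
hg = hinge(0.2, 1)           # correct side but inside the margin: still penalized
```

A hinge-trained model keeps pushing correct-but-unconfident spoof scores past the margin, while cross-entropy weights them by probability, so the same borderline fingerprints produce different errors under each loss.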

Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer

Procedia PDF Downloads 100
866 Nutritional Advantages of Millet (Panicum miliaceum L.) and Opportunities for Its Processing as Value-Added Foods

Authors: Fatima Majeed Almonajim

Abstract:

Panicum miliaceum L. is a plant of the family Gramineae. Worldwide, millets are regarded as a significant grain; however, they are very little exploited. Millet grain is abundant in nutrients and health-beneficial phenolic compounds, making it suitable as food and feed. The plant has received considerable attention for its high content of phenolic compounds, low glycemic index, presence of unsaturated fats and lack of gluten, all of which are beneficial to human health and make the plant effective in treating celiac disease and diabetes, lowering blood lipids (cholesterol) and preventing tumors. Moreover, the plant requires little water to grow, a property that is worth considering. This study provides an overview of the nutritional and health benefits provided by millet types grown in two areas, Iraq and Iran, aiming to compare the effect of climate on the components of millet. In this research, millet samples collected from both the Babylon (Iraqi) and Isfahan (Iranian) types were extracted, and after HPTLC, the resulting patterns of the two samples were compared. The Iranian millet showed more terpenoid compounds than the Iraqi millet and therefore has higher priority in increasing the human body's immunity. On the other hand, in view of the number of essential amino acids, the Iraqi millet has more nutritional value than the Iranian millet. Also, given the higher amount of histidine in the Iranian millet, coupled with the lack of gluten found in previous studies, we concluded that the addition of millet to the diet of children, especially those with irritable bowel syndrome, can be considered beneficial. Therefore, millet can be used as a component of dairy products in preparing foods for children, such as dried milk.

Keywords: HPTLC, phytochemicals, specialty foods, Panicum miliaceum L., nutrition

Procedia PDF Downloads 62
865 Design of a Standard Weather Data Acquisition Device for the Federal University of Technology, Akure Nigeria

Authors: Isaac Kayode Ogunlade

Abstract:

Data acquisition (DAQ) is the process by which physical phenomena from the real world are transformed into electrical signals that are measured and converted into a digital format for processing, analysis, and storage by a computer. The DAQ device was designed around a PIC18F4550 microcontroller communicating with a personal computer (PC) through USB (Universal Serial Bus). The research applied knowledge of data acquisition systems and embedded systems to develop a weather data acquisition device using an LM35 sensor to measure weather parameters, together with an artificial intelligence approach (Artificial Neural Network, ANN) and a statistical approach (Autoregressive Integrated Moving Average, ARIMA) to predict precipitation (rainfall). The device was placed beside a standard device in the Department of Meteorology, Federal University of Technology, Akure (FUTA) for performance evaluation. Both devices (standard and designed) were exposed to the same atmospheric conditions for 180 days for data collection (temperature, relative humidity, and pressure). The acquired data were trained in the MATLAB R2012b environment using ANN and ARIMA to predict precipitation (rainfall). Root Mean Square Error (RMSE), Mean Absolute Error (MAE), coefficient of determination (R²), and Mean Percentage Error (MPE) were deployed as standard metrics to evaluate the performance of the models in the prediction of precipitation. The results show that the developed device has an efficiency of 96% and is compatible with personal computers and laptops. The simulation results for the acquired data show that the ANN model's precipitation (rainfall) prediction for two months (May and June 2017) had a disparity error of 1.59%, while that of ARIMA was 2.63%. The device will be useful in research, practical laboratories, and industrial environments.
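The error metrics named above can be computed directly from paired device/model readings; a minimal sketch in plain Python, with hypothetical rainfall values for illustration:

```python
import math

def evaluation_metrics(observed, predicted):
    """Compute RMSE, MAE, MPE and R^2 for paired observed/predicted values."""
    n = len(observed)
    errors = [o - p for o, p in zip(observed, predicted)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    # MPE assumes no zero observations (percentage of each observed value)
    mpe = 100.0 * sum((o - p) / o for o, p in zip(observed, predicted)) / n
    mean_obs = sum(observed) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, mpe, r2

# hypothetical daily rainfall (mm): standard-device readings vs. model output
obs = [2.0, 1.0, 5.5, 3.1, 0.4]
pred = [1.8, 1.1, 5.0, 3.4, 0.3]
rmse, mae, mpe, r2 = evaluation_metrics(obs, pred)
```

A perfect prediction gives RMSE = MAE = MPE = 0 and R² = 1, which is a quick sanity check on any implementation.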

Keywords: data acquisition system, device design, weather data, precipitation prediction, FUTA standard device

Procedia PDF Downloads 62
864 Efficacy of Phonological Awareness Intervention for People with Language Impairment

Authors: I. Wardana Ketut, I. Suparwa Nyoman

Abstract:

This study investigated the form and characteristics of speech sounds produced by three Balinese subjects who had recovered from aphasia, and intervened in their language impairment from both linguistic and neuronal points of view. The failure to produce correct speech sounds was caused by impairment of the motor cortex, indicating lesions in the left-hemisphere language zone. Sound articulation phenomena took the forms of phoneme deletion, replacement or assimilation in individual words, and of meaning building for anomic aphasia. The Balinese sound patterns were therefore stimulated by showing pictures to the subjects and recorded, to recognize which individual consonants or vowels they produced unclearly and to find out how the sound disorder occurred. The physiology of sound production by the subjects' speech organs indicated not only the accuracy of articulation but also the severity of the lesion they suffered from. The subjects' speech sounds were investigated, classified and analyzed to assess how degraded the lingual units were, and observed to clarify the weaknesses of sound characters in either place or manner of articulation. Many fricative and stop consonants were replaced by glottal or palatal sounds because cranial nerves such as the facial, trigeminal, and hypoglossal nerves were impaired after the stroke. The phonological intervention was applied through a technique called phonemic articulation drill, and an examination was conducted to determine whether any change had been obtained. The findings show that some weak articulations turned into clearer sounds and that simple language meanings were conveyed. The hierarchy of functional parts of the brain played an important role in language formulation and processing. From this finding, it can be emphasized that this study supports the view that the role of the right hemisphere in recovery from aphasia is associated with functional brain reorganization.

Keywords: aphasia, intervention, phonology, stroke

Procedia PDF Downloads 172
863 Removal of Pb²⁺ from Waste Water Using Nano Silica Spheres Synthesized on CaCO₃ as a Template: Equilibrium and Thermodynamic Studies

Authors: Milton Manyangadze, Joseph Govha, T. Bala Narsaiah, Ch. Shilpa Chakra

Abstract:

The availability of and access to fresh water is today a serious global challenge. This is a direct result of factors such as rapid industrialization and industrial growth, persistent droughts in some parts of the world, especially sub-Saharan Africa, as well as population growth. Growth of the chemical processing industry has also increased the levels of pollutants in our water bodies, which include heavy metals among others. Heavy metals are known to be dangerous to both human and aquatic life and have been linked to several diseases, mainly because they are highly toxic, bioaccumulative and non-biodegradable. Lead, for example, has been linked to a number of health problems, including damage to vital internal body systems such as the nervous and reproductive systems as well as the kidneys. Against this background, the removal of the toxic heavy metal Pb²⁺ from waste water was investigated using nano silica hollow spheres (NSHS) as the adsorbent. Synthesis of NSHS was done using a three-stage process in which CaCO₃ nanoparticles were first prepared as a template. This was followed by treatment of the formed particles with Na₂SiO₃ to give a nanocomposite. Finally, the template was destroyed using 2.0 M HCl to give NSHS. Characterization of the nanoparticles was done using analytical techniques such as XRD, SEM, and TGA. For the adsorption process, both thermodynamic and equilibrium studies were carried out. From the thermodynamic studies, the Gibbs free energy, enthalpy and entropy of the adsorption process were determined; the results revealed that the adsorption process was both endothermic and spontaneous. Equilibrium studies were also carried out in which the Langmuir and Freundlich isotherms were tested. The results showed that the Langmuir model best described the adsorption equilibrium.
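Fitting the Langmuir model to equilibrium data is commonly done through its linearized form Ce/qe = Ce/qm + 1/(qm·KL); a minimal sketch in plain Python, using synthetic data generated from hypothetical qm and KL values:

```python
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def langmuir_fit(Ce, qe):
    """Fit the linearized Langmuir isotherm Ce/qe = Ce/qm + 1/(qm*KL);
    returns the monolayer capacity qm and affinity constant KL."""
    slope, intercept = linear_fit(Ce, [c / q for c, q in zip(Ce, qe)])
    qm = 1.0 / slope
    KL = slope / intercept  # since intercept = 1/(qm*KL)
    return qm, KL

# synthetic equilibrium data from hypothetical qm = 50 mg/g, KL = 0.2 L/mg
Ce = [1.0, 2.0, 5.0, 10.0, 20.0]
qe = [50 * 0.2 * c / (1 + 0.2 * c) for c in Ce]
```

On noise-free synthetic data the linearized fit recovers the generating parameters exactly; with real adsorption data, nonlinear least squares on the original isotherm is often preferred.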

Keywords: characterization, endothermic, equilibrium studies, Freundlich, Langmuir, nanoparticles, thermodynamic studies

Procedia PDF Downloads 184
862 Towards the Production of Least Contaminant Grade Biosolids and Biochar via Mild Acid Pre-treatment

Authors: Ibrahim Hakeem

Abstract:

Biosolids are stabilised sewage sludge produced from wastewater treatment processes. Biosolids contain valuable plant nutrients, which facilitate their beneficial reuse on agricultural land. However, increasing levels of legacy and emerging contaminants such as heavy metals (HMs), PFAS, microplastics, pharmaceuticals and microbial pathogens are restraining the direct land application of biosolids. Pyrolysis of biosolids can effectively degrade microbial and organic contaminants; however, HMs remain a persistent problem in biosolids and their pyrolysis-derived biochar. In this work, we demonstrated the integrated processing of biosolids, involving acid pre-treatment for HMs removal and selective reduction of ash-forming elements, followed by bench-scale pyrolysis of the treated biosolids to produce quality biochar and bio-oil enriched with valuable platform chemicals. Pre-treatment of biosolids using 3% v/v H₂SO₄ at room conditions for 30 min reduced the ash content from 30 wt% in raw biosolids to 15 wt% in the treated sample, while removing about 80% of limiting HMs without degrading the organic matter. The preservation of nutrients and the reduction of HMs concentration and mobility via the developed hydrometallurgical process improved the grade of the treated biosolids for beneficial land reuse. The co-removal of ash-forming elements from biosolids positively enhanced the fluidised bed pyrolysis of the acid-treated biosolids at 700 °C. Organic matter devolatilisation was improved by 40%, and the produced biochar had a higher surface area (107 m²/g), heating value (15 MJ/kg), fixed carbon (35 wt%) and organic carbon retention (66%, dry-ash free) compared to the raw biosolids biochar, with its surface area of 56 m²/g, heating value of 9 MJ/kg, fixed carbon of 20 wt% and organic carbon retention of 50%. Pre-treatment also improved microporous structure development of the biochar and substantially decreased the HMs concentration and bioavailability, by at least 50% relative to the raw biosolids biochar. The integrated process is a viable approach to enhancing value recovery from biosolids.

Keywords: biosolids, pyrolysis, biochar, heavy metals

Procedia PDF Downloads 40
861 Thermo-Mechanical Analysis of Composite Structures Utilizing a Beam Finite Element Based on Global-Local Superposition

Authors: Andre S. de Lima, Alfredo R. de Faria, Jose J. R. Faria

Abstract:

Accurate prediction of thermal stresses is particularly important for laminated composite structures, as large temperature changes may occur during fabrication and field application. The normal transverse deformation plays an important role in the prediction of such stresses, especially for problems involving thick laminated plates subjected to uniform temperature loads. Bearing this in mind, the present study investigates the thermo-mechanical behavior of laminated composite structures using a new beam element based on global-local superposition, accounting for through-the-thickness effects. The element formulation is based on a global-local superposition in the thickness direction, combining a cubic global displacement field with a linear layerwise local displacement distribution, which ensures zig-zag behavior of the stresses and displacements. By enforcing interlaminar stress (normal and shear) and displacement continuity, as well as traction-free conditions at the upper and lower surfaces, the number of degrees of freedom in the model is kept independent of the number of layers. Moreover, the proposed formulation allows for the determination of transverse shear and normal stresses directly from the constitutive equations, without the need for post-processing. Numerical results obtained with the beam element were compared to analytical solutions, as well as to results obtained with commercial finite elements, yielding satisfactory results for a range of length-to-thickness ratios. The results confirm the need for an element with through-the-thickness capabilities and indicate that the present formulation is a promising alternative for such analyses.

Keywords: composite beam element, global-local superposition, laminated composite structures, thermal stresses

Procedia PDF Downloads 128
860 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges to increase its speed and data processing capacities while keeping its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of these facilities use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques, or perfecting those already existing, would be a great advance for this industry. The installation of a temperature sensor matrix distributed through the structure of each server would provide the data required for obtaining an instantaneous temperature profile inside it. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high and expensive. Therefore, other less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation methods, simplifying data collection but increasing the time to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the Burgers' equation obtained after these simplifications are the key to obtaining results with greater or lesser accuracy, each scheme carrying its characteristic truncation error.
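As an illustration of the discretization choices mentioned above, here is a minimal plain-Python sketch of one explicit step of the 1-D viscous Burgers' equation, using a forward difference in time, a backward (upwind) difference for the convective term and a central difference for the diffusive term (grid spacing, time step, viscosity and initial profile are all hypothetical):

```python
def burgers_step(u, dx, dt, nu):
    """One explicit time step of the 1-D viscous Burgers' equation
    u_t + u*u_x = nu*u_xx: forward Euler in time, backward (upwind)
    difference for the convective term, central difference for the
    diffusive term; boundary values are held fixed."""
    n = len(u)
    new = u[:]
    for i in range(1, n - 1):
        conv = u[i] * (u[i] - u[i - 1]) / dx               # backward difference
        diff = nu * (u[i + 1] - 2 * u[i] + u[i - 1]) / dx**2  # central difference
        new[i] = u[i] + dt * (diff - conv)                 # forward in time
    return new

# hypothetical initial profile: a warm spot that advects and diffuses
u = [0.0] * 5 + [1.0] * 5 + [0.0] * 5
for _ in range(10):
    u = burgers_step(u, dx=0.1, dt=0.001, nu=0.05)
```

With the upwind convective scheme and a time step respecting the CFL condition, the solution stays bounded by the initial extremes, which is one reason this combination of discretizations is a common baseline.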

Keywords: Burgers' equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile

Procedia PDF Downloads 128
859 Effects of Non-Diagnostic Haptic Information on Consumers' Product Judgments and Decisions

Authors: Eun Young Park, Jongwon Park

Abstract:

A physical touch of a product can provide ample diagnostic information about the product's attributes and quality. However, consumers' product judgments and purchases can also be erroneously influenced by non-diagnostic haptic information. For example, consumers' evaluations of the coffee they drink could be affected by the heaviness of the cup that merely serves it. This important issue has received little attention in prior research. The present research contributes to the literature by identifying when and how non-diagnostic haptic information can have an influence and why such influence occurs. Specifically, five studies experimentally varied the content of non-diagnostic haptic information, such as the weight of a cup (heavy vs. light) and the texture of a cup holder (smooth vs. rough), and then assessed the impact of the manipulation on product judgments and decisions. Results show that non-diagnostic haptic information has a biasing impact on consumer judgments. For example, a heavy (vs. light) cup increases consumers' perception of the richness of the coffee in it, and a rough (vs. smooth) texture of a cup holder increases the perception of the healthfulness of the fruit juice in it, which in turn increases consumers' purchase intentions for the product. When consumers are cognitively distracted during the touch experience, the impact of the content of the haptic information is no longer evident, but the valence (positive vs. negative) of the haptic experience influences product judgments. However, consumers are able to avoid the impact of non-diagnostic haptic information if, and only if, they are both knowledgeable about the product category and undistracted from processing the touch experience. In sum, the nature of the influence of non-diagnostic haptic information (i.e., assimilation effect vs. contrast effect vs. null effect) is determined by the content and valence of the haptic information, the relative impact of which depends on whether consumers can identify the content and source of the haptic information. Theoretically, to the best of our knowledge, this research is the first to document empirical evidence of the interplay between cognitive and affective processes that determines the impact of non-diagnostic haptic information. Managerial implications are discussed.

Keywords: consumer behavior, haptic information, product judgments, touch effect

Procedia PDF Downloads 132
858 Review of the Future Economic Potential Stemming from Global Electronic Waste Generation and Sustainable Recycling Practices

Authors: Shamim Ahsan

Abstract:

Global digital advances, combined with consumers' strong inclination towards state-of-the-art digital technologies, are causing overwhelming social and environmental challenges for the global community. In recent years, not only have the economic advances of the electronics industries taken place at a steadfast rate, but the generation of e-waste has also outpaced the growth of any other type of waste. The estimated global e-waste volume was expected to reach 65.4 million tons annually by 2017. Formal recycling practices in developed countries entail economic liability, opening paths for illegal trafficking to developing countries. Informal, crude management of large volumes of e-waste is turning into an emergent environmental and health challenge in those countries. Conversely, several studies have shown that formal and informal recycling of e-waste also exhibits potential for economic returns, in both developed and developing countries. Research on China illustrated that the recycling potential of its large volume of e-waste could evolve from ∼16 (10−22) billion US$ in 2010 to an anticipated ∼73.4 (44.5−103.4) billion US$ by 2030. In another study, an economic analysis of 14 common categories of waste electric and electronic equipment (WEEE) calculated their overall worth at €2.15 billion to European markets, with a potential rise to €3.67 billion as volumes increase. These economic returns and environmental protection approaches are feasible only when sustainable policy options are embraced with stricter regulatory mechanisms. This study critically reviews current research to stipulate how global e-waste generation and sustainable e-waste recycling practices demonstrate future economic development potential, in terms of both quantity and processing capacity, while also triggering some complex environmental challenges.

Keywords: e-waste, generation, economic potential, recycling

Procedia PDF Downloads 278
857 Design and Development of High Strength Aluminium Alloy from Recycled 7xxx-Series Material Using Bayesian Optimisation

Authors: Alireza Vahid, Santu Rana, Sunil Gupta, Pratibha Vellanki, Svetha Venkatesh, Thomas Dorin

Abstract:

Aluminum is the preferred material for lightweight applications, and its alloys are constantly improving. The high strength 7xxx alloys have been extensively used for structural components in the aerospace and automobile industries for the past 50 years. In the next decade, a great number of airplanes will be retired, providing an obvious source of valuable used metals and a great demand for cost-effective methods to re-use these alloys. The design of proper aerospace alloys is primarily based on optimizing strength and ductility, both of which can be improved by controlling the alloying element additions as well as the heat treatment conditions. In this project, we explore the design of high-performance alloys with 7xxx as a base material. These designed alloys have to be optimized and improved to be comparable with modern 7xxx-series alloys and to remain competitive for aircraft manufacturing. Aerospace alloys are extremely complex, with multiple alloying elements and numerous processing steps, making optimization intensive and costly. In the present study, we used a Bayesian optimization algorithm, a well-known adaptive design strategy, to optimize this multi-variable system. An Al alloy was proposed and the relevant heat treatment schedules were optimized, using the tensile yield strength as the output to maximize. The designed alloy has a maximum yield strength and ultimate tensile strength of more than 730 and 760 MPa, respectively, and is thus comparable to modern high strength 7xxx-series alloys. The microstructure of this alloy was characterized by electron microscopy, indicating that the increased strength of the alloy is due to the presence of a high number density of refined precipitates.
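Bayesian optimization proposes the next experiment by maximizing an acquisition function over the surrogate model's predictions; a minimal plain-Python sketch of the widely used expected-improvement acquisition (the candidate labels, posterior means/standard deviations in MPa and incumbent value below are all hypothetical):

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    """Expected improvement (maximisation): how much a candidate with
    surrogate posterior mean `mu` and std dev `sigma` is expected to
    improve on the incumbent `best`, with exploration margin `xi`."""
    if sigma <= 0.0:
        return max(0.0, mu - best - xi)
    z = (mu - best - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))          # standard normal cdf
    return (mu - best - xi) * cdf + sigma * pdf

# hypothetical heat-treatment candidates: label -> (posterior mean, std dev)
candidates = {"420C/2h": (700.0, 15.0), "450C/1h": (710.0, 40.0)}
best_so_far = 705.0  # best yield strength measured so far
next_run = max(candidates,
               key=lambda k: expected_improvement(*candidates[k], best_so_far))
```

Note how the more uncertain candidate can win even with a similar mean: the acquisition trades off exploitation (high mean) against exploration (high uncertainty), which is what makes the strategy sample-efficient for costly experiments.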

Keywords: aluminum alloys, Bayesian optimization, heat treatment, tensile properties

Procedia PDF Downloads 91
856 Utilization of Family Planning Methods and Associated Factors among Women of Reproductive Age Group in Sunsari, Nepal

Authors: Punam Kumari Mandal, Namita Yangden, Bhumika Rai, Achala Niraula, Sabitra Subedi

Abstract:

Introduction: Family planning not only improves women's health but also promotes gender equality, better child health, and improved education outcomes, including poverty reduction. The objective of this study is to assess the utilization of family planning methods and associated factors in Sunsari, Nepal. Methodology: A cross-sectional analytical study was conducted among women of reproductive age (15-49 years) in Sunsari in 2020. Non-probability purposive sampling was used to collect information from 212 respondents in ward no. 1 of Barju rural municipality through face-to-face interviews using a semi-structured interview schedule. Data processing was done using SPSS (Statistics for Windows, version 17.0; SPSS Inc., Chicago, IL, USA). Descriptive analysis and inferential analysis (binary logistic regression) were used to find the association of the utilization of family planning methods with selected demographic variables. All variables with p < 0.1 in the bivariate analysis were included in the multivariate analysis. A p-value of < 0.05 was considered to indicate statistical significance, at a significance level of 5%. Results: This study showed that the mean age of the respondents was 26 ± 7.03 years, and 91.5% of respondents were married before the age of 20. Likewise, 67.5% of respondents used some method of family planning, and 55.2% of respondents obtained family planning services from a government health facility. Furthermore, education (AOR 1.579, CI 1.013-2.462), husband's occupation (AOR 1.095, CI 0.744-1.610), type of family (AOR 2.741, CI 1.210-6.210), and number of living sons (AOR 0.259, CI 0.077-0.872) were the factors associated with the utilization of family planning methods. Conclusion: This study concludes that two-thirds of reproductive-age women utilize family planning methods. Furthermore, education, the husband's occupation, the type of family, and the number of living sons are the factors associated with the utilization of family planning methods. This reflects that awareness through mass media, including behavioral communication, is needed to increase the utilization of family planning methods.
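For readers unfamiliar with the odds-ratio statistics reported above, a crude (unadjusted) odds ratio and its 95% confidence interval can be computed from a 2x2 table; a minimal plain-Python sketch with hypothetical counts (the adjusted odds ratios in the abstract come from multivariate logistic regression, which additionally controls for the other covariates):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and 95% CI from a 2x2 table:
    a = exposed users, b = exposed non-users,
    c = unexposed users, d = unexposed non-users."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# hypothetical counts: educated vs. not, using vs. not using family planning
or_value, (lo, hi) = odds_ratio_ci(80, 30, 63, 39)
```

When the confidence interval excludes 1, the association is statistically significant at the 5% level, which mirrors the interpretation of the AOR intervals reported in the abstract.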

Keywords: family planning methods, utilization, factors, women, community

Procedia PDF Downloads 88
855 Web Data Scraping Technology Using Term Frequency Inverse Document Frequency to Enhance the Big Data Quality on Sentiment Analysis

Authors: Sangita Pokhrel, Nalinda Somasiri, Rebecca Jeyavadhanam, Swathi Ganesan

Abstract:

Tourism is a booming industry with huge future potential for global wealth and employment. Countless data are generated over social media sites every day, creating numerous opportunities to bring more insights to decision-makers. The integration of Big Data technology into the tourism industry allows companies to determine where their customers have been and what they like. This information can then be used by businesses, such as those in charge of managing visitor centers or hotels, and tourists can get a clear idea of places before visiting. On the technical side, natural language is processed by analysing the sentiment features of online reviews from tourists, and we then supply an enhanced long short-term memory (LSTM) framework for sentiment feature extraction from travel reviews. We constructed a web review database using a crawler and web scraping techniques for experimental validation to evaluate the effectiveness of our methodology. The text of the sentences was first classified through the VADER and RoBERTa models to get the polarity of the reviews. In this paper, we studied feature extraction methods such as count vectorization and TF-IDF vectorization, and implemented a Convolutional Neural Network (CNN) classifier algorithm for sentiment analysis to decide whether a tourist's attitude towards a destination is positive, negative, or simply neutral based on the review text posted online. The results demonstrated that, with the CNN algorithm, after pre-processing and cleaning the dataset, we obtained an accuracy of 96.12% for the positive and negative sentiment analysis.
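The TF-IDF weighting mentioned above down-weights terms that appear in most reviews and up-weights distinctive ones; a minimal plain-Python sketch using the common tf = count/len(doc) and idf = log(N/df) convention (the sample reviews are hypothetical; production code would use a library vectorizer):

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights: tf = term count / doc length,
    idf = log(N / df), where df = number of docs containing the term."""
    tokenised = [d.lower().split() for d in docs]
    n = len(tokenised)
    df = Counter(t for doc in tokenised for t in set(doc))
    weights = []
    for doc in tokenised:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

# hypothetical travel reviews
reviews = ["great beach great food", "terrible hotel food", "great food view"]
w = tfidf(reviews)
```

A term occurring in every document ("food" above) gets weight zero, while distinctive sentiment-bearing terms ("terrible") keep positive weight, which is exactly why TF-IDF features help a downstream classifier.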

Keywords: count vectorization, convolutional neural network, crawler, big data technology, long short-term memory, web scraping, sentiment analysis

Procedia PDF Downloads 48
854 The Effect of Mixing and Degassing Conditions on the Properties of Epoxy/Anhydride Resin System

Authors: Latha Krishnan, Andrew Cobley

Abstract:

Epoxy resin is widely used as the matrix for composites in aerospace, automotive and electronic applications due to its outstanding mechanical properties. These properties are chiefly predetermined by the chemical structure of the prepolymer and the type of hardener, but can also be varied by processing conditions such as prepolymer and hardener mixing, degassing and curing conditions. In this research, the effect of degassing on the curing behaviour and void occurrence is experimentally evaluated for an epoxy/anhydride resin system. The epoxy prepolymer was mixed with an anhydride hardener and accelerator in appropriate quantities. In order to investigate the effect of degassing on the curing behaviour and void content of the resin, uncured resin samples were prepared using three different methods: 1) no degassing, 2) degassing of the prepolymer, and 3) degassing of the mixed solution of prepolymer and hardener with accelerator. The uncured resins were tested in a differential scanning calorimeter (DSC) to observe the changes in curing behaviour of the above three resin samples by analysing factors such as gel temperature, peak cure temperature and heat of reaction/heat flow during curing. Additionally, the completely cured samples were tested in DSC to identify the changes in glass transition temperature (Tg) between the three samples. In order to evaluate the effect of degassing on the void content and morphology changes in the cured epoxy resin, the fractured surfaces of the cured epoxy resin were examined under a scanning electron microscope (SEM). The changes in the mechanical properties of the cured resin were also studied by a three-point bending test. It was found that degassing at different stages of resin mixing had significant effects on properties such as the glass transition temperature, void content and void size of the epoxy/anhydride resin system. For example, degassing with vacuum applied to the mixed resin yielded a higher glass transition temperature (Tg) with lower void content.

Keywords: anhydride epoxy, curing behaviour, degassing, void occurrence

Procedia PDF Downloads 313
853 Experimental Optimization in Diamond Lapping of Plasma Sprayed Ceramic Coatings

Authors: S. Gowri, K. Narayanasamy, R. Krishnamurthy

Abstract:

Plasma spraying, from the point of view of value engineering, is considered a cost-effective technique to deposit high-performance ceramic coatings on ferrous substrates for use in the aero, automobile, electronics and semiconductor industries. High-performance ceramics such as alumina, zirconia, and titania-based ceramics have become a key part of turbine blades, automotive cylinder liners, and microelectronic and semiconductor components due to their ability to insulate and distribute heat. However, as these industries continue to advance, improved methods are needed to increase both the flexibility and speed of ceramic processing in these applications. The ceramics mentioned were individually coated on a structural steel substrate with a NiCr bond coat of 50-70 micron thickness, with the final thickness in the range of 150 to 200 microns. Optimal spray parameters were selected based on bond strength and porosity. The optimally processed specimens were superfinished by lapping using diamond and green SiC abrasives. Interesting results could be observed: green SiC could improve the surface finish of the lapped surfaces almost as well as diamond in the case of alumina- and titania-based ceramics, but the diamond abrasives improved the surface finish of PSZ better than green SiC did. The conventional random scratches were absent in the alumina and titania ceramics, and in PSZ those marks were found to be fewer. Furthermore, the flatness accuracy could be improved by 60 to 85%. The surface finish and geometrical accuracy were measured and modeled. Abrasives in the mid-range of their particle size improved the surface quality faster and better than particles in the low and high size ranges. From the experimental investigations after the lapping process, the optimal lapping time, abrasive size, lapping pressure, etc. could be evaluated.

Keywords: atmospheric plasma spraying, ceramics, lapping, surface quality, optimization

Procedia PDF Downloads 389
852 Impacts of Urbanization on Forest and Agriculture Areas in Savannakhet Province, Lao People's Democratic Republic

Authors: Chittana Phompila

Abstract:

The current population increase pushes up demands for natural resources and living space. In Laos, urban areas have been expanding rapidly in recent years, and this rapid urbanization can have negative impacts on landscapes, including forest and agricultural lands. The primary objectives of this research were 1) to map current urban areas in a large city in Savannakhet Province, Laos, 2) to compare changes in urbanization between 1990 and 2018, and 3) to estimate the forest and agricultural areas lost due to the expansion of urban areas over more than twenty years within the study area. Landsat 8 data were used, and existing GIS data were collected, including spatial data on rivers, lakes, roads, vegetated areas and other land uses/land covers, obtained from government sectors. An object-based classification (OBC) approach was applied in eCognition for image processing and analysis of urban areas. Historical data from other Landsat instruments (Landsat 5 and 7) allowed us to compare changes in urbanization in 1990, 2000, 2010 and 2018 in this study area. Only three main land cover classes were classified, namely forest, agriculture and urban areas. A change detection approach was applied to illustrate the changes in built-up areas over these periods. Our study shows that the overall accuracy of the map was assessed at 95%, with kappa ≈ 0.8. It was found that there is ineffective control over forest and land-use conversion from forest and agriculture to urban areas in many main cities across the province. A large area of agriculture and forest has been lost to this conversion. Uncontrolled urban expansion and inappropriate land use planning can put pressure on our resource utilisation and, as a consequence, can lead to food insecurity and national economic downturn in the long term.
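The map accuracy figures quoted above (overall accuracy and the kappa coefficient) are conventionally derived from a confusion matrix of reference vs. mapped classes; a minimal plain-Python sketch with a hypothetical three-class matrix:

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    total = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(len(cm))) / total
    # chance agreement: product of row and column marginals per class
    expected = sum(
        sum(cm[i]) * sum(row[i] for row in cm) for i in range(len(cm))
    ) / total**2
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

# hypothetical confusion matrix for forest, agriculture, urban (counts)
cm = [[90, 5, 5],
      [4, 88, 8],
      [2, 6, 92]]
acc, kappa = accuracy_and_kappa(cm)
```

Kappa corrects overall accuracy for agreement expected by chance, which is why a 95% accurate map can still report a kappa of only ~0.8.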

Keywords: urbanisation, forest cover, agriculture areas, Landsat 8 imagery

Procedia PDF Downloads 132
851 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR

Authors: Pascal Mwenge, Tumisang Seodigeng

Abstract:

The combination of world population growth and the third industrial revolution has led to a high demand for fuels. On the other hand, the decrease of global fossil fuel deposits and the environmental air pollution caused by these fuels have compounded the challenges the world faces due to its need for energy. Therefore, new forms of environmentally friendly and renewable fuels, such as biodiesel, are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods have proven reliable but are demanding and costly, and do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy has been studied; the study was performed using an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module from the iC IR 7.0 software. Fifteen samples of known concentrations, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering and variance scaling, spectrum math square root and solvent subtraction. These pre-processing methods improved the performance indexes from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999 for RMSEC, RMSECV, RMSEP and R²cum, respectively. The R² values of 1 (training), 0.9918 (test) and 0.9946 (cross-validation) indicated the fitness of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two were quite close at concentrations above 18%. The software eliminated the complexity of the partial least squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and lab scale.
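The mean-centering and variance-scaling step mentioned above (column-wise autoscaling of the spectra before PLS calibration) can be sketched in plain Python as follows (the tiny spectra matrix is purely illustrative):

```python
import math

def autoscale(spectra):
    """Mean-center and variance-scale each column (wavenumber channel):
    subtract the column mean, then divide by its sample standard deviation,
    so every channel contributes comparably to the calibration model."""
    n, m = len(spectra), len(spectra[0])
    means = [sum(row[j] for row in spectra) / n for j in range(m)]
    stds = [math.sqrt(sum((row[j] - means[j]) ** 2 for row in spectra) / (n - 1))
            for j in range(m)]
    return [[(row[j] - means[j]) / stds[j] for j in range(m)]
            for row in spectra]

# three illustrative "spectra" with two wavenumber channels each
scaled = autoscale([[1.0, 0.2], [2.0, 0.5], [3.0, 0.8]])
```

After autoscaling, every channel has zero mean and unit variance across the calibration set, which prevents high-intensity bands from dominating the PLS components.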

Keywords: biodiesel, calibration, chemometrics, methanolysis, multivariate analysis, transesterification, FTIR

Procedia PDF Downloads 122