Search results for: gradient boosting machines (GBM)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1576

616 Constructing a Bayesian Network for Solar Energy in Egypt Using Life Cycle Analysis and Machine Learning Algorithms

Authors: Rawaa H. El-Bidweihy, Hisham M. Abdelsalam, Ihab A. El-Khodary

Abstract:

In an era where machines run and shape our world, the need for a stable, inexhaustible source of energy emerges. This study focuses on solar energy in Egypt as a renewable source. The most important factors that could affect solar energy's market share throughout its life-cycle production were analyzed and filtered, and the relationships between them were derived before structuring a Bayesian network. Forecasting models were also built for multiple factors to predict their states in Egypt by 2035, based on historical data and patterns, for use as the nodes' states in the network. An initial 37 factors were found to potentially impact the use of solar energy; these were reduced to the 12 factors judged most influential on the solar energy life cycle in Egypt, based on expert surveys and data analysis, with some factors found to recur in multiple stages. The presented Bayesian network can later be used for scenario and decision analysis of using solar energy in Egypt as a stable renewable source for generating any type of energy needed.
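As a rough illustration of the forecasting step, the sketch below fits a first-order autoregressive model (a minimal stand-in for the ARIMA/SARIMA models named in the keywords) to a synthetic historical series and projects it forward; the data and the AR(1) simplification are assumptions, not the authors' actual series or model order.

```python
# Minimal AR(1) forecaster -- a stand-in for the ARIMA/SARIMA models used to
# project node states to 2035 (the series below is synthetic, not the paper's data).

def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
          sum((a - mx) ** 2 for a in x)
    c = my - phi * mx
    return c, phi

def forecast(series, steps, c, phi):
    out, last = [], series[-1]
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out

# Synthetic "historical" series converging towards a saturation level
hist = [40, 55, 66, 75, 81, 86, 89.5, 92]
c, phi = fit_ar1(hist)
future = forecast(hist, 5, c, phi)
```

Each forecast value would then be discretized into the states used by the corresponding Bayesian-network node.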

Keywords: ARIMA, auto correlation, Bayesian network, forecasting models, life cycle, partial correlation, renewable energy, SARIMA, solar energy

Procedia PDF Downloads 157
615 Manual to Automated Testing: An Effort-Based Approach for Determining the Priority of Software Test Automation

Authors: Peter Sabev, Katalina Grigorova

Abstract:

Test automation allows difficult and time-consuming manual software testing tasks to be performed efficiently, quickly and repeatedly. However, the development and maintenance of automated tests is expensive, so a proper prioritization of what to automate first is needed. This paper describes a simple yet efficient approach to such prioritization of test cases, based on the effort needed both for manual execution and for software test automation. The suggested approach is very flexible because it allows working with a variety of assessment methods and adding or removing candidates at any time. The theoretical ideas presented in this article have been successfully applied in real-world situations in several software companies by the authors and their colleagues, including the testing of real-estate websites; cryptographic and authentication solutions; an OSGi-based middleware framework applied in various systems for smart homes, connected cars, production plants, sensors, home appliances, car head units and engine control units (ECUs); vending machines; medical devices; industrial equipment; and other devices that either contain or are connected to an embedded service gateway.
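A minimal sketch of the effort-based idea: rank test cases by how much manual effort automation would recoup per unit of automation cost. The field names and numbers below are invented for illustration, not the authors' actual scoring scheme.

```python
# Hedged sketch: priority = manual hours saved per hour invested in automation.
# All figures are illustrative placeholders.

def automation_priority(case):
    return case["manual_h"] * case["runs_per_year"] / case["automation_h"]

cases = [
    {"name": "login smoke test", "manual_h": 0.5, "runs_per_year": 200, "automation_h": 8},
    {"name": "report layout",    "manual_h": 2.0, "runs_per_year": 12,  "automation_h": 40},
    {"name": "payment flow",     "manual_h": 1.0, "runs_per_year": 150, "automation_h": 20},
]

# Highest payoff first; candidates can be added or removed at any time.
ranked = sorted(cases, key=automation_priority, reverse=True)
```

The same ranking mechanism works with any assessment method that yields a per-case effort estimate.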

Keywords: automated testing, manual testing, test automation, software testing, test prioritization

Procedia PDF Downloads 336
614 Adsorption: A Decision Maker in the Photocatalytic Degradation of Phenol on Co-Catalysts Doped TiO₂

Authors: Dileep Maarisetty, Janaki Komandur, Saroj S. Baral

Abstract:

In the current work, the photocatalytic degradation of phenol was carried out under both UV and visible light to identify the slowest step limiting the rate of the photo-degradation process. Characterization experiments such as XRD, SEM, FT-IR, TEM, XPS, UV-DRS, PL, BET, UPS, ESR and zeta-potential measurements were conducted to assess the credibility of the catalysts in boosting photocatalytic activity. To explore the synergy, TiO₂ was doped with graphene and alumina. Orbital hybridization upon alumina doping (mediated by graphene) resulted in higher electron transfer from the conduction band of TiO₂ to the alumina surface, where oxygen reduction reactions (ORR) occur. Moreover, doping with alumina and graphene introduced defects into the Ti lattice and improved the adsorptive properties of the modified photo-catalyst. Results showed that these defects promoted ORR on the catalyst's surface. ORR activity produces reactive oxygen species (ROS), which oxidize the phenol molecules adsorbed on the surface of the photo-catalysts, thereby driving the photocatalytic reactions. Since mass transfer is considered the rate-limiting step, various mathematical models were fitted to the experimental data to find the best fit. By varying the parameters, intra-particle diffusion was found to be the slowest step in the degradation process. The Lagergren model gave the best R² values, indicating the nature of the rate kinetics. Similarly, different adsorption isotherms were tested; the Langmuir isotherm fitted best, with a tremendous increase in the uptake capacity (mg/g) of TiO₂-rGO-Al₂O₃ compared with undoped TiO₂. This further assisted in higher adsorption of phenol molecules.
From the experimental results, kinetic modelling and adsorption isotherms, it is concluded that, apart from the changes in surface, optoelectronic and morphological properties that enhanced the photocatalytic activity, intra-particle diffusion within the catalyst's pores serves as the rate-limiting step deciding the fate of the photocatalytic degradation of phenol.
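The two standard models named above have simple closed forms: the Lagergren (pseudo-first-order) kinetics q(t) = qe(1 − e^(−k₁t)) and the Langmuir isotherm qe = qmax·K·Ce/(1 + K·Ce). The sketch below evaluates the first and fits the second from its linearized form on synthetic data; the parameter values are invented, not the paper's measurements.

```python
import math

# Conceptual sketch of the two models the abstract fits (synthetic data only).

def lagergren(t, qe, k1):
    """Pseudo-first-order uptake: q(t) = qe * (1 - exp(-k1 * t))."""
    return qe * (1.0 - math.exp(-k1 * t))

def fit_langmuir(Ce, qe):
    """Fit qe = qmax*K*Ce/(1+K*Ce) via the linear form Ce/qe = Ce/qmax + 1/(K*qmax)."""
    xs, ys = Ce, [c / q for c, q in zip(Ce, qe)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return 1.0 / slope, slope / intercept   # qmax, K

# Synthetic isotherm generated with qmax = 50 mg/g, K = 0.3 L/mg
Ce = [1, 2, 5, 10, 20, 40]
qe = [50 * 0.3 * c / (1 + 0.3 * c) for c in Ce]
qmax, K = fit_langmuir(Ce, qe)
```

On exact synthetic data the linearized fit recovers the generating parameters; real isotherm data would scatter around the line and the R² of the fit would discriminate between candidate models, as done in the study.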

Keywords: ORR, phenol degradation, photo-catalyst, rate kinetics

Procedia PDF Downloads 144
613 Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach

Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas

Abstract:

Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems are of great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers using Smart City technologies have been trying to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision making. Several services have already been implemented in Smart Cities, and more and more will be involved in the future. Water quality monitoring can successfully be implemented in the urban IoT: the combination of water quality sensors, cloud computing, smart city infrastructure and IoT technology can lead to a bright future for environmental monitoring. In past decades, much effort was put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and compares the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the Floating Harbour in the city of Bristol (UK). Bristol is blessed with the Bristol-Is-Open infrastructure, which includes a Wi-Fi network and virtual machines, and was named the UK's smartest city in 2017.
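The workflow described above, numerical-model output used as a training set for a data-driven predictor, can be sketched in miniature as follows. The linear model and the synthetic "simulation" data are stand-ins for the D-Water Quality output and the AI algorithms compared in the study.

```python
# Illustrative sketch: fit a simple predictor to pretend simulation output.
# Data and model choice are assumptions for demonstration only.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b  # intercept, slope

# Pretend simulation output: dissolved oxygen (mg/L) vs water temperature (deg C)
temp   = [8, 10, 12, 14, 16, 18, 20]
do_sim = [11.8, 11.1, 10.5, 9.9, 9.4, 8.9, 8.4]

a, b = fit_line(temp, do_sim)
predict = lambda t: a + b * t   # real-time prediction from a sensor reading
```

In the actual system this predictor would be replaced by the AI algorithms under comparison and fed live readings from the urban IoT sensors.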

Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality

Procedia PDF Downloads 188
612 Epileptic Seizure Onset Detection via Energy and Neural Synchronization Decision Fusion

Authors: Marwa Qaraqe, Muhammad Ismail, Erchin Serpedin

Abstract:

This paper presents a novel architecture for a patient-specific epileptic seizure onset detector using scalp electroencephalography (EEG). The proposed architecture is based on the decision fusion calculated from energy and neural synchronization related features. Specifically, one level of the detector calculates the condition number (CN) of an EEG matrix to evaluate the amount of neural synchronization present within the EEG channels. On a parallel level, the detector evaluates the energy contained in four EEG frequency subbands. The information is then fed into two independent (parallel) classification units based on support vector machines to determine the onset of a seizure event. The decisions from the two classifiers are then combined together according to two fusion techniques to determine a global decision. Experimental results demonstrate that the detector based on the AND fusion technique outperforms existing detectors with a sensitivity of 100%, detection latency of 3 seconds, while it achieves a 2.76 false alarm rate per hour. The OR fusion technique achieves a sensitivity of 100%, and significantly improves delay latency (0.17 seconds), yet it achieves 12 false alarms per hour.
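The two fusion rules are simple boolean combinations of the per-level decisions. In the sketch below the energy-based and synchronization-based decisions are mocked as booleans; in the paper they come from the two SVM classifiers.

```python
# Sketch of the AND/OR decision fusion described above (mock classifier outputs).

def fuse(energy_alarm, sync_alarm, rule="AND"):
    if rule == "AND":   # both levels must agree -> fewer false alarms, more latency
        return energy_alarm and sync_alarm
    if rule == "OR":    # either level suffices  -> lower latency, more false alarms
        return energy_alarm or sync_alarm
    raise ValueError("unknown fusion rule")

# Example window-by-window decisions from the two (mock) classifiers
energy = [False, True, True, False]
sync   = [False, False, True, True]

and_decisions = [fuse(e, s, "AND") for e, s in zip(energy, sync)]
or_decisions  = [fuse(e, s, "OR")  for e, s in zip(energy, sync)]
```

The trade-off reported in the abstract follows directly: AND never alarms more often than OR, which is why it yields fewer false alarms per hour but a longer detection latency.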

Keywords: epilepsy, EEG, seizure onset, electroencephalography, neuron, detection

Procedia PDF Downloads 480
611 Simulation on Influence of Environmental Conditions on Part Distortion in Fused Deposition Modelling

Authors: Anto Antony Samy, Atefeh Golbang, Edward Archer, Alistair McIlhagger

Abstract:

Fused deposition modelling (FDM) is one of the additive manufacturing techniques that has become highly attractive in the industrial and academic sectors. However, parts fabricated through FDM are highly susceptible to geometrical defects such as warpage, shrinkage, and delamination that can severely affect their function. Among the thermoplastic polymer feedstocks for FDM, semi-crystalline polymers are highly prone to part distortion due to polymer crystallization. In this study, the influence of FDM processing conditions such as chamber temperature and print-bed temperature on the induced thermal residual stress and resulting warpage is investigated using a 3D transient thermal model for a semi-crystalline polymer. The thermo-mechanical properties and viscoelasticity of the polymer, as well as the crystallization physics, which accounts for the crystallinity of the polymer, are coupled with the evolving temperature gradient of the printed model. From the results, it was observed that increasing the chamber temperature from 25°C to 75°C led to a 1.5% decrease in residual stress, while decreasing the bed temperature from 100°C to 60°C resulted in a 33% increase in residual stress and a significant rise of 138% in warpage. The simulated warpage data are validated by comparison with the measured warpage values of the samples obtained by 3D scanning.

Keywords: finite element analysis, fused deposition modelling, residual stress, warpage

Procedia PDF Downloads 189
610 Image Fusion Based Eye Tumor Detection

Authors: Ahmed Ashit

Abstract:

Image fusion is a significant and efficient image-processing method used for detecting different types of tumors. It has been used as an effective combination technique for obtaining high-quality images that combine the anatomy and physiology of an organ, and it is the key element in large biomedical machines for diagnosing cancer, such as PET-CT scanners. This work aims to develop an image analysis system for the detection of eye tumors. Different image-processing methods are used to extract the tumor and then mark it on the original image. The images are first smoothed using median filtering. The background of the image is then subtracted and added back to the original, resulting in a brighter area of interest (the tumor area). The images are adjusted to increase the intensity of their pixels, which leads to clearer and brighter images. Once the images are enhanced, their edges are detected using Canny operators, resulting in a segmented image comprising only the pupil and the tumor for the abnormal images, and the pupil alone for the normal, tumor-free images. The normal and abnormal images were collected from two sources: “Miles Research” and “Eye Cancer”. The computerized experimental results show that the developed image-fusion-based eye tumor detection system is capable of detecting the eye tumor and segmenting it to be superimposed on the original image.
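The smoothing step above can be sketched directly: a 3x3 median filter replaces each interior pixel with the median of its neighbourhood, removing impulse ("salt") noise while preserving edges. This is a pure-Python toy on a tiny grid; a real pipeline would use an image library.

```python
# Minimal 3x3 median filter of the kind used in the smoothing step.

def median3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]          # borders left unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = sorted(window)[4]  # median of 9 values
    return out

# Toy image: uniform background with one "salt" noise pixel
img = [[10] * 5 for _ in range(5)]
img[2][2] = 255
clean = median3x3(img)
```

The noise pixel is removed (its neighbourhood median is the background value), which is why median filtering precedes the background subtraction and Canny edge-detection steps.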

Keywords: image fusion, eye tumor, canny operators, superimposed

Procedia PDF Downloads 365
609 A Mathematical Based Prediction of the Forming Limit of Thin-Walled Sheet Metals

Authors: Masoud Ghermezi

Abstract:

Studying sheet metals is one of the most important research areas in the field of metal forming due to their extensive applications in the aerospace industries. A useful method for determining the forming limit of these materials, and consequently preventing the rupture of sheet metals during the forming process, is the forming limit curve (FLC). In addition to specifying the forming limit, this curve delineates a boundary for the allowed values of strain in sheet metal forming; these characteristics of the FLC, along with its accuracy of computation and wide range of applications, have made this curve the basis of the research in the present paper. This study presents a new model that not only agrees with the results obtained from the M-K theory but also eliminates its shortcomings. In this model, as in the M-K theory, a thin sheet with an inhomogeneity in the form of a gradient thickness reduction with a sinusoidal function is chosen and subjected to two-dimensional stress. Through analytical evaluation, a governing differential equation is ultimately obtained. The numerical solution of this equation for the range of positive strains (stretched region) yields results that agree with those of the M-K theory, while its solution for the range of negative strains (tension region) completes the FLC. The findings obtained by applying this equation to two alloys with hardening exponents of 0.4 and 0.24 indicate the validity of the presented equation.

Keywords: sheet metal, metal forming, forming limit curve (FLC), M-K theory

Procedia PDF Downloads 366
608 Analysis and Modeling of Vibratory Signals Based on LMD for Rolling Bearing Fault Diagnosis

Authors: Toufik Bensana, Slimane Mekhilef, Kamel Tadjine

Abstract:

The use of vibration analysis has been established as the most common and reliable method in the field of condition monitoring and diagnostics of rotating machinery. Rolling bearings are found in a broad range of rotary machines and play a crucial role in the modern manufacturing industry. Unfortunately, the vibration signals collected from a faulty bearing are generally non-stationary and nonlinear, with strong noise interference, so it is essential to extract the fault features correctly. In this paper, a novel numerical analysis method based on local mean decomposition (LMD) is proposed. LMD decomposes the signal into a series of product functions (PFs), each of which is the product of an envelope signal and a purely frequency-modulated (FM) signal. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the FM signal is the instantaneous frequency (IF). The fault characteristic frequency of the rolling bearing can then be extracted by performing spectrum analysis on the instantaneous amplitude of the PF component containing the dominant fault information. The results show the effectiveness of the proposed technique in fault detection and diagnosis of rolling element bearings.
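The first step of LMD can be sketched concretely: from successive local extrema of the signal, local means mᵢ = (maxᵢ + minᵢ)/2 and local magnitudes (envelope estimates) aᵢ = (maxᵢ − minᵢ)/2 are formed; iteratively subtracting the smoothed mean and dividing out the smoothed magnitude separates each product function into its envelope and FM parts. The toy extrema below are invented for illustration.

```python
# Core LMD building block: local mean and local magnitude from adjacent extrema.

def local_mean_and_magnitude(extrema):
    """extrema: alternating local max/min values of the signal."""
    means = [(hi + lo) / 2 for hi, lo in zip(extrema, extrema[1:])]
    mags  = [abs(hi - lo) / 2 for hi, lo in zip(extrema, extrema[1:])]
    return means, mags

# Toy alternating extrema of a decaying oscillation
extrema = [4.0, -3.0, 2.5, -2.0, 1.5]
means, mags = local_mean_and_magnitude(extrema)
```

For a decaying oscillation the magnitude sequence decreases, tracking the envelope (the instantaneous amplitude whose spectrum is analysed for the fault frequency).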

Keywords: fault diagnosis, local mean decomposition, rolling element bearing, vibration analysis

Procedia PDF Downloads 408
607 Effects of Fourth Alloying Additive on Microstructure and Mechanical Properties of Sn-Ag-Cu Alloy

Authors: Ugur Buyuk, Sevda Engin

Abstract:

Among the various alloy systems considered as lead-free solder candidates, Sn-Ag-Cu alloys have been recognized as the most promising because of their excellent reliability and compatibility with current components. Sn-Ag-Cu alloys have therefore attracted considerable attention and have been proposed by Japanese, EU and US consortiums to replace the conventional Sn-Pb eutectic solder. However, many problems or unknown characteristics of the Sn-Ag-Cu alloy system remain, such as the best composition, the large undercooling in solidification, and the formation of large intermetallics. It is expected that the addition of solidification nuclei to Sn-Ag-Cu alloys will refine the solidification microstructure and suppress undercooling. In the present work, the effects of fourth alloying elements, i.e., Zn, Ni, Bi, In and Co, on the microstructural and mechanical properties of Sn-3.5Ag-0.9Cu lead-free solder were investigated. Sn-3.5Ag-0.9Cu-0.5X (X = Zn, Ni, Bi, In, Co (wt.%)) alloys were prepared in a graphite crucible under a vacuum atmosphere. The samples were directionally solidified upward at a constant temperature gradient and constant growth rates using a Bridgman-type directional solidification furnace. The microstructure, microhardness and ultimate tensile strength of the alloys were measured, and the results were compared with previous experimental results.

Keywords: lead-free solders, microhardness, microstructure, tensile strength

Procedia PDF Downloads 414
606 Application of Chemical Tests for the Inhibition of Scaling From Hamma Hard Waters

Authors: Samira Ghizellaoui, Manel Boumagoura

Abstract:

Calcium carbonate precipitation is a widespread problem, especially in hard-water systems. The main supply of drinking water for the city of Constantine is groundwater known as Hamma water, which has a very high hardness of around 590 mg/L CaCO₃. This leads to the formation of scale, consisting mainly of calcium carbonate, which can be responsible for the clogging of valves and the deterioration of equipment (water heaters, washing machines and encrustations in pipes). Plant extracts used as scale inhibitors have attracted the attention of several researchers; in recent years, green inhibitors have drawn great interest because they are biodegradable, non-toxic and do not affect the environment. The aim of our work is to evaluate the effectiveness of a chemical antiscale treatment in the presence of three green inhibitors (gallic acid, quercetin and alginate) and three mixtures (gallic acid-quercetin, quercetin-alginate and gallic acid-alginate). The results show that the inhibitory effect appears from an addition of 1 mg/L of gallic acid, 10 mg/L of quercetin, 0.2 mg/L of alginate, 0.4 mg/L of gallic acid-quercetin, 2 mg/L of quercetin-alginate and 0.4 mg/L of gallic acid-alginate. On the other hand, 100 mg/L of Ca²⁺ (the drinking-water standard) is reached for partial softening at 4 mg/L of gallic acid, 40 mg/L of quercetin, 0.6 mg/L of alginate, 4 mg/L of gallic acid-quercetin, 10 mg/L of quercetin-alginate and 1.6 mg/L of gallic acid-alginate.

Keywords: water, scaling, calcium carbonate, green inhibitor

Procedia PDF Downloads 70
605 Auto-Tuning of CNC Parameters According to the Machining Mode Selection

Authors: Jenq-Shyong Chen, Ben-Fong Yu

Abstract:

CNC (computer numerical control) machining centers are widely used for machining different metal components in various industries. For a specific CNC machine, everyday jobs involve cutting different products with quite different attributes such as material type, workpiece weight, geometry, tooling and cutting conditions. Theoretically, the dynamic characteristics of the CNC machine should be properly tuned to match each machining job in order to obtain the optimal machining performance. However, most CNC machines are set with only a standard set of CNC parameters. In this study, we have developed an auto-tuning system which automatically changes the CNC parameters, and hence the machine's dynamic characteristics, according to the selection of machining modes, which are set by a mixed combination of three machine performance indexes: the HO (high surface quality) index, the HP (high precision) index and the HS (high speed) index. The acceleration, jerk, corner error tolerance, oscillation and dynamic bandwidth of the machine's feed axes are changed according to the selected performance indexes. The proposed auto-tuning system has been implemented on a PC-based CNC controller and a three-axis machining center. The measured experimental results have shown the promise of the proposed auto-tuning system.
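The selection mechanism can be sketched as a mapping from the chosen performance indexes to parameter sets, blended when several modes are combined. The parameter names, values and the minimum-blending rule below are all invented to illustrate the idea; they are not the controller's real parameters or the authors' tuning law.

```python
# Illustrative mode-to-parameter mapping (all names and values are assumptions).

PROFILES = {
    "HS": {"acceleration": 1.0, "jerk": 1.0, "corner_tolerance": 0.05},   # high speed
    "HP": {"acceleration": 0.4, "jerk": 0.3, "corner_tolerance": 0.005},  # high precision
    "HO": {"acceleration": 0.6, "jerk": 0.4, "corner_tolerance": 0.01},   # high surface quality
}

def tune(modes):
    """Blend selected modes by taking, conservatively, the minimum of each parameter."""
    keys = PROFILES["HS"].keys()
    return {k: min(PROFILES[m][k] for m in modes) for k in keys}

# Mixed combination of indexes, as described in the abstract
params = tune(["HS", "HP"])
```

In the real system the resulting values would be written to the controller before the job starts, changing the feed-axis dynamics accordingly.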

Keywords: auto-tuning, CNC parameters, machining mode, high speed, high accuracy, high surface quality

Procedia PDF Downloads 381
604 Geochemical Characterization of Bou Dabbous Formation in Thrust Belt Zones, Northern Tunisia

Authors: M. Ben Jrad, A. Belhaj Mohamed, S. Riahi, I. Bouazizi, M. Saidi, M. Soussi

Abstract:

The generative potential, depositional environment, thermal maturity and oil seeps of the organic-rich Bou Dabbous Formation (Ypresian) from the thrust belt of northwestern Tunisia were determined by Rock-Eval and molecular analyses. The paleo-tectonic units in the area show some similarities with equivalent facies in the Mediterranean Sea and Sicily. The Bou Dabbous Formation displays variable source-rock characteristics across the Tellian and Numidian nappe units. Organic matter contents and petroleum potentials are fair to high (reaching 1.95% and 6 kg HC/t of rock, respectively), with marine type II kerogen. An increasing SE-NW maturity gradient is well documented in the study area: the Bou Dabbous organic-rich facies are at a marginally mature stage in the Tellian Unit (Kasseb domain), at a mature to late-mature stage within the Nefza-Ain Allega tectonic windows, and overmature along and north of the Cap Serrat-Ghardimaou master fault. Oil-oil and oil-source rock correlation, based on biomarkers and carbon isotopic composition, shows a positive genetic correlation between the oil seeps and the Bou Dabbous source rock.

Keywords: biomarkers, Bou Dabbous Formation, Northern Tunisia, source rock

Procedia PDF Downloads 485
603 Application of Simulated Annealing to Threshold Optimization in Distributed OS-CFAR System

Authors: L. Abdou, O. Taibaoui, A. Moumen, A. Talib Ahmed

Abstract:

This paper proposes an application of simulated annealing to optimize the detection threshold in an ordered-statistics constant false alarm rate (OS-CFAR) system. Conventional optimization methods, such as the conjugate gradient, can lead to a local optimum and miss the global optimum, and for a system with three or more sensors it is difficult or impossible to find this optimum; hence the need for other methods, such as meta-heuristics. Among meta-heuristic techniques is the simulated annealing (SA) method, inspired by a process used in metallurgy. This technique is based on selecting an initial solution and randomly generating a nearby solution in order to improve the criterion to be optimized. In this work, two parameters are subject to such optimization: the statistical order (k) and the scaling factor (T). Two fusion rules, "AND" and "OR", were considered in the case where the signals are independent from sensor to sensor. The results showed that the proposed method is efficient for solving such optimization problems in a distributed system. Its advantage is that it browses the entire solution space and, in theory, avoids stagnation of the optimization process in an area of local minimum.
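The SA loop over the two parameters named above (integer order k, real scaling factor T) can be sketched as follows. The objective function below is a toy surrogate chosen only to exercise the search, not the actual OS-CFAR detection criterion, and the cooling schedule is an assumption.

```python
import math, random

# Hedged SA sketch over (k, T); objective is a toy surrogate with minimum at k=12, T=1.8.

def objective(k, T):
    return (k - 12) ** 2 + 5 * (T - 1.8) ** 2

def neighbour(k, T, rng):
    return max(1, k + rng.choice([-1, 0, 1])), max(0.1, T + rng.uniform(-0.2, 0.2))

def anneal(k, T, steps=2000, temp0=10.0, alpha=0.999, seed=1):
    rng = random.Random(seed)
    best_k, best_T = k, T
    cur = best = objective(k, T)
    temp = temp0
    for _ in range(steps):
        nk, nT = neighbour(k, T, rng)
        cand = objective(nk, nT)
        # accept improvements always, worse moves with Boltzmann probability
        if cand < cur or rng.random() < math.exp((cur - cand) / temp):
            k, T, cur = nk, nT, cand
            if cur < best:
                best, best_k, best_T = cur, nk, nT
        temp *= alpha          # geometric cooling
    return best_k, best_T, best

k_opt, T_opt, score = anneal(k=3, T=0.5)
```

Accepting occasional uphill moves (with probability shrinking as the temperature cools) is what lets SA escape local minima that trap gradient-based methods.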

Keywords: distributed system, OS-CFAR system, independent sensors, simulated annealing

Procedia PDF Downloads 497
602 New Method for Determining the Distribution of Birefringence and Linear Dichroism in Polymer Materials Based on Polarization-Holographic Grating

Authors: Barbara Kilosanidze, George Kakauridze, Levan Nadareishvili, Yuri Mshvenieradze

Abstract:

A new method for determining the distribution of birefringence and linear dichroism in optical polymer materials is presented. The method is based on a polarization-holographic diffraction grating that forms an orthogonal circular basis during the diffraction of a probing laser beam on the grating. The intensity ratio of the diffraction orders of this grating enables the values of birefringence and linear dichroism in the sample to be determined. The distribution of birefringence in the sample is determined by scanning with a circularly polarized beam at a wavelength far from the absorption band of the material. If the scanning is carried out with a probing beam at a wavelength near a maximum of the absorption band of the chromophore, the distribution of linear dichroism can be determined. An appropriate theoretical model of this method is presented. A laboratory setup was created for the proposed method, and its optical scheme is shown. The results of measurements on polymer films with a two-dimensional gradient distribution of birefringence and linear dichroism are discussed.

Keywords: birefringence, linear dichroism, graded oriented polymers, optical polymers, optical anisotropy, polarization-holographic grating

Procedia PDF Downloads 433
601 A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm

Authors: Daliyah S. Aljutaili, Redna A. Almutlaq, Suha A. Alharbi, Dina M. Ibrahim

Abstract:

All currencies around the world look very different from each other; for instance, the size, color and pattern of the paper differ. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One phase of a currency recognition architecture is feature detection and description. Many algorithms are used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the current SIFT and SURF algorithms, which we call the Speeded-up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. The proposed SR-SIFT algorithm overcomes the problems of both SIFT and SURF: it aims to speed up SIFT feature detection while keeping it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially for small and minimum numbers of best keypoints, and increases the distribution of the number of best keypoints over the surface of the currency. Furthermore, the proposed algorithm increases the accuracy of the true best-point distribution inside the currency edge compared with the other two algorithms.
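The SIFT/SURF family rests on a scale-space response: smoothing the image at two nearby scales and subtracting makes blob-like features stand out as keypoint candidates. The toy difference-of-blur response below illustrates only that underlying idea on a synthetic image; it is not the authors' SR-SIFT implementation, and real detectors use Gaussian (SIFT) or box-filter Hessian (SURF) responses.

```python
# Toy difference-of-blur keypoint response on a synthetic 9x9 image.

def box_blur(img, r):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def dob_response(img):
    """Fine-scale blur minus coarse-scale blur highlights blob centres."""
    fine, coarse = box_blur(img, 1), box_blur(img, 2)
    return [[f - c for f, c in zip(fr, cr)] for fr, cr in zip(fine, coarse)]

# Dark background with a single 3x3 bright blob centred at (4, 4)
img = [[0.0] * 9 for _ in range(9)]
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 100.0

resp = dob_response(img)
peak = max((resp[y][x], y, x) for y in range(9) for x in range(9))
```

The response peaks at the blob centre, which is exactly the kind of location a SIFT- or SURF-style detector would report as a keypoint.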

Keywords: currency recognition, feature detection and description, SIFT algorithm, SURF algorithm, speeded up and robust features

Procedia PDF Downloads 235
600 Thermophoresis Particle Precipitate on Heated Surfaces

Authors: Rebhi A. Damseh, H. M. Duwairi, Benbella A. Shannak

Abstract:

This work deals with heat and mass transfer by steady laminar boundary-layer flow of a Newtonian viscous fluid over a vertical flat plate with variable surface heat flux, embedded in a fluid-saturated porous medium, in the presence of the thermophoresis particle deposition effect. The governing partial differential equations are transformed into non-similar form using a special transformation and solved numerically by an implicit finite difference method. Many results are obtained, and a representative set is displayed graphically to illustrate the influence of the various physical parameters on the wall thermophoresis deposition velocity and concentration profiles. It is found that increasing the thermophoresis constant or temperature differences enhances heat transfer rates from vertical surfaces and increases wall thermophoresis velocities; this is due to favourable temperature gradients or buoyancy forces. It is also found that the effect of the thermophoresis phenomenon is more pronounced near the pure natural-convection heat transfer limit, because this phenomenon depends directly on the temperature gradient or buoyancy forces. Comparisons with previously published work in the limits are performed, and the results are found to be in excellent agreement.

Keywords: thermophoresis, porous medium, variable surface heat flux, heat transfer

Procedia PDF Downloads 203
599 Gradations in Concentration of Heavy and Mineral Elements with Distance and Depth of Soil in the Vicinity of Auto Mechanic Workshops in Sabon Gari, Kaduna State, Nigeria

Authors: E. D. Paul, H. Otanwa, O. F. Paul, A. J. Salifu, J. E. Toryila, C. E. Gimba

Abstract:

The concentration levels of six heavy metals (Cd, Cr, Fe, Ni, Pb, and Zn) and two mineral elements (Ca and Mg) were determined in soil samples collected from the vicinity of two auto mechanic workshops in Sabon-Gari, Kaduna State, Nigeria, using Atomic Absorption Spectrometry (AAS), in order to compare the gradation of their concentrations with distance and depth of soil from the workshop sites. At site 1, the concentrations of lead, chromium, iron, and zinc were generally found to be above the World Health Organization limits, while those of nickel and cadmium fell within the limits. Iron had the highest concentration, with a range of 176.274 ppm to 489.127 ppm at depths of 5 cm to 15 cm and a distance range of 5 m to 15 m, while the concentration of cadmium was the lowest, with a range of 0.001 ppm to 0.008 ppm at similar depth and distance ranges. In addition, there was more calcium (11.521 ppm to 121.709 ppm) than magnesium (11.293 ppm to 21.635 ppm) in all the samples. Similar results were obtained for site 2. The concentrations of all the metals analyzed showed a downward gradient with increasing depth and distance from both workshop sites, except for iron and zinc at site 2. The immediate and remote implications of these findings for the biota are discussed.

Keywords: AAS, heavy metals, mechanic workshops, soil, variation

Procedia PDF Downloads 495
598 Feature Selection Approach for the Classification of Hydraulic Leakages in Hydraulic Final Inspection using Machine Learning

Authors: Christian Neunzig, Simon Fahle, Jürgen Schulz, Matthias Möller, Bernd Kuhlenkötter

Abstract:

Manufacturing companies are facing global competition and enormous cost pressure. The use of machine learning applications can help reduce production costs and create added value. Predictive quality enables product quality to be secured through data-supported predictions, using machine learning models as a basis for decisions on test results. Furthermore, machine learning methods are able to process large amounts of data, deal with unfavourable row-column ratios, detect dependencies between the covariates and the given target, and assess the multidimensional influence of all input variables on the target. Real production data are often subject to highly fluctuating boundary conditions and unbalanced data sets. Changes in production data manifest themselves in trends, systematic shifts, and seasonal effects. Thus, machine learning applications require intensive pre-processing and feature selection. Data pre-processing includes rule-based data cleaning, the application of dimensionality reduction techniques, and the identification of comparable data subsets. Within the real data set of Bosch hydraulic valves used here, time periods with comparable production conditions in the manufacture of hydraulic valves can be identified by applying a concept drift detection method. Furthermore, a classification model is developed to evaluate feature importance in different subsets within the identified time periods. By selecting comparable and stable features, the number of features used can be significantly reduced without a strong decrease in predictive power. The use of cross-process production data along the value chain of hydraulic valves is a promising approach to predicting the quality characteristics of workpieces. In this research, the AdaBoost classifier is used to predict the leakage of hydraulic valves based on geometric gauge blocks from machining, mating data from the assembly, and hydraulic measurement data from end-of-line testing. In addition, the most suitable methods are selected and accurate quality predictions are achieved.
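The abstract names AdaBoost as the leakage classifier; a minimal sketch of such a classifier, trained on synthetic stand-in features (the real Bosch gauge, assembly, and end-of-line measurements are not public), might look like this:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for geometric gauge, assembly, and end-of-line features;
# the real production data set is proprietary.
n = 1000
X = rng.normal(size=(n, 5))
# Leakage flag loosely driven by two of the five features, plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Per-feature importances support the feature-selection step described above
print(clf.score(X_test, y_test))
print(clf.feature_importances_.round(2))
```

The `feature_importances_` attribute is one way to rank candidate features per time-period subset before pruning, in the spirit of the selection step the abstract describes.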

Keywords: classification, machine learning, predictive quality, feature selection

Procedia PDF Downloads 162
597 Experimental and Theoretical Approach, Hirshfeld Surface, Reduced Density Gradient, Molecular Docking of a Thiourea Derivative

Authors: Noureddine Benharkat, Abdelkader Chouaih, Nourdine Boukabcha

Abstract:

A thiourea derivative compound was synthesized and subjected to structural analysis using single-crystal X-ray diffraction (XRD). The crystallographic data revealed that it crystallizes in the P21/c space group of the monoclinic system. Examination of the dihedral angles indicated a markedly non-planar structure. To support and interpret these results, density functional theory (DFT) calculations were conducted using the B3LYP functional along with a 6-311G(d,p) basis set. Additionally, to assess the contribution of intermolecular interactions, Hirshfeld surface analysis and 2D fingerprint plots were employed. Various types of interactions, whether weak intramolecular or intermolecular, can significantly impact a molecule's stability, and the distinctive signature of non-covalent interactions can be detected only through electron density analysis. The NCI-RDG analysis was therefore employed to investigate both repulsive and attractive van der Waals interactions, while also calculating the energies associated with intermolecular interactions and their characteristics. Additionally, a molecular docking study was conducted to explain the structure-activity relationship, revealing that the title compound exhibits an affinity energy of -6.8 kcal/mol when docked with B-DNA (1BNA).

Keywords: computational chemistry, density functional theory, crystallography, molecular docking, molecular structure, powder x-ray diffraction, single crystal x-ray diffraction

Procedia PDF Downloads 60
596 Analysis of Noodle Production Process at Yan Hu Food Manufacturing: Basis for Production Improvement

Authors: Rhadinia Tayag-Relanes, Felina C. Young

Abstract:

This study was conducted to analyze the noodle production process at Yan Hu Food Manufacturing as a basis for production improvement. The study utilized the PDCA approach and record review in gathering data for the calendar year 2019, from August to October, on the noodle products miki, canton, and misua. Causal-comparative research was used, as it attempts to establish cause-effect relationships among the variables; descriptive statistics and correlation were used to analyze the data gathered. The study found that miki, canton, and misua production each has a different cycle time per production set, and that each set of the production process yields a different output and a different amount of wastage. The company has not yet established an allowable rejection rate/wastage; instead, this paper used a 1% wastage limit. The researcher recommended the following: the machines used in each process of noodle production must be consistently maintained and monitored; all production operators should be assessed by checking their performance statistically based on output and machine performance; a root cause analysis must be conducted to find solutions; and an improved recording system for the input and output of the noodle production process should be established to eliminate poor recording of data.
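The 1% wastage limit adopted above reduces to a simple per-product check; a sketch of that check on hypothetical batch records (all figures invented for illustration, not the company's data) could be:

```python
# Hypothetical production records (good output and wastage, in kg) for the
# three noodle products; the 1% wastage limit is the one adopted in the study.
records = {
    "miki":   {"output": 1200.0, "wastage": 10.5},
    "canton": {"output":  950.0, "wastage": 12.0},
    "misua":  {"output":  800.0, "wastage":  6.0},
}

WASTAGE_LIMIT = 0.01  # 1% of total input

def wastage_rate(output, wastage):
    """Wastage as a fraction of total input (good output + wastage)."""
    return wastage / (output + wastage)

# Products whose recorded wastage exceeds the 1% limit
flagged = {name: wastage_rate(**r) for name, r in records.items()
           if wastage_rate(**r) > WASTAGE_LIMIT}
print(flagged)
```

With these invented numbers only the canton batch exceeds the limit (about 1.25% wastage), which is the kind of exception the recommended recording system would surface automatically.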

Keywords: continuous improvement, process, operations, PDCA

Procedia PDF Downloads 74
595 Resource-Constrained Assembly Line Balancing Problems with Multi-Manned Workstations

Authors: Yin-Yann Chen, Jia-Ying Li

Abstract:

Assembly line balancing problems can be categorized into one-sided, two-sided, and multi-manned ones according to the number of operators deployed at workstations. This study explores the balancing problem of a resource-constrained assembly line with multi-manned workstations. Resources include machines or tools in assembly lines, such as jigs, fixtures, and hand tools. A mathematical programming model was developed to carry out decision-making and planning in order to minimize the numbers of workstations, resources, and operators and thereby achieve optimal production efficiency. To improve solution-finding efficiency, a genetic algorithm (GA) and a simulated annealing algorithm (SA) were designed and developed in this study and applied to a practical case in car manufacturing. The results of the GA/SA and the mathematical programming model were compared to verify their validity. Finally, the target values, production efficiency, and deployment combinations provided by the algorithms were analyzed and compared, so that the results of this study can serve as references for decision-making on production deployment.
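The authors' GA/SA details are not given in the abstract; a generic simulated-annealing skeleton of the kind used for such assignment problems, here minimizing only the maximum station load on a toy task-to-station instance (a stand-in for the paper's richer objective, which also counts resources and operators), might be:

```python
import math
import random

random.seed(0)

# Toy instance: assign 8 tasks with given times to 3 stations,
# minimizing the maximum station load.
task_times = [4, 7, 3, 5, 6, 2, 8, 4]
n_stations = 3

def cost(assign):
    loads = [0] * n_stations
    for task, st in enumerate(assign):
        loads[st] += task_times[task]
    return max(loads)

def anneal(steps=5000, t0=10.0, cooling=0.999):
    current = [random.randrange(n_stations) for _ in task_times]
    best, t = list(current), t0
    for _ in range(steps):
        # Neighbour: move one random task to a random station
        cand = list(current)
        cand[random.randrange(len(cand))] = random.randrange(n_stations)
        delta = cost(cand) - cost(current)
        # Accept improvements always, worse moves with Boltzmann probability
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = list(current)
        t *= cooling
    return best, cost(best)

best_assign, best_cost = anneal()
```

Real resource-constrained variants would extend `cost` with penalty terms for resource and operator counts, which is where the paper's model differs from this sketch.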

Keywords: heuristic algorithms, line balancing, multi-manned workstation, resource-constrained

Procedia PDF Downloads 208
594 Statistical Analysis with Prediction Models of User Satisfaction in Software Project Factors

Authors: Katawut Kaewbanjong

Abstract:

We analyzed a large volume of data and identified software project factors significantly associated with user satisfaction. A statistical significance analysis (logistic regression) and a collinearity analysis determined the significant factors from a group of 71 pre-defined factors across 191 software projects in ISBSG Release 12. The eight prediction models used for testing the predictive potential of these factors were neural network, k-NN, naïve Bayes, random forest, decision tree, gradient-boosted tree, linear regression, and logistic regression. Fifteen pre-defined factors were truly significant in predicting user satisfaction, and they provided 82.71% prediction accuracy when used with a neural network prediction model. These factors were client-server, personnel changes, total defects delivered, project inactive time, industry sector, application type, development type, how the methodology was acquired, development techniques, decision-making process, intended market, size estimate approach, size estimate method, cost recording method, and effort estimate method. These findings may benefit software development managers considerably.
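The screen-then-predict pipeline described above can be sketched on synthetic stand-in data (the ISBSG data set is licensed, and the factor names below are replaced by anonymous columns); here an L1-penalized logistic regression stands in for the paper's significance screening before a neural network is trained on the surviving factors:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-in for ISBSG-style project factors: 10 candidate factors,
# of which only the first three actually drive the binary "user satisfied" flag.
n = 600
X = rng.normal(size=(n, 10))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
     + rng.normal(scale=0.7, size=n) > 0).astype(int)

X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Step 1: L1-penalized logistic regression as a stand-in for significance
# screening -- factors whose coefficients shrink to zero are dropped.
screen = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
kept = np.flatnonzero(np.abs(screen.coef_[0]) > 1e-6)

# Step 2: neural network trained on the surviving factors only.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)
mlp.fit(X_tr[:, kept], y_tr)
print(sorted(kept), round(mlp.score(X_te[:, kept], y_te), 3))
```

The paper's 82.71% figure comes from the real data and its 15 surviving factors; the sketch only illustrates the two-stage structure of the analysis.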

Keywords: prediction model, statistical analysis, software project, user satisfaction factor

Procedia PDF Downloads 124
593 Thermomagnetic Convection of a Ferrofluid in a Non-Uniform Magnetic Field Induced a Current Carrying Wire

Authors: Ashkan Vatani, Peter Woodfield, Nam-Trung Nguyen, Dzung Dao

Abstract:

Thermomagnetic convection of a ferrofluid flow induced by the non-uniform magnetic field around a current-carrying wire was theoretically analyzed and experimentally tested. To demonstrate this phenomenon, the temperature rise of a hot wire immersed in deionized water (DIW) and in ferrofluid as a result of Joule heating was measured using a transient hot-wire technique. When current is applied to the wire, a temperature gradient is imposed on the magnetic fluid, producing a non-uniform magnetic susceptibility in the ferrofluid; this results in a non-uniform magnetic body force that makes the ferrofluid flow as a bulk suspension. For the wire immersed in DIW, free convection is the only means of cooling, while for the ferrofluid a combination of free convection and thermomagnetic convection is expected to enhance heat transfer from the wire beyond that of DIW. Experimental results at different temperatures, for a range of constant currents applied to the wire, show that thermomagnetic convection becomes effective for currents higher than 1.5 A at all temperatures. It is observed that the onset of thermomagnetic convection is directly proportional to the current applied to the wire, and that thermomagnetic convection sets in much faster than free convection. Calculations show that a 35% enhancement in heat transfer can be expected for the ferrofluid compared to DIW for a 3 A current applied to the wire.
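The transient hot-wire technique mentioned above is commonly reduced via the ideal-line model, in which the wire's temperature rise grows linearly with ln(time) and the slope yields the fluid's thermal conductivity. A sketch of that reduction on synthetic data (ideal-line model, all numbers invented, not the authors' measurements) is:

```python
import numpy as np

# Ideal transient hot-wire model: dT = (q / (4*pi*k)) * ln(t) + C, where q is
# the heat input per unit wire length (W/m) and k the fluid thermal
# conductivity. Numbers are invented for illustration.
q = 1.0            # W/m
k_true = 0.6       # W/(m K), roughly water
t = np.linspace(0.1, 2.0, 200)                     # s
dT = q / (4 * np.pi * k_true) * np.log(t) + 0.3    # K
dT += np.random.default_rng(2).normal(scale=1e-3, size=t.size)  # noise

# Recover k from the slope of dT versus ln(t)
slope, _ = np.polyfit(np.log(t), dT, 1)
k_est = q / (4 * np.pi * slope)
print(round(k_est, 3))
```

In the experiment, departures of the measured rise from this logarithmic law are precisely what signal the onset of convection (free or thermomagnetic) around the wire.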

Keywords: cooling, ferrofluid, thermomagnetic convection, magnetic field

Procedia PDF Downloads 263
592 First Principle Studies on the Structural, Electronic and Magnetic Properties of Some BaMn-Based Double Perovskites

Authors: Amel Souidi, S. Bentata, B. Bouadjemi, T. Lantri, Z. Aziz

Abstract:

Perovskite materials that include magnetic elements are relevant due to technological prospects in the spintronics industry. In this work, we have investigated the structural, electronic, and magnetic properties of the double perovskites Ba2MnXO6 with X = Mo and W by using the full-potential linearized augmented plane wave (FP-LAPW) method based on Density Functional Theory (DFT) [1, 2] as implemented in the WIEN2k [3] code. The exchange-correlation potential was treated within the generalized gradient approximation (GGA) [4], as well as with the on-site Coulomb repulsive interaction taken into account in the GGA+U approach. We have analyzed the structural parameters, charge and spin densities, and total and partial densities of states. The results show that the materials crystallize in space group 225 (Fm-3m) with lattice parameters of about 7.97 Å and 7.95 Å for Ba2MnMoO6 and Ba2MnWO6, respectively. The band structures reveal a metallic ferromagnetic (FM) ground state in Ba2MnMoO6 and a half-metallic (HM) ferromagnetic ground state in the Ba2MnWO6 compound, with total magnetic moments of 2.9951 μB (Ba2MnMoO6) and 4.0001 μB (Ba2MnWO6). The GGA+U calculations predict an energy gap in the spin-up bands of Ba2MnWO6. We therefore estimate that this material, with its HM-FM nature, promises applications in spintronics technology.

Keywords: double perovskites, electronic structure, first-principles, semiconductors

Procedia PDF Downloads 369
591 Urban Boundary Layer and Its Effects on Haze Episode in Thailand

Authors: S. Bualert, K. Duangmal

Abstract:

The atmospheric boundary layer reflects the effects of land cover on atmospheric characteristics in terms of the temperature gradient and wind profile. These are key factors controlling atmospheric processes such as dilution and mixing via thermal and mechanical turbulence. Bangkok, ChiangMai, and Hatyai are major cities of central, northern, and southern Thailand, respectively. They differ in location, geography, and city size; Bangkok is the most urbanized and is classified as a mega city, followed by ChiangMai and Hatyai, respectively. They have been suffering from air pollution episodes such as transboundary haze. The worst period in the northern part of Thailand occurred from the end of February through April of each year, when concentrations of particulate matter less than 10 micrometers (PM10) were more than two times higher than Thailand's ambient air quality standard (120 micrograms per cubic meter). Radiosonde soundings and air pollutant (CO, PM10, TSP, O3, NOx) measurements were used to identify the characteristics of the urban boundary layer and the air pollution problems in the cities. Furthermore, the air pollutant profiles showed a good relationship to the characteristics of the urban boundary layer; in particular, a daytime temperature inversion on 29 February 2009 caused CO and particulate matter concentrations two times higher than normal.

Keywords: haze episode, micrometeorology, temperature inversion, urban boundary layer

Procedia PDF Downloads 259
590 Efficiency of Google Translate and Bing Translator in Translating Persian-to-English Texts

Authors: Samad Sajjadi

Abstract:

Machine translation is increasingly used by academic writers, especially students and researchers whose native language is not English. There are numerous studies on machine translation, but few investigations have assessed the accuracy of machine translation from Persian to English at the lexical, semantic, and syntactic levels. Using Groves and Mundt's (2015) model of error taxonomy, the current study evaluated Persian-to-English translations produced by two widely used online translators, Google Translate and Bing Translator. A total of 240 texts were randomly selected from different academic fields (law, literature, medicine, and mass media), with 60 texts for each domain. All texts were rendered by the two translation systems and then by four human translators. All statistical analyses were performed using SPSS. The results indicated that Google Translate's output was more accurate than that of Bing Translator, especially in the domains of medicine (lexis: 186 vs. 225; semantic: 44 vs. 48; syntactic: 148 vs. 264 errors) and mass media (lexis: 118 vs. 149; semantic: 25 vs. 32; syntactic: 110 vs. 220 errors), respectively. Nonetheless, both machines are reasonably accurate in Persian-to-English translation of lexicons and syntactic structures, particularly for mass media and medical texts.

Keywords: machine translations, accuracy, human translation, efficiency

Procedia PDF Downloads 78
589 Development and Validation of a HPLC Method for 6-Gingerol and 6-Shogaol in Joint Pain Relief Gel Containing Ginger (Zingiber officinale)

Authors: Tanwarat Kajsongkram, Saowalux Rotamporn, Sirinat Limbunruang, Sirinan Thubthimthed.

Abstract:

A High-Performance Liquid Chromatography (HPLC) method was developed and validated for the simultaneous estimation of 6-gingerol (6G) and 6-shogaol (6S) in a joint pain relief gel containing ginger extract. Chromatographic separation was achieved on a C18 column (150 × 4.6 mm i.d., 5 μm, Luna) with a mobile phase of acetonitrile and water (gradient elution). The flow rate was 1.0 ml/min and the absorbance was monitored at 282 nm. The proposed method was validated in terms of analytical parameters such as specificity, accuracy, precision, linearity, range, limit of detection (LOD), and limit of quantification (LOQ), determined according to the International Conference on Harmonisation (ICH) guidelines. Linearity was obtained over the ranges 20-60 µg/ml for 6G and 6-18 µg/ml for 6S. Good linearity was observed over these ranges, with linear regression equations Y = 11016x - 23778 for 6G and Y = 19276x - 19604 for 6S (x is the analyte concentration in μg/ml and Y is the peak area). The correlation coefficient was 0.9994 for both markers. The LOD and LOQ were 0.8567 and 2.8555 µg/ml for 6G, and 0.3672 and 1.2238 µg/ml for 6S, respectively. Recoveries ranged from 91.57 to 102.36% for 6G and from 84.73 to 92.85% for 6S across all three spiked levels. The RSD values from repeated extractions were 3.43% for 6G and 3.09% for 6S. Validation of the developed method for precision, accuracy, specificity, linearity, and range thus gave acceptable results.
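The ICH approach to LOD and LOQ used above is commonly computed as LOD = 3.3σ/S and LOQ = 10σ/S, where S is the calibration slope and σ the residual standard deviation of the regression. A sketch of that computation on synthetic calibration points generated around the paper's 6-gingerol line Y = 11016x - 23778 (the scatter values are invented purely for illustration) is:

```python
import numpy as np

# ICH-style LOD/LOQ from a calibration line: LOD = 3.3*sigma/S,
# LOQ = 10*sigma/S, with S the slope and sigma the residual std deviation.
conc = np.array([20, 30, 40, 50, 60], dtype=float)       # ug/ml (6G range)
area = 11016 * conc - 23778                              # paper's 6G line
area = area + np.array([900, -1200, 400, -300, 600])     # invented scatter

slope, intercept = np.polyfit(conc, area, 1)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)   # ddof=2: two fitted parameters

lod = 3.3 * sigma / slope
loq = 10 * sigma / slope
print(round(lod, 2), round(loq, 2))
```

By construction LOQ/LOD = 10/3.3 ≈ 3.03, which matches the ratio of the paper's reported 6G values (2.8555/0.8567) and 6S values (1.2238/0.3672).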

Keywords: ginger, 6-gingerol, HPLC, 6-shogaol

Procedia PDF Downloads 445
588 Healthcare Big Data Analytics Using Hadoop

Authors: Chellammal Surianarayanan

Abstract:

The healthcare industry generates large amounts of data driven by various needs such as record keeping, physicians' prescriptions, medical imaging, sensor data, Electronic Patient Records (EPR), laboratory work, pharmacy, etc. Healthcare data are so big and complex that they cannot be managed by conventional hardware and software. The complexity of healthcare big data arises from the large volume of data, the velocity with which the data accumulate, and the different varieties of data: structured, semi-structured, and unstructured. Despite this complexity, if the trends and patterns within the big data are uncovered and analyzed, higher quality healthcare can be provided at lower cost. Hadoop is an open source software framework for the distributed processing of large data sets across clusters of commodity hardware using a simple programming model. The core components of Hadoop are the Hadoop Distributed File System (HDFS), which offers a way to store large amounts of data across multiple machines, and MapReduce, which offers a way to process large data sets with a parallel, distributed algorithm on a cluster. The Hadoop ecosystem also includes various other tools such as Hive (a SQL-like query language), Pig (a higher-level query language for MapReduce), HBase (a columnar data store), etc. In this paper, an analysis is presented of how healthcare big data can be processed and analyzed using the Hadoop ecosystem.
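The MapReduce model named above has three phases: map emits (key, value) pairs, the framework shuffles them by key, and reduce aggregates each group. A pure-Python sketch of those phases, here counting diagnosis codes across patient records (sample data invented for illustration; on a Hadoop cluster the map and reduce functions would run distributed over HDFS blocks), is:

```python
from collections import defaultdict

# Invented sample data: one patient's diagnosis codes per record.
records = [
    "E11.9 I10 J45",
    "I10 E11.9",
    "J45 I10 I10",
]

def map_phase(record):
    # Emit (code, 1) for every diagnosis code seen
    for code in record.split():
        yield code, 1

def shuffle(pairs):
    # Group all emitted values by key, as the framework does between phases
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Aggregate each key's values -- here, a simple sum
    return key, sum(values)

pairs = (kv for rec in records for kv in map_phase(rec))
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'E11.9': 2, 'I10': 4, 'J45': 2}
```

Hive and Pig, mentioned above, compile higher-level queries down to exactly this kind of map/shuffle/reduce pipeline.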

Keywords: big data analytics, Hadoop, healthcare data, towards quality healthcare

Procedia PDF Downloads 414
587 Implications of Circular Economy on Users Data Privacy: A Case Study on Android Smartphones Second-Hand Market

Authors: Mariia Khramova, Sergio Martinez, Duc Nguyen

Abstract:

Modern electronic devices, particularly smartphones, are characterised by an extremely high environmental footprint and a short product lifecycle. Every year manufacturers release new models with ever better performance, which pushes customers towards new purchases. As a result, millions of devices accumulate in the urban mine. To tackle these challenges, the concept of the circular economy has been introduced to promote the repair, reuse, and recycling of electronics. Electronic devices that previously ended up in landfills or households thus get a second life, reducing the demand for new raw materials. Smartphone reuse is gradually gaining wider adoption, partly due to the price increase of flagship models, consequently boosting circular economy implementation. However, along with reuse of the communication device, a circular economy approach needs to ensure that the data of the previous user have not been 'reused' together with the device. This is especially important since modern smartphones are comparable to computers in terms of performance and the amount of data stored. These data range from pictures, videos, and call logs to social security numbers, passport and credit card details, and from personal information to corporate confidential data. To assess how well data privacy requirements are followed on the second-hand smartphone market, a sample of 100 Android smartphones was purchased from IT Asset Disposition (ITAD) facilities responsible for data erasure and resale. Although the devices should not have contained any user data by the time they left the ITAD, it was possible to retrieve data from 19% of the sample. The techniques applied varied from manual device inspection to sophisticated equipment and tools. These findings indicate a significant barrier to the implementation of the circular economy and a limitation of smartphone reuse. Therefore, in order to motivate users to donate or sell their old devices and make electronics use more sustainable, data privacy on the second-hand smartphone market should be significantly improved. The presented research has been carried out in the framework of the sustainablySMART project, which is part of the Horizon 2020 EU Framework Programme for Research and Innovation.

Keywords: android, circular economy, data privacy, second-hand phones

Procedia PDF Downloads 129