Search results for: iterative algorithms
691 Spatiotemporal Analysis of Land Surface Temperature and Urban Heat Island Evaluation of Four Metropolitan Areas of Texas, USA
Authors: Chunhong Zhao
Abstract:
Remotely sensed land surface temperature (LST) is vital to understanding the land-atmosphere energy balance and the hydrological cycle, and it is therefore widely used to describe the urban heat island (UHI) phenomenon. However, due to technical constraints, satellite thermal sensors cannot provide LST measurements with both high spatial and high temporal resolution, which is why different downscaling techniques and algorithms are used to generate high spatiotemporal resolution LST. Four major metropolitan areas in Texas, USA: Dallas-Fort Worth, Houston, San Antonio, and Austin, all demonstrate UHI effects, and different cities are expected to show varying surface UHI (SUHI) effects along their urban development trajectories. With the help of the Landsat, ASTER, and MODIS archives, this study focuses on the spatial patterns of UHIs and their seasonal and annual variation across these metropolitan areas. Using a Gaussian model, Local Indicators of Spatial Autocorrelation (LISA), and data fusion methods, the study identifies the hotspots and the trajectory of the UHI phenomenon in the four cities. The comparative analysis can help alleviate the adverse effects of UHI and support rational urban planning in the long run.
Keywords: spatiotemporal analysis, land surface temperature, urban heat island evaluation, metropolitan areas of Texas, USA
690 Logistics Optimization: A Literature Review of Techniques for Streamlining Land Transportation in Supply Chain Operations
Authors: Danica Terese Valda, Segundo Villa III, Michiko Yasuda, Jomel Tagaro
Abstract:
This study conducts a thorough literature review of logistics optimization techniques aimed at improving the efficiency of supply chain operations. Logistics optimization encompasses key areas such as transportation management, inventory control, and distribution network design, each of which plays a critical role in streamlining supply chain performance. The review identifies mixed-integer linear programming (MILP) as a dominant method, widely used for its flexibility in handling complex logistics problems. Other methods, such as heuristic algorithms and combinatorial optimization, also prove effective in solving large-scale logistics challenges. Furthermore, real-time data integration and advancements in simulation techniques are transforming decision-making processes within supply chains, leading to more dynamic and responsive operations. The inclusion of sustainability goals, particularly minimizing carbon emissions, has emerged as a growing trend in logistics optimization. This research highlights the need for integrated, holistic approaches that consider the interconnectedness of logistical components. The findings provide valuable insights to guide future research and practical applications, fostering more resilient and efficient supply chains.
Keywords: logistics, techniques, supply chain, land transportation
689 Examining the Impact of Fake News on Mental Health of Residents in Jos Metropolis
Authors: Job Bapyibi Guyson, Bangripa Kefas
Abstract:
The advent of social media has no doubt provided platforms that facilitate the spread of fake news. The devastating impact of this does not end with the prevalence of rumours and propaganda but also poses a potential threat to individuals' mental well-being. Therefore, this study on the impact of fake news on the mental health of residents in Jos metropolis, among other things, interrogates the effect of exposure to fake news on residents' mental health. Anchored on Cultivation Theory, the study adopted a quantitative method and surveyed the opinions of two hundred (200) social media users in Jos metropolis using a purposive sampling technique. The findings reveal that a significant majority of respondents perceive fake news as highly prevalent on social media, with associated feelings of anxiety and stress. The majority of the respondents express confidence in identifying fake news, though a notable proportion lacks such confidence. Strategies for managing the mental impact of encountering fake news include ignoring it, fact-checking, discussing it with others, reporting it to platforms, and seeking professional support. Based on these insights, recommendations were proposed to address the challenges posed by fake news, including promoting media literacy, integrating fact-checking tools, adjusting algorithms, and fostering digital well-being features.
Keywords: fake news, mental health, social media, impact
688 Predictive Analytics of Student Performance Determinants
Authors: Mahtab Davari, Charles Edward Okon, Somayeh Aghanavesi
Abstract:
Every institution of learning is usually interested in the performance of its enrolled students. The level of these performances determines the approach an institution may adopt in rendering academic services. The focus of this paper is to evaluate students' academic performance in given courses of study using machine learning methods. This study evaluated various supervised machine learning classification algorithms, such as Logistic Regression (LR), Support Vector Machine (SVM), Random Forest, Decision Tree, K-Nearest Neighbors, Linear Discriminant Analysis (LDA), and Quadratic Discriminant Analysis, using selected features to predict student performance. The accuracy, precision, recall, and F1 score obtained from 5-fold cross-validation were used to determine the best classification algorithm for predicting students' performance. SVM (using a linear kernel), LDA, and LR were identified as the best-performing machine learning methods. Using the LR model, this study also identified students' educational habits, such as reading and paying attention in class, as strong determinants of above-average performance. Other important features include the student's academic history and work. Demographic factors such as age, gender, and high school graduation had no significant effect on a student's performance.
Keywords: student performance, supervised machine learning, classification, cross-validation, prediction
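For illustration, a minimal scikit-learn sketch of the kind of 5-fold cross-validated comparison described above; the synthetic data and model settings are placeholders, not the study's feature set.

```python
# Hedged sketch: compare several classifiers with 5-fold cross-validation on accuracy,
# precision, recall and F1, mirroring the evaluation described in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis

# Placeholder data standing in for the selected student features (binary outcome).
X, y = make_classification(n_samples=500, n_features=12, n_informative=6, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM (linear)": SVC(kernel="linear"),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_validate(pipe, X, y, cv=5,
                            scoring=["accuracy", "precision", "recall", "f1"])
    print(name, {k: round(v.mean(), 3) for k, v in scores.items() if k.startswith("test_")})
```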
687 Developing Digital Twins of Steel Hull Processes
Authors: V. Ložar, N. Hadžić, T. Opetuk, R. Keser
Abstract:
The development of digital twins strongly depends on efficient algorithms and their capability to mirror real-life processes. Such efforts are required to establish the factories of the future, which face new demands of custom-made production, and ship hull processes face these challenges too. It is therefore important to implement design and evaluation approaches based on production system engineering. In this study, the recently developed finite state method is employed to describe the steel hull process as a platform for the implementation of digital twinning technology. The application is justified by comparing the finite state method with the analytical approach. The method is used to rebuild a model of a real shipyard hull process using a combination of serial and splitting lines. Key performance indicators such as the production rate, work in process, and the probabilities of starvation and blockage are calculated and compared to the corresponding results obtained through a simulation approach using the software tool Enterprise Dynamics. This study confirms that the finite state method is a suitable tool for digital twinning applications. The conclusion highlights the advantages and disadvantages of the methods employed in this context.
Keywords: digital twin, finite state method, production system engineering, shipyard
686 Hybrid Bee Ant Colony Algorithm for Effective Load Balancing and Job Scheduling in Cloud Computing
Authors: Thomas Yeboah
Abstract:
Cloud computing is a new paradigm in computing that promises the delivery of computing as a service rather than a product, whereby shared resources, software, and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet). As a new style of computing on the internet, it has many merits along with some crucial issues that need to be resolved in order to improve the reliability of the cloud environment. These issues are related to load balancing, fault tolerance, and various security concerns in the cloud environment. In this paper, the main concern is to develop an effective load balancing algorithm that gives satisfactory performance to both cloud users and providers. The proposed algorithm (hybrid Bee Ant Colony algorithm) is a combination of two dynamic algorithms: Ant Colony Optimization and the Bees Life algorithm. The Ant Colony algorithm is used in this hybrid to solve load balancing issues, while the Bees Life algorithm is used for the optimization of job scheduling in the cloud environment. The results show that the hybrid Bee Ant Colony algorithm outperforms both the Ant Colony algorithm and the Bees Life algorithm in terms of waiting time and response time when evaluated on the CloudSim simulator.
Keywords: ant colony optimization algorithm, bees life algorithm, scheduling algorithm, performance, cloud computing, load balancing
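The abstract does not detail the hybrid algorithm's internals, so the following sketch illustrates only a generic ant-colony-style, pheromone-guided assignment of tasks to virtual machines; all parameter choices, and the omission of the Bees Life scheduling half, are assumptions.

```python
# Hedged sketch of an ant-colony component for load balancing: pheromone-guided
# assignment of tasks to virtual machines, minimizing makespan. Illustrative only.
import random

def aco_assign(task_lengths, vm_speeds, n_ants=20, n_iters=50, rho=0.1, alpha=1.0, beta=2.0):
    n_tasks, n_vms = len(task_lengths), len(vm_speeds)
    pheromone = [[1.0] * n_vms for _ in range(n_tasks)]
    best_plan, best_makespan = None, float("inf")
    for _ in range(n_iters):
        for _ant in range(n_ants):
            load = [0.0] * n_vms
            plan = []
            for t in range(n_tasks):
                # Heuristic desirability: prefer VMs that would finish this task earliest.
                desirability = [
                    (pheromone[t][v] ** alpha) *
                    ((1.0 / (load[v] + task_lengths[t] / vm_speeds[v])) ** beta)
                    for v in range(n_vms)
                ]
                total = sum(desirability)
                r, acc, choice = random.uniform(0, total), 0.0, n_vms - 1
                for v, d in enumerate(desirability):
                    acc += d
                    if r <= acc:
                        choice = v
                        break
                plan.append(choice)
                load[choice] += task_lengths[t] / vm_speeds[choice]
            makespan = max(load)
            if makespan < best_makespan:
                best_plan, best_makespan = plan, makespan
        # Evaporate pheromone everywhere, then reinforce the best plan found so far.
        for t in range(n_tasks):
            for v in range(n_vms):
                pheromone[t][v] *= (1.0 - rho)
            pheromone[t][best_plan[t]] += 1.0 / best_makespan
    return best_plan, best_makespan

print(aco_assign([40, 10, 30, 25, 55, 15], [1.0, 1.5, 2.0]))
```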
685 Brain Tumor Detection and Classification Using Pre-Trained Deep Learning Models
Authors: Aditya Karade, Sharada Falane, Dhananjay Deshmukh, Vijaykumar Mantri
Abstract:
Brain tumours pose a significant challenge in healthcare due to their complex nature and impact on patient outcomes. The application of deep learning (DL) algorithms in medical imaging has shown promise for accurate and efficient brain tumour detection. This paper explores the performance of various pre-trained DL models ResNet50, Xception, InceptionV3, EfficientNetB0, DenseNet121, NASNetMobile, VGG19, VGG16, and MobileNet on a brain tumour dataset sourced from Figshare. The dataset consists of MRI scans categorized into different types of brain tumours, including meningioma, pituitary, glioma, and no tumour. The study involves a comprehensive evaluation of these models' accuracy and effectiveness in classifying brain tumour images. Data preprocessing, augmentation, and fine-tuning techniques are employed to optimize model performance. Among the evaluated deep learning models, ResNet50 emerges as the top performer with an accuracy of 98.86%. Following closely is Xception, with a strong accuracy of 97.33%. These models showcase robust capabilities in accurately classifying brain tumour images. At the other end of the spectrum, VGG16 trails with the lowest accuracy at 89.02%.
Keywords: brain tumour, MRI image, detecting and classifying tumour, pre-trained models, transfer learning, image segmentation, data augmentation
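A minimal transfer-learning sketch with a pre-trained ResNet50 backbone for four-class MRI classification; the directory layout, preprocessing, and hyperparameters are illustrative assumptions, not the paper's pipeline.

```python
# Hedged sketch of transfer learning with a frozen, ImageNet-pretrained ResNet50 backbone
# for 4-class brain-MRI classification (meningioma, pituitary, glioma, no tumour).
# ResNet-specific input preprocessing and fine-tuning of the backbone are omitted here.
import tensorflow as tf

NUM_CLASSES = 4
base = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                      input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone first; unfreeze later for fine-tuning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Assumed (hypothetical) directory layout: mri/train/<class_name>/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "mri/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "mri/val", image_size=(224, 224), batch_size=32)

model.fit(train_ds, validation_data=val_ds, epochs=10)
```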
684 Single Pole-To-Earth Fault Detection and Location on the Tehran Railway System Using ICA and PSO Trained Neural Network
Authors: Masoud Safarishaal
Abstract:
Detecting the location of pole-to-earth faults is essential for the safe operation of the electrical system of a railroad. This paper uses a combination of evolutionary algorithms and neural networks to increase the accuracy of single pole-to-earth fault detection and location on the Tehran railroad power supply system. The Imperialist Competitive Algorithm (ICA) and Particle Swarm Optimization (PSO) are used to train the neural network in order to improve the accuracy and convergence of the learning process. Due to the system's nonlinearity, fault detection is an ideal application for the proposed method; the 600 Hz harmonic ripple method is used for fault detection in this paper. The substations were simulated by considering various feeding conditions of the circuit, the transformer, and typical Tehran metro parameters for the silicon rectifier. The data required for the network learning process were gathered from the simulation results. The value of the 600 Hz component changes with the location of a single pole-to-earth fault; therefore, the 600 Hz components are used as inputs of the neural network, and the fault location is the output of the network. The simulation results show that the proposed methods can accurately predict the fault location.
Keywords: single pole-to-earth fault, Tehran railway, ICA, PSO, artificial neural network
683 Modified CUSUM Algorithm for Gradual Change Detection in a Time Series Data
Authors: Victoria Siriaki Jorry, I. S. Mbalawata, Hayong Shin
Abstract:
The main objective in a change detection problem is to develop algorithms for the efficient detection of gradual and/or abrupt changes in the parameter distribution of a process or time series data. In this paper, we present a modified cumulative sum (MCUSUM) algorithm to detect the start and end of a time-varying linear drift in the mean of a time series based on a likelihood ratio test procedure. The design, implementation, and performance of the proposed algorithm for linear drift detection are evaluated and compared to the existing CUSUM algorithm using different performance measures. An approach to accurately approximate the threshold of the MCUSUM is also provided. The performance of the MCUSUM for gradual change-point detection is compared to that of the standard cumulative sum (CUSUM) control chart, designed for abrupt shift detection, using Monte Carlo simulations. In terms of the expected time to detection, the MCUSUM procedure is found to perform better than a standard CUSUM chart for detecting a gradual change in the mean. The algorithm is then applied to a randomly generated time series with a gradual linear trend in the mean to demonstrate its usefulness.
Keywords: average run length, CUSUM control chart, gradual change detection, likelihood ratio test
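For context, a minimal sketch of the standard one-sided CUSUM detector that the MCUSUM modifies; the reference value, threshold, and synthetic drifting series are illustrative assumptions, not the paper's design.

```python
# Hedged sketch of a standard upper CUSUM detector for an increase in the mean; the
# paper's MCUSUM modification for a time-varying linear drift is not reproduced here.
import numpy as np

def cusum_upper(x, mu0, k=0.5, h=5.0):
    """Return the first index at which the upper CUSUM statistic exceeds h, else -1."""
    s = 0.0
    for i, xi in enumerate(x):
        s = max(0.0, s + (xi - mu0 - k))  # accumulate deviations above mu0 + k
        if s > h:
            return i
    return -1

rng = np.random.default_rng(0)
# In-control segment followed by a gradual linear drift in the mean.
data = np.concatenate([rng.normal(0, 1, 200),
                       rng.normal(0, 1, 200) + 0.02 * np.arange(200)])
print("first alarm at index:", cusum_upper(data, mu0=0.0))
```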
682 Attributes That Influence Respondents When Choosing a Mate in Internet Dating Sites: An Innovative Matching Algorithm
Authors: Moti Zwilling, Srečko Natek
Abstract:
This paper presents an innovative predictive analytics analysis aimed at finding the best match between two consumers seeking a partner on internet dating sites. The methodology is based on the analysis of consumer preferences and involves data mining and machine learning search techniques. The study is composed of two parts. The first part examines, by means of descriptive statistics, the correlations between a set of attributes reported by men and women who intend to meet each other through social media, usually on the internet. In this part, several hypotheses were examined and statistical analyses were carried out. The results show a strong correlation between the affiliated attributes of men and women with regard to how they present themselves on a social medium such as Facebook. One interesting finding is the strong desire of most respondents to develop a serious relationship. In the second part, the authors used common data mining algorithms to search for and classify the most important attributes that affect the response rate of the other side. The results show that personal presentation and educational background are the attributes that most strongly contribute to a positive attitude toward one's profile from a potential mate.
Keywords: dating sites, social networks, machine learning, decision trees, data mining
681 Algorithms for Computing of Optimization Problems with a Common Minimum-Norm Fixed Point with Applications
Authors: Apirak Sombat, Teerapol Saleewong, Poom Kumam, Parin Chaipunya, Wiyada Kumam, Anantachai Padcharoen, Yeol Je Cho, Thana Sutthibutpong
Abstract:
This research studies a two-step iteration process defined over a finite family of σ-asymptotically quasi-nonexpansive nonself-mappings. Strong convergence is guaranteed under the framework of Banach spaces with some additional structural properties, including strict and uniform convexity, reflexivity, and smoothness assumptions. In analogy with the projection technique for nonself-mappings in Hilbert spaces, we use the generalized projection to construct a point within the corresponding domain. Moreover, we introduce the use of the duality mapping and its inverse to overcome the unavailability of the duality representation that is exploited by Hilbert space theorists. We then apply our results for σ-asymptotically quasi-nonexpansive nonself-mappings to solve for the ideal efficiency of vector optimization problems composed of finitely many objective functions. We also show that the solution obtained from our process is the closest to the origin. Moreover, we give an illustrative numerical example to support our results.
Keywords: asymptotically quasi-nonexpansive nonself-mapping, strong convergence, fixed point, uniformly convex and uniformly smooth Banach space
680 Spatio-Temporal Data Mining with Association Rules for Lake Van
Authors: Tolga Aydin, M. Fatih Alaeddinoğlu
Abstract:
Throughout history, people have made estimates and inferences about the future by using their past experiences. Developing information technologies and improvements in database management systems make it possible to extract useful information from the knowledge at hand for strategic decisions, and different methods have been developed for this purpose. Data mining by association rule learning is one such method. The Apriori algorithm, one of the well-known association rule learning algorithms, is not commonly used on spatio-temporal data sets. However, it is possible to embed time and space features into the data sets and make the Apriori algorithm a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake in Turkey, is a closed basin. This feature causes the volume of the lake to increase or decrease with changes in the amount of water it holds. In this study, evaporation, humidity, lake altitude, rainfall, and temperature parameters recorded in the Lake Van region over the years are used by the Apriori algorithm, and a spatio-temporal data mining application is developed to identify overflows and newly formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying the possible causes of overflows and underflows may be used to alert experts to take precautions and make the necessary investments.
Keywords: apriori algorithm, association rules, data mining, spatio-temporal data
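A minimal sketch of Apriori-style frequent-itemset mining over discretized spatio-temporal records; the items and support threshold are illustrative placeholders, not the Lake Van data.

```python
# Hedged sketch: level-wise Apriori support counting on discretized records where each
# transaction bundles time, space and hydrological attributes as nominal items.
from itertools import combinations

def apriori(transactions, min_support=0.5):
    n = len(transactions)
    k_sets = {frozenset([i]) for t in transactions for i in t}  # candidate 1-itemsets
    frequent = {}
    while k_sets:
        counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
        current = {s: c / n for s, c in counts.items() if c / n >= min_support}
        frequent.update(current)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        keys = list(current)
        k_sets = {a | b for a, b in combinations(keys, 2) if len(a | b) == len(a) + 1}
    return frequent

# Hypothetical discretized observations (placeholder items, not the real measurements).
transactions = [
    frozenset({"season=spring", "rain=high", "evaporation=low", "level=overflow"}),
    frozenset({"season=spring", "rain=high", "level=overflow"}),
    frozenset({"season=summer", "rain=low", "evaporation=high", "level=underflow"}),
    frozenset({"season=summer", "evaporation=high", "level=underflow"}),
]
for itemset, support in sorted(apriori(transactions, 0.5).items(), key=lambda kv: -kv[1]):
    print(sorted(itemset), round(support, 2))
```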
679 Pilot Induced Oscillations Adaptive Suppression in Fly-By-Wire Systems
Authors: Herlandson C. Moura, Jorge H. Bidinotto, Eduardo M. Belo
Abstract:
The present work proposes the development of an adaptive control system that enables the suppression of Pilot Induced Oscillations (PIO) in Digital Fly-By-Wire (DFBW) aircraft. The proposed system consists of a Modified Model Reference Adaptive Control (M-MRAC) integrated with the gain scheduling technique. The PIO oscillations are detected using a Real Time Oscillation Verifier (ROVER) algorithm, which then enables the system to switch between two reference models: one in PIO condition, with low proneness to the phenomenon, and another in normal condition, with high (or medium) proneness. The reference models are defined in a closed-loop condition using the Linear Quadratic Regulator (LQR) control methodology for Multiple-Input-Multiple-Output (MIMO) systems. The implemented algorithms are simulated in software with state-space models and commercial flight simulators as the controlled elements, together with pilot dynamics models. A sequence of pitch angles, named the Synthetic Task (Syntask), is considered as the reference signal that must be tracked by the pilot models. The initial outcomes show that the proposed system can detect and suppress (or mitigate) the PIO oscillations in real time before they reach high amplitudes.
Keywords: adaptive control, digital Fly-By-Wire, oscillations suppression, PIO
678 A Selection Approach: Discriminative Model for Nominal Attributes-Based Distance Measures
Authors: Fang Gong
Abstract:
Distance measures are an indispensable part of many instance-based learning (IBL) and machine learning (ML) algorithms. The value difference metric (VDM) and the inverted specific-class distance measure (ISCDM) are among the top-performing distance measures that address nominal attributes. VDM performs well in some domains owing to its simplicity but poorly in others that contain missing values and non-class attribute noise, while ISCDM typically works better than VDM on such domains. To maximize their advantages and avoid their disadvantages, this paper proposes a selection approach: a discriminative model for nominal attribute-based distance measures. More concretely, VDM and ISCDM are built independently on a training dataset at the training stage, and the most credible one is recorded for each training instance. At the test stage, the nearest neighbor of each test instance is first found by either VDM or ISCDM, and then the most reliable model of its nearest neighbor is chosen to predict its class label. This is simply denoted as a discriminative distance measure (DDM). Experiments are conducted on 34 University of California at Irvine (UCI) machine learning repository datasets, and they show that DDM retains the interpretability and simplicity of VDM and ISCDM but significantly outperforms the original VDM and ISCDM and other state-of-the-art competitors in terms of accuracy.
Keywords: distance measure, discriminative model, nominal attributes, nearest neighbor
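A minimal sketch of the Value Difference Metric on nominal attributes (the ISCDM and the discriminative selection step are not reproduced here); the toy data are placeholders.

```python
# Hedged sketch of the Value Difference Metric (VDM): distance between two nominal
# attribute values is measured via differences in the conditional class distributions.
from collections import defaultdict

def fit_vdm(X, y):
    """Estimate P(class | attribute value) tables from nominal training data."""
    cond = defaultdict(lambda: defaultdict(float))  # (attr, value) -> class -> count
    totals = defaultdict(float)                     # (attr, value) -> count
    for row, label in zip(X, y):
        for a, v in enumerate(row):
            cond[(a, v)][label] += 1.0
            totals[(a, v)] += 1.0
    return cond, totals, set(y)

def vdm_distance(x1, x2, cond, totals, classes, q=2):
    d = 0.0
    for a, (v1, v2) in enumerate(zip(x1, x2)):
        for c in classes:
            p1 = cond[(a, v1)][c] / totals[(a, v1)] if totals[(a, v1)] else 0.0
            p2 = cond[(a, v2)][c] / totals[(a, v2)] if totals[(a, v2)] else 0.0
            d += abs(p1 - p2) ** q
    return d

# Toy nominal dataset: (outlook, windy) -> play
X = [("sunny", "no"), ("sunny", "yes"), ("rain", "yes"), ("overcast", "no")]
y = ["yes", "no", "no", "yes"]
cond, totals, classes = fit_vdm(X, y)
print(vdm_distance(("sunny", "no"), ("rain", "yes"), cond, totals, classes))
```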
677 Estimation of Optimum Parameters of Non-Linear Muskingum Model of Routing Using Imperialist Competition Algorithm (ICA)
Authors: Davood Rajabi, Mojgan Yazdani
Abstract:
The non-linear Muskingum model is an efficient method for flood routing; however, the efficiency of this method is influenced by its three parameters. Therefore, this study assesses the efficiency of the Imperialist Competition Algorithm (ICA) in estimating the optimum parameters of the non-linear Muskingum model. In addition to ICA, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) were also used to provide a benchmark against which to judge ICA. In this regard, ICA was applied to the Wilson flood routing problem; then, the routing of two flood events of the DoAab Samsami River was investigated. In the case of the Wilson flood, the objective function was the sum of squared deviations (SSQ) between observed and calculated discharges. For routing the two other floods, in addition to SSQ, the sum of absolute deviations (SAD) of observed and calculated discharges was also considered as an objective function. For the first floodwater, GA showed the best performance based on SSQ, whereas ICA was in first place based on SAD. For the second floodwater, ICA performed better on both objective functions. According to the obtained results, ICA can be used as an appropriate method to estimate the parameters of the non-linear Muskingum model.
Keywords: Doab Samsami river, genetic algorithm, imperialist competition algorithm, meta-heuristic algorithms, particle swarm optimization, Wilson flood
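A minimal sketch of routing with the non-linear Muskingum storage relation S = K[xI + (1−x)O]^m and the SSQ objective that ICA/GA/PSO would minimize over (K, x, m); the inflow series and parameter values are illustrative, not the Wilson or DoAab Samsami data.

```python
# Hedged sketch: non-linear Muskingum routing and the sum-of-squared-deviations (SSQ)
# objective. A metaheuristic (ICA/GA/PSO) would search over the parameters (K, x, m).
import numpy as np

def muskingum_route(inflow, K, x, m, dt=1.0):
    outflow = np.zeros_like(inflow, dtype=float)
    outflow[0] = inflow[0]                               # assume an initial steady state
    S = K * (x * inflow[0] + (1 - x) * outflow[0]) ** m  # initial storage
    for t in range(len(inflow) - 1):
        outflow[t] = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1 - x)
        S += dt * (inflow[t] - outflow[t])               # continuity equation
    outflow[-1] = ((S / K) ** (1.0 / m) - x * inflow[-1]) / (1 - x)
    return outflow

def ssq(params, inflow, observed_outflow):
    K, x, m = params
    return float(np.sum((muskingum_route(inflow, K, x, m) - observed_outflow) ** 2))

# Illustrative inflow hydrograph and synthetic "observations" (not real flood data).
inflow = np.array([22, 23, 35, 71, 103, 111, 109, 100, 86, 71, 59, 47, 39, 32, 28, 24], float)
observed = muskingum_route(inflow, K=0.9, x=0.25, m=1.2)
print(ssq([0.8, 0.2, 1.3], inflow, observed))
```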
676 Performance Evaluation of Distributed Deep Learning Frameworks in Cloud Environment
Authors: Shuen-Tai Wang, Fang-An Kuo, Chau-Yi Chou, Yu-Bin Fang
Abstract:
2016 became the year of the Artificial Intelligence explosion. AI technologies are becoming increasingly mature, and most well-known tech giants are making large investments to increase their capabilities in AI. Machine learning is the science of getting computers to act without being explicitly programmed, and deep learning is a subset of machine learning that uses deep neural networks to train a machine to learn features directly from data. Deep learning enables many machine learning applications that expand the field of AI. At present, deep learning frameworks are widely deployed on servers for deep learning applications in both academia and industry. In training deep neural networks there are many standard processes and algorithms, but the performance of different frameworks may differ. In this paper, we evaluate the running performance of two state-of-the-art distributed deep learning frameworks that run training calculations in parallel over multiple GPUs and multiple nodes in our cloud environment. We evaluate the training performance of the frameworks with the ResNet-50 convolutional neural network and analyze the factors that account for the performance differences between the two distributed frameworks. Through the experimental analysis, we identify overheads that could be further optimized. The main contribution is that the evaluation results provide further optimization directions for both performance tuning and algorithmic design.
Keywords: artificial intelligence, machine learning, deep learning, convolutional neural networks
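An illustrative sketch of multi-GPU, multi-node data-parallel training of ResNet-50 with PyTorch DistributedDataParallel; the frameworks actually benchmarked in the paper are not named here, and this is not their code.

```python
# Hedged sketch: data-parallel ResNet-50 training across GPUs/nodes. The synthetic batch
# stands in for an ImageNet-style DataLoader with a DistributedSampler.
import os
import torch
import torch.distributed as dist
import torchvision

def main():
    # Assumed launch command: torchrun --nproc_per_node=<gpus> --nnodes=<nodes> train.py
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = torch.device(f"cuda:{local_rank}")

    model = torchvision.models.resnet50(num_classes=1000).to(device)
    model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()

    images = torch.randn(32, 3, 224, 224, device=device)
    labels = torch.randint(0, 1000, (32,), device=device)
    for _ in range(10):
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()        # gradients are all-reduced across ranks during backward
        optimizer.step()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```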
675 The Algorithm to Solve the Extended General Malfatti's Problem in a Convex Circular Triangle
Authors: Ching-Shoei Chiang
Abstract:
Malfatti's problem is the problem of fitting 3 circles into a right triangle such that these 3 circles are tangent to each other, and each circle is also tangent to a pair of the triangle's sides. This problem has been extended to any triangle (called the general Malfatti's problem). Furthermore, the problem has been extended to having 1+2+…+n circles inside the triangle with special tangency properties among the circles and the triangle's sides; we call this the extended general Malfatti's problem. In the extended general Malfatti's problem, call it Tri(Tn), where Tn is the triangle number, there are closed-form solutions for the Tri(T₁) (inscribed circle) problem and the Tri(T₂) (3 Malfatti circles) problem. These problems become more complex when n is greater than 2, and for the Tri(Tn) problem with n > 2, algorithms have been proposed to solve the problems numerically. With a similar idea, this paper proposes an algorithm to find the radii of circles with the same tangency properties when, instead of the boundary of the triangle being a straight line, a convex circular arc is used as the boundary; we try to find Tn circles inside this convex circular triangle with the same tangency properties among the circles and the boundary Carc. We call these problems the Carc(Tn) problems. The CPU time it takes for the Carc(T16) problem, which finds 136 circles inside a convex circular triangle with the specified tangency properties, is less than one second.
Keywords: circle packing, computer-aided geometric design, geometric constraint solver, Malfatti's problem
674 Time Optimal Control Mode Switching between Detumbling and Pointing in the Early Orbit Phase
Authors: W. M. Ng, O. B. Iskender, L. Simonini, J. M. Gonzalez
Abstract:
A multitude of factors, including mechanical imperfections of the deployment system and the separation instance of satellites from launchers, oftentimes results in highly uncontrolled initial tumbling motion immediately after deployment. In particular, small satellites, which are characteristically launched as a piggyback to a large rocket, are generally allocated a large time window to complete detumbling within the early orbit phase. Because of the saturation risk of the actuators, current algorithms are conservative to avoid draining excessive power in the detumbling phase. This work aims to enable time-optimal switching of control modes during the early phase, reducing the time required to transition from launch to sun-pointing mode for power-budget-conscious satellites. A B-dot controller is assumed for detumbling and a PD controller for pointing. Nonlinear Euler rotation equations are used to represent the attitude dynamics of the satellites, and commercial-off-the-shelf (COTS) reaction wheels and magnetorquers are used to perform the manoeuvre. Simulation results are based on a spacecraft attitude simulator, and the use case covers multiple orbits of launch deployment typical of Low Earth Orbit (LEO) satellites.
Keywords: attitude control, detumbling, small satellites, spacecraft autonomy, time optimal control
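A minimal sketch of a B-dot detumbling law with a simple threshold-based switch to a PD pointing mode; the gains, switching threshold, and sensor values are illustrative assumptions, not the simulator's configuration.

```python
# Hedged sketch: B-dot detumbling (magnetorquer dipole m = -k * dB/dt) with a crude
# rate-threshold switch to a PD pointing law; all numbers are illustrative placeholders.
import numpy as np

K_BDOT = 5e4                      # magnetorquer gain (assumed)
K_P, K_D = 0.02, 0.1              # PD gains for the pointing mode (assumed)
OMEGA_SWITCH = np.deg2rad(0.5)    # body-rate threshold for switching modes [rad/s]

def bdot_dipole(b_now, b_prev, dt):
    """Commanded magnetic dipole from the B-dot detumbling law."""
    b_dot = (b_now - b_prev) / dt
    return -K_BDOT * b_dot

def pd_torque(attitude_error_vec, omega):
    """Commanded reaction-wheel torque for sun pointing once body rates are low."""
    return -K_P * attitude_error_vec - K_D * omega

def control_step(omega, b_now, b_prev, attitude_error_vec, dt):
    if np.linalg.norm(omega) > OMEGA_SWITCH:
        return "detumble", bdot_dipole(b_now, b_prev, dt)
    return "pointing", pd_torque(attitude_error_vec, omega)

# One illustrative call with made-up magnetometer and rate values.
mode, cmd = control_step(omega=np.array([0.05, -0.02, 0.03]),
                         b_now=np.array([22e-6, -5e-6, 31e-6]),
                         b_prev=np.array([21e-6, -4e-6, 30e-6]),
                         attitude_error_vec=np.array([0.1, 0.0, -0.05]),
                         dt=1.0)
print(mode, cmd)
```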
673 Intelligent Decision Support for Wind Park Operation: Machine-Learning Based Detection and Diagnosis of Anomalous Operating States
Authors: Angela Meyer
Abstract:
Operation and maintenance costs for wind parks make up a major fraction of a park's overall lifetime cost. To minimize the cost and risk involved, an optimal operation and maintenance strategy requires continuous monitoring and analysis. To facilitate this, we present a decision support system that automatically scans the stream of telemetry sensor data generated by the turbines. By learning decision boundaries and normal reference operating states using machine learning algorithms, the decision support system can detect anomalous operating behavior in individual wind turbines and diagnose the involved turbine sub-systems. Operating personnel can be alerted if a normal operating state boundary is exceeded. The presented decision support system and method are applicable to any turbine type and manufacturer providing telemetry data of the turbine operating state. We demonstrate the successful detection and diagnosis of anomalous operating states in a case study at a German onshore wind park comprising Vestas V112 turbines.
Keywords: anomaly detection, decision support, machine learning, monitoring, performance optimization, wind turbines
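A minimal sketch of learning a normal-operation boundary from turbine telemetry and flagging anomalous operating states, here with an Isolation Forest; the feature set and data are placeholders, not the Vestas V112 SCADA data or the paper's exact method.

```python
# Hedged sketch: fit an anomaly detector on telemetry samples that represent normal
# operation, then flag new operating points that fall outside the learned boundary.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Placeholder columns: wind speed [m/s], active power [kW], gearbox bearing temp [degC]
normal = np.column_stack([rng.normal(8, 2, 2000),
                          rng.normal(1800, 300, 2000),
                          rng.normal(65, 3, 2000)])
detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=0).fit(normal)

new_samples = np.array([[8.2, 1750.0, 66.0],   # plausible operating point
                        [8.1, 600.0, 92.0]])   # under-performance with a hot bearing
print(detector.predict(new_samples))           # +1 = normal, -1 = anomalous operating state
```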
672 Use of Machine Learning in Data Quality Assessment
Authors: Bruno Pinto Vieira, Marco Antonio Calijorne Soares, Armando Sérgio de Aguiar Filho
Abstract:
Nowadays, a massive amount of information is produced by different data sources, including mobile devices and transactional systems. In this scenario, concerns arise about how to establish and maintain data quality, which is now treated as a product to be defined, measured, analyzed, and improved to meet the needs of the consumers who use these data in decision making and company strategy. Information of low quality can lead to issues that consume time and money, such as missed business opportunities, inadequate decisions, and bad risk management actions. The step of identifying, evaluating, and selecting data sources of sufficient quality for a given need has become a costly task for users, since the sources do not provide information about their quality. Traditional data quality control methods are based on user experience or business rules, limiting performance and slowing down the process with less than desirable accuracy. Using advanced machine learning algorithms, it is possible to take advantage of computational resources to overcome these challenges and add value to companies and users. In this study, machine learning is applied to data quality analysis on different datasets, seeking to compare the performance of the techniques according to the dimensions of quality assessment. As a result, we create a ranking of the approaches used, as well as a system that is able to carry out data quality assessment automatically.
Keywords: machine learning, data quality, quality dimension, quality assessment
671 The Optimum Mel-Frequency Cepstral Coefficients (MFCCs) Contribution to Iranian Traditional Music Genre Classification by Instrumental Features
Authors: M. Abbasi Layegh, S. Haghipour, K. Athari, R. Khosravi, M. Tafkikialamdari
Abstract:
An approach is proposed to find the optimum mel-frequency cepstral coefficients (MFCCs) for the Radif of Mirzâ Ábdollâh, the principal emblem and heart of Persian music, as performed by the most famous Iranian masters on two Iranian stringed instruments, 'Tar' and 'Setar'. While investigating the variance of the MFCCs for each record in the music database of 1500 gushe of the repertoire, belonging to 12 modal systems (dastgâh and âvâz), we applied the fuzzy C-means clustering algorithm to each of the 12 coefficients and to different combinations of those coefficients. We repeated the same experiment while increasing the number of coefficients, but the clustering accuracy remained the same. Therefore, we can conclude that the first 7 MFCCs (V-7MFCC) are sufficient for classification of the Radif of Mirzâ Ábdollâh. Classical machine learning algorithms such as MLP neural networks, K-Nearest Neighbors (KNN), Gaussian Mixture Models (GMM), Hidden Markov Models (HMM), and Support Vector Machines (SVM) were employed. Finally, SVM shows the best performance in this study.
Keywords: radif of Mirzâ Ábdollâh, gushe, mel frequency cepstral coefficients, fuzzy c-mean clustering algorithm, k-nearest neighbors (KNN), gaussian mixture model (GMM), hidden markov model (HMM), support vector machine (SVM)
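A minimal sketch of extracting the first 7 MFCCs with librosa and clustering feature vectors with a small fuzzy c-means loop; the file paths, number of clusters, and all parameters are illustrative, not the Radif database itself.

```python
# Hedged sketch: per-recording V-7MFCC feature vectors plus a compact fuzzy c-means
# implementation (memberships sum to 1 per sample); placeholder data stands in for audio.
import numpy as np
import librosa

def first_k_mfcc(path, k=7):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=k)   # shape (k, frames)
    return mfcc.mean(axis=1)                             # one k-dim vector per recording

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)                    # fuzzy memberships
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (dist ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# features = np.vstack([first_k_mfcc(p) for p in gushe_paths])   # hypothetical file list
features = np.random.default_rng(0).normal(size=(60, 7))          # placeholder features
centers, memberships = fuzzy_c_means(features, c=12)               # 12 assumed clusters
print(memberships.argmax(axis=1)[:10])
```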
670 Exploring Data Leakage in EEG Based Brain-Computer Interfaces: Overfitting Challenges
Authors: Khalida Douibi, Rodrigo Balp, Solène Le Bars
Abstract:
In the medical field, applications related to human experiments are frequently linked to reduced sample sizes, which makes the training of machine learning models quite sensitive and therefore neither very robust nor generalizable. This is notably the case in Brain-Computer Interface (BCI) studies, where the sample size rarely exceeds 20 subjects or a small number of trials. To address this problem, several resampling approaches are often used during the data preparation phase, which is a critical step in a data science analysis process. One naive approach often applied by data scientists consists of transforming the entire database before the resampling phase. However, this can cause a model's performance to be incorrectly estimated when making predictions on unseen data. In this paper, we explore the effect of data leakage observed during our BCI experiments for device control through the real-time classification of SSVEPs (Steady State Visually Evoked Potentials). We also study potential ways to ensure optimal validation of the classifiers during the calibration phase to avoid overfitting. The results show that the scaling step is crucial for some algorithms and should be applied after the resampling phase to avoid data leakage and improve results.
Keywords: data leakage, data science, machine learning, SSVEP, BCI, overfitting
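A minimal sketch of the point above: fitting the scaler inside each cross-validation fold (via a Pipeline) rather than scaling the entire dataset before resampling; the data are synthetic, not SSVEP recordings.

```python
# Hedged sketch: "leaky" scaling (fit on the whole dataset before CV) versus leakage-free
# scaling (refit on the training folds only, inside each CV split).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=80, n_features=50, n_informative=5, random_state=0)

# Leaky: the scaler sees the whole dataset, including future validation folds.
X_leaky = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(SVC(), X_leaky, y, cv=5)

# Correct: scaling is refit on the training folds only, inside each CV split.
safe_scores = cross_val_score(make_pipeline(StandardScaler(), SVC()), X, y, cv=5)

print("leaky CV accuracy:       ", leaky_scores.mean().round(3))
print("leakage-free CV accuracy:", safe_scores.mean().round(3))
```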
669 Blockchain-Resilient Framework for Cloud-Based Network Devices within the Architecture of Self-Driving Cars
Authors: Mirza Mujtaba Baig
Abstract:
Artificial Intelligence (AI) is evolving rapidly, and one of the areas this field has influenced is automation. The automobile, healthcare, education, and robotics industries deploy AI technologies constantly, and the automation of tasks is beneficial because it frees time for knowledge-based tasks and introduces convenience into everyday human endeavors. The paper reviews the challenges faced by current implementations of autonomous self-driving cars by exploring the machine learning, robotics, and artificial intelligence techniques employed for the development of this innovation. The controversy surrounding the development and deployment of autonomous machines, e.g., vehicles, begs for an exploration of the configuration of the programming modules. This paper seeks to add to the body of knowledge, assisting researchers in decreasing the inconsistencies in current programming modules. Blockchain is a technology whose applications are mostly found in the financial, pharmaceutical, manufacturing, and artificial intelligence domains. Registering events in a secure manner, as well as applying the external algorithms required for data analytics, is especially helpful for integrating, adapting, maintaining, and extending to new domains, especially predictive analytics applications.
Keywords: artificial intelligence, automation, big data, self-driving cars, machine learning, neural networking algorithm, blockchain, business intelligence
668 Design and Implementation of a Counting and Differentiation System for Vehicles through Video Processing
Authors: Derlis Gregor, Kevin Cikel, Mario Arzamendia, Raúl Gregor
Abstract:
This paper presents a self-sustaining mobile system for counting and classifying vehicles through video processing. It proposes a counting and classification algorithm divided into four steps that can be executed multiple times in parallel on a SBC (Single Board Computer), such as the Raspberry Pi 2, so that it can run in real time. The first step of the proposed algorithm limits the zone of the image that will be processed. The second step performs the detection of moving objects using a BGS (Background Subtraction) algorithm based on the GMM (Gaussian Mixture Model), together with a shadow removal algorithm using physically based features, followed by morphological operations. In the third step, vehicle detection is performed using edge detection algorithms and vehicle tracking through Kalman filters. The last step of the proposed algorithm registers the vehicle passing and performs its classification according to its area. A self-sustainable system is proposed, powered by batteries and photovoltaic solar panels, with data transmission done through GPRS (General Packet Radio Service), eliminating the need for external cabling, which facilitates its deployment and relocation to any place where it could operate. The self-sustaining trailer will allow the counting and classification of vehicles in specific zones with difficult access.
Keywords: intelligent transportation system, object detection, vehicle counting, vehicle classification, video processing
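A minimal sketch of the moving-object detection step: Gaussian-mixture background subtraction with shadow suppression and morphological clean-up in OpenCV; the video path, region of interest, and area filter are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: GMM-based background subtraction (MOG2), shadow removal and morphology,
# followed by a crude contour-area filter standing in for the classification step.
import cv2

cap = cv2.VideoCapture("traffic.mp4")                  # hypothetical input video
bgs = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[200:600, :]                            # restrict processing to a fixed zone
    mask = bgs.apply(roi)
    mask[mask == 127] = 0                              # MOG2 marks shadow pixels with 127
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 800:                   # illustrative size threshold
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(roi, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", roi)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```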
667 Identifying Psychosocial, Autonomic, and Pain Sensitivity Risk Factors of Chronic Temporomandibular Disorder by Using Ridge Logistic Regression and Bootstrapping
Authors: Haolin Li, Eric Bair, Jane Monaco, Quefeng Li
Abstract:
Temporomandibular disorder (TMD) covers a series of musculoskeletal disorders ranging from jaw pain to chronic debilitating pain, and the risk factors for the onset and maintenance of TMD are still unclear. Prior research has shown that the potential risk factors for chronic TMD are related to psychosocial factors, autonomic functions, and pain sensitivity. Using data from the Orofacial Pain: Prospective Evaluation and Risk Assessment (OPPERA) study's baseline case-control study, we examine whether the risk factors identified by prior research remain statistically significant after taking all of the risk measures into account in one single model, and we also compare the relative influence of the risk factors from three different perspectives (psychosocial factors, autonomic functions, and pain sensitivity) on chronic TMD. The statistical analysis is conducted using ridge logistic regression and bootstrapping, and the performance of the algorithms has been assessed using extensive simulation studies. The results support most of the findings of prior research in that many psychosocial and pain sensitivity measures have significant associations with chronic TMD. Surprisingly, however, most of the autonomic-function risk factors did not show the significant associations with chronic TMD described by prior research.
Keywords: autonomic function, OPPERA study, pain sensitivity, psychosocial measures, temporomandibular disorder
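A minimal sketch of ridge (L2-penalized) logistic regression with bootstrap confidence intervals for the coefficients; the synthetic features are placeholders, not the OPPERA variables, and the inference procedure is an illustration rather than the paper's exact method.

```python
# Hedged sketch: L2-penalized logistic regression refit on bootstrap resamples to obtain
# percentile confidence intervals for each standardized coefficient.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=300, n_features=10, n_informative=4, random_state=0)

def fit_coefs(Xb, yb, C=1.0):
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(penalty="l2", C=C, max_iter=1000))
    model.fit(Xb, yb)
    return model.named_steps["logisticregression"].coef_.ravel()

rng = np.random.default_rng(0)
boots = []
for _ in range(500):                                   # bootstrap resampling of subjects
    idx = rng.integers(0, len(y), len(y))
    boots.append(fit_coefs(X[idx], y[idx]))
boots = np.array(boots)

lower, upper = np.percentile(boots, [2.5, 97.5], axis=0)
for j, (lo, hi) in enumerate(zip(lower, upper)):
    flag = "significant" if lo > 0 or hi < 0 else "n.s."
    print(f"feature {j}: 95% CI [{lo:+.3f}, {hi:+.3f}] ({flag})")
```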
666 Determination of Verapamil Hydrochloride in Tablets and Injection Solutions With the Verapamil-Selective Electrode and Possibilities of Application in Pharmaceutical Analysis
Authors: Faisal A. Salih
Abstract:
Verapamil hydrochloride (Ver) is a drug used in medicine as a calcium channel blocker for arrhythmia, angina, and hypertension. For the quantitative determination of Ver in dosage forms, the HPLC method is most often used. A convenient alternative to the chromatographic method is potentiometry using a Ver-selective electrode, which does not require expensive equipment and can be applied without separation from the matrix components, significantly reducing the analysis time; it also avoids toxic organic solvents, making it a "green", environmentally friendly technique. This study establishes that a rational choice of the membrane plasticizer and of the preconditioning and measurement algorithms, which prevent non-exchangeable extraction of Ver into the membrane phase, makes it possible to achieve excellent analytical characteristics for Ver-selective electrodes based on commercially available components. In particular, an electrode with the following membrane composition: PVC (32.8 wt %), ortho-nitrophenyloctyl ether (66.6 wt %), and tetrakis-4-chlorophenylborate (0.6 wt %, or 0.01 M), has a lower detection limit of 4 × 10−8 M and a potential reproducibility of 0.15–0.22 mV. Both direct potentiometry (DP) and potentiometric titration (PT) can be used for the determination of Ver in tablets and injection solutions. The masses of Ver per average tablet weight determined by DP and PT for the same set of 10 tablets were (80.4 ± 0.2 and 80.7 ± 0.2) mg, respectively. The masses of Ver in solutions for injection, determined by DP for two ampoules from one set, were (5.00 ± 0.015 and 5.004 ± 0.006) mg. In all cases, good reproducibility and excellent agreement with the declared quantities were observed.
Keywords: verapamil, potentiometry, ion-selective electrode, pharmaceutical analysis
665 Speed Control of DC Motor Using Optimization Techniques Based PID Controller
Authors: Santosh Kumar Suman, Vinod Kumar Giri
Abstract:
The goal of this paper is to design a speed controller for a DC motor by choosing PID parameters using genetic algorithms (GAs). The DC motor is extensively used in numerous applications such as steel plants, electric trains, cranes, and a great deal more. A DC motor can be represented by a nonlinear model when nonlinearities such as magnetic saturation are considered. To provide effective control, nonlinearities and uncertainties in the model must be taken into account in the control design. Here the DC motor is considered as a third-order system, and three types of tuning techniques for the PID parameters are examined. In this paper, a separately excited DC motor is modeled in MATLAB, and its speed is studied using the proportional, integral, and derivative gains (KP, KI, KD) of the PID controller. Conventional PID controllers fail to control the drive when the load parameters change. The principal aim of this paper is to analyze the performance of optimization techniques, in particular the Genetic Algorithm (GA), for tuning PID controller parameters for the speed control of a DC motor, and to list their advantages over traditional tuning strategies. The results obtained from the GA were compared with those obtained from the traditional method. It was found that the optimization techniques outperform conventional tuning practices for ordinary PID controllers.
Keywords: DC motor, PID controller, optimization techniques, genetic algorithm (GA), objective function, IAE
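A minimal sketch of GA-based PID tuning that evolves (Kp, Ki, Kd) to minimize the IAE of a step response for an illustrative third-order plant; the plant model and GA settings are assumptions, not the paper's motor model.

```python
# Hedged sketch: a small genetic algorithm searches PID gains that minimize the integral
# of absolute error (IAE) of a unit-step response of an assumed third-order plant.
import random

def step_response_iae(kp, ki, kd, dt=0.001, t_end=2.0):
    # Illustrative third-order plant as three cascaded first-order states (poles -5, -2, -1).
    x1 = x2 = x3 = 0.0
    integral = err_prev = iae = 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - x3                                   # unit step reference
        integral += err * dt
        deriv = (err - err_prev) / dt
        u = kp * err + ki * integral + kd * deriv        # PID control law
        err_prev = err
        x1 += dt * (-5.0 * x1 + u)
        x2 += dt * (-2.0 * x2 + x1)
        x3 += dt * (-1.0 * x3 + x2)
        iae += abs(err) * dt
    return iae

def ga_tune(pop_size=30, generations=40, bounds=(0.0, 50.0)):
    pop = [[random.uniform(*bounds) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: step_response_iae(*g))    # rank by fitness (lower IAE is better)
        parents = pop[: pop_size // 2]                   # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]  # arithmetic crossover
            if random.random() < 0.2:                    # Gaussian mutation
                i = random.randrange(3)
                child[i] = min(bounds[1], max(bounds[0], child[i] + random.gauss(0, 2)))
            children.append(child)
        pop = parents + children
    best = min(pop, key=lambda g: step_response_iae(*g))
    return best, step_response_iae(*best)

print(ga_tune())
```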
664 Comparative Study on MANET Using Soft Computing Techniques
Authors: Amarjit Singh, Tripatdeep Singh Dua, Vikas Attri
Abstract:
A Mobile Ad-hoc Network (MANET) is a collection of nodes that dynamically create a network without relying on any fixed infrastructure. In such a network, all mobile nodes depend on each other to send data: a mobile host can pick up data and forward it toward its destination. The usefulness of a MANET depends largely on its Quality of Service (QoS), which is highly constrained for the user, so improving QoS is essential for providing better services. Soft computing techniques are increasingly used to meet MANET QoS requirements, and various protocols and algorithms can be considered depending on the specific requirement. In this paper, we provide a comparative review of existing work on MANETs using various kinds of soft computing techniques. Our review is organized around the specific protocols or algorithms that address QoS needs. We first discuss the various protocols used for routing in MANETs; in the second section, we clarify the concepts and types of soft computing; in the third section, we review prior work on MANETs using different soft computing techniques; in the fourth section, we examine the QoS requirements that exist in MANETs and compare the protocols used in earlier work; and finally, we conclude on the purpose of using MANETs with soft computing technique metrics.
Keywords: mobile ad-hoc network, fuzzy improved genetic approach, neural network, routing protocol, wireless mesh network
663 Interacting with Multi-Scale Structures of Online Political Debates by Visualizing Phylomemies
Authors: Quentin Lobbe, David Chavalarias, Alexandre Delanoe
Abstract:
The ICT revolution has given birth to an unprecedented world of digital traces and has impacted a wide number of knowledge-driven domains such as science, education, and policy making. Nowadays, we are daily fueled by unlimited flows of articles, blogs, messages, tweets, etc. The internet itself can thus be considered as an unsteady hyper-textual environment where websites emerge and expand every day. But there are structures inside knowledge: a given text can always be studied in relation to others or in light of a specific socio-cultural context. By way of their textual traces, human beings are calling each other out: hypertext citations, retweets, vocabulary similarity, etc. We are in fact the architects of a giant web of elements of knowledge whose structures and shapes convey their own information. The global shapes of these digital traces represent a source of collective knowledge, and the question of their visualization remains an open challenge. How can we explore, browse, and interact with such shapes? In order to navigate across these growing constellations of words and texts, interdisciplinary innovations are emerging at the crossroads between the social and computational sciences. In particular, complex systems approaches now make it possible to reconstruct the hidden structures of textual knowledge by means of multi-scale objects of research such as semantic maps and phylomemies. Phylomemy reconstruction is a generic method related to the co-word analysis framework. Phylomemies aim to reveal the temporal dynamics of large corpora of textual content by performing inter-temporal matching on extracted knowledge domains in order to identify their conceptual lineages. This study addresses the question of visualizing the global shapes of online political discussions related to the French presidential and legislative elections of 2017. We aim to build phylomemies on top of a dedicated collection of thousands of French political tweets enriched with archived contemporary news web articles, with the goal of reconstructing the temporal evolution of the online debates fueled by each political community during the elections. To that end, we want to introduce an iterative data exploration methodology implemented and tested within the free software Gargantext, where we combine synchronic and diachronic axes of visualization to reveal the dynamics of our corpora of tweets and web pages as well as their inner syntagmatic and paradigmatic relationships. In doing so, we aim to provide researchers with innovative methodological means to explore online semantic landscapes in a collaborative and reflective way.
Keywords: online political debate, French election, hyper-text, phylomemy
662 Control Strategy for Two-Mode Hybrid Electric Vehicle by Using Fuzzy Controller
Authors: Jia-Shiun Chen, Hsiu-Ying Hwang
Abstract:
Hybrid electric vehicles can reduce pollution and improve fuel economy. Power-split hybrid electric vehicles (HEVs) provide two power paths between the internal combustion engine (ICE) and the energy storage system (ESS) through the gears of an electrically variable transmission (EVT). The EVT allows the ICE to operate independently of vehicle speed at all times, so the ICE can operate in the efficient region of its characteristic brake specific fuel consumption (BSFC) map. The two-mode powertrain can operate in input-split or compound-split EVT modes and in four different fixed-gear configurations. The power-split architecture is advantageous because it combines conventional series and parallel power paths. This research focuses on the input-split and compound-split modes of the two-mode power-split powertrain. Fuzzy Logic Control (FLC) for the internal combustion engine (ICE) and PI control for the electric machines (EMs) are derived for an urban driving cycle simulation. These control algorithms reduce vehicle fuel consumption and improve ICE efficiency while maintaining the state of charge (SOC) of the energy storage system in an efficient range.
Keywords: hybrid electric vehicle, fuel economy, two-mode hybrid, fuzzy control