Search results for: forecasting accuracy.
1367 Multi-Temporal Mapping of Built-up Areas Using Daytime and Nighttime Satellite Images Based on Google Earth Engine Platform
Authors: S. Hutasavi, D. Chen
Abstract:
The built-up area is a significant proxy for measuring regional economic growth and reflects the Gross Provincial Product (GPP). However, an up-to-date and reliable database of built-up areas is not always available, especially in developing countries. Cloud-based geospatial analysis platforms such as Google Earth Engine (GEE) provide the accessibility and computational power those countries need to generate built-up data. Therefore, this study aims to extract the built-up areas in the Eastern Economic Corridor (EEC), Thailand, using daytime and nighttime satellite imagery on the GEE platform. Normalized indices were generated from the Landsat 8 surface reflectance dataset, including the Normalized Difference Built-up Index (NDBI), Built-up Index (BUI), and Modified Built-up Index (MBUI). These indices were applied to identify built-up areas in the EEC. The results show that MBUI performs better than BUI and NDBI, with the highest accuracy of 0.85 and a Kappa of 0.82. Moreover, the overall classification accuracy improved from 79% to 90%, and the error in total built-up area decreased from 29% to 0.7%, after incorporating nighttime light data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day/Night Band (DNB). The results suggest that MBUI combined with nighttime light imagery is appropriate for built-up area extraction and can be utilized for further study of the socioeconomic impacts of regional development policy over the EEC region.
Keywords: Built-up area extraction, Google Earth Engine, adaptive thresholding method, rapid mapping.
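For readers who want to experiment with these indices, the standard definitions are easy to reproduce. Below is a minimal numpy sketch; the band arrays are toy stand-ins for Landsat 8 surface reflectance (a real workflow would pull them from the GEE Landsat 8 SR collection), and the paper's own MBUI modification is not reproduced here.

```python
import numpy as np

def ndbi(swir1, nir):
    """Normalized Difference Built-up Index: (SWIR1 - NIR) / (SWIR1 + NIR)."""
    return (swir1 - nir) / (swir1 + nir)

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def bui(swir1, nir, red):
    """Built-up Index, commonly defined as NDBI - NDVI."""
    return ndbi(swir1, nir) - ndvi(nir, red)

rng = np.random.default_rng(0)
# Toy reflectance stand-ins for Landsat 8 bands B4 (red), B5 (NIR), B6 (SWIR1).
red, nir, swir1 = rng.uniform(0.01, 0.5, (3, 64, 64))
built_up = bui(swir1, nir, red) > 0.0   # the threshold is scene-dependent
print(built_up.mean())
```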
1366 sEMG Interface Design for Locomotion Identification
Authors: Rohit Gupta, Ravinder Agarwal
Abstract:
Surface electromyographic (sEMG) signals have the potential to identify human activities and intention. This potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. This paper deals with the development of a multichannel, cost-efficient sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, a front-end interface integrated circuit designed for ECG applications. The sEMG signal was recorded from two lower-limb muscles for three locomotion modes, namely Plane Walk (PW), Stair Ascending (SA), and Stair Descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 preexisting feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current work proves the suitability of the proposed feature selection algorithm for locomotion recognition, as compared to the existing feature vectors. The SVM classifier outperformed the other classifiers, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.
Keywords: Classifiers, feature selection, locomotion, sEMG.
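The abstract does not publish the class-dependent feature selection itself, but the classical time-domain sEMG features it competes against are well documented. A hedged sketch on synthetic windows follows; the window length, number of windows, and SVM settings are illustrative assumptions, not the paper's protocol.

```python
import numpy as np
from sklearn.svm import SVC

def td_features(window):
    """Classic time-domain sEMG features for one analysis window."""
    mav = np.mean(np.abs(window))                 # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))           # root mean square
    wl = np.sum(np.abs(np.diff(window)))          # waveform length
    zc = np.sum(window[:-1] * window[1:] < 0)     # zero crossings
    return np.array([mav, rms, wl, zc])

rng = np.random.default_rng(0)
# Toy windows for the three locomotion classes: PW=0, SA=1, SD=2.
X = np.array([td_features(rng.normal(0, 1 + c, 200))
              for c in range(3) for _ in range(30)])
y = np.repeat([0, 1, 2], 30)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```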
1365 Effect of Needle Height on Discharge Coefficient and Cavitation Number
Authors: Azadeh Yazdi, Mohammadreza Nezamirad, Sepideh Amirahmadian, Nasim Sabetpour, Amirmasoud Hamedi
Abstract:
Cavitation inside a diesel injector nozzle is investigated using the Reynolds-averaged Navier-Stokes equations. The Schnerr-Sauer cavitation model is used to model cavitation inside the nozzle, with diesel fuel as the carrier fluid. The flow is first verified against previous experimental data, and it was found that the k-epsilon turbulence model yields better accuracy than the k-omega turbulence model. Moreover, the numerically obtained mass flow rate is compared with the experimental value, with a discrepancy of less than 5%, which confirms the accuracy of the current results. Finally, a real-size four-hole nozzle is investigated and the flow inside it is characterized in terms of the velocity profile, discharge coefficient, and cavitation number. It was found that the mesh density could be reduced significantly by utilizing a periodic boundary condition. The velocity contour at the mid-nozzle showed that the maximum velocity occurs at the end of the needle, before the flow enters the orifice area. Last but not least, at the same boundary conditions, when different needle heights were examined, it was found that as needle height increases, both the cavitation number and the discharge coefficient increase, and this effect is more pronounced at smaller needle heights.
Keywords: Cavitation, diesel fuel, CFD, real-size nozzle, mass flow rate.
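As a reference for the two quantities used to characterize the nozzle flow, a small sketch follows. The discharge-coefficient definition is standard; cavitation-number conventions vary, so the one shown is a common injector form and not necessarily the paper's. All numerical values are illustrative.

```python
import math

def discharge_coefficient(m_dot, area, rho, dp):
    """C_d = actual mass flow / ideal (Bernoulli) mass flow through the orifice."""
    return m_dot / (area * math.sqrt(2.0 * rho * dp))

def cavitation_number(p_inj, p_back, p_vap):
    """One common injector convention: K = (P_inj - P_vap) / (P_inj - P_back)."""
    return (p_inj - p_vap) / (p_inj - p_back)

# Illustrative values only: diesel at ~830 kg/m^3, a 0.2 mm orifice,
# 100 MPa injection pressure against 5 MPa back pressure.
area = math.pi * (0.2e-3 / 2) ** 2
print(discharge_coefficient(m_dot=0.0080, area=area, rho=830.0, dp=95e6))
print(cavitation_number(p_inj=100e6, p_back=5e6, p_vap=2000.0))
```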
1364 Optimal Efficiency Control of Pulse Width Modulation - Inverter Fed Motor Pump Drive Using Neural Network
Authors: O. S. Ebrahim, M. A. Badr, A. S. Elgendy, K. O. Shawky, P. K. Jain
Abstract:
This paper demonstrates an improved Loss Model Control (LMC) for a three-phase induction motor (IM) driving a pump load. Compared with other power-loss reduction algorithms for IMs, the presented one has the advantages of fast and smooth flux adaptation, high accuracy, and versatile implementation. The performance of LMC depends mainly on the accuracy of modeling the motor drive and its losses. A loss model for the IM drive has been developed that accounts for the surplus power loss caused by inverter voltage harmonics using closed-form equations, and that also includes magnetic saturation. Further, an Artificial Neural Network (ANN) controller is synthesized and trained offline to determine the optimal flux level that achieves maximum drive efficiency. The drive’s voltage and speed control loops are connected via the stator frequency to avoid the possibility of excessive magnetization. Besides, the resistance change due to temperature is considered by a first-order thermal model; the obtained thermal information enhances motor protection and control. Together, these features have the potential to make the proposed algorithm reliable. Simulation and experimental studies are performed on a 5.5 kW test motor using the proposed control method. The test results are provided and compared with fixed-flux operation to validate the effectiveness.
Keywords: Artificial neural network, ANN, efficiency optimization, induction motor, IM, Pulse Width Modulated, PWM, harmonic losses.
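The flux trade-off behind loss-model control can be made concrete with a two-term loss model: copper losses scale roughly as (T/phi)^2 while iron losses scale as phi^2, so an optimal flux exists for each torque. Below is a sketch under exactly these simplifying assumptions; the coefficients are placeholders, not the identified parameters of the paper's 5.5 kW machine or its ANN-based solution.

```python
import numpy as np

def optimal_flux(torque, a=1.0, b=1.0):
    """Flux minimizing P_loss = a*(T/phi)^2 + b*phi^2 -> phi* = (a/b)**0.25 * sqrt(T).

    a lumps copper-loss effects, b iron-loss effects; both are illustrative.
    """
    return (a / b) ** 0.25 * np.sqrt(torque)

for T in (0.2, 0.5, 1.0):   # per-unit load torque
    phi = optimal_flux(T)
    loss = (T / phi) ** 2 + phi ** 2
    print(f"T={T:.1f} pu -> phi*={phi:.3f} pu, P_loss={loss:.3f} pu")
```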
1363 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer
Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved
Abstract:
Background: Skin cancer is a pressing topic in medical science, and its rising incidence is affecting health and well-being worldwide. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, since the stored image contains disturbances around the region of interest. The approach first locates the relevant part of the extracted skin appearance; an image partitioning (segmentation) model is then applied to remove the disturbance in the picture. Results: After segmentation, feature extraction is performed using a genetic algorithm (GA), and finally classification is performed between the training and test data to evaluate a large set of images, helping doctors make the right prediction. To improve on the existing system, we set our objectives through an analysis: the efficiency of the natural selection process and histogram enrichment are essential in that respect. GA is applied to reduce the false-positive rate while preserving accuracy. Conclusions: The objective of this work is to improve effectiveness; GA accomplishes this task well, bringing down the false-positive rate. The paper also discusses combining deep learning with medical image processing, which provides superior accuracy, and comparable types of processing create reusability without errors.
Keywords: Computer-aided system, detection, image segmentation, morphology.
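As an illustration of the GA-driven feature selection the abstract describes, here is a compact sketch on synthetic data. The population size, generation count, and mutation rate are arbitrary choices, and a logistic-regression fitness stands in for the paper's classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X, y = make_classification(n_samples=200, n_features=20,
                           n_informative=5, random_state=1)

def fitness(mask):
    """Cross-validated accuracy of the classifier on the selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(LogisticRegression(max_iter=500),
                           X[:, mask], y, cv=3).mean()

pop = rng.random((12, X.shape[1])) < 0.5           # random feature masks
for _ in range(15):                                # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]         # truncation selection
    kids = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 6, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]]) # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05       # bit-flip mutation
        kids.append(child ^ flip)
    pop = np.array(kids)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```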
1362 Bayes Net Classifiers for Prediction of Renal Graft Status and Survival Period
Authors: Jiakai Li, Gursel Serpen, Steven Selman, Matt Franchetti, Mike Riesen, Cynthia Schneider
Abstract:
This paper presents the development of a Bayesian belief network classifier for the prediction of graft status and survival period in renal transplantation, using patient profile information available prior to the transplantation. The objective was to explore the feasibility of developing a decision-making tool for identifying the most suitable recipient among the candidate pool members. The dataset was compiled from University of Toledo Medical Center Hospital patients as reported to the United Network for Organ Sharing (UNOS), and had 1228 patient records covering the period from 1987 through 2009. The Bayes net classifiers were developed using the Weka machine learning software workbench. Two separate classifiers were induced from the data set: one to predict the status of the graft as either failed or living, and a second to predict the graft survival period. The classifier for graft status prediction performed very well, with a prediction accuracy of 97.8% and true positive rates of 0.967 and 0.988 for the living and failed classes, respectively. The second classifier, for the graft survival period, yielded a prediction accuracy of 68.2% and a true positive rate of 0.85 for the class representing kidneys failing during the first year following transplantation. Simulation results indicated that it is feasible to develop a successful Bayesian belief network classifier for prediction of graft status, but not of the graft survival period, using the information in the UNOS database.
Keywords: Bayesian network classifier, renal transplantation, graft survival period, United Network for Organ Sharing.
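The study's Bayes nets were induced in Weka; as a loose Python stand-in, a naive Bayes classifier (the simplest Bayesian-network classifier, with all features conditionally independent given the class) conveys the idea. The data below are synthetic placeholders, not UNOS records.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
# Toy stand-in for pre-transplant recipient profiles (age, dialysis years,
# HLA mismatches, ...) with a synthetic "graft failed" label.
X = rng.normal(size=(1228, 6))
y = ((X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.8, size=1228)) > 0).astype(int)

print(cross_val_score(GaussianNB(), X, y, cv=10).mean())
```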
1361 Performance of Heterogeneous Autoregressive Models of Realized Volatility: Evidence from U.S. Stock Market
Authors: Petr Seďa
Abstract:
This paper deals with heterogeneous autoregressive models of realized volatility (HAR-RV models) applied to high-frequency data on stock indices in the USA. Its aim is to capture the behavior of three groups of market participants, trading on a daily, weekly, and monthly basis, and to assess their role in predicting daily realized volatility. The benefit of this work lies mainly in the application of HAR-RV models to U.S. stock indices with a special focus on the impact of the global financial crisis on the models' forecasting performance. We use three data sets: the first from the period before the global financial crisis (2006-2007), the second from the period when the crisis fully hit the U.S. financial market (2008-2009), and the last defined over 2010-2011. The model output indicates that estimated realized volatility in the market is largely determined by daily traders and, in some cases, excludes the impact of those market participants who trade on a monthly basis.
Keywords: Global financial crisis, heterogeneous autoregressive model, in-sample forecast, realized volatility, U.S. stock market.
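The HAR-RV model itself is a simple OLS regression of next-day realized volatility on daily, weekly, and monthly RV averages. A self-contained sketch on simulated data (the AR(1) log-RV process is a toy stand-in for real high-frequency estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.zeros(1000)                        # persistent AR(1) log-RV, toy data
for t in range(1, 1000):
    x[t] = 0.95 * x[t - 1] + 0.2 * rng.normal()
rv = np.exp(x)

def har_design(rv):
    """Regressors: yesterday's RV, 5-day mean RV, and 22-day mean RV."""
    d = rv[21:-1]
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv) - 1)])
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv) - 1)])
    return np.column_stack([np.ones_like(d), d, w, m]), rv[22:]

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("beta0, beta_d, beta_w, beta_m =", beta.round(3))
```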
1360 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations
Authors: Yehjune Heo
Abstract:
Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems in fingerprint anti-spoofing is the lack of robustness to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods, using realistic synthesis of fingerprints in training in order to increase performance. Among the various GAN models, the widely used StyleGAN is employed for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy and mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and the various CNN models were retrained; the best performance of each CNN model trained with the dataset of generated fake images was again recorded as accuracy and mean average error rate. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
Keywords: Anti-spoofing, CNN, fingerprint recognition, GAN.
1359 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique
Authors: Nishant Shrivastava, D. K. Sehgal
Abstract:
In the Finite Element Method, nodal stresses are calculated from the displacements at the nodes. In this process, the displacements computed at the nodes are sufficiently accurate, but the stresses computed at the nodes are not. Therefore, the accuracy of stress computation in displacement-based FEM models is an obvious matter of concern, as is the computational time in shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, where good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the interior of the element, but some points on the element boundary also give stresses that are fairly accurate compared to nodal values. It is subsequently concluded that unique points exist within the element where stresses have higher accuracy than at other points. The main aim is therefore to evolve a generalized procedure for determining the optimal stress locations inside the element and on its boundaries, and to verify it against numerical experiments. Results for quadratic 9-noded serendipity elements are presented, and the locations of the distinct optimal stress points are determined inside the element as well as on the boundaries. The theoretical results indicate that the interior optimal stress locations, in local coordinates, are at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at points a distance of 0.577 from the origin in both directions. These findings were verified through numerical experimentation. Five engineering problems were identified, and the numerical results of the 9-noded element were compared with those obtained using quadratic 25-noded Lagrangian elements of the same order, taken as the standard. Root mean square errors were then plotted with respect to the various locations within the elements and on the boundaries, and conclusions were drawn. The numerical verification showed that, in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for stresses. Stresses calculated on the boundary segment enclosed by the 0.577 points are also very good, with very small error; as the sampling points move away from these points, the error in this zone increases rapidly. Thus, it is established that there are unique points, including on the element boundary, where stresses are accurate; these can be utilized in solving various engineering problems and are also useful in shape optimization.
Keywords: Finite element, Lagrangian, optimal stress location, serendipity.
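A side note for readers: the reported coordinate 0.577 is 1/sqrt(3), the 2-point Gauss-Legendre abscissa, so the interior optimal stress points coincide with the 2x2 Gauss quadrature points (often called Barlow points). This can be checked directly:

```python
import numpy as np

# 0.577... is the 2-point Gauss-Legendre abscissa 1/sqrt(3).
pts, wts = np.polynomial.legendre.leggauss(2)
print(pts, 1 / np.sqrt(3))   # [-0.57735027  0.57735027] 0.5773502691896258
```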
1358 Artificial Neural Networks Modeling in Water Resources Engineering: Infrastructure and Applications
Authors: M. R. Mustafa, M. H. Isa, R. B. Rezaur
Abstract:
The use of artificial neural network (ANN) modeling for the prediction and forecasting of variables in water resources engineering is increasing rapidly. Infrastructural aspects of ANN applications in water resources engineering, in terms of input selection, network architecture, training algorithms, and the choice of training parameters in different types of neural networks, have been reported. ANN modeling of water resources engineering variables (river sediment and discharge) published in high-impact journals from 2002 to 2011 has been examined and is presented in this review. ANN is a robust technique for developing relationships between input and output variables, and it is able to capture complex behavior between water resources variables such as river sediment and discharge. It can produce reliable prediction results for many water resources engineering problems by learning appropriately from a set of examples. It is important to have a good understanding of the input and output variables from a statistical analysis of the data before network modeling, as this facilitates the design of an efficient network. An appropriately trained ANN model can capture the physical relationship between the variables and may generate more accurate results than conventional prediction techniques.
Keywords: ANN, discharge, modeling, prediction, sediment.
1357 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics
Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo
Abstract:
Communication signal modulation recognition is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods fall into two major categories: maximum likelihood hypothesis testing methods based on decision theory, and statistical pattern recognition methods based on feature extraction. The statistical pattern recognition approach, comprising feature extraction and classifier design, is now the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on improved Holder cloud features, and an extreme learning machine (ELM) is used to classify the extracted features, addressing the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment, and it uses the improved cloud model to obtain more stable Holder cloud features, improving the performance of the algorithm. The algorithm addresses the difficulty that a simple feature extraction algorithm based on the Holder coefficient has in recognizing signals at low SNR, and it also achieves better recognition accuracy. Simulation results show that the approach still classifies well at low SNR; even at an SNR of -15 dB, the recognition accuracy reaches 76%.
Keywords: Communication signal, feature extraction, Holder coefficient, improved cloud model.
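The underlying Holder coefficient (before the paper's cloud-model improvement) is built from Holder's inequality; for p = q = 2 it reduces to cosine similarity. Below is a sketch computing it between a signal's amplitude spectrum and the rectangular and triangular reference sequences commonly used in this literature; the signal parameters and reference choice are assumptions, not taken from the paper.

```python
import numpy as np

def holder_coefficient(f1, f2, p=2.0):
    """H = sum(f1*f2) / (sum(f1^p)^(1/p) * sum(f2^q)^(1/q)), with 1/p + 1/q = 1."""
    q = p / (p - 1.0)
    num = np.sum(f1 * f2)
    den = np.sum(f1 ** p) ** (1 / p) * np.sum(f2 ** q) ** (1 / q)
    return num / den

rng = np.random.default_rng(0)
t = np.arange(1024)
sig = np.cos(2 * np.pi * 0.1 * t) + rng.normal(scale=0.5, size=t.size)
spec = np.abs(np.fft.rfft(sig))     # nonnegative amplitude spectrum
rect = np.ones_like(spec)           # rectangular reference sequence
tri = np.bartlett(spec.size)        # triangular reference sequence
print([holder_coefficient(spec, rect), holder_coefficient(spec, tri)])
```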
1356 Evaluation of Best-Fit Probability Distribution for Prediction of Extreme Hydrologic Phenomena
Authors: Karim Hamidi Machekposhti, Hossein Sedghi
Abstract:
Probability distributions are the best method for forecasting extreme hydrologic phenomena such as rainfall and flood flows. In this research, in order to determine suitable probability distributions for estimating annual extreme rainfall and flood flow (discharge) series with different return periods, 40 years of precipitation data and 58 years of discharge data were collected from the Karkheh River in Iran. After homogeneity and adequacy tests, the data were analyzed with the Stormwater Management and Design Aid (SMADA) software using the residual sum of squares (R.S.S.). The best probability distribution was Log Pearson Type III, with R.S.S. values of 145.91 and 13.67 for peak discharge and 141.08 and 8.95 for maximum discharge at the Jelogir Majin and Pole Zal stations, respectively. The best distribution for maximum precipitation at the Jelogir Majin and Pole Zal stations was the Log Pearson Type III distribution, with R.S.S. values of 1.74 and 1.90, followed by the Pearson Type III distribution, with R.S.S. values of 1.53 and 1.69. Overall, the Log Pearson Type III distribution is an acceptable distribution type for representing statistics of extreme hydrologic phenomena on the Karkheh River in Iran, with the Pearson Type III distribution as a potential alternative.
Keywords: Karkheh River, Log Pearson Type III, probability distribution, residual sum of squares.
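Since Log Pearson Type III is simply a Pearson Type III distribution fitted to the logarithms of the data, scipy can reproduce the workflow. The sketch below uses toy discharges and Weibull plotting positions for an R.S.S. comparison in the spirit of the SMADA analysis; none of the numbers correspond to the paper's stations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
q = np.exp(rng.normal(5.0, 0.6, size=58))       # toy annual peak discharges

log_q = np.log(q)                               # LP3 = Pearson III on the logs
skew, loc, scale = stats.pearson3.fit(log_q)

for T in (10, 50, 100):                         # return periods in years
    x_T = np.exp(stats.pearson3.ppf(1 - 1 / T, skew, loc=loc, scale=scale))
    print(f"{T}-year flood estimate: {x_T:.1f} m^3/s")

# R.S.S. check against empirical (Weibull) quantiles:
emp = np.sort(q)
pp = np.arange(1, q.size + 1) / (q.size + 1)    # plotting positions
fit = np.exp(stats.pearson3.ppf(pp, skew, loc=loc, scale=scale))
print("R.S.S. =", np.sum((emp - fit) ** 2))
```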
1355 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in signal processing in recent years, with many applications in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy. Identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for this problem. The method is divided into two parts: first, a clustering algorithm estimates the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. This paper studies the mixing matrix estimation problem and proposes an improved algorithm to estimate the mixing matrix for speech signals under the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper develops an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms, but it also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. Simulation results show that the approach not only improves the accuracy of estimation, but also applies to any mixing matrix.
Keywords: Clustering algorithm, potential function, speech signal, UBSS model.
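The clustering step of sparse component analysis is easy to sketch for the paper's 4-source/2-channel case: when the sources are sparse, observation vectors align with the mixing-matrix columns, so clustering their directions recovers the matrix. In the toy example below, plain k-means replaces the paper's improved potential function.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
angles = np.deg2rad([15, 55, 100, 150])          # ground-truth column directions
A = np.vstack([np.cos(angles), np.sin(angles)])  # 2 mics x 4 speech sources

# Sparse sources: in a suitable transform domain, usually one source dominates
# each sample, so observation vectors line up with the columns of A.
S = rng.laplace(size=(4, 5000)) * (rng.random((4, 5000)) < 0.1)
X = A @ S

keep = np.linalg.norm(X, axis=0) > 0.5           # drop near-silent samples
V = X[:, keep]
V = V * np.sign(V[1] + 1e-12)                    # fold into the upper half-plane
theta = np.arctan2(V[1], V[0]).reshape(-1, 1)
centers = np.sort(KMeans(n_clusters=4, n_init=10)
                  .fit(theta).cluster_centers_.ravel())
A_hat = np.vstack([np.cos(centers), np.sin(centers)])
print(np.rad2deg(centers).round(1))              # ~ [15, 55, 100, 150]
```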
1354 Artifacts in Spiral X-ray CT Scanners: Problems and Solutions
Authors: Mehran Yazdi, Luc Beaulieu
Abstract:
Artifacts are among the most important factors degrading CT image quality, and they play an important role in diagnostic accuracy. In this paper, some artifacts that typically appear in spiral CT are introduced. The different factors that cause artifacts, such as the patient, the equipment, and the interpolation algorithm, are discussed, and new developments and image processing algorithms to prevent or reduce them are presented.
Keywords: CT artifacts, Spiral CT, artifact removal.
1353 Adaptive Neuro-Fuzzy Inference System for Financial Trading using Intraday Seasonality Observation Model
Authors: A. Kablan
Abstract:
The prediction of financial time series is a very complicated process. If the efficient market hypothesis holds, then the predictability of most financial time series would be a rather controversial issue, since the current price already contains all information available in the market. This paper extends the Adaptive Neuro-Fuzzy Inference System for High Frequency Trading, an expert system capable of combining fuzzy reasoning with the pattern recognition capability of neural networks for financial forecasting and trading at high frequency. In order to eliminate unnecessary inputs in the training phase, a new event-based volatility model is proposed. Taking volatility and the scaling laws of financial time series into consideration has brought about the development of the Intraday Seasonality Observation Model. This new model allows the observation of specific events and seasonalities in the data and subsequently removes any unnecessary data. The event-based volatility model provides the ANFIS system with more accurate input and has increased the overall performance of the system.
Keywords: Adaptive Neuro-Fuzzy Inference System, high frequency trading, Intraday Seasonality Observation Model.
1352 Constructing a Bayesian Network for Solar Energy in Egypt Using Life Cycle Analysis and Machine Learning Algorithms
Authors: Rawaa H. El-Bidweihy, Hisham M. Abdelsalam, Ihab A. El-Khodary
Abstract:
In an era where machines run and shape our world, the need for a stable, unending source of energy emerges. This study focuses on solar energy in Egypt as a renewable source. The most important factors that could affect solar energy's market share throughout its life cycle were analyzed and filtered, and the relationships between them were derived before structuring a Bayesian network. Forecasting models were also built for multiple factors to predict their states in Egypt by 2035, based on historical data and patterns, for use as the node states in the network. 37 factors were found to potentially have an impact on the use of solar energy; these were reduced to the 12 factors judged most relevant to the solar energy life cycle in Egypt, based on expert surveys and data analysis, with some factors found to recur in multiple stages. The presented Bayesian network can later be used for scenario and decision analysis of solar energy use in Egypt as a stable renewable source for generating any type of energy needed.
Keywords: ARIMA, auto correlation, Bayesian network, forecasting models, life cycle, partial correlation, renewable energy, SARIMA, solar energy.
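For the forecasting side, the keywords point to ARIMA/SARIMA-family models. A minimal statsmodels sketch of forecasting one factor out to 2035 on a toy annual series follows; the model order and the series itself are assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
years = pd.period_range("1990", "2020", freq="Y")
# Toy annual series standing in for one network factor (e.g., demand).
y = pd.Series(100 + 2.5 * np.arange(years.size) + rng.normal(0, 3, years.size),
              index=years)

fit = SARIMAX(y, order=(1, 1, 1)).fit(disp=False)
forecast = fit.forecast(steps=2035 - 2020)   # levels out to 2035, later
print(forecast.tail())                       # discretized into node states
```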
1351 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically, using different texture analysis approaches, for a very common cancer type, non-small cell lung carcinoma (NSCLC). In particular, we introduce a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated; the number of patients for each tumor stage (I-II, III, or IV) was 14. Approximately 45% of the patients had adenocarcinoma (ADC) and approximately 55% had squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed to extract 51 features using first order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used for automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with SVM one-versus-one. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and for the automatic classification of tumor stage and subtype.
Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.
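Of the listed texture families, the GLCM step is the most standard; here is a scikit-image sketch of extracting one GLCM feature vector from a toy ROI. Real inputs would be resampled PET uptake slices, and the distance/angle choices are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
# Toy 8-bit "tumor ROI" standing in for a slice of the FDG-PET uptake volume.
roi = rng.normal(120, 30, (64, 64)).clip(0, 255).astype(np.uint8)

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]
print(features)   # one texture vector; stack per patient, then apply
                  # sequential forward selection and a k-NN or SVM classifier
```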
1350 Path-Tracking Controller for Tracked Mobile Robot on Rough Terrain
Authors: Toshifumi Hiramatsu, Satoshi Morita, Manuel Pencelli, Marta Niccolini, Matteo Ragaglia, Alfredo Argiolas
Abstract:
Automation technologies for agricultural fields are needed to reduce labor. One of the most relevant problems in automated agriculture is controlling the robot along a predetermined path in the presence of rough or inclined terrain. Unfortunately, disturbances originating from interaction with the ground, such as slipping, make it quite difficult to achieve the required accuracy: in general, the robot must stay within 5-10 cm of the predetermined path. Moreover, the lateral velocity caused by gravity on an inclined field also contributes to slip. In this paper, a path-tracking controller is presented for tracked mobile robots moving on the rough, inclined terrain of fields such as vineyards. The controller is composed of a disturbance observer and an adaptive controller based on the kinematic model of the robot. The disturbance observer measures the difference between the measured and reference yaw rate and linear velocity in order to estimate slip. The adaptive controller then adapts the "virtual" parameters of the kinematic model, the Instantaneous Centers of Rotation (ICRs), and the target angular velocity reference is computed according to the adapted parameters. This solution allows the effects of slip to be estimated without making the model too complex. Finally, the effectiveness of the proposed solution is tested in a simulation environment.
Keywords: Agricultural robot, autonomous control, path-tracking control, tracked mobile robot.
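A minimal sketch of the ICR-adaptation idea: treat the distance between the track ICRs as a "virtual" gauge, adapt it from the yaw-rate residual, and compute track-speed commands from the adapted value. This is an illustrative reconstruction under simplified kinematics, not the authors' observer; the nominal gauge and adaptation gain are invented.

```python
class ICRAdapter:
    """Adapt a 'virtual' track gauge (distance between the track ICRs)
    from the mismatch between modeled and measured yaw rate."""

    def __init__(self, b_nominal=0.8, gain=0.2):
        self.b = b_nominal           # effective gauge [m]; grows with slip
        self.gain = gain

    def update(self, v_r, v_l, omega_meas):
        """First-order adaptation driven by the yaw-rate residual."""
        if abs(omega_meas) > 1e-3:
            b_est = (v_r - v_l) / omega_meas
            self.b += self.gain * (b_est - self.b)
        return (v_r - v_l) / self.b  # model yaw rate with the adapted gauge

    def command(self, v_ref, omega_ref):
        """Track speeds for a desired (v, omega) using the adapted gauge."""
        return v_ref + 0.5 * self.b * omega_ref, v_ref - 0.5 * self.b * omega_ref

adapter = ICRAdapter()
for _ in range(50):                  # slipping ground acts like a 1.1 m gauge
    v_r, v_l = adapter.command(0.5, 0.3)
    omega_true = (v_r - v_l) / 1.1
    adapter.update(v_r, v_l, omega_true)
print("adapted gauge:", round(adapter.b, 3))   # converges toward 1.1
```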
1349 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values
Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi
Abstract:
A major challenge in medical studies, especially longitudinal ones, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address these challenges, this paper explores the potential of the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme where all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study, and in the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62%, and a recall of 80.51%, supporting the more natural and promising multiclass classification.
Keywords: eXtreme Gradient Boosting, missing data, Alzheimer disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest.
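The practical point about XGBoost here is that it handles missing values natively: each split learns a default branch, so NaNs are routed through the trees without an imputation stage. The sketch below reproduces the setting's shape (1631 subjects, ~28% missing, four classes) on synthetic data; the latent score and feature count are invented.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1631, 20))
score = X[:, 0] + 0.8 * X[:, 1]                 # latent "disease severity"
y = np.digitize(score, np.quantile(score, [0.25, 0.5, 0.75]))  # 4 classes
X[rng.random(X.shape) < 0.28] = np.nan          # ~28% missing, as in the study

clf = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="mlogloss")
print(cross_val_score(clf, X, y, cv=10).mean())  # no imputation step needed
```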
1348 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images
Authors: Mehrnoosh Omati, Mahmod Reza Sahebi
Abstract:
Information on land use/land cover change plays an essential role in environmental assessment, planning, and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, two PolSAR images are first segmented by integrating a marker-controlled watershed algorithm with a coupled Markov random field (MRF). Then, object-based classification is performed to label image objects as changed or unchanged. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object-based level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show improvements of 3% in overall accuracy and 6% in the kappa coefficient. The proposed method can also correctly distinguish homogeneous image parcels.
Keywords: Coupled Markov random field, environment, object-based analysis, Polarimetric SAR images.
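A marker-controlled watershed, the first half of the proposed segmentation, can be sketched with scikit-image on a toy speckled image; the coupled-MRF refinement and the PolSAR-specific statistics are beyond this illustration, and the marker thresholds are invented for the toy data.

```python
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
img = rng.gamma(2.0, 1.0, (128, 128))     # speckle-like toy intensity image
img[20:60, 20:60] += 4                    # two bright "parcels"
img[70:110, 70:110] += 8

edges = sobel(img)                        # flood the gradient image
markers = np.zeros(img.shape, dtype=int)  # marker-controlled seeds
markers[img < 1] = 1                      # background
markers[(img > 5) & (img < 8)] = 2        # parcel 1
markers[img > 11] = 3                     # parcel 2
labels = watershed(edges, markers)
print(np.unique(labels))                  # image objects for changed/unchanged
```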
1347 Classifying Students for E-Learning in Information Technology Course Using ANN
Authors: S. Areerachakul, N. Ployong, S. Na Songkla
Abstract:
The objective of this research is to select the most accurate model, using a neural network technique, for identifying potential students who enroll in the IT course through e-learning at Suan Sunandha Rajabhat University. It is designed to help students select the appropriate courses by themselves. The results showed that the most accurate model used 100-fold cross-validation, with an accuracy of 73.58%.
Keywords: Artificial neural network, classification, students.
1346 Forecasting the Sea Level Change in Strait of Hormuz
Authors: Hamid Goharnejad, Amir Hossein Eghbali
Abstract:
Recent investigations have demonstrated global sea level rise due to climate change impacts. This study investigates the effects of climate change on rising water levels in the Strait of Hormuz; the probable sea level rise should be examined in order to develop adaptation strategies. Climatic output data of a General Circulation Model (GCM) named CGCM3 under the A1B and A2 climate change scenarios were used. Among the variables simulated by this model, those with the highest correlation with sea level changes in the study region and the least redundancy among themselves were selected, via stepwise regression, for sea level rise prediction. A discrete wavelet artificial neural network model was developed to explore the relationship between climatic variables and sea level changes: the wavelet transform disaggregates the input and output time series into different components, and an ANN then relates the disaggregated components of the predictors to the predictand. The results for the Shahid Rajaee Station show a sea level rise of 64 to 75 cm under scenario A1B and of 90 to 105 cm under scenario A2. Overall, the results indicate a significant increase of sea level in the study region under climate change impacts, which should be incorporated into the management of coastal areas.
Keywords: Climate change scenarios, sea-level rise, Strait of Hormuz, artificial neural network, fuzzy logic.
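The wavelet-ANN coupling described here, disaggregating the series, modeling each component with an ANN, and recombining, can be sketched with PyWavelets and scikit-learn. The wavelet choice, decomposition level, lag count, and network size below are assumptions, and the series is synthetic.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(600)
sea_level = 0.05 * t + 10 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, t.size)

# Disaggregate with a discrete wavelet transform, reconstructing one
# component per coefficient band.
coeffs = pywt.wavedec(sea_level, "db4", level=3)
parts = [pywt.waverec([c if i == j else np.zeros_like(c)
                       for j, c in enumerate(coeffs)], "db4")[:t.size]
         for i in range(len(coeffs))]

def lagged(x, lags=5):
    """Lag-matrix regressors and one-step-ahead targets."""
    X = np.column_stack([x[i:-(lags - i)] for i in range(lags)])
    return X, x[lags:]

pred = np.zeros(t.size - 5)
for comp in parts:                       # one small ANN per component
    X, y = lagged(comp)
    pred += MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, y).predict(X)
print("in-sample RMSE:", np.sqrt(np.mean((pred - sea_level[5:]) ** 2)))
```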
1345 Accurate Control of a Pneumatic System using an Innovative Fuzzy Gain-Scheduling Pattern
Authors: M. G. Papoutsidakis, G. Chamilothoris, F. Dailami, N. Larsen, A Pipe
Abstract:
Due to their high power-to-weight ratio and low cost, pneumatic actuators are attractive for robotics and automation applications; however, achieving fast and accurate position control has long been known as a complex control problem. A methodology for obtaining high position accuracy with a linear pneumatic actuator is presented. During experimentation with a number of classical PID control approaches over many operations of the pneumatic system, the need for frequent manual re-tuning of the controller could not be eliminated. The reason for this problem is thermal and energy losses inside the cylinder body due to the complex friction forces developed by the piston displacements. Although PD controllers performed very well over short periods, it was necessary in our research project to introduce some form of automatic gain scheduling to achieve good long-term performance. We chose a fuzzy logic system to do this, which proved to be an easily designed and robust approach. Since the PD approach showed very good behaviour in terms of position accuracy and settling time, it was incorporated into a modified form of the first-order Takagi-Sugeno fuzzy method to build the overall controller. This fuzzy gain-scheduler uses an input variable that automatically changes the PD gain values of the controller according to the frequency of repeated system operations. The performance of the new controller was significantly improved, and the need for manual re-tuning was eliminated without a decrease in performance. The performance of the controller operating with the above method is going to be tested over a high-speed web network (GRID) for research purposes.
Keywords: Fuzzy logic, gain scheduling, leaky integrator, pneumatic actuator.
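The gain-scheduling idea reduces to blending PD gain sets through a membership function of the operation count, in first-order Takagi-Sugeno fashion. A deliberately small sketch follows; the membership shape, the gain sets, and the direction of the gain changes are all invented for illustration and are not the paper's tuned values.

```python
def scheduled_gains(op_count, kp0=8.0, kd0=0.4):
    """Blend 'fresh' and 'warmed-up' PD gain sets as repeated operations
    heat the cylinder (Takagi-Sugeno style linear consequent blending).

    All numbers here are illustrative assumptions.
    """
    mu = min(op_count / 100.0, 1.0)         # membership of 'many operations'
    kp = (1 - mu) * kp0 + mu * 0.7 * kp0    # consequents blended linearly
    kd = (1 - mu) * kd0 + mu * 1.3 * kd0
    return kp, kd

for n in (0, 50, 150):
    print(n, scheduled_gains(n))
```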
1344 Fake Account Detection in Twitter Based on Minimum Weighted Feature set
Authors: Ahmed El Azab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny
Abstract:
Social networking sites such as Twitter and Facebook attract over 500 million users across the world; for those users, social networking has become intertwined with their social and even practical lives. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This can cause enormous damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter, and these factors are then applied using different classification techniques. A comparison of the results of these techniques has been performed, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with recent research in the same area, and this comparison proved the accuracy of the proposed study. We claim that this study can be applied continuously on the Twitter social network to automatically detect fake accounts; moreover, it can be applied to other social networking sites, such as Facebook, with minor changes according to the nature of the social network, as discussed in this paper.
Keywords: Fake account detection, classification algorithms, Twitter account analysis, feature-based techniques.
1343 Variational Iteration Method for the Solution of Boundary Value Problems
Authors: Olayiwola M. O., Gbolagade A. W., Akinpelu F. O.
Abstract:
In this work, we present a reliable framework for solving boundary value problems of particular significance in solid mechanics, where such problems are used as mathematical models for the deformation of beams. The algorithm rests mainly on a relatively new technique, the Variational Iteration Method. Some examples are given to confirm the efficiency and accuracy of the method.
Keywords: Variational iteration method, boundary value problems, convergence, restricted variation.
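For reference, the correction functional at the heart of the method, written for a second-order boundary value problem u''(x) = f(x, u) (the standard form; the paper's specific examples are not reproduced here):

```latex
u_{n+1}(x) = u_n(x) + \int_0^x \lambda(s)\,\bigl[u_n''(s) - f\bigl(s,\tilde{u}_n(s)\bigr)\bigr]\,ds,
\qquad \lambda(s) = s - x,
```

where the Lagrange multiplier lambda is identified via variational theory for the operator u'', and the tilde marks the restricted variation (delta u~_n = 0), which is what makes the multiplier identifiable.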
1342 A New Center of Motion in Cabling Robots
Authors: A. Abbasi Moshaii, F. Najafi
Abstract:
In this paper, a new model for creating a center of motion is proposed. The new method uses cables, which makes it light and easy to assemble, and therefore very useful in robots. It is particularly suited to robots that must remain in contact with objects, as described in the following. The accuracy of the idea is demonstrated by two experiments. This system could be used in robots that need a fixed point of contact while performing a circular motion.
Keywords: Center of motion, robotic cables, permanent touching.
1341 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of the Eigen values of covariance matrices, the circular Hough transform, and Bresenham's raster scan algorithm. In this approach we use the fact that the large and small Eigen values of the covariance matrix are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). A sparse matrix technique is used to perform the CHT; since sparse matrices squeeze out zero elements and contain only a small number of nonzero elements, they save matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm that exploits the geometrical symmetry property. This method does not require the evaluation of tangents or of the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed, and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, with comparisons against the Hough transform and its variants and against other tangent-based methods, are reported.
Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.
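The eigenvalue-to-axis relationship the method relies on can be verified numerically: for points uniform in the ellipse's parametric angle, the covariance eigenvalues equal a^2/2 and b^2/2, so the axial lengths follow directly. A numpy sketch with invented ellipse parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, phi = 5.0, 2.0, np.deg2rad(30)     # ground-truth semi-axes and tilt
theta = rng.uniform(0, 2 * np.pi, 2000)
pts = np.column_stack([a * np.cos(theta), b * np.sin(theta)])
R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
pts = pts @ R.T + [40.0, 25.0]           # rotate and translate

center = pts.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov((pts - center).T))
# var(a*cos t) = a^2/2 for t uniform, so the eigenvalues recover the axes:
minor, major = np.sqrt(2 * evals)        # eigh returns ascending eigenvalues
print(center.round(2), major.round(2), minor.round(2))   # ~ (40, 25), 5, 2
```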
1340 Early Depression Detection for Young Adults with a Psychiatric and AI Interdisciplinary Multimodal Framework
Authors: Raymond Xu, Ashley Hua, Andrew Wang, Yuru Lin
Abstract:
During COVID-19, the depression rate has increased dramatically, and young adults are the most vulnerable to the mental health effects of the pandemic. Lower-income families have a higher rate of depression diagnoses than the general population, but less access to clinics. This research aims to achieve early depression detection at low cost, large scale, and high accuracy with an interdisciplinary approach that combines clinical practices defined by the American Psychiatric Association (APA) with a multimodal AI framework. The proposed approach detects the nine depression symptoms using natural language processing sentiment analysis and a symptom-based lexicon designed specifically for young adults. The experiments were conducted on multimedia survey results from adolescents and young adults and on unbiased Twitter communications. The result was further aggregated with facial emotional cues analyzed by a convolutional neural network applied to the multimedia survey videos. Five experiments, each conducted on 10k data entries, reached consistent results with an average accuracy of 88.31%, higher than existing natural language analysis models. This approach can reach the 300+ million daily active Twitter users and is highly accessible to low-income populations, promoting early depression detection, raising awareness among adolescents and young adults, and revealing complementary cues to assist clinical depression diagnosis.
Keywords: Artificial intelligence, depression detection, facial emotion recognition, natural language processing, mental disorder.
1339 Implementation of an On-Line PD Measurement System Using HFCT
Authors: F. Haghjoo, M. Sarlak, S.M. Shahrtash
Abstract:
In order to perform on-line measurement and detection of partial discharge (PD) signals, a total solution comprising an HFCT, an A/D converter, and a complete software package is proposed. The software package includes compensation of the HFCT contribution, filtering and noise reduction using the wavelet transform, and soft calibration routines. The results have shown good performance and high accuracy.
Keywords: Partial discharge, measurement, on-line, HFCT.
1338 Subpixel Detection of Circular Objects Using Geometric Property
Authors: Wen-Yen Wu, Wen-Bin Yu
Abstract:
In this paper, we propose a method for detecting circular shapes with subpixel accuracy. First, the geometric properties of circles are used to find the diameters and the circumference pixels; the center and radius are then estimated from the circumference pixels. Both synthetic and real images have been tested with the proposed method, and the experimental results show that the new method is efficient.
Keywords: Subpixel, least squares estimation, circle detection, Hough transformation.
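A common way to realize the least-squares estimation named in the keywords is the algebraic (Kasa-style) circle fit, which recovers a subpixel center and radius from integer circumference pixels. The sketch below is an illustration, not necessarily the authors' exact estimator; the rounding emulates pixel quantization.

```python
import numpy as np

def fit_circle_lsq(x, y):
    """Algebraic least-squares circle fit: x^2 + y^2 + D*x + E*y + F = 0."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
# Circumference pixels: integer-rounded samples of a true circle, so the
# input is pixel-quantized but the fit recovers subpixel parameters.
x = np.round(30.3 + 12.7 * np.cos(t))
y = np.round(18.9 + 12.7 * np.sin(t))
print(fit_circle_lsq(x, y))   # approximately (30.3, 18.9, 12.7)
```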