Search results for: deep neural models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9155

7115 Fuzzy Time Series Forecasting Based on Fuzzy Logical Relationships, PSO Technique, and Automatic Clustering Algorithm

Authors: A. K. M. Kamrul Islam, Abdelhamid Bouchachia, Suang Cang, Hongnian Yu

Abstract:

Forecasting models have a great impact on prediction and will continue to do so in the future. Although many forecasting models have been studied in recent years, most researchers focus on fuzzy time series based methods to solve forecasting problems. The accuracy of a forecasting model depends largely on two factors: the lengths of the intervals in the universe of discourse and the content of the forecast rules. Moreover, a hybrid forecasting method can be a more effective and efficient way to improve forecasts than an individual forecasting model. Different hybrid models have combined fuzzy time series with evolutionary algorithms, but their performance is not quite satisfactory. In this paper, we propose a hybrid forecasting model that combines first-order as well as high-order fuzzy time series with particle swarm optimization to improve forecast accuracy. The proposed method uses the historical enrollments of the University of Alabama as the dataset in the forecasting process. First, an automatic clustering algorithm is used to calculate appropriate intervals for the historical enrollments. Then particle swarm optimization and fuzzy time series are combined, which yields better forecasting accuracy than other existing forecasting models.
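
A minimal sketch of the fuzzy-time-series core of such a model, assuming equal-width intervals and Chen-style first-order logical relationships; in the proposed method, the automatic clustering algorithm and PSO would replace this fixed partition with optimized interval boundaries. The enrollment figures and interval count are illustrative.

```python
# Minimal first-order fuzzy time series forecast (Chen style).
# The automatic clustering and PSO stages of the paper would replace the
# equal-width partition below with optimized interval boundaries.
import numpy as np

def fts_forecast(series, n_intervals=7):
    lo, hi = min(series) - 100, max(series) + 100   # universe of discourse
    edges = np.linspace(lo, hi, n_intervals + 1)
    mids = (edges[:-1] + edges[1:]) / 2

    def fuzzify(x):  # index of the interval containing x
        return int(np.clip(np.searchsorted(edges, x) - 1, 0, n_intervals - 1))

    states = [fuzzify(x) for x in series]
    # First-order fuzzy logical relationships: state[t] -> list of state[t+1]
    flr = {}
    for a, b in zip(states[:-1], states[1:]):
        flr.setdefault(a, []).append(b)

    # Forecast: average of the midpoints of all successors of the current state
    return [np.mean([mids[s] for s in flr.get(st, [st])]) for st in states[:-1]]

enrollments = [13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807]
print(fts_forecast(enrollments))
```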

Keywords: fuzzy time series (fts), particle swarm optimization, clustering algorithm, hybrid forecasting model

Procedia PDF Downloads 238
7114 A Review on 3D Smart City Platforms Using Remotely Sensed Data to Aid Simulation and Urban Analysis

Authors: Slim Namouchi, Bruno Vallet, Imed Riadh Farah

Abstract:

3D urban models provide powerful tools for decision making, urban planning, and smart city services. The accuracy of these 3D-based systems is directly related to the quality of the models. Since manual large-scale modeling of cities or countries is a highly time-intensive and expensive process, fully automatic 3D building generation is needed. However, the result of the 3D modeling process depends on the input data, the properties of the captured objects, and the required characteristics of the reconstructed 3D model. Nowadays, producing a 3D real-world model is no longer a problem. Remotely sensed data have increased remarkably in recent years, especially data acquired using unmanned aerial vehicles (UAV). As scanning techniques develop, the volume and resolution of the captured data keep growing. This paper presents a literature review that aims to identify different methods of automatic 3D building extraction, either from LiDAR alone or from the combination of LiDAR with satellite or aerial images. We then present open source technologies and data models (e.g., CityGML, PostGIS, CesiumJS) used to integrate these models into geospatial base layers for smart city services.

Keywords: CityGML, LiDAR, remote sensing, GIS, smart city, 3D urban modeling

Procedia PDF Downloads 120
7113 Robot Navigation and Localization Based on the Rat’s Brain Signals

Authors: Endri Rama, Genci Capi, Shigenori Kawahara

Abstract:

The ability of a mobile robot to navigate autonomously in its environment is very important. Despite advances in technology, robot self-localization and goal-directed navigation in complex environments are still challenging tasks. In this article, we propose a novel method for robot navigation based on the rat's brain signals (local field potentials). It is well known that rats accurately and rapidly navigate in a complex space by localizing themselves in reference to the surrounding environmental cues. As the first step toward incorporating the rat's navigation strategy into robot control, we analyzed rats' strategies while they navigated a multiple Y-maze and simultaneously recorded local field potentials (LFPs) from three brain regions. Next, we processed the LFPs, and the extracted features were used as inputs to an artificial neural network to predict the rat's next location, especially at the decision-making moments at Y-junctions. We developed an algorithm by which the robot learned to imitate the rat's decision-making by mapping the rat's brain signals into its own actions. Finally, the robot learned to integrate its internal states as well as external sensors in order to localize and navigate in the complex environment.
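
A sketch of the signal-to-decision pipeline described here, assuming band-power features extracted from windowed LFPs are fed to a small feed-forward network; the signals, frequency bands, and left/right labels below are synthetic placeholders rather than the study's recordings.

```python
# Sketch: band-power features from LFP windows -> MLP predicting the next
# Y-junction choice (left/right). Signals and labels here are synthetic.
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

fs = 1000  # Hz, assumed sampling rate
rng = np.random.default_rng(0)

def band_power(x, lo, hi):
    f, p = welch(x, fs=fs, nperseg=256)
    return p[(f >= lo) & (f < hi)].mean()

# 120 one-second windows from 3 "brain regions" (synthetic noise here)
X = []
y = rng.integers(0, 2, 120)            # 0 = left arm, 1 = right arm
for label in y:
    feats = []
    for region in range(3):
        sig = rng.normal(size=fs) + label * 0.3 * np.sin(2 * np.pi * 8 * np.arange(fs) / fs)
        feats += [band_power(sig, 4, 12), band_power(sig, 30, 80)]  # theta, gamma
    X.append(feats)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```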

Keywords: brain-machine interface, decision-making, mobile robot, neural network

Procedia PDF Downloads 284
7112 The Museum of Museums: A Mobile Augmented Reality Application

Authors: Qian Jin

Abstract:

Museums have been using interactive technology to spark visitor interest and improve understanding. These technologies can play a crucial role in helping visitors understand more about an exhibition site by using multimedia to provide information. Google Arts & Culture and Smartify are two very successful digital heritage products. They used mobile augmented reality to visualise museums' 3D models and heritage images but did not include 3D models of the collections or audio information. In this research, a service-oriented mobile augmented reality application was developed for users to access collections from multiple museums (including the V&A, the British Museum, and the British Library). Third-party APIs (Application Programming Interfaces) are queried to collect metadata (including images, 3D models, videos, and text) for the three museums' collections. The acquired content is then visualized in AR environments. This product will help users who cannot visit the museums in person for various reasons (inconvenient transportation, physical disability, scheduling constraints).

Keywords: digital heritage, augmented reality, museum, Flutter, ARCore

Procedia PDF Downloads 64
7111 Artificial Intelligence-Based Detection of Individuals Suffering from Vestibular Disorder

Authors: Dua Hişam, Serhat İkizoğlu

Abstract:

Identifying the problem behind a balance disorder is one of the most interesting topics in the medical literature. This study applies multiple machine learning (ML) models to gait sensor data collected from humans in order to distinguish healthy individuals from those suffering from vestibular system (VS) disorders. Although AI is widely utilized as a diagnostic tool in medicine, AI models have rarely been used to perform feature extraction and identify VS disorders through training on raw data. In this study, three ML models, the Random Forest classifier (RF), Extreme Gradient Boosting (XGB), and K-Nearest Neighbors (KNN), were trained to detect VS disorder, and the algorithms were compared using accuracy, recall, precision, and F1-score. With an accuracy of 95.28%, the Random Forest classifier was the most accurate model.
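
A sketch of the three-model comparison on the four reported metrics; the gait features are simulated, and scikit-learn's GradientBoostingClassifier stands in for XGBoost to keep dependencies minimal.

```python
# Sketch: comparing RF, gradient boosting (stand-in for XGBoost), and KNN
# on gait-style feature data; synthetic data replaces the real sensor set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

X, y = make_classification(n_samples=400, n_features=20, random_state=1)  # gait features
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=1)

models = {
    "RF":  RandomForestClassifier(random_state=1),
    "GB":  GradientBoostingClassifier(random_state=1),   # XGBoost stand-in
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, m in models.items():
    pred = m.fit(Xtr, ytr).predict(Xte)
    print(name,
          f"acc={accuracy_score(yte, pred):.3f}",
          f"prec={precision_score(yte, pred):.3f}",
          f"rec={recall_score(yte, pred):.3f}",
          f"f1={f1_score(yte, pred):.3f}")
```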

Keywords: vestibular disorder, machine learning, random forest classifier, k-nearest neighbor, extreme gradient boosting

Procedia PDF Downloads 55
7110 Predicting Blockchain Technology Installation Cost in Supply Chain System through Supervised Learning

Authors: Hossein Havaeji, Tony Wong, Thien-My Dao

Abstract:

1. Research Problems and Research Objectives: A Blockchain Technology-enabled Supply Chain System (BT-enabled SCS) uses BT to drive SCS transparency, security, durability, and process integrity, since SCS data are not always visible, available, or trusted. The costs of operating BT in the SCS are a common problem in several organizations. These costs must be estimated, as they can impact existing cost control strategies. To account for system and deployment costs, one hurdle must be overcome: the costs of developing and running BT in an SCS are not yet clear in most cases. Many industries aiming to use BT pay special attention to the BT installation cost, which has a direct impact on the total costs of the SCS. Predicting the BT installation cost in an SCS may help managers decide whether BT offers an economic advantage. The purpose of the research is to identify the main BT installation cost components in the SCS needed for deeper cost analysis. We then identify and categorize the main groups of cost components in more detail to utilize them in the prediction process. The second objective is to determine a suitable supervised learning technique to predict the costs of developing and running BT in an SCS in a particular case study. The last aim is to investigate how the running cost of BT can be incorporated into the total cost of the SCS. 2. Work Performed: Applied successfully in various fields, supervised learning is a method of preparing and processing data and training a model on it; the trained model predicts an outcome measurement from previously unseen input data. The following steps are conducted to pursue these objectives. The first step is a literature review to identify the different cost components of BT installation in an SCS. Based on the literature review, we choose supervised learning methods suitable for BT installation cost prediction in an SCS; algorithms that provide a powerful tool to classify BT installation components and predict BT installation cost include Support Vector Regression (SVR), the Back Propagation (BP) neural network, and the Artificial Neural Network (ANN). Choosing a case study to feed data into the models is the third step. Finally, we will propose the best predictive performance to find the minimum BT installation costs in an SCS. 3. Expected Results and Conclusion: This study proposes a cost prediction of BT installation in an SCS with the help of supervised learning algorithms. We will first select a case study in the field of BT-enabled SCS, then use supervised learning algorithms to predict the BT installation cost in the SCS, and continue to find the best predictive performance for developing and running BT in an SCS. Finally, the results will be presented at the conference.
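
A sketch of the SVR option named above applied to cost prediction; the four cost-driver features and the synthetic cost data are hypothetical placeholders for the case-study components the paper sets out to identify.

```python
# Sketch: Support Vector Regression predicting BT installation cost from
# hypothetical cost-component features (e.g., network size, dev effort).
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(42)
n = 200
X = rng.uniform(0, 1, size=(n, 4))   # e.g. nodes, tx volume, dev effort, integration scope
cost = 50 + X @ [120, 80, 200, 60] + rng.normal(0, 5, n)   # synthetic cost (k$)

Xtr, Xte, ytr, yte = train_test_split(X, cost, random_state=42)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=1.0))
model.fit(Xtr, ytr)
print("MAPE:", mean_absolute_percentage_error(yte, model.predict(Xte)))
```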

Keywords: blockchain technology, blockchain technology-enabled supply chain system, installation cost, supervised learning

Procedia PDF Downloads 109
7109 On Four Models of a Three Server Queue with Optional Server Vacations

Authors: Kailash C. Madan

Abstract:

We study four models of a three-server queueing system with Bernoulli-schedule optional server vacations. Customers arriving at the system one by one in a Poisson process receive identical exponential service from three parallel servers according to a first-come, first-served queue discipline. In model A, all three servers may be allowed a vacation at one time; in model B, at most two of the three servers may be allowed a vacation at one time; in model C, at most one server is allowed a vacation; and in model D, no server is allowed a vacation. We study the steady state behavior of the four models and obtain steady state probability generating functions for the queue size at a random point of time for all states of the system. In model D, a known result for a three-server queueing system without server vacations is recovered.
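
For model D, the known vacation-free result is the classical M/M/3 stationary distribution; a small sketch computing it from the standard M/M/c formulas is below, with illustrative arrival and service rates (the Bernoulli vacation mechanics of models A-C are not modeled here).

```python
# Sketch: steady-state queue-size distribution for model D (plain M/M/3,
# no vacations), the known result to which the vacation models reduce.
from math import factorial

def mm_c_probs(lam, mu, c=3, n_max=20):
    rho = lam / mu                      # offered load; stability needs lam < c*mu
    p0 = 1.0 / (sum(rho**n / factorial(n) for n in range(c))
                + rho**c / (factorial(c) * (1 - rho / c)))
    probs = []
    for n in range(n_max + 1):
        if n < c:
            probs.append(p0 * rho**n / factorial(n))
        else:
            probs.append(p0 * rho**n / (factorial(c) * c**(n - c)))
    return probs

p = mm_c_probs(lam=2.4, mu=1.0)         # three exponential servers, Poisson arrivals
print("P(system empty) =", p[0], " P(all busy, none waiting) =", p[3])
```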

Keywords: a three server queue, Bernoulli schedule server vacations, queue size distribution at a random epoch, steady state

Procedia PDF Downloads 288
7108 Time Series Regression with Meta-Clusters

Authors: Monika Chuchro

Abstract:

This paper presents a preliminary attempt to apply classification of time series using meta-clusters in order to improve the quality of regression models. Clustering was performed to obtain subgroups of normally distributed time series data from wastewater treatment plant inflow data, which is composed of several groups differing in mean value. Two simple algorithms, k-means and EM, were chosen as clustering methods, and the Rand index was used to measure the similarity of their results. After this meta-clustering, a regression model was fitted for each subgroup, and the final model was the sum of the subgroup models. The quality of the obtained model was compared with a regression model built on the same explanatory variables but without clustering of the data. Results were compared by the coefficient of determination (R2), the mean absolute percentage error (MAPE) as a measure of prediction accuracy, and visual comparison on a linear chart. Preliminary results suggest the potential of the presented technique.
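
A minimal sketch of the workflow: cluster the data with k-means and EM, compare the two partitions with a Rand-type index, fit one regression per subgroup, and compare R2/MAPE against a single global model. The two-regime inflow data are simulated, and clustering on features plus response level is an illustrative choice.

```python
# Sketch: meta-cluster regression vs. a single global regression model.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_absolute_percentage_error, adjusted_rand_score

rng = np.random.default_rng(3)
# Two latent regimes differing in mean level (say, dry vs. wet season)
X = rng.normal(size=(300, 2))
regime = rng.integers(0, 2, 300)
y = 100 + 50 * regime + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 2, 300)
F = np.column_stack([X, y])            # cluster on features + response level

km = KMeans(n_clusters=2, n_init=10, random_state=3).fit_predict(F)
em = GaussianMixture(n_components=2, random_state=3).fit_predict(F)
print("Rand similarity of the two clusterings:", adjusted_rand_score(km, em))

pred_global = LinearRegression().fit(X, y).predict(X)
pred_meta = np.empty_like(y)
for c in np.unique(km):                # one sub-model per meta-cluster
    m = km == c
    pred_meta[m] = LinearRegression().fit(X[m], y[m]).predict(X[m])

for name, p in [("global", pred_global), ("meta-cluster", pred_meta)]:
    print(name, f"R2={r2_score(y, p):.3f}",
          f"MAPE={mean_absolute_percentage_error(y, p):.4f}")
```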

Keywords: clustering, data analysis, data mining, predictive models

Procedia PDF Downloads 459
7107 Creation and Management of Knowledge for Organization Sustainability and Learning

Authors: Deepa Kapoor, Rajshree Singh

Abstract:

This paper appreciates that the emergence and growing importance of knowledge as a new production factor makes the development of technologies, methodologies, and strategies for its measurement, creation, and diffusion one of the main priorities of organizations in the knowledge society. There are many models for the creation and management of knowledge, and diverse and varied perspectives for their study, analysis, and understanding. In this article, we take a theoretical approach to the types of models for the creation and management of knowledge; we discuss some of them and examine some of the difficulties and the key factors that determine the success of knowledge creation and management processes.

Keywords: knowledge creation, knowledge management, organizational development, organization learning

Procedia PDF Downloads 326
7106 Removal of Toxic Ni++ Ions from Wastewater by Nano-Bentonite

Authors: A. M. Ahmed, Mona A. Darwish

Abstract:

The removal of Ni²⁺ ions from aqueous solution by sorption onto nano-bentonite was investigated. Experiments were carried out as a function of the amount of nano-bentonite, pH, metal concentration, contact time, agitation speed, and temperature. Langmuir and Freundlich adsorption isotherm models were applied to analyze the adsorption data, and both were found to be applicable to the adsorption process. The kinetics of the adsorption process fit a pseudo-second-order model. Thermodynamic parameters, e.g., ΔG°, ΔS°, and ΔH°, of the ongoing adsorption process were also calculated, and the sorption process was found to be endothermic. Finally, bentonite was found to be effective for the removal of Ni(II) under the studied experimental conditions.
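
A sketch of how the two isotherm models named above are fitted to equilibrium data by nonlinear least squares; the Ce/qe values are illustrative, not the study's measurements.

```python
# Sketch: fitting Langmuir and Freundlich isotherms to equilibrium data
# (Ce = equilibrium concentration, qe = amount adsorbed); data illustrative.
import numpy as np
from scipy.optimize import curve_fit

Ce = np.array([5., 10., 20., 40., 80.])          # mg/L
qe = np.array([8.1, 13.2, 19.5, 25.0, 28.7])     # mg/g (synthetic)

def langmuir(Ce, qmax, KL):
    return qmax * KL * Ce / (1 + KL * Ce)

def freundlich(Ce, KF, n):
    return KF * Ce ** (1 / n)

for name, f, p0 in [("Langmuir", langmuir, (30, 0.05)),
                    ("Freundlich", freundlich, (3, 2))]:
    popt, _ = curve_fit(f, Ce, qe, p0=p0)
    ss_res = np.sum((qe - f(Ce, *popt)) ** 2)
    r2 = 1 - ss_res / np.sum((qe - qe.mean()) ** 2)
    print(name, "params:", np.round(popt, 3), "R2:", round(r2, 4))
```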

Keywords: waste water, nickel, bentonite, adsorption

Procedia PDF Downloads 242
7105 Analyzing the Performance of Machine Learning Models to Predict Alzheimer's Disease and its Stages Addressing Missing Value Problem

Authors: Carlos Theran, Yohn Parra Bautista, Victor Adankai, Richard Alo, Jimwi Liu, Clement G. Yedjou

Abstract:

Alzheimer's disease (AD) is a neurodegenerative disorder primarily characterized by deteriorating cognitive functions, and it has gained considerable attention in the last decade. An estimated 24 million people worldwide suffered from this disease by 2011, an estimated 40 million were diagnosed with AD in 2016, and by 2050 the number affected is expected to reach 131 million. Therefore, detecting and confirming AD at its different stages is a priority for medical practice in order to provide adequate and accurate treatments. Recently, machine learning (ML) models have been used to study AD's stages while handling missing values in a multiclass setting, focusing on the delineation of early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and cognitively normal (CN) subjects. However, to the best of our knowledge, robust performance information for these models and an analysis of the missing data have not been presented in the literature. In this paper, we study the performance of five different machine learning models for multiclass prediction of AD's stages in terms of accuracy, precision, and F1-score. We also analyze three imputation methods for handling the missing value problem. A framework that integrates ML models for multiclass prediction of AD's stages is proposed, achieving an average accuracy of 84%.
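
A sketch comparing three common imputation strategies ahead of a multiclass classifier; the three-class features and the 15% missing-at-random pattern are simulated, and the specific imputers are illustrative choices rather than necessarily the three used in the paper.

```python
# Sketch: comparing imputation strategies before multiclass prediction
# of AD stages (CN / EMCI / LMCI); features and missingness are simulated.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, KNNImputer, IterativeImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=15, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=7)
rng = np.random.default_rng(7)
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.15] = np.nan       # 15% missing completely at random

for name, imp in [("mean", SimpleImputer()),
                  ("kNN", KNNImputer(n_neighbors=5)),
                  ("iterative", IterativeImputer(random_state=7))]:
    Xi = imp.fit_transform(X_miss)
    acc = cross_val_score(RandomForestClassifier(random_state=7), Xi, y, cv=5).mean()
    print(f"{name:9s} imputation -> CV accuracy {acc:.3f}")
```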

Keywords: alzheimer's disease, missing value, machine learning, performance evaluation

Procedia PDF Downloads 220
7104 Persian Pistachio Nut (Pistacia vera L.) Dehydration in Natural and Industrial Conditions

Authors: Hamid Tavakolipour, Mohsen Mokhtarian, Ahmad Kalbasi Ashtari

Abstract:

In this study, the effects of various drying methods (sun drying, shade drying, and industrial drying) on final moisture content, shell splitting degree, shrinkage, and color change were studied. Sun drying resulted in a higher degree of shell splitting of pistachio nuts than the other drying methods. The ANOVA results showed that the different drying methods did not significantly affect the color change of the dried pistachio nuts. The results also showed that pistachio nuts dried by industrial drying had the lowest moisture content. After the drying process, the experimental drying data were fitted with five well-known drying models, namely Newton, Page, Silva et al., Peleg, and Henderson and Pabis. The results indicated that the Peleg and Page models gave better results than the other models for describing the moisture ratio of pistachio nuts in the industrial drying and open-sun (or shade) drying methods, respectively.
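
A sketch of how one of the named thin-layer models, the Page model MR = exp(-k tⁿ), is fitted to moisture-ratio data by nonlinear regression; the time and MR values are illustrative.

```python
# Sketch: fitting the Page thin-layer drying model MR = exp(-k * t**n)
# to moisture-ratio data; the time/MR values below are illustrative.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0., 1., 2., 4., 6., 8., 12.])                # drying time, h
MR = np.array([1.0, 0.78, 0.62, 0.40, 0.27, 0.18, 0.09])   # moisture ratio

def page(t, k, n):
    return np.exp(-k * t ** n)

(k, n), _ = curve_fit(page, t, MR, p0=(0.2, 1.0))
ss_res = np.sum((MR - page(t, k, n)) ** 2)
r2 = 1 - ss_res / np.sum((MR - MR.mean()) ** 2)
print(f"Page model: k={k:.4f}, n={n:.4f}, R2={r2:.4f}")
```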

Keywords: industrial drying, pistachio, quality properties, traditional drying

Procedia PDF Downloads 319
7103 The Effect of Austempering Temperature on Anisotropy of TRIP Steel

Authors: Abdolreza Heidari Noosh Abad, Amir Abedi, Davood Mirahmadi khaki

Abstract:

The high strength and flexibility of TRIP steels are the major reasons for their wide use in the automobile industry. Deep drawing, a common sheet metal manufacturing process, is used extensively in modern industry, particularly the automobile industry. To investigate the deep drawing potential of a material, the anisotropy of the steel sheet is studied and expressed as the R-value. TRIP steels have a multi-phase microstructure consisting typically of ferrite, bainite, and retained austenite, with the retained austenite appearing to be the most effective phase in the microstructure. In the present research, the Taguchi method has been employed to investigate the effect of austempering temperature on the anisotropy of TRIP steel. To this end, a steel with a chemical composition of 0.196C-1.42Si-1.41Mn was annealed at 810 °C and then austempered at 340-460 °C for 3, 6, and 9 minutes. The results show that the austempering temperature has a direct relationship with the R-value: with increasing austempering temperature, the retained austenite grain size increases along with increased solubility, which increases the R-value. According to the Taguchi analysis, the p-value for austempering temperature is less than 0.05, so its effect on the R-value is statistically significant.

Keywords: Taguchi method, hot rolling, thermomechanical process, anisotropy, R-value

Procedia PDF Downloads 314
7102 Credit Risk Evaluation Using Genetic Programming

Authors: Ines Gasmi, Salima Smiti, Makram Soui, Khaled Ghedira

Abstract:

Credit risk is considered one of the important issues for financial institutions, as it causes great losses for banks. To this end, numerous methods for credit risk evaluation have been proposed. Many evaluation methods are black-box models that cannot adequately reveal the information hidden in the data, so several works have focused on building transparent rule-based models. For credit risk assessment, the generated rules must be not only highly accurate but also highly interpretable. In this paper, we aim to build an accurate and transparent credit risk evaluation model that proposes a set of classification rules. In fact, we treat credit risk evaluation as an optimization problem solved with a genetic programming (GP) algorithm, where the goal is to maximize the accuracy of the generated rules. We evaluate our proposed approach on the German and Australian credit datasets. We compared our findings with some existing works; the results show that the proposed GP outperforms the other models.
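
A miniature sketch of the idea of evolving interpretable classification rules with accuracy as the fitness; the rule representation (conjunctions of feature thresholds), the synthetic data, and the GP operators below are deliberate simplifications of the paper's method.

```python
# Sketch: a tiny genetic program evolving a threshold-conjunction rule
# for credit-style data; dataset and representation are simplified stand-ins.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(300, 5))                 # normalized credit features
y = ((X[:, 0] > 0.6) & (X[:, 2] < 0.4)).astype(int)  # hidden "bad risk" rule

def predict(rule, X):                                # rule: (feature, op, threshold)
    out = np.ones(len(X), dtype=bool)
    for f, gt, th in rule:
        out &= (X[:, f] > th) if gt else (X[:, f] < th)
    return out.astype(int)

def random_rule():
    return [(int(rng.integers(5)), bool(rng.integers(2)), float(rng.uniform(0, 1)))
            for _ in range(int(rng.integers(1, 4)))]

pop = [random_rule() for _ in range(50)]
for _ in range(40):
    acc = [np.mean(predict(r, X) == y) for r in pop]
    keep = [pop[i] for i in np.argsort(acc)[-25:]]   # select the fittest half
    children = []
    for _ in range(25):                              # crossover + mutation
        a, b = keep[rng.integers(25)], keep[rng.integers(25)]
        child = (a[:1] + b[1:])[:3]
        if rng.random() < 0.3:
            child = child + random_rule()[:1]
        children.append(child[:3])
    pop = keep + children

best = max(pop, key=lambda r: np.mean(predict(r, X) == y))
print("accuracy:", np.mean(predict(best, X) == y), "rule:", best)
```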

Keywords: credit risk assessment, rule generation, genetic programming, feature selection

Procedia PDF Downloads 336
7101 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification. The statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not mask an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, which is calculated relative to the corresponding parent allele height. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in clustering and classification and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is represented by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in a finite mixture model, however, is practically difficult, even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components. Instead, a Dirichlet process, which is an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used representations of the Dirichlet process prior in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
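
A sketch of the CRP prior in isolation, showing how it lets the number of mixture components emerge from the data rather than being fixed; in the full model, each table would carry its own stutter-ratio regression. The concentration parameter and sample size are illustrative.

```python
# Sketch: Chinese restaurant process draws, the Dirichlet-process device that
# leaves the number of regression components unspecified in advance.
import numpy as np

def crp(n_customers, alpha, rng):
    """Return cluster assignments for n_customers under CRP(alpha)."""
    tables = []                       # number of customers at each table
    labels = []
    for i in range(n_customers):
        # Existing table k chosen w.p. tables[k]/(i+alpha); new table w.p. alpha/(i+alpha)
        probs = np.array(tables + [alpha], dtype=float) / (i + alpha)
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)
        else:
            tables[k] += 1
        labels.append(k)
    return labels, tables

rng = np.random.default_rng(0)
labels, tables = crp(200, alpha=1.5, rng=rng)
print("number of components discovered:", len(tables), "| sizes:", tables)
```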

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 315
7100 Adaptive Neuro Fuzzy Inference System Model Based on Support Vector Regression for Stock Time Series Forecasting

Authors: Anita Setianingrum, Oki S. Jaya, Zuherman Rustam

Abstract:

Forecasting stock prices is a challenging task due to the complex time series of the data. The complexity arises from the many variables that affect the stock market. Many time series models have been proposed before, but those previous models still have some problems: 1) they involve subjectivity in choosing the technical indicators, and 2) they rely on assumptions about the variables, which limits their applicability across datasets. Therefore, this paper studies a novel Adaptive Neuro-Fuzzy Inference System (ANFIS) time series model based on Support Vector Regression (SVR) for forecasting the stock market. In order to evaluate the performance of the proposed models, stock market transaction data of the TAIEX and HSI from January to December 2015 were collected as experimental datasets. As a result, the method outperformed its counterparts in terms of accuracy.

Keywords: ANFIS, fuzzy time series, stock forecasting, SVR

Procedia PDF Downloads 231
7099 Comparison of Fundamental Frequency Model and PWM Based Model for UPFC

Authors: S. A. Al-Qallaf, S. A. Al-Mawsawi, A. Haider

Abstract:

Among all FACTS devices, the unified power flow controller (UPFC) is considered the most versatile, owing to its capability to control all the transmission system parameters (impedance, voltage magnitude, and phase angle). With the growing interest in the UPFC, attention to developing mathematical models for it has increased, and several models have been introduced in the literature for different types of power system studies. This paper presents a novel comparison study between two dynamic models of the UPFC together with their proposed control strategies.

Keywords: FACTS, UPFC, dynamic modeling, PWM, fundamental frequency

Procedia PDF Downloads 337
7098 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding and approach to providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining: it enables doctors to intervene early to prevent problems or improve outcomes, and it assists in early disease detection and customized treatment planning for each person. Doctors can customize a patient's care by looking at their medical history, genetic profile, and current and previous therapies, so that treatments can be more effective and have fewer negative consequences. Beyond helping patients, it improves the efficiency of hospitals, for example by helping them determine the number of beds or doctors they require for the number of patients they expect. In this project, models such as logistic regression, random forests, and neural networks are used for predicting diseases and analyzing medical images; algorithms such as k-means group patients, and association rule mining identifies connections between treatments and patient responses. Time series techniques help in resource management by predicting patient admissions. These methods improve healthcare decision-making and personalized treatment. Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and operate more efficiently; it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 58
7097 A Decision Support System to Detect the Lumbar Disc Disease on the Basis of Clinical MRI

Authors: Yavuz Unal, Kemal Polat, H. Erdinc Kocer

Abstract:

In this study, a decision support system comprising three stages is proposed to detect disc abnormalities of the lumbar region. In the first stage, feature extraction, T2-weighted sagittal and axial magnetic resonance images (MRI) were taken from 55 people, and 27 appearance and shape features were acquired from both the sagittal and transverse images. In the second stage, feature weighting, the k-means clustering based feature weighting (KMCBFW) method proposed by Gunes et al. was applied. Finally, in the third stage, classification, classifier algorithms including the multi-layer perceptron (MLP) neural network, support vector machine (SVM), naïve Bayes, and decision tree were used to classify whether the subject has lumbar disc disease or not. To test the performance of the proposed method, the classification accuracy (%), sensitivity, specificity, precision, recall, f-measure, kappa value, and computation times were used. The best hybrid model is the combination of k-means clustering based feature weighting and the decision tree for detecting lumbar disc disease based on both sagittal and axial MR images.

Keywords: lumbar disc abnormality, lumbar MRI, lumbar spine, hybrid models, hybrid features, k-means clustering based feature weighting

Procedia PDF Downloads 507
7096 Medical Image Augmentation Using Spatial Transformations for Convolutional Neural Network

Authors: Trupti Chavan, Ramachandra Guda, Kameshwar Rao

Abstract:

The lack of data is a pressing problem in medical image analysis with convolutional neural networks (CNN). This work uses various spatial transformation techniques to address the medical image augmentation issue for knee detection and localization using an enhanced single shot detector (SSD) network. Spatial transforms such as negative, histogram equalization, power-law (gamma) correction, sharpening, averaging, and Gaussian blurring help to generate more samples, serve as pre-processing methods, and highlight the features of interest. The experimentation is done on the OpenKnee dataset, a collection of knee images from openly available online sources. The enhanced SSD is utilized for the detection and localization of the knee joint from a given X-ray image. It is an enhanced version of the well-known SSD network, modified to reduce the number of prediction boxes at the output side, and consists of a classification network (VGGNet) and an auxiliary detection network. The performance is measured in mean average precision (mAP), and 99.96% mAP is achieved using the proposed enhanced SSD with spatial transformations. The localization boundary is also comparatively more refined and closer to the ground truth with spatial augmentation, giving better detection and localization of knee joints.
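
A sketch of the six named spatial transforms applied to one grayscale image to multiply the training set, assuming OpenCV (cv2) is available; a synthetic image stands in for a real knee X-ray, and kernel sizes and the gamma value are illustrative.

```python
# Sketch: the spatial transforms named above, each producing one extra
# training sample from a grayscale X-ray-like image.
import numpy as np
import cv2

img = (np.clip(np.random.default_rng(0).normal(120, 30, (256, 256)), 0, 255)
       .astype(np.uint8))                      # stand-in for a knee X-ray

augmented = {
    "negative":  255 - img,
    "hist_eq":   cv2.equalizeHist(img),
    "power_law": np.uint8(255 * (img / 255.0) ** 0.6),        # gamma correction
    "sharpen":   cv2.filter2D(img, -1, np.array([[0, -1, 0],
                                                 [-1, 5, -1],
                                                 [0, -1, 0]], dtype=np.float32)),
    "average":   cv2.blur(img, (5, 5)),
    "gaussian":  cv2.GaussianBlur(img, (5, 5), 1.0),
}
for name, im in augmented.items():
    print(name, im.shape, im.dtype)            # each would be fed to the SSD trainer
```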

Keywords: data augmentation, enhanced SSD, knee detection and localization, medical image analysis, OpenKnee, spatial transformations

Procedia PDF Downloads 140
7095 Detection and Classification of Myocardial Infarction Using New Extracted Features from Standard 12-Lead ECG Signals

Authors: Naser Safdarian, Nader Jafarnia Dabanloo

Abstract:

In this paper we use four features, i.e., the Q-wave integral, QRS-complex integral, T-wave integral, and total integral, extracted from normal and patient ECG signals, to detect and localize myocardial infarction (MI) in the left ventricle of the heart. Our research focuses on the detection and localization of MI in the standard 12-lead ECG. We use the Q-wave and T-wave integrals because these features are important indicators for the detection of MI. We use pattern recognition methods such as the Artificial Neural Network (ANN) to detect and localize MI, because these methods classify normal and abnormal signals with good accuracy. We use one type of radial basis function (RBF) network called the Probabilistic Neural Network (PNN) because of its nonlinearity, as well as other classifiers such as k-Nearest Neighbors (KNN), the Multilayer Perceptron (MLP), and naive Bayes classification. We used the PhysioNet database for our training and test data, reaching over 80% accuracy on the test data for localization and over 95% for detection of MI. The main advantages of our method are its simplicity and good accuracy, and the classification accuracy can be improved further by adding more features. In short, a simple method based on only four features extracted from the standard ECG is presented, with good accuracy in MI localization.
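
A sketch of computing the four integral features, assuming the Q-wave, QRS-complex, and T-wave windows have already been delineated on a beat; the synthetic beat shape and window boundaries are illustrative, not the paper's delineation method.

```python
# Sketch: the four integral features from one ECG beat with pre-delineated
# wave windows; the beat is synthesized from Gaussian bumps for illustration.
import numpy as np
from scipy.integrate import trapezoid

fs = 360                                     # Hz, e.g., PhysioNet MIT-BIH rate
t = np.arange(0, 0.8, 1 / fs)                # one beat
beat = (-0.15 * np.exp(-((t - 0.18) / 0.010) ** 2)    # Q wave
        + 1.00 * np.exp(-((t - 0.20) / 0.015) ** 2)   # R wave
        + 0.30 * np.exp(-((t - 0.45) / 0.050) ** 2))  # T wave

windows = {"Q": (0.16, 0.19), "QRS": (0.16, 0.26),
           "T": (0.35, 0.55), "total": (0.0, 0.8)}
features = {}
for name, (a, b) in windows.items():
    m = (t >= a) & (t < b)
    features[name + "_integral"] = trapezoid(beat[m], t[m])  # area under the wave

print(features)   # feature vector for the ANN / PNN / KNN / MLP classifiers
```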

Keywords: ECG signal processing, myocardial infarction, features extraction, pattern recognition

Procedia PDF Downloads 445
7094 Experiments on Weakly-Supervised Learning on Imperfect Data

Authors: Yan Cheng, Yijun Shao, James Rudolph, Charlene R. Weir, Beth Sahlmann, Qing Zeng-Treitler

Abstract:

Supervised predictive models require labeled data for training purposes. Complete and accurate labeled data, i.e., a ‘gold standard’, is not always available, and imperfectly labeled data may need to serve as an alternative. An important question is whether the accuracy of the labeled data creates a performance ceiling for the trained model. In this study, we trained several models to recognize the presence of delirium in clinical documents using data with annotations that are not completely accurate (i.e., weakly-supervised learning). In the external evaluation, the support vector machine model with a linear kernel performed best, achieving an area under the curve of 89.3% and accuracy of 88%, surpassing the 80% accuracy of the training sample. We then generated a set of simulated data and carried out a series of experiments which demonstrated that models trained on imperfect data can (but do not always) outperform the accuracy of the training data; e.g., the area under the curve for some models is higher than 80% when trained on data with an error rate of 40%. Our experiments also showed that the error resistance of linear modeling is associated with larger sample size, error type, and linearity of the data (all p-values < 0.001). In conclusion, this study sheds light on the usefulness of imperfect data in clinical research via weakly-supervised learning.
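
A sketch of the simulation idea: corrupt training labels at a chosen error rate, train a linear SVM, and check whether performance on clean test labels exceeds the training-label accuracy. The data generator and error rates are illustrative.

```python
# Sketch: label-noise simulation for weakly-supervised learning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

for error_rate in (0.0, 0.2, 0.4):
    rng = np.random.default_rng(0)
    flip = rng.random(len(ytr)) < error_rate
    y_noisy = np.where(flip, 1 - ytr, ytr)        # imperfect "weak" labels
    clf = SVC(kernel="linear").fit(Xtr, y_noisy)
    auc = roc_auc_score(yte, clf.decision_function(Xte))
    print(f"label error {error_rate:.0%}: training-label accuracy "
          f"{1 - error_rate:.0%}, clean-test AUC {auc:.3f}")
```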

Keywords: weakly-supervised learning, support vector machine, prediction, delirium, simulation

Procedia PDF Downloads 180
7093 Development of an Optimised, Automated Multidimensional Model for Supply Chains

Authors: Safaa H. Sindi, Michael Roe

Abstract:

This project divides supply chain (SC) models into seven eras, according to the evolution of the market's needs over time. The five earliest eras describe the emergence of supply chains, while the last two eras are to be created. Research objectives: The aim is to generate the two latest eras, with their respective models, focusing on consumable goods. Era Six contains the Optimal Multidimensional Matrix (OMM), which incorporates most characteristics of the SC and allocates them into four quarters (agile, lean, leagile, and basic SC). This will help companies, especially SMEs, plan their optimal SC route. Era Seven creates an Automated Multidimensional Model (AMM), which upgrades the matrix of Era Six by accounting for all the supply chain factors (e.g., offshoring, sourcing, risk) in an interactive system with heuristic learning that helps larger companies and industries select the best SC model for their market. Methodologies: The data collection is based on a Fuzzy-Delphi study that analyses statements using fuzzy logic. The first round of the Delphi study contains statements (fuzzy rules) about the matrix of Era Six; the second round contains the feedback from the first round, and so on. Preliminary findings: Both models are applicable. The matrix of Era Six reduces the complexity of choosing the best SC model for SMEs by helping them identify the strategy, among basic SC, lean, agile, and leagile, that is tailored to their needs. The interactive heuristic learning in the AMM of Era Seven will help mitigate error and aid large companies in identifying and re-strategizing the best SC model and distribution system for their market and commodity, hence increasing efficiency. Potential contributions to the literature: A problematic issue facing many companies is deciding which SC model or strategy to adopt, given the many models and definitions developed over the years. This research simplifies the choice by putting most definitions in a template and most models in the matrix of Era Six. This research is original in that the division of the SC into eras, the matrix of Era Six (OMM) with Fuzzy-Delphi, and the heuristic learning in the AMM of Era Seven provide a synergy of tools that have not been combined before in the area of SC. Additionally, the OMM of Era Six is unique as it combines most characteristics of the SC, which is an original concept in itself.

Keywords: Leagile, automation, heuristic learning, supply chain models

Procedia PDF Downloads 379
7092 Numerical Investigation of Two Turbulence Models for Predicting the Temperature Separation in Conical Vortex Tube

Authors: M. Guen

Abstract:

A three-dimensional numerical study is used to analyze the behavior of the flow inside a vortex tube. The vortex tube, or Ranque-Hilsch vortex tube, is a simple device capable of dividing compressed air entering tangentially through the inlet nozzles into two flows with different temperatures, warm and cold, a phenomenon known in the literature as temperature separation. The k-ω SST and k-ε turbulence models are used to predict the turbulent flow behaviour inside the tube. The vortex tube is a commercial Exair 708 slpm (25 scfm) tube. The cold and hot exit areas are 30.2 and 95 mm², respectively. The vortex nozzle consists of 6 straight slots; the height and width of each slot are 0.97 mm and 1.41 mm, so the total area normal to the flow associated with the six nozzles is 8.15 mm². The present study focuses on a comparison between the two turbulence models using a new configuration of vortex tube (the conical vortex tube). The performance curves of temperature separation versus cold outlet mass fraction were calculated and compared with experimental and numerical studies of other researchers.

Keywords: conical vortex tube, temperature separation, cold mass fraction, turbulence

Procedia PDF Downloads 239
7091 Kinetics, Equilibrium and Thermodynamics of the Adsorption of Triphenyltin onto NanoSiO₂/Fly Ash/Activated Carbon Composite

Authors: Olushola S. Ayanda, Olalekan S. Fatoki, Folahan A. Adekola, Bhekumusa J. Ximba, Cecilia O. Akintayo

Abstract:

In the present study, the kinetics, equilibrium, and thermodynamics of the adsorption of triphenyltin (TPT) from TPT-contaminated water onto a nanoSiO₂/fly ash/activated carbon composite were investigated in a batch adsorption system. Equilibrium adsorption data were analyzed using the Langmuir, Freundlich, Temkin, and Dubinin-Radushkevich (D-R) isotherm models. Pseudo-first- and second-order, Elovich, and fractional power models were applied to test the kinetic data, and in order to understand the mechanism of adsorption, thermodynamic parameters such as ΔG°, ΔS°, and ΔH° were also calculated. The results showed very good compliance with the pseudo-second-order equation, while the Freundlich and D-R models fit the experimental data. Approximately 99.999% of the TPT was removed from an initial concentration of 100 mg/L at 80 °C, a contact time of 60 min, pH 8, and a stirring speed of 200 rpm. Thus, the nanoSiO₂/fly ash/activated carbon composite could be used as an effective adsorbent for the removal of TPT from contaminated water and wastewater.

Keywords: isotherm, kinetics, nanoSiO₂/fly ash/activated carbon composite, triphenyltin

Procedia PDF Downloads 284
7090 Critical Conditions for the Initiation of Dynamic Recrystallization Prediction: Analytical and Finite Element Modeling

Authors: Pierre Tize Mha, Mohammad Jahazi, Amèvi Togne, Olivier Pantalé

Abstract:

Large-size forged blocks made of medium-carbon high-strength steels are extensively used in the automotive industry as dies for the production of bumpers and dashboards through the plastic injection process. The manufacturing process of the large blocks starts with ingot casting, followed by open die forging and a quench-and-temper heat treatment to achieve the desired mechanical properties, and numerical simulation is widely used nowadays to predict these properties before the experiment. However, the temperature gradient inside the specimen remains challenging: the temperature inside the material before loading is not uniform, yet simulations commonly impose a constant temperature on the assumption that the temperature homogenizes after some holding time. To be close to the experiment, the real temperature distribution through the specimen is therefore needed before the mechanical loading. Thus, we present a robust algorithm that allows the calculation of the temperature gradient within the specimen, representing the real temperature distribution before deformation. Indeed, most numerical simulations consider a uniform temperature, which is not really the case because the surface and core temperatures of the specimen are not identical. Another feature that influences the mechanical properties of the specimen is recrystallization, which strongly depends on the deformation conditions and the type of deformation, such as upsetting or cogging. Indeed, upsetting and cogging are the stages where the greatest deformations are observed, and many microstructural phenomena, like recrystallization, can be observed and require in-depth characterization. Complete dynamic recrystallization plays an important role in the final grain size during the process and therefore helps to increase the mechanical properties of the final product. Thus, identifying the conditions for the initiation of dynamic recrystallization is still relevant. The temperature distribution within the sample and the strain rate also influence recrystallization initiation, so developing a technique for predicting its initiation remains challenging. In this perspective, we propose, in addition to the algorithm for obtaining the temperature distribution before the loading stage, an analytical model for determining the initiation of this recrystallization. These two techniques are implemented in the Abaqus finite element software via the UAMP and VUHARD subroutines for comparison with a simulation where an isothermal temperature is imposed. An artificial neural network (ANN) model describing the plastic behavior of the material is also implemented via the VUHARD subroutine. From the simulation, the temperature distribution inside the material and the recrystallization initiation are properly predicted and compared with literature models.
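
A sketch of the kind of pre-loading temperature calculation described here, reduced to explicit 1D finite-difference heat conduction across the half-thickness of a block; the geometry, diffusivity, furnace temperature, and heating time are illustrative, not the paper's UAMP algorithm.

```python
# Sketch: explicit 1D finite-difference heat conduction giving the surface-to-
# core temperature distribution of a block before loading; values illustrative.
import numpy as np

L, nx = 0.5, 51                      # half-thickness (m), grid points
alpha = 1.2e-5                       # thermal diffusivity of steel (m^2/s), assumed
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha             # stable explicit step (Fourier number < 0.5)

T = np.full(nx, 20.0)                # block initially at room temperature (deg C)
T_furnace = 1100.0

for _ in range(int(3600 / dt)):      # one hour of heating
    T[0] = T_furnace                 # surface follows the furnace temperature
    T[-1] = T[-2]                    # symmetry (zero flux) at the core
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"surface {T[0]:.0f} C, mid {T[nx // 2]:.0f} C, core {T[-1]:.0f} C")
# This non-uniform profile would initialize the FE model instead of a constant T.
```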

Keywords: dynamic recrystallization, finite element modeling, artificial neural network, numerical implementation

Procedia PDF Downloads 71
7089 Strategy Management of Soybean (Glycine max L.) for Dealing with Extreme Climate through the Use of Cropsyst Model

Authors: Aminah Muchdar, Nuraeni, Eddy

Abstract:

The aims of the research are: (1) to verify the CropSyst crop model against experimental field data for soybean and (2) to predict planting time and potential yield of the soybean crop with the CropSyst model. The research is divided into several stages: (1) a calibration stage, conducted in the field from June until September 2015, and (2) a model application stage, in which the data obtained from the field calibration are entered into the CropSyst model. The data required by the model are climate data, soil data, and crop genetic data. The agreement between the field results and the CropSyst simulation is indicated by an efficiency index (EF) of 0.939, showing that the model is well suited for use. The calculated RRMSE of 1.922% shows that the prediction error of the simulation relative to the field results is about 1.92%. It is concluded that the CropSyst-based prediction of soybean planting time is valid for use, and that the appropriate planting time for soybeans, mainly on rain-fed land, is at the end of the rainy season; in the study above, the first planting time (June 2, 2015) gave the highest production because there was still some rain at that time. The Tanggamus variety is more resistant to delayed planting, as its percentage yield decrease per decade is lower than the average of all varieties.
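
A sketch of the two agreement statistics cited above, a Nash-Sutcliffe-style efficiency index (EF) and the relative RMSE, computed from paired observed/simulated values; the yield numbers are illustrative.

```python
# Sketch: efficiency index (EF) and RRMSE for model-vs-field agreement.
import numpy as np

def efficiency_index(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rrmse(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 100 * np.sqrt(np.mean((obs - sim) ** 2)) / obs.mean()

observed  = [2.10, 2.35, 1.95, 2.60, 2.20]   # illustrative soybean yields, t/ha
simulated = [2.14, 2.30, 1.99, 2.55, 2.25]
print(f"EF = {efficiency_index(observed, simulated):.3f}, "
      f"RRMSE = {rrmse(observed, simulated):.3f}%")
```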

Keywords: soybean, CropSyst, calibration, efficiency index, RRMSE

Procedia PDF Downloads 167
7088 Feasibility of Voluntary Deep Inspiration Breath-Hold Radiotherapy Technique Implementation without Deep Inspiration Breath-Hold-Assisting Device

Authors: Auwal Abubakar, Shazril Imran Shaukat, Noor Khairiah A. Karim, Mohammed Zakir Kassim, Gokula Kumar Appalanaido, Hafiz Mohd Zin

Abstract:

Background: Voluntary deep inspiration breath-hold radiotherapy (vDIBH-RT) is an effective cardiac dose reduction technique during left breast radiotherapy. This study aimed to assess the accuracy of the implementation of the vDIBH technique among left breast cancer patients without the use of a special device such as a surface-guided imaging system. Methods: The vDIBH-RT technique was implemented among thirteen (13) left breast cancer patients at the Advanced Medical and Dental Institute (AMDI), Universiti Sains Malaysia. Breath-hold monitoring was performed based on breath-hold skin marks and laser light congruence observed on zoomed CCTV images from the control console during each delivery. The initial setup was verified using cone beam computed tomography (CBCT) during breath-hold. Each field was delivered using multiple beam segments to keep the delivery time to 20 seconds, which patients can tolerate in breath-hold. The data were analysed using an in-house MATLAB algorithm, and the PTV margin was computed based on van Herk's margin recipe. Results: The setup errors analysed from CBCT show that the population systematic errors in the lateral (x), longitudinal (y), and vertical (z) axes were 2.28 mm, 3.35 mm, and 3.10 mm, respectively. Based on the CBCT image guidance, the planning target volume (PTV) margin that would be required for vDIBH-RT using the CCTV/laser monitoring technique is 7.77 mm, 10.85 mm, and 10.93 mm in the x, y, and z axes, respectively. Conclusion: It is feasible to safely implement vDIBH-RT among left breast cancer patients without special equipment. The breath-hold monitoring technique is cost-effective, radiation-free, easy to implement, and allows real-time breath-hold monitoring.
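
The van Herk margin recipe referenced here is M = 2.5Σ + 0.7σ, combining population systematic (Σ) and random (σ) errors. A sketch applying it to the reported systematic errors follows; the abstract does not report the random-error components, so the σ values below are back-calculated placeholders chosen only so the output roughly matches the reported margins.

```python
# Sketch: van Herk margin recipe, PTV margin = 2.5*Sigma + 0.7*sigma.
# Sigma values are from the abstract; sigma values are illustrative only.
systematic = {"x": 2.28, "y": 3.35, "z": 3.10}   # mm, from the CBCT analysis
random_err = {"x": 2.96, "y": 3.54, "z": 4.54}   # mm, back-calculated placeholders

for axis in systematic:
    margin = 2.5 * systematic[axis] + 0.7 * random_err[axis]
    print(f"{axis}: PTV margin = 2.5*{systematic[axis]} + "
          f"0.7*{random_err[axis]} = {margin:.2f} mm")
```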

Keywords: vDIBH, cone beam computed tomography, radiotherapy, left breast cancer

Procedia PDF Downloads 36
7087 Application of GA Optimization in Analysis of Variable Stiffness Composites

Authors: Nasim Fallahi, Erasmo Carrera, Alfonso Pagani

Abstract:

Variable angle tow (VAT) describes fibres that are curvilinearly steered within a composite lamina, which significantly enlarges the stiffness tailoring freedom of the laminate. Composite structures with curvilinear fibres have been shown to improve the buckling load carrying capability compared with straight-fibre laminates. However, the optimal design and analysis of VAT laminates face high computational costs due to the increasing number of variables. In this article, an efficient optimization scheme is used in combination with the 1D Carrera Unified Formulation (CUF) to find the optimum fibre orientation angles for buckling analysis. Particular emphasis is placed on LE-based CUF models, which use Lagrange expansions to provide a layerwise description of the problem unknowns. The first critical buckling load is considered under simply supported boundary conditions. Special attention is paid to the sensitivity of the buckling load to the fibre orientation angle, compared with the results obtained through a Genetic Algorithm (GA) optimization framework; an Artificial Neural Network (ANN) is then applied to investigate the accuracy of the optimized model. As a result, the numerical CUF approach combined with the optimization demonstrates the robustness and computational efficiency of the proposed methodology.
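
A toy sketch of the GA layer of such a framework, searching ply fibre angles to maximize a buckling-load objective; the quadratic fitness function below is a placeholder for the CUF buckling analysis, which is far too expensive to inline here, and the GA settings are illustrative.

```python
# Sketch: a tiny genetic algorithm over ply fibre angles; fitness() stands in
# for the CUF first critical buckling load evaluation.
import numpy as np

rng = np.random.default_rng(1)
n_plies, pop_size, n_gen = 4, 30, 60

def fitness(angles):                       # placeholder for the CUF buckling load
    return -np.sum((angles - np.array([45., -45., 60., -60.])) ** 2)

pop = rng.uniform(-90, 90, size=(pop_size, n_plies))
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-pop_size // 2:]]        # truncation selection
    kids = []
    while len(kids) < pop_size - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        child = np.where(rng.random(n_plies) < 0.5, a, b)     # uniform crossover
        child += rng.normal(0, 5, n_plies) * (rng.random(n_plies) < 0.2)  # mutation
        kids.append(np.clip(child, -90, 90))
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best ply angles (deg):", np.round(best, 1))
```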

Keywords: beam structures, layerwise, optimization, variable stiffness

Procedia PDF Downloads 122
7086 A Parallel Approach for 3D-Variational Data Assimilation on GPUs in Ocean Circulation Models

Authors: Rossella Arcucci, Luisa D'Amore, Simone Celestino, Giuseppe Scotti, Giuliano Laccetti

Abstract:

This work is the first step in a rather wide research activity, in collaboration with the Euro-Mediterranean Center on Climate Change, aimed at introducing scalable approaches into ocean circulation models. We discuss the design and implementation of a parallel algorithm for solving the variational data assimilation (DA) problem on graphics processing units (GPUs). The algorithm is based on the fully scalable 3DVar DA model previously proposed by the authors, which uses a domain decomposition approach (we refer to this model as the DD-DA model). We proceed with an incremental porting process consisting of three distinct stages: requirements and source code analysis, incremental development of CUDA kernels, and testing and optimization. Experiments confirm the theoretical performance analysis based on the so-called scale-up factor, demonstrating that the DD-DA model can be suitably mapped onto GPU architectures.
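
A sketch of the 3DVar cost function that each subdomain solver minimizes, J(x) = ½(x−xb)ᵀB⁻¹(x−xb) + ½(Hx−y)ᵀR⁻¹(Hx−y), here with a generic CPU optimizer and toy sizes; B, R, and H are illustrative stand-ins for the DD-DA operators the GPU kernels implement.

```python
# Sketch: the 3DVar cost function and its minimization on a toy problem.
import numpy as np
from scipy.optimize import minimize

n, m = 50, 20                                  # state and observation sizes
rng = np.random.default_rng(0)
xb = rng.normal(size=n)                        # background state
H = rng.normal(size=(m, n)) / np.sqrt(n)       # observation operator
y = H @ (xb + 0.5 * rng.normal(size=n))        # observations of the "truth"
B_inv = np.eye(n) / 0.5**2                     # background error covariance^-1
R_inv = np.eye(m) / 0.1**2                     # observation error covariance^-1

def J(x):
    db = x - xb
    do = H @ x - y
    return 0.5 * db @ B_inv @ db + 0.5 * do @ R_inv @ do

def gradJ(x):
    return B_inv @ (x - xb) + H.T @ (R_inv @ (H @ x - y))

res = minimize(J, xb, jac=gradJ, method="L-BFGS-B")
print("J(xb) =", round(J(xb), 2), " ->  J(xa) =", round(J(res.x), 2))
```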

Keywords: data assimilation, GPU architectures, ocean models, parallel algorithm

Procedia PDF Downloads 400