Search results for: support vector data description
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 30371

29981 Measuring Financial Asset Return and Volatility Spillovers, with Application to Sovereign Bond, Equity, Foreign Exchange and Commodity Markets

Authors: Petra Palic, Maruska Vizek

Abstract:

We provide an in-depth analysis of the interdependence of asset returns and volatilities in developed and developing countries. The analysis is split into three parts. In the first part, we use a multivariate GARCH model to provide stylized facts on cross-market volatility spillovers. In the second part, we use the generalized vector autoregressive methodology developed by Diebold and Yilmaz (2009) to estimate separate measures of return spillovers and volatility spillovers among sovereign bond, equity, foreign exchange and commodity markets. In particular, our analysis is focused on cross-market return and volatility spillovers in 19 developed and developing countries. To estimate these spillovers, we use daily data from 2008 to 2017. In the third part of the analysis, we use a generalized vector autoregressive framework to estimate total and directional volatility spillovers. We use the same daily data span for one developed and one developing country in order to characterize daily volatility spillovers across stock, bond, foreign exchange and commodity markets.
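
A minimal sketch of the Diebold-Yilmaz (2009) total spillover index, built from the Cholesky-based forecast error variance decomposition of a VAR as in the original paper. This is illustrative rather than the authors' code: the series names, lag order and horizon are assumptions, and the data are synthetic.

```python
# Sketch: Diebold-Yilmaz (2009) total spillover index from a VAR's
# forecast error variance decomposition (FEVD).
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

def total_spillover_index(returns: pd.DataFrame, lags: int = 2, horizon: int = 10) -> float:
    """Share of H-step forecast error variance due to cross-market shocks."""
    res = VAR(returns.dropna()).fit(lags)
    # decomp has shape (n_series, horizon, n_series): variance of series i
    # at step h attributed to shocks in series j.
    decomp = res.fevd(horizon).decomp
    last = decomp[:, -1, :]                 # H-step-ahead decomposition
    cross = last.sum() - np.trace(last)     # off-diagonal contributions
    return 100.0 * cross / last.sum()       # spillover index in percent

# Example with synthetic daily returns for four markets:
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.standard_normal((1000, 4)),
                    columns=["bond", "equity", "fx", "commodity"])
print(f"Total spillover index: {total_spillover_index(data):.1f}%")
```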

Keywords: cross-market spillovers, sovereign bond markets, equity markets, value at risk (VAR)

Procedia PDF Downloads 261
29980 Wave Pressure Metering with the Specific Instrument and Measure Description Determined by the Shape and Surface of the Instrument including the Number of Sensors and Angle between Them

Authors: Branimir Jurun, Elza Jurun

Abstract:

The focus of this paper is the description and mode of operation of an instrument for wave pressure metering. An essential component of the paper is the proposal of a metering unit for direct wave pressure measurement, determined by the shape and surface of the instrument, including the number of sensors and the angle between them. The instruments applied so far determine the wave pressure on a particular area indirectly, by means of wave height, length, direction, period and other components. This instrument allows direct measurement, i.e. measurement without additional calculation, of the wave pressure expressed in a standardized unit of measure. To that end, the instrument has a standardized form, surface, number of sensors and angle between them. In addition, it is designed to follow the wave and always remain on the water surface. The quality of the database produced by the instrument is ensured by an Arduino chip. The chip is programmed to receive two readings from each sensor every second; from these data, a unique representative value is estimated in a pre-defined manner. By this procedure, all relevant wave pressure measurement results are directly and immediately registered. The final goal of establishing such a rich database is a comprehensive statistical analysis that ranges from multi-criteria analysis, across different kinds of modeling and parameter testing, to hypothesis testing relating to a wide variety of man-made activities such as beach filling, security cages for aquaculture, and bridge construction.

Keywords: instrument, metering, water, waves

Procedia PDF Downloads 264
29979 Informational Support, Anxiety and Satisfaction with Care among Family Caregivers of Patients Admitted in Critical Care Units of B.P. Koirala Institute of Health Sciences, Nepal

Authors: Rosy Chaudhary, Pushpa Parajuli

Abstract:

Background and Objectives: Informational support to family members has significant potential for reducing the distress related to the hospitalization of their patient in the critical care unit, enabling them to cope better and support the patient. The objectives of the study are to assess family members’ perception of informational support, anxiety and satisfaction with care, to reveal associations with selected socio-demographic variables, and to investigate the correlation between informational support, anxiety and satisfaction with care. Materials and Methods: A descriptive cross-sectional study was conducted among 39 family caregivers of patients admitted to the critical care unit of BPKIHS (B.P. Koirala Institute of Health Sciences). A consecutive sampling technique was used, and data were collected over a duration of one month using an interview schedule. Descriptive and inferential statistics were used. Results: The mean age of the respondents was 34.97 ± 10.64 years, and two-thirds (66.70%) were male. The mean score for informational support was 25.72 (SD = 5.66; theoretical range 10-40). Mean anxiety was 10.41 (SD = 5.02; theoretical range 7-21). The mean score for satisfaction with care was 40.77 (SD = 6.77; theoretical range 14-64). A moderate positive correlation was found between informational support and satisfaction with care (r = 0.551, p < 0.001), and a moderate negative correlation was found between anxiety and satisfaction with care (r = -0.590; p < 0.001). No relationship was noted between informational support and anxiety. Conclusion: The informational support and the satisfaction of the family caregivers with the care provided to their patients were satisfactory. More than three-fourths of the family caregivers had anxiety, the associated factors being the educational status of the caregivers, family income and the duration of visiting hours. The positive correlation between informational support and satisfaction with care justifies the need for comprehensive information to be given to family caregivers by health personnel. There was a negative correlation between anxiety and satisfaction with care.

Keywords: anxiety, caregivers, critical care unit, informational support, family

Procedia PDF Downloads 352
29978 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join

Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel

Abstract:

MapReduce is a programming model used to handle and process massive data sets. The rapid increase in data size and the rise of big data make the analysis of such data one of today's most important issues. MapReduce is used to analyze data and extract useful information by means of two simple functions, map and reduce, which alone are written by the programmer, while load balancing, fault tolerance and high scalability are provided by the framework. The most important operation in data analysis is the join, but MapReduce does not support joins directly. This paper explains two two-way MapReduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, which uses a hash table to increase performance by eliminating unused records as early as possible and by applying the join with a hash table rather than matching join keys in a map function against the other table in the second phase. Using hash tables does not inflate memory usage, because only the matched records from the second table are saved. Our experimental results show that the hash semi-join algorithm achieves higher performance than the two other algorithms as the data size grows from 10 million to 500 million records, with running time increasing according to the number of joined records between the two tables.
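
A minimal single-machine sketch of the hash semi-join idea described above: hash the join keys, drop unmatched records of the second table early, then probe the hash table instead of matching keys in a map function. Table layouts and key positions are illustrative, not the paper's Hadoop implementation.

```python
# Sketch of a hash semi-join between two tables of tuples.
def hash_semi_join(left, right, key_left=0, key_right=0):
    # Semi-join phase: keep only right-table records whose key occurs in
    # the left table, so unused records are eliminated as early as possible
    # and the hash table holds matched records only.
    left_keys = {row[key_left] for row in left}
    matched = {}
    for row in right:
        k = row[key_right]
        if k in left_keys:
            matched.setdefault(k, []).append(row)
    # Join phase: probe the hash table rather than re-scanning the table.
    return [l + r for l in left for r in matched.get(l[key_left], [])]

orders = [(1, "ordA"), (2, "ordB"), (4, "ordC")]       # left table
customers = [(1, "Alice"), (2, "Bob"), (3, "Carol")]   # right table
print(hash_semi_join(orders, customers))
# [(1, 'ordA', 1, 'Alice'), (2, 'ordB', 2, 'Bob')]
```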

Keywords: map reduce, hadoop, semi join, two way join

Procedia PDF Downloads 513
29977 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings

Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir

Abstract:

Acute myocardial infarction is a major cause of death worldwide; therefore, its fast and reliable diagnosis is a major clinical need. The ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pain, together with changes in the ST segment and T wave of the ECG, occurs shortly before the start of myocardial infarction. In this study, a technique which detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings was constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs are analyzed for each patient: the pre-inflation ECG, acquired before any catheter insertion, and the occlusion ECG, acquired during balloon inflation. By using the pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize the SVM hyperparameters by means of the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain optimal classification performance. Applying the developed classification technique to real ECG recordings shows that the proposed technique provides highly reliable detection of the anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based solely on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to detect outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of different threshold values. For different discrimination thresholds and numbers of ECG segments, probability of detection and probability of false alarm values are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for GMM-based classification. Moreover, the comparison between the performances of SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
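A minimal sketch of the two detectors described above: an RBF-kernel SVM tuned by grid search with 10-fold cross-validation, and a GMM whose average log-likelihood is thresholded Neyman-Pearson style. The feature arrays and grid values are illustrative stand-ins for the ST/T-derived features.

```python
# Sketch: per-patient SVM with grid search, and GMM outlier detection.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.standard_normal((150, 6))          # stand-in ST/T-derived features
y = rng.integers(0, 2, 150)                # 0 = non-ischemic, 1 = ischemic

# RBF-kernel SVM: grid-search C and gamma with 10-fold cross-validation.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=10)
grid.fit(X, y)
print("best SVM params:", grid.best_params_)

# GMM fitted on one state only; a low likelihood flags an outlier segment.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X[y == 0])
loglik = gmm.score_samples(X)              # per-segment log-likelihood
for tau in np.quantile(loglik, [0.05, 0.10, 0.20]):   # threshold sweep
    detected = loglik < tau                # flagged as ischemic
    print(f"threshold {tau:.2f}: {detected.mean():.0%} flagged")
```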

Keywords: ECG classification, Gaussian mixture model, Neyman–Pearson approach, support vector machine

Procedia PDF Downloads 162
29976 Incorporating Information Gain in Regular Expressions Based Classifiers

Authors: Rosa L. Figueroa, Christopher A. Flores, Qing Zeng-Treitler

Abstract:

A regular expression consists of a sequence of characters that describes a text pattern. Usually, in clinical research, regular expressions are created manually by programmers together with domain experts. Lately, there have been several efforts to investigate how to generate them automatically. This article presents a text classification algorithm based on regexes. The algorithm, named REX, was designed and then implemented as a simplified method to create regexes to classify Spanish text automatically. In order to classify ambiguous cases, such as when multiple labels are assigned to a testing example, REX includes an information gain method. Two data sets were used to evaluate the algorithm's effectiveness in clinical text classification tasks. The results indicate that the regular-expression-based classifier proposed in this work performs statistically better, regarding accuracy and F-measure, than Support Vector Machine and Naïve Bayes for both datasets.
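
A minimal sketch of scoring a candidate regex by information gain over a labeled corpus, in the spirit of how REX could break ties between competing labels. The corpus, labels and pattern are illustrative, not taken from the study's datasets.

```python
# Sketch: information gain of a regex feature over labeled texts.
import math
import re

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(l) for l in set(labels)) if c)

def information_gain(pattern, texts, labels):
    # Partition the corpus by whether the regex matches, then compute
    # H(labels) - sum over parts of weighted H(labels | part).
    matches = [bool(re.search(pattern, t)) for t in texts]
    gain = entropy(labels)
    for flag in (True, False):
        subset = [l for l, m in zip(labels, matches) if m is flag]
        if subset:
            gain -= len(subset) / len(labels) * entropy(subset)
    return gain

texts = ["dolor toracico agudo", "sin dolor", "dolor abdominal", "paciente estable"]
labels = ["pain", "no-pain", "pain", "no-pain"]
print(f"{information_gain(r'dolor', texts, labels):.3f}")  # ~0.311
```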

Keywords: information gain, regular expressions, smith-waterman algorithm, text classification

Procedia PDF Downloads 320
29975 Business Intelligence for Profiling of Telecommunication Customer

Authors: Rokhmatul Insani, Hira Laksmiwati Soemitro

Abstract:

Business intelligence is a methodology that systematically exploits data to produce information and knowledge, and it can support the decision-making process. Two methods in business intelligence are the data warehouse and data mining. A data warehouse can store historical data drawn from transactional data; for data modelling in the data warehouse, we apply dimensional modelling by Kimball. Data mining is used to extract patterns from the data and gain insight from it. Data mining has many techniques, one of which is segmentation. For the profiling of telecommunication customers, we use customer segmentation according to customers' usage of services, invoices and payments. Customers can be grouped according to their characteristics, and the profitable customers can be identified. We apply the K-Means clustering algorithm for segmentation, with the RFM (Recency, Frequency and Monetary) model as input variables. The entire data mining process is carried out with the IBM SPSS Modeler tool.
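
A minimal sketch of the RFM-based K-Means segmentation described above: scale the recency, frequency and monetary variables and cluster the customers. The numbers and the choice of k are illustrative; the study itself works in IBM SPSS Modeler rather than Python.

```python
# Sketch: K-Means customer segmentation on RFM variables.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per customer: days since last use, calls per month, invoice total.
rfm = np.array([[5, 120, 300.0],
                [40, 15, 35.0],
                [3, 200, 520.0],
                [60, 8, 20.0],
                [10, 90, 210.0]])

X = StandardScaler().fit_transform(rfm)       # put RFM variables on one scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for cluster in sorted(set(labels)):
    seg = rfm[labels == cluster]              # inspect each segment's profile
    print(f"segment {cluster}: mean monetary {seg[:, 2].mean():.0f}")
```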

Keywords: business intelligence, customer segmentation, data warehouse, data mining

Procedia PDF Downloads 483
29974 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression (SVMR), partial least squares regression, extra trees regression (ETR), random forest regression, extreme gradient boosting, and principal component analysis-neural network (PCA-NN), are employed to predict glucose concentration. The NIR spectra data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
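
A minimal sketch of the repeated random-split comparison described above, with three of the six regressors (SVR, extra trees, random forest) on synthetic stand-in spectra; the actual study uses measured NIR data and reports R and R² per model.

```python
# Sketch: compare regressors over ten random train/test splits.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = rng.standard_normal((300, 50))                          # stand-in spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.standard_normal(300)   # "glucose"

models = {"SVR": SVR(),
          "ETR": ExtraTreesRegressor(random_state=0),
          "RFR": RandomForestRegressor(random_state=0)}
for name, model in models.items():
    scores = []
    for seed in range(10):              # repeat the split to generalize
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2,
                                              random_state=seed)
        scores.append(model.fit(Xtr, ytr).score(Xte, yte))  # R^2 on test
    print(f"{name}: mean R^2 = {np.mean(scores):.3f}")
```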

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 94
29973 Developing a Relational Database Management System (RDBMS) Supporting Product Life Cycle Applications

Authors: Yusri Yusof, Chen Wong Keong

Abstract:

This paper presents the implementation details of a relational database management system for a STEP-technology product model repository. It is able to support the implementation of any EXPRESS language schema, although it has been implemented primarily to support mechanical product life cycle applications. The database supports the input of the STEP Part 21 file format from CAD, covering geometrical and topological data, and supports a range of queries for mechanical product life cycle applications. The proposed relational database management system uses the entity-to-table method (R1) rather than the type-to-table method (R4); the two mapping methods have their own strengths and drawbacks.
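
A minimal sketch of the entity-to-table (R1) mapping idea: each EXPRESS entity becomes one relational table, with its attributes as columns. The entity below is loosely modeled on a STEP geometry entity; the exact schema of the paper's repository is not shown here.

```python
# Sketch: one EXPRESS entity mapped to one SQL table (entity-to-table, R1).
import sqlite3

conn = sqlite3.connect(":memory:")
# EXPRESS (schematic): ENTITY cartesian_point; coordinates: LIST [1:3] OF REAL;
conn.execute("""CREATE TABLE cartesian_point (
                    id INTEGER PRIMARY KEY,
                    x REAL, y REAL, z REAL)""")
conn.execute("INSERT INTO cartesian_point VALUES (1, 0.0, 2.5, 7.1)")
print(conn.execute("SELECT * FROM cartesian_point").fetchall())
```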

Keywords: RDBMS, CAD, ISO 10303, part-21 file

Procedia PDF Downloads 536
29972 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have been targeted towards protein-coding regions alone. There are therefore challenges associated with gaining biological insights from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both protein-coding and non-coding regions. Alignment-free techniques can overcome this limitation. Therefore, this study was designed to develop an efficient sequence alignment-free model for identifying both protein-coding and non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function. A parameter vector was estimated for every sample in the 37,503 data points in a bid to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the protein-coding and non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was measured in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying protein-coding and non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over the first, second and third iterations. The cost (the difference between the predicted and the actual outcome) likewise decreased from 1.446 to 0.842 and then to 0.718 for the first, second and third iterations. The iterations terminated at the 390th epoch, with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an area under the ROC curve of 0.97, indicating improved predictive ability, and identified both protein-coding and non-coding regions with an F1 score of 0.970, an accuracy of 0.969, a sensitivity of 0.966, and a specificity of 0.973. Using 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying protein-coding and non-coding regions in transcriptomes. The developed model efficiently identifies protein-coding and non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
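
A minimal sketch of the core mechanism described above: logistic regression with a sigmoid activation fitted by gradient descent on the negative log-likelihood over six features, followed by a threshold swept over the scores instead of a fixed 0.5. The data, learning rate and thresholds are illustrative, not the PNRI implementation.

```python
# Sketch: logistic regression (sigmoid + MLE gradient) with dynamic thresholding.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
X = rng.standard_normal((500, 6))                 # six sequence features
w_true = np.array([0.04, 0.52, 0.72, 0.88, 1.16, 2.58])
y = (sigmoid(X @ w_true) > rng.random(500)).astype(float)  # 1 = coding

w = np.zeros(6)
for epoch in range(400):                          # iterate toward convergence
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)                 # gradient of the mean NLL
    w -= 0.5 * grad

scores = sigmoid(X @ w)
for t in (0.3, 0.5, 0.7):                         # dynamic thresholding sweep
    pred = scores >= t
    print(f"threshold {t}: accuracy {(pred == y.astype(bool)).mean():.3f}")
```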

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 68
29971 Tackling the Digital Divide: Enhancing Video Consultation Access for Digital Illiterate Patients in the Hospital

Authors: Wieke Ellen Bouwes

Abstract:

This study aims to unravel which factors enhance the accessibility of video consultations (VCs) for patients with low digital literacy. Thirteen in-depth interviews were held with patients, hospital employees, eHealth experts, and digital support organizations. Patients with low digital literacy received in-home support during real-time video consultations and were observed during the set-up of these consultations. Key findings highlight the importance of patient acceptance, emphasizing the benefits of video consultations and avoiding standardized courses. The lack of a uniform video consultation system across healthcare providers poses a barrier. Healthcare practitioners' familiarity with support organizations, which assist patients in the usage of digital tools, enhances accessibility. Moreover, considerations regarding the Dutch General Data Protection Regulation (GDPR) law influence the support patients receive, and provider readiness to use video consultations influences patient access. Further, alignment between learning styles and support methods seems to determine patients' ability to learn how to use video consultations. Future research could delve into tailored learning styles and technological solutions for remote access to further explore the effectiveness of learning methods.

Keywords: video consultations, digital literacy skills, effectiveness of support, intra- and inter-organizational relationships, patient acceptance of video consultations

Procedia PDF Downloads 74
29970 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining

Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj

Abstract:

Small and medium enterprises (SMEs) play an important role in the economy of many countries. When the overall world economy is considered, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which is very laborious and time-consuming, or is provided by financial institutes and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights in the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies' websites. While WM methods have been frequently studied to anticipate growth of sales volume for e-commerce platforms, their application to the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential for revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities. In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. The data on SMEs provided by a large Swiss insurance company is used as ground truth data (i.e. growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest and Artificial Neural Network, are applied and compared, with the goal of optimizing the prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
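
A minimal sketch of the model comparison step described above: SVM, random forest and a small neural network compared by cross-validation on the same growth-labeled features. The feature matrix is a synthetic stand-in for the Web-mined and insurance-provided features.

```python
# Sketch: compare three classifiers on growth-labeled SME features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.standard_normal((400, 12))        # stand-in Web + financial features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # 1 = growth, 0 = no growth

for name, clf in [("SVM", SVC()),
                  ("Random Forest", RandomForestClassifier(random_state=0)),
                  ("ANN", MLPClassifier(max_iter=1000, random_state=0))]:
    acc = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```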

Keywords: data mining, SME growth, success factors, web mining

Procedia PDF Downloads 267
29969 The Impact of Two Factors on EFL Learners' Fluency

Authors: Alireza Behfar, Mohammad Mahdavi

Abstract:

Nowadays, in light of progress in science, technology and communications, mastering international languages is a clear necessity. In learning any language as a second language, making progress and achieving a desirable level in speaking is important for almost all learners. In this research, we examine how preparation can influence L2 learners' oral fluency with respect to individual differences in working memory capacity. The participants consisted of sixty-one advanced L2 learners, including MA students of TEFL at Isfahan University as well as instructors teaching English at the Sadr Institute in Isfahan. Data collection consisted of two phases, with a one-month interval between them: a working memory test (reading span test) and a picture description task. Speaking was elicited through a speech generation task in which the individuals were asked to discuss four topics arranged in two pairs; each pair included one simple and one complex topic, discussed with and without planning time, respectively. Each topic was accompanied by several relevant pictures. L2 fluency was assessed under each preparation condition, and the data were analyzed in terms of the number of syllables, the number of silent pauses, and the mean length of pauses produced per minute. The study offers implications for strategies to improve learners' fluency as well as their working memory.

Keywords: two factors, fluency, working memory capacity, preparation, L2 speech production, reading span test, picture description

Procedia PDF Downloads 230
29968 Development of Prediction Models of Day-Ahead Hourly Building Electricity Consumption and Peak Power Demand Using the Machine Learning Method

Authors: Dalin Si, Azizan Aziz, Bertrand Lasternas

Abstract:

To encourage building owners to purchase electricity on the wholesale market and reduce building peak demand, this study aims to develop models that predict day-ahead hourly electricity consumption and demand using an artificial neural network (ANN) and a support vector machine (SVM). All prediction models are built in Python with the scikit-learn and PyBrain tools. The input data for both consumption and demand prediction are the time stamp, outdoor dry bulb temperature, relative humidity, air handling unit (AHU) supply air temperature and solar radiation. Solar radiation, which is unavailable a day ahead, is predicted first, and this estimation is then used as an input to predict consumption and demand. Models to predict consumption and demand are trained with both SVM and ANN, and depend on cooling or heating season and on weekdays or weekends. The results show that the ANN is the better option for both consumption and demand prediction: it achieves a coefficient of variation of the root mean square error (CVRMSE) of 15.50% to 20.03% for consumption prediction and 22.89% to 32.42% for demand prediction. To conclude, the presented models have the potential to help building owners purchase electricity on the wholesale market, but they are not robust when used in demand response control.
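
A minimal sketch of the CVRMSE metric used above to score the hourly predictions: the root mean square error normalized by the mean of the measured values, expressed in percent. The sample values are illustrative.

```python
# Sketch: coefficient of variation of the root mean square error (CVRMSE).
import numpy as np

def cvrmse(measured, predicted):
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / measured.mean()

measured = [120.0, 135.0, 150.0, 160.0, 140.0]   # hourly kWh, illustrative
predicted = [110.0, 140.0, 145.0, 170.0, 138.0]
print(f"CVRMSE = {cvrmse(measured, predicted):.2f}%")
```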

Keywords: building energy prediction, data mining, demand response, electricity market

Procedia PDF Downloads 316
29967 Forecasting Regional Data Using Spatial VARs

Authors: Taisiia Gorshkova

Abstract:

Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects that are part of the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions (SpVARs), was developed. The main difference between standard and spatial vector autoregressions is that in the SpVAR the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the SpVAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and the regional CPI are used as endogenous variables, and the lags of GRP, CPI and the unemployment rate are used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation is used as a “naïve” model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment of time t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1 and the rest a value of 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time t-1. According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their own previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, the GRP lags in neighboring regions also become significant at the 5% significance level. RMSEs were calculated for both the spatial and the “naïve” VARs, and the minimum RMSE is obtained with the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
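
A minimal sketch of the second SpVAR specification described above: neighbors' lagged values under a row-standardized adjacency matrix enter a standard VAR as an exogenous spatial lag. Regions, adjacency and data are illustrative, not the Russian regional panel.

```python
# Sketch: VAR with a lagged spatial lag W @ y_{t-1} as exogenous regressors.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

W = np.array([[0, 1, 0],            # adjacency: 1 = common border, 0 = none
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)   # row-standardize the weight matrix

rng = np.random.default_rng(5)
T = 72
grp = rng.standard_normal((T, 3)).cumsum(axis=0)   # one column per region

y = pd.DataFrame(grp, columns=["r1", "r2", "r3"])
# Neighbors' values at t-1: (W y)_{t-1}, shifted forward by one period.
spatial_lag = pd.DataFrame(grp @ W.T, columns=["w_r1", "w_r2", "w_r3"]).shift(1)

res = VAR(y.iloc[1:], exog=spatial_lag.iloc[1:]).fit(1)
print(res.params.round(2))          # own lags plus spatial-lag coefficients
```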

Keywords: forecasting, regional data, spatial econometrics, vector autoregression

Procedia PDF Downloads 141
29966 Role of Machine Learning in Internet of Things Enabled Smart Cities

Authors: Amit Prakash Singh, Shyamli Singh, Chavi Srivastav

Abstract:

This paper presents the idea of the Internet of Things (IoT) for the infrastructure of smart cities. The Internet of Things has been visualized as a communication paradigm that incorporates a myriad of digital services. The various components of smart cities shall be implemented using microprocessors, microcontrollers, sensors and network communication protocols. IoT-enabled systems have been devised to support the smart city vision, the aim of which is to exploit currently available advanced communication technologies to support value-added services for the functioning of the city. Due to the volume, variety, and velocity of the data, analysis requires Big Data concepts. This paper presents the various techniques used to analyze big data using machine learning.

Keywords: IoT, smart city, embedded systems, sustainable environment

Procedia PDF Downloads 575
29965 Comparing SVM and Naïve Bayes Classifier for Automatic Microaneurysm Detections

Authors: A. Sopharak, B. Uyyanonvara, S. Barman

Abstract:

Diabetic retinopathy is characterized by the development of retinal microaneurysms. The damage can be prevented if the disease is treated in its early stages. In this paper, we compare Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers for automatic microaneurysm detection in images acquired through non-dilated pupils, with the Nearest Neighbor classifier used as a baseline for comparison. Detected microaneurysms are validated against expert ophthalmologists' hand-drawn ground truths, and the sensitivity, specificity, precision and accuracy of each method are compared.
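
A minimal sketch of the evaluation described above: SVM and Naïve Bayes predictions scored with sensitivity, specificity, precision and accuracy against ground truth. The candidate-lesion features here are synthetic placeholders for the image-derived features.

```python
# Sketch: compare SVM and Naive Bayes on the four reported metrics.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

rng = np.random.default_rng(6)
X = rng.standard_normal((600, 8))              # stand-in lesion features
y = (X[:, 0] - X[:, 2] > 0.3).astype(int)      # 1 = microaneurysm

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for name, clf in [("SVM", SVC()), ("Naive Bayes", GaussianNB())]:
    pred = clf.fit(Xtr, ytr).predict(Xte)
    tn, fp, fn, tp = confusion_matrix(yte, pred).ravel()
    print(f"{name}: sensitivity {tp/(tp+fn):.2f}, specificity {tn/(tn+fp):.2f},"
          f" precision {tp/(tp+fp):.2f}, accuracy {(tp+tn)/len(yte):.2f}")
```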

Keywords: diabetic retinopathy, microaneurysm, naive Bayes classifier, SVM classifier

Procedia PDF Downloads 328
29964 Emotional Labour and Employee Performance Appraisal: The Missing Link in Some Hotels in South East Nigeria

Authors: Polycarp Igbojekwe

Abstract:

The main objective of this study was to determine whether emotional labour has become a criterion in performance appraisal, job descriptions, selection, and training schemes in the hotel industry in Nigeria. Our main assumption was that the majority of hotel organizations have not built emotional labour into their human resources management schemes. Data were gathered through structured questionnaires designed in Likert format and through interviews; the focus group comprised managers of the selected hotels. The analyses revealed that the majority of the hotels have not built emotional labour into their human resources schemes, particularly the 1-, 2-, and 3-star hotels. It was observed that the service employees of 1-, 2-, and 3-star hotels have not been adequately trained to perform emotional labour, a critical factor in quality service delivery, and that managers of these hotels have not given serious thought to emotional labour as such a factor. The study revealed that the suitability of an individual's characteristics is not considered as a criterion for the selection and performance appraisal of service employees; the implication of this is that person-job fit is not seriously considered. It was observed that there has been a disconnect between the required emotional competency and its recognition, evaluation, and training. Based on the findings of this study, it is concluded that the selection, training, job description and performance appraisal instruments in use in hotels in Nigeria are inadequate. The human resource implications of the findings are presented. It is recommended that hotel organizations re-design and plan the emotional content and context of their human resources practices to reflect the emotional demands of front-line jobs in the hotel industry and the crucial role emotional labour plays during service encounters.

Keywords: emotional labour, employee selection, job description, performance appraisal, person-job-fit, employee compensation

Procedia PDF Downloads 191
29963 Beyond Sport: Understanding the Retirement Experiences and Support Needs of Retired Elite Athletes

Authors: Nadia Jurkovic, Sarven McLinton, Alyson Crozier, Amber Mosewich, Ed O'Connor

Abstract:

Retiring from elite sport can have detrimental effects on the mental health and life satisfaction of retired elite athletes. To aid in this transition, sporting organisations use retirement interventions. The aim of this study is to understand the experience of elite sport retirement from a holistic perspective, exploring the experiences of retiring athletes, retired athletes, and sport support staff. A secondary aim is to uncover any recommendations that retiring/retired athletes and sport support staff suggest for improving retirement programs or interventions. A total of N=15 participants took part in semi-structured interviews to explore their experiences with sport retirement. Retiring and retired elite athletes were asked how they felt during their transition into retirement, and sport support staff were asked about their experience of working with retiring athletes. Data collection and iterative qualitative analysis are still ongoing; however, it is anticipated that the final key themes to emerge will include isolation, identity loss, and lack of support, with varying sub-themes such as organisational support and family support. Relationships across and within themes will be explored within the study. The anticipated findings present retiring from elite sport as a challenging life and career transition in which current support and resources for elite athletes are not addressing the core difficulties experienced by retiring elite athletes. The findings of this study will inform the future development of new co-designed elite sport retirement interventions.

Keywords: elite athlete, retired elite athlete, retirement interventions, transition into retirement, interviews

Procedia PDF Downloads 13
29962 Forecasting Container Throughput: Using Aggregate or Terminal-Specific Data?

Authors: Gu Pang, Bartosz Gebka

Abstract:

We forecast the demand for total container throughput at Indonesia's largest seaport, Tanjung Priok Port, using four univariate forecasting models: SARIMA, the additive Seasonal Holt-Winters, the multiplicative Seasonal Holt-Winters, and the Vector Error Correction Model. Our aim is to provide insights into whether forecasting the total container throughput from the historical aggregated port throughput time series is superior to forecasts of the total throughput obtained by summing up the best individual terminal forecasts. We test the monthly port and individual terminal container throughput time series between 2003 and 2013. The performance of the forecasting models is evaluated based on the Mean Absolute Error and the Root Mean Squared Error. Our results show that the multiplicative Seasonal Holt-Winters model produces the most accurate forecasts of total container throughput, whereas SARIMA generates the worst in-sample model fit; the Vector Error Correction Model provides the best model fits and forecasts for individual terminals. We find that the total container throughput forecasts based on modelling the total throughput time series are consistently better than those obtained by combining the forecasts generated by terminal-specific models. The forecasts of total throughput until the end of 2018 provide an essential insight for strategic decision-making on the expansion of the port's capacity and the construction of new container terminals at Tanjung Priok Port.
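
A minimal sketch of the best-performing model above: a multiplicative Seasonal Holt-Winters fit on a monthly series, scored by MAE and RMSE on a holdout year. The synthetic series stands in for the Tanjung Priok throughput data.

```python
# Sketch: multiplicative Seasonal Holt-Winters with MAE/RMSE evaluation.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
t = np.arange(132)                                   # 11 years, monthly
series = (100 + 0.8 * t) * (1 + 0.15 * np.sin(2 * np.pi * t / 12)) \
         + rng.normal(0, 3, 132)                     # trend x seasonality

train, test = series[:120], series[120:]             # hold out the last year
fit = ExponentialSmoothing(train, trend="add",
                           seasonal="mul", seasonal_periods=12).fit()
forecast = fit.forecast(12)
mae = np.mean(np.abs(test - forecast))
rmse = np.sqrt(np.mean((test - forecast) ** 2))
print(f"MAE = {mae:.2f}, RMSE = {rmse:.2f}")
```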

Keywords: SARIMA, Seasonal Holt-Winters, Vector Error Correction Model, container throughput

Procedia PDF Downloads 504
29961 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments

Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic

Abstract:

Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, together with the atomic dictionary, represents a denoised, compressed version of the original signal, and the arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning, implemented by a sparse autoencoder, learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT data set, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and for each test sentence, and classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR); testing is performed at SNRs of 0 dB, 5 dB, 10 dB and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT data set. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN and remains at ~93% at 0 dB SNR.
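
A minimal sketch of the atomic decomposition step described above: matching pursuit greedily picks the dictionary atom most correlated with the residual and records its index and amplitude weight, yielding a sparse vector. The random unit-norm dictionary is a toy stand-in for the learned Gabor-atom dictionary.

```python
# Sketch: matching pursuit over a unit-norm dictionary.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=10):
    residual = signal.astype(float).copy()
    indices, weights = [], []
    for _ in range(n_atoms):
        corr = dictionary @ residual            # atoms are unit-norm rows
        k = int(np.argmax(np.abs(corr)))        # best-matching atom
        indices.append(k)
        weights.append(corr[k])
        residual -= corr[k] * dictionary[k]     # subtract its projection
    return indices, weights, residual

rng = np.random.default_rng(8)
atoms = rng.standard_normal((64, 256))
atoms /= np.linalg.norm(atoms, axis=1, keepdims=True)
x = 3.0 * atoms[7] - 2.0 * atoms[21]            # sparse ground truth
idx, w, r = matching_pursuit(x, atoms, n_atoms=4)
print(idx, np.round(w, 2), f"residual norm {np.linalg.norm(r):.3f}")
```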

Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder

Procedia PDF Downloads 289
29960 Ontology-Based Approach for Temporal Semantic Modeling of Social Networks

Authors: Souâad Boudebza, Omar Nouali, Faiçal Azouaou

Abstract:

Social networks have recently gained growing interest on the web. Traditional formalisms for representing social networks are static and suffer from a lack of semantics. In this paper, we show how semantic web technologies can be used to model social data. The SemTemp ontology aligns and extends existing ontologies such as FOAF, SIOC, SKOS and OWL-Time to provide a temporal and semantically rich description of social data. We also present a modeling scenario to illustrate how our ontology can be used to model social networks.
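
A minimal sketch of how a temporal, semantically rich social tie might be stated with existing vocabularies, in the spirit of the alignment above. The ex: terms and the reified "knows" interval are hypothetical illustrations, not the actual SemTemp ontology.

```python
# Sketch: a FOAF-based social tie with a temporal property, via rdflib.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import FOAF, XSD

EX = Namespace("http://example.org/semtemp#")   # hypothetical namespace
g = Graph()
alice, bob = EX.alice, EX.bob
g.add((alice, RDF.type, FOAF.Person))
g.add((bob, RDF.type, FOAF.Person))

tie = EX.tie1                      # reified relation so it can carry a time
g.add((tie, RDF.type, EX.KnowsRelation))
g.add((tie, EX.source, alice))
g.add((tie, EX.target, bob))
g.add((tie, EX.hasBeginning, Literal("2013-01-01", datatype=XSD.date)))
print(g.serialize(format="turtle"))
```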

Keywords: ontology, semantic web, social network, temporal modeling

Procedia PDF Downloads 386
29959 Development of a Decision-Making Method by Using Machine Learning Algorithms in the Early Stage of School Building Design

Authors: Rajaian Hoonejani Mohammad, Eshraghi Pegah, Zomorodian Zahra Sadat, Tahsildoost Mohammad

Abstract:

Over the past decade, energy consumption in educational buildings has steadily increased. The purpose of this research is to provide a method for quickly predicting the energy consumption of buildings at the early design stage, using separate evaluation of zones and decomposition of the building to eliminate the complexity of the geometry. To produce this framework, machine learning algorithms such as support vector regression (SVR) and artificial neural networks (ANN) are used to predict energy consumption and thermal comfort metrics in a school as a case study. The database consists of more than 55,000 samples across three climates of Iran. Cross-validation and unseen data were used for validation. For a specific label, cooling energy, the prediction accuracy is at least 84% and 89% for SVR and ANN, respectively. Overall, the results show that the SVR performed much better than the ANN.

Keywords: early stage of design, energy, thermal comfort, validation, machine learning

Procedia PDF Downloads 73
29958 Climatic and Environmental Variables Do Not Affect the Diversity of Possible Phytoplasma Vector Insects Associated with Quercus humboldtii Oak Trees in Bogota, Colombia

Authors: J. Lamilla-Monje, C. Solano-Puerto, L. Franco-Lara

Abstract:

Trees play an essential role in cities due to their ability to provide multiple ecosystem goods and services. Bogota's trees are threatened by factors such as pests, pathogens and contamination, among others. Among the pathogens, phytoplasmas are a potential risk for urban trees, generating symptoms that affect the ecosystem services these trees provide in Bogota; an example of this is the infection of Q. humboldtii by phytoplasmas. These bacteria are transmitted by insects of the order Hemiptera, which is why the objective of this work was to determine whether climatic variables (humidity, precipitation, and temperature) and environmental variables (PM10 and PM2.5) could be related to the distribution of the oak Quercus entomofauna, and specifically of the phytoplasma vector insects, in Bogota. For this study, the sampling points were distributed in areas of the city with contrasting variables in two types of locations: parks and streets. A total of 68 trees were sampled, and the associated insects were collected using two methodologies: sweep netting (jameo) and agitation traps. The results show that insects of the order Hemiptera were the most abundant, with a total of 1682 individuals represented by 29 morphotypes. Within this order, individuals from eight families were collected (Aphidae, Aradidae, Berytidae, Cicadellidae, Issidae, Membracidae, Miridae, and Psyllidae), with the families Cicadellidae, Membracidae, and Psyllidae identified as possible vectors, with 959, 8 and 14 individuals, respectively. Within the family Cicadellidae, 21 morphotypes were found, of which the following are reported as vectors in the literature: Amplicephalus, Exitianus atratus, Haldorus sp., Xestocephalus desertorum, Idiocerinae sp., and Scaphytopius sp. The family Membracidae was represented by two morphotypes and the Psyllidae by one. These results suggest that there is no correlation between the climatic and environmental variables and the diversity of insects associated with oak. Knowing the phytoplasma vector insects of oak trees will complete the description of the pathosystem and support effective vector control.

Keywords: vector insects, diversity, phytoplasmas, Cicadellidae

Procedia PDF Downloads 150
29957 Decision Support System for Optimal Placement of Wind Turbines in Electric Distribution Grid

Authors: Ahmed Ouammi

Abstract:

This paper presents an integrated decision framework to support decision makers in the selection and optimal allocation of wind power plants in the electric grid. The developed approach aims to maximize the benefit of the project investment over the planning period. The proposed decision model considers the main cost components, meteorological data, environmental impacts, operation and regulation constraints, and territorial information. The decision framework is expressed as a stochastic constrained optimization problem whose aim is to identify suitable locations and the related optimal wind turbine technology, subject to the operational constraints, while maximizing the benefit. The developed decision support system is applied to a case study to demonstrate and validate its performance.
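
A minimal sketch of the selection step as a small optimization problem: choose candidate sites to maximize benefit subject to a budget, here as an LP relaxation with scipy. All numbers are illustrative, and the paper's actual model is stochastic and far richer.

```python
# Sketch: site selection as a budget-constrained LP relaxation.
import numpy as np
from scipy.optimize import linprog

benefit = np.array([120.0, 95.0, 180.0, 60.0])   # expected benefit per site
cost = np.array([70.0, 50.0, 110.0, 30.0])       # installation cost per site
budget = 150.0

# linprog minimizes, so negate the benefit; x_i in [0, 1] selects site i.
res = linprog(c=-benefit, A_ub=cost[None, :], b_ub=[budget],
              bounds=[(0, 1)] * len(benefit))
print("selection (fractional):", np.round(res.x, 2))
print("total benefit:", round(-res.fun, 1))
```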

Keywords: decision support systems, electric power grid, optimization, wind energy

Procedia PDF Downloads 153
29956 Detection of Phoneme [S] Mispronunciation for Sigmatism Diagnosis in Adults

Authors: Michal Krecichwost, Zauzanna Miodonska, Pawel Badura

Abstract:

The diagnosis of sigmatism is mostly based on the observation of the articulatory organs. It is, however, not always possible to precisely observe the vocal apparatus, in particular in the oral cavity of the patient. Speech processing can make the therapy more objective and simplify the verification of its progress. In the described study, a methodology for the classification of the incorrectly pronounced phoneme [s] is proposed. The recordings come from adults and were registered with a speech recorder at a sampling rate of 44.1 kHz and a resolution of 16 bits. A database of pathological and normative speech was collected for the study, including reference assessments provided by speech therapy experts. Ten adult subjects were asked to simulate a certain type of sigmatism under the supervision of a speech therapy expert. In the recordings, the analyzed phone [s] was surrounded by vowels, viz. ASA, ESE, ISI, SPA, USU, YSY. Thirteen MFCCs (mel-frequency cepstral coefficients) and the RMS (root mean square) value are calculated within each frame belonging to the analyzed phoneme. Additionally, 3 fricative formants, along with the corresponding amplitudes, are determined for the entire segment. In order to aggregate the information within the segment, the average value of each MFCC coefficient is calculated, while all features of other types are aggregated by means of their 75th percentile. The proposed method of feature aggregation reduces the size of the feature vector used in the classification. A binary SVM (support vector machine) classifier is employed at the phoneme recognition stage; the first group consists of pathological phones and the other of normative ones. The proposed feature vector yields classification sensitivity and specificity above the 90% level for individual logo phones. The employment of fricative-formant-based information improves the sole-MFCC classification results by an average of 5 percentage points. The study shows that the employment of specific parameters for the selected phones improves the efficiency of pathology detection compared to the traditional methods of speech signal parameterization.
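
A minimal sketch of the feature aggregation described above: 13 MFCCs averaged over the frames of the segment, while another frame-wise feature is reduced to its 75th percentile. It uses librosa on a synthetic signal; the real study uses 44.1 kHz phoneme recordings and also adds fricative formants.

```python
# Sketch: per-segment aggregation of frame-wise MFCC and RMS features.
import librosa
import numpy as np

sr = 44100                                           # 44.1 kHz, as in the study
y = np.random.default_rng(9).standard_normal(sr // 2).astype(np.float32)

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, n_frames)
rms = librosa.feature.rms(y=y)[0]                    # per-frame RMS values

feature_vector = np.concatenate([
    mfcc.mean(axis=1),                 # average of each MFCC coefficient
    [np.percentile(rms, 75)],          # other feature types: 75th percentile
])
print(feature_vector.shape)            # (14,) -> input to the binary SVM
```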

Keywords: computer-aided pronunciation evaluation, sibilants, sigmatism diagnosis, speech processing

Procedia PDF Downloads 283
29955 Determining Current and Future Training Needs of Ontario Workers Supporting Persons with Developmental Disabilities

Authors: Erin C. Rodenburg, Jennifer McWhirter, Andrew Papadopoulos

Abstract:

Support workers for adults with developmental disabilities promote the care and wellbeing of a historically underserved population. Poor employment training and low work satisfaction among these disability support workers are linked to low productivity, poor quality of care, turnover, and intention to leave employment. Therefore, to improve the lives of those within disability support homes, both clients and caregivers, it is vital to determine where improvements can be made to the training and support of those providing direct care. The current study aims to explore disability support workers' perceptions of the training received in their employment at the residential homes, how it prepared them for their role, and where there is room for improvement, with the aim of developing recommendations for an improved training experience. Responses were collected from 85 disability support workers across 40 Ontario group homes. The findings suggest that most disability support workers within the 40 support homes feel adequately trained in the responsibilities of their employment. For those who did not feel adequately trained, the main issues expressed were a lack of standardization in training, a need for more continuous training, and a move away from trial and error in performing tasks to support clients with developmental disabilities.

Keywords: developmental disabilities, disability workers, support homes, training

Procedia PDF Downloads 188
29954 Terraria AI: YOLO Interface for Decision-Making Algorithms

Authors: Emmanuel Barrantes Chaves, Ernesto Rivera Alvarado

Abstract:

This paper presents a method for enabling agents for the Terraria game to evaluate algorithms commonly used in general video game artificial intelligence competitions. The 'You Only Look Once' (YOLO) model in the first layer of the process obtains information from the screen and translates it into the Video Game Description Language (VGDL); the agents take that as input to make decisions. On this basis, two state-of-the-art algorithms, Monte Carlo Tree Search (MCTS) and Rolling Horizon Evolutionary Algorithm (RHEA), were tested and compared; in this case, RHEA shows better performance. The main advantage of this approach is that a VGDL description is unnecessary beforehand: it is built on the fly, which opens the road to using more games as frameworks for AI.
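
A minimal sketch of the first layer described above: YOLOv5 detects on-screen entities, and the detections are mapped to rudimentary VGDL-style observations. Loading via torch.hub assumes the public ultralytics/yolov5 weights; the screenshot path, class names and the VGDL mapping are illustrative, not the paper's pipeline.

```python
# Sketch: YOLOv5 detections on a captured frame, emitted as observations.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
results = model("terraria_screenshot.png")     # path to a captured frame
detections = results.pandas().xyxy[0]          # one row per detected object

for _, d in detections.iterrows():
    cx = (d.xmin + d.xmax) / 2                 # object center on screen
    cy = (d.ymin + d.ymax) / 2
    # Emit a VGDL-like fact the planning agent (MCTS/RHEA) can consume.
    print(f"{d['name']} at ({cx:.0f}, {cy:.0f}) conf={d.confidence:.2f}")
```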

Keywords: AI, MCTS, RHEA, Terraria, VGDL, YOLOv5

Procedia PDF Downloads 96
29953 Experimental Implementation of Model Predictive Control for Permanent Magnet Synchronous Motor

Authors: Abdelsalam A. Ahmed

Abstract:

A fast speed response is a crucial performance requirement of Permanent Magnet Synchronous Motor (PMSM) drives for electric traction systems. In this paper, a PMSM is driven with a Model-based Predictive Control (MPC) technique. Fast speed tracking is achieved through optimization of the DC source utilization using MPC; the technique is based on predicting the optimum voltage vector applied to the drive. The control technique is investigated by comparison with cascaded PI control based on Space Vector Pulse Width Modulation (SVPWM). MPC and SVPWM-based FOC are implemented with the TMS320F2812 DSP and its power driver circuits, and the designed MPC for a PMSM drive is experimentally validated on a laboratory test bench. The performances are compared with those obtained by the conventional PI-based system in order to highlight the improvements, especially regarding the speed tracking response.
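
A minimal sketch of the prediction step in finite-control-set form: enumerate the inverter's switching states, predict the next dq currents with a one-step Euler model, and apply the voltage vector minimizing the tracking cost. The motor parameters are illustrative, and the rotor angle is assumed zero so that the αβ and dq frames coincide; the paper's experimental implementation on the TMS320F2812 is not reproduced here.

```python
# Sketch: one step of finite-control-set MPC for a PMSM current loop.
import numpy as np

R, Ld, Lq, psi_f = 0.5, 8e-3, 8e-3, 0.12     # stator R, inductances, PM flux
Ts, Vdc, omega = 1e-4, 300.0, 200.0          # step, DC link, electrical speed

def predict(i_dq, v_dq):
    """One Euler step of the PMSM dq current model."""
    did = (v_dq[0] - R * i_dq[0] + omega * Lq * i_dq[1]) / Ld
    diq = (v_dq[1] - R * i_dq[1] - omega * (Ld * i_dq[0] + psi_f)) / Lq
    return i_dq + Ts * np.array([did, diq])

# Eight switching states of a two-level inverter -> stationary-frame voltages.
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
def v_alpha_beta(s):
    a, b, c = s
    return (2 / 3) * Vdc * np.array([a - 0.5 * (b + c),
                                     (np.sqrt(3) / 2) * (b - c)])

i_dq, i_ref = np.array([0.0, 2.0]), np.array([0.0, 5.0])
costs = [np.sum((i_ref - predict(i_dq, v_alpha_beta(s))) ** 2) for s in states]
best = int(np.argmin(costs))                 # vector with the lowest cost
print("apply switching state", states[best], "cost", round(costs[best], 3))
```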

Keywords: permanent magnet synchronous motor, model-based predictive control, DC source utilization, cascaded PI control, space vector pulse width modulation, TMS320F2812 DSP

Procedia PDF Downloads 644
29952 Compliance of Systematic Reviews in Plastic Surgery with the PRISMA Statement: A Systematic Review

Authors: Seon-Young Lee, Harkiran Sagoo, Katherine Whitehurst, Georgina Wellstead, Alexander Fowler, Riaz Agha, Dennis Orgill

Abstract:

Introduction: Systematic reviews attempt to answer research questions by synthesising the data within primary papers. They are an increasingly important tool within evidence-based medicine, guiding clinical practice, future research and healthcare policy. We sought to determine the reporting quality of recent systematic reviews in plastic surgery. Methods: This systematic review was conducted in line with the Cochrane handbook, reported in line with the PRISMA statement and registered at the Research Registry (UIN: reviewregistry18). The MEDLINE and EMBASE databases were searched in 2013 and 2014 for systematic reviews published by five major plastic surgery journals. Screening, identification and data extraction were performed independently by two teams. Results: From an initial set of 163 articles, 79 met the inclusion criteria. The median PRISMA score was 16 out of 27 items (59.3%; range 6-26, 95% CI 14-17). Compliance varied widely between individual PRISMA items: it was poorest for items related to the use of a review protocol (item 5; 5%) and the presentation of data on the risk of bias of each study (item 19; 18%), while being highest for the description of the rationale (item 3; 99%), the sources of funding and other support (item 27; 95%), and a structured summary in the abstract (item 2; 95%). Conclusion: The reporting quality of systematic reviews in plastic surgery requires improvement. 'Hard-wiring' of compliance through journal submission systems, as well as improved education, awareness and a cohesive strategy among all stakeholders, is called for.

Keywords: PRISMA, reporting quality, plastic surgery, systematic review, meta-analysis

Procedia PDF Downloads 294