Search results for: automatic classification
2132 Analysis of Urban Rail Transit Station's Accessibility Reliability: A Case Study of Hangzhou Metro, China
Authors: Jin-Qu Chen, Jie Liu, Yong Yin, Zi-Qi Ju, Yu-Yao Wu
Abstract:
An increase in travel fare or a station failure can have a huge impact on passengers' travel. This paper analyzes the accessibility reliability of Urban Rail Transit (URT) stations under fare increases and station failures. Firstly, passengers' travel paths are reconstructed based on stochastic user equilibrium and Automatic Fare Collection (AFC) data. Secondly, station importance is calculated by combining the LeaderRank algorithm with the Ratio of Station Affected Passenger Volume (RSAPV), and station accessibility evaluation indicators are proposed based on an analysis of passengers' travel characteristics. Thirdly, station accessibility is measured under different scenarios, and the rate of accessibility change is proposed as the indicator of station accessibility reliability. Finally, the accessibility of Hangzhou metro stations is analyzed with the formulated models. The results show that Jinjiang station and Liangzhu station are the most important and the most convenient stations in the Hangzhou metro, respectively. Station failure, alone or combined with a fare increase, has a huge impact on station accessibility, whereas a fare increase alone does not. Stations on Hangzhou metro Line 1 have relatively poor accessibility reliability, and Fengqi Road station's accessibility reliability is the weakest. For the Hangzhou metro operational department, constructing new metro lines around Line 1 and giving priority to protecting Line 1's stations can effectively improve the accessibility reliability of the Hangzhou metro.
Keywords: automatic fare collection data, AFC, station's accessibility reliability, stochastic user equilibrium, urban rail transit, URT
Procedia PDF Downloads 135
2131 Multi-Sensor Target Tracking Using Ensemble Learning
Authors: Bhekisipho Twala, Mantepu Masetshaba, Ramapulana Nkoana
Abstract:
Multiple classifier systems combine several individual classifiers to deliver a final classification decision. An increasingly debated question, however, is whether such systems can outperform the single best classifier and, if so, what form of multiple classifier system yields the greatest benefit. Multi-target tracking and detection using multiple sensors is also an important research field in mobile techniques and military applications. In this paper, several multiple classifier systems are evaluated in terms of their ability to predict a system's failure or success on multi-sensor target tracking tasks, using the Bristol Eden project dataset. Experimental and simulation results show that the human activity identification system can fulfill the requirements of target tracking, with multiple classifier systems constructed using boosting achieving the highest accuracy rates.
Keywords: single classifier, ensemble learning, multi-target tracking, multiple classifiers
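The abstract does not name its base learners or boosting variant; the sketch below is only a generic illustration of comparing a single classifier against a boosted multiple-classifier system, with synthetic data standing in for the Bristol Eden features (all parameters are assumptions).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Illustrative stand-in for multi-sensor tracking features (not the Bristol Eden data).
X, y = make_classification(n_samples=1000, n_features=20, n_informative=10, random_state=0)

single = DecisionTreeClassifier(max_depth=3, random_state=0)
# `estimator` is `base_estimator` in older scikit-learn releases.
ensemble = AdaBoostClassifier(estimator=single, n_estimators=100, random_state=0)

for name, clf in [("single tree", single), ("boosted ensemble", ensemble)]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```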
Procedia PDF Downloads 268
2130 Automatic and High Precise Modeling for System Optimization
Authors: Stephanie Chen, Mitja Echim, Christof Büskens
Abstract:
To describe and propagate the behavior of a system, mathematical models are formulated, and parameter identification is used to adapt the coefficients of the underlying laws of science. For complex systems this approach can be incomplete, and hence imprecise, and moreover too slow to be computed efficiently. Therefore, these models might not be applicable for the numerical optimization of real systems, since such techniques require numerous evaluations of the models. Moreover, not all quantities necessary for the identification might be available, so the system must be adapted manually. This paper therefore describes an approach that generates models which overcome the aforementioned limitations by focusing not on physical laws but on measured (sensor) data of real systems. The approach is more general since it generates models for any system, detached from the scientific background. Additionally, this approach can be used in a more general sense, since it is able to automatically identify correlations in the data. The method can be classified as a multivariate data regression analysis. In contrast to many other data regression methods, this variant is also able to identify correlations of products of variables, not only of single variables. This enables a far more precise and better representation of causal correlations. The basis and explanation of this method come from an analytical background: the series expansion. Another advantage of this technique is the possibility of real-time adaptation of the generated models during operation. Herewith, system changes due to aging, wear, or perturbations from the environment can be taken into account, which is indispensable for realistic scenarios. Since these data-driven models can be evaluated very efficiently and with high precision, they can be used in mathematical optimization algorithms that minimize a cost function, e.g. time, energy consumption, operational costs, or a mixture of them, subject to additional constraints. The proposed method has been successfully tested in several complex applications with strong industrial requirements. The generated models were able to simulate the given systems with a precision error of less than one percent. Moreover, the automatic identification of correlations was able to discover previously unknown relationships. To summarize, the above-mentioned approach is able to efficiently compute highly precise and real-time-adaptive data-based models in different fields of industry. Combined with an effective mathematical optimization algorithm like WORHP (We Optimize Really Huge Problems), several complex systems can now be represented by a high-precision model to be optimized according to the user's wishes. The proposed methods are illustrated with different examples.
Keywords: adaptive modeling, automatic identification of correlations, data based modeling, optimization
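The paper does not publish its model-generation algorithm; as a minimal sketch of the central idea (a regression that also captures correlations of products of variables, like a truncated series expansion), polynomial feature expansion can be combined with linear regression. The data and expansion degree below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 3))             # measured sensor inputs
y = 2.0 * X[:, 0] * X[:, 1] - 0.5 * X[:, 2] ** 2  # hidden law with product terms

# A degree-2 expansion adds cross terms such as x0*x1, mimicking a truncated
# series expansion; the regression then identifies the relevant correlations.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
model.fit(X, y)
print("R^2 on training data:", model.score(X, y))
```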
Procedia PDF Downloads 409
2129 Identification and Classification of Fiber-Fortified Semolina by Near-Infrared Spectroscopy (NIR)
Authors: Amanda T. Badaró, Douglas F. Barbin, Sofia T. Garcia, Maria Teresa P. S. Clerici, Amanda R. Ferreira
Abstract:
Food fortification is the intentional addition of a nutrient to a food matrix and has been widely used to overcome the lack of nutrients in the diet or to increase the nutritional value of food. Fortified food must meet the demands of the population, taking into account their habits and the risks these foods may pose. Wheat and its by-products, such as semolina, have been strongly indicated for use as a food vehicle, since they are widely consumed and used in the production of other foods. These products have been strategically used to add nutrients such as fibers. Conventional methods for analyzing and quantifying these components are destructive and require lengthy sample preparation and analysis. Therefore, the industry has searched for faster and less invasive methods, such as Near-Infrared Spectroscopy (NIR). NIR is a rapid and cost-effective method; however, it is based on indirect measurements and yields a high amount of data. NIR spectroscopy therefore requires calibration with mathematical and statistical tools (chemometrics) to extract analytical information from the corresponding spectra, such as Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA). PCA is well suited for NIR, since it can handle many spectra at a time and can be used for unsupervised classification. An advantage of PCA, which is also a data reduction technique, is that it reduces the spectra to a smaller number of latent variables for further interpretation. LDA, on the other hand, is a supervised method that searches for the Canonical Variables (CV) with the maximum separation among different categories; in LDA, the first CV is the direction of the maximum ratio between inter- and intra-class variances. The present work used a portable infrared (NIR) spectrometer for the identification and classification of pure and fiber-fortified semolina samples. The fiber was added to semolina in two different concentrations, and after spectra acquisition, the data were used in PCA and LDA to identify and discriminate the samples. The results showed that NIR spectroscopy associated with PCA was very effective in identifying pure and fiber-fortified semolina. Additionally, the classification rate of the samples using LDA was between 78.3% and 95% for calibration and between 75% and 95% for cross-validation. Thus, after multivariate analyses such as PCA and LDA, it was possible to verify that NIR associated with chemometric methods is able to identify and classify the different samples in a fast and non-destructive way.
Keywords: chemometrics, fiber, linear discriminant analysis, near-infrared spectroscopy, principal component analysis, semolina
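A minimal sketch of the PCA-plus-LDA pipeline described above, assuming synthetic spectra in place of the real NIR measurements (sample counts, wavelength grid, and component numbers are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Illustrative stand-in for NIR spectra: 60 samples x 256 wavelengths, 3 classes
# (pure semolina plus two fiber levels); real data would come from the spectrometer.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1, 2], 20)
spectra = rng.normal(size=(60, 256)) + labels[:, None] * 0.3

# PCA compresses the spectra to a few latent variables; LDA then finds canonical
# variables maximizing between-class over within-class variance.
model = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(model, spectra, labels, cv=5).mean())
```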
Procedia PDF Downloads 212
2128 Classification of IoT Traffic Security Attacks Using Deep Learning
Authors: Anum Ali, Kashaf ad Dooja, Asif Saleem
Abstract:
The trend in future smart cities is towards the Internet of Things (IoT); IoT creates dynamic connections in a ubiquitous manner. Smart cities offer ease and flexibility for daily-life matters. With small devices connected to cloud servers based on IoT, network traffic between these devices is growing exponentially, and its security is a concern, since the rate of cyber attacks may make the network traffic vulnerable. This paper discusses the latest machine learning approaches in related work and, to tackle the increasing rate of cyber attacks, applies a machine learning algorithm to IoT-based network traffic data. The proposed algorithm trains itself on the data and identifies different sections of device interaction using supervised learning, acting as a classifier for a specific IoT device class. The simulation results clearly identify the attacks and produce fewer false detections.
Keywords: IoT, traffic security, deep learning, classification
Procedia PDF Downloads 153
2127 A Hybrid System for Boreholes Soil Sample
Authors: Ali Ulvi Uzer
Abstract:
Data reduction is an important topic in the field of pattern recognition applications. The basic concept is the reduction of multitudinous amounts of data down to their meaningful parts. The Principal Component Analysis (PCA) method is frequently used for data reduction. The Support Vector Machine (SVM) is a discriminative classifier formally defined by a separating hyperplane; in other words, given labeled training data, the algorithm outputs an optimal hyperplane that categorizes new examples. This study offers a hybrid approach that uses PCA for data reduction and an SVM for classification. In order to assess the accuracy of the suggested system, soil samples taken from two boreholes were used. The classification accuracies for this dataset were obtained using the ten-fold cross-validation method. As the results suggest, this system, which operates on the size-reduced data, enables faster recognition of the dataset, so our results appear very promising.
Keywords: feature selection, sequential forward selection, support vector machines, soil sample
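A minimal sketch of the hybrid PCA-then-SVM pipeline with ten-fold cross-validation, using random features in place of the borehole measurements (dimensions and kernel choice are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative borehole feature matrix (rows = soil samples); the real study
# used measurements from two boreholes.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 40))
y = rng.integers(0, 2, size=120)

model = make_pipeline(StandardScaler(), PCA(n_components=5), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, cv=10)  # ten-fold cross-validation
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```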
Procedia PDF Downloads 455
2126 Greyscale: A Tree-Based Taxonomy for Grey Literature Published by Fisheries Agencies
Authors: Tatiana Tunon, Gottfried Pestal
Abstract:
Government agencies responsible for the management of fisheries resources publish many types of grey literature, and these materials are increasingly accessible to the public on agency websites. However, scope and quality vary considerably, and end-users need metadata about a report series when deciding whether to use the information (e.g., apply the methods, include the results in a systematic review) or when prioritizing materials for archiving (e.g., library holdings, reference databases). A proposed taxonomy for these report series was developed based on a review of 41 report series from 6 government agencies in 4 countries (Canada, New Zealand, Scotland, and the United States). Each report series was categorized according to multiple criteria describing its peer-review process, content, and purpose. A robust classification tree was then fitted to these descriptions, and the resulting taxonomic groups were used to compare agency output from the 4 countries using reports available in their online repositories.
Keywords: classification tree, fisheries, government, grey literature
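The published criteria and codings are not given in the abstract; a minimal sketch of fitting a classification tree to categorical report-series descriptors might look as follows (the descriptors, values, and group labels are hypothetical):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Hypothetical descriptors for a few report series; the real study coded 41
# series on criteria covering peer review, content, and purpose.
series = [
    {"peer_review": "internal", "content": "stock assessment", "purpose": "advice"},
    {"peer_review": "external", "content": "survey data", "purpose": "archive"},
    {"peer_review": "none", "content": "meeting minutes", "purpose": "record"},
    {"peer_review": "external", "content": "stock assessment", "purpose": "advice"},
]
groups = ["A", "B", "C", "A"]  # taxonomic group labels

model = make_pipeline(DictVectorizer(), DecisionTreeClassifier(max_depth=3))
model.fit(series, groups)
print(model.predict([{"peer_review": "internal", "content": "survey data", "purpose": "advice"}]))
```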
Procedia PDF Downloads 282
2125 Non-Uniform Filter Banks-Based Minimum Distance to Riemannian Mean Classification in Motor Imagery Brain-Computer Interface
Authors: Ping Tan, Xiaomeng Su, Yi Shen
Abstract:
The motion intention in the motor imagery brain-computer interface is identified by classifying the event-related desynchronization (ERD) and event-related synchronization (ERS) characteristics of the sensorimotor rhythm (SMR) in EEG signals. When the subject imagines different limbs or different body parts moving, the rhythm components and bandwidth change, and this varies from person to person. Finding the effective sensorimotor frequency band of each subject is directly related to the classification accuracy of the brain-computer interface. To solve this problem, this paper proposes a Minimum Distance to Riemannian Mean classification method based on non-uniform filter banks. During the training phase, the EEG signals are first decomposed into multiple signals of different bandwidths using multiple band-pass filters; then the spatial covariance characteristics of each frequency-band signal are computed as feature vectors. These feature vectors are classified by the MDRM (Minimum Distance to Riemannian Mean) method, and cross-validation is employed to obtain the effective sensorimotor frequency bands. During the test phase, the test signals are filtered by the band-pass filters of the effective sensorimotor frequency bands, and the extracted spatial covariance feature vectors are classified using the MDRM. Experiments on the BCI Competition IV 2a dataset show that the proposed method is superior to other classification methods.
Keywords: non-uniform filter banks, motor imagery, brain-computer interface, minimum distance to Riemannian mean
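A minimal sketch of one band of this pipeline (band-pass filter, spatial covariance, minimum distance to Riemannian mean) using pyRiemann's MDM classifier; the random trials, band edges, and covariance estimator below are assumptions, not the paper's configuration:

```python
import numpy as np
from pyriemann.classification import MDM
from pyriemann.estimation import Covariances
from scipy.signal import butter, sosfiltfilt

# Illustrative EEG trials: 40 trials x 8 channels x 500 samples (not BCI IV 2a).
rng = np.random.default_rng(3)
trials = rng.normal(size=(40, 8, 500))
labels = rng.integers(0, 2, size=40)
fs = 250.0

# One band of a (possibly non-uniform) filter bank, e.g. 8-12 Hz.
sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, trials, axis=-1)

# Spatial covariance matrices as features, classified by minimum distance
# to the Riemannian mean of each class.
covs = Covariances(estimator="lwf").transform(filtered)
clf = MDM(metric="riemann")
clf.fit(covs, labels)
print(clf.predict(covs[:5]))
```

Repeating this per band and keeping the bands with the best cross-validated accuracy would correspond to the effective-band selection step described above.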
Procedia PDF Downloads 125
2124 Turkish Validation of the Nursing Outcomes for Urinary Incontinence and Their Sensitivities on Nursing Interventions
Authors: Dercan Gencbas, Hatice Bebis, Sue Moorhead
Abstract:
In the nursing process, many nursing classification systems were created for international use, among them NANDA-I, the Nursing Outcomes Classification (NOC), and the Nursing Interventions Classification (NIC). In this direction, the main objective of this study is to establish a model for caregivers in hospitals and communities in Turkey and to ensure that nursing outputs are assessed by NOC-based measures. There are many scales to measure Urinary Incontinence (UI), which is very common in children, in old age, and after vaginal birth; NOC scales, with their available indicators, are ideal for comprehensive and holistic assessment in the nursing process. For this reason, the purpose of this study is to evaluate the validity of the NOC outcomes and indicators used for the UI NANDA-I diagnoses. This research is a methodological study. In addition to the validity of the scale indicators, experts assessed how much the indicators would contribute to recovery after nursing interventions. Content validity was applied and calculated according to Fehring's (1987) work model, under which expert inclusion criteria and scores were determined. For example, experts with at least four years of clinical experience scored 4 points, those with at least one year of experience with a nursing classification system scored 1 point, those with a publication on nursing classification scored 1 point, those with a doctoral degree in nursing scored 2 points, and those with a master's degree scored 1 point. According to this expert scoring, a total of 55 experts qualified at the 'senior degree' level with a score of 90. The experts were asked to what extent the indicators would contribute to recovery after the applied nursing interventions. For the content validity tailored to Fehring's model, the experts scored each NOC outcome and indicator between 1 and 5, from 1 (not important) to 5 (very important). After the expert opinions, the weighted scores obtained for each NOC outcome and indicator were classified as critical (0.8 and above), supplemental (between 0.5 and 0.8), or excluded (below 0.5). In the NANDA-I/NOC/NIC system (guideline), 5 NOC outcomes were proposed for the nursing diagnoses for UI: Urinary Continence, Urinary Elimination, Tissue Integrity, Self-Care: Toileting, and Medication Response. After the scales were translated into Turkish, the weighted averages of the expert scores for the content validity of all 5 NOC outcomes and for the contribution of nursing interventions exceeded 0.8. After the expert opinions, 79 of the 82 indicators were classified as critical and 3 as supplemental; since no score below 0.5 was obtained, no indicator was removed. All NOC outcomes were identified as valid and usable scales in Turkey. In this study, five NOC outcomes were verified for evaluating the outputs of individuals who received nursing care for UI and its variant types. Nurses in Turkey can benefit from the NOC outcomes to deliver care for elderly incontinence.
Keywords: nursing outcomes, content validity, nursing diagnosis, urinary incontinence
Procedia PDF Downloads 125
2123 Prediction of Remaining Life of Industrial Cutting Tools with Deep Learning-Assisted Image Processing Techniques
Authors: Gizem Eser Erdek
Abstract:
This study is research on predicting the remaining life of industrial cutting tools used in the production process with deep learning methods. As the life of cutting tools decreases, they damage the raw material they are processing. This study aims to predict the remaining life of a cutting tool based on the damage it causes to the raw material. For this, hole photos were collected from a hole-drilling machine for 8 months. The photos were labeled in 5 classes according to hole quality, transforming the problem into a classification problem. Using the prepared dataset, a model was created with convolutional neural networks, a deep learning method. In addition, the VGGNet and ResNet architectures, which have been successful in the literature, were tested on the dataset. A hybrid model using convolutional neural networks and support vector machines was also used for comparison. When all models were compared, the model using convolutional neural networks gave the most successful results, with a 74% accuracy rate. In preliminary studies, the dataset was arranged to include only the best and worst classes, and the binary classification model reached ~93% accuracy. The results of this study showed that the remaining life of cutting tools can be predicted by deep learning methods based on the damage to the raw material. The experiments proved that deep learning methods can be used as an alternative for cutting-tool life estimation.
Keywords: classification, convolutional neural network, deep learning, remaining life of industrial cutting tools, ResNet, support vector machine, VGGNet
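The paper's exact architecture is not given in the abstract; a minimal 5-class CNN of the kind described might be sketched as follows (input size, depth, and training settings are illustrative assumptions):

```python
import tensorflow as tf

# Minimal 5-class CNN for hole-quality photos; sizes are assumptions, not the
# paper's architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),  # one unit per hole-quality class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Training would use labeled hole photos, e.g.:
# model.fit(train_images, train_labels, epochs=20, validation_split=0.2)
model.summary()
```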
Procedia PDF Downloads 77
2122 Profit and Nonprofit Sports Clubs, Financial and Organizational Comparison in Poland
Authors: Igor Perechuda, Wojciech Cieśliński
Abstract:
The paper identifies the features of Polish sports clubs in particular organizational forms: profit and nonprofit. The identification and description of these features are carried out in terms of the financial efficiency of the given organizational form. In terms of efficiency, the research makes it possible to specify the advantages and limitations of each organizational form of sports club. The paper considers the features of sports clubs under Polish conditions, such as legal regulations. The sources of the functioning efficiency of sports clubs may lie in the organizational forms in which they operate. Each of the available forms can be considered either a for-profit or a nonprofit enterprise. Depending on this classification, there are different possibilities for increasing the organizational and financial efficiency of a given sports club. The authors start with the general classification and differences between for-profit and nonprofit sports clubs, then identify the specific financial and organizational conditions of both organizational forms, and finally show examples of mixed activity forms and their efficiency effects.
Keywords: financial efficiency, for-profit, non-profit, sports club
Procedia PDF Downloads 547
2121 Corporate Culture and Subcultures: Corporate Culture Analysis in a Company without a Public Relations Department
Authors: Sibel Kurt
Abstract:
In this study, using Goffee and Jones's corporate culture classification and its accompanying scale, the corporate culture of a company without a public relations or communication department was analyzed. First, the type of corporate culture in the company was determined. It was then examined whether subcultures had formed according to demographics or department of work. The survey questionnaire contains 53 questions in total: 6 about demographics and 47 about corporate culture. 152 employees of the company answered the survey, and the data were evaluated using frequency, descriptive, and compare-means tests. The corporate culture of the company was determined to be 'communal' in its positive form, according to the Goffee and Jones typology. No subcultures based on demographics were found in the company; only one subculture was identified, according to department of work. As a result, the absence of a public relations department, the personnel's low level of awareness of corporate culture, and the lack of information flow between management and employees were revealed.
Keywords: corporate culture, subculture, public relations, organizational communication
Procedia PDF Downloads 168
2120 A World Map of Seabed Sediment Based on 50 Years of Knowledge
Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès
Abstract:
Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This approach had already been initiated a century ago, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments presented here was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unique sediment classification to present all types of sediments: from beaches to the deep seabed, and from glacial deposits to tropical sediments. In order to allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are studied and, if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large hydrographic surveys of the deep ocean, which allow very high-quality mapping of areas that until then had been represented as homogeneous. The third and principal source of data is the integration of regional maps produced specifically for this project. These regional maps are compiled using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied in the case of over-precise data. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing and yields a new digital version every two years, with the integration of new maps. This article describes the choices made in terms of sediment classification, the scale of source data, and the zonation of quality variability. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in seabed characterization made during the last decades. Thus, the arrival of new seafloor classification systems has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, much work remains to enhance some regions, which are still based on data acquired more than half a century ago.
Keywords: marine sedimentology, seabed map, sediment classification, world ocean
Procedia PDF Downloads 232
2119 Establishment of Air Quality Zones in Italy
Authors: M. G. Dirodi, G. Gugliotta, C. Leonardi
Abstract:
Member states shall establish zones and agglomerations throughout their territory to assess and manage air quality in order to comply with European directives. In Italy, Decree 155/2010, transposing Directive 2008/50/EC on ambient air quality and cleaner air for Europe, merged into a single act the previous provisions on ambient air quality assessment and management, including those resulting from the implementation of Directive 2004/107/EC relating to arsenic, cadmium, nickel, mercury, and polycyclic aromatic hydrocarbons in ambient air. Decree 155/2010 introduced stricter rules for identifying zones on the basis of the characteristics of the territory, rather than on pollution levels as in the past. The implementation of these new criteria has reduced the great variability of the previous zoning, leading to a significant reduction in the total number of zones and to a complete and uniform ambient air quality assessment and management throughout the country. The present document describes the definition of the new zones in Italy according to Decree 155/2010. In particular, the paper contains the description and analysis of the outcome of the zoning and classification.
Keywords: zones, agglomerations, air quality assessment, classification
Procedia PDF Downloads 330
2118 Optimizing Load Shedding Schedule Problem Based on Harmony Search
Authors: Almahd Alshereef, Ahmed Alkilany, Hammad Said, Azuraliza Abu Bakar
Abstract:
From time to time, the electrical power grid is directed by the National Electricity Operator to conduct load shedding, which involves hours-long power outages in the area of this study, the Southern Electrical Grid of Libya (SEGL). Load shedding is conducted in order to alleviate pressure on the national electricity grid at times of peak demand. This approach studies the load-shedding problem over a chosen set of categories, considering the effect of demand priorities on the operation of the power system during emergencies. The classification of category regions for the load-shedding problem is solved by a new algorithm (the harmony algorithm) based on a randomly generated list of category regions, which represents a candidate solution with some degree of proximity to the optimum. The obtained results show additional enhancements compared to other heuristic approaches. The case studies are carried out on the SEGL.
Keywords: optimization, harmony algorithm, load shedding, classification
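The paper's encoding and objective are not published in the abstract; a minimal sketch of the underlying harmony search metaheuristic (harmony memory, memory-considering rate, pitch adjustment) applied to a toy region-to-category assignment is given below, with all parameters and the objective being assumptions:

```python
import random

def harmony_search(cost, n_vars, bounds, hms=20, hmcr=0.9, par=0.3, iters=2000):
    """Generic harmony search: minimize `cost` over integer category choices."""
    lo, hi = bounds
    memory = [[random.randint(lo, hi) for _ in range(n_vars)] for _ in range(hms)]
    memory.sort(key=cost)
    for _ in range(iters):
        new = []
        for i in range(n_vars):
            if random.random() < hmcr:                    # draw from harmony memory
                value = random.choice(memory)[i]
                if random.random() < par:                 # pitch adjustment
                    value = min(hi, max(lo, value + random.choice((-1, 1))))
            else:                                         # random improvisation
                value = random.randint(lo, hi)
            new.append(value)
        if cost(new) < cost(memory[-1]):                  # replace the worst harmony
            memory[-1] = new
            memory.sort(key=cost)
    return memory[0]

# Toy objective: assign 10 regions to shedding categories 0-3, approaching a target profile.
target = [3, 0, 1, 2, 2, 0, 3, 1, 0, 2]
best = harmony_search(lambda x: sum(abs(a - b) for a, b in zip(x, target)), 10, (0, 3))
print(best)
```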
Procedia PDF Downloads 396
2117 Speaker Identification by Atomic Decomposition of Learned Features Using Computational Auditory Scene Analysis Principles in Noisy Environments
Authors: Thomas Bryan, Veton Kepuska, Ivica Kostanic
Abstract:
Speaker recognition is performed in high Additive White Gaussian Noise (AWGN) environments using principles of Computational Auditory Scene Analysis (CASA). CASA methods often classify sounds from images in the time-frequency (T-F) plane, using spectrograms or cochleagrams as the image. In this paper, atomic decomposition implemented by matching pursuit performs a transform from time-series speech signals to the T-F plane. The atomic decomposition creates a sparsely populated T-F vector in "weight space", where each populated T-F position contains an amplitude weight. The weight-space vector, along with the atomic dictionary, represents a denoised, compressed version of the original signal. The arrangement of the atomic indices in the T-F vector is used for classification. Unsupervised feature learning implemented by a sparse autoencoder learns a single dictionary of basis features from a collection of envelope samples from all speakers. The approach is demonstrated using pairs of speakers from the TIMIT dataset, selected randomly from a single district. Each speaker has 10 sentences: two are used for training and 8 for testing. Atomic index probabilities are created for each training sentence and for each test sentence. Classification is performed by finding the lowest Euclidean distance between the probabilities from the training sentences and those from the test sentences. Training is done at a 30 dB Signal-to-Noise Ratio (SNR), and testing is performed at SNRs of 0 dB, 5 dB, 10 dB, and 30 dB. The algorithm has a baseline classification accuracy of ~93%, averaged over 10 pairs of speakers from the TIMIT dataset. The baseline accuracy is attributable to the short sequences of training and test data as well as the overall simplicity of the classification algorithm. The accuracy is not affected by AWGN, remaining at ~93% at 0 dB SNR.
Keywords: time-frequency plane, atomic decomposition, envelope sampling, Gabor atoms, matching pursuit, sparse dictionary learning, sparse autoencoder
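A minimal sketch of the sparse decomposition step, using scikit-learn's orthogonal matching pursuit (a close relative of the matching pursuit named in the paper) and a random dictionary in place of the learned Gabor-like atoms (all sizes are assumptions):

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)

# Illustrative overcomplete dictionary of 256 time-frequency atoms for a
# 128-sample frame (learned Gabor-like atoms in the paper; random here).
dictionary = rng.normal(size=(128, 256))
dictionary /= np.linalg.norm(dictionary, axis=0)

frame = rng.normal(size=128)  # stand-in for a speech envelope frame

# Matching-pursuit-style sparse decomposition: keep 10 atoms.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10)
omp.fit(dictionary, frame)
weights = omp.coef_                     # sparsely populated weight-space vector
active_atoms = np.flatnonzero(weights)  # atomic indices used for classification
print(active_atoms, weights[active_atoms])
```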
Procedia PDF Downloads 289
2116 The International Classification of Functioning, Disability and Health (ICF) as a Problem-Solving Tool in Disability Rehabilitation and Education Alliance in Metabolic Disorders (DREAM) at Sultan Bin Abdul Aziz Humanitarian City:A Prototype for Reh
Authors: Hamzeh Awad
Abstract:
Disability is considered to be a complex worldwide phenomenon that is rising at a phenomenal rate and is caused by many different factors. Chronic diseases such as cardiovascular disease and diabetes can lead to mobility disability in particular and disability in general. The ICF is an integrative bio-psycho-social model of functioning and disability and is considered by the World Health Organization (WHO) to be a reference for disability classification, using its categories and core sets to classify a disorder's functional limitations. Specialist programs at Sultan Bin Abdul Aziz Humanitarian City (SBAHC), which provide both inpatient and outpatient services, have started to implement the ICF and use it as a problem-solving tool in rehabilitation. Diabetes is a leading contributing factor to disability and is considered epidemic in several Gulf countries, including the Kingdom of Saudi Arabia (KSA), where its prevalence continues to increase dramatically. Metabolic disorders, mainly diabetes, are not well covered in the rehabilitation field. The purpose of this study is to present DREAM and the ICF as a framework for clinical and research settings in rehabilitation, and to shed light on using the ICF as a problem-solving tool at SBAHC. There are synergies between the causes of disability and wider public health priorities in relation to both chronic disease and disability prevention. Therefore, there is a need for strong advocacy and understanding of the role of the ICF as a reference in rehabilitation settings in the Middle East if we wish to seize the opportunity to reverse current trends of acquired disability in the region.
Keywords: international classification of functioning, disability and health (ICF), prototype, rehabilitation and diabetes
Procedia PDF Downloads 351
2115 Predictive Spectral Lithological Mapping, Geomorphology and Geospatial Correlation of Structural Lineaments in Bornu Basin, Northeast Nigeria
Authors: Aminu Abdullahi Isyaku
Abstract:
The semi-arid Bornu basin in northeast Nigeria is characterised by flat topography, thick cover sediments, and a lack of continuous bedrock outcrops discernible for field geology. This paper presents a methodology for the characterisation of neotectonic surface structures and surface lithology in the north-eastern Bornu basin as an alternative to field geological mapping, using free multispectral Landsat 7 ETM+, SRTM DEM, and ASAR Earth Observation datasets. The spectral lithological mapping developed herein utilises spectral discrimination of the surface features identified on Landsat 7 ETM+ images to infer the lithology, in four steps: computation of band-combination images; computation of band-ratio images; supervised image classification; and inference of the lithological compositions. Two complementary approaches to lineament mapping are carried out in this study, manual digitization and automatic lineament extraction, to validate the structural lineaments extracted from the Landsat 7 ETM+ image mosaic covering the study area. A comparison between the mapped surface lineaments and lineament zones shows good geospatial correlation and identifies the predominant NE-SW and NW-SE structural trends in the basin. Topographic profiles across different parts of the Bama Beach Ridge palaeoshorelines in the basin appear to show different elevations across the feature. It is determined that most of the drainage systems in the northeastern Bornu basin are structurally controlled, with drainage lines terminating against the paleo-lake border and emptying into Lake Chad, mainly arising from the extensive topographic high-stand Bama Beach Ridge palaeoshoreline.
Keywords: Bornu Basin, lineaments, spectral lithology, tectonics
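The band-ratio step can be sketched with plain array arithmetic; the particular ratios below are common lithological examples, not necessarily the paper's exact choices, and the random arrays stand in for bands read from the actual scene (e.g., with rasterio):

```python
import numpy as np

# Illustrative Landsat 7 ETM+ bands as 2-D reflectance arrays; random values
# stand in for data read from the scene.
rng = np.random.default_rng(5)
band1, band3, band4, band5, band7 = rng.uniform(0.01, 0.5, size=(5, 200, 200))

eps = 1e-6  # guard against division by zero

# Band-ratio images of the kind used for spectral lithological discrimination:
iron_oxides = band3 / (band1 + eps)             # ferric-iron response
clays = band5 / (band7 + eps)                   # hydroxyl-bearing minerals
ndvi = (band4 - band3) / (band4 + band3 + eps)  # vegetation mask

stack = np.dstack([iron_oxides, clays, ndvi])   # input to supervised classification
print(stack.shape)
```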
Procedia PDF Downloads 139
2114 Information and Communication Technology (ICT) Education Improvement for Enhancing Learning Performance and Social Equality
Authors: Heichia Wang, Yalan Chao
Abstract:
Social inequality is a persistent problem, and education is one way to address it. At present, vulnerable groups often have poorer geographic access to educational resources; communication equipment, however, is easier for them to obtain. Now that information and communication technology (ICT) has entered the field of education, we can benefit from the convenience that ICT provides, and the mobility it brings makes learning independent of time and place. With mobile learning, teachers and students can start discussions in an online chat room without limitations of time or place. However, because mobile learning is so convenient, people tend to discuss problems in short online texts that lack detailed information, in an environment that is not well suited to expressing ideas. The ICT education environment may therefore cause misunderstanding between teachers and students. In order to help teachers and students better understand each other's views, this study aims to analyze students' short texts and classify students into several types of learning-problem groups. In addition, this study attempts to extend the possibly incomplete short texts with external resources prior to classification. In short, by applying short-text classification, this study can point out each student's learning problems and inform the instructor where the main focus of future courses should be, thus improving the ICT education environment. To achieve these goals, this research uses a convolutional neural network (CNN) to analyze short discussion content between teachers and students in an ICT education environment and divides students into several main types of learning-problem groups to facilitate answering their questions. This study also clusters sub-categories of each major learning type to indicate specific problems for each student. Unlike most neural network programs, this study attempts to extend the short texts with external resources before classifying them, to improve classification performance. In the empirical process, the chat records between teachers and students and the course materials are pre-processed, and a system is set up to compare the most similar parts of the teaching material with each student's chat history to improve future classification performance. The short-text classification function then uses a CNN to classify the enriched chat records into several major learning problems based on theory-driven titles. By applying these modules, this research hopes to clarify the main learning problems of students and inform teachers what they should focus on in future teaching.
Keywords: ICT education improvement, social equality, short text analysis, convolutional neural network
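A minimal sketch of a short-text CNN classifier of the kind described; the vocabulary size, sequence length, filter settings, and class count are illustrative assumptions:

```python
import tensorflow as tf

# Minimal short-text CNN: token ids -> embeddings -> 1-D convolution -> class.
vocab_size, seq_len, n_classes = 5000, 50, 4  # assumed values

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.Conv1D(128, 5, activation="relu"),  # n-gram-like filters
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would use tokenized chat messages labeled with learning-problem types:
# model.fit(token_ids, problem_labels, epochs=10, validation_split=0.2)
```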
Procedia PDF Downloads 128
2113 Probabilistic Crash Prediction and Prevention of Vehicle Crash
Authors: Lavanya Annadi, Fahimeh Jafari
Abstract:
Transportation brings immense benefits to society, but it also has its costs, including the cost of infrastructure, personnel, and equipment, as well as the loss of life and property in traffic accidents, delays in travel due to traffic congestion, and various indirect costs. Much research has been done to identify the various factors that affect road accidents, such as road infrastructure, traffic, sociodemographic characteristics, land use, and the environment. The aim of this research is to predict the crash probability of vehicles in the United States due to natural and structural causes, using machine learning and excluding behavioural causes such as overspeeding. These factors range from weather conditions, precipitation, visibility, wind speed, wind direction, temperature, pressure, and humidity, to road-structure factors such as bumps, roundabouts, no-exits, turning loops, give-ways, etc. Probabilities are divided into ten classes, and all predictions are based on multiclass classification techniques, which are supervised learning. This study considers all crashes collected by the US government across all states. To calculate the probability, the multinomial expected value was used and assigned as the crash-probability classification label. We applied three different classification models: multiclass logistic regression, random forest, and XGBoost. The numerical results show that XGBoost achieved a 75.2% accuracy rate, which indicates the role played by natural and structural causes in crashes. The paper also provides in-depth insights through exploratory data analysis.
Keywords: road safety, crash prediction, exploratory analysis, machine learning
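A minimal sketch of the best-performing model type, a ten-class XGBoost classifier, with random features standing in for the weather and road-structure variables (all hyperparameters are assumptions):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Illustrative stand-in for weather and road-structure features with ten
# crash-probability classes (the real study uses US government crash data).
rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 15))
y = rng.integers(0, 10, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = XGBClassifier(objective="multi:softprob", n_estimators=200,
                    max_depth=6, learning_rate=0.1)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```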
Procedia PDF Downloads 111
2112 An Attentional Bi-Stream Sequence Learner (AttBiSeL) for Credit Card Fraud Detection
Authors: Amir Shahab Shahabi, Mohsen Hasirian
Abstract:
Modern societies, marked by expansive Internet connectivity and the rise of e-commerce, are now integrated with digital platforms at an unprecedented level. The efficiency, speed, and accessibility of e-commerce have garnered a substantial consumer base. Against this backdrop, electronic banking has proliferated rapidly within the realm of online activities. However, this growth has inadvertently given rise to an environment conducive to illicit activities, notably electronic payment fraud, posing a formidable challenge to electronic banking. Electronic fraud detection plays a pivotal role in upholding the integrity of electronic commerce and business transactions, particularly in the context of credit cards, which underscores the imperative of comprehensive research in this field. To this end, our study introduces an Attentional Bi-Stream Sequence Learner (AttBiSeL) framework that leverages attention mechanisms and recurrent networks. By incorporating bidirectional recurrent layers, specifically bidirectional Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) layers, the proposed model adeptly extracts past and future transaction sequences while accounting for the temporal flow of information in both directions. Moreover, the integration of an attention mechanism accentuates specific transactions to varying degrees, as manifested in the output of the recurrent networks. The effectiveness of the proposed approach for automatic credit card fraud classification is evaluated on the European Cardholders' Fraud Dataset. Empirical results validate that the hybrid architectural paradigm presented in this study yields enhanced accuracy compared to previous studies.
Keywords: credit card fraud, deep learning, attention mechanism, recurrent neural networks
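The exact AttBiSeL architecture is not published in the abstract; a sketch of the general pattern it describes, two bidirectional recurrent streams (LSTM and GRU) fused and pooled with a simple additive attention, might look as follows, with all layer sizes and the fusion scheme being assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

seq_len, n_features = 30, 10  # assumed: 30 past transactions, 10 features each
inputs = layers.Input(shape=(seq_len, n_features))

lstm_stream = layers.Bidirectional(layers.LSTM(32, return_sequences=True))(inputs)
gru_stream = layers.Bidirectional(layers.GRU(32, return_sequences=True))(inputs)
merged = layers.Concatenate()([lstm_stream, gru_stream])

# Simple additive attention: score each time step, then take a weighted sum,
# so some transactions contribute more than others to the final decision.
scores = layers.Dense(1, activation="tanh")(merged)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([merged, weights])

outputs = layers.Dense(1, activation="sigmoid")(context)  # fraud vs. legitimate
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
model.summary()
```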
Procedia PDF Downloads 13
2111 Using Machine Learning to Predict Answers to Big-Five Personality Questions
Authors: Aadityaa Singla
Abstract:
The big five personality traits are openness, conscientiousness, extraversion, agreeableness, and neuroticism. To gain insight into their personality, many people turn to these categories, each of which has different meanings and characteristics. This information is important not only to individuals but also to career professionals and psychologists, who can use it for candidate assessment or job recruitment. The links between AI and psychology have been well studied in cognitive science, but this is still a rather novel development. Various AI classification models can accurately predict the answer to a personality question from ten input questions. This contrasts with the hundred questions that people normally have to answer to gain a complete picture of their five personality traits. To approach this problem, various AI classification models were used on a dataset to predict what a user would answer, and each model's prediction was compared to the actual response. Normally there are five answer choices (a 20% chance of a correct guess), and the models exceed that value to different degrees, proving their significance. Using an MLP classifier, a decision tree, a linear model, and K-nearest neighbors, test accuracies of 86.643%, 54.625%, 47.875%, and 52.125%, respectively, were obtained. These approaches show that there is potential for more nuanced predictions about personality to be made in the future.
Keywords: machine learning, personality, big five personality traits, cognitive science
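A minimal sketch of comparing the four model families named above on a five-choice prediction task, with synthetic data standing in for the questionnaire responses (feature counts and hyperparameters are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Illustrative stand-in: 10 answered questions as features, a 5-choice answer
# as the target (the real study uses Big-Five questionnaire responses).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=8,
                           n_classes=5, random_state=0)

models = {
    "MLP": MLPClassifier(max_iter=1000, random_state=0),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "linear model": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(),
}
for name, clf in models.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())  # chance level is 0.20
```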
Procedia PDF Downloads 145
2110 The Increasing of Unconfined Compression Strength of Clay Soils Stabilized with Cement
Authors: Ali̇ Si̇nan Soğanci
Abstract:
Cement stabilization is one of the ground improvement methods applied worldwide to increase the strength of clayey soils. The use of cement has many advantages compared to other stabilization methods: cement stabilization can be done quickly, the cost is low, and it creates a more durable structure with the soil. Cement can be used in the treatment of a wide variety of soils, and the best results of cement stabilization have been seen on silts as well as coarse-grained soils. In this study, blocks of clay were taken from the route of the 125 km long Apa-Hotamış conveyance channel to be built in Konya, which will carry water at 70 m³/s from the Mavi tunnel to the Hotamış storage. First, the index properties of the clay samples were determined according to the Unified Soil Classification System. The experimental program was carried out on compacted soil specimens with 0%, 7%, 15%, and 30% cement additives, and the results of the unconfined compression tests are discussed. These results indicated an increase in strength with increasing cement content.
Keywords: cement stabilization, unconfined compression test, clayey soils, unified soil classification system
Procedia PDF Downloads 422
2109 Classification of Health Information Needs of Hypertensive Patients in the Online Health Community Based on Content Analysis
Authors: Aijing Luo, Zirui Xin, Yifeng Yuan
Abstract:
Background: With the rapid development of online health communities, more and more patients and families are seeking health information on the Internet. Objective: This study aimed to discuss how to fully reveal the health information needs expressed by hypertensive patients in their questions in the online environment. Methods: This study randomly selected 1,000 text records from the questions of hypertensive patients posted from 2008 to 2018 on the website www.haodf.com and constructed a classification system through literature research and content analysis. The study identified the background characteristics and questioning intention of each hypertensive patient based on the patient's question and used co-occurrence network analysis to explore the features of the health information needs of hypertensive patients. Results: The classification system for the health information needs of patients with hypertension is composed of 9 parts: 355 kinds of drugs, 395 kinds of symptoms and signs, 545 kinds of tests and examinations, 526 kinds of demographic data, 80 kinds of diseases, 37 kinds of risk factors, 43 kinds of emotions, 6 kinds of lifestyles, and 49 kinds of questions. The characteristics of the online health information needs of hypertensive patients include: i) more than 49% of patients describe features such as drugs, symptoms and signs, tests and examinations, demographic data, and diseases; ii) these groups are most concerned about treatment (77.8%), followed by diagnosis (32.3%); iii) 65.8% of hypertensive patients ask doctors several questions at the same time. 28.3% of the patients are very concerned about how to adjust their medication and simultaneously ask other treatment-related questions, including drug side effects, whether to take drugs, and how to treat a disease; in addition, 17.6% of the patients consult doctors online about the causes of their clinical findings, including the relationship between the clinical findings and a disease, the treatment of a disease, medication, and examinations. Conclusion: In the online environment, the health information needs expressed by Chinese hypertensive patients to doctors are personalized; that is, patients with different background features express different questioning intentions to doctors. The classification system constructed in this study can guide health information service providers in the construction of online health resources and help solve the problem of information asymmetry in doctor-patient communication.
Keywords: online health community, health information needs, hypertensive patients, doctor-patient communication
Procedia PDF Downloads 119
2108 Early-Warning Lights Classification Management System for Industrial Parks in Taiwan
Authors: Yu-Min Chang, Kuo-Sheng Tsai, Hung-Te Tsai, Chia-Hsin Li
Abstract:
This paper presents the early-warning lights classification management system for industrial parks promoted by the Taiwan Environmental Protection Administration (EPA) since 2011, including the definition of each early-warning light, objectives, the action program, and accomplishments. All 151 industrial parks in Taiwan were classified into four early-warning lights (red, orange, yellow, and green) for carrying out the respective pollution management, according to soil and groundwater quality monitoring data, regulatory compliance, and regulatory listing as a control or remediation site. The Taiwan EPA set up a priority list of industrial parks with high pollution potential and investigated their soil and groundwater quality based on the results of the light classification and pollution potential assessment. In 2011-2013, 44 industrial parks were selected for different investigations, such as the establishment of early-warning groundwater well networks and pollution investigation/verification for the red- and orange-light industrial parks, and environmental background surveys for the yellow-light industrial parks. Among them, 22 industrial parks were newly or repeatedly confirmed to have pollutant concentrations exceeding the soil or groundwater pollution control standards. Thus, further investigation, groundwater use restrictions, listing as pollution control or remediation sites, and pollutant isolation measures were implemented by the local environmental protection and industry competent authorities, and the early-warning lights of those industrial parks were adjusted up to orange or red. Up to the present, preliminary positive effects of the soil and groundwater quality management system for industrial parks have been noticed in several aspects, such as environmental background information collection, early warning of pollution risk, pollution investigation and control, information integration and application, and inter-agency collaboration. Finally, self-initiated quality management of industrial parks will be carried out on the basis of inter-agency collaboration through the classified early-warning and management lights system, as well as the regular announcement of the status of each industrial park.
Keywords: industrial park, soil and groundwater quality management, early-warning lights classification, SOP for reporting and treatment of monitored abnormal events
Procedia PDF Downloads 326
2107 Preparation on Sentimental Analysis on Social Media Comments with Bidirectional Long Short-Term Memory Gated Recurrent Unit and Model Glove in Portuguese
Authors: Leonardo Alfredo Mendoza, Cristian Munoz, Marco Aurelio Pacheco, Manoela Kohler, Evelyn Batista, Rodrigo Moura
Abstract:
Natural Language Processing (NLP) techniques are increasingly powerful in interpreting a person's feelings and reactions to a product or service. Sentiment analysis has become a fundamental tool for this interpretation but has few applications in languages other than English. This paper presents a sentiment analysis classification for Portuguese, built on a base of social network comments in Portuguese. A word-embedding representation was used, with a 50-dimension pre-trained GloVe model generated from a corpus entirely in Portuguese. To generate this classification, bidirectional Long Short-Term Memory (BI-LSTM) and bidirectional Gated Recurrent Unit (GRU) models are used, reaching results of 99.1%.
Keywords: natural language processing, sentiment analysis, bidirectional long short-term memory, BI-LSTM, gated recurrent unit, GRU
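A minimal sketch of a BiLSTM sentiment classifier seeded with 50-dimension GloVe-style vectors; the embedding matrix below is random, standing in for vectors loaded from a Portuguese GloVe file, and all sizes are assumptions:

```python
import numpy as np
import tensorflow as tf

vocab_size, embed_dim, seq_len = 20000, 50, 100  # assumed values
# Random stand-in for the pre-trained Portuguese GloVe vectors.
embedding_matrix = np.random.normal(size=(vocab_size, embed_dim)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
        trainable=False),                              # keep GloVe vectors frozen
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```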
Procedia PDF Downloads 159
2106 Classification of Digital Chest Radiographs Using Image Processing Techniques to Aid in Diagnosis of Pulmonary Tuberculosis
Authors: A. J. S. P. Nileema, S. Kulatunga , S. H. Palihawadana
Abstract:
A computer-aided detection (CAD) system was developed for the diagnosis of pulmonary tuberculosis (PTB) from digital chest X-rays, using MATLAB image processing techniques with a statistical approach. The study comprised 200 digital chest radiographs collected from the National Hospital for Respiratory Diseases, Welisara, Sri Lanka. Pre-processing was done to remove identification details. Lung fields were segmented and then divided into four quadrants (right upper, left upper, right lower, and left lower) using image processing techniques in MATLAB. Contrast, correlation, homogeneity, energy, entropy, and maximum-probability texture features were extracted using the gray-level co-occurrence matrix method. Descriptive statistics and normal distribution analysis were performed using SPSS. Depending on the radiologists' interpretation, chest radiographs were classified manually into PTB-positive (PTBP) and PTB-negative (PTBN) classes. Features with a standard normal distribution were analyzed using an independent-sample t-test between the PTBP and PTBN chest radiographs. Among the six features tested, the contrast, correlation, energy, entropy, and maximum-probability features showed a statistically significant difference between the two classes at the 95% confidence interval and therefore could be used in the classification of chest radiographs for PTB diagnosis. With the resulting value ranges of the five normally distributed texture features, a classification algorithm was defined to recognize and classify the quadrant images. If the texture feature values of a quadrant image under test fall within the defined region, it is identified as a PTBP (abnormal) quadrant and labeled 'Abnormal' in red, with its border highlighted in red; if the values fall outside the defined range, it is identified as PTBN (normal) and labeled 'Normal' in blue, with no change to the image outline. The developed classification algorithm showed a high sensitivity of 92%, which makes it an efficient CAD system, with a modest specificity of 70%.
Keywords: chest radiographs, computer aided detection, image processing, pulmonary tuberculosis
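The study used MATLAB; an equivalent gray-level co-occurrence matrix feature extraction in Python with scikit-image is sketched below on a random 8-bit quadrant image (the distance and angle settings are assumptions):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Random stand-in for a segmented lung-field quadrant image.
quadrant = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)

glcm = graycomatrix(quadrant, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

# Four of the six features come directly from graycoprops; entropy and
# maximum probability are computed from the normalized GLCM itself.
features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "correlation", "homogeneity", "energy")}
p = glcm[:, :, 0, 0]
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
features["max_probability"] = p.max()
print(features)
```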
Procedia PDF Downloads 126
2105 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record
Authors: Raghavi C. Janaswamy
Abstract:
In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structure restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions are predicted as a node classification task using graph-based open-source EHR data from the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it closely represents real data and is voluminous. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually performing the node classification using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome is deemed to open up opportunities for data querying toward better predictions and accuracy.
Keywords: electronic health record, graph neural network, heterogeneous data, prediction
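A minimal PyTorch Geometric node classification sketch of the kind described; here the node features and edges are random, whereas in the study they would come from TigerGraph via pyTigerGraph, and the two-layer GCN is an assumed architecture:

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

num_nodes, num_features, num_classes = 100, 16, 4  # assumed sizes
x = torch.randn(num_nodes, num_features)
edge_index = torch.randint(0, num_nodes, (2, 400))  # random graph edges
y = torch.randint(0, num_classes, (num_nodes,))     # condition label per node
data = Data(x=x, edge_index=edge_index, y=y)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(num_features, 32)
        self.conv2 = GCNConv(32, num_classes)

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = GCN()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for epoch in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(data), data.y)  # predict a condition per node
    loss.backward()
    optimizer.step()
print(model(data).argmax(dim=1)[:10])
```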
Procedia PDF Downloads 86
2104 Towards Real-Time Classification of Finger Movement Direction Using Encephalography Independent Components
Authors: Mohamed Mounir Tellache, Hiroyuki Kambara, Yasuharu Koike, Makoto Miyakoshi, Natsue Yoshimura
Abstract:
This study explores the practicality of using electroencephalographic (EEG) independent components to predict eight-direction finger movements in pseudo-real-time. Six healthy participants with individual head MRI images performed finger movements in eight directions with two different arm configurations. The analysis was performed in two stages. The first stage consisted of using independent component analysis (ICA) to separate the signals representing brain activity from non-brain-activity signals and to obtain the unmixing matrix. The resulting independent components (ICs) were inspected, and those reflecting brain activity were selected. Finally, the time series of the selected ICs were used to predict the eight finger-movement directions using Sparse Logistic Regression (SLR). The second stage consisted of using the previously obtained unmixing matrix, the selected ICs, and the model obtained by applying SLR to classify a different EEG dataset. This method was applied in two settings, the single-participant level and the group level. At the single-participant level, the EEG datasets used in the first and second stages originated from the same participant. At the group level, the EEG datasets used in the first stage were constructed by temporally concatenating each combination, without repetition, of the EEG datasets of five participants out of six, whereas the EEG dataset used in the second stage originated from the remaining participant. The average test classification results across datasets (mean ± S.D.) were 38.62 ± 8.36% for the single-participant level, which was significantly higher than the chance level (12.50 ± 0.01%), and 27.26 ± 4.39% for the group level, which was also significantly higher than the chance level (12.49 ± 0.01%). The classification accuracy within [−45°, 45°] of the true direction was 70.03 ± 8.14% for the single-participant level and 62.63 ± 6.07% for the group level, which may be promising for some real-life applications. Clustering and contribution analyses further revealed the brain regions involved in finger movement and the temporal aspect of their contribution to the classification. These results showed the possibility of using the ICA-based method in combination with other methods to build a real-time system to control prostheses.
Keywords: brain-computer interface, electroencephalography, finger motion decoding, independent component analysis, pseudo real-time motion decoding
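A minimal sketch of the two-stage idea (unmix EEG into independent components, then classify direction with a sparse L1-penalized logistic regression, a common stand-in for SLR); the random data, component count, and solver settings are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
eeg = rng.normal(size=(5000, 32))            # samples x channels (random stand-in)
directions = rng.integers(0, 8, size=5000)   # eight-direction labels per sample

ica = FastICA(n_components=16, random_state=0)
ics = ica.fit_transform(eeg)                 # the unmixing matrix is ica.components_

# Sparse multinomial logistic regression over the selected IC time series.
slr = LogisticRegression(penalty="l1", solver="saga", max_iter=2000)
slr.fit(ics, directions)
print("training accuracy:", slr.score(ics, directions))
```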
Procedia PDF Downloads 138
2103 An Application to Predict the Best Study Path for Information Technology Students in Learning Institutes
Authors: L. S. Chathurika
Abstract:
Early prediction of student performance is an important factor in achieving academic excellence. Whatever the study stream in secondary education, students in Sri Lanka lay the foundation for higher studies during the first year of their degree or diploma program. The information technology (IT) field offers certain improvements in the education domain by letting students select specialization areas that showcase their talents and skills. These specializations can be software engineering, network administration, database administration, multimedia design, etc. After completing the first year, students attempt to select the best path by considering numerous factors. The purpose of this experiment is to predict the best study path using machine learning algorithms. Five classification algorithms were selected and tested: decision tree, support vector machine, artificial neural network, Naïve Bayes, and logistic regression. The support vector machine obtained the highest accuracy, 82.4%. The most influential features were then identified to select the best study path.
Keywords: algorithm, classification, evaluation, features, testing, training
Procedia PDF Downloads 119