Search results for: Vicon physical data.
7076 Theoretical and Analytical Approaches for Investigating the Relations between Sediment Transport and Channel Shape
Authors: Nidal Hadadin
Abstract:
This study investigated the effect of cross-sectional geometry on sediment transport rate. The processes of sediment transport are generally associated with environmental management, such as pollution caused by the formation of suspended sediment in the channel network of a watershed and the preservation of physical habitats and native vegetation, and with engineering applications, such as the influence of sediment transport on hydraulic structures and flood control design. Many equations have been proposed for computing sediment transport, and the influence of many variables on sediment transport is well understood; however, the effect of other variables still requires further research. For open channel flow, sediment transport capacity is recognized to be a function of friction slope, flow velocity, grain size, grain roughness and form roughness, the hydraulic radius of the bed section, and the type and quantity of vegetation cover. The effect of the cross-sectional geometry of the channel on sediment transport is one of the variables that needs additional investigation. The width-depth ratio (W/d) is a comparative indicator of the channel shape. The width is the total distance across the channel and the depth is the mean depth of the channel. The mean depth is best calculated as the total cross-sectional area divided by the top width. Channels with high W/d ratios tend to be shallow and wide, while channels with low W/d ratios tend to be narrow and deep. In this study, the effect of the width-depth ratio on sediment transport was demonstrated theoretically, by inserting the shape factor into the sediment continuity equation, and analytically, by utilizing field data sets for the Yalobusha River. Both approaches showed that as the width-depth ratio increases, the sediment transport decreases.
Keywords: Sediment transport, shape factor, hydraulic geometry, flow discharge, width-depth ratio.
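The mean-depth and width-depth definitions above translate directly into a short calculation. The sketch below is illustrative only (the cross-sections are hypothetical, not Yalobusha River survey data):

```python
# Width-depth ratio from channel survey data -- a minimal illustration.
# Mean depth follows the definition in the abstract:
# total cross-sectional area divided by top width.

def width_depth_ratio(area_m2: float, top_width_m: float) -> float:
    """Return W/d, where d = A / W (mean depth)."""
    mean_depth = area_m2 / top_width_m
    return top_width_m / mean_depth  # equals W**2 / A

# Hypothetical cross-sections (not river measurements):
for area, width in [(120.0, 40.0), (120.0, 80.0)]:
    print(f"W={width} m, A={area} m^2, W/d={width_depth_ratio(area, width):.1f}")
# The wider, shallower section has the larger W/d; per the study,
# sediment transport capacity decreases as W/d increases.
```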
7075 Application of Neural Networks for 24-Hour-Ahead Load Forecasting
Authors: Fatemeh Mosalman Yazdi
Abstract:
One of the most important requirements for the operation and planning activities of an electrical utility is the prediction of load from the next hour to several days out, known as short-term load forecasting. This paper presents the development of an artificial neural network based short-term load forecasting model. The model can forecast daily load profiles with a lead time of one day, i.e., for the next 24 hours. In this method, the days of the year are divided into groups using average temperature, with the groups formed according to the linearity of the load curve. The final forecast for each group is obtained by distinguishing weekdays from weekends. The paper also investigates the effects of temperature and humidity on the consumption curve. To forecast the load curve of holidays, the peak and valley are forecast first, and then the neural network forecast is re-shaped with the new data. The ANN-based load models are trained using hourly historical load data and daily historical max/min temperature and humidity data. The results of testing the system on data from the Yazd utility are reported.
Keywords: Artificial neural network, holiday forecasting, peak and valley load forecasting, short-term load forecasting.
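As a rough illustration of such a forecaster (not the authors' network or the Yazd data; the architecture, inputs, and random data below are assumptions), a day-ahead profile can be produced by a multilayer perceptron that maps the previous day's 24 hourly loads plus temperature and humidity to the next 24 hours:

```python
# A minimal sketch of a 24-hour-ahead load forecaster. Inputs and
# architecture are assumptions, not the paper's model.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_days = 200
X = np.hstack([
    rng.uniform(size=(n_days, 24)),   # previous day's 24 hourly loads
    rng.uniform(size=(n_days, 3)),    # max temp, min temp, humidity
])
y = rng.uniform(size=(n_days, 24))    # next day's 24 hourly loads

model = MLPRegressor(hidden_layer_sizes=(48,), max_iter=2000, random_state=0)
model.fit(X[:-1], y[:-1])             # train on all but the last day
forecast = model.predict(X[-1:])      # 24-hour-ahead profile for the last day
print(forecast.shape)                 # (1, 24)
```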
7074 On the Joint Optimization of Performance and Power Consumption in Data Centers
Authors: Samee Ullah Khan, C. Ardil
Abstract:
We model the process of a data center as a multi-objective problem of mapping independent tasks onto a set of data center machines that simultaneously minimizes the energy consumption and response time (makespan) subject to the constraints of deadlines and architectural requirements. A simple technique based on multi-objective goal programming is proposed that guarantees a Pareto optimal solution with excellent convergence. The proposed technique is also compared with other traditional approaches. The simulation results show that the proposed technique achieves superior performance compared to the min-min heuristics, and competitive performance relative to the optimal solution implemented in UNDO for small-scale problems.
Keywords: Energy-efficient computing, distributed systems, multi-objective optimization.
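A toy version of the underlying optimization problem can make the trade-off concrete. The sketch below brute-forces a tiny task-to-machine mapping and scores it with a weighted sum of makespan and energy; this weighted-sum scoring is a simple stand-in for the paper's goal-programming formulation, and all numbers are invented:

```python
# Toy joint makespan/energy mapping of independent tasks to machines.
from itertools import product

exec_time = [[3, 5], [2, 4], [4, 2]]   # time of task i on machine j
power     = [10, 6]                    # power draw of each machine

def evaluate(assignment):
    finish = [0.0, 0.0]
    energy = 0.0
    for task, machine in enumerate(assignment):
        t = exec_time[task][machine]
        finish[machine] += t
        energy += t * power[machine]
    return max(finish), energy         # (makespan, energy)

w_time, w_energy = 0.5, 0.5            # illustrative trade-off weights
best = min(product(range(2), repeat=3),
           key=lambda a: w_time * evaluate(a)[0] + w_energy * evaluate(a)[1])
print(best, evaluate(best))
```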
7073 Preparing Data for Calibration of Mechanistic-Empirical Pavement Design Guide in Central Saudi Arabia
Authors: Abdulraaof H. Alqaili, Hamad A. Alsoliman
Abstract:
Through progress in pavement design developments, a pavement design method was developed, titled the Mechanistic-Empirical Pavement Design Guide (MEPDG). Nowadays, the road and highway network in Saudi Arabia is evolving as a result of increasing traffic volume. Therefore, the MEPDG is currently implemented for flexible pavement design by the Saudi Ministry of Transportation. Implementation of the MEPDG for local pavement design requires the calibration of distress models under local conditions (traffic, climate, and materials). This paper aims to prepare data for calibration of the MEPDG in Central Saudi Arabia. Thus, the first goal is data collection for the design of flexible pavement from the local conditions of the Riyadh region. Since the collected data must be converted into input data, the main goal of this paper is the analysis of the collected data. The data analysis in this paper covers: truck classification, traffic growth factor, Annual Average Daily Truck Traffic (AADTT), Monthly Adjustment Factors (MAFi), Vehicle Class Distribution (VCD), truck hourly distribution factors, Axle Load Distribution Factors (ALDF), number of axle types (single, tandem, and tridem) per truck class, cloud cover percent, and the road sections selected for the local calibration. Detailed descriptions of the input parameters are given in this paper, which leads to an approach for successful implementation of the MEPDG. Local calibration of the MEPDG to the conditions of the Riyadh region can be performed based on the findings in this paper.
Keywords: Mechanistic-empirical pavement design guide, traffic characteristics, materials properties, climate, Riyadh.
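Several of the listed traffic inputs are simple ratios over classified truck counts. The sketch below, with hypothetical counts rather than the Riyadh data, shows one plausible way to derive MAF, VCD, and AADTT:

```python
# A minimal sketch of two MEPDG traffic inputs from classified counts:
# Monthly Adjustment Factors (MAF) and Vehicle Class Distribution (VCD).
# Counts below are hypothetical, not the Riyadh measurements.
import numpy as np

# monthly_counts[m, c]: trucks of class c counted in month m (12 x classes)
monthly_counts = np.random.default_rng(1).integers(800, 1200, size=(12, 4))

# MAF for class c in month m: monthly volume / average monthly volume
maf = monthly_counts / monthly_counts.mean(axis=0, keepdims=True)

# VCD: share of each truck class in the annual truck traffic
annual_by_class = monthly_counts.sum(axis=0)
vcd = 100.0 * annual_by_class / annual_by_class.sum()

aadtt = monthly_counts.sum() / 365.0   # Annual Average Daily Truck Traffic
print(maf.round(2), vcd.round(1), round(aadtt, 1))
```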
7072 Preparation and Characterization of Pure PVA and PVA/MMT Matrix: Effect of Thermal Treatment
Authors: Albana Hasimi, Edlira Tako, Partizan Malkaj, Elvin Çomo, Blerina Papajani, Mirela Ndrita, Ledjan Malaj
Abstract:
Many endeavors have been exerted during the last years to develop new artificial polymeric membranes which fulfill the demanded conditions for biomedical uses. One of the most tested polymers is poly(vinyl alcohol) [PVA]. Our team studies the possibility of using PVA for personal protective equipment against COVID-19, and explores modifying the properties of the polymer by adding montmorillonite [MMT]. Heat-treatment above the glass transition temperature is used to improve mechanical properties, mainly by increasing the crystallinity of the polymer, which acts as a physical network. Temperature-Modulated Differential Scanning Calorimetry (TMDSC) measurements indicated that the presence of 0.5% MMT in PVA causes a higher Tg value and a sharper crystallinity peak. Two melting points of the crystals are resolved during heating from 25 to 240 °C, with overlap of the recrystallization ridges during cooling from 240 to 25 °C. This is indicative of the presence of two types (quality or structure) of polymer crystals. On the other hand, some indication of improvement of the quality of the crystals by heat-treatment is given by the distinct non-reversing contribution to melting. Data on sorption and transport of water in PVA films, both pure PVA and the PVA/MMT matrix, modified by thermal treatment are presented. The membranes become more rigid as a result of the heat treatment, and because of this the water uptake is significantly lower, as indicated by analysis of the resulting water uptake kinetics. The presence of 0.5% w/w of MMT has no significant impact on the properties of the PVA membranes. Water uptake kinetics deviate from Fick's law due to slow relaxation of the glassy polymer matrix for all types of membranes.
Keywords: Crystallinity, montmorillonite, nanocomposite, poly(vinyl alcohol).
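The Fickian-or-not question in the abstract is typically settled by fitting the short-time uptake to a power law M_t/M_inf = k * t^n, where n = 0.5 indicates Fickian diffusion. A minimal sketch with synthetic data points (not the measured PVA kinetics):

```python
# Water-uptake kinetics check -- a minimal sketch. An exponent n fitted
# away from 0.5 signals the relaxation-controlled deviation from
# Fick's law that the abstract describes. Data points are synthetic.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1.0, 4.0, 9.0, 16.0, 25.0])          # time (arbitrary units)
uptake = np.array([0.12, 0.22, 0.30, 0.38, 0.45])  # M_t / M_inf (< 0.6)

power_law = lambda t, k, n: k * t**n
(k, n), _ = curve_fit(power_law, t, uptake, p0=(0.1, 0.5))
print(f"k={k:.3f}, n={n:.3f}  (n = 0.5 would be ideal Fickian behaviour)")
```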
7071 Economized Sensor Data Processing with Vehicle Platooning
Authors: Henry Hexmoor, Kailash Yelasani
Abstract:
We present vehicular platooning as a special case of a crowd-sensing framework, where sharing sensory information among a crowd is used for their collective benefit. After offering an abstract policy that governs processes involving a vehicular platoon, we review several common scenarios and components surrounding vehicular platooning. We then present a simulated prototype that illustrates the efficiency of road usage and vehicle travel time derived from platooning. We argue that one of the paramount benefits of platooning, overlooked elsewhere, is the substantial computational savings (i.e., economizing benefits) in the acquisition and processing of sensory data among vehicles sharing the road: the most capable vehicle can share data gathered from its sensors with nearby vehicles grouped into a platoon.
Keywords: Cloud network, collaboration, Internet of Things, social network.
7070 Machine Learning Development Audit Framework: Assessment and Inspection of Risk and Quality of Data, Model and Development Process
Authors: Jan Stodt, Christoph Reich
Abstract:
The usage of machine learning models for prediction is growing rapidly, and proof that the intended requirements are met is essential. Audits are a proven method to determine whether requirements or guidelines are met. However, machine learning models have intrinsic characteristics, such as the quality of training data, that make it difficult to demonstrate the required behavior and make audits more challenging. This paper describes an ML audit framework that evaluates and reviews the risks of machine learning applications, the quality of the training data, and the machine learning model. We evaluate and demonstrate the functionality of the proposed framework by auditing a steel plate fault prediction model.
Keywords: Audit, machine learning, assessment, metrics.
7069 Information Seeking through Assimilation Process in Thai Organization
Authors: Pornprom Chomngam
Abstract:
The purpose of this study is to examine employee assessments of the usefulness/value of different types of information available to those employees during the process of organizational assimilation. Participants in the study were 247 "new" employees at Bangkok Bank. Bangkok Bank considers employees whose length of stay with the bank has been less than 18 months as new employees. Questionnaires were administered to all of the Bank's new employees to obtain the data for this study. Repeated measures analysis was used to analyze the data. The data were summed and coded using the Statistical Package for the Social Sciences. Newcomers indicate that social information is the most useful information, followed by job (technical, referent, and appraisal) information, political, normative, and organizational information. Essentially, social, job, and political information are evaluated by newcomers as highly useful, while normative and organizational information are rated as moderately useful.
Keywords: Information seeking, organization assimilation.
7068 Image Steganography Using Least Significant Bit Technique
Authors: Preeti Kumari, Ridhi Kapoor
Abstract:
In any communication, security is the most important issue in today's world. Steganography is the process of hiding important data inside other data, such as text, audio, video, and images. The interest in this topic is to provide availability, confidentiality, integrity, and authenticity of data. A steganographic technique embeds hidden content in unremarkable cover media so as not to arouse the suspicion of an eavesdropper, third party, or hacker. Many compression, encryption, decryption, and embedding methods are used in digital image steganography. Compression introduces noise into the image; to sustain this noise, the LSB insertion technique is used. The performance of the proposed embedding system with respect to providing security to the secret message and robustness is discussed. We also demonstrate the maximum steganography capacity and visual distortion.
Keywords: Steganography, LSB, encoding, information hiding, color image.
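The LSB insertion idea itself is compact enough to show directly. The sketch below is a generic grayscale illustration, not the paper's color-image system; it embeds message bits into pixel LSBs and recovers them:

```python
# A minimal sketch of LSB insertion on a grayscale image array:
# each message bit replaces the least significant bit of one pixel,
# so the visual change is at most 1 intensity level per pixel.
import numpy as np

def embed(cover: np.ndarray, bits: list) -> np.ndarray:
    stego = cover.copy().ravel()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b   # clear the LSB, then set it
    return stego.reshape(cover.shape)

def extract(stego: np.ndarray, n_bits: int) -> list:
    return [int(p & 1) for p in stego.ravel()[:n_bits]]

cover = np.random.default_rng(2).integers(0, 256, size=(4, 4), dtype=np.uint8)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, message)
assert extract(stego, len(message)) == message
print(np.abs(stego.astype(int) - cover.astype(int)).max())  # <= 1
```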
7067 Embodied Cognition as a Concept of Educational Neuroscience and Phenomenology
Authors: Elham Shirvani-Ghadikolaei
Abstract:
In this paper, we examine the connection between the human mind and body within the framework of Merleau-Ponty's phenomenology, and study the role of this connection in designing more efficient learning environments, alongside findings in embodied cognition and educational neuroscience. Our research shows the interplay between the mind and the body in the external world and discusses its implications. Based on these observations, we make suggestions as to how the educational system can benefit from taking into account the interaction between the mind and the body in educational affairs.
Keywords: Educational neurosciences, embodied cognition, pedagogical neurosciences, phenomenology.
7066 Using Data Mining Techniques for Finding Cardiac Outlier Patients
Authors: Farhan Ismaeel Dakheel, Raoof Smko, K. Negrat, Abdelsalam Almarimi
Abstract:
In this paper we used data mining techniques to identify outlier patients who use large amounts of drugs over a long period of time. Any healthcare or health insurance system should deal with the quantities of drugs utilized by chronic disease patients. In the Kingdom of Bahrain, about 20% of the health budget is spent on medications. The managers of healthcare systems do not have enough information about how drugs are utilized by chronic disease patients, whether there is any misuse, or whether there are outlier patients. In this work, done in cooperation with the information department of the Bahrain Defence Force hospital, we selected the data for cardiac patients in the period from 1/1/2008 to 31/12/2008 as the data for the model in this paper. We used three techniques for finding the drug utilization of cardiac patients: first we applied a clustering technique, followed by a measurement of clustering validity, and finally we applied a decision tree as a classification algorithm. The clustering results divide the 1603 patients, who received 15,806 prescriptions during this period, into three groups according to drug utilization, where 23 patients (2.59%) who received 1316 prescriptions (8.32%) are classified as outliers. The classification algorithm shows that the average drug utilization, the age, and the gender of the patient can be considered the main predictive factors in the induced model.
Keywords: Data mining, clustering, classification, drug utilization.
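As an illustration of the clustering step (with synthetic utilization figures, not the hospital records), k-means can partition patients by prescription volume, after which the small high-usage cluster is flagged:

```python
# A minimal sketch of the clustering step: group patients by drug
# utilization and flag the small high-usage cluster as outliers.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# features: [prescriptions per year, average drugs per prescription]
normal = rng.normal([10, 2], [3, 0.5], size=(200, 2))
heavy  = rng.normal([60, 5], [8, 1.0], size=(5, 2))     # potential outliers
X = np.vstack([normal, heavy])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
outlier_cluster = np.argmax(km.cluster_centers_[:, 0])  # highest utilization
print("flagged patients:", np.sum(km.labels_ == outlier_cluster))
```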
7065 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data
Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu
Abstract:
Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which considers the anomalous fluctuation scaling known as Taylor's law. This method is extended to handle sales data left incomplete by stock-outs, by introducing maximum likelihood estimation for censored data. A way to determine the optimal stock while pricing the cost of waste reduction is also proposed. This study focuses on examining the methods for large sales numbers, where Taylor's law is obvious. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around 1% profit loss realizes a halving of disposal at a proportionality constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction keeping a high profit, especially with large sales numbers.
Keywords: Food waste reduction, particle filter, point of sales, sustainable development goals, Taylor's Law, time series analysis.
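Taylor's law itself is easy to estimate from POS data: the variance of an item's daily sales scales as a power of its mean, so a log-log regression across items gives the exponent. A minimal sketch on synthetic sales counts (the paper's particle-filter machinery is not reproduced):

```python
# Estimating Taylor's law from POS sales counts -- a minimal sketch.
# Taylor's law relates the variance of daily sales to the mean:
#   variance = c * mean**beta,
# so log(var) vs. log(mean) across items is fitted by a line.
import numpy as np

rng = np.random.default_rng(4)
mean_levels = rng.uniform(5, 500, size=50)               # 50 items
sales = rng.poisson(mean_levels, size=(365, 50))         # daily sales counts

m, v = sales.mean(axis=0), sales.var(axis=0)
beta, log_c = np.polyfit(np.log(m), np.log(v), 1)
print(f"beta={beta:.2f} (Poisson fluctuations give beta close to 1)")
```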
7064 Experimental Teaching, Perceived Usefulness, Ease of Use, Learning Interest and Science Achievement of Taiwan 8th Graders in TIMSS 2007 Database
Authors: Pei Wen Liao, Tsung Hau Jen
Abstract:
The data of Taiwanese 8th graders in the 4th cycle of the Trends in International Mathematics and Science Study (TIMSS) are analyzed to examine the influence of science teachers' preference for experimental teaching on the relationships between the affective variables (the perceived usefulness of science, ease of using science, and science learning interest) and academic achievement in science. After dealing with missing data, data from 3711 students and 145 science teachers were analyzed through a Hierarchical Linear Modeling technique. The major objective of this study was to determine how experimental teaching moderates the relationship between perceived usefulness and achievement.
Keywords: TIMSS database, science achievement, experimental teaching, perceived usefulness, perceived ease of use.
7063 A Comparative Study of Cardio Respiratory Efficiency between Aquatic and Track and Field Performers
Authors: Sumanta Daw, Gopal Chandra Saha
Abstract:
The present study was conducted to explore the basic pulmonary functions, which may generally vary according to the bio-physical characteristics of sports performers, including age, height, body weight, and environment. Regular and specific training exercises also change the characteristics of an athlete's prowess and produce a positive effect on physiological functioning, mostly upon cardio-pulmonary efficiency, thereby improving the body mechanism. The objective of the present study was to compare the differences in cardio-respiratory functions between aquatic and track and field performers. As cardio-respiratory functions are influenced by pulse rate and blood pressure (systolic and diastolic), both of these factors were also taken into consideration. The components selected under cardio-respiratory functions for the present study were: i) the FEV1/FVC ratio (forced expiratory volume divided by forced vital capacity; the number represents the percentage of lung capacity that can be exhaled in one second), ii) FEV1 (the amount of air which can be forced out of the lungs in one second), and iii) FVC (forced vital capacity, the greatest total amount of air that can be forcefully breathed out after breathing in as deeply as possible). All three selected components of cardio-respiratory efficiency were measured by spirometry. Pulse rate was determined manually at the radial artery, which is located on the thumb side of the wrist. Blood pressure was assessed by sphygmomanometer. All the data were taken in the resting condition. 36 subjects were selected for the present study, of which 18 were water polo players and the rest were sprinters. The age group of the subjects was between 18 and 23 years. The obtained data, in the form of digital scores, were treated statistically to derive results and draw conclusions. The mean and standard deviation (SD) were used as descriptive statistics, and the significance of the difference between the two subject groups was assessed with the statistical t-test. It was found that all three components, i.e., the FEV1/FVC ratio (p-value 0.0148 < 0.05), FEV1 (p-value 0.0010 < 0.01), and FVC (p-value 0.0067 < 0.01), differ significantly, with water polo players proving to be better in terms of cardio-respiratory functions than sprinters. The study thus clearly suggests that the exercise training as well as the practice medium associated with water polo players has played an important role in determining better cardio-respiratory efficiency than in track and field athletes. The outcome of the present study revealed that land-based activities may not provide as much impact on lung function as water activities.
Keywords: Cardio-respiratory efficiency, spirometry, water polo players, sprinters.
7062 A Cumulative Learning Approach to Data Mining Employing Censored Production Rules (CPRs)
Authors: Rekha Kandwal, Kamal K.Bharadwaj
Abstract:
Knowledge is indispensable, but voluminous knowledge becomes a bottleneck for efficient processing. A great challenge for data mining activity is the generation of a large number of potential rules as a result of the mining process; in fact, the result size is sometimes comparable to the original data. Traditional data mining pruning activities, such as support, do not sufficiently reduce the huge rule space. Moreover, many practical applications are characterized by continual change of data and knowledge, thereby making knowledge voluminous with each change. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) in the form If P Then D. Michalski & Winston proposed Censored Production Rules (CPRs), as an extension of production rules, that exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form: If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception conditions when the resources needed to establish their presence are tight, or when there is simply no information available as to whether they hold or not. Thus the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. In this paper a scheme based on the Dempster-Shafer Theory (DST) interpretation of a CPR is suggested for discovering CPRs from the discovered flat PRs. The discovery of CPRs from flat rules results in considerable reduction of the already discovered rules. The proposed scheme incrementally incorporates new knowledge and also reduces the size of the knowledge base considerably with each episode. Examples are given to demonstrate the behaviour of the proposed scheme. The suggested cumulative learning scheme would be useful in mining data streams.
Keywords: Censored production rules, cumulative learning, data mining, machine learning.
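The switch-like behavior of the Unless part can be captured in a few lines. The sketch below is one plausible reading of CPR evaluation under resource constraints, not the paper's DST-based scheme:

```python
# A minimal sketch of evaluating a Censored Production Rule
# "If P Then D Unless C": when the censor cannot be checked (e.g.
# resources are tight), the rule still answers D at lower precision;
# when C is known to hold, the polarity of D flips.
from typing import Optional

def cpr(p: bool, censor: Optional[bool]) -> Optional[bool]:
    """Return the truth of D given P and the (possibly unknown) censor C."""
    if not p:
        return None              # rule does not fire
    if censor is None:           # censor unchecked: accept D anyway
        return True
    return not censor            # C holds -> ~D, otherwise D

print(cpr(True, None), cpr(True, False), cpr(True, True))  # True True False
```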
7061 Pattern Classification of Back-Propagation Algorithm Using Exclusive Connecting Network
Authors: Insung Jung, Gi-Nam Wang
Abstract:
The objective of this paper is the design of a pattern classification model based on the back-propagation (BP) algorithm for a decision support system. The standard BP model fully connects each node in one layer to every node in the next, from the input to the output layer. It therefore takes a lot of computing time and many iterations to reach good performance and an acceptable error rate when generating patterns or training the network. The proposed model instead uses exclusive connections between hidden layer nodes and output nodes. The advantage of this model is a smaller number of iterations and better performance compared with the standard back-propagation model. We simulated several classification cases with different settings of the network factors (e.g., number of hidden layers and nodes, number of classes, and iterations). During our simulation, we found that most simulation cases were satisfied by the BP model using the exclusive connection network compared to standard BP. We expect that this algorithm can be applied to the identification of user faces, analysis of data, and mapping between environment data and information.
Keywords: Neural network, back-propagation, classification.
7060 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices
Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues
Abstract:
This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm. The algorithm is applied to high-frequency coefficients for compression/encoding. It starts by converting every three coefficients to a single value; this is accomplished based on three different keys. The decoding/decompression uses a search method called the QSS (Quick Sequential Search) Decoding Algorithm, presented in this research, which is based on sequential search to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients; this is because another algorithm, such as conventional sequential search, could retrieve encoded/compressed data independently from the proposed algorithm. The experimental results showed that our proposed decoding algorithm retrieves original data faster than conventional sequential search algorithms.
Keywords: Matrix Minimization Algorithm, Decoding Sequential Search Algorithm, image compression, Discrete Cosine Transform, Discrete Wavelet Transform.
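A toy version of the encode/decode round trip may help. The keys and coefficient range below are assumptions chosen so that every triplet maps to a unique value; the decoder then mimics the sequential-search idea:

```python
# A toy sketch of the encode/decode idea: three coefficients collapse
# into one value via three keys, and decoding searches candidate
# triplets sequentially until the encoded value is matched. Keys and
# coefficient range are illustrative, not the paper's construction.
from itertools import product

KEYS = (1, 10, 100)                      # three distinct keys
COEFF_RANGE = range(-4, 5)               # assumed high-frequency values

def encode(a: int, b: int, c: int) -> int:
    return KEYS[0] * a + KEYS[1] * b + KEYS[2] * c

def decode(value: int) -> tuple:
    # sequential search over all candidate triplets
    for a, b, c in product(COEFF_RANGE, repeat=3):
        if encode(a, b, c) == value:
            return (a, b, c)
    raise ValueError("value not decodable in the assumed range")

v = encode(3, -2, 1)
print(v, decode(v))                      # 83 (3, -2, 1)
```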
7059 Physicochemical Characterizations of Marine and River Sediments in the North of France
Authors: Abriak Nor Edine, Zentar Rachid, Achour Raouf, Tran Ngoc Thanh
Abstract:
This work is undertaken to develop a methodology to enhance the management of dredged marine and river sediments in the North of France. The main objective of this study is to determine the main characteristics of these sediments. To this end, the physical, mineralogical, and chemical properties of both types of sediments are measured. Moreover, their potential impacts on the environment are assessed through leaching tests. From the obtained results, the potential of their use in road engineering is discussed.
Keywords: Marine sediments, River sediments, Physicochemical characterizations, Environmental characterizations.
7058 Development of Bicomponent Fibre to Combat Insects
Authors: M. Bischoff, F. Schmidt, J. Herrmann, J. Mattheß, G. Seide, T. Gries
Abstract:
Crop yields have not increased as dramatically as the demand for food. One method to counteract this is to use pesticides to keep away predators; for example, several forms of insecticide are available to fight insects. These insecticides and pesticides are controversial, as both their application and their residue in the food product can also harm humans. In this study an alternative method to combat insects is investigated, using the physical insect-killing effect of SiO2 particles. The particles are applied on fibres to avoid the erosion in the fields that would occur if they were applied separately. The development of such SiO2-functionalized PP fibres is shown.
Keywords: Agriculture, environment, insects, protection, silica, textile.
7057 A Survey in Techniques for Imbalanced Intrusion Detection System Datasets
Authors: Najmeh Abedzadeh, Matthew Jacobs
Abstract:
An intrusion detection system (IDS) is a software application that monitors for malicious activities and generates alerts if any are detected. However, most network activities in IDS datasets are normal, and the relatively small number of attacks makes the available data imbalanced. Consequently, cyber-attacks can hide inside a large number of normal activities, and machine learning algorithms have difficulty learning and classifying the data correctly. In this paper, a comprehensive literature review is conducted on different types of algorithms for both implementing the IDS and correcting the imbalanced IDS dataset. The most prominent approaches are machine learning (ML), deep learning (DL), the synthetic minority over-sampling technique (SMOTE), and reinforcement learning (RL). Most of the research uses the CSE-CIC-IDS2017, CSE-CIC-IDS2018, and NSL-KDD datasets for evaluating the algorithms.
Keywords: IDS, intrusion detection system, imbalanced datasets, sampling algorithms, big data.
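Of the listed correction methods, SMOTE is the most mechanical: it synthesizes minority (attack) samples by interpolating between nearby minority points. A numpy-only sketch of the idea, not a substitute for a production implementation:

```python
# A minimal sketch of the SMOTE idea for an imbalanced IDS dataset:
# synthesize new attack samples by interpolating between a minority
# point and one of its minority-class nearest neighbours.
import numpy as np

def smote_like(minority: np.ndarray, n_new: int, k: int = 3,
               rng=np.random.default_rng(5)) -> np.ndarray:
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]          # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.uniform()                          # interpolation weight
        synthetic.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(synthetic)

attacks = np.random.default_rng(6).normal(size=(20, 4))  # rare class
print(smote_like(attacks, n_new=80).shape)               # (80, 4)
```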
7056 Data Mining Techniques in Computer-Aided Diagnosis: Non-Invasive Cancer Detection
Authors: Florin Gorunescu
Abstract:
Diagnosis can be achieved by building a model of a certain organ under surveillance and comparing it with the real-time physiological measurements taken from the patient. This paper presents the benefits of using data mining techniques in computer-aided diagnosis (CAD), focusing on cancer detection, in order to help doctors make optimal decisions quickly and accurately. In the field of non-invasive diagnosis techniques, endoscopic ultrasound elastography (EUSE) is a recent elasticity imaging technique that allows characterizing the difference between malignant and benign tumors. The main features of the EUSE sample movies are digitized and summarized in vector form using exploratory data analysis (EDA). Neural networks are then trained on the corresponding EUSE sample movie vector inputs in such a way that these intelligent systems are able to offer a very precise and objective diagnosis, discriminating between benign and malignant tumors. A concrete application of these data mining techniques illustrates the suitability and the reliability of this methodology in CAD.
Keywords: Endoscopic ultrasound elastography, exploratory data analysis, neural networks, non-invasive cancer detection.
7055 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor that affects the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made by using continuous Electroencephalogram (EEG) signal monitoring. Seizure identification on EEG signals is made manually by epileptologists, and this process is usually very long and error prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during the seizure detection process. Our detection method is based on an Artificial Neural Network classifier, trained by applying the multilayer perceptron algorithm, and on a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool is able to cover all the data preparation steps ranging from signal processing to data analysis techniques, including the sliding window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% during tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: Artificial Neural Network, Data Mining, Electroencephalogram, Epilepsy, Feature Extraction, Seizure Detection, Signal Processing.
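The sliding-window paradigm mentioned above can be sketched in a few lines: each EEG channel is cut into overlapping windows and a feature vector is computed per window. The window length, step, and the three toy features below are assumptions; the paper's Training Builder extracts a much richer set:

```python
# A minimal sketch of the sliding-window feature step for EEG signals:
# split a channel into fixed-length windows and compute simple
# statistics per window as classifier inputs.
import numpy as np

def windows(signal: np.ndarray, width: int, step: int):
    for start in range(0, len(signal) - width + 1, step):
        yield signal[start:start + width]

fs = 256                                              # assumed sampling rate (Hz)
eeg = np.random.default_rng(7).normal(size=fs * 10)   # 10 s of one channel

features = np.array([
    [w.mean(), w.std(), np.abs(np.diff(w)).mean()]    # line-length proxy
    for w in windows(eeg, width=2 * fs, step=fs)      # 2 s windows, 1 s step
])
print(features.shape)                                 # (9, 3)
```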
7054 TELUM Land Use Model: An Investigation of Data Requirements and Calibration Results for Chittenden County MPO, U.S.A.
Authors: Georgia Pozoukidou
Abstract:
TELUM software is a land use model designed specifically to help metropolitan planning organizations (MPOs) prepare their transportation improvement programs and fulfill their numerous planning responsibilities. In this context, obtaining, preparing, and validating socioeconomic forecasts are fundamental tasks for an MPO in order to ensure that consistent population and employment data are provided to travel demand models. The Chittenden County Metropolitan Planning Organization of Vermont State was used as a case study to test the applicability of the TELUM land use model. The technical insights and lessons learned from the land use model application have transferable value for all MPOs faced with land use forecasting development and transportation modeling.
Keywords: Calibration data requirements, land use models, land use planning, Metropolitan Planning Organizations.
7053 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques
Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian
Abstract:
Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses, and material damage. Traditional studies of road traffic accidents in urban zones require a very large investment of time and money, and additionally the results are not current. Nowadays, however, in many countries crowdsourced GPS-based traffic and navigation apps have emerged as an important low-cost source of information for studies of road traffic accidents and of the urban congestion they cause. In this article we identify the zones, roads, and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social network known as Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, using data mining techniques with the help of Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the QGIS Geographic Information System.
Keywords: Data mining, k-means, road traffic accidents, Waze, Weka.
7052 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models
Authors: Yoonsuh Jung
Abstract:
As DNA microarray data contain a relatively small sample size compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection: it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the 'optimal' value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to choose multiple candidates for the tuning parameter first, then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show on real and simulated data sets that the value selected by the suggested methods often leads to stable parameter selection, as well as improved detection of significant genetic variables, compared to traditional cross-validation.
Keywords: Cross validation, parameter averaging, parameter selection, regularization parameter search.
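A minimal sketch of the averaging idea follows, with a mock cross-validation curve and an inverse-error weighting that is illustrative rather than the paper's estimator:

```python
# Candidate-averaging sketch: instead of the single lambda with the
# best CV score, average several near-best candidates with weights
# derived from their scores.
import numpy as np

lambdas = np.logspace(-3, 1, 20)                 # candidate grid
cv_error = 0.2 + (np.log10(lambdas) + 1.2)**2 + \
           np.random.default_rng(8).normal(0, 0.05, size=20)  # mock CV curve

top = np.argsort(cv_error)[:5]                   # a few best candidates
weights = 1.0 / cv_error[top]                    # illustrative weighting
weights /= weights.sum()
lam_avg = np.sum(weights * lambdas[top])         # averaged tuning parameter

print(f"single best: {lambdas[np.argmin(cv_error)]:.4f}, "
      f"averaged: {lam_avg:.4f}")
```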
7051 Using ALOHA Code to Evaluate CO2 Concentration for Maanshan Nuclear Power Plant
Authors: W. S. Hsu, S. W. Chen, Y. T. Ku, Y. Chiang, J. R. Wang , J. H. Yang, C. Shih
Abstract:
The ALOHA code was used to calculate the CO2 concentration under the CO2 storage burst condition for the Maanshan nuclear power plant (NPP) in this study. Five main kinds of data are input into the ALOHA code: location, building, chemical, atmospheric, and source data. The data from the Final Safety Analysis Report (FSAR) and other reports were used in this study. The ALOHA results are compared with the failure criteria of R.G. 1.78 to confirm the habitability of the control room. The comparison shows that the ALOHA result is below the R.G. 1.78 criteria, which implies that the habitability of the control room can be maintained in this case. A sensitivity study for the atmospheric parameters was also performed, showing that the wind speed has the largest effect on the concentration calculation.
Keywords: PWR, ALOHA, habitability, Maanshan.
7050 Review and Comparison of Associative Classification Data Mining Approaches
Authors: Suzan Wedyan
Abstract:
Associative classification (AC) is a data mining approach that combines association rule mining and classification to build classification models (classifiers). AC has attracted significant attention from several researchers, mainly because it derives accurate classifiers that contain simple yet effective rules. In the last decade, a number of associative classification algorithms have been proposed, such as Classification based on Association (CBA), Classification based on Multiple Association Rules (CMAR), Class based Associative Classification (CACA), and Classification based on Predicted Association Rule (CPAR). This paper surveys major AC algorithms and compares the steps and methods performed in each algorithm, including rule learning, rule sorting, rule pruning, classifier building, and class prediction.
Keywords: Associative Classification, Classification, Data Mining, Learning, Rule Ranking, Rule Pruning, Prediction.
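The rule-sorting step shared by CBA-style algorithms is easy to make concrete: rules are ranked by confidence, then support, then rule length. A small sketch with invented toy rules:

```python
# A minimal sketch of CBA-style rule sorting: rank class-association
# rules by confidence, break ties by support, then prefer shorter
# rules. The rules themselves are invented examples.
rules = [
    # (antecedent items, class label, support, confidence)
    ({"outlook=sunny", "wind=weak"}, "play=yes", 0.30, 0.95),
    ({"outlook=sunny"},              "play=yes", 0.55, 0.95),
    ({"outlook=rain", "wind=strong"}, "play=no", 0.10, 0.99),
]

ranked = sorted(rules, key=lambda r: (-r[3], -r[2], len(r[0])))
for antecedent, label, sup, conf in ranked:
    print(f"{sorted(antecedent)} -> {label}  (sup={sup}, conf={conf})")
```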
7049 The Impact of Protein Content on Athletes' Body Composition
Authors: G. Vici, L. Cesanelli, L. Belli, R. Ceci, V. Polzonetti
Abstract:
Several factors contribute to success in sport, and diet is one of them. Evidence-based sport nutrition guidelines underline the importance of the balance and timing of macro- and micro-nutrients in order to improve an athlete's physical status and performance. Nevertheless, a high protein content is commonly found in the diet of resistance training athletes, with a carbohydrate intake that is insufficient or not well planned. The aim of the study was to evaluate the impact of different protein and carbohydrate diet contents on body composition and sport performance in a group of resistance training athletes. Subjects were divided into a study group (n=16) and a control group (n=14). For a period of 4 months, both groups were subjected to the same resistance training fitness program, with the study group following a specific diet and the control group following an ad libitum diet. Body composition was evaluated through anthropometric measurements (weight, height, body circumferences, and skinfolds) and bioimpedance analysis. Physical strength and the training status of individuals were evaluated through the one repetition maximum test (1RM). Protein intake in the study group was found to be lower than in the control group. There was a statistically significant increase in body weight, fat-free mass, and body cell mass in the study group with respect to the control group, while fat mass remained almost constant. Statistically significant changes were observed in quadriceps and biceps circumferences, with an increase in the study group. The 1RM test showed improvement in the study group's strength but no changes in the control group. People usually consume a hyper-proteic diet to achieve muscle mass development. Through this study, it was possible to show that a protein intake fixed at 1.7 g/kg/day (e.g., about 119 g/day for a 70 kg athlete) can meet the individual's needs. In parallel, the increased intake of carbohydrates, focusing on the quality and timing of assumption, enabled the desired results to be obtained with a training protocol supporting a hypertrophic strategy. Therefore, the key point seems to be the planning of a structured program from both a nutritional and a training point of view.
Keywords: Body composition, diet, exercise, protein.
7048 Distributed Splay Suffix Arrays: A New Structure for Distributed String Search
Authors: Tu Kun, Gu Nai-jie, Bi Kun, Liu Gang, Dong Wan-li
Abstract:
As a structure for processing string problems, the suffix array is certainly widely known and extensively studied. But if the string access pattern follows the "90/10" rule, the suffix array cannot take advantage of the fact that we often find something that we have just found. Although the splay tree is an efficient data structure for small documents when the access pattern follows the "90/10" rule, it requires many structures and an excessive amount of pointer manipulation for efficiently processing and searching large documents. In this paper, we propose a new and conceptually powerful data structure, called the splay suffix array (SSA), for string search. This data structure combines the features of the splay tree and the suffix array into a new approach which is suitable for implementation on both conventional and clustered computers.
Keywords: Suffix arrays, splay tree, string search, distributed algorithm.
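The plain suffix-array search that SSA builds on can be sketched briefly; the splay layer (re-rooting recently found entries to exploit the "90/10" pattern) and the distribution across machines are not reproduced here:

```python
# A minimal sketch of suffix-array construction and pattern search:
# sort all suffixes, then binary-search for a pattern.
from bisect import bisect_left

def suffix_array(text: str) -> list:
    return sorted(range(len(text)), key=lambda i: text[i:])

def find(text: str, sa: list, pattern: str) -> list:
    suffixes = [text[i:] for i in sa]              # O(n^2) space; toy only
    lo = bisect_left(suffixes, pattern)
    hits = []
    while lo < len(sa) and suffixes[lo].startswith(pattern):
        hits.append(sa[lo])
        lo += 1
    return sorted(hits)

text = "banana"
sa = suffix_array(text)
print(find(text, sa, "ana"))                       # [1, 3]
```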
7047 Clinical Parameters Response to Low-Level Laser versus Monochromatic Near-Infrared Photo Energy in Diabetic Patients with Peripheral Neuropathy
Authors: Abeer A. Abdelhamed
Abstract:
Background: Diabetic sensorimotor polyneuropathy (DSP) is one of the most common microvascular complications of type 2 diabetes. Loss of sensation is thought to contribute to a lack of static and dynamic stability and an increased risk of falling. Purpose: The purpose of this study was to compare the effects of low-level laser (LLL) and monochromatic near-infrared photo energy (MIRE) on pain, cutaneous sensation, static stability, and an index of lower limb blood flow in diabetic patients with peripheral neuropathy. Methods: Forty diabetic patients with peripheral neuropathy were recruited for participation in this study. They were divided into two groups: the MIRE group, which contained 20 patients, and the LLL group, which contained 20 patients. All patients who participated in the study were subjected to various physical assessment procedures, including pain, cutaneous sensation, Doppler flow meter, and static stability assessments. The baseline measurements were followed by treatment sessions conducted twice a week for six successive weeks. Results: The statistical analysis of the data revealed significant improvement of pain in both groups, with significant improvement in cutaneous sensation and static balance in the MIRE group compared to the LLL group; on the other hand, the results showed no significant differences in lower limb blood flow between the groups. Conclusion: LLL and MIRE can improve painful symptoms in patients with diabetic neuropathy. On the other hand, MIRE is also useful in improving cutaneous sensation and static stability in patients with diabetic neuropathy.
Keywords: Diabetic neuropathy, Doppler flow meter, low-level laser, monochromatic near-infrared photo energy.