Search results for: prediction of iron ore reduction.
2271 Lexicon-Based Sentiment Analysis for Stock Movement Prediction
Authors: Zane Turner, Kevin Labille, Susan Gauch
Abstract:
Sentiment analysis is a broad and expanding field that aims to extract and classify opinions from textual data. Lexicon-based approaches are based on the use of a sentiment lexicon, i.e., a list of words each mapped to a sentiment score, to rate the sentiment of a text chunk. Our work focuses on predicting stock price change using a sentiment lexicon built from financial conference call logs. We introduce a method to generate a sentiment lexicon based upon an existing probabilistic approach. By using a domain-specific lexicon, we outperform traditional techniques and demonstrate that domain-specific sentiment lexicons provide higher accuracy than generic sentiment lexicons when predicting stock price change.
Keywords: Computational finance, sentiment analysis, sentiment lexicon, stock movement prediction.
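The lexicon-based scoring the abstract describes can be sketched in a few lines. The toy lexicon, the averaging rule, and the sign-based thresholds below are illustrative assumptions, not the paper's actual financial lexicon or prediction model:

```python
# Minimal sketch of lexicon-based sentiment scoring for movement prediction.
# The lexicon entries and thresholds are invented for illustration; the paper
# builds its lexicon from financial conference call logs.
LEXICON = {"growth": 0.8, "profit": 0.9, "loss": -0.7, "decline": -0.6, "risk": -0.4}

def score_text(text, lexicon):
    """Average the lexicon scores of the words found in a text chunk."""
    hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(hits) / len(hits) if hits else 0.0

def predict_movement(text, lexicon):
    """Map the aggregate sentiment score to a predicted price direction."""
    s = score_text(text, lexicon)
    return "up" if s > 0 else ("down" if s < 0 else "neutral")
```

A domain-specific lexicon would replace `LEXICON` with scores learned from the target corpus, which is the paper's central point.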
2270 Prediction of the Limiting Drawing Ratio in Deep Drawing Process by Back Propagation Artificial Neural Network
Authors: H. Mohammadi Majd, M. Jalali Azizpour, M. Goodarzi
Abstract:
In this paper a back-propagation artificial neural network (BPANN) with the Levenberg–Marquardt algorithm is employed to predict the limiting drawing ratio (LDR) of the deep drawing process. To prepare a training set for the BPANN, a number of finite element simulations were carried out. Die and punch radius, die arc radius, friction coefficient, sheet thickness, yield strength of the sheet and strain hardening exponent were used as the input data, with the LDR as the specified output used in training the neural network. Given these parameters, the program is able to estimate the LDR for any new condition. Comparing FEM and BPANN results, an acceptable correlation was found.
Keywords: BPANN, deep drawing, prediction, limiting drawing ratio (LDR), Levenberg–Marquardt algorithm.
2269 Application of EEG Wavelet Power to Prediction of Antidepressant Treatment Response
Authors: Dorota Witkowska, Paweł Gosek, Lukasz Swiecicki, Wojciech Jernajczyk, Bruce J. West, Miroslaw Latka
Abstract:
In clinical practice, the selection of an antidepressant often degenerates into lengthy trial and error. In this work we employ the normalized wavelet power of alpha waves as a biomarker of antidepressant treatment response. This novel EEG metric takes into account both the non-stationarity and the intersubject variability of alpha waves. We recorded resting, 19-channel EEG (eyes closed) in 22 inpatients suffering from unipolar (UD, n=10) or bipolar (BD, n=12) depression. The EEG measurement was done at the end of the short washout period which followed previously unsuccessful pharmacotherapy. The normalized alpha wavelet power of 11 responders was markedly different from that of 11 nonresponders at several, mostly temporoparietal sites. Using a prediction of treatment response based on the normalized alpha wavelet power, we achieved 81.8% sensitivity and 81.8% specificity for channel T4.
Keywords: Alpha waves, antidepressant, treatment outcome, wavelet.
2268 GPS Signal Correction to Improve Vehicle Location during Experimental Campaign
Authors: L. Della Ragione, G. Meccariello
Abstract:
In recent years the Italian automobile industry has made remarkable progress in reducing emission values. Nevertheless, evaluating and reducing emissions remains a key problem, especially in cities, which account for more than 50% of the world's population. In this paper we describe a quantitative approach for the reconstruction of GPS coordinates and altitude, in the context of a correlation study between driving cycles, emissions and geographical location, carried out during an experimental campaign with instrumented cars.
Keywords: Air pollution, Driving cycles, GPS signal.
2267 Effect of Peak-to-Average Power Ratio Reduction on the Multicarrier Communication System Performance Parameters
Authors: Sanjay Singh, M. Sathish Kumar, H. S. Mruthyunjaya
Abstract:
Multicarrier transmission systems such as Orthogonal Frequency Division Multiplexing (OFDM) are a promising technique for high bit rate transmission in wireless communication systems. OFDM is a spectrally efficient modulation technique that can achieve high-speed data transmission over multipath fading channels without the need for powerful equalization techniques. However, the price paid for this high spectral efficiency and less intensive equalization is low power efficiency. OFDM signals are very sensitive to nonlinear effects due to their high Peak-to-Average Power Ratio (PAPR), which leads to power inefficiency in the RF section of the transmitter. This paper investigates the effect of PAPR reduction on the performance parameters of a multicarrier communication system. The performance parameters considered are the power consumption of the Power Amplifier (PA) and the Digital-to-Analog Converter (DAC), power amplifier efficiency, SNR of the DAC and the BER performance of the system. From our analysis it is found that, irrespective of the PAPR reduction technique employed, the power consumption of the PA and DAC decreases and the power amplifier efficiency increases due to the reduction in PAPR. Moreover, it has been shown that for a given BER performance the required Input-Backoff (IBO) decreases with reduced PAPR.
Keywords: BER, Crest Factor (CF), Digital-to-Analog Converter (DAC), Input-Backoff (IBO), Orthogonal Frequency Division Multiplexing (OFDM), Peak-to-Average Power Ratio (PAPR), Power Amplifier efficiency, SNR.
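The PAPR driving these trade-offs is straightforward to compute for an OFDM baseband signal. The sketch below assumes QPSK subcarrier symbols and an IFFT-based modulator; the subcarrier count and random seed are illustrative:

```python
import numpy as np

def papr_db(time_signal):
    """Peak-to-Average Power Ratio in dB of a complex baseband signal."""
    power = np.abs(time_signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
n_subcarriers = 64
# Random QPSK symbols, one per subcarrier (illustrative).
symbols = (rng.choice([-1, 1], n_subcarriers)
           + 1j * rng.choice([-1, 1], n_subcarriers)) / np.sqrt(2)
ofdm_symbol = np.fft.ifft(symbols)   # time-domain OFDM symbol
print(f"PAPR = {papr_db(ofdm_symbol):.2f} dB")
```

A constant-envelope signal gives 0 dB; summing many independent subcarriers is exactly what pushes OFDM's PAPR well above that.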
2266 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information
Authors: Haifeng Wang, Haili Zhang
Abstract:
Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock suitable movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik–Chervonenkis (VC) dimensions of the machine learning algorithm to detect and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preference in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers' preferences with a small data set and design prediction tools for these enterprises.
Keywords: Computational social science, movie preference, machine learning, SVM.
2265 A Fuzzy Predictive Filter for Sinusoidal Signals with Time-Varying Frequencies
Authors: X. Z. Gao, S. J. Ovaska, X. Wang
Abstract:
Prediction of sinusoidal signals with time-varying frequencies has been an important research topic in power electronics systems. To solve this problem, we propose a new fuzzy predictive filtering scheme based on a Finite Impulse Response (FIR) filter bank. Fuzzy logic is introduced to provide appropriate interpolation of the individual filter outputs. Therefore, instead of regular 'hard' switching, our method offers advantageous 'soft' switching among the different filters. Simulation comparisons between the fuzzy predictive filtering and the conventional filter bank-based approach demonstrate that the new scheme can achieve enhanced prediction performance for slowly changing sinusoidal input signals.
Keywords: Predictive filtering, fuzzy logic, sinusoidal signals, time-varying frequencies.
2264 Review and Comparison of Associative Classification Data Mining Approaches
Authors: Suzan Wedyan
Abstract:
Associative classification (AC) is a data mining approach that combines association rule mining and classification to build classification models (classifiers). AC has attracted significant attention from researchers mainly because it derives accurate classifiers that contain simple yet effective rules. In the last decade, a number of associative classification algorithms have been proposed, such as Classification based on Association (CBA), Classification based on Multiple Association Rules (CMAR), Class based Associative Classification (CACA), and Classification based on Predicted Association Rule (CPAR). This paper surveys the major AC algorithms and compares the steps and methods performed in each, including rule learning, rule sorting, rule pruning, classifier building, and class prediction.
Keywords: Associative Classification, Classification, Data Mining, Learning, Rule Ranking, Rule Pruning, Prediction.
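The rule-sorting and class-prediction steps the survey compares can be illustrated with a minimal CBA-style sketch. The rule set and its support/confidence values are invented for illustration:

```python
# Sketch of CBA-style rule sorting and class prediction: rules are ranked by
# confidence, breaking ties by support and then by generation order (earlier
# first); prediction fires the first applicable rule. Values are illustrative.
rules = [
    {"antecedent": {"a", "b"}, "cls": "yes", "support": 0.20, "confidence": 0.90, "order": 2},
    {"antecedent": {"a"},      "cls": "no",  "support": 0.35, "confidence": 0.90, "order": 1},
    {"antecedent": {"c"},      "cls": "yes", "support": 0.10, "confidence": 0.95, "order": 3},
]

def sort_rules(rules):
    """Higher confidence first, then higher support, then earlier generation."""
    return sorted(rules, key=lambda r: (-r["confidence"], -r["support"], r["order"]))

def predict(rules, instance):
    """Class of the first sorted rule whose antecedent the instance satisfies."""
    for r in sort_rules(rules):
        if r["antecedent"] <= instance:   # antecedent is a subset of the instance
            return r["cls"]
    return None  # a default class would be returned in a full classifier
```

Rule pruning (e.g. database coverage) would filter this sorted list before it becomes the final classifier; that step is omitted here.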
2263 Chemical and Hydro-Geologic Analysis of Ikogosi Warm Spring Water in Nigeria
Authors: Akinola Ikudayisi, Folasade Adeyemo, Josiah Adeyemo
Abstract:
This study focuses on the hydro-geology and chemical constituent analysis of the Ikogosi Warm Spring waters in South West Nigeria. Ikogosi warm spring is a global tourist attraction because it has both warm and cold spring sources. Water samples from the cold spring, the warm spring and their meeting point were collected and analyzed, and the results show close similarity in temperature, hydrogen ion concentration (pH), alkalinity, hardness, calcium, magnesium, sodium, iron, total dissolved solids and heavy metals. The measured parameters in the water samples are within World Health Organisation standards for fresh water. The study of the geology of the warm spring reveals that the study area is underlain by a group of slightly migmatised to non-migmatised paraschists and meta-igneous rocks. In addition, the concentration levels of selected heavy metals (copper, cadmium, zinc, arsenic and chromium) were determined in the water samples (ppm). Chromium had the highest concentration, with a value of 1.52 ppm (an average of 49.67%), and cadmium the lowest, with a value of 0.15 ppm (an average of 4.89%). Comparison of these results showed that their mean levels are within the standard values obtained in Nigeria. It can be concluded that both the warm and the cold spring water are safe for drinking.
Keywords: Cold spring, Ikogosi, meeting point, warm spring, water samples.
2262 Nanoparticles-Protein Hybrid Based Magnetic Liposome
Authors: Amlan Kumar Das, Avinash Marwal, Vikram Pareek
Abstract:
Liposomes play an important role in medical and pharmaceutical science, e.g. as nanoscale drug carriers. Liposomes are vesicles of varying size, generated in vitro, consisting of a spherical lipid bilayer and an aqueous inner compartment. Magnet-driven liposomes are used for the targeted delivery of drugs to organs and tissues; these liposome preparations contain encapsulated drug components and finely dispersed magnetic particles. Liposomes are useful in terms of biocompatibility, biodegradability and low toxicity, and their biodistribution can be controlled by changing the size, lipid composition and physical characteristics. Furthermore, liposomes can entrap both hydrophobic and hydrophilic drugs and are able to continuously release the entrapped substrate, making them useful drug carriers. Magnetic liposomes (MLs) are phospholipid vesicles that encapsulate magnetic or paramagnetic nanoparticles. They are applied as contrast agents for magnetic resonance imaging (MRI). The biological synthesis of nanoparticles using plant extracts plays an important role in the field of nanotechnology. A green-synthesized magnetite nanoparticles-protein hybrid has been produced by treating Iron (III) / Iron (II) chloride with the leaf extract of Datura inoxia. The phytochemicals present in the leaf extract, which include flavonoids, phenolic compounds, cardiac glycosides, proteins and sugars, act as reducing as well as stabilizing agents, preventing agglomeration. The magnetite nanoparticles-protein hybrid has been trapped inside the aqueous core of liposomes prepared by the reversed phase evaporation (REV) method using oleic and linoleic acid, and has been shown to be driven under a magnetic field, confirming the formation of a magnetic liposome (ML). Chemical characterization of the stealth magnetic liposome has been performed by breaking the liposome and releasing the magnetic nanoparticles.
The presence of iron has been confirmed by colour complex formation with KSCN and a UV-Vis study using a Cary 60 spectrophotometer (Agilent). This magnet-driven liposome based on a nanoparticles-protein hybrid can be a smart vesicle for targeted drug delivery.
Keywords: Nanoparticles-Protein Hybrid, Magnetic Liposome.
2261 The Role of Heat Pumps for the Decarbonization of European Regions
Authors: D. M. Mongelli, M. De Carli, L. Carnieletto, F. Busato
Abstract:
This research aims to provide a contribution to the reduction of fossil fuels and the consequent reduction of CO2eq emissions for each European region. Simulations have been carried out to replace fossil fuel fired heating boilers with air-to-water heat pumps, when allowed by favorable environmental conditions (outdoor temperature, water temperature in emission systems, etc.). To estimate the potential coverage of high-temperature heat pumps in European regions, the energy profiles of buildings were considered together with the potential coefficient of performance (COP) of heat pumps operating with flow temperature with variable climatic regulation. The electrification potential for heating buildings was estimated by dividing the 38 European countries examined into 179 territorial units. The yields have been calculated in terms of energy savings and CO2eq reduction.
Keywords: Decarbonization, Space heating, Heat pumps, Energy policies.
2260 Selecting Negative Examples for Protein-Protein Interaction
Authors: Mohammad Shoyaib, M. Abdullah-Al-Wadud, Oksam Chae
Abstract:
Proteomics is one of the largest areas of research in bioinformatics and medical science. An ambitious goal of proteomics is to elucidate the structure, interactions and functions of all proteins within cells and organisms. Predicting Protein-Protein Interaction (PPI) is one of the crucial and decisive problems in current research. Genomic data offer a great opportunity and, at the same time, many challenges for the identification of these interactions. Many methods have already been proposed in this regard. For in-silico identification, most methods require both positive and negative examples of protein interaction, and the quality of these examples is crucial for the final prediction accuracy. Positive examples are relatively easy to obtain from well-known databases, but the generation of negative examples is not a trivial task. Current PPI identification methods generate negative examples based on assumptions that are likely to affect their prediction accuracy. Hence, if more reliable negative examples are used, PPI prediction methods may achieve even higher accuracy. Focusing on this issue, a graph-based negative example generation method is proposed, which is simpler and more accurate than the existing approaches. An interaction graph of the protein sequences is created. The basic assumption is that the longer the shortest path between two protein sequences in the interaction graph, the less likely they are to interact. A well-established PPI detection algorithm is employed with our negative examples, and in most cases it increases the accuracy by more than 10% in comparison with the original negative pair selection method.
Keywords: Interaction graph, Negative training data, Protein-Protein interaction, Support vector machine.
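The shortest-path heuristic for choosing negatives can be sketched with a plain BFS over the interaction graph. The toy graph and the distance threshold below are illustrative assumptions, not values from the paper:

```python
from collections import deque

def shortest_path_len(graph, src, dst):
    """BFS shortest path length between two proteins; None if disconnected."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in graph.get(node, ()):
            if nbr == dst:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return None

def is_candidate_negative(graph, a, b, min_dist=4):
    """Pairs that are far apart (or disconnected) become negative examples."""
    d = shortest_path_len(graph, a, b)
    return d is None or d >= min_dist

# Toy interaction graph built from known positive pairs (illustrative).
graph = {"P1": ["P2"], "P2": ["P1", "P3"], "P3": ["P2", "P4"], "P4": ["P3"], "P5": []}
```

Pairs selected this way would then feed the negative side of the training set for an SVM or other PPI classifier.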
2259 Location Management in Cellular Networks
Authors: Bhavneet Sidhu, Hardeep Singh
Abstract:
Cellular networks provide voice and data services to the users with mobility. To deliver services to the mobile users, the cellular network is capable of tracking the locations of the users, and allowing user movement during the conversations. These capabilities are achieved by the location management. Location management in mobile communication systems is concerned with those network functions necessary to allow the users to be reached wherever they are in the network coverage area. In a cellular network, a service coverage area is divided into smaller areas of hexagonal shape, referred to as cells. The cellular concept was introduced to reuse the radio frequency. Continued expansion of cellular networks, coupled with an increasingly restricted mobile spectrum, has established the reduction of communication overhead as a highly important issue. Much of this traffic is used in determining the precise location of individual users when relaying calls, with the field of location management aiming to reduce this overhead through prediction of user location. This paper describes and compares various location management schemes in cellular networks.
Keywords: Cellular Networks, Location Area, Mobility Management, Paging.
2258 Reduction of MMP Using Oleophilic Chemicals
Authors: C. L. Voon, M. Awang
Abstract:
CO2 miscible displacement is not feasible in many oil fields because high reservoir temperatures require higher pressures to achieve miscibility. The miscibility pressure can be far higher than the formation fracture pressure, making CO2 miscible displacement impossible. However, by using oleophilic chemicals, the minimum miscibility pressure (MMP) can be lowered. The main objective of this research is to find the best oleophilic chemical for MMP reduction using slim-tube tests and Vanishing Interfacial Tension (VIT) tests. The chemicals are selected on the basis that they must be oil soluble, have low water solubility, have 4-8 carbons, and be semi-polar, economical, and safe for human operation. The families of chemicals chosen are carboxylic acids, alcohols, and ketones. The whole experiment is conducted at 100°C, and the best chemical is deemed the most effective when it lowers the CO2-crude oil MMP the most. The findings of this research would have great impact on the oil and gas industry by reducing the operating cost of CO2 EOR, which is applicable to both onshore and offshore operations.
Keywords: Enhanced Oil Recovery, Oleophilic Chemical, Minimum Miscibility Pressure, CO2 Miscible Displacement.
2257 Product Ecodesign Approaches in ISO 14001 Certified Companies
Authors: Gregor Radonjič, Aleksandra P. Korda, Damijan Krajnc
Abstract:
The aim of the study was to investigate whether the adoption of ISO 14001 certification promotes product ecodesign measures in manufacturing companies in the Republic of Slovenia. Companies gave most of their product development attention to waste and energy reduction during the manufacturing process and to reduction of material consumption per unit of product. Regarding the importance of different ecodesign criteria, reduction of material consumption per unit of product was reported as the most important criterion. Less attention is paid to end-of-life issues such as recycling or packaging. Most manufacturing enterprises considered the ISO 14001 standard a very useful tool, or at least a useful tool, helping them to accelerate and establish product ecodesign activities. The two most frequently cited ecodesign drivers are increased competitive advantage and legal requirements, and the two most important barriers are high development costs and insufficient market demand.
Keywords: Ecodesign, environmental management system, ISO 14001, products.
2256 An Evaluation of Carbon Dioxide Emissions Trading among Enterprises: The Tokyo Cap and Trade Program
Authors: Hiroki Satou, Kayoko Yamamoto
Abstract:
This study aims to propose three evaluation methods to evaluate the Tokyo Cap and Trade Program when emissions trading is performed virtually among enterprises, focusing on carbon dioxide (CO2), which is the only emitted greenhouse gas that tends to increase. The first method clarifies the optimum reduction rate for the highest cost benefit, the second discusses emissions trading among enterprises through market trading, and the third verifies long-term emissions trading during the term of the plan (2010-2019), checking the validity of emissions trading partly using Geographic Information Systems (GIS). The findings of this study can be summarized in the following three points. 1. Since the total cost benefit is the greatest at a 44% reduction rate, it is possible to set it more highly than that of the Tokyo Cap and Trade Program to get more total cost benefit. 2. At a 44% reduction rate, among 320 enterprises, 8 purchasing enterprises and 245 sales enterprises gain profits from emissions trading, and 67 enterprises perform voluntary reduction without conducting emissions trading. Therefore, to further promote emissions trading, it is necessary to increase the sales volumes of emissions trading in addition to sales enterprises by increasing the number of purchasing enterprises. 3. Compared to short-term emissions trading, there are few enterprises which benefit in each year through the long-term emissions trading of the Tokyo Cap and Trade Program. Only 81 enterprises at the most can gain profits from emissions trading in FY 2019. Therefore, by setting the reduction rate more highly, it is necessary to increase the number of enterprises that participate in emissions trading and benefit from the restraint of CO2 emissions.
Keywords: Emissions Trading, Tokyo Cap and Trade Program, Carbon Dioxide (CO2), Global Warming, Geographic Information Systems (GIS).
2255 Adjustment and Scale-Up Strategy of Pilot Liquid Fermentation Process of Azotobacter sp.
Authors: G. Quiroga-Cubides, A. Díaz, M. Gómez
Abstract:
The genus Azotobacter has been widely used as a bio-fertilizer due to its significant effects on the stimulation and promotion of plant growth in various agricultural species of commercial interest. In order to obtain a significantly viable cellular concentration, a scale-up strategy for a liquid fermentation (SmF) process with two strains of A. chroococcum (named Ac1 and Ac10) was validated and adjusted at laboratory and pilot scale. A batch fermentation process under previously defined conditions was carried out in a 3.5 L Infors® Minifors bioreactor, which served as the baseline for this research. For the purpose of increasing process efficiency, the effect of reducing the stirring speed was evaluated in combination with a fed-batch-type fermentation at laboratory scale. To reproduce the efficiency parameters obtained, a scale-up strategy with geometric and fluid-dynamic similarity was evaluated. According to the analysis of variance, this scale-up strategy did not have a significant effect on cellular concentration in laboratory and pilot fermentations (Tukey, p > 0.05). Regarding air consumption, the fermentation process at pilot scale showed a reduction of 23% versus the baseline. The reduction in energy consumption under laboratory and pilot scale conditions was 96.9% compared with the baseline.
Keywords: Azotobacter chroococcum, scale-up, liquid fermentation, fed-batch process.
2254 Learning to Recommend with Negative Ratings Based on Factorization Machine
Authors: Caihong Sun, Xizi Zhang
Abstract:
Rating prediction is an important problem for recommender systems: the task is to predict the rating that a user would give to an item. Most existing algorithms for the task ignore the effect of negative ratings given by users to items, but negative ratings have a significant impact on users' purchasing decisions in practice. In this paper, we present a rating prediction algorithm based on factorization machines that considers the effect of negative ratings, inspired by Loss Aversion theory. The aim of this paper is to develop a concave and a convex negative disgust function to evaluate the negative ratings, respectively. Experiments are conducted on the MovieLens dataset. The experimental results demonstrate the effectiveness of the proposed methods by comparison with four other state-of-the-art approaches. The negative ratings proved important to the accuracy of rating prediction.
Keywords: Factorization machines, feature engineering, negative ratings, recommendation systems.
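A second-order factorization machine's prediction, y(x) = w0 + sum_i w_i x_i + sum_{i<j} <v_i, v_j> x_i x_j, can be computed in O(kn) time via a well-known algebraic identity. The sketch below shows only this base model with illustrative weights; the paper's negative disgust functions are an additional layer not reproduced here:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order FM prediction.

    x: features (n,); w0: bias; w: linear weights (n,); V: latent factors (n, k).
    Pairwise term uses the identity
        sum_{i<j} <v_i, v_j> x_i x_j
          = 0.5 * sum_f ((V^T x)_f**2 - sum_i V[i, f]**2 * x[i]**2)
    """
    vx = V.T @ x                                       # (k,)
    pairwise = 0.5 * np.sum(vx ** 2 - (V.T ** 2) @ (x ** 2))
    return float(w0 + w @ x + pairwise)
```

With two active features and one latent factor, e.g. `V = [[1.0], [2.0]]` and `x = [1, 1]`, the pairwise term is exactly `<v_1, v_2> = 2`, which the identity reproduces.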
2253 Prediction of Cutting Tool Life in Drilling of Reinforced Aluminum Alloy Composite Using a Fuzzy Method
Authors: Mohammed T. Hayajneh
Abstract:
Machining of Metal Matrix Composites (MMCs) is a very significant process, and characterizing MMCs during different machining processes has been a main problem drawing many researchers. The poor machinability of hard-particle-reinforced MMCs makes drilling a rather challenging task. Unlike drilling of conventional materials, many problems can be seriously encountered during drilling of MMCs, such as tool wear and high cutting forces. Cutting tool wear is a very significant concern in industry: it not only influences the quality of the drilled hole, but also affects the cutting tool life. Predicting cutting tool life during drilling is essential for optimizing the cutting conditions. However, the relationship between tool life and cutting conditions, tool geometry and workpiece material properties has not yet been established by any machining theory. In this research work, a fuzzy subtractive clustering system has been used to model cutting tool life in drilling of Al2O3 particle-reinforced aluminum alloy composite, in order to investigate the effect of cutting conditions on tool life. This investigation can help in controlling and optimizing cutting conditions when the process parameters are adjusted. The model for predicting tool life takes drill diameter, cutting speed, and feed rate as input data. The validity of the model was confirmed by examinations under various cutting conditions. Experimental results have shown the efficiency of the model in predicting cutting tool life.
Keywords: Composite, fuzzy, tool life, wear.
2252 A New History Based Method to Handle the Recurring Concept Shifts in Data Streams
Authors: Hossein Morshedlou, Ahmad Abdollahzade Barforoush
Abstract:
Recent developments in storage technology and networking architectures have made it possible for broad areas of applications to rely on data streams for quick response and accurate decision making. Data streams are generated from real-world events, so the existence of associations among the concepts of a data stream, reflecting the associations among the occurrences of those events, is logical. Extraction of these hidden associations can be useful for the prediction of subsequent concepts in concept-shifting data streams. In this paper we present a new method for learning associations among the concepts of a data stream and predicting what the next concept will be. Knowing the next concept, an informed update of the data model becomes possible. The results of the conducted experiments show that the proposed method is suitable for the classification of concept-shifting data streams.
Keywords: Data Stream, Classification, Concept Shift, History.
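The history-based idea, learning which concept tends to follow which, can be illustrated with simple first-order transition counts. The concept history below is invented, and the paper's actual association-learning method is richer than this sketch:

```python
from collections import Counter, defaultdict

def learn_transitions(history):
    """Count how often each concept is followed by each other concept."""
    trans = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        trans[prev][nxt] += 1
    return trans

def predict_next(trans, current):
    """Most frequently observed successor of the current concept, if any."""
    if current not in trans:
        return None
    return trans[current].most_common(1)[0][0]

# Illustrative sequence of recurring concepts observed in a stream.
history = ["A", "B", "A", "B", "C", "A", "B", "A"]
```

On a shift, a classifier trained for the predicted next concept could be swapped in instead of relearning from scratch, which is the benefit the abstract describes.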
2251 An Investigation into the Application of Artificial Neural Networks to the Prediction of Injuries in Sport
Authors: J. McCullagh, T. Whitfort
Abstract:
Artificial Neural Networks (ANNs) have been used successfully in many scientific, industrial and business domains as a method for extracting knowledge from vast amounts of data. However, the use of ANN techniques in the sporting domain has been limited. In professional sport, data is stored on many aspects of teams, games, training and players. Sporting organisations have begun to realise that there is a wealth of untapped knowledge contained in this data, and there is great interest in techniques to utilise it. This study uses player data from the elite Australian Football League (AFL) competition to train and test ANNs with the aim of predicting the onset of injuries. The results demonstrate that an accuracy of 82.9% was achieved by the ANNs' predictions across all examples, with 94.5% of all injuries correctly predicted. These initial findings suggest that ANNs may have the potential to assist sporting clubs in the prediction of injuries.
Keywords: Artificial Neural Networks, data, injuries, sport.
2250 Grid-HPA: Predicting Resource Requirements of a Job in the Grid Computing Environment
Authors: M. Bohlouli, M. Analoui
Abstract:
For complete support of Quality of Service, it is better that the Grid computing environment itself predicts the resource requirements of a job by using special methods. Exact and correct prediction enables exact matching of required resources with available resources. After the execution of each job, the used resources are saved in an active database named "History". First, some attributes are extracted from the new job; then, according to a defined similarity algorithm, the most similar executed jobs are retrieved from "History", and using statistical methods such as linear regression or averaging, the resource requirements are predicted. The new idea in this research is based on an active database and centralized history maintenance. Implementation and testing of the proposed architecture results in an accuracy of 96.68% in predicting the CPU usage of jobs, 91.29% for memory usage and 89.80% for bandwidth usage.
Keywords: Active Database, Grid Computing, Resource Requirement Prediction, Scheduling.
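The similarity-then-predict loop over the "History" database can be sketched as follows. The job attributes, the inverse-distance similarity, and averaging over the top-k matches are illustrative assumptions standing in for the paper's similarity algorithm:

```python
# Sketch of history-based resource prediction: find the executed jobs most
# similar to a new job and average their recorded resource usage.
def similarity(job_a, job_b, attrs):
    """Simple inverse-distance similarity over numeric job attributes."""
    dist = sum(abs(job_a[k] - job_b[k]) for k in attrs)
    return 1.0 / (1.0 + dist)

def predict_usage(new_job, history, attrs, resource, top_k=2):
    """Average the resource usage of the top-k most similar past jobs."""
    ranked = sorted(history, key=lambda j: similarity(new_job, j, attrs), reverse=True)
    best = ranked[:top_k]
    return sum(j[resource] for j in best) / len(best)

# Illustrative "History" records and a new job to be scheduled.
history = [
    {"size": 10, "priority": 1, "cpu": 0.40},
    {"size": 12, "priority": 1, "cpu": 0.45},
    {"size": 50, "priority": 3, "cpu": 0.90},
]
new_job = {"size": 11, "priority": 1}
```

A linear regression fitted over the retrieved neighbours, as the abstract mentions, would replace the plain average here.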
2249 Determining the Maximum Lateral Displacement Due to Severe Earthquakes without Using Nonlinear Analysis
Authors: Mussa Mahmoudi
Abstract:
For seismic design, it is important to estimate the maximum lateral displacement (inelastic displacement) of structures due to severe earthquakes, for several reasons. Seismic design provisions estimate the maximum roof and storey drifts occurring in major earthquakes by amplifying the drifts obtained from elastic analysis under the seismic design load with a coefficient named the "displacement amplification factor", which is greater than one. This coefficient depends on various parameters, such as the ductility and overstrength factors. The present research aims to evaluate the value of the displacement amplification factor in seismic design codes and then to propose a value for estimating the maximum lateral structural displacement from severe earthquakes without using nonlinear analysis. Since, in seismic codes, the displacement amplification is related to the "force reduction factor", this relationship has been adopted in the current study. Two methodologies are applied to evaluate the displacement amplification factor and its relation to the force reduction factor. In the first methodology, which applies to all structures, the ratio of the displacement amplification and force reduction factors is determined directly, whereas in the second methodology, applicable only to R/C moment-resisting frames, the ratio is obtained by calculating both factors separately. The results of these methodologies agree and estimate the ratio of the two factors at 1 to 1.2. The results indicate that the ratio of the displacement amplification factor to the force reduction factor differs from those proposed by seismic provisions such as NEHRP, IBC and the Iranian seismic code (Standard No. 2800).
Keywords: Displacement amplification factor, Ductility factor, Force reduction factor, Maximum lateral displacement.
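Given the estimate that the ratio of displacement amplification to force reduction factor (Cd/R) lies between 1 and 1.2, the maximum inelastic displacement can be approximated from elastic analysis as delta_max ~ (Cd/R) * R * delta_elastic, i.e. Cd * delta_elastic. A minimal sketch with illustrative numbers:

```python
# Sketch: estimating maximum inelastic lateral displacement from the elastic
# drift under reduced seismic design forces. The default Cd/R = 1.2 reflects
# the upper end of the 1-1.2 range; drift and R values are illustrative.
def max_inelastic_displacement(elastic_drift, force_reduction_factor, cd_over_r=1.2):
    """delta_max ~ (Cd/R) * R * delta_elastic = Cd * delta_elastic."""
    return cd_over_r * force_reduction_factor * elastic_drift

# Elastic drift of 15 mm under design (reduced) forces, R = 5:
print(max_inelastic_displacement(15.0, 5.0))
```

Using Cd/R = 1.0 instead gives the lower-bound estimate of the same drift.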
2248 Feature Selection Approaches with Missing Values Handling for Data Mining - A Case Study of Heart Failure Dataset
Authors: N.Poolsawad, C.Kambhampati, J. G. F. Cleland
Abstract:
In this paper, we investigate the characteristics of a clinical dataset with respect to feature selection and classification measurements that deal with the missing values problem, and we propose appropriate techniques to achieve the aim of this work: finding features that have a high effect on mortality and on the mortality time frame. We quantify the complexity of a clinical dataset and, according to that complexity, propose a data mining process to cope with its challenges of missing values, high dimensionality, and the prediction problem, using methods of missing value replacement, feature selection, and classification. The experimental results will be extended to develop a prediction model for cardiology.
Keywords: Feature selection, missing values, classification, clinical dataset, heart failure.
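The pipeline described above (replace missing values, then select features by their effect on the outcome) can be sketched with the simplest stand-ins: mean imputation and a correlation-based ranking. The toy data, feature names, and methods below are illustrative assumptions; the paper's actual dataset and algorithms are not reproduced here.

```python
# Hedged sketch of a missing-value-replacement + feature-selection pipeline:
# mean imputation followed by ranking features by absolute Pearson
# correlation with a mortality indicator. Toy data, not the paper's dataset.
import math

def impute_mean(column):
    """Replace None (missing) entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Toy clinical-style features, each possibly containing missing values (None);
# the target is a hypothetical mortality indicator.
features = {
    "age":    [70, None, 80, 65, 75],
    "sodium": [135, 140, None, 138, 132],
}
target = [1, 0, 1, 0, 1]

ranked = sorted(
    features,
    key=lambda f: abs(pearson(impute_mean(features[f]), target)),
    reverse=True,
)
print(ranked)
```

A real study would substitute clinically validated imputation and selection methods, but the control flow (impute, score, rank) is the same.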
2247 The Role of Home Composting in Waste Management Cost Reduction
Authors: Nahid Hassanshahi, Ayoub Karimi-Jashni, Nasser Talebbeydokhti
Abstract:
Due to the economic and environmental benefits of producing less waste, the US Environmental Protection Agency (EPA) introduces source reduction as one of the most important means of dealing with the problems caused by increased landfills and pollution. Waste reduction involves all waste management methods, including source reduction, recycling, and composting, that reduce the flow of waste to landfills or other disposal facilities. Source reduction can be studied from two perspectives: avoiding or reducing per capita waste production, and waste diversion, i.e., reducing the transfer of waste to landfills. The present paper investigates home composting as a managerial solution for reducing the transfer of waste to landfills. Home composting has many benefits. Using household waste to produce compost results in a much smaller amount of waste being sent to landfills, which in turn reduces the costs of waste collection, transportation, and burial. Reducing the volume of waste for disposal and using it to produce compost and plant fertilizer helps recycle the material in a shorter time and use it effectively in order to preserve the environment and reduce contamination. Producing compost at home requires a very small piece of land for preparation and recycling compared with other methods. The final product of home-made compost is valuable: it helps grow crops and garden plants and is also used for modifying soil structure and maintaining soil moisture. Food waste transferred to landfills spoils, produces leachate after a while, and releases methane and other greenhouse gases. Composting these materials at home is therefore the best way to manage degradable materials, use them efficiently, and reduce environmental pollution.
Studies have shown that the benefits of selling the produced compost, together with the reduced costs of collecting, transporting, and burying waste, can well offset the costs of purchasing a home composting machine and of the related training. Moreover, home composting may become profitable within 4 to 5 years and, as a result, can play a major role in reducing waste management costs.
Keywords: Compost, home compost, reducing waste, waste management.
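The payback argument above is a simple ratio of up-front costs to annual benefits. A minimal sketch, in which every figure (machine cost, training cost, avoided disposal cost, compost value) is an illustrative assumption and not the paper's data:

```python
# Hedged back-of-the-envelope payback calculation for home composting:
# up-front costs (composter + training) divided by annual benefits
# (avoided collection/transport/landfill costs plus compost value).
# All figures below are illustrative assumptions.

def payback_years(unit_cost, training_cost, annual_savings, annual_compost_value):
    annual_benefit = annual_savings + annual_compost_value
    return (unit_cost + training_cost) / annual_benefit

# e.g. a $400 composter, $50 training, $70/yr avoided disposal cost,
# $30/yr worth of produced compost
print(payback_years(400, 50, 70, 30))  # 4.5 years, within the 4-5 year range cited
```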
2246 Controller Design of Discrete Systems by Order Reduction Technique Employing Differential Evolution Optimization Algorithm
Authors: J. S. Yadav, N. P. Patidar, J. Singhai
Abstract:
One of the main objectives of order reduction is to design a lower-order controller that can effectively control the original high-order system, so that the overall system is of lower order and easy to understand. In this paper, a simple method is presented for controller design of a higher-order discrete system. First, the original higher-order discrete system is reduced to a lower-order model. Then, a Proportional Integral Derivative (PID) controller is designed for the lower-order model. An error minimization technique is employed for both order reduction and controller design. For error minimization, the Differential Evolution (DE) optimization algorithm has been employed. The DE method is based on minimizing the Integral Squared Error (ISE) between the desired response and the actual response to a unit step input. Finally, the designed PID controller is connected to the original higher-order discrete system to meet the desired specifications. The validity of the proposed method is illustrated through a numerical example.
Keywords: Discrete system, model order reduction, PID controller, integral squared error, differential evolution.
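The core idea above, DE searching model parameters to minimize the ISE between two step responses, can be sketched compactly. The target model, the first-order reduced structure y[k+1] = a·y[k] + b·u[k], and all DE settings below are illustrative assumptions, not the paper's numerical example:

```python
# Hedged sketch of ISE-based order reduction via Differential Evolution
# (DE/rand/1/bin): search (a, b) of a reduced first-order discrete model
# so its unit-step response matches a target response with minimum ISE.
# The target is emulated by a known model so the optimum is recoverable.
import random

random.seed(0)

def step_response(a, b, n=50):
    y, out = 0.0, []
    for _ in range(n):
        out.append(y)
        y = a * y + b * 1.0  # unit step input u[k] = 1
    return out

target = step_response(0.8, 0.2)  # assumed "higher-order" target response

def ise(params):
    a, b = params
    return sum((t - y) ** 2 for t, y in zip(target, step_response(a, b)))

def differential_evolution(cost, bounds, pop_size=20, F=0.6, CR=0.9, gens=200):
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
            trial = list(pop[i])
            jrand = random.randrange(dim)
            for j in range(dim):
                if random.random() < CR or j == jrand:
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    trial[j] = min(max(v, bounds[j][0]), bounds[j][1])
            if cost(trial) <= cost(pop[i]):  # greedy selection
                pop[i] = trial
    return min(pop, key=cost)

best = differential_evolution(ise, [(0.0, 0.99), (0.0, 1.0)])
print([round(p, 2) for p in best])  # should recover roughly (0.8, 0.2)
```

The same minimization loop would then be reused with PID gains as the decision variables, which is the second use of DE the abstract describes.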
2245 Preparation and Characterization of Photocatalyst for the Conversion of Carbon Dioxide to Methanol
Authors: D. M. Reddy Prasad, Nur Sabrina Binti Rahmat, Huei Ruey Ong, Chin Kui Cheng, Maksudur Rahman Khan, D. Sathiyamoorthy
Abstract:
Carbon dioxide (CO2) emission to the environment is inevitable and is responsible for global warming. Photocatalytic reduction of CO2 to fuels such as methanol and methane is a promising way to reduce emissions of the greenhouse gas CO2. In the present work, Bi2S3/CdS was synthesized as an effective visible-light-responsive photocatalyst for the reduction of CO2 to methanol. The Bi2S3/CdS photocatalyst was prepared by hydrothermal reaction and characterized by X-ray diffraction (XRD). The photocatalytic activity of the catalyst was investigated for methanol production as a function of time, and a gas chromatograph with flame ionization detector (GC-FID) was employed to analyze the product. The yield of methanol was found to increase with higher CdS concentration in Bi2S3/CdS, and the maximum yield under visible light irradiation, 20 μmol/g, was obtained for 45 wt% Bi2S3/CdS. The results establish that Bi2S3/CdS is a favorable catalyst for reducing CO2 to methanol.
Keywords: Photocatalyst, carbon dioxide reduction, visible light irradiation.
2244 Increasing Profitability Supported by Innovative Methods and Designing Monitoring Software in Condition-Based Maintenance: A Case Study
Authors: Nasrin Farajiparvar
Abstract:
In the present article, a new method has been developed to enhance the application of equipment monitoring, which in turn improves the economic impact of condition-based maintenance in an automobile parts manufacturing factory. This study also describes how effective software with a simple database can be utilized to achieve cost-effective improvements in maintenance performance. The most important results of this project are: 1. a 63% reduction in direct and indirect maintenance costs; 2. creation of a proper database to analyse failures; 3. creation of a method to control system performance and extend it to similar systems; 4. design of software to analyse the database and consequently build the technical knowledge needed to face unusual system conditions. Moreover, the results of this study show that the concept and philosophy of maintenance have not been well understood in most Iranian industries; thus, more investment is strongly required to improve maintenance conditions.
Keywords: Condition-based maintenance, Economic savings, Iran industries, Machine life prediction software.
2243 Reduction of Leakage Power in Digital Logic Circuits Using Stacking Technique in 45 Nanometer Regime
Authors: P.K. Sharma, B. Bhargava, S. Akashe
Abstract:
Power dissipation due to leakage current is one of the biggest factors considered when designing nanoscale digital circuits. This paper explores reducing the leakage current in static CMOS circuits by stacking transistors in increasing numbers: stacking OFF transistors in larger numbers results in a significant reduction in power dissipation, because the increase in the source voltage of the NMOS transistor minimizes the leakage current. The stacking technique thus yields circuits with minimal power dissipation losses due to leakage current. Several digital circuits, including a full adder, a D flip-flop, and a 6T SRAM cell, have been simulated with the reduction technique applied, using the Spectre simulator on the Cadence Virtuoso tool at 45 nm technology with a 0.7 V supply voltage.
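The mechanism described above, a raised source/intermediate node voltage suppressing subthreshold leakage, follows from the exponential subthreshold current law. A minimal sketch using textbook-style parameter values (the slope factor, threshold voltage, scale current, and node voltage below are illustrative assumptions, not 45 nm foundry data):

```python
# Hedged sketch of why stacking OFF transistors cuts subthreshold leakage:
# the intermediate node of a stack rises to a small positive voltage Vx,
# making Vgs negative for the upper device and thereby reducing its
# exp-law leakage. Parameter values are illustrative assumptions.
import math

VT = 0.026   # thermal voltage at room temperature (V)
n = 1.5      # subthreshold slope factor (assumed)
Vth = 0.3    # nominal threshold voltage (V, assumed)
I0 = 1e-7    # leakage scale current (A, assumed)

def subthreshold_leakage(vgs, vth=Vth):
    """Simplified subthreshold current model: I = I0 * exp((Vgs - Vth)/(n*VT))."""
    return I0 * math.exp((vgs - vth) / (n * VT))

single = subthreshold_leakage(0.0)   # one OFF transistor, Vgs = 0
vx = 0.05                            # assumed intermediate stack-node voltage
stacked = subthreshold_leakage(-vx)  # upper device of a 2-stack: Vgs = -Vx
print(round(single / stacked, 1))    # leakage-reduction factor = e^(Vx/(n*VT))
```

A fuller model would also include the body effect raising Vth of the upper device, which reduces leakage further; the sketch keeps only the Vgs term.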
Keywords: Stack, 6T SRAM cell, low power, threshold voltage
2242 Assessment of Diagnostic Enzymes as Indices of Heavy Metal Pollution in Tilapia Fish
Authors: Justina I. R. Udotong
Abstract:
Diagnostic enzymes, namely aspartate aminotransferase (AST), alanine aminotransferase (ALT), and alkaline phosphatase (ALP), were determined as indices of heavy metal pollution in Tilapia guinensis. Three different sets of fish, treated with lead (Pb), iron (Fe), and copper (Cu), were used for the study, while a fourth group with no heavy metal served as a control. Fish in each group were exposed to 2.65 mg/l of Pb, 0.85 mg/l of Fe, and 0.35 mg/l of Cu in aerated aquaria for 96 hours. Tissue fractionation of the liver tissues was carried out, and the three diagnostic enzymes (AST, ALT, and ALP) were estimated; serum levels of the same enzymes were also measured. The mean serum ALP activities were 19.5±1.62, 29.67±2.17, and 1.15±0.27 IU/L for the Pb, Fe, and Cu groups, respectively, compared with 9.99±1.34 IU/L in the control. This result showed that Pb and Fe caused increased release of the enzyme into the blood circulation, indicating increased tissue damage, while Cu caused a reduction in the serum level relative to the control group. The mean liver enzyme activities were 102.14±6.12, 140.17±2.06, and 168.23±3.52 IU/L for the Pb, Fe, and Cu groups, respectively, compared with 91.20±9.42 IU/L for the control group. The serum and liver AST and ALT activities obtained in the Pb, Fe, Cu, and control groups are also reported. It was generally noted that the presence of the heavy metals caused liver tissue damage and a consequent increase in the level of the diagnostic enzymes in the serum.
Keywords: Diagnostic enzymes, enzyme activity, heavy metals, tissue investigations.