Search results for: classification and clustering.
69 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis
Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen
Abstract:
The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research focus in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practical use in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. Firstly, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Secondly, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four kinds of Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four datasets, with the best configuration yielding an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and that the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record
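The abstract does not give implementation details, but the LSV step can be pictured as a second embedding table whose vectors are concatenated with the word2vec embeddings before the convolutional layers. The following PyTorch sketch illustrates that concatenation only; the dimensions, number of disease classes and pooling choice are assumptions, not the authors' configuration.

```python
# Hypothetical sketch (not the authors' code): concatenating a lexical-semantic
# vector (LSV) with a word2vec embedding before a 1-D CNN text classifier.
import torch
import torch.nn as nn

class LSVCNN(nn.Module):
    def __init__(self, vocab_size, w2v_dim=100, lsv_dim=50, n_classes=4,
                 n_filters=128, kernel_size=3):
        super().__init__()
        self.w2v = nn.Embedding(vocab_size, w2v_dim)   # pretrained word2vec weights would be loaded here
        self.lsv = nn.Embedding(vocab_size, lsv_dim)   # lexical-semantic vectors for medical terms
        self.conv = nn.Conv1d(w2v_dim + lsv_dim, n_filters, kernel_size, padding=1)
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        x = torch.cat([self.w2v(token_ids), self.lsv(token_ids)], dim=-1)
        x = x.transpose(1, 2)                          # (batch, channels, seq_len) for Conv1d
        x = torch.relu(self.conv(x))
        x = torch.max(x, dim=2).values                 # global max pooling over time
        return self.fc(x)

model = LSVCNN(vocab_size=5000)
logits = model(torch.randint(0, 5000, (8, 60)))        # toy batch: 8 records, 60 tokens
print(logits.shape)                                     # torch.Size([8, 4])
```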
68 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining
Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride
Abstract:
In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality, using a comprehensive feature space that includes demographic information, comorbidities, clinical procedures and laboratory tests. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used, and the Model for End-stage Liver Disease (MELD) prediction of mortality serves as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM combined with an ensemble machine learning technique further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements on its own. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning
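As a minimal illustration of the evaluation described above (not the authors' pipeline), the sketch below trains a gradient-boosting ensemble on synthetic tabular data and compares its ROC AUC with that of a single prognostic score used directly as a ranking, the role MELD plays in the study; the feature layout and the choice of the first column as the MELD-like score are assumptions.

```python
# Hypothetical sketch: comparing an ensemble classifier against a single
# prognostic score (stand-in for MELD) by area under the ROC curve.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2322, n_features=40, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)  # synthetic stand-in for the EHR features
meld_like = X[:, 0]                                             # pretend the first feature is the MELD score

X_tr, X_te, y_tr, y_te, _, meld_te = train_test_split(
    X, y, meld_like, test_size=0.3, random_state=0)

ensemble = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc_model = roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1])
auc_meld = roc_auc_score(y_te, meld_te)                         # a raw score can be ranked directly
print(f"ensemble AUC={auc_model:.3f}  baseline-score AUC={auc_meld:.3f}")
```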
67 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations
Authors: Satyanadh Gundimada, Vijayan K Asari
Abstract:
A novel feature selection strategy to improve recognition accuracy on faces affected by nonuniform illumination, partial occlusions and varying expressions is proposed in this paper. This technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration, a property that makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in the merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a high improvement in classification performance compared to baseline algorithms.
Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.
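The ranking criterion named in the abstract, the ratio of between-class to within-class variance computed in the PCA domain, can be sketched generically as below; the toy feature matrix, number of classes and PCA dimensionality are assumptions standing in for the phase congruency feature sets.

```python
# Hypothetical sketch: ranking feature sets by a between-class / within-class
# variance ratio computed in the PCA domain, as the abstract describes.
import numpy as np
from sklearn.decomposition import PCA

def class_separability(features, labels):
    """Ratio of between-class to within-class variance (larger = more discriminative)."""
    overall_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        cls = features[labels == c]
        between += len(cls) * np.sum((cls.mean(axis=0) - overall_mean) ** 2)
        within += np.sum((cls - cls.mean(axis=0)) ** 2)
    return between / within

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 200))            # 120 samples of a merged phase-congruency feature set
y = rng.integers(0, 4, size=120)           # 4 subject classes (toy labels)
X_pca = PCA(n_components=20).fit_transform(X)
print(f"criterion value: {class_separability(X_pca, y):.4f}")
```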
66 Land Suitability Prediction Modelling for Agricultural Crops Using Machine Learning Approach: A Case Study of Khuzestan Province, Iran
Authors: Saba Gachpaz, Hamid Reza Heidari
Abstract:
The sharp increase in population growth leads to more pressure on agricultural areas to satisfy the food supply. This necessitates increased resource consumption and underscores the importance of addressing sustainable agriculture development along with other environmental considerations. Land-use management is a crucial factor in obtaining optimum productivity. Machine learning is a widely used technique in the agricultural sector, from yield prediction to customer behavior, and learns patterns and correlations from the data set. In this study, nine physical control factors, namely soil classification, electrical conductivity, normalized difference water index (NDWI), groundwater level, elevation, annual precipitation, pH of water, annual mean temperature, and slope in the alluvial plain in Khuzestan (an agricultural hotspot in Iran), are used to decide the best agricultural land use for both rainfed and irrigated agriculture for 10 different crops. For this purpose, each variable was imported into ArcGIS and a raster layer was obtained. In the next step, using training samples, all layers were imported into the Python environment. A random forest model was applied and the weight of each variable was specified. In the final step, results were visualized using a digital elevation model, and the importance of all factors for each of the crops was obtained. Our results show that although 62% of the study area is allocated to agricultural purposes, only 42.9% of these areas can be defined as a suitable class for cultivation.
Keywords: Land suitability, machine learning, random forest, sustainable agriculture.
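A minimal sketch of the random forest step, assuming the nine factors are already sampled per training point from the raster layers; the synthetic values, class coding and hyperparameters are illustrative only, not those of the study.

```python
# Hypothetical sketch: a random forest trained on per-pixel physical factors to
# predict a land-suitability class, then reporting variable importances.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

factors = ["soil_class", "EC", "NDWI", "groundwater", "elevation",
           "precipitation", "water_pH", "mean_temp", "slope"]
rng = np.random.default_rng(1)
X = rng.normal(size=(3000, len(factors)))    # toy stand-in for raster values at training points
y = rng.integers(0, 2, size=3000)            # 1 = suitable, 0 = unsuitable (toy labels)

rf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X, y)
for name, imp in sorted(zip(factors, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name:15s} {imp:.3f}")
```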
65 LIDAR Obstacle Warning and Avoidance System for Unmanned Aircraft
Authors: Roberto Sabatini, Alessandro Gardi, Mark A. Richardson
Abstract:
The availability of powerful eye-safe laser sources and the recent advancements in electro-optical and mechanical beam-steering components have allowed laser-based Light Detection and Ranging (LIDAR) to become a promising technology for obstacle warning and avoidance in a variety of manned and unmanned aircraft applications. LIDAR's outstanding angular resolution and accuracy are coupled with good detection performance over a wide range of incidence angles and weather conditions, providing an ideal obstacle avoidance solution that is especially attractive for low-level flying platforms such as helicopters and small-to-medium size Unmanned Aircraft (UA). The Laser Obstacle Avoidance Marconi (LOAM) system is one such system, jointly developed and tested by SELEX-ES and the Italian Air Force Research and Flight Test Centre. The system was originally conceived for military rotorcraft platforms and, in this paper, we briefly review the previous work and discuss in more detail some of the key development activities required for integration of LOAM on UA platforms. The main hardware and software design features of this LOAM variant are presented, including a brief description of the system interfaces and sensor characteristics, together with the system performance models and data processing algorithms for obstacle detection, classification and avoidance. In particular, the paper focuses on the algorithm proposed for optimal avoidance trajectory generation in UA applications.
Keywords: LIDAR, Low-Level Flight, Nap-of-the-Earth Flight, Near Infra-Red, Obstacle Avoidance, Obstacle Detection, Obstacle Warning System, Sense and Avoid, Trajectory Optimisation, Unmanned Aircraft.
64 Advanced Stochastic Models for Partially Developed Speckle
Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije
Abstract:
Speckled images arise when coherent microwave, optical, and acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object or target induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise is complicated by the nature of the noise and is not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model where an underlying Poisson point process modulates a Gram-Charlier series of Laguerre weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form. It is observed that as the mean number of scatterers in a resolution cell is increased, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise as demonstrated by the Central Limit theorem.
Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound
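A simple Monte Carlo sketch of the limiting behaviour noted above: modelling each resolution cell as a random phasor sum with a Poisson number of unit scatterers, the normalised intensity variance approaches 1, the value characteristic of a fully developed (exponential-intensity) speckle pattern, as the mean scatterer count grows. The cell counts and unit scatterer amplitudes are assumptions for illustration; the paper's Gram-Charlier/Laguerre formulation is not reproduced.

```python
# Hypothetical sketch: partially developed speckle as a random phasor sum with a
# Poisson-distributed number of scatterers per resolution cell.
import numpy as np

def speckle_intensity(mean_scatterers, n_cells=20_000, seed=0):
    rng = np.random.default_rng(seed)
    counts = rng.poisson(mean_scatterers, size=n_cells)
    intensities = np.empty(n_cells)
    for i, n in enumerate(counts):
        phases = rng.uniform(0.0, 2.0 * np.pi, size=n)
        amplitude = np.abs(np.sum(np.exp(1j * phases)))   # unit-amplitude scatterers
        intensities[i] = amplitude ** 2
    return intensities

for mu in (2, 5, 20):
    I = speckle_intensity(mu)
    # for an exponential intensity law the normalised variance (contrast^2) equals 1
    print(f"mean scatterers={mu:2d}  intensity contrast^2={I.var() / I.mean()**2:.2f}")
```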
63 Influence of Compactive Efforts on Cement-Bagasse Ash Treatment of Expansive Black Cotton Soil
Authors: G. Moses, K. J. Osinubi
Abstract:
A laboratory study on the influence of compactive effort on expansive black cotton soil specimens treated with up to 8% ordinary Portland cement (OPC) admixed with up to 8% bagasse ash (BA) by dry weight of soil and compacted using the energies of the standard Proctor (SP), West African Standard (WAS) or "intermediate", and modified Proctor (MP) was undertaken. The expansive black cotton soil was classified as A-7-6 (16) or CL using the American Association of State Highway and Transportation Officials (AASHTO) and Unified Soil Classification System (USCS), respectively. The 7-day unconfined compressive strength (UCS) values of the natural soil for the SP, WAS and MP compactive efforts are 286, 401 and 515 kN/m2, respectively, while the peak values of 1019, 1328 and 1420 kN/m2, recorded at the 8% OPC/6% BA, 8% OPC/2% BA and 6% OPC/4% BA treatments, respectively, were less than the UCS value of 1710 kN/m2 conventionally used as the criterion for adequate cement stabilization. The soaked California bearing ratio (CBR) values of the OPC/BA-stabilized soil increased with higher energy levels from 2, 4 and 10% for the natural soil to peak values of 55, 18 and 8%, recorded at the 8% OPC/4% BA, 8% OPC/2% BA and 8% OPC/4% BA treatments when the SP, WAS and MP compactive efforts were used, respectively. The durability of specimens was determined by immersion in water. Treatment with the 8% OPC/4% BA blend gave a resistance to loss in strength of 50%, which is acceptable given the harsh test condition of the 7-day soaking period to which specimens were subjected, instead of the 4-day soaking period for which a minimum resistance to loss in strength of 80% is specified. Finally, an optimal blend of 8% OPC/4% BA is recommended for the treatment of expansive black cotton soil for use as a sub-base material.
Keywords: Bagasse ash, California bearing ratio, Compaction, Durability, Ordinary Portland cement, Unconfined compressive strength.
62 Obesity and Bone Mineral Density in Patients with Large Joint Osteoarthritis
Authors: Vladyslav Povoroznyuk, Anna Musiienko, Nataliia Zaverukha, Roksolana Povoroznyuk
Abstract:
Along with the global aging of the population, the number of people with somatic diseases is increasing, including such interrelated pathologies as obesity, osteoarthritis (OA) and osteoporosis (OP). The objective of the study is to examine the connection between body mass index (BMI), OA, bone mineral density (BMD) of the lumbar spine and femoral neck, and trabecular bone score (TBS) in postmenopausal women with OA. We observed 359 postmenopausal women (50-89 years old) and divided them into four groups by age: 50-59 yrs, 60-69 yrs, 70-79 yrs and over 80 years old. In addition, according to the American College of Rheumatology (ACR) clinical classification criteria for knee and hip OA, we divided them into two groups: group I (117 females with symptomatic OA, including 89 patients with knee OA and 28 patients with hip OA) and group II (242 women with normal functional activity of the large joints). Analysis of the data was performed taking into account BMI, classified according to the World Health Organization (WHO). A diagnosis of obesity was established when BMI was above 30 kg/m2. Among women with obesity, symptomatic OA was detected in 44 postmenopausal women (41.1%), and normal functional activity of the large joints in 63 women (58.9%). Among women with a normal BMI, symptomatic OA was detected in 73 women (29.0% of cases). According to a chi-squared (χ2) test, a significantly higher BMI was detected in postmenopausal women with OA (χ2 = 5.05, p = 0.02). Women with symptomatic OA had a significantly higher BMD of the lumbar spine compared with women who had normal functional activity of the large joints. No significant differences in femoral neck BMD or TBS were detected between the group with OA and the group with normal functional activity of the large joints.
Keywords: Bone mineral density, BMD, body mass index, BMI, obesity, overweight, postmenopausal women, osteoarthritis.
61 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, which has made it more important to understand customers' needs in this strong market, especially the needs of those who are looking to switch service providers. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish it. Churn prediction has become a very important topic in machine learning classification for the telecommunications industry, and understanding the factors of customer churn and how customers behave is essential to building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers' churn based on their past service usage history. To this end, the study makes use of feature selection, normalization, and feature engineering, and then compares the performance of four different machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1 score and ROC-AUC. Compared with existing models, the proposed approach produces better results: Gradient Boosting with the feature selection technique performed best in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.
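A compact sketch of the described workflow (normalization, filter feature selection, gradient boosting, F1 and ROC-AUC scoring) on synthetic data; the Orange dataset, the exact selector and the hyperparameters used in the paper are not reproduced, so the choices below are assumptions.

```python
# Hypothetical sketch: normalisation + feature selection + gradient boosting for
# churn prediction, scored with F1 and ROC-AUC (dataset here is synthetic).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=5000, n_features=30, n_informative=8,
                           weights=[0.85, 0.15], random_state=42)  # churners are the minority class
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

pipe = make_pipeline(MinMaxScaler(),
                     SelectKBest(f_classif, k=15),
                     GradientBoostingClassifier(random_state=42)).fit(X_tr, y_tr)

pred = pipe.predict(X_te)
proba = pipe.predict_proba(X_te)[:, 1]
print(f"F1={f1_score(y_te, pred):.3f}  ROC-AUC={roc_auc_score(y_te, proba):.3f}")
```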
60 Development of an Ensemble Classification Model Based on Hybrid Filter-Wrapper Feature Selection for Email Phishing Detection
Authors: R. B. Ibrahim, M. S. Argungu, I. M. Mungadi
Abstract:
It is obvious that the Internet has become an indispensable part of human life since its inception. The Internet has provided diverse opportunities to make life easier for human beings through the adoption of various channels, among them email, internet banking, video conferencing, and the like. Email is one of the easiest means of communication, hugely accepted among individuals and organizations globally, but over the decades the security integrity of this platform has been challenged by malicious activities such as phishing. Email phishing is designed by phishers to fool the recipient into handing over sensitive personal information such as passwords, credit card numbers, account credentials, social security numbers, etc. This activity has caused a great deal of financial damage to email users globally, which has resulted in bankruptcy, sudden death of victims, and other health-related sicknesses. Although many methods have been proposed to detect email phishing, in this research the results of multiple machine-learning methods for predicting email phishing are compared with the use of filter-wrapper feature selection. It is worth noting that all three models performed substantially well, but one outperformed the others. The dataset used for these models is obtained from the Kaggle online data repository, and three classifiers (decision tree, Naïve Bayes, and logistic regression) are each ensembled using bagging. Results from the study show that the Decision Tree (CART) bagging ensemble recorded the highest accuracy of 98.13% using PEF (Phishing Essential Features). This result further demonstrates the dependability of the proposed model.
Keywords: Ensemble, hybrid, filter-wrapper, phishing.
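A minimal sketch of bagging the three named base learners with scikit-learn on a synthetic stand-in for the selected phishing features; the Kaggle dataset, the filter-wrapper selection step and the PEF feature set are not reproduced here.

```python
# Hypothetical sketch: bagging each of the three base learners named in the
# abstract (decision tree, Naive Bayes, logistic regression) and comparing accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4000, n_features=25, n_informative=12,
                           random_state=7)              # stand-in for selected phishing features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

bases = {"decision tree (CART)": DecisionTreeClassifier(random_state=7),
         "Naive Bayes": GaussianNB(),
         "logistic regression": LogisticRegression(max_iter=1000)}

for name, base in bases.items():
    bag = BaggingClassifier(base, n_estimators=50, random_state=7).fit(X_tr, y_tr)
    print(f"{name:22s} accuracy={accuracy_score(y_te, bag.predict(X_te)):.3f}")
```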
59 Real Time Classification of Political Tendency of Twitter Spanish Users based on Sentiment Analysis
Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic
Abstract:
What people say on social media has turned into a rich source of information for understanding social behavior. Specifically, the growing use of Twitter for political communication has created significant opportunities to know the opinion of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country, and it has led to an increasing body of research on this topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. Unlike them, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by analyzing Twitter users' political tendency. To this end, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users' tweets by discovering their sentiment (positive, negative or neutral) and classifying them according to the political party they support. From this individual political tendency, the global political prediction for each political party is calculated. In order to do this, two different strategies for sentiment analysis are proposed: one is based on Positive and Negative words Matching (PNM) and the second is based on a Neural Networks Strategy (NNS). The complete TAS strategy has been implemented in a Big-Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy for analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy for obtaining the global trend in a political context made up of multiple parties with an error lower than 23%.
Keywords: Political tendency, prediction, sentiment analysis, Twitter.
58 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments
Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea
Abstract:
The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive, and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners’ demographic characteristics by proposing an approach that uses linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on a FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models which take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, on which they are most performant, and compared them against a public sentiment analysis dataset that is further used to cross-examine the models' results.
Keywords: Deep learning, data mining, gender prediction, MOOCs.
57 Semantic Enhanced Social Media Sentiments for Stock Market Prediction
Authors: K. Nirmala Devi, V. Murali Bhaskaran
Abstract:
Traditional document representation for classification follows the Bag of Words (BoW) approach to represent term weights. The conventional method uses the Vector Space Model (VSM) to exploit the statistical information of terms in the documents, but it fails to capture the semantic information as well as the order of the terms present in the documents. The phrase-based approach follows the order of the terms present in the documents, but not the semantics behind the words. Therefore, a semantic concept-based approach is used in this paper to enhance the semantics by incorporating ontology information. A novel method is proposed to forecast the intraday stock market price directional movement based on the sentiments from Twitter and money control news articles. Stock market forecasting is a very difficult and highly complicated task because it is affected by many factors such as economic conditions, political events and investor sentiment. Stock market series are generally dynamic, nonparametric, noisy and chaotic by nature. Sentiment analysis along with the wisdom of crowds can automatically compute the collective intelligence of future performance in many areas such as the stock market, box office sales and election outcomes. The proposed method utilizes collective sentiments for the stock market to predict stock price directional movements. The collective sentiments in the above social media have powerful predictive value for the stock price directional movements (up/down), as shown using the Granger causality test.
Keywords: Bag of Words, Collective Sentiments, Ontology, Semantic relations, Sentiments, Social media, Stock Prediction, Twitter, Vector Space Model and wisdom of crowds.
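The Granger causality step can be sketched as below with statsmodels, testing whether a lagged sentiment series helps predict the price-direction series; the synthetic series and the lag range are assumptions, since the abstract does not specify them.

```python
# Hypothetical sketch: testing whether a daily sentiment series Granger-causes
# the direction of price movement, using statsmodels (toy synthetic series).
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 300
sentiment = rng.normal(size=n)
# build a price-direction series that partly follows yesterday's sentiment
direction = (0.6 * np.roll(sentiment, 1) + rng.normal(scale=0.8, size=n) > 0).astype(int)

data = pd.DataFrame({"direction": direction, "sentiment": sentiment}).iloc[1:]
# tests whether the 2nd column helps predict the 1st, for lags 1..3
results = grangercausalitytests(data[["direction", "sentiment"]], maxlag=3)
```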
56 Influence of Drought on Yield and Yield Components in White Bean
Authors: Gholamreza Habibi
Abstract:
In order to study seed yield and seed yield components in bean under reduced irrigation conditions and to assess the drought tolerance of genotypes, 15 lines of white bean were evaluated in two separate randomized complete block (RCB) designs with three replications under stress and non-stress conditions. Analysis of variance showed that there were significant differences among varieties in the traits under study, indicating the existence of genetic variation among varieties. The results indicate that drought stress reduced seed yield, number of seeds per plant, biological yield and number of pods in white bean. Under non-stress conditions, yield was highly correlated with biological yield, whereas under stress conditions it was highly correlated with harvest index. Results of stepwise regression showed that selection can be done based on biological yield, harvest index, number of seeds per pod, seed length and 100-seed weight. Results of path analysis showed that the highest direct effect, being positive, was related to biological yield under non-stress conditions and to harvest index under stress conditions. Factor analysis was carried out under stress and non-stress conditions; four factors explained more than 76 percent of the total variation. We used several selection indices, namely the Stress Susceptibility Index (SSI), Geometric Mean Productivity (GMP), Mean Productivity (MP), Stress Tolerance Index (STI) and Tolerance Index (TOL), to study the drought tolerance of genotypes, and found that the best indices for selecting tolerant genotypes were STI, GMP and MP, which showed the greatest correlations with seed yield under stress and non-stress conditions. In the classification of genotypes based on phenotypic characteristics using cluster analysis (UPGMA), all lines were classified into five separate groups under stress and non-stress conditions.
Keywords: Cluster analysis, factor analysis, path analysis, selection index, white bean.
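For reference, the sketch below implements the commonly used textbook definitions of the selection indices named above (TOL, MP, GMP, SSI, STI), with Yp and Ys denoting yield under non-stress and stress conditions; the exact formulations and the toy yield values are assumptions, not taken from the paper.

```python
# Sketch of common definitions of the drought-tolerance indices named in the
# abstract; the author's exact formulations are assumed, not reproduced.
import numpy as np

def drought_indices(Yp, Ys):
    Yp, Ys = np.asarray(Yp, float), np.asarray(Ys, float)
    SI = 1.0 - Ys.mean() / Yp.mean()          # stress intensity of the whole trial
    return {
        "TOL": Yp - Ys,                        # tolerance index
        "MP":  (Yp + Ys) / 2.0,                # mean productivity
        "GMP": np.sqrt(Yp * Ys),               # geometric mean productivity
        "SSI": (1.0 - Ys / Yp) / SI,           # stress susceptibility index
        "STI": (Yp * Ys) / Yp.mean() ** 2,     # stress tolerance index
    }

# toy yields (kg/ha) for five genotypes under non-stress and stress conditions
indices = drought_indices(Yp=[3200, 2900, 3500, 2700, 3100],
                          Ys=[2100, 2300, 1900, 2200, 2400])
for name, values in indices.items():
    print(name, np.round(values, 2))
```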
55 International Tourists’ Travel Motivation by Push-Pull Factors and the Decision Making for Selecting Thailand as Destination Choice
Authors: Siripen Yiamjanya, Kevin Wongleedee
Abstract:
This research paper aims to identify the travel motivations, in terms of push and pull factors, that affected the decision making of international tourists in selecting Thailand as their destination choice. A total of 200 international tourists who traveled to Thailand during January and February 2014 were used as the sample in this study. A questionnaire, administered in Bangkok, was employed as the data collection tool. The list consisted of 30 attributes representing both psychological factors as “push-based factors” and destination factors as “pull-based factors”. Mean and standard deviation were used in order to find the top ten travel motives that were important determinants in the respondents’ decision making process to select Thailand as their destination choice. The findings revealed that the top ten travel motivations influencing international tourists to select Thailand as their destination choice included [i] getting experience in a foreign land; [ii] Thai food; [iii] learning a new culture; [iv] relaxing in a foreign land; [v] wanting to learn new things; [vi] being interested in Thai culture and traditional markets; [vii] escaping from the same daily life; [viii] enjoying activities; [ix] adventure; and [x] good weather. Classification into push-based and pull-based motives suggested that getting experience in a foreign land was the most important push motive for international tourists to travel, while Thai food was the most significant pull motive. Discussion and suggestions are also made for the tourism industry of Thailand.
Keywords: Decision Making, Destination Choice, International Tourist, Pull Factor, Push Factor, Thailand, Travel Motivation.
54 Experimental Investigation on Geosynthetic-Reinforced Soil Sections via California Bearing Ratio Test
Authors: S. Abdi Goudazri, R. Ziaie Moayed, A. Nazeri
Abstract:
Loose soils normally have weak bearing capacity due to their structural nature and, when exposed to heavy traffic loads, they fail in most cases. To tackle this issue, geotechnical engineers have come up with different approaches, one of which is the use of geosynthetic-reinforced soil-aggregate systems. As these polymeric reinforcements offer notable economic and environmentally friendly features, they have become widespread in practice during the last decades. The present research investigates the efficiency of four different types of these reinforcements in increasing the bearing capacity of two-layered soil sections using a series of California Bearing Ratio (CBR) tests. The studied sections are composed of a 10 cm-thick layer of no. 161 Firouzkooh sand (weak subgrade) and a 10 cm-thick layer of compacted aggregate materials (base course), classified as SP and GW according to the Unified Soil Classification System (USCS), respectively. The aggregate layer was compacted to a relative density (Dr) of 95% at the optimum water content (Wopt) of 6.5%. The applied reinforcements included two kinds of geocomposites (types A and B), a geotextile, and a geogrid, which were embedded at the interface of the lower and upper layers of the soil-aggregate system. As the standard CBR mold was not of appropriate height for this study, the mold used for soaked CBR tests was utilized. To compare the stress-settlement behavior of the studied specimens, CBR values pertinent to penetrations of 2.5 mm and 5 mm were considered. The obtained results demonstrated 21% and 24.5% increases in CBR value in the presence of geocomposite type A and the geogrid, respectively. On the other hand, the effect of both the geotextile and geocomposite type B on CBR values was generally insignificant in this research.
Keywords: Geosynthetics, geogrid, geotextile, CBR test, increasing bearing capacity.
53 Comparative Correlation Investigation of Polynuclear Aromatic Hydrocarbons (PAHs) in Soils of Different Land Use: Sources Evaluation Perspective
Authors: O. Onoriode Emoyan, E. Eyitemi Akporhonor, Charles Otobrise
Abstract:
Polycyclic Aromatic Hydrocarbons (PAHs) are formed mainly as a result of incomplete combustion of organic materials during industrial and domestic activities or through natural occurrence. Their toxicity and contamination of terrestrial and aquatic ecosystems have been established. However, with a limited validity index, previous research has focused on PAH isomer pair ratios of variable physicochemical properties for source identification. The objective of this investigation was to determine the empirical validity of the Pearson Correlation Coefficient (PCC) and Cluster Analysis (CA) in PAH source identification across soil samples of different land uses. Therefore, 16 PAHs, grouped as Endocrine Disrupting Substances (EDSs), were determined seasonally in topsoil and subsoil at 10 sampling stations. PAHs were determined with the use of a Varian 300 gas chromatograph interfaced with a flame ionization detector. The instruments and reagents used were of standard and chromatographic grades, respectively. The PCC and CA results showed that the classification of PAHs into the pyrolytic and petrogenic organics used in source signatures reflects the predominant PAHs in the environmental matrix. The distribution of PAHs in the studied stations revealed the presence of trace quantities of the vast majority of the sixteen PAHs, which may ultimately inhibit the actual source signature authentication. Therefore, the type and extent of bacterial metabolism, transformation products/substrates, and environmental factors such as salinity, pH, oxygen concentration, nutrients, light intensity, temperature, co-substrates and environmental medium are recommended as factors to be considered when evaluating possible sources of PAHs.
Keywords: Comparative correlation, kinetically- and thermodynamically-favored PAHs, polynuclear aromatic hydrocarbons, sources evaluation.
52 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given another predictor x, and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, we extend NF neural networks to the case when an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework, while significantly improving the interpretation of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
51 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst CNNs are suited for the extraction of n-gram features through filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs as stated above to present a unified model for the RTE task. Our model basically combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations for each sentence, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short-Term Memory (Bi-LSTM) to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used in the same fashion as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for the classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: Deep neural models, natural language inference, recognizing textual entailment, sentence-to-sentence relation.
50 Crude Glycerol Affects Canine Sperm Motility: Computer Assisted Semen Analysis in vitro
Authors: P. Massanyi, L. Kichi, T. Slanina, E. Kolesar, J. Danko, N. Lukac, E. Tvrda, R. Stawarz, A. Kolesarova
Abstract:
The target of this study was the analysis of the impact of crude glycerol on canine spermatozoa motility, morphology, viability, and membrane integrity. The experiments were realized in vitro. In the study, semen from five large dog breeds was used. They were typical representatives of large breeds, coming from healthy rearing, regularly vaccinated and integrated into further breeding. Semen collections were realized at the owners' premises and in a veterinary clinic, and the experiments were subsequently realized at the Department of Animal Physiology of the SUA in Nitra. Spermatozoa motility was evaluated using a CASA analyzer (SpermVisionTM, Minitub, Germany) at temperatures of 5 and 37°C for 5 hours, and 13 motility parameters were evaluated. Generally, crude glycerol has a negative effect on spermatozoa motility. Morphological analysis was realized using Hancock staining, and the preparations were evaluated at a magnification of 1000x using classification tables of morphologically changed spermatozoa. The data clearly revealed the highest number of morphologically changed spermatozoa in the experimental groups (twisted tails, tail torso and tail coiling). For acrosome alterations, swelled acrosomes, removed acrosomes and acrosomes with undulated membranes were detected. In this study the effect of crude glycerol on spermatozoa membrane integrity was also analyzed; the highest crude glycerol concentration significantly affects spermatozoa integrity. The results of this study show that crude glycerol has an effect on spermatozoa motility, viability, and membrane integrity, and the detected changes are related to crude glycerol concentration, temperature, as well as time of incubation.
Keywords: Dog, semen, spermatozoa, acrosome, glycerol, CASA, viability.
49 The Loess Regression Relationship Between Age and BMI for both Sydney World Masters Games Athletes and the Australian National Population
Authors: Joe Walsh, Mike Climstein, Ian Timothy Heazlewood, Stephen Burke, Jyrki Kettunen, Kent Adams, Mark DeBeliso
Abstract:
Thousands of masters athletes participate quadrennially in the World Masters Games (WMG), yet this cohort of athletes remains proportionately under-investigated. Due to a growing global obesity pandemic, in the context of the benefits of physical activity across the lifespan, the BMI trends of this unique population were of particular interest. The nexus between health, physical activity and aging is complex and has raised much interest in recent times due to the realization that a multifaceted approach is necessary in order to counteract the obesity pandemic. By investigating age-based trends within a population adhering to competitive sport at older ages, further insight might be gleaned to assist in understanding one of many factors influencing this relationship. BMI was derived using data gathered on a total of 6,071 masters athletes (51.9% male, 48.1% female) aged 25 to 91 years (mean = 51.5, s = ±9.7) competing at the Sydney World Masters Games (2009). Using linear and loess regression it was demonstrated that the usual tendency for the prevalence of higher BMI to increase with age was reversed in the sample. This reversal was repeated for both the male-only and female-only subsets of the sample participants, indicating the possibility of an improved BMI profile with increasing age for both the sample as a whole and these individual sub-groups. This evidence of improved classification in one index of health (reduced BMI) for masters athletes, when compared to the general population, implies that there are either improved levels of this index of health with aging due to adherence to sport, or possibly that the reduced BMI is advantageous and contributes to this cohort adhering (or being attracted) to masters sport at older ages.
Keywords: Aging, masters athlete, Quetelet Index, sport.
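The linear and loess fits described above can be sketched with statsmodels' lowess smoother as follows; the synthetic age-BMI data and the smoothing fraction are assumptions used only to show the mechanics.

```python
# Hypothetical sketch: fitting a linear and a loess (lowess) trend of BMI against
# age, as described for the masters-athlete sample (data here are synthetic).
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(9)
age = rng.uniform(25, 91, size=500)
bmi = 27.0 - 0.03 * age + rng.normal(scale=2.0, size=500)   # toy downward trend with age

slope, intercept = np.polyfit(age, bmi, deg=1)               # simple linear fit
smoothed = lowess(bmi, age, frac=0.3)                        # columns: sorted age, smoothed BMI
print(f"linear slope: {slope:.3f} BMI units per year")
print("loess fit at ages 40 and 70:",
      np.interp([40, 70], smoothed[:, 0], smoothed[:, 1]).round(2))
```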
48 Hands-off Parking: Deep Learning Gesture-Based System for Individuals with Mobility Needs
Authors: Javier Romera, Alberto Justo, Ignacio Fidalgo, Javier Araluce, Joshué Pérez
Abstract:
Nowadays, individuals with mobility needs face a significant challenge when docking vehicles. In many cases, after parking, they encounter insufficient space to exit, leading to two undesired outcomes: either avoiding parking in that spot or settling for an improperly placed vehicle. To address this issue, this paper presents a parking control system employing gestural teleoperation. The system comprises three main phases: capturing body markers, interpreting gestures, and transmitting orders to the vehicle. The initial phase is centered around the MediaPipe framework, a versatile tool optimized for real-time gesture recognition. MediaPipe excels at detecting and tracing body markers, with a special emphasis on hand gestures. Hand detection is performed by generating 21 reference points for each hand. Subsequently, after data capture, the project employs a Multilayer Perceptron (MLP) for in-depth gesture classification. This tandem of MediaPipe's extraction prowess and the MLP's analytical capability ensures that human gestures are translated into actionable commands with high precision. Furthermore, the system has been trained and validated on a built-in dataset. To prove the domain adaptation, a framework based on the Robot Operating System 2 (ROS2) as a communication backbone, alongside the CARLA simulator, is used. Following successful simulations, the system is transitioned to a real-world platform, marking a significant milestone in the project. This real-vehicle implementation verifies the practicality and efficiency of the system beyond theoretical constructs.
Keywords: Gesture detection, MediaPipe, Multilayer Perceptron, Robot Operating System.
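A rough sketch of the first two phases (landmark capture and gesture classification), assuming the legacy MediaPipe Hands solution API and a scikit-learn MLP as a stand-in for the paper's own network; the gesture labels, training data and image file name are placeholders, and the ROS2/CARLA transmission phase is omitted.

```python
# Hypothetical sketch: extracting the 21 hand landmarks with MediaPipe Hands and
# classifying the gesture with a small MLP (placeholder labels and training data).
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neural_network import MLPClassifier

mp_hands = mp.solutions.hands

def hand_features(bgr_image, hands):
    """Return a flat (63,) vector of x, y, z for the 21 landmarks, or None."""
    result = hands.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

# toy training stage: in practice the features come from recorded gesture frames
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 63))            # placeholder landmark vectors
y_train = rng.integers(0, 4, size=200)          # e.g. forward / reverse / left / right (assumed labels)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_train, y_train)

with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    frame = cv2.imread("gesture_frame.jpg")      # hypothetical captured frame
    if frame is not None:
        feats = hand_features(frame, hands)
        if feats is not None:
            print("predicted gesture class:", clf.predict(feats.reshape(1, -1))[0])
```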
47 Adapting Tools for Text Monitoring and for Scenario Analysis Related to the Field of Social Disasters
Authors: Svetlana Cojocaru, Mircea Petic, Inga Titchiev
Abstract:
Humanity is confronted more and more often with different social disasters, which in turn can generate new accidents and catastrophes. To mitigate their consequences, it is important to obtain early signals about events that are occurring or may occur, and to prepare the corresponding scenarios that could be applied. Our research is focused on solving two problems in this domain: identifying signals that an accident has occurred or may occur, and mitigating some consequences of disasters. To solve the first problem, methods of selecting and processing texts from the global Internet are developed. Information in Romanian is of special interest to us. In order to obtain the mentioned tools, we follow several steps, divided into a preparatory stage and a processing stage. Throughout the first stage, we manually collected over 724 news articles and classified them into 10 categories of social disasters, constituting more than 150 thousand words. Using this information, a controlled vocabulary of more than 300 keywords was elaborated, which will help in the process of classification and identification of texts related to the field of social disasters. To solve the second problem, the formalism of Petri nets has been used. We deal with the problem of inhabitants' evacuation in useful time. Analysis methods such as the reachability or coverability tree and the invariants technique are used to determine dynamic properties of the modeled systems. To perform a case study of the properties of the evacuation system extended by adding time, the analysis modules of PIPE, such as Generalized Stochastic Petri Net (GSPN) Analysis, Simulation, State Space Analysis, and Invariant Analysis, have been used. These modules helped us to obtain the average number of persons situated in the rooms and other quantitative properties and characteristics related to its dynamics.
Keywords: Lexicon of disasters, modelling, Petri nets, text annotation, social disasters.
46 Corporate Social Responsibility and Corporate Reputation: A Bibliometric Analysis
Authors: Songdi Li, Louise Spry, Tony Woodall
Abstract:
Nowadays, Corporate Social Responsibility (CSR) has become a buzzword, and more and more academics are putting effort into CSR studies. It is believed that CSR could influence Corporate Reputation (CR), and many hold the favourable view that CSR leads to a positive CR. To be specific, CSR-related activities in the reputational context have been regarded as ways to achieve excellent financial performance, value creation, etc. It is also argued that CSR and CR are two sides of one coin; hence, to some extent, doing CSR is equal to establishing a good reputation. Still, there is no consensus on the CSR-CR relationship in the literature; thus, a systematic literature review is highly needed. This research conducts a systematic literature review with both bibliometric and content analysis. Data are selected from English-language sources and academic journal articles only; keyword combinations are then applied to identify relevant sources. Data from Scopus and WoS are gathered for bibliometric analysis. Scopus search results were saved in RIS and CSV formats, and Web of Science (WoS) data were saved in TXT and CSV formats, in order to process the data in the Bibexcel software for further analysis, which is later visualised with the software VOSviewer. Content analysis was also applied to analyse the data clusters and the key articles. On the topic of CSR-CR, this literature review with bibliometric analysis has made four achievements. First, this paper has developed a systematic study which quantitatively depicts the knowledge structure of CSR and CR by identifying terms closely related to CSR-CR (such as 'corporate governance') and clustering subtopics that emerged in the co-citation analysis. Second, content analysis is performed to acquire insight into the findings of the bibliometric analysis in the discussion section, and it highlights some insightful implications for the future research agenda; for example, a psychological link between CSR and CR is identified from the results, and emerging economies and qualitative research methods are new elements emerging in the CSR-CR big picture. Third, a multidisciplinary perspective is present throughout the bibliometric mapping and the co-word and co-citation analysis; hence, this work builds a structure of interdisciplinary perspectives which potentially leads to an integrated conceptual framework in the future. Finally, Scopus and WoS are compared and contrasted in this paper; as a result, Scopus, which has more depth and comprehensive data, is suggested as a tool for future bibliometric analysis studies. Overall, this paper has fulfilled its initial purposes and contributed to the literature. To the authors' best knowledge, this paper conducted the first literature review of CSR-CR research that applied both bibliometric analysis and content analysis; therefore, this paper achieves methodological originality. This dual approach brings the advantage of carrying out a comprehensive and semantic exploration of the area of CSR-CR in a scientific and realistic manner. Admittedly, this work might contain subjective bias in terms of search term and paper selection; hence, triangulation could reduce the subjective bias to some degree.
Keywords: Corporate social responsibility, corporate reputation, bibliometric analysis, software data analysis.
45 Bioinformatic Analysis of Retroelement-Associated Sequences in Human and Mouse Promoters
Authors: Nadezhda M. Usmanova, Nikolai V. Tomilin
Abstract:
Mammalian genomes contain a large number of retroelements (SINEs, LINEs and LTRs) which could affect the expression of protein-coding genes through associated transcription factor binding sites (TFBS). The activity of retroelement-associated TFBS in many genes is confirmed experimentally, but their global functional impact remains unclear. Human SINEs (Alu repeats) and mouse SINEs (B1 and B2 repeats) are known to be clustered in GC-rich, gene-rich genome segments, consistent with the view that they can contribute to the regulation of gene expression. We have shown earlier that Alu are involved in the formation of cis-regulatory modules (clusters of TFBS) in human promoters, and other authors reported that Alu located near promoter CpG islands have an increased frequency of CpG dinucleotides, suggesting that these Alu are undermethylated. Human Alu and mouse B1/B2 elements have an internal bipartite promoter for RNA polymerase III containing a conserved sequence motif called the B-box, which can bind the basal transcription complex TFIIIC. It has been recently shown that TFIIIC binding to the B-box leads to the formation of a boundary which limits the spread of repressive chromatin modifications in S. pombe. SINE-associated B-boxes may have a similar function, but the conservation of TFIIIC binding sites in SINEs located near mammalian promoters has not been studied earlier. Here we analysed the abundance and distribution of retroelements (SINEs, LINEs and LTRs) in annotated sequences of the Database of mammalian transcription start sites (DBTSS). The fractions of SINEs in human and mouse promoters are slightly lower than in the genome as a whole, but >40% of human and mouse promoters contain Alu or B1/B2 elements within the -1000 to +200 bp interval relative to the transcription start site (TSS). Most of these SINEs are associated with distal segments of promoters (-1000 to -200 bp relative to TSS), indicating that their insertion at distances >200 bp upstream of the TSS is tolerated during evolution. The distribution of SINEs in promoters correlates negatively with the distribution of CpG sequences. Using analysis of the abundance of 12-mer motifs from the B1 and Alu consensus sequences in the genome and DBTSS, it has been confirmed that some subsegments of Alu and B1 elements are poorly conserved, which depends in part on the presence of CpG dinucleotides. One of these CpG-containing subsegments in B1 elements overlaps with the SINE-associated B-box, and it shows better conservation in DBTSS compared to genomic sequences. The conservation in DBTSS and in the genome of the B-box-containing segments of old (AluJ, AluS) and young (AluY) Alu repeats has also been studied, and it was found that the CpG sequence of the B-box of old Alu is better conserved in DBTSS than in the genome. This indicates that B-box-associated CpGs in promoters are better protected from methylation and mutation than B-box-associated CpGs in genomic SINEs. These results are consistent with the view that potential TFIIIC binding motifs in SINEs associated with human and mouse promoters may be functionally important. These motifs may protect promoters from repressive histone modifications which spread from adjacent sequences. This can potentially explain the well-known clustering of SINEs in GC-rich, gene-rich genome compartments and the existence of unmethylated CpG islands.
Keywords: Retroelement, promoter, CpG island, DNA methylation.
44 Lung Cancer Detection and Multi Level Classification Using Discrete Wavelet Transform Approach
Authors: V. Veeraprathap, G. S. Harish, G. Narendra Kumar
Abstract:
Uncontrolled growth of abnormal cells in the lung in the form of a tumor can be either benign (non-cancerous) or malignant (cancerous). Patients with Lung Cancer (LC) have an average life span expectancy of five years provided there is diagnosis, detection and prediction, which reduces the need for many treatment options carrying the risk of invasive surgery and increases the survival rate. Computed Tomography (CT), Positron Emission Tomography (PET), and Magnetic Resonance Imaging (MRI) are commonly used for earlier detection of cancer. A Gaussian filter along with a median filter is used for smoothing and noise removal, and Histogram Equalization (HE) for image enhancement gives the best results. Lung cavities are extracted and the background portion other than the two lung cavities is completely removed, with the right and left lungs segmented separately. Region property measurements, including area, perimeter, diameter, centroid and eccentricity, are computed for the segmented tumor image, while texture is characterized by Gray-Level Co-occurrence Matrix (GLCM) functions; feature extraction provides the Region of Interest (ROI) given as input to the classifier. Two levels of classification are employed: K-Nearest Neighbor (KNN) is used for determining the patient condition as normal or abnormal, while an Artificial Neural Network (ANN) is used for identifying the cancer stage. The Discrete Wavelet Transform (DWT) algorithm is used for the main feature extraction, leading to the best efficiency. The developed technology shows encouraging results for real-time information and online detection in future research.
Keywords: ANN, DWT, GLCM, KNN, ROI, artificial neural networks, discrete wavelet transform, gray-level co-occurrence matrix, k-nearest neighbor, region of interest.
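One path through the described pipeline can be sketched as below: a level-1 2-D DWT of a (toy) slice, GLCM texture features from the approximation band, and a KNN normal/abnormal decision; the preprocessing, segmentation, region properties and the ANN staging step are omitted, and recent PyWavelets/scikit-image APIs are assumed.

```python
# Hypothetical sketch: DWT + GLCM feature extraction feeding a KNN classifier.
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def dwt_glcm_features(image_uint8):
    cA, (cH, cV, cD) = pywt.dwt2(image_uint8.astype(float), "haar")   # level-1 DWT
    approx = np.clip(cA / cA.max() * 255, 0, 255).astype(np.uint8)
    glcm = graycomatrix(approx, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.array([cA.mean(), cH.std(), cV.std(), cD.std(), *texture])

rng = np.random.default_rng(5)
slices = rng.integers(0, 256, size=(40, 128, 128), dtype=np.uint8)    # toy "CT slices"
labels = rng.integers(0, 2, size=40)                                  # 0 = normal, 1 = abnormal
X = np.vstack([dwt_glcm_features(s) for s in slices])

knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print("predicted condition for first slice:", knn.predict(X[:1])[0])
```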
43 Variations of Body Mass Index with Age in Masters Athletes (World Masters Games)
Authors: Joe Walsh, Mike Climstein, Ian Timothy Heazlewood, Stephen Burke, Jyrki Kettunen, Kent Adams, Mark DeBeliso
Abstract:
Whilst there is growing evidence that activity across the lifespan is beneficial for improved health, there are also many changes involved with the aging process and subsequently the potential for reduced indices of health. The nexus between health, physical activity and aging is complex and has raised much interest in recent times due to the realization that a multifaceted approach is necessary in order to counteract a growing obesity epidemic. By investigating age-based trends within a population adhering to competitive sport at older ages, further insight might be gleaned to assist in understanding one of many factors influencing this relationship. BMI was derived using data gathered on a total of 6,071 masters athletes (51.9% male, 48.1% female) aged 25 to 91 years (mean = 51.5, s = ±9.7) competing at the Sydney World Masters Games (2009). Using linear and loess regression it was demonstrated that the usual tendency for the prevalence of higher BMI to increase with age was reversed in the sample. This reversal was repeated for both the male-only and female-only subsets of the sample participants, indicating the possibility of an improved BMI profile with increasing age for both the sample as a whole and these individual subgroups. This evidence of improved classification in one index of health (reduced BMI) for masters athletes, when compared to the general population, implies that there are either improved levels of this index of health with aging due to adherence to sport, or possibly that the reduced BMI is advantageous and contributes to this cohort adhering (or being attracted) to masters sport at older ages. Demonstration that this proportionately under-investigated World Masters Games population has an improved relationship between BMI and increasing age relative to the general population is of particular interest in the context of the measures being taken globally to curb an obesity epidemic.
Keywords: Aging, masters athlete, Quetelet Index, sport.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 167642 Value Index, a Novel Decision Making Approach for Waste Load Allocation
Authors: E. Feizi Ashtiani, S. Jamshidi, M. H. Niksokhan, A. Feizi Ashtiani
Abstract:
Waste load allocation (WLA) policies may use multiobjective optimization methods to find the most appropriate and sustainable solutions. These usually aim to simultaneously minimize two criteria, total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, also need to be minimized, further binary (two-objective) optimizations must be introduced through different scenarios. To reduce the number of calculation steps, this study presents the value index as an innovative decision-making approach. Since the value index contains both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index. This implies that the definition of different scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with the minimum total costs or environmental violations. This idea is tested for the Haraz River in northern Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-inequity are plotted separately, as in the conventional approach. In the second, the value-equity curve is derived. The comparative results show that the solutions lie in a similar range of inequity but with lower total costs. This is due to the freedom in environmental violation attained through the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This shortens the process of reaching the best solutions and may provide a better classification for scenario definition. It is also concluded that decision makers would do better to focus on the value index and to weight its components in order to find the most sustainable alternatives based on their requirements.
Keywords: Waste load allocation (WLA), Value index, Multi-objective particle swarm optimization (MOPSO), Haraz River, Equity.
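For context, the classical Streeter-Phelps dissolved-oxygen sag model referred to above can be written in its standard textbook form (the abstract does not give the paper's exact parameterization):

$$
D(t) = \frac{k_d L_0}{k_a - k_d}\left(e^{-k_d t} - e^{-k_a t}\right) + D_0\, e^{-k_a t},
\qquad \mathrm{DO}(t) = \mathrm{DO}_{\mathrm{sat}} - D(t),
$$

where D(t) is the dissolved-oxygen deficit, L_0 the initial biochemical oxygen demand, k_d the deoxygenation rate, k_a the reaeration rate, and D_0 the initial deficit.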
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 202741 Land Use Land Cover Changes in Response to Urban Sprawl within North-West Anatolia, Turkey
Authors: Melis Inalpulat, Levent Genc
Abstract:
In the present study, an attempt was made to quantify the Land Use Land Cover (LULC) transformation over three decades around the urban regions of the Balıkesir, Bursa, and Çanakkale provincial centers (PCs) in Turkey. Landsat imageries acquired in 1984, 1999, and 2014 were used to determine the LULC change. Images were classified using the supervised classification technique, and five main LULC classes were considered: forest (F), agricultural land (A), residential area (urban) - bare soil (R-B), water surface (W), and other (O). Change detection analyses were conducted for 1984-1999 and 1999-2014, and the results were evaluated. Conversions of LULC types to the R-B class were investigated. In addition, population changes (1985-2014) were assessed based on census data, the relations between population and urban area were established, and future populations and urban area needs were forecasted for 2030. The results of the LULC analysis indicated that urban areas, which fall under the R-B class, expanded in all PCs. During 1984-1999, the R-B class within the Balıkesir, Bursa, and Çanakkale PCs was found to have increased by 7.1%, 8.4%, and 2.9%, respectively. The trend continued in the 1999-2014 term, and the increments reached 15.7%, 15.5%, and 10.2% by the end of the 30-year period (1984-2014). Furthermore, since the A class in all provinces was found to be the principal contributor to the R-B class, urban sprawl led to the loss of agricultural lands. Moreover, the areas of the R-B classes were highly correlated with population within all PCs (R² > 0.992). Accordingly, both future populations and R-B class areas were forecasted. The estimated increases in the R-B class areas for the Balıkesir, Bursa, and Çanakkale PCs were 1,586 ha, 7,999 ha, and 854 ha, respectively. The forecasted values for 2030 are therefore 7,838 ha, 27,866 ha, and 2,486 ha for Balıkesir, Bursa, and Çanakkale, and thus 7.7%, 8.2%, and 9.7% more R-B class area is expected to be located in the PCs, in the same order.
Keywords: Landsat, LULC change, population, urban sprawl.
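As a rough sketch of the population-to-urban-area forecasting step described above (a linear fit projected to a 2030 population), the example below uses hypothetical placeholder figures rather than the reported census and R-B values:

```python
# Illustrative sketch only: the census counts and R-B areas below are hypothetical
# placeholders, not the figures reported for Balıkesir, Bursa, or Çanakkale.
import numpy as np

population = np.array([150_000, 210_000, 265_000])   # hypothetical census counts (1985, 1999, 2014)
urban_area_ha = np.array([2_100, 3_400, 4_600])      # hypothetical R-B class areas (ha)

# Linear model: urban area as a function of population
slope, intercept = np.polyfit(population, urban_area_ha, deg=1)

# Project the 2030 urban-area need from a forecasted 2030 population (also hypothetical)
population_2030 = 320_000
urban_area_2030 = slope * population_2030 + intercept
print(f"forecasted R-B area for 2030: {urban_area_2030:.0f} ha")
```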
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 143740 A Probabilistic Reinforcement-Based Approach to Conceptualization
Authors: Hadi Firouzi, Majid Nili Ahmadabadi, Babak N. Araabi
Abstract:
Conceptualization strengthens intelligent systems in generalization skill, effective knowledge representation, real-time inference, and the management of uncertain and indefinite situations, in addition to facilitating knowledge communication for learning agents situated in the real world. Concept learning introduces a form of abstraction by which the continuous state space is formed into entities called concepts, which are connected to the action space and thus, to some extent, represent that complex action space. Among computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on a probabilistic framework is proposed. This approach exploits and extends the mirror neurons' role in conceptualization for a reinforcement learning agent in nondeterministic environments. In the proposed method, instead of building a huge numerical knowledge base, the concepts are learnt gradually from rewards through interaction with the environment. Moreover, the probabilistic formation of the concepts is employed to deal with the uncertain and dynamic nature of real problems, in addition to providing the ability to generalize. These characteristics as a whole distinguish the proposed learning algorithm from both a pure classification algorithm and typical reinforcement learning. Simulation results show the advantages of the proposed framework in terms of convergence speed as well as generalization and asymptotic behavior, because both successful and failed attempts are utilized through the received rewards. Experimental results, on the other hand, show the applicability and effectiveness of the proposed method in continuous and noisy environments for a real robotic task such as maze navigation, as well as the benefits of implementing an incremental learning scenario in artificial agents.
Keywords: Concept learning, probabilistic decision making, reinforcement learning.
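As a loose, generic illustration of the idea of forming concepts probabilistically and attaching action values to them (not the authors' algorithm), the sketch below softly assigns a continuous state to concept prototypes and spreads the reward signal over the responsible concepts:

```python
# Loose, generic illustration of reward-driven, probabilistic concept use in RL;
# this is NOT the authors' algorithm, only a sketch of the general idea.
import numpy as np

rng = np.random.default_rng(1)
n_concepts, n_actions = 4, 2
prototypes = rng.uniform(0.0, 1.0, n_concepts)       # concept centers over a 1-D continuous state
q_values = np.zeros((n_concepts, n_actions))          # action values attached to each concept
alpha, temperature = 0.1, 0.1

def concept_probs(state: float) -> np.ndarray:
    """Soft (probabilistic) assignment of a continuous state to concepts."""
    logits = -np.abs(prototypes - state) / temperature
    w = np.exp(logits - logits.max())
    return w / w.sum()

def act(state: float) -> int:
    """Greedy action w.r.t. the concept-weighted action values."""
    return int(np.argmax(concept_probs(state) @ q_values))

def update(state: float, action: int, reward: float) -> None:
    """Spread the reward over concepts in proportion to their responsibility."""
    p = concept_probs(state)
    q_values[:, action] += alpha * p * (reward - q_values[:, action])

# Toy interaction loop: action 1 is rewarded on the right half of the state space
for _ in range(2000):
    s = rng.uniform(0.0, 1.0)
    a = act(s) if rng.uniform() > 0.1 else int(rng.integers(n_actions))  # epsilon-greedy
    r = 1.0 if (s > 0.5) == (a == 1) else 0.0
    update(s, a, r)

print(q_values.round(2))
```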
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1526