Search results for: classification algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3744

924 Nanoparticle-Based Histidine-Rich Protein-2 Assay for the Detection of the Malaria Parasite Plasmodium Falciparum

Authors: Yagahira E. Castro-Sesquen, Chloe Kim, Robert H. Gilman, David J. Sullivan, Peter C. Searson

Abstract:

Diagnosis of severe malaria is particularly important in highly endemic regions, since most patients are positive for parasitemia and treatment differs from that for non-severe malaria. Diagnosis can be challenging due to the prevalence of diseases with similar symptoms. Accurate diagnosis is increasingly important to avoid overprescribing antimalarial drugs, minimize drug resistance, and minimize costs. A nanoparticle-based assay for the detection and quantification of Plasmodium falciparum histidine-rich protein 2 (HRP2) in urine and serum is reported. The assay uses magnetic beads conjugated with anti-HRP2 antibody for protein capture and concentration, and antibody-conjugated quantum dots for optical detection. Western blot analysis demonstrated that the magnetic beads allow a 20-fold concentration of HRP2 protein in urine. This concentration effect was achieved because a large volume of urine can be incubated with the beads, and magnetic separation can be performed easily in minutes to isolate the beads carrying HRP2 protein. Magnetic beads and Quantum Dot 525 conjugated to anti-HRP2 antibodies allow the detection of HRP2 protein at low concentrations (0.5 ng mL-1) and quantification in the range of 33 to 2,000 ng mL-1, corresponding to the range associated with non-severe to severe malaria. This assay can be easily adapted to a non-invasive point-of-care test for the classification of severe malaria.

Keywords: HRP2 protein, malaria, magnetic beads, Quantum dots

Procedia PDF Downloads 306
923 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: structural health monitoring (SHM), bridge health monitoring (BHM), sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis

Procedia PDF Downloads 69
922 Convolutional Neural Networks versus Radiomic Analysis for Classification of Breast Mammogram

Authors: Mehwish Asghar

Abstract:

Breast cancer (BC) is a common type of cancer among women. Screening is usually performed using different imaging modalities such as magnetic resonance imaging, mammography, X-ray, CT, etc. Among these modalities, mammography is considered a powerful tool for the diagnosis and screening of breast cancer. Sophisticated machine learning approaches have shown promising results in complementing human diagnosis. Generally, machine learning methods can be divided into two major classes: radiomics analysis (RA), in which image features are extracted manually, and convolutional neural networks (CNN), in which the computer learns to recognize image features on its own. This research aims to improve the rate of early detection, thus reducing the mortality caused by breast cancer, through the latest advancements in computer science in general and machine learning in particular. It also aims to ease the burden on doctors by improving and automating the process of breast cancer detection. The work provides a comparative analysis of different techniques and models for detecting and classifying breast cancer, with the main goal of giving a detailed view of the results and performance of the different techniques. The purpose of this paper is to explore the potential of a convolutional neural network (CNN) both as a feature extractor and as a classifier, and to add a radiomics module so that its results can be compared with those of the deep learning techniques.
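
The radiomics side of this comparison can be illustrated with a few handcrafted first-order features; the following is a minimal sketch in plain Python (the toy "patch" and the particular feature set are illustrative assumptions, not the paper's actual pipeline):

```python
import math

def first_order_features(image):
    """Handcrafted (radiomic-style) first-order features of a 2D grey-level image."""
    pixels = [p for row in image for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    # Shannon entropy of the grey-level histogram
    hist = {}
    for p in pixels:
        hist[p] = hist.get(p, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in hist.values())
    return {"mean": mean, "variance": variance, "entropy": entropy}

# Toy 2x2 "mammogram patch" with grey levels 0..2
patch = [[0, 1], [1, 2]]
feats = first_order_features(patch)
```

A CNN, by contrast, would learn its own feature maps from labeled mammograms instead of relying on such predefined descriptors, which is exactly the trade-off the paper compares.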

Keywords: breast cancer (BC), machine learning (ML), convolutional neural network (CNN), radiomics, magnetic resonance imaging, artificial intelligence

Procedia PDF Downloads 197
921 A Preliminary Literature Review of Digital Transformation Case Studies

Authors: Vesna Bosilj Vukšić, Lucija Ivančić, Dalia Suša Vugec

Abstract:

While struggling to succeed in today’s complex market environment and to provide better customer experience and services, enterprises embrace digital transformation as a means of reaching competitiveness and fostering value creation. A digital transformation process consists of information technology implementation projects, as well as organizational factors such as top management support, a digital transformation strategy, and organizational changes. However, to the best of our knowledge, there is little evidence about digital transformation endeavors in organizations and how they are perceived: is digital transformation only about adopting digital technologies, or is a true organizational shift needed? In order to address this issue, and as the first step in our research project, a literature review was conducted. The analysis included case study papers from the Scopus and Web of Science databases. The following attributes were considered for the classification and analysis of papers: time component; country of case origin; case industry; and digital transformation concept comprehension, i.e., focus. The research showed that organizations, public as well as private, are aware of the necessity of change and undertake digital transformation projects. The changes concerning digital transformation affect both manufacturing and service-based industries. Furthermore, we discovered that organizations understand that, besides implementing technologies, organizational changes must also be adopted. However, with only 29 relevant papers identified, the research positions digital transformation as an unexplored and emerging phenomenon in information systems research. The scarcity of evidence-based papers calls for further examination of this topic on cases from practice.

Keywords: digital strategy, digital technologies, digital transformation, literature review

Procedia PDF Downloads 192
920 Design and Optimization of Open Loop Supply Chain Distribution Network Using Hybrid K-Means Cluster Based Heuristic Algorithm

Authors: P. Suresh, K. Gunasekaran, R. Thanigaivelan

Abstract:

Radio frequency identification (RFID) technology has been attracting considerable attention, with the expectation of improved supply chain visibility for consumer goods, apparel, and pharmaceutical manufacturers, as well as for retailers and government procurement agencies. It is also expected to improve the consumer shopping experience by making it more likely that the products consumers want to purchase are available. Recent announcements from some key retailers have brought interest in RFID to the forefront. For the open loop supply chain network problem, four approaches are proposed: a modified K-means cluster based heuristic, a hybrid genetic algorithm (GA) - simulated annealing (SA) approach, a hybrid K-means cluster based heuristic-GA, and a hybrid K-means cluster based heuristic-GA-SA. The study incorporated a uniform crossover operator and a combined crossover operator in the GAs for solving the open loop supply chain distribution network problem. The algorithms were tested on 50 randomly generated data sets and compared with each other. The results of the numerical experiments show that the hybrid K-means cluster based heuristic-GA-SA offers superior performance to the other methods for solving the open loop supply chain distribution network problem.
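
The K-means clustering stage of such a cluster-first, route-second heuristic can be sketched in plain Python with Lloyd's algorithm (the customer coordinates and initial centroids below are toy assumptions; the GA/SA stages of the hybrid are omitted):

```python
def kmeans(points, centroids, iters=20):
    """Plain Lloyd's algorithm: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d.index(min(d))].append(p)
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# Toy customer locations: two natural groups around x=0 and x=10
customers = [(0, 0), (1, 0), (10, 0), (11, 0)]
centroids, clusters = kmeans(customers, [(0, 0), (10, 0)])
```

Each resulting cluster would then be handed to the GA or GA-SA stage as a smaller distribution sub-network, which is what makes the hybrid tractable on larger instances.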

Keywords: RFID, supply chain distribution network, open loop supply chain, genetic algorithm, simulated annealing

Procedia PDF Downloads 139
919 Predicting the Compressive Strength of Geopolymer Concrete Using Machine Learning Algorithms: Impact of Chemical Composition and Curing Conditions

Authors: Aya Belal, Ahmed Maher Eltair, Maggie Ahmed Mashaly

Abstract:

Geopolymer concrete is gaining recognition as a sustainable alternative to conventional Portland cement concrete due to its environmentally friendly nature, which is a key goal of smart city initiatives. It has demonstrated its potential as a reliable material for the design of structural elements. However, the production of geopolymer concrete is hindered by batch-to-batch variations, which presents a significant challenge to its widespread adoption. Machine learning has had a profound impact on various fields by enabling models to learn from large datasets and predict outputs accurately. This paper proposes an integration between the current shift toward artificial intelligence and the composition of geopolymer mixtures in order to predict their mechanical properties. The study employs Python to develop a machine learning model, specifically decision trees. The percentage oxides and the chemical composition of the alkali solution, along with the curing conditions, are used as the independent input parameters, irrespective of the waste products used in the mixture, and the compressive strength of the mix is the output parameter. The results showed 90% agreement between the predicted and actual values, with the ratio of the sodium silicate to the sodium hydroxide solution being the dominant parameter in the mixture.
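
The core mechanism of a regression tree can be shown with a single split (a decision stump) on the dominant parameter; this sketch in plain Python uses made-up ratio/strength pairs, not the paper's dataset, and a full tree would simply recurse on each leaf:

```python
def fit_stump(x, y):
    """One-split regression tree: choose the threshold that minimises
    the summed squared error of the two leaf means."""
    best = None
    xs = sorted(set(x))
    for lo, hi in zip(xs, xs[1:]):
        t = (lo + hi) / 2
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((yi - ml) ** 2 for yi in left) + sum((yi - mr) ** 2 for yi in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi: ml if xi <= t else mr

# Hypothetical data: sodium silicate / NaOH ratio vs compressive strength (MPa)
ratio = [1.0, 1.5, 2.0, 2.5]
strength = [20.0, 25.0, 40.0, 45.0]
predict = fit_stump(ratio, strength)
```

Here the best split lands between 1.5 and 2.0, so low-ratio mixes predict the low-strength leaf mean and high-ratio mixes the high-strength one, mirroring how the dominant input drives the tree's first split.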

Keywords: decision trees, geopolymer concrete, machine learning, smart cities, sustainability

Procedia PDF Downloads 56
918 Client Hacked Server

Authors: Bagul Abhijeet

Abstract:

Background: The client-server model is the backbone of today’s internet communication, and under it a normal user has no control over a particular website or server. Using the same processing model, however, one can gain unauthorized access to a server. This paper discusses an application scenario of hacking a simple website or server through unauthorized access to its database. The application autonomously takes direct access to a simple website or server and retrieves the essential information maintained by the administrator. In this system, the IP address of the server is given as input in order to retrieve the user ID and password of the server. This breaks the administrative security of the server and gains control of the server database, while a virus helps to escape server security by crashing the whole server. Objective: To understand and control malicious attacks, to help protect government websites, and to uncover illegal hacker activity. Results: After implementing different hacking as well as non-hacking techniques, the system hacks simple websites with normal security credentials. It provides access to the server database and allows the attacker to perform database operations from a client machine. Experiments with the application on different servers produced satisfactory results. Conclusion: We have presented an approach to hacking a server that includes both hacking and non-hacking methods. These algorithms and methods provide an efficient way to access a server database; breaking network security in this controlled way makes it possible to introduce new and better security frameworks. The term 'hacking' should not be considered only in terms of illegal activity, but should also be used to strengthen the global network.

Keywords: Hacking, Vulnerabilities, Dummy request, Virus, Server monitoring

Procedia PDF Downloads 231
917 Landscape Classification in North of Jordan by Integrated Approach of Remote Sensing and Geographic Information Systems

Authors: Taleb Odeh, Nizar Abu-Jaber, Nour Khries

Abstract:

The southern part of the Wadi Al Yarmouk catchment area covers the north of Jordan. It lies between latitudes 32° 20’ and 32° 45’ N and longitudes 35° 42’ and 36° 23’ E and has an area of about 1426 km2. It has high-relief topography, with elevations varying between 50 and 1100 meters above sea level. The variations in topography produce different landform units, climatic zones, land covers, and plant species, and as a result different landscape units exist in the region. Spatial planning is a major challenge in such a vital area for Jordan and could not be achieved without determining these landscape units. An integrated approach of remote sensing and geographic information systems (GIS) is an optimal tool to investigate and map the landscape units of such a complicated area. Remote sensing has the capability to collect different land surface data over large landscape areas accurately and at different time periods, while GIS has the ability to store these land surface data, analyze them spatially, and present them in the form of professional maps. We generated geo-land surface data that include land cover, rock units, soil units, plant species, and a digital elevation model using an ASTER image and Google Earth, while the spatial analysis of the geo-data was done with ArcGIS 10.2 software. We found that there are twenty-two different landscape units in the study area, which have to be considered in any spatial planning in order to avoid environmental problems.
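
The GIS operation at the heart of this workflow is a raster overlay: each cell's landscape unit is the unique combination of its class in every input layer. A minimal sketch in plain Python (the 2x2 rasters and class names are invented for illustration; a real analysis would run this over the full ASTER-derived grids):

```python
def overlay(*layers):
    """Raster overlay: map each cell's combination of layer classes
    to a numbered landscape unit."""
    rows, cols = len(layers[0]), len(layers[0][0])
    units = {}   # class combination -> unit id
    out = []
    for r in range(rows):
        out_row = []
        for c in range(cols):
            combo = tuple(layer[r][c] for layer in layers)
            if combo not in units:
                units[combo] = len(units) + 1
            out_row.append(units[combo])
        out.append(out_row)
    return out, units

# Toy 2x2 rasters: an elevation-class layer and a land-cover layer
elevation = [["low", "low"], ["high", "high"]]
cover     = [["forest", "crop"], ["forest", "crop"]]
unit_map, units = overlay(elevation, cover)
```

With more layers (rock units, soil units, plant species), the number of distinct combinations grows, which is how a count such as the paper's twenty-two landscape units emerges from the overlay.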

Keywords: landscape, spatial planning, GIS, spatial analysis, remote sensing

Procedia PDF Downloads 507
916 Computer Countenanced Diagnosis of Skin Nodule Detection and Histogram Augmentation: Extracting System for Skin Cancer

Authors: S. Zith Dey Babu, S. Kour, S. Verma, C. Verma, V. Pathania, A. Agrawal, V. Chaudhary, A. Manoj Puthur, R. Goyal, A. Pal, T. Danti Dey, A. Kumar, K. Wadhwa, O. Ved

Abstract:

Background: Skin cancer is a pressing topic in medical science, and its spread is drastically affecting health and well-being worldwide. Methods: The extracted image of a skin tumor cannot be used directly for diagnosis, since the stored image contains disturbances. Our approach first locates the region of interest in the extracted appearance of the skin, and an image-partitioning (segmentation) model is presented to sort out the disturbances in the picture. Results: After partitioning, feature extraction is performed using a genetic algorithm, and finally classification is performed between the trained and test data to evaluate images at a scale that helps doctors make the right prediction. To improve on the existing system, we set our objectives through this analysis: the efficiency of the natural-selection process and histogram enrichment are essential in that respect, and the genetic algorithm is used to reduce the false-positive rate while preserving accuracy. Conclusions: The objective of this work is to improve effectiveness, and the genetic algorithm accomplishes this task well, bringing down the false-positive rate. The paper combines deep learning and medical image processing, which provides superior accuracy, and the proportional types of processing allow reusability without errors.
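
The genetic-algorithm feature selection described above can be sketched in plain Python; the chromosome encoding, toy fitness function, operator rates, and "informative features" below are all illustrative assumptions, not the paper's imaging pipeline:

```python
import random

def ga_select(n_features, fitness, pop=8, gens=30, seed=42):
    """Minimal GA for feature selection: chromosomes are 0/1 masks, with
    truncation selection, one-point crossover, bit-flip mutation and elitism."""
    rng = random.Random(seed)
    popn = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    best = max(popn, key=fitness)
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[: pop // 2]
        children = []
        while len(children) < pop - 1:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                      # bit-flip mutation
                i = rng.randrange(n_features)
                child[i] = 1 - child[i]
            children.append(child)
        popn = [best] + children                        # elitism keeps the best mask
        best = max(popn, key=fitness)
    return best

# Toy fitness: features 0 and 2 are "informative"; each selected feature costs 0.1
def fitness(mask):
    return mask[0] + mask[2] - 0.1 * sum(mask)

best = ga_select(5, fitness)
```

In the paper's setting the fitness would instead score a classifier trained on the selected image features, rewarding accuracy and penalizing false positives.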

Keywords: computer-aided system, detection, image segmentation, morphology

Procedia PDF Downloads 125
915 Investigation of Cold Atmospheric Plasma Exposure Protocol on Wound Healing in Diabetic Foot Ulcer

Authors: P. Akbartehrani, M. Khaledi Pour, M. Amini, M. Khani, M. Mohajeri Tehrani, E. Ghasemi, P. Charipoor, B. Shokri

Abstract:

A common problem among diabetic patients is foot ulcers, which are chronic and require specialized treatment. Previous studies show that cold atmospheric plasma (CAP) has beneficial effects on wound healing and infection. Nevertheless, the comparison of different CAP exposure protocols in diabetic ulcer wound healing remained to be studied. This study aims to determine the effect of two different exposure protocols on wound healing in diabetic ulcers. A prospective, randomized clinical trial was conducted at two clinics. Diabetic patients with grade 1 and grade 2 Wagner classification diabetic foot ulcers were divided into two study groups. One group was treated according to the first protocol: treatment of the wounds with an argon-generated cold atmospheric plasma jet once a week for five weeks in a row. The other group was treated according to the second protocol: treatment every three days for five weeks in a row. The wounds were treated for 40 seconds per cubic centimeter, with the nozzle tip moved non-localized 1 cm above the wounds. A patient with one or more wounds could participate in both groups, as wounds were randomized separately, allowing a participant to be treated several times during the study. The study's significant findings were two different reduction rates in wound size and microbial load, and two different healing speeds. The study concludes that CAP therapy according to the second protocol yields faster healing and greater reductions in the wound sizes and microbial loads of foot ulcers in diabetic patients.

Keywords: wound healing, diabetic ulcers, cold atmospheric plasma, cold argon jet

Procedia PDF Downloads 195
914 Factors Associated with Weight Loss Maintenance after an Intervention Program

Authors: Filipa Cortez, Vanessa Pereira

Abstract:

Introduction: The main challenge of obesity treatment is long-term weight loss maintenance. The 3 phases method is a weight loss program that combines a low-carb and moderately high-protein diet, food supplements, and a weekly one-to-one consultation with a certified nutritionist. Sustained weight control is the ultimate goal of phase 3. The success criterion was a minimum loss of 10% of initial weight and its maintenance after 12 months. Objective: The aim of this study was to identify factors associated with successful weight loss maintenance 12 months after the end of the 3 phases method. Methods: The study included 199 subjects who achieved their weight loss goal (phase 3). Weight and body mass index (BMI) were obtained at baseline and every week until the end of the program. Therapeutic adherence was measured weekly on a Likert scale from 1 to 5. Subjects were considered in compliance with the nutritional recommendations and supplementation when their classification was ≥ 4. Twelve months after the end of the method, the current weight and the number of previous weight-loss attempts were collected by telephone interview. Statistical significance was assumed at p-values < 0.05. Statistical analyses were performed using SPSS software v.21. Results: 65.3% of subjects met the success criterion. The factors which displayed significant prediction of weight loss maintenance were a greater initial percentage weight loss during the weight loss intervention (OR = 1.44) and a higher number of consultations in phase 3 (OR = 1.10). Conclusion: These findings suggest that the percentage weight loss during the intervention and the number of consultations in phase 3 may facilitate maintenance of weight loss after the 3 phases method.
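
The odds ratios reported above come from a regression model, but their interpretation can be illustrated with a raw 2x2 odds ratio; the counts below are hypothetical, not the study's data:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
                  maintained   regained
    exposed           a            b
    unexposed         c            d
    OR = (a/b) / (c/d) = (a*d) / (b*c)
    """
    return (a * d) / (b * c)

# Hypothetical counts: high initial weight loss vs maintenance at 12 months
or_high_loss = odds_ratio(60, 20, 40, 30)   # odds 3.0 vs odds 4/3
```

An OR above 1 (here 2.25 on the toy counts, 1.44 in the study) means the factor is associated with greater odds of maintaining the weight loss.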

Keywords: obesity, weight maintenance, low-carbohydrate diet, dietary supplements

Procedia PDF Downloads 127
913 A Neuropsychological Investigation of the Relationship between Anxiety Levels and Loss of Inhibitory Cognitive Control in Ageing and Dementia

Authors: Nasreen Basoudan, Andrea Tales, Frederic Boy

Abstract:

Non-clinical anxiety may comprise state anxiety, temporarily experienced anxiety related to a specific situation, and trait anxiety, a longer-lasting response or a general disposition to anxiety. While temporary and occasional anxiety, whether as a mood state or a personality dimension, is normal, non-clinical anxiety may influence many more components of information processing than previously recognized. In ageing- and dementia-related research, disease characterization now involves attempts to understand a much wider range of brain functions, such as loss of inhibitory control, rather than the more common focus on memory and cognition. However, in many studies the tendency has been to include individuals with clinical anxiety disorders while excluding persons with lower levels of state or trait anxiety. Loss of inhibitory cognitive control can lead to behaviors such as aggression, reduced sensitivity to others, and sociopathic thoughts and actions. Anxiety has also been linked to inhibitory control, with research suggesting that people with anxiety are less capable of inhibiting their emotions than the average person. This study investigates the relationship between anxiety and loss of inhibitory control in younger and older adults, using a variety of questionnaires and computer-based tests. Based on the premise that, irrespective of classification, anxiety is associated with a wide range of physical, affective, and cognitive responses, this study explores evidence indicative of the potential influence of anxiety per se on loss of inhibitory control, in order to contribute to discussion and appropriate consideration of anxiety-related factors in methodological practice.

Keywords: anxiety, ageing, dementia, inhibitory control

Procedia PDF Downloads 220
912 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique

Authors: Prabha Rohatgi

Abstract:

To obtain efficient control over the huge inventory of drugs in the pharmacy department of a hospital, the medicines are generally categorized first on the basis of their cost, 'ABC' (Always Better Control), and then on the basis of their criticality, 'VED' (Vital, Essential, Desirable), for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize the inventory investment, hospital management may like to keep the medicines inventory low, as medicines are perishable items. The main aim of every hospital is to provide better services to patients under limited resources. To achieve a satisfactory level of health care services for outdoor patients, a hospital has to keep an eye on the wastage of medicines, because the expiry of medicines causes a great loss of money that was allocated for a particular period of time. The objective of this study is to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with the wastage of money due to the expiry of medicines, an inventory control model is used as an estimation tool, and then a nonlinear programming technique is applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations are given, showing that by using scientific methods in hospital services we can manage inventory more effectively under limited resources and thus provide better health care services. Secondary data were collected from a hospital to give empirical evidence.
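
The starting point of such models is the classical economic order quantity, Q* = sqrt(2DS/H) for annual demand D, ordering cost S, and holding cost H. The sketch below adds a simple proportional-scaling heuristic for a shared budget; the drug parameters are invented, and the paper's actual nonlinear programme would be more sophisticated than this scaling rule:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Classical economic order quantity: Q* = sqrt(2DS/H)."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

def budget_scaled_eoq(items, budget):
    """If the unconstrained EOQs overshoot the shared budget, scale all
    order quantities down proportionally (a simple constrained heuristic)."""
    qs = [eoq(d, s, h) for d, s, h, _ in items]
    invested = sum(q * price for q, (_, _, _, price) in zip(qs, items))
    if invested <= budget:
        return qs
    scale = budget / invested
    return [q * scale for q in qs]

# Hypothetical drug items: (annual demand, order cost, holding cost, unit price)
items = [(1000, 10, 2, 5.0), (500, 10, 2, 20.0)]
qs = budget_scaled_eoq(items, budget=400.0)
```

In the multi-item ABC-VED setting, the categories flagged for intensive control would get tighter budgets and review, while the nonlinear programme balances ordering, holding, and expiry-wastage costs across all items.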

Keywords: ABC-VED inventory classification, multi item inventory problem, nonlinear programming technique, optimization of EOQ

Procedia PDF Downloads 232
911 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify the activities of network traffic into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps in improving the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (normal, attack) has been implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full feature set of the dataset with the same algorithm.
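
The information-gain step can be sketched in plain Python: IG(feature) = H(labels) - Σᵥ p(v)·H(labels | feature = v), so a feature that perfectly separates normal from attack traffic scores the full label entropy, while an uninformative one scores zero. The four-record "traffic" below is a toy stand-in for NSL-KDD:

```python
import math

def entropy(labels):
    n = len(labels)
    counts = {}
    for l in labels:
        counts[l] = counts.get(l, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def information_gain(feature, labels):
    """IG(feature) = H(labels) - sum_v p(v) * H(labels | feature = v)."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [l for f, l in zip(feature, labels) if f == v]
        gain -= len(subset) / n * entropy(subset)
    return gain

# Toy traffic records: label 0 = normal, 1 = attack
labels    = [0, 0, 1, 1]
duration  = [0, 0, 1, 1]   # perfectly predictive feature: IG = 1 bit
src_bytes = [0, 1, 0, 1]   # uninformative feature: IG = 0 bits
```

Features ranked by IG would then be kept for the K-means clustering stage, which is why the reduced feature set trains faster without losing detection accuracy.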

Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka

Procedia PDF Downloads 273
910 Phosphate Bonded Hemp (Cannabis sativa) Fibre Composites

Authors: Stephen O. Amiandamhen, Martina Meinken, Luvuyo Tyhoda

Abstract:

The properties of hemp (Cannabis sativa) in phosphate bonded composites were investigated in this research. Hemp hurds were collected from the Hemporium institute for research, South Africa. The hurds were air-dried and shredded using a hammer mill. The shives were screened into different particle sizes and were treated separately with 5% solutions of acetic anhydride and sodium hydroxide. The binding matrix was prepared using a reactive magnesia, phosphoric acid, class S fly ash, and unslaked lime. The treated and untreated hemp fibres were mixed thoroughly in different ratios with the inorganic matrix. Boric acid and excess water were used to retard and control the rate of the reaction and the setting of the binder. The hemp composite was formed in a rectangular mold and compressed at room temperature at a pressure of 100 kPa. After de-molding, the composites were cured in a conditioning room for 96 h. Physical and mechanical tests were conducted to evaluate the properties of the composites. A central composite design (CCD) was used to determine the best conditions for optimizing the performance of the composites; these combinations were then applied in the production of the composites, and the properties were evaluated. Scanning electron microscopy (SEM) was used for an advanced examination of the behavior of the composites, while X-ray diffractometry (XRD) was used to analyze the reaction pathway in the composites. The results revealed that all properties of the phosphate bonded hemp composites exceeded the LD-1 grade classification of particleboards. The proposed product can be used for ceilings, partitioning, wall claddings, and underlayment.
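
A central composite design consists of 2^k factorial corners, 2k axial (star) points, and centre points in coded units; generating one is mechanical and can be sketched in plain Python (the k = 2 case and α ≈ 1.414, the rotatable value for two factors, are illustrative, not the paper's actual factor settings):

```python
from itertools import product

def central_composite(k, alpha=1.414, n_center=1):
    """Coded-unit points of a central composite design for k factors:
    2**k factorial corners, 2*k axial points at +/-alpha, n_center centre points."""
    corners = [list(p) for p in product((-1.0, 1.0), repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center

design = central_composite(k=2)   # 4 corners + 4 axial + 1 centre = 9 runs
```

Each coded point is then mapped to real factor levels (e.g. fibre ratio, binder ratio) and the fitted response surface identifies the best production conditions.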

Keywords: CCD, fly ash, magnesia, phosphate bonded hemp composites, phosphoric acid, unslaked lime

Procedia PDF Downloads 419
909 Prevalence of Workplace Bullying in Hong Kong: A Latent Class Analysis

Authors: Catalina Sau Man Ng

Abstract:

Workplace bullying is generally defined as a form of direct and indirect maltreatment at work, including harassing, offending, socially isolating someone, or negatively affecting someone’s work tasks. Workplace bullying is unfortunately commonplace around the world, which makes it a social phenomenon worth researching. However, the measurements and estimation methods of workplace bullying differ across studies, leading to dubious results. Hence, this paper attempts to examine the prevalence of workplace bullying in Hong Kong using a latent class analysis approach. It is often argued that the traditional classification of workplace bullying into the dichotomous 'victims' and 'non-victims' may not fully represent the complex phenomenon of bullying. By treating workplace bullying as one latent variable and examining the potential categorical distribution within the latent variable, a more thorough understanding of workplace bullying in real-life situations may be provided. As a result, this study adopts a latent class analysis method, which has previously been shown to offer higher construct and predictive validity. In the present study, a representative sample of 2814 employees (male: 54.7%, female: 45.3%) in Hong Kong was recruited. The participants were asked to fill in a self-reported questionnaire which included measurements such as the Chinese Workplace Bullying Scale (CWBS) and the Chinese version of the Depression Anxiety Stress Scale (DASS). It is estimated that four latent classes will emerge: 'non-victims', 'seldom bullied', 'sometimes bullied', and 'victims'. The results for each latent class and the implications of the study are also discussed in this working paper.
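
The classification step of a latent class model can be illustrated with its E-step: given class priors and per-class item endorsement probabilities, each response pattern gets a posterior probability of belonging to each class. The two classes, three items, and all parameter values below are hypothetical (a real LCA would estimate them by EM from the questionnaire data):

```python
def posterior(pattern, priors, probs):
    """Posterior class membership of one binary response pattern under a
    latent class model with given priors and item probabilities."""
    likes = []
    for prior, p in zip(priors, probs):
        like = prior
        for x, pi in zip(pattern, p):
            like *= pi if x == 1 else (1 - pi)
        likes.append(like)
    total = sum(likes)
    return [l / total for l in likes]

# Two hypothetical classes over 3 bullying items: "non-victims" vs "victims"
priors = [0.8, 0.2]
probs = [
    [0.05, 0.05, 0.05],   # non-victims rarely endorse the items
    [0.90, 0.90, 0.90],   # victims usually endorse the items
]
post = posterior([1, 1, 1], priors, probs)   # respondent endorsing all items
```

This is why LCA can separate 'seldom bullied' from 'victims' rather than forcing a single victim/non-victim cut-off: each respondent is assigned to the class with the highest posterior.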

Keywords: latent class analysis, prevalence, survey, workplace bullying

Procedia PDF Downloads 301
908 Research on the United Navigation Mechanism of Land, Sea and Air Targets under Multi-Sources Information Fusion

Authors: Rui Liu, Klaus Greve

Abstract:

Navigation information is a kind of dynamic geographic information, and a navigation information system is a special kind of geographic information system. At present, there is much research on the centralized management and cross-integrated application of basic geographic information. However, the idea of information integration and sharing has not been deeply applied to research on navigation information services, and the imperfection of navigation target coordination and navigation information sharing mechanisms under certain navigation tasks has greatly affected the reliability and scientific soundness of navigation services such as path planning. Considering this, the project intends to study multi-source information fusion and a multi-objective united navigation information interaction mechanism. First, we investigate the actual needs of navigation users in different areas and establish a preliminary navigation information classification and importance-level model. We then analyze the characteristics of remote sensing and GIS vector data and design the fusion algorithm with a view to improving positioning accuracy and extracting navigation environment data. Finally, the project intends to analyze the features of the navigation information of land, sea, and air navigation targets, design a united navigation data standard and a navigation information sharing model under certain navigation tasks, and establish a test navigation system for united navigation simulation experiments. The aim of this study is to explore the theory of united navigation services and to optimize the navigation information service model, which will lay the theoretical and technological foundation for the united navigation of land, sea, and air targets.

Keywords: information fusion, united navigation, dynamic path planning, navigation information visualization

Procedia PDF Downloads 255
907 Determining G-γ Degradation Curve in Cohesive Soils by Dilatometer and in situ Seismic Tests

Authors: Ivandic Kreso, Spiranec Miljenko, Kavur Boris, Strelec Stjepan

Abstract:

This article discusses the possibility of using dilatometer tests (DMT) together with in situ seismic tests (MASW) to obtain the shape of the G-γ degradation curve in cohesive soils (clay, silty clay, silt, clayey silt and sandy silt). The MASW test provides the small-strain soil stiffness (G0, from the shear wave velocity vs) at very small strains, while the DMT provides the stiffness of the soil at 'working strains' (MDMT). At different test locations, the dilatometer shear stiffness of the soil has been determined using the theory of elasticity and compared with the theoretical G-γ degradation curve in order to determine the typical range of shear strain for different types of cohesive soil. The analysis also includes factors that influence the shape of the degradation curve (G-γ) and the dilatometer modulus (MDMT), such as the overconsolidation ratio (OCR), the plasticity index (IP) and the vertical effective stress in the soil (σv0′). A parametric study defines the range of shear strain γDMT and the GDMT/G0 ratio depending on the classification of the cohesive soil (clay, silty clay, clayey silt, silt and sandy silt), its density (loose, medium dense and dense) and its stiffness (soft, medium hard and hard). The article illustrates the potential of using MASW and DMT to obtain the G-γ degradation curve in cohesive soils.
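The abstract does not state which degradation law is fitted; a common choice is the Hardin-Drnevich hyperbola, sketched below. The function names and the reference-strain parameterization are our assumptions for illustration, not the authors' method:

```python
def degradation_ratio(gamma, gamma_ref):
    """Hyperbolic stiffness reduction G/G0 = 1 / (1 + gamma/gamma_ref)
    (Hardin-Drnevich form; gamma_ref is the reference shear strain)."""
    return 1.0 / (1.0 + gamma / gamma_ref)

def strain_at_ratio(ratio, gamma_ref):
    """Invert the hyperbola: the shear strain at which G/G0 equals `ratio`,
    e.g. the strain gamma_DMT corresponding to a measured G_DMT/G0."""
    return gamma_ref * (1.0 / ratio - 1.0)
```

With a MASW-derived G0 and a DMT-derived GDMT, `strain_at_ratio(G_DMT / G0, gamma_ref)` locates the working strain on the assumed curve.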

Keywords: dilatometer testing, MASW testing, shear wave, soil stiffness, stiffness reduction, shear strain

Procedia PDF Downloads 289
906 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more often in real applications, and multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide different views of the data, so linking multi-source data is essential to improve clustering performance. In practice, however, multi-source data is often heterogeneous, uncertain, and large, which is considered a major challenge. Ensemble learning is a versatile machine learning paradigm in which learners can work in parallel on big data, and clustering ensembles have been shown to outperform standard clustering algorithms in terms of accuracy and robustness. However, most traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis: the fuzzy optimized multi-objective clustering ensemble (FOMOCE). First, a clustering ensemble mathematical model based on the structure of a multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, the clustering algorithms, and the base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. Experiments were performed on standard sample data sets, and the results demonstrate the superior performance of FOMOCE compared to existing clustering ensemble methods and multi-source clustering methods.
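The FOMOCE algorithm itself is not reproduced in the abstract, but the core step of many clustering ensembles — combining base clusterings through a co-association matrix — can be sketched as follows (a generic illustration, not the authors' method):

```python
import numpy as np

def co_association(labelings):
    """Build the co-association matrix of an ensemble: entry (i, j) is the
    fraction of base clusterings that place samples i and j in the same cluster."""
    n = len(labelings[0])
    M = np.zeros((n, n))
    for labels in labelings:
        labels = np.asarray(labels)
        # Outer comparison: 1 where the two samples share a cluster label.
        M += (labels[:, None] == labels[None, :]).astype(float)
    return M / len(labelings)
```

A consensus partition is then obtained by clustering this matrix, e.g. by thresholding it or feeding it to a hierarchical linkage.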

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 156
905 Application of the LAWA Classification for Water Assessment by Nutrient Elements: The Case of the Oran Sebkha Basin

Authors: Boualla Nabila

Abstract:

The increasing demand for water, whether for drinking water supply or for agricultural and industrial use, requires a very thorough hydrochemical study in order to better protect and manage this resource. Oran is a city with comparatively poor water quality, and its growing population may put additional stress on natural waters by further impairing it. A water sampling campaign covering 55 points that capture different levels of the aquifer system was carried out for chemical analysis of nutrient elements. The results allowed us to approach the contamination problem through the largely uniform nationwide LAWA approach (Länderarbeitsgruppe Wasser), based on the EU CIS guidance, which was applied for the identification of pressures and impacts and allows for easy comparison. Groundwater samples were also analyzed for physico-chemical parameters such as pH, sodium, potassium, calcium, magnesium, chloride, sulphate, carbonate and bicarbonate. The analytical results of this hydrochemical study were interpreted using the Durov diagram. Based on these representations, the anomalously high groundwater salinity observed in the Oran Sebkha basin was explained by the high chloride concentration and the presence of an inverse cation exchange reaction. The Durov plot revealed that the groundwater has evolved from Ca-HCO3 recharge water, through mixing with pre-existing groundwater, into mixed waters of Mg-SO4 and Mg-Cl types that eventually reached a final stage of evolution represented by a Na-Cl water type.
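The hydrochemical facies labels used above (Ca-HCO3, Mg-SO4, Na-Cl, ...) follow from the dominant cation and anion in milliequivalents per litre. A minimal sketch of that labeling step (the ion set and units are our assumptions for illustration):

```python
def water_type(meq):
    """Return the dominant-ion water type, e.g. 'Na-Cl', from a dict of
    major-ion concentrations in meq/L."""
    cations = {ion: meq[ion] for ion in ('Ca', 'Mg', 'Na')}
    anions = {ion: meq[ion] for ion in ('HCO3', 'SO4', 'Cl')}
    return f"{max(cations, key=cations.get)}-{max(anions, key=anions.get)}"
```

A full Durov interpretation also plots the percentage milliequivalents of the two ion triangles against pH and TDS; the sketch covers only the facies name.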

Keywords: contamination, water quality, nutrient elements, LAWA approach, Durov diagram

Procedia PDF Downloads 250
904 Convolutional Neural Networks-Optimized Text Recognition with Binary Embeddings for Arabic Expiry Date Recognition

Authors: Mohamed Lotfy, Ghada Soliman

Abstract:

Recognizing Arabic dot-matrix digits is a challenging problem due to the unique characteristics of dot-matrix fonts, such as irregular dot spacing and varying dot sizes. This paper presents an approach for recognizing Arabic digits printed in dot-matrix format. The proposed model is based on a Convolutional Neural Network (CNN) that takes the dot-matrix image as input and generates embeddings that are rounded to binary representations of the digits. The binary embeddings are then used to perform Optical Character Recognition (OCR) on the digit images. To overcome the limited availability of dotted Arabic expiration-date images, we developed a TrueType Font (TTF) for generating synthetic images of Arabic dot-matrix characters. The model was trained on 3,287 synthetic images and tested on 658 synthetic images representing realistic expiration dates from 2019 to 2027 in the format yyyy/mm/dd. Our model achieved an accuracy of 98.94% on expiry-date recognition in the Arabic dot-matrix format using fewer parameters and less computational resources than traditional CNN-based models. By investigating and presenting our findings comprehensively, we aim to contribute substantially to the field of OCR and pave the way for advancements in Arabic dot-matrix character recognition. The proposed approach is not limited to Arabic dot-matrix digit recognition but can also be extended to text recognition tasks such as text classification and sentiment analysis.
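The decoding idea — round a real-valued embedding to bits and match it to the nearest binary code — can be sketched as follows. The 4-bit codebook here is hypothetical; the paper does not specify its code length or code assignment:

```python
import numpy as np

# Hypothetical codebook: digit d encoded as its 4-bit binary representation.
CODEBOOK = np.array([[int(b) for b in f"{d:04b}"] for d in range(10)])

def decode(embedding):
    """Round a real-valued embedding to bits, then return the digit whose
    code is nearest in Hamming distance."""
    bits = np.round(embedding).clip(0, 1)
    hamming = np.abs(CODEBOOK - bits).sum(axis=1)
    return int(np.argmin(hamming))
```

The nearest-code lookup makes the decoder tolerant of embeddings that round to a bit pattern slightly off the intended code.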

Keywords: computer vision, pattern recognition, optical character recognition, deep learning

Procedia PDF Downloads 56
903 Using Monte Carlo Model for Simulation of Rented Housing in Mashhad, Iran

Authors: Mohammad Rahim Rahnama

Abstract:

This study employs the Monte Carlo method to simulate rented housing in Mashhad, the second largest city in Iran. A total of 334 rental residential units in Mashhad, including both apartments and houses (villas), were randomly selected from advertisements placed in Khorasan newspapers during July and August 2015. To simulate the monthly rent price, a rent index was calculated by combining the mortgage deposit and the rent price. In the next step, the relation between the floor area and the number of bedrooms of each unit, for both apartments and houses, was estimated through multivariate regression in SPSS and coded in XML. The initial model was called using the simulation button in SPSS and simulated using triangular and binomial algorithms. The findings revealed an average simulated rental index of 548.5$ per month. Examining the sensitivity of the rental index to the number of bedrooms, we found that, first, 97% of simulated units have three bedrooms and, second, as the number of bedrooms increases from one to three, the share of units renting for less than 200$ decreases from 10% to 0, while the share of units renting for more than 571.4$ increases from 37% to 48%. In light of these findings, it becomes clear that planning to build rental residential units, overseeing rent prices, and granting subsidies to rental units, particularly two-bedroom apartments, constitute a sound policy for regulating residential units in Mashhad.
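The triangular draw underlying such a simulation can be sketched with Python's standard library. The bounds below are illustrative assumptions, with the mode placed near the reported 548.5$ average; the paper's actual parameters come from its SPSS regression model:

```python
import random

def simulate_rent(n, low, mode, high, seed=0):
    """Monte Carlo draws of a monthly rent index from a triangular distribution."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [rng.triangular(low, high, mode) for _ in range(n)]

samples = simulate_rent(10_000, 200.0, 548.5, 1200.0)
mean_rent = sum(samples) / len(samples)  # expected near (low + mode + high) / 3
```

The sample mean converges to (low + mode + high) / 3, so sensitivity analyses can compare how shifting the mode (e.g. by bedroom count) moves the simulated rent distribution.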

Keywords: Mashhad, Monte Carlo, simulation, rent price, residential unit

Procedia PDF Downloads 248
902 Measurement of Solids Concentration in Hydrocyclone Using ERT: Validation Against CFD

Authors: Vakamalla Teja Reddy, Narasimha Mangadoddy

Abstract:

Hydrocyclones are used to separate particles into different size fractions in the mineral processing, chemical and metallurgical industries. High-speed video imaging, Laser Doppler Anemometry (LDA), and X-ray and gamma-ray tomography have previously been used to measure two-phase flow characteristics in the cyclone. However, investigation of the solids flow characteristics inside the cyclone is often impeded by the nature of the process, owing to slurry opaqueness and the solid metal walls of the vessel. In this work, dual-plane high-speed electrical resistance tomography (ERT) is used to measure the internal flow dynamics of the hydrocyclone in situ. Experiments are carried out in a 3-inch hydrocyclone for feed solids concentrations in the range of 0-50%. The ERT data analysis, through an optimized FEM mesh size and reconstruction algorithms applied to air-core and solids concentration tomograms, is assessed. Results are presented in terms of the air-core diameter and solids volume fraction contours, obtained using Maxwell's equation, for various hydrocyclone operating parameters. ERT confirms that the area occupied by the air core and the conductivity levels of the solids near the wall decrease with increasing feed solids concentration. An algebraic slip mixture based multiphase computational fluid dynamics (CFD) model is used to predict the air-core size and the solids concentrations in the hydrocyclone, and validation of the ERT-measured air-core size and mean solids volume fractions against the CFD simulations is attempted.
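Maxwell's mixture equation, for a non-conducting dispersed phase, recovers the solids volume fraction from the reconstructed conductivity. A minimal sketch, with our own symbol names (the paper does not give its exact implementation):

```python
def solids_fraction(sigma_c, sigma_m):
    """Maxwell's relation for a non-conducting dispersed phase:
        alpha = 2 (sigma_c - sigma_m) / (2 sigma_c + sigma_m)
    where sigma_c is the continuous-phase (slurry liquid) conductivity
    and sigma_m is the mixture conductivity reconstructed by ERT."""
    return 2.0 * (sigma_c - sigma_m) / (2.0 * sigma_c + sigma_m)
```

Applied pixel by pixel to a tomogram, this converts the conductivity field into the solids volume fraction contours reported above.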

Keywords: air-core, electrical resistance tomography, hydrocyclone, multi-phase CFD

Procedia PDF Downloads 353
901 Monitoring the Drying and Grinding Process during Production of Celitement through a NIR-Spectroscopy Based Approach

Authors: Carolin Lutz, Jörg Matthes, Patrick Waibel, Ulrich Precht, Krassimir Garbev, Günter Beuchle, Uwe Schweike, Peter Stemmermann, Hubert B. Keller

Abstract:

Online measurement of product quality is a challenging task in cement production, especially in the production of Celitement, a novel environmentally friendly hydraulic binder. In ordinary Portland cement production, the mineralogy and chemical composition of clinker are measured by X-ray diffraction (XRD) and X-ray fluorescence (XRF), where only crystalline constituents can be detected. Only a small part of the Celitement components can be measured via XRD, however, because most constituents have an amorphous structure. This paper describes the development of algorithms suitable for online monitoring of the final processing step of Celitement based on NIR data. For calibration, intermediate products were dried at different temperatures and ground for varying durations. The products were analyzed using XRD and thermogravimetric analyses together with NIR spectroscopy to investigate the dependency between the drying and milling processes on the one hand and the NIR signal on the other. As a result, different characteristic parameters have been defined. A short overview of the Celitement process and the challenging tasks of online measurement and evaluation of product quality is presented. Subsequently, methods for the systematic development of near-infrared calibration models and the determination of the final calibration model are introduced. The application of the model to experimental data illustrates that NIR spectroscopy allows a quick and sufficiently exact determination of crucial process parameters.
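In its simplest form, a spectroscopic calibration model regresses the property of interest on spectral features. A least-squares sketch is given below; production NIR calibrations typically use PLS or similar latent-variable methods, which the abstract does not specify:

```python
import numpy as np

def fit_calibration(spectra, y):
    """Least-squares calibration: predict property y from NIR spectra
    (rows = samples, columns = spectral features); returns coefficients
    with an appended intercept term."""
    X = np.hstack([spectra, np.ones((spectra.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict(spectra, coef):
    """Apply a fitted calibration to new spectra."""
    X = np.hstack([spectra, np.ones((spectra.shape[0], 1))])
    return X @ coef
```

With hundreds of correlated wavelengths, the plain least-squares step would be replaced by a dimension-reducing regression, but the fit/predict structure is the same.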

Keywords: calibration model, celitement, cementitious material, NIR spectroscopy

Procedia PDF Downloads 485
900 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon

Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn

Abstract:

The dynamic changes of Land Use and Land Cover (LULC) in Yangon have generally resulted in improved human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to obtain exact data on how environmental factors influence the LULC situation at various scales, because the natural environment is composed of non-homogeneous surface features, so the satellite data also contain mixed pixels. The main objective of this study is to calculate the accuracy of change detection of LULC changes using Support Vector Machines (SVMs). The main data for this research were satellite images from 1996, 2006 and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and the SVM process was applied with a soft approach at the allocation stage as well as at the testing stage to achieve higher accuracy. The results show that vegetation and cultivated area decreased (by an average total of 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (an average total increase of 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the result for LULC mapping.
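The change detection statistics mentioned above amount to a class-transition tabulation between the two classification images; a minimal sketch (class codes and array shapes are illustrative):

```python
import numpy as np

def transition_matrix(before, after, n_classes):
    """Tabulate pixel counts moving from class i (rows, earlier image)
    to class j (columns, later image)."""
    b = np.asarray(before).ravel()
    a = np.asarray(after).ravel()
    M = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(M, (b, a), 1)  # unbuffered accumulation per (from, to) pair
    return M
```

Off-diagonal entries of the matrix are the changed pixels (e.g. vegetation converted to built-up area); the diagonal holds the unchanged area.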

Keywords: land use and land cover change, change detection, image processing, support vector machines

Procedia PDF Downloads 102
899 The Efficiency of AFLP and ISSR Markers in Genetic Diversity Estimation and Gene Pool Classification of Iranian Landrace Bread Wheat (Triticum Aestivum L.) Germplasm

Authors: Reza Talebi

Abstract:

Wheat (Triticum aestivum) is one of the most important food staples in Iran, and understanding the genetic variability of landrace wheat germplasm is important for breeding. Landraces endemic to Iran are a genetic resource distinct from other wheat germplasm. In this study, 60 Iranian landrace wheat accessions were characterized with AFLP and ISSR markers. Twelve AFLP primer pairs detected 128 polymorphic bands among the sixty genotypes. The mean polymorphism rate based on the AFLP data was 31%, although a wide range of polymorphism among primer pairs was observed (22-40%). The polymorphic information content (PIC) value, calculated to assess the informativeness of each marker, ranged from 0.28 to 0.4, with a mean of 0.37. According to the AFLP molecular data, cluster analysis grouped the genotypes into five distinct clusters. The ISSR markers generated 68 bands (an average of 6 bands per primer), of which 31 were polymorphic (45%) across the 60 wheat genotypes. The PIC value for the ISSR markers ranged from 0.14 to 0.48, with an average of 0.33. Based on the ISSR-PCR data, cluster analysis grouped the genotypes into three distinct clusters. Both the AFLP and ISSR markers showed that the Iranian landrace wheat accessions have maintained a relatively high and constant level of genetic diversity over the years.
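PIC for a marker locus is commonly computed as 1 − Σ p_i² over the allele (or band) frequencies; a sketch of that calculation (for a biallelic dominant marker such as AFLP this caps PIC at 0.5, consistent with the 0.48 maximum reported above):

```python
def pic(allele_freqs):
    """Polymorphic information content of a marker locus: 1 - sum(p_i^2),
    where p_i are allele (band presence/absence) frequencies summing to 1."""
    assert abs(sum(allele_freqs) - 1.0) < 1e-9, "frequencies must sum to 1"
    return 1.0 - sum(p * p for p in allele_freqs)
```

A monomorphic band (p = 1) gives PIC = 0; the most informative biallelic band (p = 0.5) gives PIC = 0.5.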

Keywords: wheat, genetic diversity, AFLP, ISSR

Procedia PDF Downloads 424
898 Isotopes Used in Comparing Indigenous and International Walnut (Juglans regia L.) Varieties

Authors: Raluca Popescu, Diana Costinel, Elisabeta-Irina Geana, Oana-Romina Botoran, Roxana-Elena Ionete, Yazan Falah Jadee 'Alabedallat, Mihai Botu

Abstract:

Walnut production is high in Romania, with different varieties cultivated for high yield, disease resistance or quality of produce. Walnuts have a highly nutritional composition, the kernels containing essential fatty acids, in which the unsaturated fraction is higher than in other types of nuts, as well as quinones, tannins and minerals. Walnut consumption can lower cholesterol, improve arterial function and reduce inflammation. The purpose of this study is to determine and compare the composition of walnuts of indigenous and international varieties, all grown in Romania, in order to identify high-quality indigenous varieties. Oil was extracted from the nuts of 34 varieties, and the fatty acid composition and iodine value (IV) were then measured by NMR. Furthermore, the δ13C of the extracted oil was measured by IRMS to find specific isotopic fingerprints that can be used to authenticate the varieties. Chemometrics was applied to the data to identify similarities and differences between the varieties. The total saturated fatty acid (SFA) content varied between n.d. and 23% molar, oleic acid between 17 and 35%, linoleic acid between 38 and 59%, and linolenic acid between 8 and 14%, corresponding to iodine values (the total amount of unsaturation) ranging from 100 to 135. The varieties separated into four groups according to fatty acid composition, each group containing an international variety, making classification of the indigenous varieties possible. International varieties were found at both ends of the unsaturation spectrum.
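An iodine value can be approximated from the fatty-acid profile as a weighted sum of the pure-acid iodine values; a sketch, where the reference IV constants are textbook-level approximations for illustration, not values from the paper (which derives IV directly from NMR unsaturation signals):

```python
# Approximate iodine values of the pure fatty acids (g I2 / 100 g) --
# illustrative reference numbers, not measured values.
IV_PURE = {'oleic': 90.0, 'linoleic': 181.0, 'linolenic': 273.0, 'saturated': 0.0}

def iodine_value(mass_fractions):
    """Weighted iodine value of an oil from its fatty-acid mass fractions
    (fractions should sum to ~1)."""
    return sum(mass_fractions[acid] * IV_PURE[acid] for acid in mass_fractions)
```

A walnut-like profile rich in linoleic acid lands in the 100-135 range reported above.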

Keywords: δ13C-IRMS, fatty acids composition, 1H-NMR, walnut varieties

Procedia PDF Downloads 278
897 Algorithms for Run-Time Task Mapping in NoC-Based Heterogeneous MPSoCs

Authors: M. K. Benhaoua, A. K. Singh, A. E. Benyamina, P. Boulet

Abstract:

Mapping the parallelized tasks of applications onto NoC-based heterogeneous MPSoCs can be done either at design time (static) or at run time (dynamic). Static mapping strategies find the best placement of tasks at design time; hence, they are not suitable for dynamic workloads and are incapable of run-time resource management. The number of tasks or applications executing on an MPSoC platform can exceed the available resources, requiring efficient run-time mapping strategies to meet these constraints. This paper describes a new spiral dynamic task mapping heuristic for mapping applications onto NoC-based heterogeneous MPSoCs, based on a packing strategy and a routing algorithm that are also proposed in this paper. The heuristic tries to map the tasks of an application into a clustered region so as to reduce the communication overhead between communicating tasks: the tasks that are most related to each other are mapped in a spiral manner, and the path load that minimizes the communication overhead is sought. For experimental evaluation, we have built a simulation environment to map applications with varying numbers of tasks onto an 8x8 NoC-based heterogeneous MPSoC platform. We demonstrate that the new mapping heuristic, together with the proposed modified Dijkstra routing algorithm, reduces the total execution time and energy consumption of applications when compared to state-of-the-art run-time mapping heuristics reported in the literature.
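The modified Dijkstra algorithm itself is not given in the abstract; a standard Dijkstra over a weighted channel graph, which a load-aware NoC router would extend by folding link load into the edge weights, can be sketched as:

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path costs from src over a weighted digraph given as
    an adjacency dict {u: [(v, weight), ...]}; weights must be >= 0."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

In a NoC setting the vertices would be the 8x8 mesh routers and each weight a function of hop cost plus current channel load, so lightly loaded paths win.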

Keywords: multiprocessor system on chip, MPSoC, network on chip, NoC, heterogeneous architectures, run-time mapping heuristics, routing algorithm

Procedia PDF Downloads 464
896 Using a Wearable Device with a Neural Network to Classify the Severity of Sleep Disorder

Authors: Ru-Yin Yang, Chi Wu, Cheng-Yu Tsai, Yin-Tzu Lin, Wen-Te Liu

Abstract:

Background: Sleep-disordered breathing (SDB) is a condition characterized by recurrent episodes of airway obstruction leading to intermittent hypoxia and sleep fragmentation. However, the procedures for examining SDB severity remain complicated and costly. Objective: The objective of this study is to establish a simplified examination method for SDB using a respiratory impedance pattern sensor combined with signal processing and a machine learning model. Methodologies: We recorded heart rate variability from the electrocardiogram and the respiratory pattern from impedance. After polysomnography (PSG) was performed and SDB diagnosed via the apnea-hypopnea index (AHI), we calculated the episodes with absence of flow and the arousal index (AI) from the device recordings. Subjects were divided into training and testing groups. A neural network was used to establish a prediction model classifying SDB severity from the AI, the episodes, and body profiles, and performance was evaluated by classification on the testing group compared with PSG. Results: We enrolled 66 subjects (male/female: 37/29; age: 49.9±13.2) diagnosed with SDB at a sleep center in Taipei City, Taiwan, from 2015 to 2016. The accuracy from the confusion matrix on the test group is 71.94%. Conclusion: Based on these models, we established a prediction model for SDB by means of a wearable sensor. With more cases collected for training, this system may be used to rapidly and automatically screen for SDB risk in the future.
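The reported 71.94% accuracy is the trace of the confusion matrix divided by its total; a minimal sketch of that evaluation step (the severity classes and counts are illustrative, not the study's data):

```python
import numpy as np

def accuracy(confusion):
    """Overall accuracy from a confusion matrix (rows = true severity class,
    columns = predicted class): correct predictions over all predictions."""
    confusion = np.asarray(confusion)
    return confusion.trace() / confusion.sum()
```

Per-class recall (each diagonal entry over its row sum) would additionally show which severity grades the network confuses.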

Keywords: sleep-disordered breathing, apnea-hypopnea index, body parameters, neural network

Procedia PDF Downloads 118
895 Modeling and Simulation of Ship Structures Using Finite Element Method

Authors: Javid Iqbal, Zhu Shifan

Abstract:

The development of unconventional ship construction and the implementation of lightweight materials have given a strong impetus to the finite element (FE) method, making it a general tool for ship design. This paper briefly presents modeling and analysis techniques for ship structures using the FE method under complex boundary conditions that are difficult to analyze with existing ship classification society rules. During operation, all ships experience complex loading conditions, generally categorized into thermal, linear static, dynamic and non-linear loads. The general strength of the ship structure is analyzed using static FE analysis. The FE method is also suitable for considering the local loads generated by ballast tanks and cargo, in addition to hydrostatic and hydrodynamic loads. Vibration analysis of a ship structure and its components can be performed using the FE method, which helps in assessing the dynamic stability of the ship; improved techniques for calculating natural frequencies and mode shapes of the ship structure help to avoid resonance both globally and locally. Over the past few years, the ship industry has made considerable progress toward ideal design by solving complex engineering problems with the data stored in FE models. This paper provides an overview of ship modeling methodology for FE analysis and its general application. The historical background, the basic concepts of FE, and the advantages and disadvantages of FE analysis are also reported, along with examples related to hull strength and structural components.
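The natural-frequency calculation mentioned above solves the generalized eigenproblem K φ = ω² M φ of the assembled FE model; a sketch for a lumped (diagonal) mass matrix, with our own function names as an illustration rather than any classification-society procedure:

```python
import numpy as np

def natural_frequencies(K, M):
    """Undamped natural frequencies (rad/s) from stiffness K and a lumped
    (diagonal) mass matrix M, via the symmetric reduction
    A = M^(-1/2) K M^(-1/2), whose eigenvalues are omega^2."""
    Minv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
    A = Minv_half @ K @ Minv_half
    omega_sq = np.linalg.eigvalsh(A)  # ascending, real for symmetric A
    return np.sqrt(np.clip(omega_sq, 0.0, None))
```

Comparing the lowest frequencies against excitation sources (propeller blade rate, wave encounter) is how global and local resonance is screened.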

Keywords: dynamic analysis, finite element methods, ship structure, vibration analysis

Procedia PDF Downloads 116