Search results for: automatic mapping
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1970

1070 Mobile Robot Manipulator Kinematics Motion Control Analysis with MATLAB/Simulink

Authors: Wayan Widhiada, Cok Indra Partha, Gusti Ngurah Nitya Santhiarsa

Abstract:

The purpose of this paper is to investigate the effectiveness of Proportional-Integral-Derivative (PID) control in governing the kinematic motion of a mobile robot manipulator. Simulation and experimental methods are used to evaluate how well PID control enables the mobile robot arm to pick and place several kinds of objects quickly, accurately, and correctly. Mathematical modeling is carried out by integrating SolidWorks with MATLAB/SimMechanics, which works by converting the physical model file into an XML file; this makes robot modeling and design easy, fast, and accurate. The automatic control design of the robot manipulator is validated in simulation and in control-laboratory experiments to show that the gripper control design achieves good performance: an error signal below 5%, small overshoot, and a quickly settling steady-state response.
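
As a minimal illustration of the control law being evaluated, the sketch below implements a discrete PID loop driving a toy single-joint plant in Python. The paper itself uses MATLAB/Simulink; the gains, time step, and plant model here are hypothetical.

```python
# Minimal discrete PID sketch (illustrative only; the paper uses MATLAB/Simulink).
# Gains, setpoint, and the first-order plant model are assumptions.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy joint driven toward a 1.0 rad setpoint.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
angle = 0.0
for _ in range(500):
    torque = pid.update(1.0, angle)
    angle += 0.01 * torque          # crude plant integration
print(f"final angle: {angle:.3f} rad")
```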

Keywords: control analysis, kinematics motion, mobile robot manipulator, performance

Procedia PDF Downloads 401
1069 A GIS Based Composite Land Degradation Assessment and Mapping of Tarkwa Mining Area

Authors: Bernard Kumi-Boateng, Kofi Bonsu

Abstract:

The clearing of vegetation in the Tarkwa Mining Area (TMA) for mining, lumbering and the development of settlements for the increasing population has caused large-scale denudation of the forest cover and erosion of the topsoil, thereby degrading the agricultural land. It is, therefore, essential to know the current status of land degradation in TMA so as to facilitate land conservation policy-making. The types of degradation, their extents and their various degrees were combined to develop a composite land degradation index to assess the current status of land degradation in TMA using GIS-based techniques. The assessment revealed that the most significant types of degradation in TMA were open pit and quarry mining; urbanisation and other construction projects; and surface scraping during land clearing. It was found that 21.62% of the total area of TMA (353.07 km²) had a high degradation index rating. It is recommended that decision makers use this assessment as a reference point for future initiatives to develop land conservation policy.
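
A composite index of this kind is typically built by weighting and summing reclassified rasters for each degradation type. The sketch below shows one possible formulation in Python/NumPy; the severity classes, weights, and threshold are assumptions, not values from the study.

```python
import numpy as np

# Hypothetical reclassified rasters (1 = low ... 5 = very high severity)
# for the three degradation types named in the abstract.
mining       = np.random.randint(1, 6, size=(100, 100))
urbanisation = np.random.randint(1, 6, size=(100, 100))
scraping     = np.random.randint(1, 6, size=(100, 100))

# Assumed weights for each degradation type (summing to 1).
weights = {"mining": 0.5, "urbanisation": 0.3, "scraping": 0.2}

composite = (weights["mining"] * mining
             + weights["urbanisation"] * urbanisation
             + weights["scraping"] * scraping)

# Fraction of the study area falling in the highest index class (assumed cut-off).
high = composite >= 4.0
print(f"high-degradation share: {100 * high.mean():.2f}%")
```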

Keywords: degradation, GIS, land, mining

Procedia PDF Downloads 348
1068 Morphological Analysis of Manipuri Language: Wahei-Neinarol

Authors: Y. Bablu Singh, B. S. Purkayashtha, Chungkham Yashawanta Singh

Abstract:

Morphological analysis forms the basic foundation of NLP applications, including syntax parsing, Machine Translation (MT), Information Retrieval (IR) and automatic indexing, in all languages. As a field of linguistics, it provides valuable information for computer-based linguistic tasks such as lemmatization and the study of the internal structure of words. Computational morphology is the application of morphological rules in the field of computational linguistics; it is an emerging area in AI that studies the structure of words, which are formed by combining smaller units of linguistic information called morphemes: the building blocks of words. Morphological analysis provides information about the semantic and syntactic roles of words in a sentence. It analyzes Manipuri word forms and produces the grammatical information associated with the words. The Morphological Analyzer for Manipuri has been tested on 3500 Manipuri words in Shakti Standard Format (SSF) using Meitei Mayek as the source script; an accuracy of 80% was obtained on a manual check.

Keywords: morphological analysis, machine translation, computational morphology, information retrieval, SSF

Procedia PDF Downloads 324
1067 Metamodel for Artefacts in Service Engineering Analysis and Design

Authors: Purnomo Yustianto, Robin Doss

Abstract:

As a process of developing a service system, the term ‘service engineering’ evolves in scope and definition. To achieve an integrated understanding of the process, a general framework and an ontology are required. This paper extends a previously built service engineering framework by exploring metamodels for the framework artefacts based on a foundational ontology and a metamodel landscape. The first part of this paper presents a correlation map between the proposed framework with the ontology as a form of evaluation for the conceptual coverage of the framework. The mapping also serves to characterize the artefacts to be produced for each activity in the framework. The second part describes potential metamodels to be used, from the metamodel landscape, as alternative formats of the framework artefacts. The results suggest that the framework sufficiently covers the ontological concepts, both from general service context and software service context. The metamodel exploration enriches the suggested artefact format from the original eighteen formats to thirty metamodel alternatives.

Keywords: artefact, framework, service, metamodel

Procedia PDF Downloads 202
1066 Automatic API Regression Analyzer and Executor

Authors: Praveena Sridhar, Nihar Devathi, Parikshit Chakraborty

Abstract:

As a software product changes versions across releases, its APIs and features change, and upgrades become necessary. Hence, it becomes imperative to assess the impact of upgrading the dependent components. This tool identifies API changes across two versions and their impact on other APIs, followed by execution of the automated regression suites relevant to the updates and their impacted areas. The tool has a four-layer architecture, where each layer has a unique pre-assigned capability and sends the required information to the next layer: 1) Comparator: compares the two versions of the API. 2) Analyzer: analyses the API documentation and gives the modified classes and their dependencies along with implemented interface details. 3) Impact Filter: finds the impact of the modified classes on the other API methods. 4) Auto Executor: based on the output of the Impact Filter, runs the API regression suite. The tool reads the Javadoc and extracts the required information about classes, interfaces and enumerations. The extracted information is saved into a data structure that shows the class details and dependencies along with the interfaces and enumerations listed in the Javadoc.
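
The four-layer pipeline can be sketched as four cooperating components, as below. This Python skeleton is only illustrative: the class and method names, the dictionary-shaped API snapshots, and the Maven test invocation are assumptions; the actual tool parses Javadoc output.

```python
# Skeleton of the Comparator -> Analyzer -> Impact Filter -> Auto Executor pipeline.
import subprocess

class Comparator:
    def compare(self, old_api: dict, new_api: dict) -> list:
        """Return the classes whose signatures changed between two versions."""
        return [cls for cls, sig in new_api.items() if old_api.get(cls) != sig]

class Analyzer:
    def analyze(self, changed: list, dependencies: dict) -> dict:
        """Map each modified class to its dependent classes/interfaces."""
        return {cls: dependencies.get(cls, []) for cls in changed}

class ImpactFilter:
    def impacted_tests(self, analysis: dict, test_index: dict) -> set:
        """Collect regression tests covering any impacted class."""
        hit = set()
        for cls, deps in analysis.items():
            for name in [cls, *deps]:
                hit.update(test_index.get(name, []))
        return hit

class AutoExecutor:
    def run(self, tests: set) -> None:
        for test in sorted(tests):            # hypothetical Maven invocation
            subprocess.run(["mvn", "-Dtest=" + test, "test"], check=False)
```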

Keywords: automation impact regression, java doc, executor, analyzer, layers

Procedia PDF Downloads 483
1065 Knowledge Representation and Inconsistency Reasoning of Class Diagram Maintenance in Big Data

Authors: Chi-Lun Liu

Abstract:

Requirements modeling and analysis are important in successful information systems maintenance. Unified Modeling Language (UML) class diagrams are a useful standard for modeling information systems. To the best of our knowledge, there is a lack of a systems development methodology described by the organism metaphor, whose core concept is adaptation. Knowledge representation and reasoning approaches and ontologies for adopting new requirements have emerged in recent years. This paper proposes an organic methodology based on constructivism theory. The methodology is a knowledge representation and reasoning approach for analyzing new requirements in class diagram maintenance. The process and rules in the proposed methodology automatically analyze inconsistencies in the class diagram. In the big data era, developing an automatic tool based on the proposed methodology to analyze large amounts of class diagram data is an important topic for future research.

Keywords: knowledge representation, reasoning, ontology, class diagram, software engineering

Procedia PDF Downloads 238
1064 Machine Learning Automatic Detection on Twitter Cyberbullying

Authors: Raghad A. Altowairgi

Abstract:

With the widespread adoption of social media platforms, young people tend to use them extensively as their first means of communication due to their ease and modernity. But these platforms often create a fertile ground for bullies to practice aggressive behavior against their victims. Platform usage cannot be reduced, but intelligent mechanisms can be implemented to reduce the abuse. This is where machine learning comes in: understanding and classifying text can help minimize acts of cyberbullying. Artificial intelligence techniques have expanded to provide an applied tool for addressing the phenomenon of cyberbullying. In this research, machine learning models are built to classify text into two classes: cyberbullying and non-cyberbullying. The data are preprocessed in four stages: removing characters that do not provide meaningful information to the models, tokenization, removing stop words, and lowercasing the text. Bag-of-Words (BoW) and TF-IDF are used as the main features for five classifiers: logistic regression, Naïve Bayes, Random Forest, XGBoost, and CatBoost. These achieve accuracies of 92%, 90%, 92%, 91%, and 86%, respectively.
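
A minimal sketch of the described preprocessing plus TF-IDF plus classifier pipeline is shown below using scikit-learn; the tiny placeholder corpus and the choice of logistic regression for the demonstration are assumptions, since the study used an annotated Twitter dataset and five classifiers.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def clean(text: str) -> str:
    text = text.lower()                      # lowercasing
    return re.sub(r"[^a-z\s]", " ", text)    # strip characters carrying no signal

# Placeholder corpus; a real study would load an annotated Twitter dataset.
texts  = ["you are awesome", "nobody likes you loser", "great game today",
          "you are so stupid", "see you at the meetup", "shut up idiot"]
labels = [0, 1, 0, 1, 0, 1]                  # 1 = cyberbullying

X_train, X_test, y_train, y_test = train_test_split(
    [clean(t) for t in texts], labels, test_size=0.33, random_state=0)

# Stop-word removal and tokenization are handled by the vectorizer.
model = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```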

Keywords: cyberbullying, machine learning, Bag-of-Words, term frequency-inverse document frequency, natural language processing, Catboost

Procedia PDF Downloads 125
1063 Rapid Design Approach for Electric Long-Range Drones

Authors: Adrian Sauer, Lorenz Einberger, Florian Hilpert

Abstract:

The advancements and technical innovations in the field of electric unmanned aviation over the past years have opened the third dimension in areas like surveillance, logistics, and mobility for a wide range of private and commercial users. Researchers and companies are faced with the task of integrating their technology into airborne platforms. Start-ups and researchers in particular require unmanned aerial vehicles (UAVs) that can be quickly developed for specific use cases without spending significant time and money. This paper presents a design approach for the rapid development of a lightweight automatic separate-lift-thrust (SLT) electric vertical take-off and landing (eVTOL) UAV prototype that is able to fulfill basic transportation as well as surveillance missions. The design approach does not require expensive or time-consuming design-loop software, so developers can easily understand, adapt, and adjust the presented method for their own projects. The approach is mainly focused on crucial design aspects such as the aerofoil, tuning, and powertrain.

Keywords: aerofoil, drones, rapid prototyping, powertrain

Procedia PDF Downloads 69
1062 TMIF: Transformer-Based Multi-Modal Interactive Fusion for Rumor Detection

Authors: Jiandong Lv, Xingang Wang, Cuiling Shao

Abstract:

The rapid development of social media platforms has made them an important news source. While they provide people with convenient real-time communication channels, fake news and rumors also spread rapidly through them, misleading the public and even causing harmful social impact, given the slow speed and poor consistency of manual rumor detection. We propose an end-to-end rumor detection model, TMIF, which captures the dependencies between multimodal data based on an interactive attention mechanism, uses a transformer for cross-modal feature sequence mapping, and combines hybrid fusion strategies to obtain decision results. Experiments on two multi-modal rumor detection datasets demonstrate the superior overall and early-detection performance of the proposed model.

Keywords: hybrid fusion, multimodal fusion, rumor detection, social media, transformer

Procedia PDF Downloads 233
1061 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India

Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit

Abstract:

Landslides are a geomorphic process that plays an essential role in hill-slope and long-term landscape evolution. But the abrupt nature and catastrophic forces of the process can have undesirable socio-economic impacts, like substantial economic losses, fatalities, and ecosystem, geomorphologic and infrastructure disturbances. The estimated fatality rate is approximately 1 person per 100 sq. km, and the average economic loss is more than 550 crores per year in the Himalayan belt due to landslides. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM’s Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution Linear Imaging Self-Scanning (LISS-IV) data. The thematic maps of parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in a GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use/land cover, and geology. LSMs were prepared using the Information Value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values are calculated as the ratio of the landslide pixels per factor class (Si/Ni) to the total landslide pixels per parameter (S/N). Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a ‘mean and neighbour’ strategy for constructing the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, with the FR (frequency ratio) method used for formulating if-then rules. Two types of membership structures were utilized: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). The LSIs for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas, in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate one.
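
The Info Val computation per causative factor can be sketched as below. The abstract defines the information value as the density ratio (Si/Ni)/(S/N); the common formulation takes its natural logarithm, which is used here. The rasters are synthetic placeholders, not the study's data.

```python
import numpy as np

# Hypothetical rasters: one causative factor reclassified into classes 1..4
# and a binary landslide inventory (1 = landslide pixel).
factor    = np.random.randint(1, 5, size=(200, 200))
landslide = (np.random.rand(200, 200) < 0.02).astype(int)

S, N = landslide.sum(), landslide.size          # total landslide pixels / total pixels
info_values = {}
for cls in np.unique(factor):
    mask = factor == cls
    Si, Ni = landslide[mask].sum(), mask.sum()  # landslide / total pixels in this class
    density_ratio = (Si / Ni) / (S / N)
    info_values[cls] = np.log(density_ratio) if density_ratio > 0 else 0.0

# The reclassified (information-valued) layers are then summed across all
# causative factors in GIS to obtain the landslide susceptibility index (LSI).
lsi_layer = np.vectorize(info_values.get)(factor)
print(info_values)
```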

Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique

Procedia PDF Downloads 124
1060 ViraPart: A Text Refinement Framework for Automatic Speech Recognition and Natural Language Processing Tasks in Persian

Authors: Narges Farokhshad, Milad Molazadeh, Saman Jamalabbasi, Hamed Babaei Giglou, Saeed Bibak

Abstract:

The Persian language is an inflectional subject-object-verb language, which makes Persian more ambiguous. However, techniques such as Zero-Width Non-Joiner (ZWNJ) recognition, punctuation restoration, and Persian Ezafe construction lead to a more understandable and precise text. In most work on Persian, these techniques are addressed individually; we believe, however, that for text refinement in Persian all of these tasks are necessary. In this work, we propose the ViraPart framework, which uses embedded ParsBERT at its core for text clarification. First, we use the BERT variant for Persian, followed by a classifier layer for the classification procedures. Next, we combine the models' outputs to produce clear text. The proposed model achieves averaged macro F1 scores of 96.90%, 92.13%, and 98.50% for ZWNJ recognition, punctuation restoration, and Persian Ezafe construction, respectively. Experimental results show that our proposed approach is very effective for text refinement in the Persian language.

Keywords: Persian Ezafe, punctuation, ZWNJ, NLP, ParsBERT, transformers

Procedia PDF Downloads 209
1059 Automatic Classification of the Stand-to-Sit Phase in the TUG Test Using Machine Learning

Authors: Yasmine Abu Adla, Racha Soubra, Milana Kasab, Mohamad O. Diab, Aly Chkeir

Abstract:

Over the past several years, researchers have shown great interest in assessing the mobility of elderly people to measure their functional status. Usually, such an assessment is done by conducting tests that require the subject to walk a certain distance, turn around, and finally sit back down. Consequently, this study aims to provide an at-home monitoring system to assess the patient’s status continuously. Thus, we propose a technique to automatically detect when a subject sits down while walking at home. In this study, we utilized a Doppler radar system to capture the motion of the subjects. More than 20 features were extracted from the radar signals, out of which 11 were chosen based on their intraclass correlation coefficient (ICC > 0.75). Accordingly, the sequential floating forward selection wrapper was applied to further narrow down the final feature vector. Finally, 5 features were fed to a linear discriminant analysis classifier, achieving an accuracy of 93.75% as well as a precision and recall of 95% and 90%, respectively.
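
A hedged sketch of the final selection-plus-classification step is shown below, using mlxtend's sequential floating forward selection wrapper around an LDA classifier. The synthetic feature matrix, the use of mlxtend, and the cross-validation settings are assumptions; only the overall pipeline shape follows the abstract.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from mlxtend.feature_selection import SequentialFeatureSelector as SFS

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 11))          # 11 ICC-screened features per window (synthetic)
y = rng.integers(0, 2, size=120)        # 1 = stand-to-sit phase

lda = LinearDiscriminantAnalysis()
sffs = SFS(lda, k_features=5, forward=True, floating=True,
           scoring="accuracy", cv=5).fit(X, y)

X_sel = X[:, list(sffs.k_feature_idx_)]
print("selected features:", sffs.k_feature_idx_)
print("cv accuracy:", cross_val_score(lda, X_sel, y, cv=5).mean())
```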

Keywords: Doppler radar system, stand-to-sit phase, TUG test, machine learning, classification

Procedia PDF Downloads 157
1058 Using Wearable Technology to Monitor Workers’ Stress for Construction Safety: A Conceptual Framework

Authors: Namhun Lee, Seong Jin Kim

Abstract:

The construction industry represents one of the largest industries in the United States, yet it continues to face several occupational health and safety challenges. Many workers on construction sites are suffering from extended exposure to stressful situations such as poor and hazardous work environments and task complexity. Stress can be commonly defined as a feeling of emotional or physical tension, which can easily impact construction safety and result in a higher rate of job-related injuries in the construction industry. Physiological signals transmitted from wearable biosensors can be used to detect excessive stress. Therefore, workers’ stress should be detected and mitigated to prevent any type of serious incident or accident proactively. By doing this, construction productivity, as well as job satisfaction, would also be improved in the construction industry. To establish a foundation in this field of research, a conceptual framework for using wearable technology for construction safety has been developed for continuous and automatic monitoring of worker’s stress. The conceptual framework will serve as a foothold in future studies on the application of wearable technology for construction safety.

Keywords: construction safety, occupational stress, stress monitoring, wearable biosensors

Procedia PDF Downloads 154
1057 Natural Hazards and Their Costs in Albanian Part of Ohrid Graben

Authors: Mentor Sulollari

Abstract:

According to 2015 studies by the United Nations University Institute for Environment and Human Security (UNU-EHS), Albania is listed as the European country most likely to be struck by natural catastrophes. This is conditioned by poorly planned human activity, which has seriously damaged the environment. The Albanian part of the Ohrid graben, which lies in the southeast of Albania, is endangered by landslides and floods as a result of uncontrolled urban development and a low level of investment in infrastructure, rugged terrain in its western part, and a capricious climate caused by global warming. To deal with natural disasters, which cause casualties and material damage, it is important to study them in order to anticipate and reduce future damage. Part of this study is the construction of a natural hazards map, which shows where the hazards are distributed and which areas are vulnerable. This article also addresses the socio-economic and environmental costs of these events and the measures to be taken to reduce them.

Keywords: flooding, landslides, natural catastrophes mapping, Pogradec, lake Ohrid, Albanian part of Ohrid graben

Procedia PDF Downloads 291
1056 Early Requirement Engineering for Design of Learner Centric Dynamic LMS

Authors: Kausik Halder, Nabendu Chaki, Ranjan Dasgupta

Abstract:

We present a modelling framework that supports the engineering of early requirements specifications for the design of a learner-centric dynamic Learning Management System. The framework is based on the i* modelling tool and Means-End Analysis, and adopts primitive concepts for modelling early requirements (such as actor, goal, and strategic dependency). We show how pedagogical and computational requirements for designing a learner-centric Learning Management System can be adapted into automatic early requirements engineering specifications. Finally, we present a model of a Learner Quanta based adaptive courseware. Our early requirements analysis shows how Means-End Analysis reveals gaps and inconsistencies in early requirements specifications that are by no means trivial to discover without the help of a formal analysis tool.

Keywords: adaptive courseware, early requirement engineering, means end analysis, organizational modelling, requirement modelling

Procedia PDF Downloads 494
1055 Preliminary Seismic Hazard Mapping of Papua New Guinea

Authors: Hadi Ghasemi, Mark Leonard, Spiliopoulos Spiro, Phil Cummins, Mathew Moihoi, Felix Taranu, Eric Buri, Chris Mckee

Abstract:

In this study the level of seismic hazard in terms of Peak Ground Acceleration (PGA) was calculated for a return period of 475 years, using modeled seismic sources and assigned ground-motion equations. The calculations were performed for bedrock site conditions (Vs30 = 760 m/s). From the results it is evident that the seismic hazard reaches its maximum level (i.e. PGA ≈ 1 g for the 475-year return period) at the Huon Peninsula and southern New Britain regions. Disaggregation analysis revealed that moderate to large earthquakes occurring along the New Britain Trench mainly control the level of hazard at these locations. The open-source computer program OpenQuake, developed by the Global Earthquake Model (GEM) Foundation, was used for the seismic hazard computations. It should be emphasized that the presented results are still preliminary and should not be interpreted as our final assessment of seismic hazard in PNG.
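
For readers unfamiliar with return periods, the 475-year level corresponds to a 10% probability of exceedance in 50 years under a Poisson occurrence model. The sketch below shows that conversion and how a 475-year PGA could be read off a hazard curve; the hazard-curve values are hypothetical, and this is post-processing done outside OpenQuake, not its API.

```python
import numpy as np

def return_period(poe: float, exposure_years: float) -> float:
    """Return period implied by a probability of exceedance over an exposure time."""
    return -exposure_years / np.log(1.0 - poe)

print(return_period(0.10, 50))   # ~475 years, the level mapped in the abstract

# Given a hazard curve (annual exceedance rate vs. PGA), the 475-year PGA is
# interpolated at an annual rate of 1/475. Hypothetical curve values shown here.
pga         = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.2])        # g
annual_rate = np.array([0.2, 0.08, 0.02, 0.006, 0.002, 0.0008])
target = 1.0 / 475.0
pga_475 = np.interp(np.log(target), np.log(annual_rate[::-1]), pga[::-1])
print(f"PGA at 475-yr return period: {pga_475:.2f} g")
```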

Keywords: probabilistic seismic hazard assessment, Papua New Guinea, building code, OpenQuake

Procedia PDF Downloads 551
1054 Mapping New Technologies for Sustainability along the Fashion Supply Chain

Authors: Hilde Heim

Abstract:

The textile industry is known for its swift adoption of innovations in fashion technology (Fash-Tech). The industry is also known for its harmful effects on the environment. Opportunely, Fash-Tech is expected to facilitate the turn towards more sustainable practice. However, although several technologies have the potential for advancing sustainable practice, many industry players, whether large or small, are confused and misinformed about Fash-Tech adoption, application, and impact. Through a visual poster presentation, this project aims to map global fashion innovations along the supply chain from fibre production to waste management, thus providing a clearer picture of numbers, scale, and adoption. While the project aims to identify Fash-Tech effectiveness in reaching sustainability goals, it also identifies areas of congestion as well as insufficiency in the accessibility of Fash-Tech. This project intends to help inform future decisions in business, investment, and policy for the advancement of sustainable practice.

Keywords: fashion technology, sustainability, supply chain, enterprise management

Procedia PDF Downloads 237
1053 A Collaborative Platform for Multilingual Ontology Development

Authors: Ahmed Tawfik, Fausto Giunchiglia, Vincenzo Maltese

Abstract:

Ontologies provide a common understanding of a specific domain of interest that can be communicated between people and used as background knowledge for automated reasoning in a wide range of applications. In this paper, we address the design of multilingual ontologies following well-defined knowledge engineering methodologies with the support of novel collaborative development approaches. In particular, we present a collaborative platform which allows ontologies to be developed incrementally in multiple languages. This is made possible via an appropriate mapping between language-independent concepts and one lexicalization per language (or a lexical gap in case such a lexicalization does not exist). The collaborative platform has been designed to support the development of the Universal Knowledge Core, a multilingual ontology currently in English, Italian, Chinese, Mongolian, Hindi, and Bengali. Its design follows a workflow-based development methodology that models resources as a set of collaborative objects and assigns customizable workflows to build and maintain each collaborative object in a community-driven manner, with extensive support for modern Web 2.0 social and collaborative features.

Keywords: knowledge diversity, knowledge representation, ontology, development

Procedia PDF Downloads 386
1052 Determination of Water Pollution and Water Quality with Decision Trees

Authors: Çiğdem Bakır, Mecit Yüzkat

Abstract:

With the increasing emphasis on water quality worldwide, the search for new and intelligent monitoring systems, and the market for them, has grown. The current method is the laboratory process, where samples are taken from bodies of water and tests are carried out in laboratories. This method is time-consuming, a waste of manpower, and uneconomical. To solve this problem, we used machine learning methods to detect water pollution in our study. We created decision trees with the Orange3 software and tried to determine all the factors that cause water pollution. An automatic prediction model based on water quality was developed with machine learning methods, taking many model inputs such as water temperature, pH, transparency, conductivity, dissolved oxygen, and ammonia nitrogen. The proposed approach consists of three stages: preprocessing of the data, feature detection, and classification. We evaluated the success of our study with different accuracy metrics and presented the results comparatively. With the decision tree we achieved approximately 98% accuracy.
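
A minimal sketch of the classification stage with a decision tree is given below using scikit-learn; the study itself used Orange3, and the feature values and the rule generating the labels here are synthetic placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
# Columns: temperature, pH, transparency, conductivity, dissolved O2, NH4-N (synthetic)
X = rng.normal(size=(300, 6))
y = (X[:, 5] - X[:, 4] + 0.3 * rng.normal(size=300) > 0).astype(int)  # 1 = polluted

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```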

Keywords: decision tree, water quality, water pollution, machine learning

Procedia PDF Downloads 77
1051 The Appropriation of Education Policy on Information and Communication Technology in South African Schools

Authors: T. Vandeyar

Abstract:

The purpose of this study is to explore how government policy on ICT influences teaching and learning in South African schools. An instrumental case study using backward mapping principles as a strategy of inquiry was used. Utilizing a social constructivist lens and guided by a theoretical framework of a sociocultural approach to policy analysis, this exploratory qualitative research study set out to investigate how teachers appropriate government policy on ICT in South African schools. Three major findings emanated from this study. First, although teachers were ignorant of the national e-education policy, their professionalism and agency were key in formulating and implementing an e-education policy in practice. Second, teachers repositioned themselves not as recipients or reactors of the e-education policy but as social and cultural actors of policy appropriation and formulation. Third, the lack of systemic support for teachers catalyzed improved school and teacher collaborations; teachers became drivers of ICT integration through collaboration, innovation, institutional practice and institutional leadership.

Keywords: ICT, teachers as change agents, practice as policy, teacher's beliefs, teacher's attitudes

Procedia PDF Downloads 472
1050 Digital Humanities in The US/Mexico Borderlands: Activism, Literature, and Border Crossers

Authors: Martin Camps

Abstract:

The two-thousand-mile border that divides the United States and Mexico is a “contact zone” of cultural friction and unbalanced power relations, as defined by Mary Louise Pratt. The interest of this paper is to analyze digital platforms created for the study and comprehension of the borderlands for pedagogical and research purposes. The paper explores ways to engage students in archival and analytical practices to build a repository of resources, links, and digital tools, and considers how to adapt them to the study of the borderlands. Sites such as “Torn Apart / Separados,” “Digital Borderlands,” “Borderlands Archives Cartography,” and “Juaritos Literario” offer visualizations, mapping, and access to materials and marginal literature on the border phenomenon. Analyzing these projects helps highlight digital projects and the study of the border, shows how to engage in activism through the study of literature and the representation of a human tragedy that underscores the divisions and biopolitics imposed on the Global South, and helps imagine digital border futures.

Keywords: borderlands, digital humanities, activism, border literature

Procedia PDF Downloads 74
1049 Automatic Lead Qualification with Opinion Mining in Customer Relationship Management Projects

Authors: Victor Radich, Tania Basso, Regina Moraes

Abstract:

Lead qualification is one of the main procedures in Customer Relationship Management (CRM) projects. Its main goal is to identify potential consumers who have the ideal characteristics to establish a profitable and long-term relationship with a certain organization. Social networks can be an important source of data for identifying and qualifying leads since interest in specific products or services can be identified from the users’ expressed feelings of (dis)satisfaction. In this context, this work proposes the use of machine learning techniques and sentiment analysis as an extra step in the lead qualification process in order to improve it. In addition to machine learning models, sentiment analysis or opinion mining can be used to understand the evaluation that the user makes of a particular service, product, or brand. The results obtained so far have shown that it is possible to extract data from social networks and combine the techniques for a more complete classification.
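
As an illustration of how an opinion-mining signal might feed into lead scoring, the sketch below combines a tiny lexicon-based sentiment score with a profile-based score. The word lists, weights, and the lead_score formula are assumptions for demonstration; the paper combines machine-learning models with opinion mining in a similar spirit rather than a fixed lexicon.

```python
# Hypothetical lexicon-based sentiment scorer feeding a lead score.
POSITIVE = {"love", "great", "excellent", "recommend"}
NEGATIVE = {"hate", "terrible", "disappointed", "refund"}

def sentiment(post: str) -> int:
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def lead_score(profile_match: float, post: str) -> float:
    """Combine a profile-based score (0..1) with expressed (dis)satisfaction."""
    return 0.7 * profile_match + 0.3 * max(min(sentiment(post) / 3.0, 1.0), -1.0)

print(lead_score(0.8, "I love this product and would recommend it"))
print(lead_score(0.8, "terrible support, I want a refund"))
```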

Keywords: lead qualification, sentiment analysis, opinion mining, machine learning, CRM, lead scoring

Procedia PDF Downloads 79
1048 Assessing Relationships between Glandularity and Gray Level by Using Breast Phantoms

Authors: Yun-Xuan Tang, Pei-Yuan Liu, Kun-Mu Lu, Min-Tsung Tseng, Liang-Kuang Chen, Yuh-Feng Tsai, Ching-Wen Lee, Jay Wu

Abstract:

Breast cancer is the most predominant malignant tumor in females. An increase in glandular density increases the risk of breast cancer. BI-RADS is a frequently used density indicator in mammography; however, it significantly overestimates the glandularity. Therefore, it is very important to assess the glandularity accurately and quantitatively by mammography. In this study, 20%, 30% and 50% glandularity phantoms were exposed using a mammography machine at 28, 30 and 31 kVp, and 30, 55, 80 and 105 mAs, respectively. Regions of interest (ROIs) were drawn to assess the gray level. The relationship between the glandularity and gray level under various compression thicknesses, kVp, and mAs was established by multivariable linear regression. A phantom verification was performed with automatic exposure control (AEC). The regression equation was obtained with an R-squared value of 0.928. The average gray levels of the verification phantom were 8708, 8660 and 8434 for 0.952, 0.963 and 0.985 g/cm³, respectively. The percent differences of glandularity relative to the regression equation were 3.24%, 2.75% and 13.7%. We conclude that the proposed method could be applied clinically in mammography to improve glandularity estimation and further increase the importance of breast cancer screening.
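
The fitting step is a standard multivariable linear regression, sketched below in Python. All numeric values are synthetic placeholders, not the phantom measurements reported in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 36
kvp        = rng.choice([28, 30, 31], size=n)
mas        = rng.choice([30, 55, 80, 105], size=n)
thickness  = rng.uniform(3.0, 6.0, size=n)              # cm (assumed range)
glandular  = rng.choice([0.2, 0.3, 0.5], size=n)        # phantom glandularity fraction
# Synthetic gray levels with an assumed dependence on the exposure factors.
gray_level = (9000 - 1500 * glandular + 20 * (kvp - 28)
              + 2 * mas + rng.normal(0, 30, n))

X = np.column_stack([gray_level, kvp, mas, thickness])
model = LinearRegression().fit(X, glandular)
print("R^2:", model.score(X, glandular))
print("predicted glandularity:", model.predict(X[:3]))
```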

Keywords: mammography, glandularity, gray value, BI-RADS

Procedia PDF Downloads 486
1047 Analysing Industry Clustering to Develop Competitive Advantage for Wualai Silver Handicraft

Authors: Khanita Tumphasuwan

Abstract:

The Wualai community of Northern Thailand represents important intellectual and social capital and their silver handicraft products are desirable tourist souvenirs within Chiang Mai Province. This community has been in danger of losing this social and intellectual capital due to the application of an improper tool, the Scottish Enterprise model of clustering. This research aims to analyze and increase its competitive advantages for preventing the loss of social and intellectual capital. To improve the Wualai’s competitive advantage, analysis is undertaken using a Porterian cluster approach, including the diamond model, five forces model and cluster mapping. Research results suggest that utilizing the community’s Buddhist beliefs can foster collaboration between community members and is the only way to improve cluster effectiveness, increase competitive advantage, and in turn conserve the Wualai community.

Keywords: industry clustering, silver handicraft, competitive advantage, intellectual capital, social capital

Procedia PDF Downloads 563
1046 Implementation of IWA-ASM1 Model for Simulating the Wastewater Treatment Plant of Beja by GPS-X 5.1

Authors: Fezzani Boubaker

Abstract:

The modified activated sludge model (ASM1, or Mantis) is a generic structured model and a common platform for dynamic simulation of a variety of aerobic processes, used for optimization and upgrading of existing plants and for the design of new facilities. In this study, the modified ASM1 included in the GPS-X software was used to simulate the wastewater treatment plant (WWTP) of Beja, which treats domestic sewage mixed with baker’s yeast factory effluent. The results of daily measurements and operating records were used to calibrate the model. A sensitivity analysis and an automatic optimization analysis were conducted to determine the most sensitive and optimal parameters. The results indicated that the ASM1 model could simulate with good accuracy the COD concentration of effluents from the WWTP of Beja for all months of the year 2012. In addition, it can help prevent the disruption observed at the output of the plant when the baker’s yeast factory effluent is injected at high concentrations, varying between 20 and 80 g/l.

Keywords: ASM1, activated sludge, baker’s yeast effluent, modelling, simulation, GPS-X 5.1 software

Procedia PDF Downloads 339
1045 Health Percentage Evaluation for Satellite Electrical Power System Based on Linear Stresses Accumulation Damage Theory

Authors: Lin Wenli, Fu Linchun, Zhang Yi, Wu Ming

Abstract:

To meet the demands of long life and high intelligence for satellites, the electrical power system should be provided with a self-health-condition evaluation capability, and any over-stress events in operation should be recorded. Based on linear stress accumulation damage theory, accumulative damage analysis was performed on the combined thermal-mechanical-electrical stresses for three components: the solar array, the batteries and the power conditioning unit. An overall health percentage evaluation model for the satellite electrical power system was then built. To obtain an accurate quantity for the system health percentage, an automatic feedback closed-loop correction method for all coefficients in the evaluation model is presented. The evaluation outputs can serve as a basis for early fault forecasting and intervention by the ground control center or by the satellite itself.
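
Linear damage accumulation sums the ratio of experienced to allowable cycles per stress level (Miner's rule). A minimal sketch of that bookkeeping and a health percentage derived from it is shown below; the stress categories, cycle counts, and allowable-cycle values are illustrative, not values from the paper.

```python
def accumulated_damage(events):
    """events: list of (cycles_experienced, cycles_to_failure) per stress level."""
    return sum(n / N for n, N in events)

def health_percentage(damage: float) -> float:
    return max(0.0, 100.0 * (1.0 - damage))

# Hypothetical logged over-stress events per component.
solar_array = [(1200, 40000), (300, 15000)]     # thermal cycling, mechanical shock
battery     = [(5000, 20000)]                   # charge/discharge cycles
pcu         = [(80, 5000)]                      # electrical over-stress events

components = {"solar array": solar_array, "battery": battery, "PCU": pcu}
for name, events in components.items():
    print(name, f"{health_percentage(accumulated_damage(events)):.1f}%")
```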

Keywords: satellite electrical power system, health percentage, linear stresses accumulation damage, evaluation model

Procedia PDF Downloads 405
1044 Sparse Coding Based Classification of Electrocardiography Signals Using Data-Driven Complete Dictionary Learning

Authors: Fuad Noman, Sh-Hussain Salleh, Chee-Ming Ting, Hadri Hussain, Syed Rasul

Abstract:

In this paper, a data-driven dictionary approach is proposed for the automatic detection and classification of cardiovascular abnormalities. The electrocardiography (ECG) signal is represented by trained complete dictionaries that contain prototypes, or atoms, to avoid the limitations of pre-defined dictionaries. The data-driven trained dictionaries simply take the ECG signal as input, rather than extracted features, to study the set of parameters that yield the most descriptive dictionary. The approach inherently learns the complicated morphological changes in the ECG waveform, which are then used to improve the classification. The classification performance was evaluated with ECG data under two different preprocessing environments. In the first category, the QT database is baseline-drift corrected and a notch filter removes the 60 Hz power-line noise. In the second category, the data are further filtered using a fast moving-average smoother. The experimental results on the QT database confirm that our proposed algorithm achieves a classification accuracy of 92%.
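
A hedged sketch of learning a dictionary directly from ECG segments and classifying their sparse codes is shown below using scikit-learn. The synthetic beat matrix, the dictionary size, and the linear SVM stage are assumptions chosen for illustration, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
beats  = rng.normal(size=(400, 128))        # fixed-length ECG beats (placeholder data)
labels = rng.integers(0, 2, size=400)       # normal vs. abnormal

dico = DictionaryLearning(n_components=32, alpha=1.0,
                          transform_algorithm="omp", random_state=3)
codes = dico.fit_transform(beats)           # sparse representation of each beat

print("cv accuracy:", cross_val_score(LinearSVC(), codes, labels, cv=5).mean())
```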

Keywords: electrocardiogram, dictionary learning, sparse coding, classification

Procedia PDF Downloads 378
1043 A Real Time Expert System for Decision Support in Nuclear Power Plants

Authors: Andressa dos Santos Nicolau, João P. da S.C Algusto, Claudio Márcio do N. A. Pereira, Roberto Schirru

Abstract:

In case of abnormal situations, nuclear power plant (NPP) operators must follow written procedures to check the condition of the plant and to classify the type of emergency. In this paper, we propose a Real-Time Expert System to improve operator performance in case of a transient or accident with reactor shutdown. The expert system’s knowledge is based on the sequence of events (SoE) of known accidents and two emergency procedures of the Brazilian Pressurized Water Reactor (PWR) NPP, and it uses two kinds of knowledge representation: rules and logic trees. The results show that the system was able to classify the response of the automatic protection systems, as well as to evaluate the conditions of the plant, diagnosing the type of occurrence and the recovery procedure to be followed, indicating the shutdown root cause, and classifying the emergency level.

Keywords: emergence procedure, expert system, operator support, PWR nuclear power plant

Procedia PDF Downloads 327
1042 Application of UAS in Forest Firefighting for Detecting Ignitions and 3D Fuel Volume Estimation

Authors: Artur Krukowski, Emmanouela Vogiatzaki

Abstract:

The article presents results from the AF3 project “Advanced Forest Fire Fighting”, focused on Unmanned Aircraft Systems (UAS)-based 3D surveillance and 3D area mapping using high-resolution photogrammetric methods from multispectral imaging, also taking advantage of the 3D scanning techniques from the SCAN4RECO project. We also present a proprietary embedded sensor system for detecting fire ignitions in the forest using a near-infrared scanner whose weight and form factor allow it to be easily deployed on standard commercial micro-UAVs, such as the DJI Inspire or Mavic. Results from real-life pilot trials in Greece, Spain, and Israel demonstrated the added value of using UAS for precise and reliable detection of forest fires, as well as high-resolution 3D aerial modeling for accurate quantification of the human resources and equipment required for firefighting.

Keywords: forest wildfires, surveillance, fuel volume estimation, firefighting, ignition detectors, 3D modelling, UAV

Procedia PDF Downloads 138
1041 Automatic Extraction of Arbitrarily Shaped Buildings from VHR Satellite Imagery

Authors: Evans Belly, Imdad Rizvi, M. M. Kadam

Abstract:

Satellite imagery is one of the emerging technologies extensively utilized in various applications such as the detection/extraction of man-made structures, monitoring of sensitive areas, creation of graphic maps, etc. The main approach here is the automated detection of buildings from very high resolution (VHR) optical satellite images. Initially, the shadow, building and non-building regions (roads, vegetation, etc.) are investigated, with the focus on building extraction. Once all landscape regions are collected, a trimming process eliminates those that arise from non-building objects. Finally, the label method is used to extract the building regions; the label method may be altered for more efficient building extraction. The images used for the analysis are those acquired from sensors with a resolution finer than 1 meter (VHR). This method provides an efficient way to produce good results. The additional overhead of intermediate processing is eliminated, easing the required processing steps and reducing the time consumed, without compromising the quality of the output.
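
A minimal sketch of the final labeling step is given below: after shadow/vegetation masking and trimming, connected-component labeling extracts candidate building regions. The binary mask and the minimum-area threshold are synthetic assumptions; a real pipeline would operate on the VHR imagery itself.

```python
import numpy as np
from scipy import ndimage

mask = np.zeros((60, 60), dtype=bool)
mask[5:20, 5:25] = True                    # building-like blob
mask[40:44, 40:43] = True                  # small noise blob

labels, n = ndimage.label(mask)            # label connected regions
sizes = ndimage.sum(mask, labels, index=list(range(1, n + 1)))

min_area = 50                              # trimming threshold (assumed)
buildings = [i + 1 for i, s in enumerate(sizes) if s >= min_area]
print(f"{n} regions found, {len(buildings)} kept as buildings:", buildings)
```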

Keywords: building detection, shadow detection, landscape generation, label, partitioning, very high resolution (VHR) satellite imagery

Procedia PDF Downloads 310