Search results for: disturbance tracking algorithm
1252 Automatic Method for Exudates and Hemorrhages Detection from Fundus Retinal Images
Authors: A. Biran, P. Sobhe Bidari, K. Raahemifar
Abstract:
Diabetic Retinopathy (DR) is an eye disease that leads to blindness. The earliest signs of DR are the appearance of red and yellow lesions on the retina, called hemorrhages and exudates. Early diagnosis of DR prevents blindness; hence, many automated algorithms have been proposed to extract hemorrhages and exudates. In this paper, an automated algorithm is presented to extract hemorrhages and exudates separately from retinal fundus images using different image processing techniques, including the Circular Hough Transform (CHT), Contrast Limited Adaptive Histogram Equalization (CLAHE), Gabor filtering and thresholding. Since the optic disc is the same color as the exudates, it is first localized and detected. The presented method has been tested on fundus images from the Structured Analysis of the Retina (STARE) and Digital Retinal Images for Vessel Extraction (DRIVE) databases using MATLAB code. The results show that this method is capable of detecting hard exudates and highly probable soft exudates. It is also capable of detecting hemorrhages and distinguishing them from blood vessels.
Keywords: diabetic retinopathy, fundus, CHT, exudates, hemorrhages
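As a rough illustration of the pipeline described above (CLAHE enhancement, optic disc removal via the Circular Hough Transform, and thresholding), a minimal Python/OpenCV sketch follows; the file name, parameter values, and the simple bright-lesion threshold are illustrative assumptions, not the authors' settings.

import cv2
import numpy as np

img = cv2.imread("fundus.png")                      # hypothetical input image
green = img[:, :, 1]                                # lesions contrast best in the green channel
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(green)                       # CLAHE contrast enhancement
# Circular Hough Transform to localize the optic disc (bright, roughly circular)
circles = cv2.HoughCircles(enhanced, cv2.HOUGH_GRADIENT, dp=1, minDist=200,
                           param1=100, param2=30, minRadius=30, maxRadius=90)
mask = np.ones_like(enhanced)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    cv2.circle(mask, (x, y), int(1.2 * r), 0, -1)   # mask out the optic disc
# naive global threshold for bright (exudate-like) pixels outside the disc
_, exudates = cv2.threshold(enhanced * mask, 200, 255, cv2.THRESH_BINARY)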
Procedia PDF Downloads 272
1251 Forced Vibration of a Planar Curved Beam on Pasternak Foundation
Authors: Akif Kutlu, Merve Ermis, Nihal Eratlı, Mehmet H. Omurtag
Abstract:
The objective of this study is to investigate the forced vibration of a planar curved beam lying on an elastic foundation using the mixed finite element method. The finite element formulation is based on the Timoshenko beam theory. In order to solve the problems in the frequency domain, the element matrices of two-noded curvilinear elements are transformed into Laplace space. The results are transformed back to the time domain by the well-known numerical modified Durbin transformation algorithm. First, the presented finite element formulation is verified through the forced vibration analysis of a planar curved Timoshenko beam resting on a Winkler foundation, and the finite element results are compared with the results available in the literature. Then, the forced vibration analysis of a planar curved beam resting on a Winkler-Pasternak foundation is conducted.
Keywords: curved beam, dynamic analysis, elastic foundation, finite element method
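Since the time-domain results hinge on the numerical inverse Laplace transform, a minimal Python sketch of a Durbin-type inversion is given below; it is the textbook series form, not the authors' exact modified variant, and the damping factor and series length are illustrative assumptions.

import numpy as np

def durbin_inverse(F, T, n_terms=512, a_fac=6.0):
    """Durbin-type series approximation of f(t) on (0, T) from F(s)."""
    a = a_fac / T                                   # damping parameter, a*T ~ 5-10
    t = np.linspace(1e-6, T, 400)
    k = np.arange(n_terms)
    s = a + 1j * k * np.pi / T                      # samples along the Bromwich line
    Fk = np.array([F(sk) for sk in s])
    terms = (Fk.real[None, :] * np.cos(np.outer(t, k) * np.pi / T)
             - Fk.imag[None, :] * np.sin(np.outer(t, k) * np.pi / T))
    terms[:, 0] = 0.5 * Fk[0].real                  # the k = 0 term is halved
    return t, (2.0 / T) * np.exp(a * t) * terms.sum(axis=1)

# sanity check against a known pair: F(s) = 1/(s^2 + 1)  <->  f(t) = sin(t)
t, f = durbin_inverse(lambda s: 1.0 / (s**2 + 1.0), T=10.0)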
Procedia PDF Downloads 345
1250 PWM Based Control of D-STATCOM for Voltage Sag and Swell Mitigation in Distribution Systems
Authors: A. Assif
Abstract:
This paper presents the modeling of a prototype distribution static compensator (D-STATCOM) for voltage sag and swell mitigation in an unbalanced distribution system. The concept that an inverter can be used as a generalized impedance converter to realize either inductive or capacitive reactance is used here to mitigate the power quality issues of distribution networks. The D-STATCOM is intended to replace the widely used Static Var Compensator (SVC). The scheme is based on the Voltage Source Converter (VSC) principle. In this model, a PWM-based control scheme has been implemented to control the electronic valves of the VSC, and a phase-shift control algorithm is used for converter control. The D-STATCOM injects a current into the system to mitigate voltage sags. The D-STATCOM model has been designed using MATLAB Simulink. Accordingly, simulations are first carried out to illustrate the use of the D-STATCOM in mitigating voltage sag in a distribution system. Simulation results prove that the D-STATCOM is capable of mitigating voltage sag as well as improving the power quality of a system.
Keywords: D-STATCOM, voltage sag, voltage source converter (VSC), phase shift control
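The gating principle behind such a PWM scheme can be sketched compactly: a phase-shifted sinusoidal reference is compared against a triangular carrier to fire the VSC valves. The frequencies, modulation index, and phase-shift value below are illustrative assumptions, not the paper's design parameters.

import numpy as np

f_ref, f_car, m_a, shift = 50.0, 5000.0, 0.8, np.pi / 6
t = np.linspace(0.0, 0.04, 20000)                   # two fundamental cycles

reference = m_a * np.sin(2 * np.pi * f_ref * t + shift)  # phase-shifted reference
# triangular carrier in [-1, 1]
saw = t * f_car - np.floor(t * f_car + 0.5)
carrier = 2.0 * np.abs(2.0 * saw) - 1.0
gate_upper = (reference > carrier).astype(int)      # firing signal, upper valve
gate_lower = 1 - gate_upper                         # complementary lower valve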
Procedia PDF Downloads 343
1249 Land Use Change Detection Using Remote Sensing and GIS
Authors: Naser Ahmadi Sani, Karim Solaimani, Lida Razaghnia, Jalal Zandi
Abstract:
In recent decades, rapid and inappropriate changes in land use have been associated with consequences such as natural resource degradation and environmental pollution. Detecting changes in land use is one of the tools for natural resource management and for assessing changes in ecosystems. The target of this research is studying the land-use changes in the Haraz basin, with an area of 677,000 hectares, over a 15-year period (1996 to 2011) using LANDSAT data. Therefore, the quality of the images was first evaluated. Various enhancement methods for creating synthetic bands were used in the analysis. Separate training sites were selected for each image. Then the images of each period were classified into 9 classes using the supervised classification method and the maximum likelihood algorithm. Finally, the changes were extracted in a GIS environment. The results showed that these changes are an alarm for the future status of the Haraz basin: 27% of the area has changed, involving the conversion of rangeland to bare land and dry farming, and of dense forest to sparse forest, horticulture, farmland and residential area.
Keywords: Haraz basin, change detection, land-use, satellite data
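A compact sketch of the per-pixel Gaussian maximum likelihood rule underlying the supervised classification step is given below; the class statistics and pixel values are placeholders for those estimated from the training sites.

import numpy as np

def ml_classify(pixels, means, covs):
    """Assign each pixel (rows of shape (n, bands)) to the class with the
    highest Gaussian log-likelihood, up to a constant shared by all classes."""
    scores = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, inv, d)))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# toy example: two classes, three spectral bands
rng = np.random.default_rng(0)
means = [np.array([0.1, 0.2, 0.3]), np.array([0.5, 0.4, 0.6])]
covs = [0.01 * np.eye(3), 0.02 * np.eye(3)]
labels = ml_classify(rng.random((10, 3)), means, covs)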
Procedia PDF Downloads 415
1248 DIAL Measurements of Vertical Distribution of Ozone at the Siberian Lidar Station in Tomsk
Authors: Oleg A. Romanovskii, Vladimir D. Burlakov, Sergey I. Dolgii, Olga V. Kharchenko, Alexey A. Nevzorov, Alexey V. Nevzorov
Abstract:
The paper presents the results of DIAL measurements of the vertical ozone distribution. The ozone lidar operates as part of the measurement complex at the Siberian Lidar Station (SLS) of the V.E. Zuev Institute of Atmospheric Optics SB RAS, Tomsk (56.5ºN; 85.0ºE), and is designed for studying the vertical ozone distribution in the upper troposphere–lower stratosphere. The most suitable wavelengths for measurements of ozone profiles are selected. We present an algorithm for the retrieval of the vertical distribution of ozone with temperature and aerosol correction during DIAL lidar sounding of the atmosphere. The temperature correction of the ozone absorption coefficients is introduced in the software to reduce the retrieval errors. Results of lidar measurements at wavelengths of 299 and 341 nm agree with model estimates, which points to an acceptable accuracy of ozone sounding in the 6–18 km altitude range.
Keywords: lidar, ozone distribution, atmosphere, DIAL
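The core of such a retrieval can be stated in a few lines: the ozone number density follows from the range derivative of the logarithm of the off-line/on-line signal ratio. The sketch below applies the standard DIAL equation to synthetic profiles (the differential absorption cross-section and the profiles are placeholders), before the temperature and aerosol corrections the paper adds.

import numpy as np

def dial_ozone(P_on, P_off, z, delta_sigma):
    """n(z) = d/dz ln(P_off / P_on) / (2 * delta_sigma)."""
    return np.gradient(np.log(P_off / P_on), z) / (2.0 * delta_sigma)

z = np.linspace(6e3, 18e3, 200)                     # 6-18 km altitude range
dz = z[1] - z[0]
delta_sigma = 4.0e-23                               # m^2, illustrative value only
n_true = 5e18 * np.exp(-((z - 1.2e4) / 3e3) ** 2)   # synthetic ozone layer, m^-3
P_off = np.exp(-z / 7e3)                            # stand-in off-line return (341 nm)
P_on = P_off * np.exp(-2 * delta_sigma * np.cumsum(n_true) * dz)  # on-line (299 nm)
n_o3 = dial_ozone(P_on, P_off, z, delta_sigma)      # recovers ~n_true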
Procedia PDF Downloads 497
1247 Recursive Doubly Complementary Filter Design Using Particle Swarm Optimization
Authors: Ju-Hong Lee, Ding-Chen Chung
Abstract:
This paper deals with the optimal design of recursive doubly complementary (DC) digital filters using a metaheuristic-based optimization technique. Based on the theory of DC digital filters using two recursive digital all-pass filters (DAFs), the design problem is formulated so as to result in an objective function which is a weighted sum of the phase response errors of the designed DAFs. To ensure the stability of the recursive DC filters during the design process, the necessary constraints are imposed on the phases of the recursive DAFs. Through frequency sampling and a weighted least squares approach, the optimization problem of the objective function can be solved by utilizing a population-based stochastic optimization approach. The resulting DC digital filters possess a satisfactory frequency response. Simulation results are presented for illustration and comparison.
Keywords: doubly complementary, digital all-pass filter, weighted least squares algorithm, particle swarm optimization
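To make the population-based search concrete, a bare-bones global-best PSO loop is sketched below; the inertia and acceleration constants are common defaults, and the quadratic objective merely stands in for the weighted sum of DAF phase-response errors.

import numpy as np

def pso(objective, dim, n_particles=30, iters=200, bounds=(-1.0, 1.0)):
    rng = np.random.default_rng(1)
    x = rng.uniform(*bounds, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)]
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        f = np.array([objective(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# placeholder objective standing in for the weighted phase-error sum
coeffs, err = pso(lambda c: np.sum(c ** 2), dim=8)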
Procedia PDF Downloads 688
1246 Instant Fire Risk Assessment Using Artificial Neural Networks
Authors: Tolga Barisik, Ali Fuat Guneri, K. Dastan
Abstract:
Major industrial facilities have a high potential for fire risk. In particular, the indices used for the detection of hidden fires are employed very effectively to prevent a fire from becoming dangerous in its initial stage. These indices provide the opportunity to prevent or intervene early by determining the stage of the fire, the potential for hazard, and the type of combustion agent from the percentage values of the ambient air components. In this system, an artificial neural network of the multi-layer perceptron (supervised, "teacher-learning") type will be modeled with the determined input data and trained using the Levenberg-Marquardt algorithm, following a review of the modeling methods in the literature. The actual values produced by the indices will be compared with the outputs produced by the network. Using the neural network and the curves created from the resulting values, the feasibility of performance determination will be investigated.
Keywords: artificial neural networks, fire, Graham Index, Levenberg-Marquardt algorithm, oxygen decrease percentage index, risk assessment, Trickett Index
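For context on the index inputs, the sketch below computes two of the named fire indices from air-sample percentages. The formulas follow their common mine-atmosphere definitions, which is an assumption about the exact variants used in this work.

def graham_ratio(co, o2, n2):
    """Graham's index: CO produced per unit oxygen deficiency (common definition)."""
    return 100.0 * co / (0.265 * n2 - o2)           # 0.265*N2 approximates expected O2

def trickett_ratio(co2, co, h2, o2, n2):
    """Trickett's ratio over the same oxygen-deficiency denominator (assumed form)."""
    return (co2 + 0.75 * co - 0.25 * h2) / (0.265 * n2 - o2)

# illustrative air-sample percentages, not measured data
print(graham_ratio(co=0.02, o2=19.0, n2=79.0))      # rising values indicate heating
print(trickett_ratio(co2=0.5, co=0.02, h2=0.01, o2=19.0, n2=79.0))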
Procedia PDF Downloads 137
1245 Video Foreground Detection Based on Adaptive Mixture Gaussian Model for Video Surveillance Systems
Authors: M. A. Alavianmehr, A. Tashk, A. Sodagaran
Abstract:
Modeling the background and the moving objects are significant techniques for video surveillance and other video processing applications. This paper presents a foreground detection algorithm that is robust against illumination changes and noise, based on an adaptive Gaussian mixture model (GMM), and provides a novel and practical choice for intelligent video surveillance systems using static cameras. In previous methods, the image of still objects (the background image) is not significant. On the contrary, this method is based on forming a meticulous background image and exploiting it to separate moving objects from their background. The background image is specified either manually, by taking an image without vehicles, or is detected in real time by forming a mathematical or exponential average of successive images. The proposed scheme offers low image degradation. The simulation results demonstrate a high degree of performance for the proposed method.
Keywords: image processing, background models, video surveillance, foreground detection, Gaussian mixture model
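OpenCV ships an adaptive mixture-of-Gaussians background subtractor that illustrates the per-pixel GMM idea used here; the clip name and parameters below are illustrative, and MOG2 is a standard variant rather than the authors' exact model.

import cv2

cap = cv2.VideoCapture("traffic.mp4")               # hypothetical surveillance clip
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)               # per-pixel GMM update + classify
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)  # denoise blobs
cap.release()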
Procedia PDF Downloads 516
1244 Bi-Component Particle Segregation Studies in a Spiral Concentrator Using Experimental and CFD Techniques
Authors: Prudhvinath Reddy Ankireddy, Narasimha Mangadoddy
Abstract:
Spiral concentrators are commonly used in various industries, including mineral and coal processing, to efficiently separate materials based on their density and size. In these concentrators, a mixture of solid particles and fluid (usually water) is introduced as feed at the top of a spiral channel. As the mixture flows down the spiral, centrifugal and gravitational forces act on the particles, causing them to stratify based on their density and size. Spiral flows exhibit complex fluid dynamics, and the interactions involve multiple phases and components. Understanding the behavior of these phases within the spiral concentrator is crucial for achieving efficient separation. An experimental bi-component particle interaction study is conducted in this work utilizing magnetite (higher density) and silica (lower density) in different proportions processed in the spiral concentrator. The observed separation reveals that denser particles accumulate towards the inner region of the spiral trough, while a significant concentration of lighter particles is found close to the outer edge. The 5th turn of the spiral trough is partitioned into five zones to achieve a comprehensive distribution analysis of bi-component particle segregation. Samples are then gathered from these individual streams using an in-house sample collector, and subsequent analysis is conducted to assess component segregation. Along the trough, there is a decline in the concentration of coarser particles, accompanied by an increase in the concentration of lighter particles. The segregation pattern indicates that the heavier coarse component accumulates in the inner zone, whereas the lighter fine component collects in the outer zone. The middle zone primarily consists of heavier fine particles and lighter coarse particles. The zone-wise results reveal that a significant fraction of the segregation occurs in the inner and middle zones, while finer magnetite and silica particles predominantly accumulate in the outer zones with the smallest fraction of segregation. Additionally, numerical simulations are carried out using a computational fluid dynamics (CFD) model based on the volume of fluid (VOF) approach, incorporating the RSM turbulence model. The discrete phase model (DPM) is employed for particle tracking, thereby capturing the segregation of magnetite and silica along the spiral trough.
Keywords: spiral concentrator, bi-component particle segregation, computational fluid dynamics, discrete phase model
Procedia PDF Downloads 67
1243 Numerical Analysis on Triceratops Restraining System: Failure Conditions of Tethers
Authors: Srinivasan Chandrasekaran, Manda Hari Venkata Ramachandra Rao
Abstract:
The increase in oil and gas exploration in ultra-deep water demands an adaptive structural form of the platform. The triceratops has superior motion characteristics compared to the Tension Leg Platform and Single Point Anchor Reservoir platforms, which is well established in the literature. Buoyant legs that support the deck are position-restrained to the seabed using tethers with high axial pretension. Environmental forces that act on the platform induce dynamic tension variations in the tethers, which can cause tether failure. The present study investigates the dynamic response behavior of the restraining system of the platform under the failure of a single tether of each buoyant leg in high sea states. Using the rain-flow counting algorithm and the Goodman diagram, the fatigue damage caused to the tethers is estimated and the fatigue life is predicted. Results show that under failure conditions, the fatigue life of the remaining tethers is alarmingly low.
Keywords: fatigue life, PM spectrum, rain flow counting, triceratops, failure analysis
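The fatigue estimate chains rain-flow counting, a Goodman mean-stress correction, and Miner's rule; a minimal Python sketch follows. The rainflow package API, the S-N constants A and m, and the tension history are all assumptions for illustration.

import numpy as np
import rainflow                                     # pip install rainflow (assumed API)

def fatigue_damage(tension, sigma_u, A, m):
    """Miner's-rule damage from a tension history; A, m are assumed S-N constants."""
    damage = 0.0
    for rng_, mean, count, _, _ in rainflow.extract_cycles(tension):
        amp = 0.5 * rng_
        amp_eq = amp / max(1.0 - mean / sigma_u, 1e-9)  # Goodman correction
        n_fail = A * amp_eq ** (-m)                 # cycles to failure at amp_eq
        damage += count / n_fail
    return damage                                   # fatigue life ~ duration / damage

history = 100 + 30 * np.sin(np.linspace(0, 60, 2000)) + 5 * np.random.randn(2000)
d = fatigue_damage(history, sigma_u=800.0, A=1e12, m=3.0)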
Procedia PDF Downloads 135
1242 A Robust Optimization Model for the Single-Depot Capacitated Location-Routing Problem
Authors: Abdolsalam Ghaderi
Abstract:
In this paper, the single-depot capacitated location-routing problem under uncertainty is presented. The problem aims to find the optimal location of a single depot and the routing of vehicles to serve the customers when the parameters may change under different circumstances. This problem has many applications, especially in the areas of supply chain management and distribution systems. To get closer to real-world situations, the travel time of vehicles, the fixed cost of vehicle usage and customers' demand are considered as sources of uncertainty. A combined approach including robust optimization and stochastic programming is presented to deal with the uncertainty in the problem at hand. For this purpose, a mixed integer programming model is developed, and a heuristic algorithm based on Variable Neighborhood Search (VNS) is presented to solve the model. Finally, the computational results are presented and future research directions are discussed.
Keywords: location-routing problem, robust optimization, stochastic programming, variable neighborhood search
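A generic skeleton of the VNS metaheuristic named above is sketched below; the neighborhood operators and local search are placeholders for the problem-specific moves (e.g., customer relocation, swap, 2-opt within routes) a location-routing implementation would use.

def local_search(solution, cost):
    # placeholder: e.g., 2-opt within routes, customer relocation between routes
    return solution

def vns(initial, neighborhoods, cost, max_iters=1000):
    """Basic VNS: shake in neighborhood k, improve locally, move and reset k on
    success, otherwise widen the neighborhood."""
    best, best_cost = initial, cost(initial)
    for _ in range(max_iters):
        k = 0
        while k < len(neighborhoods):
            candidate = local_search(neighborhoods[k](best), cost)
            if cost(candidate) < best_cost:
                best, best_cost = candidate, cost(candidate)
                k = 0                               # restart from the first neighborhood
            else:
                k += 1                              # try a larger neighborhood
    return best, best_cost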
Procedia PDF Downloads 270
1241 Resilience-Based Emergency Bridge Inspection Routing and Repair Scheduling under Uncertainty
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway network systems play a vital role in disaster response for disaster-damaged areas. Damaged bridges in such network systems can impede disaster response by disrupting the transportation of rescue teams or humanitarian supplies. Therefore, emergency inspection and repair of bridges, to quickly collect damage information and recover the functionality of highway networks, is of paramount importance to disaster response. A widely used measure of a network's capability to recover from disasters is resilience. To enhance highway network resilience, many studies have developed various repair scheduling methods for the prioritization of bridge-repair tasks. These methods assume that repair activities are performed after the damage to a highway network is fully understood via inspection, although inspecting all bridges in a regional highway network may take days, leading to significant delays in repairing bridges. In reality, emergency repair activities can commence as soon as the damage data of some bridges that are crucial to emergency response are obtained. Given that emergency bridge inspection and repair (EBIR) activities are executed simultaneously in the response phase, real-time interactions between these activities can occur: the blockage of highways due to repair activities can affect inspection routes, which in turn have an impact on emergency repair scheduling by providing real-time information on bridge damage. However, the impact of such interactions on the optimal emergency inspection routes (EIR) and emergency repair schedules (ERS) has not been discussed in prior studies. To overcome the aforementioned deficiencies, this study develops a routing and scheduling model for EBIR that accounts for real-time inspection-repair interactions to maximize highway network resilience. A stochastic, time-dependent integer program is proposed for the complex, real-time interacting EBIR problem, given multiple inspection and repair teams at locations set post-disaster. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. Computational tests are performed using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that the simultaneous implementation of bridge inspection and repair activities can significantly improve highway network resilience. Moreover, the deployment of inspection and repair teams should match each other, and the network resilience will not be improved once the unilateral increase in inspection teams or repair teams exceeds a certain level. This study contributes to both knowledge and practice. First, the developed mathematical model makes it possible to capture the impact of real-time inspection-repair interactions on inspection routing and repair scheduling and to efficiently derive optimal EIR and ERS on a large and complex highway network. Moreover, this study contributes to the organizational dimension of highway network resilience by providing optimal strategies for highway bridge management. With the decision support tool, disaster managers are able to identify the most critical bridges for disaster management and make decisions on proper inspection and repair strategies to improve highway network resilience.
Keywords: disaster management, emergency bridge inspection and repair, highway network, resilience, uncertainty
Procedia PDF Downloads 109
1240 Detecting and Secluding Route Modifiers by Neural Network Approach in Wireless Sensor Networks
Authors: C. N. Vanitha, M. Usha
Abstract:
In a real-world scenario, the viability of sensor networks has been proved by standardizing the technologies. Wireless sensor networks are vulnerable to both electronic and physical security breaches because of their deployment in remote, distributed, and inaccessible locations. Compromised sensor nodes send malicious data to the base station, and thus the total network effectiveness may be compromised. To detect and seclude route modifiers, a neural-network-based Pattern Learning Predictor (PLP) is presented. This algorithm checks the data sensed at any node against the present and previous patterns obtained from the en-route nodes. The eminence of any node is updated according to its predicted and reported patterns. This paper propounds a solution not only to detect the route modifiers, but also to seclude the malevolent nodes from the network. The simulation results prove the effective performance of the network with the presented methodology in terms of energy level, routing and various network conditions.
Keywords: neural networks, pattern learning, security, wireless sensor networks
Procedia PDF Downloads 404
1239 Data Analytics in Hospitality Industry
Authors: Tammy Wee, Detlev Remy, Arif Perdana
Abstract:
In recent years, data analytics has become the buzzword in the hospitality industry. The hospitality industry is another example of a data-rich industry that has yet to fully benefit from the insights of data analytics. Effective use of data analytics can change how hotels operate, market and position themselves competitively in the hospitality industry. However, at the moment, the data obtained by individual hotels remain under-utilized. This research is a preliminary study of data analytics in the hospitality industry, using an in-depth face-to-face interview at one hotel as the start of a multi-level research project. The main case study of this research, hotel A, is a chain brand of an international hotel group that has been systematically gathering and collecting data on its own customers for the past five years. The data collection points run from the moment a guest books a room until the guest leaves the hotel premises, and include room reservation, spa booking, and catering. Although hotel A has been gathering data intelligence on its customers for some time, it has yet to utilize the data to its fullest potential, and it is aware of this limitation as well as of the potential of data analytics. Currently, the utilization of data analytics in hotel A is limited to the area of customer service improvement, namely enhancing the personalization of service for each individual customer. Hotel A is able to utilize the data to improve and enhance its service, which in turn encourages repeat customers. According to hotel A, 50% of its guests returned to the hotel, and 70% extended their stays, because of the personalized service. Apart from using data analytics for enhancing customer service, hotel A also uses the data in marketing: it uses data analytics to predict or forecast changes in consumer behavior and demand by tracking its guests' booking preferences, payment preferences and demand shifts between properties. However, hotel A admitted that the data it has been collecting are not fully utilized, due to two challenges. The first challenge is that the data are not clean: at the moment, the data in one guest profile are meaningful for one department in the hotel but meaningless for another. Cleaning up the data and getting the standards correct for usage by different departments are among the main concerns of hotel A. The second challenge is the non-integrated internal systems: the internal systems used by hotel A do not integrate with each other well, limiting the ability to collect data systematically, and hotel A is considering another system to replace the current one for more comprehensive data collection. Hotel proprietors recognize the potential of data analytics, as reported in this research; however, the current challenges of implementing a system to collect data come with a cost. This research has identified the current utilization of data analytics and the challenges faced when it comes to implementing data analytics.
Keywords: data analytics, hospitality industry, customer relationship management, hotel marketing
Procedia PDF Downloads 180
1238 A Fuzzy Logic Based Health Assessment Platform
Authors: J. Al-Dmour, A. Sagahyroon, A. Al-Ali, S. Abusnana
Abstract:
Radio Frequency Identification (RFID) systems have emerged as one of the possible valuable solutions that can be utilized in healthcare systems. Nowadays, RFID tags are available with built-in human vital sign sensors, such as body temperature, blood pressure, heart rate, blood sugar level and blood oxygen saturation. This work proposes the design, implementation, and testing of an integrated mobile RFID-based health care system. The system consists of a wireless mobile vital signs data acquisition unit (RFID-DAQ) integrated with a fuzzy-logic-based software algorithm to monitor and assess patients' conditions. The system was implemented and tested in the Rashid Center for Diabetes and Research, Ajman, UAE. System testing results are compared with the Modified Early Warning Score (MEWS) system that is currently used in practice. We demonstrate that the implemented system exhibits an accuracy level that is comparable to, and sometimes better than, the widely adopted MEWS system.
Keywords: healthcare, fuzzy logic, MEWS, RFID
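As a toy illustration of the fuzzy-logic scoring idea (the membership breakpoints and rules below are invented for illustration and are not the clinical rule base):

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assess(heart_rate, temperature):
    hr_normal = tri(heart_rate, 50, 75, 100)        # membership degrees
    hr_high = tri(heart_rate, 90, 130, 170)
    temp_normal = tri(temperature, 36.0, 36.8, 37.6)
    temp_fever = tri(temperature, 37.2, 38.5, 41.0)
    # toy rules: AND via min, rule aggregation via max
    risk_low = min(hr_normal, temp_normal)
    risk_high = max(min(hr_high, temp_fever), min(hr_high, temp_normal))
    # crisp score via weighted-average defuzzification
    return (0.1 * risk_low + 0.9 * risk_high) / max(risk_low + risk_high, 1e-9)

print(assess(heart_rate=120, temperature=38.7))     # -> 0.9, i.e., high risk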
Procedia PDF Downloads 348
1237 Comparison of Loosely Coupled and Tightly Coupled INS/GNSS Architecture for Guided Rocket Navigation System
Authors: Rahmat Purwoko, Bambang Riyanto Trilaksono
Abstract:
This paper compares two INS/GNSS architectures, namely loosely coupled and tightly coupled, using hardware-in-the-loop simulation on the Guided Missile RKX-200 rocket model. The tightly coupled INS/GNSS architecture requires the pseudo-range, the pseudo-range rate, and the position and velocity of each satellite in the constellation from GPS (Global Positioning System) measurements, whereas the loosely coupled architecture uses the position and velocity estimated by the GNSS receiver. Both architectures also require angular rate and specific force measurements from an IMU (Inertial Measurement Unit). The loosely coupled architecture is designed using a 15-state Kalman filter and the tightly coupled architecture using a 17-state Kalman filter. The integration algorithm is computed in the ECEF frame. The navigation system is implemented on a Zedboard All Programmable SoC.
Keywords: kalman filter, loosely coupled, navigation system, tightly coupled
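The heart of either fusion architecture is the Kalman measurement update; a toy loosely coupled update is sketched below with a reduced 6-state error vector (the paper's filters carry 15 and 17 states), and all numbers are illustrative.

import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update correcting the INS error state."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.zeros(6)                                     # [dpos(3), dvel(3)] error state
P = 10.0 * np.eye(6)
H = np.eye(6)                                       # GNSS observes position/velocity errors
R = np.diag([2.0] * 3 + [0.2] * 3)                  # GNSS measurement noise (illustrative)
z = np.array([1.5, -0.8, 2.1, 0.05, -0.02, 0.01])   # INS-minus-GNSS residual
x, P = kf_update(x, P, z, H, R)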
Procedia PDF Downloads 309
1236 Hierarchical Tree Long Short-Term Memory for Sentence Representations
Authors: Xiuying Wang, Changliang Li, Bo Xu
Abstract:
A fixed-length feature vector is required by many machine learning algorithms in the NLP field. Word embeddings have been very successful at learning lexical information. However, they cannot capture the compositional meaning of sentences, which prevents them from a deeper understanding of language. In this paper, we introduce a novel hierarchical tree long short-term memory (HTLSTM) model that learns vector representations for sentences of arbitrary syntactic type and length. We propose to split a sentence into three hierarchies: short phrase, long phrase and full sentence level. The HTLSTM model gives our algorithm the potential to fully consider the hierarchical information and long-term dependencies of language. We design experiments on both English and Chinese corpora to evaluate our model on the sentiment analysis task. The results show that our model significantly outperforms several existing state-of-the-art approaches.
Keywords: deep learning, hierarchical tree long short-term memory, sentence representation, sentiment analysis
Procedia PDF Downloads 349
1235 The Customization of 3D Last Form Design Based on Weighted Blending
Authors: Shih-Wen Hsiao, Chu-Hsuan Lee, Rong-Qi Chen
Abstract:
The last is regarded as the critical foundation of shoe design and development. Not only does the last relate to the comfort of shoe wearing, but it also aids the production of shoe styling and manufacturing. In order to enhance the efficiency and application of last development, a computer-aided methodology for customized last form design is proposed in this study. Reverse engineering is mainly applied to the process of scanning the last form. Then energy minimization is used for the revision of surface continuity, and the surface of the last is reconstructed from the feature curves of the scanned last. Once the surface of a last is reconstructed, based on the proposed last form reconstruction module, the weighted arithmetic mean method is applied to the shape-morphing calculation, which differs from grading applied to the control mesh of the last, and a subdivision algorithm is used to create the surface of the last mesh; thus, feet-fitting 3D last forms of different sizes are generated from the original form features with their functions retained. Finally, the practicability of the proposed methodology is verified through the case studies presented.
Keywords: 3D last design, customization, reverse engineering, weighted morphing, shape blending
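The weighted-mean morphing step reduces to a vertex-wise blend of topologically aligned last meshes, sketched below (the meshes and weights are stand-ins):

import numpy as np

def blend_lasts(vertex_sets, weights):
    """Weighted arithmetic mean of aligned last meshes.
    vertex_sets: (k, n, 3) array of k lasts with n corresponding vertices."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                                    # normalize blending weights
    return np.einsum("k,knj->nj", w, np.asarray(vertex_sets))

# toy example: morph 70% toward last A and 30% toward last B
last_a = np.random.rand(500, 3)                     # stand-ins for scanned lasts
last_b = last_a + 0.05
new_last = blend_lasts([last_a, last_b], weights=[0.7, 0.3])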
Procedia PDF Downloads 340
1234 Amharic Text News Classification Using Supervised Learning
Authors: Misrak Assefa
Abstract:
The Amharic language is the second most widely spoken Semitic language in the world, and a large volume of news written in Amharic is available on the web. Searching the web for useful documents on a specific topic written in Amharic is a challenging task; hence, document categorization is required for managing and filtering important information. In the classification of Amharic text news, there is still a gap in the domains of information that have been addressed. This study attempts to design an automatic Amharic news classifier using a supervised learning mechanism on four previously untouched classes. For this research, 4,182 news articles were used. The Naive Bayes (NB) and decision tree (J48) algorithms were used to classify the given Amharic dataset, and k-fold cross-validation is used to estimate the accuracy of the classifiers. The results show that these algorithms are applicable to Amharic news categorization: the best average accuracies achieved by the J48 decision tree and naive Bayes are 95.2345% and 94.6245%, respectively, using three categories. This research indicates that a typical decision tree algorithm is more applicable to Amharic news categorization.
Keywords: text categorization, supervised machine learning, naive Bayes, decision tree
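A scikit-learn sketch of the naive Bayes pipeline with k-fold cross-validation is shown below; the tiny corpus is a placeholder for the 4,182 preprocessed articles, and TF-IDF weighting is an assumption about the feature representation.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts = ["placeholder Amharic article one", "placeholder Amharic article two",
         "placeholder Amharic article three", "placeholder Amharic article four"] * 50
labels = ["sport", "politics", "business", "health"] * 50

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
scores = cross_val_score(model, texts, labels, cv=10)   # 10-fold cross-validation
print(scores.mean())                                    # a DecisionTreeClassifier
                                                        # would stand in for J48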
Procedia PDF Downloads 211
1233 A Model to Assess Sustainability Using Multi-Criteria Analysis and Geographic Information Systems: A Case Study
Authors: Antonio Boggia, Luisa Paolotti, Gianluca Massei, Lucia Rocchi, Elaine Pace, Maria Attard
Abstract:
The aim of this paper is to present a methodology and a computer model for sustainability assessment based on the integration of Multi-criteria Decision Analysis (MCDA) with a Geographic Information System (GIS). It presents the result of a study on the implementation of a model for measuring sustainability, to address policy actions for the improvement of sustainability at the territorial level. The aim is to rank areas in order to understand the specific technical and/or financial support that is required to develop sustainable growth. Assessing sustainable development is a multidimensional problem: economic, social and environmental aspects have to be taken into account at the same time. The tool for a multidimensional representation is a proper set of indicators, and this set must be integrated into a model, that is, an assessment methodology, to be used for measuring sustainability. The model, developed by the Environmental Laboratory of the University of Perugia, is called GeoUmbriaSUIT. It is a calculation procedure developed as a plugin for the open-source GIS software QuantumGIS. The multi-criteria method used within GeoUmbriaSUIT is the TOPSIS algorithm (Technique for Order Preference by Similarity to Ideal Solution), which defines a ranking based on the distance from the worst point and the closeness to an ideal point, for each of the criteria used. For the sustainability assessment procedure, GeoUmbriaSUIT uses a geographic vector file where the graphic data represent the study area and the single evaluation units within it (the alternatives, e.g. the regions of a country, or the municipalities of a region), while the alphanumeric data (attribute table) describe the environmental, economic and social aspects related to the evaluation units by means of a set of indicators (criteria). The algorithm available in the plugin allows the indicators representing the three dimensions of sustainability to be treated individually, and three different indices to be computed: an environmental index, an economic index and a social index. The graphic output of the model allows for an integrated assessment of the three dimensions, avoiding aggregation. The separate indices and graphic output make GeoUmbriaSUIT a readable and transparent tool, since it does not produce an aggregate sustainability index as the final result of the calculations, which is often cryptic and difficult to interpret. In addition, it is possible to develop a 'back analysis', able to explain the positions obtained by the alternatives in the ranking, based on the criteria used. The case study presented is an assessment of the level of sustainability in the six regions of Malta, an island state in the middle of the Mediterranean Sea and the southernmost member of the European Union. The results show that the integration of MCDA and GIS is an adequate approach for sustainability assessment. In particular, the implemented model is able to provide easy-to-understand results, which is a very important condition for a sound decision support tool, since most of the time decision makers are not experts and need understandable output. In addition, the evaluation path is traceable and transparent.
Keywords: GIS, multi-criteria analysis, sustainability assessment, sustainable development
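A minimal numpy implementation of the TOPSIS ranking step described above (the three-region matrix and weights are invented for illustration):

import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by relative closeness to the ideal point.
    benefit[j] = True if higher values of criterion j are better."""
    V = (X / np.linalg.norm(X, axis=0)) * weights   # normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)             # higher = closer to the ideal

# toy: 3 evaluation units x 3 indicators (environmental, economic, social)
X = np.array([[0.6, 70.0, 0.8], [0.4, 85.0, 0.6], [0.9, 60.0, 0.7]])
closeness = topsis(X, np.array([0.4, 0.3, 0.3]), np.array([True, True, True]))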
Procedia PDF Downloads 289
1232 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on the climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have great potential for carbon sequestration; it can therefore be integrated into mechanisms for reducing carbon emissions. In sub-Saharan Africa in particular, the constraint is the lack of information about both the area under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies 'what is where?' as a first step toward the quantification of carbon stock in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost, and RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for data used in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful, decision-supporting information from this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is the data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels enabling efficient access and analysis. A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018 to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing the different agroforestry systems, together with qualitative interviews. A multi-temporal supervised image classification will then be performed with a random forest algorithm, with the field data used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) the characteristics of the different systems (main species, management, area, etc.), and (iii) an assessment report on the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
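The two analysis stages, unsupervised NDVI stratification followed by random forest classification with field data, can be sketched as follows; array shapes, cluster/class counts, and labels are placeholders for the data-cube exports and the 2020 field labels.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

ndvi = np.random.rand(10000, 9)                     # placeholder (pixels x time steps)

# stage 1: unsupervised stratification of the NDVI time series into 15 strata
strata = KMeans(n_clusters=15, random_state=0, n_init=10).fit_predict(ndvi)

# stage 2: supervised mapping once field labels exist (placeholder labels here)
train_idx = np.random.choice(len(ndvi), 500, replace=False)
field_labels = np.random.randint(0, 4, size=500)    # e.g., agroforestry system types
rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
rf.fit(ndvi[train_idx], field_labels)
systems_map = rf.predict(ndvi)                      # per-pixel agroforestry class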
Procedia PDF Downloads 145
1231 DC Bus Voltage Ripple Control of Photo Voltaic Inverter in Low Voltage Ride-Trough Operation
Authors: Afshin Kadri
Abstract:
The use of Renewable Energy Resources (RES) as a type of DG unit is growing in distribution systems. The connection of these generation units to existing AC distribution systems changes the structure and some of the operational aspects of these grids. Most RES require power-electronics-based interfaces for connection to AC systems, and these interfaces consist of at least one DC/AC conversion unit. Nowadays, grid-connected inverters must be able to support the grid under voltage sag conditions. Under such conditions, two curves specify the required magnitude of the reactive current component as a function of the voltage drop and the minimum time for which the unit must remain connected to the grid. This feature is named low-voltage ride-through (LVRT). Implementing this feature causes problems in the operation of the inverter; among them are an increased amplitude of the high-frequency components of the injected current and, for inverters connected directly to photovoltaic panels, operation away from the maximum power point. The important phenomenon in these conditions is ripple in the DC bus voltage, which affects the operation of the inverter both directly and indirectly. The losses of the DC bus capacitors, which are electrolytic capacitors, increase their temperature and decrease their lifespan. In addition, if the inverter is connected directly to the photovoltaic panels and has the duty of maximum power point tracking, these ripples cause oscillations around the operating point and decrease the generated energy. Using a bidirectional converter in the DC bus, which works as a buck and boost converter and transfers the ripples to its own DC bus, is the traditional method of eliminating these ripples. Despite eliminating the ripples in the DC bus, this method cannot solve the reliability problem, because it still uses an electrolytic capacitor in its DC bus. In this work, a control method is proposed which uses the bidirectional converter as the fourth leg of the inverter and eliminates the DC bus ripples by injecting unbalanced currents into the grid. Moreover, the proposed method works based on constant power control; in this way, in addition to supporting the amplitude of the grid voltage, it stabilizes the grid frequency by injecting active power. The proposed method can also eliminate the DC bus ripples during deep voltage drops, which would otherwise increase the amplitude of the reference current beyond the nominal current of the inverter. The amplitude of the injected current for the faulty phases in these conditions is kept at the nominal value, and its phase, together with the phases and amplitudes of the other phases, is adjusted so that, in the end, the ripples in the DC bus are eliminated, although the generated power decreases.
Keywords: renewable energy resources, voltage drop value, DC bus ripples, bidirectional converter
Procedia PDF Downloads 76
1230 An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers
Authors: Mayank Gupta, Siba Prasad Samal, Vasu Kakkirala
Abstract:
The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into users' preferences. Instead of storing plain information, classifying the different aspects of browsing, such as bookmarks, history, and downloads, into useful categories would improve and enhance the user's experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources; this has security constraints and may miss contextual data during classification. On-device classification solves many such problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, for reducing dependency on cloud connectivity, and for better privacy and security. This approach provides more relevant results than current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user's profile. This paper proposes a naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser's rendering engine. This DOM data is dynamic, contextual and secure data that cannot be replicated. The proposal extracts different features of the webpage, which are run through an algorithm to classify the page into multiple categories. A naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms, such as support vector machines and neural networks: naive Bayes classification requires a small memory footprint and little computation, suitable for the smartphone environment. This solution has a feature to partition the model into multiple chunks, which in turn facilitates lower memory usage instead of loading a complete model. Classification of webpages through the integrated engine is faster, more relevant and more energy efficient than other standalone on-device solutions. The classification engine has been tested on Samsung Z3 Tizen hardware; the engine is integrated into the Tizen browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. This cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% less memory usage (due to the partitioning method) and 24% less power consumption in comparison with a standalone solution. This solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to use the classification for differential use cases to enhance the browsing experience.
Keywords: chromium, lightweight engine, mobile computing, Naive Bayes, Tizen, web browser, webpage classification
Procedia PDF Downloads 163
1229 The Fit of the Partial Pair Distribution Functions of BaMnFeF7 Fluoride Glass Using the Buckingham Potential by the Hybrid RMC Simulation
Authors: Sidi Mohamed Mesli, Mohamed Habchi, Arslane Boudghene Stambouli, Rafik Benallal
Abstract:
The BaMnMF7 (M = Fe, V) transition metal fluoride glasses (assuming isomorphous replacement) have been structurally studied through the simultaneous simulation of their neutron diffraction patterns by Reverse Monte Carlo (RMC) and by Hybrid Reverse Monte Carlo (HRMC) analysis. The latter is applied to remedy the problem of the artificial satellite peaks that appear in the partial pair distribution functions (PDFs) obtained by RMC simulation. The HRMC simulation is an extension of the RMC algorithm which introduces an energy penalty term (potential) into the acceptance criterion. The idea of this work is to apply the Buckingham potential to the title glass while ignoring the van der Waals term, in order to fit the partial pair distribution functions and give the most realistic features possible. When displaying the partial PDFs, we suggest that the Buckingham potential is useful for describing average correlations, especially in similar interactions.
Keywords: fluoride glasses, RMC simulation, hybrid RMC simulation, Buckingham potential, partial pair distribution functions
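A sketch of the truncated Buckingham energy term and a penalized Metropolis-style HRMC acceptance test is given below; the A and rho constants, the penalty weighting, and the exact acceptance form are illustrative assumptions.

import numpy as np

def buckingham(r, A, rho):
    """Repulsive Buckingham term with the van der Waals -C/r^6 part dropped,
    as in the text: V(r) = A * exp(-r / rho)."""
    return A * np.exp(-r / rho)

def hrmc_accept(chi2_old, chi2_new, E_old, E_new, w, kT, rng):
    """HRMC criterion: chi-square misfit plus a weighted energy penalty."""
    cost_old = chi2_old + w * E_old / kT
    cost_new = chi2_new + w * E_new / kT
    return cost_new < cost_old or rng.random() < np.exp(cost_old - cost_new)

rng = np.random.default_rng(0)
r = np.linspace(1.8, 6.0, 100)                      # pair distances in angstroms
E = buckingham(r, A=1822.0, rho=0.30).sum()         # illustrative constants only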
Procedia PDF Downloads 503
1228 Philippine Site Suitability Analysis for Biomass, Hydro, Solar, and Wind Renewable Energy Development Using Geographic Information System Tools
Authors: Jara Kaye S. Villanueva, M. Rosario Concepcion O. Ang
Abstract:
For the past few years, the Philippines has depended for most of its energy on oil, coal, and other fossil fuels. According to the Department of Energy (DOE), the dominance of coal in the energy mix will continue until the year 2020. The expanding energy needs in the country have led to increasing efforts to promote and develop renewable energy. This research is part of the government initiative in preparation for renewable energy development and expansion in the country. The Philippine Renewable Energy Resource Mapping from Light Detection and Ranging (LiDAR) Surveys is a three-year government project which aims to assess and quantify the renewable energy potential of the country and to put it into usable maps. This study focuses on the site suitability analysis of four renewable energy sources: biomass (coconut, corn, rice, and sugarcane), hydro, solar, and wind energy. The site assessment is a key component in determining and assessing the most suitable locations for the construction of renewable energy power plants. The method maximizes the use of technical resource assessment while also taking the environmental, social, and accessibility aspects into account in identifying potential sites, by utilizing and integrating two different methods: Multi-Criteria Decision Analysis (MCDA) and Geographic Information System (GIS) tools. For the MCDA, the Analytic Hierarchy Process (AHP) is employed to determine the parameters needed for the suitability analysis. To structure these site suitability parameters, various experts from different fields were consulted: scientists, policy makers, environmentalists, and industrialists. A well-represented group of consultees is needed to avoid bias in the output parameters of the hierarchy levels and weight matrices. AHP pairwise matrix computation is utilized to derive the weights per level from the experts' feedback, whereas the threshold values derived from the related literature, international studies, and government laws were reviewed with energy specialists from the DOE. Geospatial analysis using GIS tools translates these decision support outputs into visual maps. In particular, this study uses Euclidean distance to compute the distance values of each parameter, a fuzzy membership algorithm to normalize the Euclidean distance output, and the weighted overlay tool for the aggregation of the layers. Using the natural breaks algorithm, the suitability ratings of each map are classified into 5 discrete categories of suitability index: (1) not suitable, (2) least suitable, (3) suitable, (4) moderately suitable, and (5) highly suitable. In this method, values are grouped into classes of similar values, with subdivisions set where there are large differences between boundary values. The results show that, over the entire Philippine area of responsibility, biomass has the highest suitability rating, with rice the most suitable at a 75.76% suitability percentage, whereas wind has the least suitability percentage with a score of 10.28%. Solar and hydro fall between the two, with suitability values of 28.77% and 21.27%.
Keywords: site suitability, biomass energy, hydro energy, solar energy, wind energy, GIS
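The AHP weighting step reduces to extracting the principal eigenvector of each pairwise comparison matrix and checking its consistency; the sketch below uses an invented 3x3 judgment matrix, not the experts' actual matrices.

import numpy as np

def ahp_weights(M):
    """Criteria weights from a pairwise comparison matrix, with consistency ratio."""
    vals, vecs = np.linalg.eig(M)
    i = np.argmax(vals.real)
    w = np.abs(vecs[:, i].real)
    w /= w.sum()                                    # principal eigenvector, normalized
    ci = (vals.real[i] - len(M)) / (len(M) - 1)     # consistency index
    return w, ci / 0.58                             # Saaty's random index for n = 3

M = np.array([[1.0, 3.0, 5.0],                      # invented expert judgments
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights, cr = ahp_weights(M)                        # cr < 0.1 is acceptably consistent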
Procedia PDF Downloads 149
1227 Development of Quasi Real-Time Comprehensive System for Earthquake Disaster
Authors: Zhi Liu, Hui Jiang, Jin Li, Kunhao Chen, Langfang Zhang
Abstract:
Fast acquisition of seismic information and accurate assessment of earthquake disasters are the key problems for emergency rescue after a destructive earthquake. In order to meet the requirements of earthquake emergency response and rescue for cities and counties, a quasi real-time comprehensive evaluation system for earthquake disasters has been developed. Based on the monitoring data of a Micro-Electro-Mechanical Systems (MEMS) strong-motion network, the structure database of a county area, and the real-time disaster information reported by mobile terminals after an earthquake, a fragility analysis method and a dynamic correction algorithm are combined in the developed system. Real-time evaluation of the seismic disaster in the county region is thereby realized, providing a scientific basis for seismic emergency command, rescue, and decision support.
Keywords: quasi real-time, earthquake disaster data collection, MEMS accelerometer, dynamic correction, comprehensive evaluation
Procedia PDF Downloads 213
1226 Optimal Maintenance Policy for a Three-Unit System
Authors: A. Abbou, V. Makis, N. Salari
Abstract:
We study the condition-based maintenance (CBM) problem for a system subject to stochastic deterioration. The system is composed of three units (or modules): (i) Module 1's deterioration follows a Markov process with two operational states and one failure state; the operational states are partially observable through periodic condition monitoring. (ii) Module 2's deterioration follows a Gamma process with a known failure threshold; the deterioration level of this module is fully observable through periodic inspections. (iii) Only the operating-age information is available for Module 3, whose lifetime has a general distribution. A CBM policy prescribes when to initiate a maintenance intervention and which modules to repair during the intervention. Our objective is to determine the optimal CBM policy minimizing the long-run expected average cost of operating the system. This is achieved by formulating a Markov decision process (MDP) and developing a value iteration algorithm for solving the MDP. We provide numerical examples illustrating the cost-effectiveness of the optimal CBM policy through a comparison with heuristic policies commonly found in the literature.
Keywords: reliability, maintenance optimization, Markov decision process, heuristics
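A compact sketch of value iteration for the average-cost criterion (in its relative form) is shown below on a toy repair MDP; the states, costs, and transition matrices are invented stand-ins for the three-module model.

import numpy as np

def relative_value_iteration(P, c, tol=1e-8, max_iter=10000):
    """P[a]: transition matrix under action a; c[a][s]: one-step cost.
    Returns the long-run average cost (gain) and a greedy policy."""
    n_actions, n_states = len(P), P[0].shape[0]
    h = np.zeros(n_states)
    for _ in range(max_iter):
        Q = np.array([c[a] + P[a] @ h for a in range(n_actions)])
        h_new = Q.min(axis=0)
        g = h_new[0]                                # normalize at a reference state
        h_new = h_new - g
        if np.max(np.abs(h_new - h)) < tol:
            break
        h = h_new
    return g, Q.argmin(axis=0)

# toy 3-state, 2-action example ("do nothing" vs. "repair")
P = [np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.2], [0.0, 0.0, 1.0]]),
     np.array([[1.0, 0.0, 0.0], [1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])]
c = [np.array([0.0, 2.0, 50.0]), np.array([5.0, 5.0, 20.0])]
gain, policy = relative_value_iteration(P, c)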
Procedia PDF Downloads 219
1225 Comparison Between Genetic Algorithms and Particle Swarm Optimization Optimized Proportional Integral Derivative and PSS for Single Machine Infinite System
Authors: Benalia Nadia, Zerzouri Nora, Ben Si Ali Nadia
Abstract:
Among the many different modern heuristic optimization methods, genetic algorithms (GA) and the particle swarm optimization (PSO) technique have been attracting a lot of interest. The GA has gained popularity in academia and business mostly owing to its simplicity, its intuitiveness, and its ability to solve the highly nonlinear mixed-integer optimization problems that are typical of complex engineering systems. The mechanics of the PSO methodology, a relatively recent heuristic search tool, are modeled on the swarming or cooperative behavior of biological groups. It is suitable to compare the performance of the two techniques, since they both aim to solve a particular objective function but make use of distinct computing methods. In this article, the PSO and GA optimization approaches are used for the parameter tuning of a power system stabilizer and a proportional-integral-derivative regulator. Load angle and rotor speed variations in the single machine infinite bus system are used to measure the performance of the suggested solutions.
Keywords: SMIB, genetic algorithm, PSO, transient stability, power system stabilizer, PID
Procedia PDF Downloads 84
1224 Modified Naive Bayes-Based Prediction Modeling for Crop Yield Prediction
Authors: Kefaya Qaddoum
Abstract:
Most greenhouse growers desire a determined amount of yield in order to accurately meet market requirements. The purpose of this paper is to model a simple but often satisfactory supervised classification method. The original naive Bayes has a serious weakness: it produces redundant predictors. In this paper, a regularization technique is utilized to obtain a computationally efficient classifier based on naive Bayes. The suggested construction, utilizing an L1 penalty, is capable of clearing redundant predictors, where a modification of the LARS algorithm is devised to solve this problem, making the method applicable to a wide range of data. In the experimental section, a study is conducted to examine the effect of redundant and irrelevant predictors and to test the method on a WSG dataset of tomato yields, where there are many more predictors than data points and the urgent need to predict weekly yield is the goal of this approach. Finally, the modified approach is compared with several naive Bayes variants and other classification algorithms (SVM and kNN), and is shown to be fairly good.
Keywords: tomato yield prediction, naive Bayes, redundancy, WSG
Procedia PDF Downloads 237
1223 A Novel Meta-Heuristic Algorithm Based on Cloud Theory for Redundancy Allocation Problem under Realistic Condition
Authors: H. Mousavi, M. Sharifi, H. Pourvaziri
Abstract:
The Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem which focuses on determining an optimal assignment of components in a system design. In this paper, to be more practical, we have considered the problem of redundancy allocation for a series system with interval-valued component reliabilities. Therefore, during the search process, the reliability of each component is considered as a stochastic variable with lower and upper bounds. In order to optimize the problem, we propose a simulated annealing based on cloud theory (CBSAA). Monte Carlo simulation (MCS) is also embedded in the CBSAA to handle the random component reliabilities. This novel approach has been investigated through numerical examples, and the experimental results have shown that the CBSAA combined with MCS is an efficient tool for solving the RAP for systems with interval-valued component reliabilities.
Keywords: redundancy allocation problem, simulated annealing, cloud theory, Monte Carlo simulation
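The MCS step embedded in the annealing loop can be sketched as follows: sample each component's reliability uniformly from its interval and estimate the reliability of a series system of parallel subsystems (the intervals and redundancy levels below are invented).

import numpy as np

def mc_system_reliability(bounds, redundancy, n_samples=5000, seed=0):
    """Monte Carlo estimate for a series arrangement of parallel subsystems.
    bounds[i] = (lo, hi) interval reliability of component type i;
    redundancy[i] = number of parallel copies in subsystem i."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        r_sys = 1.0
        for (lo, hi), k in zip(bounds, redundancy):
            r = rng.uniform(lo, hi)                 # sampled component reliability
            r_sys *= 1.0 - (1.0 - r) ** k           # parallel subsystem survives
        total += r_sys
    return total / n_samples                        # feeds the CBSAA objective

bounds = [(0.80, 0.90), (0.70, 0.85), (0.90, 0.95)]
print(mc_system_reliability(bounds, redundancy=[2, 3, 1]))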
Procedia PDF Downloads 412