Search results for: node classification
1425 Simplified 3R2C Building Thermal Network Model: A Case Study
Authors: S. M. Mahbobur Rahman
Abstract:
Whole-building energy simulation models are widely used for predicting future energy consumption, diagnosing performance, and optimizing control. The black-box approach to building energy modeling has been heavily studied in the past decade. The thermal response of a building can also be modeled using a network of interconnected resistors (R) and capacitors (C) at each node, called an R-C network. In this study, a model building, Case 600, as described in the “Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs” (ASHRAE Standard 140), is studied along with a 3R2C thermal network model and the ASHRAE clear-sky solar radiation model. Although a building energy model involves two important building components, the envelope and the internal mass, the effect of the building's internal mass is not considered in this study. All characteristic parameters of the building envelope are evaluated as in Case 600. Finally, monthly building energy consumption from the thermal network model is compared with a simple-box energy model and agrees within reasonable accuracy. From the results, a 0.6-9.4% variation in monthly energy consumption is observed because of the south-facing windows.
Keywords: ASHRAE case study, clear sky solar radiation model, energy modeling, thermal network model
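As a concrete illustration of the R-C analogy described above, the sketch below steps a generic 3R2C wall (outdoor air - R1 - node 1/C1 - R2 - node 2/C2 - R3 - indoor air) forward in time with explicit Euler. The network topology, parameter values, and weather profile are illustrative assumptions, not the Case 600 inputs.

```python
import numpy as np

# Minimal 3R2C wall: outdoor air -- R1 -- node T1 (C1) -- R2 -- node T2 (C2) -- R3 -- indoor air.
# All parameter values below are illustrative, not the Case 600 values.
R1, R2, R3 = 0.05, 0.20, 0.05      # thermal resistances [K/W]
C1, C2 = 5e5, 5e5                  # thermal capacitances [J/K]
T_in = 20.0                        # fixed indoor setpoint [deg C]
dt = 60.0                          # time step [s]

def step(T1, T2, T_out):
    """Advance the two capacitance-node temperatures by one forward-Euler step."""
    dT1 = ((T_out - T1) / R1 + (T2 - T1) / R2) / C1
    dT2 = ((T1 - T2) / R2 + (T_in - T2) / R3) / C2
    return T1 + dt * dT1, T2 + dt * dT2

# One day of a crude sinusoidal outdoor temperature profile.
T1, T2, q_in = 15.0, 18.0, []
for k in range(int(24 * 3600 / dt)):
    T_out = 10.0 + 8.0 * np.sin(2 * np.pi * k * dt / 86400)
    T1, T2 = step(T1, T2, T_out)
    q_in.append((T2 - T_in) / R3)   # heat flux into the zone [W]
print(f"peak heat gain: {max(q_in):.1f} W")
```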
Procedia PDF Downloads 146
1424 Using Hidden Markov Chain for Improving the Dependability of Safety-Critical Wireless Sensor Networks
Authors: Issam Alnader, Aboubaker Lasebae, Rand Raheem
Abstract:
Wireless sensor networks (WSNs) are distributed network systems used in a wide range of applications, including safety-critical systems. The latter provide critical services, often concerned with human life or assets. Therefore, ensuring the dependability requirements of safety-critical systems is of paramount importance. The purpose of this paper is to utilize the Hidden Markov Model (HMM) to extend the service availability of WSNs by increasing the time it takes a node to become obsolete, via optimal load balancing. We propose an HMM algorithm that, given a WSN, analyses and predicts undesirable situations, notably nodes dying unexpectedly or prematurely. We apply this technique to improve on C. Liu's algorithm, a scheduling-based algorithm which has served to improve the lifetime of WSNs. Our experiments show that our HMM technique improves the lifetime of the network by detecting nodes that die early and rebalancing their load. Our technique can also be used for diagnosis, providing maintenance warnings to WSN system administrators. Finally, it can be used to improve algorithms other than C. Liu's.
Keywords: wireless sensor networks, IoT, dependability of safety WSNs, energy conservation, sleep awake schedule
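The abstract does not spell out its HMM formulation, so the following minimal sketch shows the standard forward-algorithm machinery one could use to estimate whether a node is heading toward premature death; the two-state model, its probabilities, and the energy-drop observations are all hypothetical.

```python
import numpy as np

# Two hidden node states: 0 = healthy, 1 = overloaded (at risk of dying early).
# Observations: discretized residual-energy drop per round (0 = small, 1 = large).
A = np.array([[0.9, 0.1],     # state transition probabilities (illustrative)
              [0.3, 0.7]])
B = np.array([[0.8, 0.2],     # emission probabilities (illustrative)
              [0.25, 0.75]])
pi = np.array([0.95, 0.05])   # initial state distribution

def forward(obs):
    """Forward algorithm: returns P(observations) and the filtered state distribution."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by the new observation
    return alpha.sum(), alpha / alpha.sum()

_, posterior = forward([0, 1, 1, 1])    # three large energy drops in a row
print(f"P(node overloaded | observations) = {posterior[1]:.2f}")
```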
Procedia PDF Downloads 100
1423 A Kruskal Based Heuristic for the Application of Spanning Tree
Authors: Anjan Naidu
Abstract:
In this paper, we first discuss the minimum spanning tree and then use the Kruskal algorithm to obtain it. Based on the Kruskal algorithm, we propose a heuristic for an application of finding minimum cost using the concept of the spanning tree.
Keywords: minimum spanning tree, algorithm, heuristic, application, classification of Sub 97K90
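For reference, here is a minimal Kruskal implementation with a union-find structure, the standard way to obtain the minimum spanning tree the abstract builds on; the toy edge list is illustrative.

```python
def kruskal_mst(n, edges):
    """Return the minimum spanning tree edges of an n-node graph.
    edges: list of (weight, u, v) tuples with 0-based node ids."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # consider edges in nondecreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # endpoints are in different components
            parent[ru] = rv           # union the two components
            mst.append((w, u, v))
    return mst

edges = [(4, 0, 1), (8, 1, 2), (11, 1, 3), (2, 2, 3), (7, 0, 3)]
print(kruskal_mst(4, edges))          # total cost = sum of selected weights
```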
Procedia PDF Downloads 444
1422 Towards Learning Query Expansion
Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier
Abstract:
The steady growth in the size of textual document collections is a key progress-driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of the retained documents. This mainly relies on an accurate choice of the terms added to the initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in applying data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a supervised learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic-algorithm-based approach that explores the association rule space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. The results were better in terms of both MAP and NDCG. The main observation is that hybridizing text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is large scope for enhancing the proposed method.
Keywords: supervised learning, classification, query expansion, association rules
Procedia PDF Downloads 325
1421 Data Clustering in Wireless Sensor Network Implemented on Self-Organization Feature Map (SOFM) Neural Network
Authors: Krishan Kumar, Mohit Mittal, Pramod Kumar
Abstract:
Wireless sensor networks are among the most promising communication networks for monitoring remote environmental areas. In such a network, all the sensor nodes communicate with each other via radio signals. The sensor nodes are capable of sensing, data storage, and processing. They collect information from neighboring nodes and pass it to a particular node, where data collection and processing are done by data aggregation techniques. For data aggregation in the sensor network, a clustering technique is implemented by means of a self-organizing feature map (SOFM) neural network. Some of the sensor nodes are selected as cluster head nodes. Information is aggregated at the cluster head nodes from the non-cluster-head nodes and then transferred to the base station (or sink nodes). The aim of this paper is to manage the huge amount of data with the help of the SOM neural network. Clustered data, rather than all the information aggregated at the cluster head nodes, are selected for transfer to the base station. This reduces the battery consumption involved in managing the huge amount of data, and the network lifetime is enhanced to a great extent.
Keywords: artificial neural network, data clustering, self-organization feature map, wireless sensor network
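The SOFM-based clustering described above can be pictured with a small from-scratch SOM: node feature vectors are mapped to a grid of units, and each unit's members form one cluster. The map size, training schedule, and toy data below are assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.random((200, 3))            # toy sensor readings, 3 features per node
grid_w, grid_h = 4, 4                  # 4x4 map -> up to 16 clusters
weights = rng.random((grid_w * grid_h, 3))
coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)], float)

def train(epochs=20, lr0=0.5, sigma0=2.0):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5    # shrinking neighborhood
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)      # grid distance to BMU
            h = np.exp(-d2 / (2 * sigma ** 2))                  # neighborhood function
            weights[:] = weights + lr * h[:, None] * (x - weights)

train()
labels = [int(np.argmin(((weights - x) ** 2).sum(axis=1))) for x in data]
print("nodes per cluster:", np.bincount(labels, minlength=grid_w * grid_h))
```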
Procedia PDF Downloads 517
1420 Human Gait Recognition Using Moment with Fuzzy
Authors: Jyoti Bharti, Navneet Manjhi, M. K.Gupta, Bimi Jain
Abstract:
Reliable gait features are required to extract gait sequences from images. This paper suggests a simple method for gait identification based on moments. Moment values are extracted from different numbers of frames of grayscale and silhouette images from the CASIA database. These moment values are taken as feature values. Fuzzy logic and a nearest-neighbour classifier are used for classification, and both achieve high recognition rates.
Keywords: gait, fuzzy logic, nearest neighbour, recognition rate, moments
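A sketch of the moment features the abstract refers to, computed directly from a toy silhouette array; the paper's exact moment set (orders, normalization) is not specified, so this is a generic raw/central-moment computation.

```python
import numpy as np

def raw_moment(img, p, q):
    """Raw image moment m_pq of a 2-D grayscale or silhouette array."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return (img * x**p * y**q).sum()

def central_moments(img):
    """Central moments of order >= 2, taken about the image centroid."""
    m00 = raw_moment(img, 0, 0)
    xc, yc = raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return {f"mu{p}{q}": (img * (x - xc)**p * (y - yc)**q).sum()
            for p in range(3) for q in range(3) if p + q >= 2}

silhouette = np.zeros((8, 8))
silhouette[2:6, 3:5] = 1.0            # toy binary silhouette of a "person"
print(central_moments(silhouette))    # feature vector for the classifier
```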
Procedia PDF Downloads 758
1419 Towards Reliable Mobile Cloud Computing
Authors: Khaled Darwish, Islam El Madahh, Hoda Mohamed, Hadia El Hennawy
Abstract:
Cloud computing has been one of the fastest-growing parts of the IT industry, mainly in the context of the future of the web, where computing, communication, and storage are the main services provided to Internet users. Mobile Cloud Computing (MCC) is gaining steam; it can be used to extend cloud computing functions, services, and results to the world of future mobile applications, and it enables the delivery of a large variety of cloud applications to billions of smartphones and wearable devices. This paper describes reliability for MCC by determining the ability of a system or component to function correctly under stated conditions for a specified period of time, in order to deal with the estimation and management of high levels of lifetime engineering uncertainty and risk of failure. The assessment procedure consists of determining the Mean Time Between Failures (MTBF), Mean Time To Failure (MTTF), and availability percentages for the main components of both cloud computing and MCC structures, applied to a single-node OpenStack installation to analyze its performance with different settings governing the behavior of participants. Additionally, we present several factors that have a significant impact on the rate of change of overall cloud system reliability and should be taken into account in order to deliver highly available cloud computing services for mobile consumers.
Keywords: cloud computing, mobile cloud computing, reliability, availability, OpenStack
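The reliability quantities named above combine in the usual way; a minimal sketch follows, with illustrative MTBF/MTTR figures rather than measured OpenStack values.

```python
def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures and mean time to repair."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative figures for two hypothetical components of an MCC deployment.
for name, mtbf, mttr in [("cloud controller", 8760.0, 4.0), ("compute node", 4380.0, 8.0)]:
    a = availability(mtbf, mttr)
    print(f"{name}: availability = {a:.5f} ({a * 100:.3f}%)")

# Series system: all components must be up, so availabilities multiply.
a_system = availability(8760, 4) * availability(4380, 8)
print(f"series system availability = {a_system:.5f}")
```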
Procedia PDF Downloads 398
1418 Adolescent-Parent Relationship as the Most Important Factor in Preventing Mood Disorders in Adolescents: An Application of Artificial Intelligence to Social Studies
Authors: Elżbieta Turska
Abstract:
Introduction: One of the most difficult times in a person’s life is adolescence. The experiences in this period may shape the future life of this person to a large extent. This is the reason why many young people experience sadness, dejection, hopelessness, a sense of worthlessness, and a loss of interest in various activities and social relationships, all of which are often classified as mood disorders. As many as 15-40% of adolescents experience depressed moods, and for most of them these resolve and are not carried into adulthood. However, 5-6% of those affected by mood disorders develop the depressive syndrome, and as many as 1-3% develop full-blown clinical depression. Materials: A large questionnaire was given to 2508 students aged 13–16, and one of its parts was the Burns checklist, i.e., the standard test for identifying depressed mood. The questionnaire asked about many aspects of the students' lives; it included a total of 53 questions, most of which had subquestions. It is important to note that the data suffered from many problems, the most important of which were missing data and collinearity. Aim: In order to identify the correlates of mood disorders, we built predictive models which were then trained and validated. Our aim was not to predict which students suffer from mood disorders but rather to explore the factors influencing mood disorders. Methods: The problems with the data described above practically excluded the use of all classical statistical methods. For this reason, we used the following Artificial Intelligence (AI) methods: classification trees with surrogate variables, random forests, and xgboost. All analyses were carried out with the mlr package for the R programming language. Results: The predictive model built by the classification tree algorithm outperformed the other algorithms by a large margin. As a result, we were able to rank the variables (questions and subquestions from the questionnaire) from most to least influential as far as protection against mood disorders is concerned. Thirteen of the twenty most important variables reflect the relationships with parents. This seems to be a significant result, both from the cognitive point of view and from the practical point of view, i.e., as far as interventions to correct mood disorders are concerned.
Keywords: mood disorders, adolescents, family, artificial intelligence
Procedia PDF Downloads 101
1417 Detecting Covid-19 Fake News Using Deep Learning Technique
Authors: Anjali A. Prasad
Abstract:
Nowadays, social media plays an important role in spreading misinformation, or fake news. This study analyzes fake news related to the COVID-19 pandemic spread on social media. The paper aims at evaluating and comparing different approaches used to mitigate this issue, including popular deep learning approaches such as CNN, RNN, LSTM, and the BERT algorithm, for classification. To evaluate the models' performance, we used accuracy, precision, recall, and F1-score as the evaluation metrics. Finally, we compare which of the four algorithms shows the better result.
Keywords: BERT, CNN, LSTM, RNN
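The four evaluation metrics listed above can be computed as below; the labels are a hypothetical held-out split, not the study's data.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical labels: 1 = fake, 0 = real, for a held-out test split.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]

print(f"accuracy  = {accuracy_score(y_true, y_pred):.2f}")
print(f"precision = {precision_score(y_true, y_pred):.2f}")   # TP / (TP + FP)
print(f"recall    = {recall_score(y_true, y_pred):.2f}")      # TP / (TP + FN)
print(f"F1-score  = {f1_score(y_true, y_pred):.2f}")          # harmonic mean of the two
```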
Procedia PDF Downloads 206
1416 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware
Authors: Subham Ghosh, Banani Basu, Marami Das
Abstract:
Background: The development of compact, low-power antenna sensors has resulted in hardware restructuring, allowing for wireless ubiquitous sensing. Antenna sensors can create wireless body-area networks (WBAN) by linking various wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson’s disease (PD), a common neurodegenerative disorder, presents clinical features that can be easily misdiagnosed. As a mobility disease, it may greatly benefit from the antenna’s near-field approach, with a variety of activities that can use WBAN and IoT technologies to increase diagnostic accuracy and patient monitoring. Methodology: This study investigates the feasibility of leveraging a single patch antenna mounted (using cloth) on the dorsal wrist to differentiate actual Parkinson's disease (PD) from false PD using a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed for the classification of PD and other disorders, such as essential tremor (ET) or physiological disorders caused by anxiety or stress. The obtained data are normalized and converted into 2-D representations using the Gabor wavelet transform (GWT). Data augmentation is then used to expand the dataset size. A lightweight deep-learning (DL) model is developed to run on the GPU-enabled NVIDIA Jetson Nano platform. The DL model processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, thus doubling the dataset size. To ensure robustness, a 5-fold stratified cross-validation (5-FSCV) method was used. The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved optimal performance: an accuracy of 88.64%, an F1-score of 88.54%, and a recall of 90.46%, with a latency of 33 seconds per epoch.
Keywords: antenna, deep-learning, GPU-hardware, Parkinson’s disease
Procedia PDF Downloads 7
1415 Design of a Backlight Hyperspectral Imaging System for Enhancing Image Quality in Artificial Vision Food Packaging Online Inspections
Authors: Ferran Paulí Pla, Pere Palacín Farré, Albert Fornells Herrera, Pol Toldrà Fernández
Abstract:
Poor image acquisition is limiting the promising growth of industrial vision in food control. In recent years, the food industry has witnessed a significant increase in the implementation of automated quality control through artificial vision, a trend that continues to grow. During the packaging process, some defects may appear, compromising the proper sealing of the products and diminishing their shelf life, sanitary conditions, and overall properties. While failure to detect a defective product leads to major losses, food producers also aim to minimize over-rejection to avoid unnecessary waste. Thus, accuracy in the evaluation of the products is crucial, and, given the large production volumes, even small improvements have a significant impact. Recently, efforts have been focused on maximizing the performance of classification neural networks; nevertheless, their performance is limited by the quality of the input data. Monochrome linear backlight systems are most commonly used for online inspection of food packaging thermo-sealing zones. These simple acquisition systems fit the high cadence of the production lines imposed by market demand. Nevertheless, they provide a limited amount of data, which negatively impacts classification algorithm training. A desired situation would be one where data quality is maximized in terms of obtaining the key information needed to detect defects while maintaining a fast working pace. This work presents a backlight hyperspectral imaging system designed and implemented to replicate an industrial environment, in order to better understand the relationship between visual data quality and spectral illumination range for a variety of packed food products. Furthermore, the results led to the identification of advantageous spectral bands that significantly enhance image quality, providing clearer detection of defects.
Keywords: artificial vision, food packaging, hyperspectral imaging, image acquisition, quality control
Procedia PDF Downloads 23
1414 Assessing the Utility of Unmanned Aerial Vehicle-Borne Hyperspectral Image and Photogrammetry Derived 3D Data for Wetland Species Distribution Quick Mapping
Authors: Qiaosi Li, Frankie Kwan Kit Wong, Tung Fung
Abstract:
A lightweight unmanned aerial vehicle (UAV) loaded with novel sensors offers a low-cost approach for data acquisition in complex environments. This study established a framework for applying a UAV system to quick mapping in complex environments and assessed the performance of UAV-based hyperspectral imagery and a digital surface model (DSM) derived from photogrammetric point clouds for classifying 13 species in the wetland area of the Mai Po Inner Deep Bay Ramsar Site, Hong Kong. The study area was part of a shallow bay with flat terrain, and the major species included reedbed and four mangroves: Kandelia obovata, Aegiceras corniculatum, Acrostichum aureum, and Acanthus ilicifolius. Other species involved various graminaceous plants, arbor, shrub, and the invasive species Mikania micrantha. In particular, the invasive species climbed up to the mangrove canopy, causing damage and morphological change, which might make the species more difficult to distinguish. Hyperspectral images were acquired by a Headwall Nano sensor with a spectral range from 400 nm to 1000 nm and 0.06 m spatial resolution. A sequence of multi-view RGB images was captured with 0.02 m spatial resolution and 75% overlap. The hyperspectral images were corrected for radiometric and geometric distortion, while the high-resolution RGB images were matched to generate maximally dense point clouds. Further, a 5 cm grid digital surface model (DSM) was derived from the dense point clouds. Multiple feature reduction methods were compared to identify the most efficient method and to explore the significant spectral bands for distinguishing different species. The examined methods included stepwise discriminant analysis (DA), support vector machine (SVM), and minimum noise fraction (MNF) transformation. Subsequently, spectral subsets composed of the 20 most important bands extracted by SVM, DA, and MNF, as well as multi-source subsets adding the DSM to the 20 spectral bands, served as input to a maximum likelihood classifier (MLC) and an SVM classifier for comparing the classification results. The classification results showed that the feature reduction methods, from best to worst, were MNF transformation, DA, and SVM. The MNF transformation accuracy was even higher than the all-bands input result. The selected bands frequently lay along the green peak, red edge, and near infrared. Additionally, DA found that the chlorophyll absorption red band and the yellow band were also important for species classification. In terms of 3D data, the DSM enhanced the discriminant capacity among low plants, arbor, and mangrove. Meanwhile, the DSM largely reduced misclassification due to the shadow effect and inter-species morphological variation. With respect to classifiers, the nonparametric SVM outperformed MLC for the high-dimension and multi-source data in this study. The SVM classifier tended to produce higher overall accuracy and fewer scattered patches, although it costs more time than MLC. The best result was obtained by combining MNF components and the DSM in the SVM classifier. This study offers a precise species distribution survey solution for inaccessible wetland areas at a low cost of time and labour. In addition, the findings on the positive effect of the DSM, as well as the spectral feature identification, indicate that the utility of UAV-borne hyperspectral and photogrammetry-derived 3D data is promising for further research on wetland species, such as bio-parameter modelling and biological invasion monitoring.
Keywords: digital surface model (DSM), feature reduction, hyperspectral, photogrammetric point cloud, species mapping, unmanned aerial vehicle (UAV)
Procedia PDF Downloads 257
1413 A Virtual Grid Based Energy Efficient Data Gathering Scheme for Heterogeneous Sensor Networks
Authors: Siddhartha Chauhan, Nitin Kumar Kotania
Abstract:
Traditional Wireless Sensor Networks (WSNs) generally use static sinks to collect data from the sensor nodes via multiple forwarding. Therefore, the network suffers from problems such as long message relay times and bottlenecks, which reduce its performance. Many approaches have been proposed to prevent these problems with the help of a mobile sink that collects data from the sensor nodes, but such approaches still suffer from the buffer overflow problem due to the limited memory size of sensor nodes. This paper proposes an energy-efficient scheme for data gathering which overcomes the buffer overflow problem. The proposed scheme creates a virtual grid structure of heterogeneous nodes and is designed for sensor nodes with variable sensing rates. Every node finds its buffer overflow time, and on this basis cluster heads are elected. A controlled traversing approach is used by the proposed scheme in order to transmit data to the sink. The effectiveness of the proposed scheme is verified by simulation.
Keywords: buffer overflow problem, mobile sink, virtual grid, wireless sensor networks
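A toy sketch of the buffer-overflow-time idea described above: each node's overflow time is its buffer capacity divided by its sensing rate, and cluster heads are elected from these times. The election rule shown (earliest overflow first) is one plausible reading, not necessarily the paper's exact criterion, and all values are invented.

```python
# Each node reports buffer size [bits] and sensing rate [bits/s]; overflow time = buffer / rate.
# Values are illustrative, not from the paper's simulation setup.
nodes = {
    "n1": {"buffer": 512_000, "rate": 400},
    "n2": {"buffer": 512_000, "rate": 1200},
    "n3": {"buffer": 256_000, "rate": 300},
    "n4": {"buffer": 256_000, "rate": 900},
}

for name, n in nodes.items():
    n["overflow_s"] = n["buffer"] / n["rate"]   # seconds until the buffer fills

# One plausible election rule: the node whose buffer fills soonest becomes
# cluster head, so the mobile sink visits it before any data is lost.
head = min(nodes, key=lambda k: nodes[k]["overflow_s"])
print({k: round(v["overflow_s"], 1) for k, v in nodes.items()})
print("cluster head (earliest overflow):", head)
```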
Procedia PDF Downloads 391
1412 Process Safety Evaluation of a Nuclear Power Plant through Virtual Process Hazard Analysis Using Hazard and Operability Technique
Authors: Elysa V. Largo, Lormaine Anne A. Branzuela, Julie Marisol D. Pagalilauan, Neil C. Concibido, Monet Concepcion M. Detras
Abstract:
The energy demand in the country is increasing; thus, nuclear energy has recently been mandated to be added to the energy mix. The Philippines has the Bataan Nuclear Power Plant (BNPP), which could be a source of nuclear energy; however, it has never been operated since the completion of its construction. Thus, evaluating the safety of the BNPP is vital. This study explored the possible deviations that may occur in the operation of a nuclear power plant with a pressurized water reactor, similar to the BNPP, through a virtual process hazard analysis (PHA) using the hazard and operability (HAZOP) technique. Temperature, pressure, and flow were used as parameters. A total of 86 causes of various deviations were identified, with the primary system and the line from the reactor coolant pump to the reactor vessel being the most critical system and node, respectively. A total of 348 scenarios were determined. The critical events are radioactive leaks due to nuclear meltdown and sump overflow, which could lead to multiple worker fatalities, one or more public fatalities, and environmental remediation. Existing safeguards were identified; however, further recommendations were provided for additional and supplemental barriers to reduce the risk.
Keywords: PSM, PHA, HAZOP, nuclear power plant
Procedia PDF Downloads 154
1411 Optimal Design of Composite Cylindrical Shell Based on Nonlinear Finite Element Analysis
Authors: Haider M. Alsaeq
Abstract:
The present research attempts to figure out the best configuration of sandwich-type composite cylindrical shells, i.e., the lightest design of such shells required to sustain a certain load over a certain area. The optimization is based on elastic-plastic, geometrically nonlinear, incremental-iterative finite element analysis. The nine-node degenerated curved shell element is used, in which five degrees of freedom are specified at each nodal point, with a layered model. The geometrical nonlinearity is formulated using the well-known total Lagrangian principle. For the structural optimization problem, which is treated as a constrained nonlinear optimization, the so-called Modified Hooke and Jeeves method is employed, with the weight of the shell as the objective function under stress and geometrical constraints. It was concluded that the optimum design of a composite sandwich cylindrical shell with a rigid polyurethane foam core and steel facing occurs when the area covered by the shell becomes almost square, with a ratio of core thickness to facing thickness between 45 and 49, while the optimum height-to-length ratio varies from 0.03 to 0.08 depending on the aspect ratio of the shell and its boundary conditions.
Keywords: composite structure, cylindrical shell, optimization, non-linear analysis, finite element
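A compact sketch of the Hooke and Jeeves pattern search named above, applied to a toy objective; in the actual study the objective is the shell weight evaluated through nonlinear FE analysis with stress and geometric constraints (often handled via penalties), which is far beyond this illustration.

```python
def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
    """Hooke and Jeeves pattern search: derivative-free minimization of f."""
    def explore(base, s):
        x = list(base)
        for i in range(len(x)):            # coordinate-wise exploratory moves
            for d in (+s, -s):
                trial = list(x)
                trial[i] += d
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    x = list(x0)
    while step > tol:
        x_new = explore(x, step)
        if f(x_new) < f(x):
            # pattern moves: keep jumping along the successful direction
            while f(x_new) < f(x):
                x, x_new = x_new, explore([2 * a - b for a, b in zip(x_new, x)], step)
        else:
            step *= shrink                 # no improvement: shrink the mesh
    return x

# Toy stand-in for the constrained shell-weight objective.
w = hooke_jeeves(lambda v: (v[0] - 3) ** 2 + (v[1] + 1) ** 2, [0.0, 0.0])
print([round(c, 4) for c in w])            # converges near (3, -1)
```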
Procedia PDF Downloads 391
1410 Flood Hazard Assessment and Land Cover Dynamics of the Orai Khola Watershed, Bardiya, Nepal
Authors: Loonibha Manandhar, Rajendra Bhandari, Kumud Raj Kafle
Abstract:
Nepal’s Terai region is part of the Ganges river basin, one of the most disaster-prone areas of the world, with recurrent monsoon flooding causing millions in damage and the death and displacement of hundreds of people and households every year. The vulnerability of human settlements to natural disasters such as floods is increasing, and mapping changes in land use practices and hydro-geological parameters is essential in developing resilient communities and strong disaster management policies. The objective of this study was to develop a flood hazard zonation map of the Orai Khola watershed and map the decadal land use/land cover dynamics of the watershed. The watershed area was delineated using the SRTM DEM, and LANDSAT images were classified into five land use classes (forest, grassland, sediment and bare land, settlement area and cropland, and water body) using pixel-based semi-automated supervised maximum likelihood classification. Decadal changes in each class were then quantified using spatial modelling. Flood hazard mapping was performed by assigning weights to the factors slope, rainfall distribution, distance from the river, and land use/land cover on the basis of their estimated influence on flood hazard, and performing weighted overlay analysis to identify areas that are highly vulnerable. Forest and grassland coverage increased by 11.53 km² (3.8%) and 1.43 km² (0.47%), respectively, from 1996 to 2016. Sediment and bare land areas decreased by 12.45 km² (4.12%) from 1996 to 2016, whereas settlement and cropland areas showed a consistent increase, to 14.22 km² (4.7%). Waterbody coverage also increased, to 0.3 km² (0.09%), from 1996 to 2016. Of the total watershed area, 1.27% (3.65 km²) was categorized as a very low hazard zone, 20.94% (60.31 km²) as a low hazard zone, 37.59% (108.3 km²) as a moderate hazard zone, and 29.25% (84.27 km²) as a high hazard zone, while 31 villages comprising 10.95% (31.55 km²) were categorized as a very high hazard zone.
Keywords: flood hazard, land use/land cover, Orai river, supervised maximum likelihood classification, weighted overlay analysis
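The weighted overlay step reduces to a per-cell weighted sum of reclassified factor rasters; a minimal sketch with invented weights and 2x2 rasters (the study's actual weights and rasters are not reproduced here):

```python
import numpy as np

# Hypothetical reclassified factor rasters (1 = low hazard ... 5 = very high hazard).
slope    = np.array([[1, 2], [4, 5]])
rainfall = np.array([[3, 3], [4, 2]])
river    = np.array([[5, 4], [2, 1]])   # proximity to the river, reclassified
lulc     = np.array([[2, 2], [3, 4]])

# Illustrative influence weights (must sum to 1); not the study's weights.
weights = {"slope": 0.2, "rainfall": 0.3, "river": 0.3, "lulc": 0.2}

hazard = (weights["slope"] * slope + weights["rainfall"] * rainfall
          + weights["river"] * river + weights["lulc"] * lulc)
print(hazard)                                            # continuous hazard score per cell
print(np.digitize(hazard, [1.5, 2.5, 3.5, 4.5]) + 1)     # back to 5 hazard classes
```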
Procedia PDF Downloads 353
1409 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.), and is acknowledged to have great potential for carbon sequestration; therefore, it can be integrated into carbon emission reduction mechanisms. Particularly in sub-Saharan Africa, the constraint lies in the lack of information about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system at the country level. This study describes and quantifies “what is where?” as a first step toward the quantification of carbon stocks in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic technology as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful information to support decision-making from this large amount of data, satellite data need to be organized so as to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels used for efficient access and analysis. A data cube for Burkina Faso has been set up by the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. The approach, in its initial stage, is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018 to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing different agroforestry systems and qualitative interviews. A multi-temporal supervised image classification will be done with a random forest algorithm, with the field data used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) characteristics of the different systems (main species, management, area, etc.), and (iii) an assessment report on the Burkina Faso data cube.
Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
Procedia PDF Downloads 145
1408 Fault Diagnosis of Manufacturing Systems Using AntTreeStoch with Parameter Optimization by ACO
Authors: Ouahab Kadri, Leila Hayet Mouss
Abstract:
In this paper, we present three diagnostic modules for complex and dynamic systems. These modules are based on three ant colony algorithms: AntTreeStoch, Lumer & Faieta, and Binary ant colony. We chose these algorithms for their simplicity and their wide application range. However, we cannot use these algorithms in their basic forms, as they have several limitations. To use them in a diagnostic system, we have proposed three variants, which we tested on datasets issued from two industrial systems: a clinkering system and a pasteurization system.
Keywords: ant colony algorithms, complex and dynamic systems, diagnosis, classification, optimization
Procedia PDF Downloads 299
1407 Vertical and Horizontal Distribution Patterns of Major and Trace Elements: Surface and Subsurface Sediments of the Endorheic Lake Acigol Basin, Denizli, Turkey
Authors: M. Budakoglu, M. Karaman
Abstract:
Although Lake Acıgöl is located in an area with limited influence from urban and industrial pollution sources, there is nevertheless a need to understand all potential lithological and anthropogenic sources of priority contaminants in this closed basin. This study discusses the vertical and horizontal distribution patterns of major and trace elements in recent lake sediments to better understand their current geochemical relationship with the lithological units in the Lake Acıgöl basin. The study also provides reliable background levels for the region through detailed data on the surface lithological units. The detailed results for surface, subsurface, and shallow core sediments from these relatively unperturbed ecosystems highlight the lake's importance as a conservation area, despite the high-scale industrial salt production activity. While the P2O5/TiO2 versus MgO/CaO classification diagram indicates the magmatic and sedimentary origin of the lake sediments, the log(SiO2/Al2O3) versus log(Na2O/K2O) classification diagram expresses lithological assemblages of shale, iron-shale, wacke, and arkose. The plots of TiO2 vs. SiO2 and P2O5/TiO2 vs. MgO/CaO also support the origin of the primary magma source. The average compositions of the 20 different lithological units were used as a proxy for the geochemical background in the study area. As expected from weathered rock materials, there is a large variation in the major element content of all analyzed lake samples. The A-CN-K and A-CNK-FM ternary diagrams were used to deduce weathering trends. Surface and subsurface sediments display an intense weathering history according to these diagrams. Most of the sediment samples plot around UCC and TTG, suggesting a low to moderate weathering history for the provenance. The sediments plot in a region clearly suggesting relatively similar contents of Al2O3, CaO, Na2O, and K2O to those of the lithological samples.
Keywords: Lake Acıgöl, recent lake sediment, geochemical speciation of major and trace elements, heavy metals, Denizli, Turkey
Procedia PDF Downloads 411
1406 Signal Strength Based Multipath Routing for Mobile Ad Hoc Networks
Authors: Chothmal
Abstract:
In this paper, we present a route discovery process which uses the signal strength on a link as a parameter for its inclusion in the discovered route. The proposed signal-to-interference-and-noise-ratio (SINR) based multipath reactive routing protocol is named the SINR-MP protocol. The proposed SINR-MP routing protocol has the following two features: a) it selects routes based on the SINR of the links during the route discovery process, and therefore selects routes with a long lifetime and a low frame error rate for data transmission; and b) its route discovery process is multipath, discovering more than one SINR-based route between a given source-destination pair. The multiple routes selected by our SINR-MP protocol are node-disjoint, which increases their robustness against link failures, as the failure of one route will not affect the other. The secondary route is very useful in situations where the primary route is broken, because we can then use the secondary route without triggering a new route discovery. The network overhead caused by a route discovery process is thus avoided, which increases network performance greatly. The proposed SINR-MP routing protocol is implemented in the trial version of the network simulator QualNet.
Keywords: ad hoc networks, quality of service, video streaming, H.264/SVC, multiple routes, video traces
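A small sketch of the SINR computation and a bottleneck-link view of route quality; the link powers and the min-SINR ranking rule are illustrative assumptions rather than the protocol's exact metric.

```python
import math

def sinr_db(p_signal, p_interference, p_noise):
    """Signal-to-interference-plus-noise ratio of one link, in dB (powers in watts)."""
    return 10 * math.log10(p_signal / (p_interference + p_noise))

# A route is only as strong as its weakest link, so one natural ranking of the
# discovered node-disjoint routes is by their minimum per-link SINR.
routes = {
    "primary":   [(2e-9, 3e-11, 1e-11), (1.5e-9, 2e-11, 1e-11)],
    "secondary": [(1e-9, 5e-11, 1e-11), (2.5e-9, 1e-11, 1e-11)],
}
for name, links in routes.items():
    worst = min(sinr_db(*link) for link in links)
    print(f"{name}: bottleneck link SINR = {worst:.1f} dB")
```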
Procedia PDF Downloads 249
1405 A Comprehensive Framework for Fraud Prevention and Customer Feedback Classification in E-Commerce
Authors: Samhita Mummadi, Sree Divya Nagalli, Harshini Vemuri, Saketh Charan Nakka, Sumesh K. J.
Abstract:
One of the most significant challenges people face in today’s digital era is the alarming increase in fraudulent activities on online platforms. The appeal of online shopping, which avoids long queues in shopping malls and offers a variety of products with home delivery, has paved the way for a rapid increase in vast online shopping platforms, and with it a major increase in fraudulent activities. For instance, consider a store that orders thousands of products all at once: what is suspicious here is the massive number of items purchased, and transactions like these may turn out to be fraudulent, leading to a huge loss for the seller. Scenarios like these underscore the urgent need for machine learning approaches to combat fraud in online shopping. By leveraging robust algorithms, namely KNN, decision trees, and random forests, which are highly effective at generating accurate results, this research endeavors to discern patterns indicative of fraudulent behavior within transactional data. The primary focus is a comprehensive solution that empowers e-commerce administrators in timely fraud detection and prevention. In addition, sentiment analysis is harnessed in the model so that the e-commerce admin can attend to customers’ concerns, feedback, and comments, allowing the admin to improve the user experience. The ultimate objective of this study is to strengthen online shopping platforms against fraud and ensure a safer shopping experience. The paper reports a model accuracy of 84%. The findings and observations noted during this work lay the groundwork for future advancements in the development of more resilient and adaptive fraud detection systems, which will become crucial as technologies continue to evolve.
Keywords: behavior analysis, feature selection, fraudulent pattern recognition, imbalanced classification, transactional anomalies
Procedia PDF Downloads 27
1404 Spatial Patterns of Urban Expansion in Kuwait City between 1989 and 2001
Authors: Saad Algharib, Jay Lee
Abstract:
Urbanization is a complex phenomenon that occurs as a city develops from one form to another; in other words, it is the process by which activities in the land use/land cover change from rural to urban. Since oil exploration began, Kuwait City has been growing rapidly due to urbanization and population growth, through both natural growth and inward immigration. The main objective of this study is to detect changes in urban land use/land cover and to examine the changing spatial patterns of urban growth in and around Kuwait City between 1989 and 2001. In addition, the study evaluates the spatial patterns of the detected changes and how they relate to the spatial configuration of the city. Remote sensing and geographic information systems have recently become very useful and important tools in urban studies, because their integration allows analysts and planners to detect, monitor, and analyze urban growth in a region effectively. Moreover, both planners and users can predict future trends of growth in urban areas with remotely sensed and GIS data, because these can be effectively updated to the required precision levels. To identify the new urban areas between 1989 and 2001, the study uses satellite images of the study area and remote sensing technology to classify them. The unsupervised classification method was applied to classify the images into land use and land cover data layers. Afterwards, the GIS overlay function was applied to the classified images to detect the locations and patterns of the new urban areas that developed during the study period. GIS was also utilized to evaluate the distribution of the spatial patterns; for example, Moran’s index was applied to all data inputs to examine the urban growth distribution. Furthermore, the study assesses whether the spatial patterns and processes of these changes take place in a random fashion or with certain identifiable trends. The results indicate that during the study period the urban area expanded by 10 percentage points, from 32.4% in 1989 to 42.4% in 2001. They also reveal that the largest increase in urban area occurred between the major highways beyond the fourth ring road from the center of Kuwait City, and that the spatial distribution of urban growth occurred in a clustered manner.
Keywords: geographic information systems, remote sensing, urbanization, urban growth
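Moran's index, mentioned above, measures spatial autocorrelation; a minimal implementation on a toy contiguity matrix (not the study's data):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and spatial weight matrix w (zero diagonal)."""
    x = np.asarray(x, float)
    z = x - x.mean()                        # deviations from the mean
    num = (w * np.outer(z, z)).sum()        # weighted cross-products of neighbors
    return len(x) / w.sum() * num / (z ** 2).sum()

# Four cells on a line with rook contiguity; values are illustrative urban shares.
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
x = [0.9, 0.8, 0.2, 0.1]
print(f"Moran's I = {morans_i(x, w):.3f}")  # positive -> clustered pattern
```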
Procedia PDF Downloads 171
1403 Pulmonary Hydatid Cyst in a 13-Year-Old Child: A Case Report
Authors: Ghada Esheba, Bayan Hafiz, Ashwaq Al-Qarni, Abdulelah AlMalki, Esraa Kaheel
Abstract:
Hydatid disease is caused by the genus Echinococcus and is transmitted to humans through sheep and cattle. People who have lived in an endemic area should be suspected of having the disease. Pulmonary hydatid disease can present with respiratory manifestations, as in our case. We report the case of a 13-year-old child who presented with shortness of breath and a non-productive cough of 2 months' duration. The patient had had an attack of hemoptysis 3 months earlier, but there was no history of fever, other constitutional symptoms, or any medical illness. The patient had been in close contact with a horse. On examination, the patient was oriented and vitally stable. Both sides of the chest were moving equally, with decreased air entry on the left side. Cervical lymph node enlargement was also detected. The case was provisionally diagnosed as tuberculosis. The chest X-ray was normal, while a CT scan showed two cysts on the left side. The patient was treated surgically, with resection of both cysts without lobectomy. Broncho-alveolar lavage was performed and, together with the pleural effusion and both cysts, was sent for histopathology. The patient received the following medication: albendazole 200 mg BID orally for 30 days and cefuroxime 250 mg q12h orally for 10 days.
Keywords: Echinococcus granulosus, hydatid disease, pediatrics, pulmonary hydatid cyst
Procedia PDF Downloads 273
1402 Leakage Current Analysis of FinFET Based 7T SRAM at 32nm Technology
Authors: Chhavi Saxena
Abstract:
FinFETs can replace bulk-CMOS transistors in many different designs, and their low leakage/standby power makes them a desirable option for memory sub-systems. Memory modules are widely used in most digital and computer systems. Leakage power is very important in memory cells, since most memory applications access only one or very few memory rows at a given time. As technology scales down, the importance of leakage current and power analysis for memory design is increasing. In this paper, we explore an option for low-power interconnect synthesis at the 32nm node and beyond, using Fin-type Field-Effect Transistors (FinFETs), which are a promising substitute for bulk CMOS at the considered gate lengths. We consider a mechanism for improving FinFET efficiency, called variable supply voltage schemes. We illustrate the design and implementation of a FinFET-based 4x4 SRAM cell array by means of a one-bit 7T SRAM. The FinFET-based 7T SRAM has been designed, and analyses have been carried out for leakage current, dynamic power, and delay. To validate our design approach, the output of the FinFET SRAM array has been compared with a standard CMOS SRAM, and significant improvements are obtained in the proposed model.
Keywords: FinFET, 7T SRAM cell, leakage current, delay
Procedia PDF Downloads 455
1401 Normalized Compression Distance Based Scene Alteration Analysis of a Video
Authors: Lakshay Kharbanda, Aabhas Chauhan
Abstract:
In this paper, an application of the Normalized Compression Distance (NCD) to detect notable scene alterations occurring in videos is presented. Several research groups have been developing methods to perform image classification using NCD, a computable approximation to the Normalized Information Distance (NID), by studying the degree of similarity between images. The timeframes where significant aberrations between the frames of a video have occurred are identified by obtaining a threshold NCD value, using two compressors, LZMA and BZIP2, and defining scene alterations using pixel difference percentage metrics.
Keywords: image compression, Kolmogorov complexity, normalized compression distance, root mean square error
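NCD has a closed form in terms of compressed lengths, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), which the sketch below evaluates with the paper's two compressors; the frame bytes and the threshold interpretation are illustrative.

```python
import bz2
import lzma

def ncd(x: bytes, y: bytes, compress=bz2.compress) -> float:
    """Normalized Compression Distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy = len(compress(x)), len(compress(y))
    cxy = len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Treat raw frame bytes as the compressor input; a jump in NCD between
# consecutive frames above a chosen threshold marks a scene alteration.
frame_a = bytes(200) + b"scene one" * 50
frame_b = bytes(200) + b"scene one" * 49 + b"scene two"
frame_c = b"completely different content" * 40

print(f"similar frames : {ncd(frame_a, frame_b):.3f}")   # low NCD
print(f"scene change   : {ncd(frame_a, frame_c):.3f}")   # high NCD
print(f"with LZMA      : {ncd(frame_a, frame_c, lzma.compress):.3f}")
```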
Procedia PDF Downloads 340
1400 Recognition of Tifinagh Characters with Missing Parts Using Neural Network
Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui
Abstract:
In this paper, we present an algorithm for reconstructing Tifinagh characters from incomplete 2D scans. The algorithm is based on using the correlation between a lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction, and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.
Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN
Procedia PDF Downloads 334
1399 Classification of Sturm-Liouville Problems at Infinity
Authors: Kishor J. shinde
Abstract:
We determine the values of k and p for which the Sturm-Liouville differential operator τu = -(d^2 u)/(dx^2) + kx^p u is in the limit point case or the limit circle case at infinity. In particular, it is shown that τ is in the limit point case (i) for p = 2 and all k, (ii) for all p and k = 0, (iii) for all p and k > 0, (iv) for 0 ≤ p ≤ 2 and k < 0, and (v) for p < 0 and k < 0; τ is in the limit circle case for p > 2 and k < 0.
Keywords: limit point case, limit circle case, Sturm-Liouville, infinity
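Restating the abstract's operator and classification in LaTeX for readability:

```latex
\[
  \tau u \;=\; -\frac{d^{2}u}{dx^{2}} + k\,x^{p}\,u .
\]
% tau is in the limit point case at infinity when
\[
  \text{(i) } p = 2,\ \forall k; \quad
  \text{(ii) } \forall p,\ k = 0; \quad
  \text{(iii) } \forall p,\ k > 0; \quad
  \text{(iv) } 0 \le p \le 2,\ k < 0; \quad
  \text{(v) } p < 0,\ k < 0,
\]
% and in the limit circle case at infinity when
\[
  p > 2,\ k < 0 .
\]
```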
Procedia PDF Downloads 367
1398 Rice Area Determination Using Landsat-Based Indices and Land Surface Temperature Values
Authors: Burçin Saltık, Levent Genç
Abstract:
This study aimed to determine a route for identifying rice cultivation areas within the Thrace and Marmara regions of Turkey using remote sensing and GIS. Landsat 8 (OLI-TIRS) imagery with Path/Row number 181/32, acquired in the 2013 production season, was used. Four different seasonal images were generated using the original bands and different transformation techniques. All images were classified individually using supervised classification techniques, and Land Use Land Cover (LULC) maps were generated with 8 classes. The area (ha, %) of each class was calculated. In addition, district-based rice distribution maps were developed, and the results of these maps were compared with the actual rice cultivation area records of the Turkish Statistical Institute (TurkSTAT; TSI). Accuracy assessments were conducted, and the most accurate map was selected based on the accuracy assessment and coherency with the TSI results. Additionally, rice areas on slopes above 4° were considered misclassified pixels and were eliminated using a slope map and GIS tools. Finally, randomized rice zones were selected to obtain the maximum-minimum value ranges of the NDVI, LSWI, and LST images for each date (May, June, July, August, and September separately), to test whether they could be used for rice area determination via the raster calculator tool of ArcGIS. The most accurate classification for rice determination was obtained from the seasonal LSWI LULC map; considering the TSI data and the accuracy assessment results, misclassified pixels were eliminated from this map. According to the results, 83151.5 ha of rice areas exist within the study area. However, this result is higher than the TSI records, by an area of 12702.3 ha. The use of the maximum-minimum ranges of rice-area NDVI, LSWI, and LST was tested in the Meric district. Using the value ranges obtained from the July imagery gave the closest results to the TSI records, with a difference of only 206.4 ha. This difference is normal given the relatively low resolution of the images. Thus, employing images with higher spectral, spatial, temporal, and radiometric resolutions may provide more reliable results.
Keywords: landsat 8 (OLI-TIRS), LST, LSWI, LULC, NDVI, rice
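The two Landsat indices named above are standard band ratios; a minimal sketch (for Landsat 8 OLI, red is band 4, NIR is band 5, and SWIR1 is band 6), with toy reflectance values and an invented threshold mask standing in for the value ranges the study derived from its sampled rice zones:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def lswi(nir, swir1):
    """Land Surface Water Index, sensitive to canopy water (useful for flooded rice)."""
    return (nir - swir1) / (nir + swir1)

# Toy surface-reflectance arrays standing in for Landsat 8 bands 4, 5, and 6.
red   = np.array([[0.05, 0.12], [0.08, 0.20]])
nir   = np.array([[0.45, 0.30], [0.40, 0.22]])
swir1 = np.array([[0.15, 0.25], [0.10, 0.28]])

# A simple rice mask from per-date value ranges; the thresholds are illustrative.
mask = (ndvi(nir, red) > 0.4) & (lswi(nir, swir1) > 0.2)
print(np.round(ndvi(nir, red), 2), mask, sep="\n")
```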
Procedia PDF Downloads 228
1397 Advanced Hybrid Particle Swarm Optimization for Congestion and Power Loss Reduction in Distribution Networks with High Distributed Generation Penetration through Network Reconfiguration
Authors: C. Iraklis, G. Evmiridis, A. Iraklis
Abstract:
Renewable energy sources and distributed power generation units already play an important role in electrical power generation. A mixture of different technologies penetrating the electrical grid adds complexity to the management of distribution networks. High penetration of distributed power generation units creates node over-voltages, huge power losses, unreliable power management, reverse power flow, and congestion. This paper presents an optimization algorithm capable of reducing congestion and power losses, both described as a weighted-sum objective function. Two factors that describe congestion are proposed. An upgraded selective particle swarm optimization (SPSO) algorithm is used as a solution tool, focusing on the technique of network reconfiguration. The upgraded SPSO algorithm is obtained by adding a heuristic algorithm specializing in the reduction of power losses, with several scenarios being tested. Results show significant improvement in the minimization of losses and congestion while achieving very small calculation times.
Keywords: congestion, distribution networks, loss reduction, particle swarm optimization, smart grid
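To make the weighted-sum objective concrete, here is a generic PSO loop minimizing such a function; the surrogate objective, weights, and continuous search space are stand-ins, since the actual method is a selective PSO over discrete switch configurations evaluated by load flow.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x, w_loss=0.6, w_cong=0.4):
    """Weighted sum of two toy surrogates for power losses and congestion.
    Stand-ins only: a real evaluation would run a load flow for configuration x."""
    losses = ((x - 0.3) ** 2).sum()
    congestion = np.abs(x).max()
    return w_loss * losses + w_cong * congestion

n_particles, dim, iters = 20, 4, 100
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([objective(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([objective(p) for p in pos])
    improved = f < pbest_f                     # update personal bests
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()     # update global best

print(np.round(gbest, 3), round(pbest_f.min(), 4))
```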
Procedia PDF Downloads 445
1396 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra
Authors: Bitewulign Mekonnen
Abstract:
Context: This paper focuses on the use of near-infrared (NIR) spectroscopy to determine the glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine learning methods for predicting glucose concentration from NIR spectra, and to develop and assess a deep learning model for classifying NIR spectra. Methodology: Six machine learning regression models, namely support vector machine regression, partial least squares regression, extra trees regression, random forest regression, extreme gradient boosting, and a principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data are randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying the NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine learning techniques for estimating glucose concentrations from NIR spectra, and by exploring the use of deep learning for the classification of otherwise indistinguishable NIR spectra. Data Collection and Analysis Procedures: The NIR spectra and the corresponding reference glucose concentrations are measured in increments of 20 mg/dl. The data are randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics: correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network
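A hedged sketch of the regression comparison on synthetic stand-in spectra (the real study used measured NIR spectra, repeated random splits, and also XGBoost and a PCA-NN, which are omitted here):

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for NIR spectra: 300 samples x 100 wavelengths,
# glucose references in 20 mg/dl increments, as in the abstract.
rng = np.random.default_rng(0)
glucose = rng.choice(np.arange(0, 401, 20), size=300).astype(float)
spectra = np.outer(glucose, np.linspace(0.001, 0.01, 100)) + rng.normal(0, 0.05, (300, 100))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, glucose, test_size=0.2, random_state=0)

models = {"SVMR": SVR(), "PLSR": PLSRegression(n_components=10),
          "ETR": ExtraTreesRegressor(random_state=0),
          "RFR": RandomForestRegressor(random_state=0)}
for name, m in models.items():
    y_hat = np.ravel(m.fit(X_tr, y_tr).predict(X_te))
    r = np.corrcoef(y_te, y_hat)[0, 1]         # correlation coefficient R
    print(f"{name}: R = {r:.3f}, R^2 = {r2_score(y_te, y_hat):.3f}")
```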
Procedia PDF Downloads 94