Search results for: neural networking algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5208


3558 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which captures the variations in y that are unrelated or only weakly related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
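The core construction above — an invertible map from y to an augmented latent [zₚ, zₙ], with zₚ tied to a simple predictive model of x — can be illustrated with a minimal sketch. This is not the authors' implementation: the single coupling layer, the Gaussian model N(Wx, σ²I) for zₚ, and all dimensions are assumptions made for the example; a real flow would stack several coupling layers with permutations.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer: the first half of y is kept,
    the second half is affinely transformed conditioned on the first."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)),
        )

    def forward(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        s, t = self.net(y1).chunk(2, dim=1)
        z2 = y2 * torch.exp(s) + t          # invertible affine map
        log_det = s.sum(dim=1)              # log |det Jacobian|
        return torch.cat([y1, z2], dim=1), log_det

def ap_cde_loss(flow, y, x, W, k, sigma=1.0):
    """Negative conditional log-likelihood for the augmented latent.
    z_p (first k dims) is modelled as N(W x, sigma^2 I), i.e. the posterior
    mean of a simple linear predictive model of x; z_n is standard normal."""
    z, log_det = flow(y)
    z_p, z_n = z[:, :k], z[:, k:]
    mu = x @ W                              # linear predictor for z_p
    log_p_zp = -0.5 * (((z_p - mu) / sigma) ** 2).sum(dim=1)
    log_p_zn = -0.5 * (z_n ** 2).sum(dim=1)
    return -(log_p_zp + log_p_zn + log_det).mean()

# Hypothetical shapes: y is a 32-dim response, x a 5-dim predictor, k = 2.
flow = AffineCoupling(dim=32)
y, x = torch.randn(16, 32), torch.randn(16, 5)
W = torch.randn(5, 2)
print("loss:", ap_cde_loss(flow, y, x, W, k=2).item())
```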

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 82
3557 Battery Control with Moving Average Algorithm to Smoothen the Intermittent Output Power of Photovoltaic Solar Power Plants in Off-Grid Configuration

Authors: Muhammad Gillfran Samual, Rinaldy Dalimi, Fauzan Hanif Jufri, Budi Sudiarto, Ismi Rosyiana Fitri

Abstract:

Solar energy is increasingly recognized as an important future energy source due to its abundant availability and renewable nature. However, the intermittent nature of solar energy can cause fluctuations in the electricity produced, making it difficult to guarantee a stable and reliable electricity supply. One solution is to use batteries in a photovoltaic solar power plant system with a Moving Average control algorithm, which can help smooth and reduce fluctuations in the photovoltaic output power. The parameter that can be adjusted in the Moving Average algorithm is the window size, i.e., the width of the arithmetic average of the photovoltaic output power over time. This research evaluates the effect of changing the window size parameter in the Moving Average algorithm on the resulting smoothed photovoltaic output power and the technical effects on the batteries, i.e., power and energy usage. Based on the evaluation, it is found that increasing the window size slows down the response of the photovoltaic output power to changes in irradiation and increases the smoothing quality of the intermittent photovoltaic output power. In addition, increasing the window size reduces the maximum power received on the load side, and the amount of energy used by the battery during the power smoothing process increases, which, in turn, increases the required battery capacity.
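A minimal numerical sketch of the window-size trade-off described above is given below; the PV profile, sampling interval, and window values are hypothetical, and the battery is treated as an ideal device that absorbs the difference between the raw and smoothed output.

```python
import numpy as np

def moving_average_smoothing(pv_power, window):
    """Trailing moving average of the PV output over `window` samples."""
    kernel = np.ones(window) / window
    padded = np.concatenate([np.full(window - 1, pv_power[0]), pv_power])
    return np.convolve(padded, kernel, mode="valid")

# Hypothetical 1-minute PV profile (kW) with a cloud-induced dip.
rng = np.random.default_rng(0)
pv = 50 + 10 * np.sin(np.linspace(0, np.pi, 480)) + rng.normal(0, 3, 480)
pv[200:230] *= 0.4                        # sudden irradiance drop

for window in (5, 30, 60):                # larger window -> smoother, slower response
    smoothed = moving_average_smoothing(pv, window)
    battery_power = smoothed - pv         # battery charges (<0) or discharges (>0)
    energy_kwh = np.sum(np.abs(battery_power)) / 60.0
    print(f"window={window:3d}  battery energy throughput ~ {energy_kwh:.1f} kWh")
```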

Keywords: battery, intermittent, moving average, photovoltaic, power smoothing

Procedia PDF Downloads 40
3556 Emerging Social Media Presence of International Organisations - Challenges and Opportunities

Authors: Laura Hervai

Abstract:

One of the most significant phenomena of the 2000s was the emergence of social media sites and web 2.0, which revolutionized communication processes. Social networking platforms have fundamentally changed the social and political participation of the public, requiring organisations in the public and non-profit sector not only to adapt to these new trends but also to actively engage their audiences. Opportunities for interaction, freer expression of opinion, and the proliferation of user-generated content are major changes brought by web 2.0 technologies. Furthermore, due to the wide penetration of mobile technologies, social media sites are capable of connecting underdeveloped regions to the global flow of information. Taking advantage of these characteristics, organisations have the opportunity to engage much wider audiences, exploit new ways to raise awareness, or reach out to regions that are difficult to access. The early adopters of these new communication tools soon recognized the need to develop social media guidelines for their organisations, as well as the increased workload that they require. While ten years ago communication officers could handle their organisation’s social media presence, today it is a separate profession. International organisations face several challenges related to their social media presence. Early adopters have contributed to the development of best practices, among which the ethics of social media usage still remains problematic. Another challenge for international organisations is to adapt to country-specific social media trends while also complying with the requirements of their parent organisation. Although in the 21st century social media presence can be crucial to the successful operation of international organisations, its importance is still not taken seriously enough. The measurement of the effects and influence of social networking on the organisations’ productivity is an unsolved problem; thus, further research should focus on this matter. Research methods included primary research of major IGOs’ and NGOs’ social media presence and guidelines, along with secondary research of social media statistics and scientific articles on the topic.

Keywords: international organisations, non-profit sector, NGO, social media, social network

Procedia PDF Downloads 296
3555 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene

Authors: Jigg Pelayo, Ricardo Villar

Abstract:

With the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one such application, characterized as fast and real-time. This paper presents a robust pattern matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, thus improving the hierarchical class feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed in the object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions, and fluctuating image saturation that affect the rate of recognition of features. Furthermore, the scheme is evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The garnered results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
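The normalized cross-correlation criterion at the heart of the matching step can be computed directly. The sketch below is a generic NCC implementation in plain NumPy, not the authors' OBIA pipeline; the image and template are synthetic.

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template against every position
    of a (larger) single-band image. Scores lie in [-1, 1]."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    out = np.zeros((ih - th + 1, iw - tw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            out[r, c] = (wz * t).sum() / denom if denom > 0 else 0.0
    return out

# Hypothetical data: the template is an exact crop, so the peak NCC is 1.0.
img = np.random.rand(120, 120)
tpl = img[40:60, 70:90]
scores = ncc_map(img, tpl)
peak = np.unravel_index(np.argmax(scores), scores.shape)
print("best match at", peak, "score", round(float(scores[peak]), 3))
```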

Keywords: algorithm, LiDAR, object recognition, OBIA

Procedia PDF Downloads 235
3554 Computer Aided Analysis of Breast Based Diagnostic Problems from Mammograms Using Image Processing and Deep Learning Methods

Authors: Ali Berkan Ural

Abstract:

This paper presents the analysis, evaluation, and pre-diagnosis of early-stage breast-based diagnostic problems (breast cancer, nodules, or lumps) by a Computer Aided Diagnosis (CAD) system from mammogram radiological images. According to the statistics, the time factor is crucial to discover the disease in the patient (especially in women) as early and as quickly as possible. In the study, a new algorithm is developed using advanced image processing and deep learning methods to detect and classify the problem at an early stage with more accuracy. This system first works with image processing methods (image acquisition, noise removal, region growing segmentation, morphological operations, breast border extraction, advanced segmentation, obtaining regions of interest (ROIs), etc.) and segments the area of interest of the breast, and then analyzes these partly obtained areas for cancers/lumps in order to diagnose the disease. After segmentation, using the spectrogram images, five different deep learning based methods (specified Convolutional Neural Network (CNN) based AlexNet, ResNet50, VGG16, DenseNet, Xception) are applied to classify the breast-based problems.
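One of the pre-processing steps listed above, region growing segmentation, can be sketched as a simple intensity-tolerance flood fill. This is an illustrative stand-in, not the specific segmentation used in the paper; the seed point and tolerance are hypothetical.

```python
import numpy as np
from collections import deque

def region_growing(image, seed, tolerance=10):
    """Grow a region from `seed` (row, col): a pixel joins the region if its
    intensity differs from the seed intensity by at most `tolerance`."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(image[rr, cc]) - seed_val) <= tolerance:
                    mask[rr, cc] = True
                    queue.append((rr, cc))
    return mask

# Hypothetical mammogram patch: a bright blob on a darker background.
img = np.full((64, 64), 40, dtype=np.uint8)
img[20:35, 25:40] = 180
roi = region_growing(img, seed=(27, 30), tolerance=15)
print("segmented pixels:", int(roi.sum()))
```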

Keywords: computer aided diagnosis, breast cancer, region growing, segmentation, deep learning

Procedia PDF Downloads 79
3553 Optimization of the Dam Management to Satisfy the Irrigation Demand: A Case Study in Algeria

Authors: Merouane Boudjerda, Bénina Touaibia, Mustapha K Mihoubi

Abstract:

In Algeria, water resources play a crucial role in economic development. But over the last decades, they have become relatively limited and are gradually decreasing to the detriment of agriculture. Agricultural irrigation is the primary water-consuming sector, followed by the domestic and industrial sectors. The research presented in this paper focuses on the optimization of irrigation water demand. A Dynamic Programming-Neural Network (DPNN) method is applied to investigate reservoir optimization. The optimal operation rule is formulated to minimize the gap between water release and water irrigation demand. As a case study, the Boukerdane dam’s reservoir system in the north of Algeria has been selected to examine our proposed optimization model. The application of the DPNN method allowed increasing the satisfaction rate (SR) from 34% to 60%. In addition, the operation rule generated showed more reliable and resilient operation for the examined case study.

Keywords: water management, agricultural demand, Boukerdane dam, dynamic programming, artificial neural network

Procedia PDF Downloads 119
3552 Genetic Algorithm Methods for Determination of Overflow Coefficient of Medium Throat Length Morning Glory Spillway Equipped with Crest Vortex Breakers

Authors: Roozbeh Aghamajidi

Abstract:

Shaft spillways are circular spillways generally used for releasing unexpected floods on earth and concrete dams. There are different types of shaft spillways: stepped and smooth spillways. Stepped spillways pass greater flow discharges than smooth spillways. Therefore, awareness of the flow behavior of these spillways helps in using them better and more efficiently. Moreover, using a vortex breaker has a great effect on the flow passing through a shaft spillway. For more efficient use, the risk of the flow pressure dropping below the fluid vapor pressure, which causes cavitation, should be prevented as far as possible. In this research, the behavior of the flow over the spillway with different vortex breaker shapes on the spillway crest has been studied. The effects of flow regime changes on the spillway, changes in step dimensions, and changes in the type of discharge are also examined. Therefore, two spillway models with three different vortex breakers and three arrangements have been used to assess the hydraulic characteristics of the flow. With regard to the inlet discharge to the spillway, the pressure and flow velocity on the spillway surface have been measured at several points after each run. This kind of information helps create better design criteria for the spillway profile. To achieve these purposes, optimization plays an important role, and a genetic algorithm is utilized to study the emptying discharge. As a result, it turned out that the best type of spillway with the maximum discharge coefficient is the smooth spillway with ogee-shaped vortex breakers in an arrangement of three. Besides, it has been concluded that the genetic algorithm can be used to optimize the results.

Keywords: shaft spillway, vortex breaker, flow, genetic algorithm

Procedia PDF Downloads 364
3551 Light-Weight Network for Real-Time Pose Estimation

Authors: Jianghao Hu, Hongyu Wang

Abstract:

An effective and efficient human pose estimation algorithm is an important requirement for real-time human pose estimation on mobile devices. This paper proposes a light-weight human key point detection algorithm, Light-Weight Network for Real-Time Pose Estimation (LWPE). LWPE uses a light-weight backbone network and depthwise separable convolutions to reduce parameters and lower latency. LWPE uses the feature pyramid network (FPN) to fuse the high-resolution, semantically weak features with the low-resolution, semantically strong features. In the meantime, with multi-scale prediction, the result predicted from the low-resolution feature map is stacked onto the adjacent higher-resolution feature map to intermediately supervise the network and continuously refine the results. In the last step, the key point coordinates predicted at the highest resolution are used as the final output of the network. For the key points that are difficult to predict, LWPE adopts an online hard key point mining strategy to focus on them. The proposed algorithm achieves excellent performance on the single-person dataset selected from the AI (artificial intelligence) challenge dataset. The algorithm maintains high-precision performance even though the model only contains 3.9M parameters, and it can run at 225 frames per second (FPS) on a generic graphics processing unit (GPU).
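The parameter saving behind the light-weight design comes from replacing a standard convolution with a depthwise convolution followed by a 1×1 pointwise convolution. A minimal PyTorch sketch of that building block (not the full LWPE network) follows.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise convolution (one filter per channel) followed by a
    1x1 pointwise convolution that mixes channels."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter comparison for a 128 -> 256 channel layer with a 3x3 kernel.
standard = nn.Conv2d(128, 256, 3, padding=1, bias=False)
separable = DepthwiseSeparableConv(128, 256)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), "vs", count(separable))   # ~295k vs ~34k parameters
```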

Keywords: depthwise separable convolutions, feature pyramid network, human pose estimation, light-weight backbone

Procedia PDF Downloads 143
3550 Space Time Adaptive Algorithm in Bi-Static Passive Radar Systems for Clutter Mitigation

Authors: D. Venu, N. V. Koteswara Rao

Abstract:

Space-time adaptive processing (STAP) is an effective tool for detecting a moving target in spaceborne or airborne radar systems. Airborne passive radar systems utilize broadcast, navigation, and communication signals to perform various surveillance tasks and have attracted significant interest in the recent past; the need of the hour is therefore to have cost-effective systems compared to conventional active radar systems. Moreover, the requirement of only a small number of secondary samples for effective clutter suppression in bi-static passive radar, together with abundant illuminator resources, favors passive surveillance radar systems. This paper presents a framework for incorporating knowledge sources directly in the space-time beamformer of airborne adaptive radars. The STAP algorithm for clutter mitigation in passive bi-static radar better quantifies the reduction in sample size by amalgamating the earlier data bank with existing radar data sets. We also propose a novel method to estimate the clutter matrix and perform STAP for efficient clutter suppression based on a small sample size. Furthermore, the effectiveness of the proposed algorithm is verified using MATLAB simulations in order to validate the STAP algorithm for passive bi-static radar. In conclusion, this study highlights the importance, for various applications, of augmenting traditional active radars using cost-effective measures.
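The core STAP operation — estimating the space-time clutter covariance from secondary data and forming the adaptive weight vector — can be sketched as follows. This is a generic sample-matrix-inversion STAP sketch with diagonal loading, not the knowledge-aided estimator proposed in the paper; the array size, pulse count, and steering parameters are hypothetical.

```python
import numpy as np

def stap_weights(secondary_data, steering, loading=1e-2):
    """Sample-matrix-inversion STAP: w = R^-1 s / (s^H R^-1 s).
    secondary_data: (K, NM) target-free space-time snapshots.
    steering: (NM,) space-time steering vector of the candidate target."""
    K, NM = secondary_data.shape
    R = secondary_data.conj().T @ secondary_data / K          # covariance estimate
    R += loading * np.trace(R).real / NM * np.eye(NM)         # diagonal loading
    Rinv_s = np.linalg.solve(R, steering)
    return Rinv_s / (steering.conj() @ Rinv_s)

# Hypothetical geometry: N = 8 antenna elements, M = 16 pulses.
N, M = 8, 16
rng = np.random.default_rng(1)
snapshots = rng.normal(size=(200, N * M)) + 1j * rng.normal(size=(200, N * M))
spatial = np.exp(2j * np.pi * 0.10 * np.arange(N))            # spatial frequency 0.10
temporal = np.exp(2j * np.pi * 0.05 * np.arange(M))           # normalized Doppler 0.05
s = np.kron(temporal, spatial)                                # space-time steering vector
w = stap_weights(snapshots, s)
print("filter response in the target direction:", round(float(np.abs(w.conj() @ s)), 3))
```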

Keywords: bistatic radar, clutter, covariance matrix, passive radar, STAP

Procedia PDF Downloads 286
3549 Optimal Design of Composite Patch for a Cracked Pipe by Utilizing Genetic Algorithm and Finite Element Method

Authors: Mahdi Fakoor, Seyed Mohammad Navid Ghoreishi

Abstract:

Composite patching is a common way of reinforcing cracked pipes and cylinders. The effects of composite patch reinforcement on the fracture parameters of a cracked pipe depend on a variety of parameters, such as the number of layers and the angle, thickness, and material of each layer. Therefore, stacking sequence optimization of the composite patch becomes crucial for applications to cracked pipes. In this study, in order to obtain the optimal stacking sequence for a composite patch that has minimum weight and maximum resistance to crack propagation, a coupled Multi-Objective Genetic Algorithm (MOGA) and Finite Element Method (FEM) process is proposed. This optimization process has been carried out for longitudinal and transverse semi-elliptical cracks, and the optimal stacking sequences and Pareto fronts for each kind of crack are presented. The proposed algorithm is validated against results collected from the existing literature.

Keywords: multi objective optimization, pareto front, composite patch, cracked pipe

Procedia PDF Downloads 303
3548 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for the secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data has to be transmitted from source to relay and from relay to destination by deploying security in the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data transmitting region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating the probability using binary-based evaluation. If the transmission is secure, it resumes with the two-hop transmission of big data; otherwise, the attackers are prevented by the cooperative jamming scheme and the data is then transmitted in two-hop transmission.

Keywords: big data, two-hop transmission, physical layer wireless security, cooperative jamming, energy balance

Procedia PDF Downloads 475
3547 Prediction of Compressive Strength Using Artificial Neural Network

Authors: Vijay Pal Singh, Yogesh Chandra Kotiyal

Abstract:

Structures are a combination of various load-carrying members which safely transfer the loads from the superstructure to the foundation. At the design stage, the loading of the structure is defined and appropriate material choices are made based upon their properties, mainly related to strength. The strength of materials keeps reducing with time because of many factors, like environmental exposure and deformation caused by unpredictable external loads. Hence, to predict the strength of materials used in structures, various techniques are used. Among these techniques, Non-Destructive Techniques (NDT) are the ones that can be used to predict the strength without damaging the structure. In the present study, the compressive strength of concrete has been predicted using an Artificial Neural Network (ANN). The predicted strength was compared with the experimentally obtained actual compressive strength of concrete, and equations were developed for different models. A good correlation has been obtained between the strength predicted by these models and the experimental values. Further, the correlation has been developed using two NDT techniques for the prediction of strength by regression analysis. It was found that the percentage error in the predicted strength is reduced by using combined techniques in place of single techniques.
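A small regression sketch of the ANN step — predicting compressive strength from two NDT readings (rebound number and ultrasonic pulse velocity) — is given below. The data are synthetic placeholders, not the paper's experimental values.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical NDT readings: [rebound number, ultrasonic pulse velocity (km/s)]
rng = np.random.default_rng(42)
rebound = rng.uniform(20, 50, 200)
upv = rng.uniform(3.5, 5.0, 200)
# Synthetic "true" strength (MPa) with noise, for demonstration only.
strength = 1.2 * rebound + 8.0 * upv + rng.normal(0, 2.5, 200)

X = np.column_stack([rebound, upv])
X_tr, X_te, y_tr, y_te = train_test_split(X, strength, test_size=0.25, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
print("R^2 on held-out data:", round(r2_score(y_te, ann.predict(X_te)), 3))
```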

Keywords: rebound, ultra-sonic pulse, penetration, ANN, NDT, regression

Procedia PDF Downloads 411
3546 Robust Heart Sounds Segmentation Based on the Variation of the Phonocardiogram Curve Length

Authors: Mecheri Zeid Belmecheri, Maamar Ahfir, Izzet Kale

Abstract:

Automatic cardiac auscultation is still a subject of research aimed at establishing an objective diagnosis. Recorded heart sounds, as Phonocardiogram (PCG) signals, can be used for automatic segmentation into components that have clinical meaning. These are the first sound, S1, the second sound, S2, and the systolic and diastolic components, respectively. In this paper, an automatic method is proposed for the robust segmentation of heart sounds. This method is based on calculating an intermediate sawtooth-shaped signal from the curve length variation of the recorded PCG signal in the time domain and using its positive derivative, which is a binary signal, to train a Recurrent Neural Network (RNN). Results obtained on a large database of recorded PCGs with their simultaneously recorded ElectroCardioGrams (ECGs), from different patients in clinical settings including normal and abnormal subjects, show an average segmentation testing performance of 76% sensitivity and 94% specificity.
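The curve-length signal and its positive-derivative binarization described above can be computed directly from a PCG waveform. The sketch below uses a synthetic signal and a simple sliding window; it illustrates the feature only, not the full RNN training pipeline.

```python
import numpy as np

def curve_length_signal(pcg, window):
    """Curve length of the PCG over a sliding window: the sum of
    |x[i+1] - x[i]| within each window, which rises sharply during
    S1/S2 events and stays low during systole/diastole."""
    diffs = np.abs(np.diff(pcg, prepend=pcg[0]))
    padded = np.concatenate([np.zeros(window - 1), diffs])
    return np.convolve(padded, np.ones(window), mode="valid")

def binarize_positive_derivative(curve_len):
    """Binary signal: 1 where the curve-length signal is increasing."""
    return (np.diff(curve_len, prepend=curve_len[0]) > 0).astype(int)

# Hypothetical 1-second PCG at 2 kHz: two short bursts standing in for S1 and S2.
fs = 2000
t = np.arange(fs) / fs
pcg = 0.02 * np.random.randn(fs)
for center in (0.15, 0.55):                        # S1 and S2 positions (s)
    burst = np.abs(t - center) < 0.03
    pcg[burst] += 0.5 * np.sin(2 * np.pi * 80 * t[burst])

cl = curve_length_signal(pcg, window=100)          # 50 ms window
labels = binarize_positive_derivative(cl)          # candidate RNN training target
print("fraction of samples flagged as events:", round(float(labels.mean()), 3))
```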

Keywords: heart sounds, PCG segmentation, event detection, recurrent neural networks, PCG curve length

Procedia PDF Downloads 169
3545 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement

Authors: Pogula Rakesh, T. Kishore Kumar

Abstract:

Speech enhancement is a long-standing problem with numerous applications like teleconferencing, VoIP, hearing aids, and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying the optimal noise cancellation technique. Real-time adaptive filtering algorithms seem to be the best candidates among all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data which was prepared by adding AWGN, Babble, and Pink noise to clean speech samples at -5 dB, 0 dB, 5 dB, and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal to Noise Ratio (SNR), and SNR loss. Based on the performance evaluation, the proposed RLS algorithm was found to be a better optimal noise cancellation technique for speech signals.
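A compact RLS adaptive noise canceller of the kind described — a reference noise input filtered so as to match and subtract the noise in the primary (speech-plus-noise) channel — can be sketched as follows. The filter order, forgetting factor, and synthetic signals are hypothetical choices made for the example.

```python
import numpy as np

def rls_noise_canceller(primary, reference, order=8, lam=0.999, delta=0.01):
    """Recursive Least Squares adaptive noise canceller.
    primary   : speech + noise (desired signal for the adaptive filter)
    reference : correlated noise-only input
    Returns the enhanced signal e[n] = primary[n] - w^T x[n]."""
    w = np.zeros(order)
    P = np.eye(order) / delta               # inverse correlation matrix
    x = np.zeros(order)
    enhanced = np.zeros(len(primary))
    for n in range(len(primary)):
        x = np.roll(x, 1)
        x[0] = reference[n]
        k = P @ x / (lam + x @ P @ x)       # gain vector
        e = primary[n] - w @ x              # a priori error = enhanced sample
        w = w + k * e                       # weight update
        P = (P - np.outer(k, x @ P)) / lam  # inverse-correlation update
        enhanced[n] = e
    return enhanced

# Hypothetical test: a tone as "speech" corrupted by filtered white noise.
rng = np.random.default_rng(0)
fs = 8000
t = np.arange(fs) / fs
speech = 0.7 * np.sin(2 * np.pi * 440 * t)
noise_src = rng.normal(size=t.shape)
noise_in_primary = np.convolve(noise_src, [0.6, 0.3, 0.1], mode="same")
out = rls_noise_canceller(speech + noise_in_primary, noise_src)
print("residual noise power:", round(float(np.mean((out - speech) ** 2)), 4))
```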

Keywords: adaptive filter, adaptive noise canceller, mean squared error, noise reduction, NLMS, RLS, SNR, SNR loss

Procedia PDF Downloads 468
3544 Spectrum Allocation in Cognitive Radio Using Monarch Butterfly Optimization

Authors: Avantika Vats, Kushal Thakur

Abstract:

This paper presents the formulation, development, and utilization of Monarch Butterfly Optimization (MBO) rather than a Genetic Algorithm (GA) for channel allocation in cognitive radio. This approach offers a satisfactory way to obtain the accessible spectrum for both kinds of users, i.e., primary users (PUs) and secondary users (SUs). The proposed enhancement procedure depends on a nature-inspired metaheuristic algorithm. In MBO, all the monarch butterfly individuals are located in two distinct lands, viz. southern Canada and the northern USA (Land 1), and Mexico (Land 2). The positions of the monarch butterflies are updated in two ways. First, the offspring are generated (position updating) by the migration operator, which can be adjusted by the migration ratio. This is followed by tuning the positions of other butterflies by means of the butterfly adjusting operator. To keep the population unaltered and minimize fitness evaluations, the total number of newly produced butterflies from these two operators stays equal to the original population. The outcomes clearly display the capacity of the MBO technique to find improved objective values compared with the genetic algorithm.

Keywords: cognitive radio, channel allocation, monarch butterfly optimization, evolutionary computation

Procedia PDF Downloads 50
3543 High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks

Authors: Amal Khalifa, Nicolas Vana Santos

Abstract:

Steganography has been known for centuries as an efficient approach for covert communication. Due to its popularity and ease of access, image steganography has attracted researchers to find secure techniques for hiding information within an innocent-looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNN) to hide a secret image inside a cover image of the same size. Two CNNs are trained back-to-back to merge the Discrete Wavelet Transform (DWT) of both colored images and eventually be able to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image while maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion, which outperformed similar deep-learning-based methods.

Keywords: deep learning, steganography, image, discrete wavelet transform, fusion

Procedia PDF Downloads 69
3542 Detection and Classification of Strabismus Using Convolutional Neural Network and Spatial Image Processing

Authors: Anoop T. R., Otman Basir, Robert F. Hess, Eileen E. Birch, Brooke A. Koritala, Reed M. Jost, Becky Luu, David Stager, Ben Thompson

Abstract:

Strabismus refers to a misalignment of the eyes. Early detection and treatment of strabismus in childhood can prevent the development of permanent vision loss due to abnormal development of visual brain areas. We developed a two-stage method for strabismus detection and classification based on photographs of the face. The first stage detects the presence or absence of strabismus, and the second stage classifies the type of strabismus. The first stage comprises face detection using Haar cascade, facial landmark estimation, face alignment, aligned face landmark detection, segmentation of the eye region, and detection of strabismus using VGG 16 convolution neural networks. Face alignment transforms the face to a canonical pose to ensure consistency in subsequent analysis. Using facial landmarks, the eye region is segmented from the aligned face and fed into a VGG 16 CNN model, which has been trained to classify strabismus. The CNN determines whether strabismus is present and classifies the type of strabismus (exotropia, esotropia, and vertical deviation). If stage 1 detects strabismus, the eye region image is fed into stage 2, which starts with the estimation of pupil center coordinates using mask R-CNN deep neural networks. Then, the distance between the pupil coordinates and eye landmarks is calculated along with the angle that the pupil coordinates make with the horizontal and vertical axis. The distance and angle information is used to characterize the degree and direction of the strabismic eye misalignment. This model was tested on 100 clinically labeled images of children with (n = 50) and without (n = 50) strabismus. The True Positive Rate (TPR) and False Positive Rate (FPR) of the first stage were 94% and 6% respectively. The classification stage has produced a TPR of 94.73%, 94.44%, and 100% for esotropia, exotropia, and vertical deviations, respectively. This method also had an FPR of 5.26%, 5.55%, and 0% for esotropia, exotropia, and vertical deviation, respectively. The addition of one more feature related to the location of corneal light reflections may reduce the FPR, which was primarily due to children with pseudo-strabismus (the appearance of strabismus due to a wide nasal bridge or skin folds on the nasal side of the eyes).

Keywords: strabismus, deep neural networks, face detection, facial landmarks, face alignment, segmentation, VGG 16, mask R-CNN, pupil coordinates, angle deviation, horizontal and vertical deviation

Procedia PDF Downloads 72
3541 Optimizing Bridge Deck Construction: A Deep Neural Network Approach for Limiting Exterior Girder Rotation

Authors: Li Hui, Riyadh Hindi

Abstract:

In the United States, bridge construction often employs overhang brackets to support the deck overhang, the weight of fresh concrete, and loads from construction equipment. This approach, however, can lead to significant torsional moments on the exterior girders, potentially causing excessive girder rotation. Such rotations can result in various safety and maintenance issues, including thinning of the deck, reduced concrete cover, and cracking during service. Traditionally, these issues are addressed by installing temporary lateral bracing systems and conducting comprehensive torsional analysis through detailed finite element analysis for the construction of bridge deck overhang. However, this process is often intricate and time-intensive, with the spacing between temporary lateral bracing systems usually relying on the field engineers’ expertise. In this study, a deep neural network model is introduced to limit exterior girder rotation during bridge deck construction. The model predicts the optimal spacing between temporary bracing systems. To train this model, over 10,000 finite element models were generated in SAP2000, incorporating varying parameters such as girder dimensions, span length, and types and spacing of lateral bracing systems. The findings demonstrate that the deep neural network provides an effective and efficient alternative for limiting the exterior girder rotation for bridge deck construction. By reducing dependence on extensive finite element analyses, this approach stands out as a significant advancement in improving safety and maintenance effectiveness in the construction of bridge decks.

Keywords: bridge deck construction, exterior girder rotation, deep learning, finite element analysis

Procedia PDF Downloads 54
3540 DocPro: A Framework for Processing Semantic and Layout Information in Business Documents

Authors: Ming-Jen Huang, Chun-Fang Huang, Chiching Wei

Abstract:

With the recent advance of the deep neural network, we observe new applications of NLP (natural language processing) and CV (computer vision) powered by deep neural networks for processing business documents. However, creating a real-world document processing system needs to integrate several NLP and CV tasks, rather than treating them separately. There is a need to have a unified approach for processing documents containing textual and graphical elements with rich formats, diverse layout arrangement, and distinct semantics. In this paper, a framework that fulfills this unified approach is presented. The framework includes a representation model definition for holding the information generated by various tasks and specifications defining the coordination between these tasks. The framework is a blueprint for building a system that can process documents with rich formats, styles, and multiple types of elements. The flexible and lightweight design of the framework can help build a system for diverse business scenarios, such as contract monitoring and reviewing.

Keywords: document processing, framework, formal definition, machine learning

Procedia PDF Downloads 202
3539 Comprehensive Strategy for Healthy City from Local Practice Networking among Citizens, Industry, University and Municipality

Authors: Yuki Hara

Abstract:

Through the experience of COVID-19, healthy assets have come to be recognized as important for all people in the world. Each part of life and work needs to change in response to the wide spread of COVID-19. Furthermore, it is necessary to innovate the whole structure of a city upon the sum of these parts. This study aims at creating a comprehensive strategy for a city from a small practice of making lives healthier in collaboration with local actors. This paper employs action research as the research framework. The core practice is the 'Ken’iku Festival', run by the Ken’iku Festival Committee. The field is located in the urban-rural fringe in the northwest part of Fujisawa city, Kanagawa prefecture, Japan. The data were collected through the author's practices over three years, from observations and interviews at meetings and discussions among stakeholders; texts in municipal reports, books, and movies; and three questionnaires for customers and stakeholders at the Ken’iku Festival. These data are analysed by qualitative methods. The results show that couples in their 40s with children and couples or friends over 70 are at the heart of promoting healthy lifestyles. In contrast, 40% of the visitors at the festival are people who have no idea of or no interest in healthier actions, to whom the committee has to suggest healthy activities through more pleasing services. The committee could organize staff and local actors as the core parties involved by gradually expanding its tasks relating to local practices. This private-sector activity for health promotion covers a part of the whole-city planning of the Fujisawa municipality by bringing many people across organisations into one community. This paper concludes from local practice networking through the festival that a comprehensive strategy for a healthy city is both a practical approach easily applied by each partner and one of the holistic services.

Keywords: communal practice network, healthy cities, health & development, health promotion, with and after COVID-19

Procedia PDF Downloads 118
3538 A New Heuristic Algorithm for Maximizing Total Demands of Nodes and Number of Covered Nodes Simultaneously

Authors: Ehsan Saghehei, Mahdi Eghbali

Abstract:

The maximal covering location problem (MCLP) was originally developed to determine a set of facility locations which would maximize the total customer demand serviced by the facilities within a predetermined critical service criterion. However, in problems where the demands of the nodes differ, or where the number of nodes covered by each node is large, standard methods of solving the MCLP may ignore these differences. In this paper, a heuristic solution based on ranking the demand of each node and the number of nodes covered by each node, according to a predetermined critical value, is proposed. The output of this method maximizes the total demand of the nodes and the number of covered nodes simultaneously. Furthermore, by means of an example, the solution algorithm is described and its results are compared with the Greedy and Lagrange algorithms. Results of the algorithm on larger problem sizes, compared with other methods, are also provided. A summary and future works conclude the paper.
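The flavor of a ranking-based heuristic for the MCLP can be sketched as a greedy selection over candidate sites scored by newly covered demand, with ties broken by the number of newly covered nodes. This is a generic illustration under assumed data structures, not the authors' exact ranking rule.

```python
def greedy_mclp(coverage, demand, p):
    """Greedy heuristic for the maximal covering location problem (MCLP).
    coverage[j] : set of demand nodes covered by candidate facility j
    demand[i]   : demand of node i
    p           : number of facilities to open
    Ranks candidates by newly covered demand, ties broken by the number
    of newly covered nodes, and opens the best candidate at each step."""
    covered, chosen = set(), []
    for _ in range(p):
        best = None
        for j, nodes in coverage.items():
            if j in chosen:
                continue
            new_nodes = nodes - covered
            score = (sum(demand[i] for i in new_nodes), len(new_nodes))
            if best is None or score > best[0]:
                best = (score, j, new_nodes)
        if best is None or best[0][0] == 0:
            break
        chosen.append(best[1])
        covered |= best[2]
    return chosen, covered

# Hypothetical instance: 3 candidate sites, 6 demand nodes, open p = 2 sites.
coverage = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}}
demand = {1: 10, 2: 5, 3: 20, 4: 40, 5: 15, 6: 5}
sites, nodes = greedy_mclp(coverage, demand, p=2)
print("open:", sites, "| covered demand:", sum(demand[i] for i in nodes))
```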

Keywords: heuristic solution, maximal covering location problem, ranking, set covering

Procedia PDF Downloads 560
3537 On Dynamic Chaotic S-BOX Based Advanced Encryption Standard Algorithm for Image Encryption

Authors: Ajish Sreedharan

Abstract:

Security in the transmission and storage of digital images has its importance in today’s image communications and confidential video conferencing. Due to the increasing use of images in industrial processes, it is essential to protect confidential image data from unauthorized access. The Advanced Encryption Standard (AES) is a well-known block cipher that has several advantages in data encryption. However, it is not suitable for real-time applications. This paper presents modifications to the Advanced Encryption Standard to achieve a high level of security and better image encryption. The modifications are done by adjusting the ShiftRow transformation and using a dynamic chaotic S-box. In AES, the SubBytes, ShiftRow, and MixColumns steps by themselves would provide no security because they do not use the key. In the dynamic chaotic S-box based AES, the SubBytes step provides security because the S-box is constructed from the key. Experimental results verify and prove that the proposed modification to the image cryptosystem is highly secure from the cryptographic viewpoint. The results also prove that, compared to the original AES encryption algorithm, the modified algorithm gives better encryption results in terms of security against statistical attacks.
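The idea of deriving the substitution box from the key via a chaotic map can be sketched as follows: a logistic map seeded from a hash of the key generates a key-dependent permutation of the 256 byte values. This is an illustrative construction, not the paper's exact S-box design; the map parameter and the key-to-seed mapping are assumptions.

```python
import hashlib

def chaotic_sbox(key: bytes, r: float = 3.99):
    """Build a key-dependent 256-entry S-box from a logistic map x <- r*x*(1-x).
    The initial condition is derived from a hash of the key, so different keys
    yield different substitution tables."""
    digest = hashlib.sha256(key).digest()
    x = (int.from_bytes(digest[:8], "big") % (10 ** 6)) / 10 ** 6 or 0.5
    for _ in range(100):                  # discard transient iterations
        x = r * x * (1 - x)
    scored = []
    for i in range(256):
        x = r * x * (1 - x)
        scored.append((x, i))
    # Sorting the chaotic values yields a key-dependent permutation of 0..255.
    sbox = [i for _, i in sorted(scored)]
    inverse = [0] * 256
    for pos, val in enumerate(sbox):
        inverse[val] = pos
    return sbox, inverse

sbox, inv_sbox = chaotic_sbox(b"example-session-key")
assert sorted(sbox) == list(range(256))   # the S-box is a valid permutation
print("SubBytes(0x53) ->", hex(sbox[0x53]), "| inverse recovers", hex(inv_sbox[sbox[0x53]]))
```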

Keywords: advanced encryption standard (AES), on dynamic chaotic S-BOX, image encryption, security analysis, ShiftRow transformation

Procedia PDF Downloads 427
3536 Forecasting Electricity Spot Price with Generalized Long Memory Modeling: Wavelet and Neural Network

Authors: Souhir Ben Amor, Heni Boubaker, Lotfi Belkacem

Abstract:

The aim of this paper is to forecast electricity spot prices. First, we focus on modeling the conditional mean of the series, so we adopt a generalized fractional k-factor Gegenbauer process (k-factor GARMA). Secondly, the residuals from the k-factor GARMA model are used as a proxy for the conditional variance; these residuals are predicted using two different approaches. In the first approach, a local linear wavelet neural network model (LLWNN) is developed to predict the conditional variance using the back-propagation learning algorithm. In the second approach, the Gegenbauer generalized autoregressive conditional heteroscedasticity process (G-GARCH) is adopted, and the parameters of the k-factor GARMA-G-GARCH model are estimated using the wavelet methodology based on the discrete wavelet packet transform (DWPT) approach. The empirical results show that the k-factor GARMA-G-GARCH model outperforms the hybrid k-factor GARMA-LLWNN model and is more appropriate for forecasting.

Keywords: electricity price, k-factor GARMA, LLWNN, G-GARCH, forecasting

Procedia PDF Downloads 220
3535 DWT-SATS Based Detection of Image Region Cloning

Authors: Michael Zimba

Abstract:

A duplicated image region may be subjected to a number of attacks such as noise addition, compression, reflection, rotation, and scaling with the intention of either merely mating it to its targeted neighborhood or preventing its detection. In this paper, we present an effective and robust method of detecting duplicated regions inclusive of those affected by the various attacks. In order to reduce the dimension of the image, the proposed algorithm firstly performs discrete wavelet transform, DWT, of a suspicious image. However, unlike most existing copy move image forgery (CMIF) detection algorithms operating in the DWT domain which extract only the low frequency sub-band of the DWT of the suspicious image thereby leaving valuable information in the other three sub-bands, the proposed algorithm simultaneously extracts features from all the four sub-bands. The extracted features are not only more accurate representation of image regions but also robust to additive noise, JPEG compression, and affine transformation. Furthermore, principal component analysis-eigenvalue decomposition, PCA-EVD, is applied to reduce the dimension of the features. The extracted features are then sorted using the more computationally efficient Radix Sort algorithm. Finally, same affine transformation selection, SATS, a duplication verification method, is applied to detect duplicated regions. The proposed algorithm is not only fast but also more robust to attacks compared to the related CMIF detection algorithms. The experimental results show high detection rates.
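The feature-extraction step — taking all four DWT sub-bands of each overlapping block rather than only the approximation band — can be sketched with PyWavelets as below. This is a simplified illustration of block feature extraction and lexicographic sorting, not the full PCA-EVD, radix sort, and SATS pipeline; the block size, step, and wavelet are assumptions.

```python
import numpy as np
import pywt

def block_dwt_features(image, block=16, step=4):
    """Slide an overlapping block over the image, take its single-level 2-D DWT,
    and concatenate all four sub-bands (LL, LH, HL, HH) into one feature vector."""
    h, w = image.shape
    feats, positions = [], []
    for r in range(0, h - block + 1, step):
        for c in range(0, w - block + 1, step):
            cA, (cH, cV, cD) = pywt.dwt2(image[r:r + block, c:c + block], "haar")
            feats.append(np.concatenate([cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel()]))
            positions.append((r, c))
    return np.array(feats), positions

# Hypothetical forged image: one region copied onto another location.
img = np.random.rand(128, 128)
img[80:96, 80:96] = img[16:32, 16:32]        # duplicated block
features, pos = block_dwt_features(img)
# Sorting the feature rows lexicographically brings identical (duplicated)
# blocks next to each other, where they can then be compared pairwise.
order = np.lexsort(features.T[::-1])
print("feature matrix:", features.shape, "| first sorted block at", pos[order[0]])
```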

Keywords: affine transformation, discrete wavelet transform, radix sort, SATS

Procedia PDF Downloads 220
3534 Analysis of Different Classification Techniques Using WEKA for Diabetic Disease

Authors: Usama Ahmed

Abstract:

Data mining is the process of analyzing data in order to predict helpful information. It is a field of research which solves various types of problems. In data mining, classification is an important technique to classify different kinds of data. Diabetes is one of the most common diseases. This paper implements different classification techniques using the Waikato Environment for Knowledge Analysis (WEKA) on a diabetes dataset and finds which algorithm is most suitable. The best classification algorithm based on the diabetic data is Naïve Bayes. The accuracy of Naïve Bayes is 76.31%, and it takes 0.06 seconds to build the model.
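An equivalent experiment outside WEKA — fitting a Naive Bayes classifier to a diabetes-style dataset and measuring cross-validated accuracy — takes only a few lines. The feature matrix below is a synthetic placeholder, not the dataset used in the paper.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# Synthetic placeholder with the flavor of a diabetes dataset:
# [glucose, BMI, age, blood pressure] with a binary outcome.
rng = np.random.default_rng(7)
X = np.column_stack([
    rng.normal(120, 30, 500),    # plasma glucose
    rng.normal(32, 7, 500),      # body mass index
    rng.integers(21, 70, 500),   # age
    rng.normal(70, 12, 500),     # diastolic blood pressure
])
y = (X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 25, 500) > 210).astype(int)

scores = cross_val_score(GaussianNB(), X, y, cv=10)   # 10-fold cross-validation, as in WEKA
print("mean accuracy:", round(scores.mean(), 3))
```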

Keywords: data mining, classification, diabetes, WEKA

Procedia PDF Downloads 135
3533 Fuzzy Logic Based Ventilation for Controlling Harmful Gases in Livestock Houses

Authors: Nuri Caglayan, H. Kursat Celik

Abstract:

There are many factors that influence the health and productivity of animals in livestock production facilities, including temperature, humidity, carbon dioxide (CO2), ammonia (NH3), hydrogen sulfide (H2S), physical activity, and particulate matter. High NH3 concentrations reduce feed consumption and daily weight gain. At high concentrations, H2S causes respiratory problems, and CO2 displaces oxygen, which can cause suffocation or asphyxiation. Good air quality in livestock facilities can have an impact on the health and well-being of animals and humans. Air quality assessment basically depends on strictly given limits without taking into account specific local conditions between harmful gases and other meteorological factors. These limitations may be eliminated by using control systems based on neural networks and fuzzy logic. This paper describes a fuzzy logic based ventilation algorithm, which can calculate different fan speeds under pre-defined boundary conditions, for removing harmful gases from the production environment. In the paper, a fuzzy logic model has been developed based on Mamdani's fuzzy method. The model has been built in MATLAB software. As a result, optimum fan speeds under pre-defined boundary conditions have been presented.
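A minimal Mamdani-style inference for one input (NH3 concentration) and one output (fan speed) is sketched below, with triangular membership functions and centroid defuzzification. The membership breakpoints and rule base are illustrative assumptions, not the paper's calibrated values.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fan_speed_from_nh3(nh3_ppm):
    """One-input Mamdani controller: NH3 level -> fan speed (% of maximum)."""
    # Input membership degrees (hypothetical breakpoints, in ppm).
    low = tri(nh3_ppm, -1, 0, 15)
    medium = tri(nh3_ppm, 10, 20, 30)
    high = tri(nh3_ppm, 25, 40, 60)

    speed = np.linspace(0, 100, 501)          # output universe (% fan speed)
    slow = tri(speed, 0, 20, 40)
    moderate = tri(speed, 30, 55, 80)
    fast = tri(speed, 70, 100, 120)

    # Rule base: low NH3 -> slow fan, medium -> moderate, high -> fast.
    aggregated = np.maximum.reduce([
        np.minimum(low, slow),
        np.minimum(medium, moderate),
        np.minimum(high, fast),
    ])
    if aggregated.sum() == 0:
        return 0.0
    return float((speed * aggregated).sum() / aggregated.sum())   # centroid defuzzification

for ppm in (5, 18, 35, 50):
    print(f"NH3 = {ppm:2d} ppm -> fan speed ~ {fan_speed_from_nh3(ppm):5.1f} %")
```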

Keywords: air quality, fuzzy logic model, livestock housing, fan speed

Procedia PDF Downloads 360
3532 A New Learning Automata-Based Algorithm to the Priority-Based Target Coverage Problem in Directional Sensor Networks

Authors: Shaharuddin Salleh, Sara Marouf, Hosein Mohammadi

Abstract:

Directional sensor networks (DSNs) have recently attracted a great deal of attention due to their extensive applications in a wide range of situations. One of the most important problems associated with DSNs is covering a set of targets in a given area and, at the same time, maximizing the network lifetime. This is due to limitation in sensing angle and battery power of the directional sensors. This problem gets more complicated by the possibility that targets may have different coverage requirements. In the present study, this problem is referred to as priority-based target coverage (PTC). As sensors are often densely deployed, organizing the sensors into several cover sets and then activating these cover sets successively is a promising solution to this problem. In this paper, we propose a learning automata-based algorithm to organize the directional sensors into several cover sets in such a way that each cover set could satisfy coverage requirements of all the targets. Several experiments are conducted to evaluate the performance of the proposed algorithm. The results demonstrated that the algorithms were able to contribute to solving the problem.

Keywords: directional sensor networks, target coverage problem, cover set formation, learning automata

Procedia PDF Downloads 400
3531 Unlocking the Future of Grocery Shopping: Graph Neural Network-Based Cold Start Item Recommendations with Reverse Next Item Period Recommendation (RNPR)

Authors: Tesfaye Fenta Boka, Niu Zhendong

Abstract:

Recommender systems play a crucial role in connecting individuals with the items they require, as is particularly evident in the rapid growth of online grocery shopping platforms. These systems predominantly rely on user-centered recommendations, where items are suggested based on individual preferences, garnering considerable attention and adoption. However, our focus lies on the item-centered recommendation task within the grocery shopping context. In the reverse next item period recommendation (RNPR) task, we are presented with a specific item and challenged to identify potential users who are likely to consume it in the upcoming period. Despite the ever-expanding inventory of products on online grocery platforms, the cold start item problem persists, posing a substantial hurdle in delivering personalized and accurate recommendations for new or niche grocery items. To address this challenge, we propose a Graph Neural Network (GNN)-based approach. By capitalizing on the inherent relationships among grocery items and leveraging users' historical interactions, our model aims to provide reliable and context-aware recommendations for cold-start items. This integration of GNN technology holds the promise of enhancing recommendation accuracy and catering to users' individual preferences. This research contributes to the advancement of personalized recommendations in the online grocery shopping domain. By harnessing the potential of GNNs and exploring item-centered recommendation strategies, we aim to improve the overall shopping experience and satisfaction of users on these platforms.

Keywords: recommender systems, cold start item recommendations, online grocery shopping platforms, graph neural networks

Procedia PDF Downloads 77
3530 Improvements in OpenCV's Viola Jones Algorithm in Face Detection–Skin Detection

Authors: Jyoti Bharti, M. K. Gupta, Astha Jain

Abstract:

This paper proposes a new, improved approach for filtering the false positives among face images detected by OpenCV’s Viola-Jones algorithm. In this approach, skin detection in two colour spaces, i.e., HSV (hue, saturation, and value) and YCrCb (Y is the luma component, Cr the red difference, and Cb the blue difference), is used to filter out false positives. As a result, it is found that false detections are reduced. Our proposed method reaches an accuracy of about 98.7%. Thus, a better recognition rate is achieved.
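The skin-detection filter described — thresholding each detected face region in HSV and YCrCb and keeping only detections with enough skin pixels — can be sketched with OpenCV as below. The threshold ranges and the 30% skin-ratio cut-off are commonly used illustrative values and are assumptions, not necessarily those of the paper.

```python
import cv2
import numpy as np

def skin_ratio(face_bgr):
    """Fraction of pixels in a face crop classified as skin in BOTH the HSV
    and YCrCb colour spaces (hypothetical but widely used threshold ranges)."""
    hsv = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2HSV)
    ycrcb = cv2.cvtColor(face_bgr, cv2.COLOR_BGR2YCrCb)
    hsv_mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255]))
    ycrcb_mask = cv2.inRange(ycrcb, np.array([0, 135, 85]), np.array([255, 180, 135]))
    both = cv2.bitwise_and(hsv_mask, ycrcb_mask)
    return cv2.countNonZero(both) / float(both.size)

def filter_false_positives(image_bgr, min_skin=0.30):
    """Run the Viola-Jones detector, then discard detections whose skin ratio
    falls below `min_skin` (assumed cut-off)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(x, y, w, h) for (x, y, w, h) in detections
            if skin_ratio(image_bgr[y:y + h, x:x + w]) >= min_skin]

# Usage sketch (hypothetical file name):
# faces = filter_false_positives(cv2.imread("group_photo.jpg"))
# print(len(faces), "faces kept after skin filtering")
```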

Keywords: face detection, Viola Jones, false positives, OpenCV

Procedia PDF Downloads 393
3529 Determining the Performance of Data Mining Algorithms for Determining the Influential Factors and Prediction of Ischemic Stroke: A Comparative Study in the Southeast of Iran

Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard

Abstract:

Ischemic stroke is one of the common causes of disability and mortality; it is the fourth leading cause of death in the world, and the third according to some other sources. Only one-third of the patients with ischemic stroke fully recover, one-third end up with permanent disability, and one-third face death. Thus, the use of predictive models to predict stroke plays a vital role in reducing the complications and costs related to this disease. The aim of this study was therefore to specify the effective factors and predict ischemic stroke with the help of DM methods. The present study was a descriptive-analytic study. The population was 213 cases from among patients referring to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability were confirmed. This study used decision tree DM algorithms for modeling. Data analysis was performed using SPSS-19 and SPSS Modeler 14.2. The results of the comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, has the best performance. Moreover, based on the model created, factors such as anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most effective factors in stroke. Decision tree algorithms, especially the CHAID algorithm, have acceptable precision and predictive ability to determine the factors affecting ischemic stroke. Thus, creating predictive models through this algorithm will play a significant role in decreasing the mortality and disability caused by ischemic stroke.

Keywords: data mining, ischemic stroke, decision tree, Bayesian network

Procedia PDF Downloads 163