Search results for: AI algorithm internal audit
5203 Numerical Solution of Steady Magnetohydrodynamic Boundary Layer Flow Due to Gyrotactic Microorganism for Williamson Nanofluid over Stretched Surface in the Presence of Exponential Internal Heat Generation
Authors: M. A. Talha, M. Osman Gani, M. Ferdows
Abstract:
This paper studies two-dimensional, steady, incompressible magnetohydrodynamic (MHD) flow of a viscous Williamson nanofluid containing gyrotactic microorganisms over a stretching sheet with exponential internal heat generation. The governing equations and auxiliary conditions are reduced to a set of non-linear coupled differential equations with appropriate boundary conditions using a similarity transformation. The transformed equations are solved numerically by the spectral relaxation method. The influences of various parameters such as the Williamson parameter γ, power constant λ, Prandtl number Pr, magnetic field parameter M, Peclet number Pe, Lewis number Le, bioconvection Lewis number Lb, Brownian motion parameter Nb, thermophoresis parameter Nt, and bioconvection constant σ on the momentum, heat, mass and microorganism distributions are studied. Momentum, heat, mass and gyrotactic microorganism profiles are explored through graphs and tables. We compute the heat transfer rate, mass flux rate and the density number of motile microorganisms near the surface, and our numerical results agree well with existing calculations. The residual error of the obtained solutions is determined in order to assess the convergence rate against iteration; faster convergence is achieved when internal heat generation is absent. Increasing the magnetic parameter M decreases the momentum boundary layer thickness but increases the thermal boundary layer thickness. The bioconvection Lewis number and the bioconvection parameter have a pronounced effect on the microorganism boundary layer. Increasing the Brownian motion parameter and the Lewis number decreases the thermal boundary layer, and the magnetic field parameter and the thermophoresis parameter have an induced effect on the concentration profiles.
Keywords: convection flow, similarity, numerical analysis, spectral method, Williamson nanofluid, internal heat generation
Procedia PDF Downloads 182
5202 ACO-TS: An ACO-Based Algorithm for Optimizing Cloud Task Scheduling
Authors: Fahad Y. Al-dawish
Abstract:
A large number of organizations and individuals are currently moving to cloud computing, which many consider a significant shift in the field of computing. Cloud computing environments are distributed and parallel systems consisting of a collection of interconnected physical and virtual machines. With the increasing demand for and profitability of cloud infrastructure, diverse computing processes can be executed in the cloud, and many organizations and individuals around the world depend on it to host their applications, platforms, and infrastructure. One of the major and essential issues in this environment is allocating incoming tasks to suitable virtual machines (cloud task scheduling). Cloud task scheduling is classified as an optimization problem, and several meta-heuristic algorithms have been proposed to solve it. A good task scheduler should adapt its scheduling technique to a changing environment and to the types of incoming task sets. In this research project, a cloud task scheduling methodology based on the ant colony optimization (ACO) algorithm, called ACO-TS (Ant Colony Optimization for Task Scheduling), is proposed and compared with different scheduling algorithms (Random, First Come First Serve (FCFS), and Fastest Processor to the Largest Task First (FPLTF)). ACO is a randomized optimization search method that is used here to assign incoming tasks to available virtual machines (VMs). The main role of the proposed algorithm is to minimize the makespan of a given task set and to maximize resource utilization by balancing the load among virtual machines. The proposed scheduling algorithm was evaluated using the CloudSim toolkit framework. After analyzing and evaluating the experimental results, we find that ACO-TS performs better than the Random, FCFS, and FPLTF algorithms in terms of both makespan and resource utilization.
Keywords: cloud task scheduling, ant colony optimization (ACO), CloudSim, cloud computing
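As a rough illustration of the assignment idea behind ACO-TS (not the authors' implementation), the Python sketch below lets artificial ants assign a toy set of tasks to VMs, scores each assignment by makespan, and reinforces pheromone on the best assignment found. The task lengths, VM speeds, and ACO parameters are placeholder assumptions.

```python
import random

# A minimal ACO sketch for assigning tasks to VMs (illustrative only; the
# pheromone model, heuristic, and parameters here are assumptions, not the
# ACO-TS settings reported in the abstract).
task_lengths = [400, 950, 300, 1200, 700, 500]      # task sizes (e.g., MI)
vm_speeds = [500, 1000, 750]                        # VM speeds (e.g., MIPS)
n_tasks, n_vms = len(task_lengths), len(vm_speeds)

alpha, beta, rho, n_ants, n_iter = 1.0, 2.0, 0.1, 20, 100
pheromone = [[1.0] * n_vms for _ in range(n_tasks)]

def makespan(assignment):
    # Completion time of the busiest VM under a simple length/speed model.
    load = [0.0] * n_vms
    for t, v in enumerate(assignment):
        load[v] += task_lengths[t] / vm_speeds[v]
    return max(load)

best, best_cost = None, float("inf")
for _ in range(n_iter):
    for _ in range(n_ants):
        assignment = []
        for t in range(n_tasks):
            # Heuristic favours faster VMs; pheromone favours past good choices.
            weights = [(pheromone[t][v] ** alpha) * (vm_speeds[v] ** beta)
                       for v in range(n_vms)]
            assignment.append(random.choices(range(n_vms), weights)[0])
        cost = makespan(assignment)
        if cost < best_cost:
            best, best_cost = assignment, cost
    # Evaporate, then reinforce the best-so-far assignment.
    for t in range(n_tasks):
        for v in range(n_vms):
            pheromone[t][v] *= (1 - rho)
        pheromone[t][best[t]] += 1.0 / best_cost

print("best assignment:", best, "makespan:", round(best_cost, 3))
```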
Procedia PDF Downloads 421
5201 A Case Study for User Rating Prediction on Automobile Recommendation System Using MapReduce
Authors: Jiao Sun, Li Pan, Shijun Liu
Abstract:
Recommender systems have been widely used in industry, and plenty of work has been done in this field to help users identify items of interest. The collaborative filtering (CF) algorithm is an important technology in recommender systems. However, less work has been done on automobile recommendation systems despite the sharp increase in the number of automobiles, and computational speed is a major weakness of collaborative filtering; using the MapReduce framework to optimize the CF algorithm is therefore a viable solution to this performance problem. In this paper, we present a recommendation of users' comments on industrial automobiles with various properties, based on real-world industrial datasets of user-automobile comment data, and provide recommendations for automobile providers to help them predict users' comments on automobiles with new properties. Firstly, we address the sparseness of the matrix using a previously constructed score matrix. Secondly, we solve the data normalization problem by removing dimensional effects from the raw automobile data, since the different dimensions of automobile properties introduce considerable error into the CF calculation. Finally, we use the MapReduce framework to optimize the CF algorithm, and the computational speed is improved. The UV decomposition used in this paper is a commonly used matrix factorization technique in CF that does not require calculating the interpolation weights of neighbors, which makes it more convenient in industry.
Keywords: collaborative filtering, recommendation, data normalization, MapReduce
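For readers unfamiliar with UV decomposition, the sketch below factorizes a tiny rating matrix with plain gradient descent on the observed entries. It is a single-machine illustration only; the rank, learning rate, and regularization values are assumptions, and the MapReduce parallelization described in the abstract is not shown.

```python
import numpy as np

# Minimal UV-decomposition sketch for a user-item rating matrix.
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 0, 5, 4.]])          # 0 = unknown rating
known = R > 0

rank, lr, reg, epochs = 2, 0.01, 0.02, 2000
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], rank))
V = rng.normal(scale=0.1, size=(R.shape[1], rank))

for _ in range(epochs):
    E = (R - U @ V.T) * known          # error only on observed entries
    U += lr * (E @ V - reg * U)        # gradient step on U
    V += lr * (E.T @ U - reg * V)      # gradient step on V

print(np.round(U @ V.T, 2))            # predicted ratings, including the 0 cells
```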
Procedia PDF Downloads 217
5200 Extracts of Cola acuminata, Lupinus arboreus and Bougainvillea spectabilis as Natural Photosensitizers for Dye-Sensitized Solar Cells
Authors: M. L. Akinyemi, T. J. Abodurin, A. O. Boyo, J. A. O. Olugbuyiro
Abstract:
Organic dyes from Cola acuminata (C. acuminata), Lupinus arboreus (L. arboreus) and Bougainvillea spectabilis (B. spectabilis) leaves and their mixtures were used as sensitizers to fabricate dye-sensitized solar cells (DSSC). Photoelectric measurements of C. acuminata showed a short-circuit current (Jsc) of 0.027 mA/cm2, 0.026 mA/cm2 and 0.018 mA/cm2 with electrolytes of mercury chloride and iodine (HgCl2 + I), potassium bromide and iodine (KBr + I), and potassium chloride and iodine (KCl + I), respectively; the corresponding open-circuit voltages (Voc) were 24 mV, 25 mV and 20 mV. L. arboreus had Jsc values of 0.034 mA/cm2, 0.021 mA/cm2 and 0.013 mA/cm2, with corresponding Voc values of 28 mV, 14.2 mV and 15 mV for the three electrolytes. B. spectabilis recorded Jsc values of 0.023 mA/cm2, 0.026 mA/cm2 and 0.015 mA/cm2, with corresponding Voc values of 6.2 mV, 14.3 mV and 4.0 mV. The fill factor (FF) was 0.140 for C. acuminata, 0.3198 for L. arboreus and 0.1138 for B. spectabilis. Internal conversion efficiencies of 0.096%, 0.056% and 0.063% were recorded for the three dyes when combined with the (KBr + I) electrolyte, the C. acuminata DSSC showing the highest internal efficiency.
Keywords: dye-sensitized solar cells, organic dye, C. acuminata, L. arboreus, B. spectabilis, dye mixture
Procedia PDF Downloads 287
5199 Spatial Data Mining by Decision Trees
Authors: Sihem Oujdi, Hafida Belbachir
Abstract:
Existing data mining methods cannot be applied directly to spatial data because spatial data require the consideration of spatial specificities, such as spatial relationships. This paper focuses on classification with decision trees, one of the main data mining techniques. We propose an extension of the C4.5 algorithm for spatial data based on two different approaches: join materialization and querying the different tables on the fly. Similar work has been done on these two approaches; the first, join materialization, favors processing time at the expense of memory space, whereas the second, querying the different tables on the fly, favors memory space at the expense of processing time. The modified C4.5 algorithm requires three input tables: a target table, a neighbor table, and a spatial join index that contains the possible spatial relationships between the objects in the target table and those in the neighbor table. The proposed algorithms are applied to a spatial dataset in the accidentology domain. A comparative study of our approach with other work on classification by spatial decision trees is also detailed.
Keywords: C4.5 algorithm, decision trees, S-CART, spatial data mining
Procedia PDF Downloads 612
5198 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks
Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah
Abstract:
Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of the control messages needed to discover routes. In this paper, we utilize the positions of the network nodes to group the nodes into connected clusters, and we use cluster heads only for forwarding the route discovery control messages. Our simulations show that the new algorithm decreases the overhead dramatically without affecting the delivery rate.
Keywords: ad-hoc network, MANET, ant colony routing, position based routing
Procedia PDF Downloads 425
5197 Maximum Likelihood Estimation Methods on a Two-Parameter Rayleigh Distribution under Progressive Type-II Censoring
Authors: Daniel Fundi Murithi
Abstract:
Data from economic, social, clinical, and industrial studies are often incomplete or inaccurate due to censoring, and such data may have adverse effects if used directly in estimation. We propose the use of maximum likelihood estimation (MLE) under a progressive type-II censoring scheme to remedy this problem. In particular, maximum likelihood estimates (MLEs) of the location (µ) and scale (λ) parameters of the two-parameter Rayleigh distribution are obtained under a progressive type-II censoring scheme using the Expectation-Maximization (EM) and Newton-Raphson (NR) algorithms. These algorithms are compared because both iteratively produce satisfactory results in the estimation problem. The progressive type-II censoring scheme is used because it allows the removal of test units before the termination of the experiment. Approximate asymptotic variances and confidence intervals for the location and scale parameters are derived and constructed. The efficiency of the EM and NR algorithms is compared in terms of root mean squared error (RMSE), bias, and coverage rate. The simulation study showed that in most simulation cases, the estimates obtained using the Expectation-Maximization algorithm had smaller biases, smaller variances, narrower confidence intervals, and smaller root mean squared errors than those generated via the Newton-Raphson algorithm. Further, the analysis of a real-life data set (data from simple experimental trials) showed that the Expectation-Maximization algorithm performs better than the Newton-Raphson algorithm in all simulation cases under the progressive type-II censoring scheme.
Keywords: expectation-maximization algorithm, maximum likelihood estimation, Newton-Raphson method, two-parameter Rayleigh distribution, progressive type-II censoring
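To make the Newton-Raphson side of the comparison concrete, the sketch below applies NR to the much simpler case of a complete (uncensored) Rayleigh sample with known location, where a closed-form MLE is available for checking; the paper's two-parameter, progressively type-II censored likelihood is considerably more involved and is not reproduced here.

```python
import numpy as np

# Newton-Raphson iteration for the Rayleigh scale parameter, complete sample.
rng = np.random.default_rng(1)
x = rng.rayleigh(scale=2.0, size=200)      # simulated lifetimes
n, s2 = len(x), np.sum(x ** 2)

def score(sig):       # d logL / d sigma
    return -2 * n / sig + s2 / sig ** 3

def hessian(sig):     # d^2 logL / d sigma^2
    return 2 * n / sig ** 2 - 3 * s2 / sig ** 4

sig = x.mean()                              # crude starting value
for _ in range(50):
    step = score(sig) / hessian(sig)
    sig -= step
    if abs(step) < 1e-10:
        break

print("NR estimate :", sig)
print("closed form :", np.sqrt(s2 / (2 * n)))   # should agree
```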
Procedia PDF Downloads 163
5196 PID Sliding Mode Control with Sliding Surface Dynamics Based Continuous Control Action for Robotic Systems
Authors: Wael M. Elawady, Mohamed F. Asar, Amany M. Sarhan
Abstract:
This paper adopts a continuous sliding mode control scheme for trajectory tracking control of robot manipulators with structured and unstructured uncertain dynamics and external disturbances. In this algorithm, the equivalent control in the conventional sliding mode control is replaced by a PID control action. Moreover, the discontinuous switching control signal is replaced by a continuous proportional-integral (PI) control term such that the implementation of the proposed control algorithm does not require the prior knowledge of the bounds of unknown uncertainties and external disturbances and completely eliminates the chattering phenomenon of the conventional sliding mode control approach. The closed-loop system with the adopted control algorithm has been proved to be globally stable by using Lyapunov stability theory. Numerical simulations using the dynamical model of robot manipulators with modeling uncertainties demonstrate the superiority and effectiveness of the proposed approach in high speed trajectory tracking problems.
Keywords: PID, robot, sliding mode control, uncertainties
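A minimal sketch of the control structure described above, applied to a toy 1-DOF plant rather than a robot manipulator: the equivalent control is a PID action on the tracking error and the discontinuous switching term is replaced by a continuous PI term on the sliding variable. All gains, the plant parameters, and the disturbance are illustrative assumptions, not the paper's design or stability proof.

```python
import numpy as np

# Illustrative 1-DOF simulation of PID equivalent control plus a continuous
# PI term on the sliding variable s = de + lam*e (no sign(s) switching).
dt, T = 1e-3, 5.0
m_true, c_true = 1.2, 0.5                 # unknown "true" plant parameters
Kp, Ki, Kd = 60.0, 20.0, 15.0             # PID (equivalent-control) gains
kp_s, ki_s = 25.0, 40.0                   # continuous PI gains on s
lam = 5.0                                 # sliding-surface slope

q, dq = 0.0, 0.0
int_e, int_s = 0.0, 0.0
err_log = []
for k in range(int(T / dt)):
    t = k * dt
    qd, dqd = np.sin(t), np.cos(t)        # desired trajectory
    e, de = q - qd, dq - dqd
    s = de + lam * e
    int_e += e * dt
    int_s += s * dt
    u = -(Kp * e + Ki * int_e + Kd * de) - (kp_s * s + ki_s * int_s)
    d = 0.3 * np.sin(5 * t)               # external disturbance
    ddq = (u - c_true * dq - d) / m_true  # true plant dynamics
    dq += ddq * dt
    q += dq * dt
    err_log.append(abs(e))

print("max |e| over the last second:", max(err_log[-int(1 / dt):]))
```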
Procedia PDF Downloads 508
5195 FlexPoints: Efficient Algorithm for Detection of Electrocardiogram Characteristic Points
Authors: Daniel Bulanda, Janusz A. Starzyk, Adrian Horzyk
Abstract:
The electrocardiogram (ECG) is one of the most commonly used medical tests and is essential for the correct diagnosis and treatment of the patient. While ECG devices generate a huge amount of data, only a small part of it carries valuable medical information. To deal with this problem, many compression algorithms and filters have been developed over the past years. However, the rapid development of new machine learning techniques poses new challenges. To address this class of problems, we created the FlexPoints algorithm, which searches for characteristic points on the ECG signal and ignores all other points that do not carry relevant medical information. The conducted experiments proved that the presented algorithm can significantly reduce the number of data points representing the ECG signal without losing valuable medical information. These sparse but essential characteristic points (flex points) can be a perfect input for modern machine learning models, which work much better using flex points as input instead of raw data or data compressed by many popular algorithms.
Keywords: characteristic points, electrocardiogram, ECG, machine learning, signal compression
Procedia PDF Downloads 162
5194 Physical-Mechanical Characteristics of Monocrystalline Si1-xGex (x ≤ 0.02) Solid Solutions
Authors: I. Kurashvili, A. Sichinava, G. Bokuchava, G. Darsavelidze
Abstract:
Si-Ge solid solutions (bulk poly- and monocrystalline samples, thin films) are highly promising for application in semiconductor devices, in particular in optoelectronics and microelectronics. In this light, a comprehensive study of the structural state of the defects and of the structure-sensitive physical properties of Si-Ge solid solutions as a function of the Si and Ge content is very important. The present work deals with room-temperature investigations of the microstructure, electrophysical characteristics, microhardness, internal friction, and shear modulus of Si1-xGex (x ≤ 0.02) bulk monocrystals. The Si-Ge bulk crystals were grown by the Czochralski method along the [111] crystallographic direction. The investigated monocrystalline Si-Ge samples are characterized by p-type conductivity, a carrier concentration of 5×10¹⁴–1×10¹⁵ cm⁻³, a dislocation density of 5×10³–1×10⁴ cm⁻², and a Vickers microhardness of 900–1200 kg/mm². For samples with dimensions of 0.5×0.5×(10–15) mm³, oriented along the [111] direction, torsion oscillations at ≈1 Hz reveal a multistage change of internal friction and shear modulus in the strain amplitude interval 10⁻⁵–5×10⁻³. The critical strain amplitudes at which hysteretic changes of the inelastic characteristics and microplasticity are observed have been determined, as have the critical strain amplitude and elasticity limit values. A tendency toward a decrease of the dynamic mechanical characteristics with increasing Ge content in the Si-Ge solid solutions is shown. The observed changes are discussed from the point of view of the interaction of various dislocations with point defects and their complexes in the real structure of Si-Ge solid solutions.
Keywords: microhardness, internal friction, shear modulus, monocrystalline
Procedia PDF Downloads 352
5193 Study on NOₓ Emission Characteristics of Internal Gas Recirculation Technique
Authors: DaeHae Kim, MinJun Kwon, Sewon Kim
Abstract:
This study aims to develop an ultra-low NOₓ burner using internal recirculation of flue gas inside the combustion chamber, which utilizes the momentum of the intake fuel and air. Detailed experimental investigations are carried out to study these fluid dynamic effects on the emission characteristics of the newly developed burner in an industrial steam boiler system. The experimental parameters are the distance of the Venturi tube from the burner, the Coanda nozzle gap distance, and the air sleeve length, at various fuel/air ratios and thermal heat load conditions. The results showed that the NOₓ concentration decreases as the distance of the Venturi tube from the burner increases, while the CO concentration values at all operating conditions were negligible. In addition, increasing the Coanda nozzle gap distance decreased the NOₓ concentration. It is experimentally found that both the fuel injection recirculation and the air injection recirculation techniques are very effective in reducing NOₓ formation.
Keywords: Coanda effect, combustion, burner, low NOₓ
Procedia PDF Downloads 201
5192 Pavement Maintenance and Rehabilitation Scheduling Using Genetic Algorithm Based Multi Objective Optimization Technique
Authors: Ashwini Gowda K. S, Archana M. R, Anjaneyappa V
Abstract:
This paper presents a pavement maintenance and management system (PMMS) used to obtain optimum pavement maintenance and rehabilitation strategies and maintenance scheduling for a network using a multi-objective genetic algorithm (MOGA). The optimal pavement maintenance and rehabilitation strategy maximizes the pavement condition index of the road sections in the network while minimizing maintenance and rehabilitation cost over the planning period. In this paper, NSGA-II is applied to perform the maintenance optimization; this maintenance approach is expected to preserve and improve the existing condition of the highway network in a cost-effective way. The proposed PMMS is applied to a network whose pavement was assessed using the pavement condition index (PCI). The minimum and maximum maintenance costs for a planning period of 20 years obtained from the non-dominated solutions were found to be 5.190×10¹⁰ ₹ and 4.81×10¹⁰ ₹, respectively.
Keywords: genetic algorithm, maintenance and rehabilitation, optimization technique, pavement condition index
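The Pareto-ranking step at the core of NSGA-II can be illustrated as follows on made-up maintenance plans scored by cost and resulting PCI. This is not the paper's network model or NSGA-II configuration, only the dominance test and first-front extraction that such an optimization relies on.

```python
import numpy as np

# Toy population of maintenance plans scored by (cost, -PCI), both minimized.
rng = np.random.default_rng(0)
cost = rng.uniform(4e10, 6e10, size=30)          # plan cost
pci = rng.uniform(40, 95, size=30)               # resulting network PCI
objs = np.column_stack([cost, -pci])

def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in one.
    return np.all(a <= b) and np.any(a < b)

def non_dominated_front(objs):
    front = []
    for i in range(len(objs)):
        if not any(dominates(objs[j], objs[i]) for j in range(len(objs)) if j != i):
            front.append(i)
    return front

for i in non_dominated_front(objs):
    print(f"plan {i:2d}: cost = {cost[i]:.3e}, PCI = {pci[i]:.1f}")
```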
Procedia PDF Downloads 150
5191 Retrofitting Measures for Existing Housing Stock in Kazakhstan
Authors: S. Yessengabulov, A. Uyzbayeva
Abstract:
The residential building stock of Kazakhstan was built in the Soviet era, about 35-60 years ago, without consideration of energy efficiency measures. Currently, most of these buildings are in a run-down condition and fail to meet even minimum hygienic, sanitary and comfortable-living requirements. The paper examines the reports of recent building energy survey activities in the country and provides possible retrofitting solutions for the housing stock built before 1989, applicable to building envelopes in a cold climate. The methodology also includes two-dimensional modeling of possible practical solutions and further recommendations.
Keywords: energy audit, energy efficient buildings in Kazakhstan, retrofit, two-dimensional conduction heat transfer analysis
Procedia PDF Downloads 247
5190 Insights on Behavior of Tunisian Auditors
Authors: Dammak Saida, Mbarek Sonia
Abstract:
This paper aims to examine the impact of public interest commitment, the attitude towards independence enforcement, and organizational ethical culture on auditors' ethical behavior. It also tests the moderating effect of gender diversity on these relationships. The sample consisted of 100 Tunisian chartered accountants, and an online survey was used to collect the data; data analysis techniques were then used to test the hypotheses. The findings of this study provide practical implications for accounting professionals, regulators, and audit firms, as they help in understanding auditors' beliefs and behaviors, which implies more effective mechanisms for improving their ethical values.
Keywords: public interest, independence, organizational culture, professional behavior, Tunisian auditors
Procedia PDF Downloads 74
5189 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Authors: Feng Tao, Han Ye, Shaoyi Liao
Abstract:
City buildings collapse after earthquakes and people are buried under the ruins; search and rescue must be conducted as soon as possible to save them. Therefore, considering the complicated environment, irregular aftershocks, and the fact that rescue allows no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization based on RSSI, with its low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the impact of the environment on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the original RSSI values in order to establish, for each scene, the signal propagation model with minimum test error. The target location is then calculated, through an improved centroid algorithm, from the distances estimated with the signal propagation model. Results show that localization based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, a mixed filtering algorithm (the average of mean, median and Gaussian filtering) performs better than any single filtering algorithm, and using the signal propagation model, the minimum error of the distance between known nodes and the target node over the five scenes is about 3.06 m.
Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI
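A minimal sketch of RSSI ranging with a log-distance propagation model followed by a weighted centroid fix is shown below. The path-loss parameters and anchor layout are assumptions for illustration, not the models fitted in the paper's five scenes.

```python
import numpy as np

# Log-distance path-loss ranging plus a weighted centroid position estimate.
A, n = -45.0, 2.8            # RSSI at 1 m and path-loss exponent (assumed)

def rssi_to_distance(rssi):
    # Invert RSSI = A - 10*n*log10(d)  ->  d = 10**((A - RSSI)/(10*n))
    return 10 ** ((A - rssi) / (10 * n))

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
rssi = np.array([-71.0, -64.0, -60.0, -69.0])    # filtered RSSI from each anchor

d = rssi_to_distance(rssi)
w = 1.0 / d                                       # closer anchors weigh more
estimate = (anchors * w[:, None]).sum(axis=0) / w.sum()
print("estimated target position:", np.round(estimate, 2))
```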
Procedia PDF Downloads 300
5188 Motivation and Efficiency of Quality Management Systems Implementation: A Study of Kosovo Organizations
Authors: Naim Ismajli, Ilir Rexhepi
Abstract:
The article presents the results of a study of the motives for and efficiency of quality management system (QS) implementation in Kosovo companies. The main purpose of the study was to find out why Kosovo companies seek the implementation and certification of a QS in accordance with the requirements of the ISO 9001 series of standards and what has changed after the QS implementation. Furthermore, the results of the research were compared with similar studies performed in other European countries. The research revealed that the implementation of a QS mostly results in benefits of an intangible nature that are internal to the company. In addition, although the main reasons to start implementing a QS are expectations of external advantages, the implementation results mostly in an increase of internal benefits, such as an improvement in the definition of the responsibilities and obligations of the employees, a decrease in nonconformities, better communication among the employees, and increased efficiency.
Keywords: quality management systems, ISO 9001, total quality management, environmental management system, ISO 14000, competitiveness, efficiency
Procedia PDF Downloads 365
5187 Improvement of the Robust Proportional–Integral–Derivative (PID) Controller Parameters for Controlling the Frequency in the Intelligent Multi-Zone System in the Presence of Wind Generation Using the Seeker Optimization Algorithm
Authors: Roya Ahmadi Ahangar, Hamid Madadyari
Abstract:
The seeker optimization algorithm (SOA) is increasingly gaining popularity among researchers due to its effectiveness in solving some real-world optimization problems. This paper provides a load-frequency control method based on the SOA for removing oscillations in the power system. A three-zone power system, including a thermal zone, a hydraulic zone and a wind zone, is equipped with robust proportional-integral-derivative (PID) controllers. The simulation results indicate that load-frequency changes in the wind zone of the multi-zone system are damped in a short period of time, and the oscillation amplitude during the oscillation period is not significant. The simulations also emphasize that the PID controller designed using the seeker optimization algorithm is robust and performs better at damping oscillations than the traditional PID controller. The proposed controller's performance has been compared to that of PID controllers tuned with the Particle Swarm Optimization (PSO), Genetic Algorithm (GA) and Artificial Bee Colony (ABC) algorithms in order to show the superior capability of the proposed SOA in regulating the PID controller. The simulation results emphasize the better performance of the optimized PID controller based on the SOA compared to the PID controllers optimized with the PSO, GA and ABC algorithms.
Keywords: load-frequency control, multi zone, robust PID controller, wind generation
Procedia PDF Downloads 303
5186 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. C and C++ open-source code is now available in order to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits and developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies, while the semantic and syntactic information is retained using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and features containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when semantic and syntactic information is used as the features, but require longer execution times, as the word embedding algorithm adds some complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
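A minimal Keras sketch of the "embed tokens, feed an LSTM, classify vulnerable or not" pipeline is given below. The token IDs are random placeholders; the minimal intermediate representation and the GloVe/fastText embeddings used in the paper are not reproduced, and the layer sizes are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Placeholder token sequences standing in for tokenized C/C++ functions.
vocab_size, seq_len, embed_dim = 5000, 200, 100
x_train = np.random.randint(1, vocab_size, size=(256, seq_len))
y_train = np.random.randint(0, 2, size=(256, 1))       # 1 = vulnerable

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, embed_dim),
    tf.keras.layers.LSTM(64),                           # swap for Bidirectional(LSTM(64)) for a BiLSTM
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
print(model.predict(x_train[:3]))                       # vulnerability probabilities
```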
Procedia PDF Downloads 89
5185 Modeling Average Paths Traveled by Ferry Vessels Using AIS Data
Authors: Devin Simmons
Abstract:
At the USDOT’s Bureau of Transportation Statistics, a biannual census of ferry operators in the U.S. is conducted, with results such as route mileage used to determine federal funding levels for operators. AIS data allows for the possibility of using GIS software and geographical methods to confirm operator-reported mileage for individual ferry routes. As part of the USDOT’s work on the ferry census, an algorithm was developed that uses AIS data for ferry vessels in conjunction with known ferry terminal locations to model the average route travelled for use as both a cartographic product and confirmation of operator-reported mileage. AIS data from each vessel is first analyzed to determine individual journeys based on the vessel’s velocity, and changes in velocity over time. These trips are then converted to geographic linestring objects. Using the terminal locations, the algorithm then determines whether the trip represented a known ferry route. Given a large enough dataset, routes will be represented by multiple trip linestrings, which are then filtered by DBSCAN spatial clustering to remove outliers. Finally, these remaining trips are ready to be averaged into one route. The algorithm interpolates the point on each trip linestring that represents the start point. From these start points, a centroid is calculated, and the first point of the average route is determined. Each trip is interpolated again to find the point that represents one percent of the journey’s completion, and the centroid of those points is used as the next point in the average route, and so on until 100 points have been calculated. Routes created using this algorithm have shown demonstrable improvement over previous methods, which included the implementation of a LOESS model. Additionally, the algorithm greatly reduces the amount of manual digitizing needed to visualize ferry activity.
Keywords: ferry vessels, transportation, modeling, AIS data
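The averaging step can be sketched with shapely and scikit-learn as follows: each trip is resampled at fractional positions along its length, outlier trips are removed with DBSCAN, and the survivors are averaged point by point. The toy trips and DBSCAN parameters are illustrative assumptions, not the census data or tuning used in the actual workflow.

```python
import numpy as np
from shapely.geometry import LineString
from sklearn.cluster import DBSCAN

# Each ferry trip is a LineString between the same two terminals.
trips = [
    LineString([(0, 0), (5, 0.1), (10, 0)]),
    LineString([(0, 0.2), (5, -0.1), (10, 0.1)]),
    LineString([(0, -0.1), (5, 0.0), (10, -0.2)]),
    LineString([(0, 0), (5, 4.0), (10, 0)]),          # outlier trip
]
n_pts = 101                                            # 0%, 1%, ..., 100%

def resample(line, n=n_pts):
    pts = [line.interpolate(f, normalized=True) for f in np.linspace(0, 1, n)]
    return np.array([(p.x, p.y) for p in pts])

samples = np.array([resample(t) for t in trips])       # (trips, n_pts, 2)

# Cluster trips on their flattened coordinates and keep the largest cluster.
labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(samples.reshape(len(trips), -1))
keep = labels == np.bincount(labels[labels >= 0]).argmax()

average_route = LineString(samples[keep].mean(axis=0)) # centroid at each percentile
print("kept trips:", keep.tolist())
print("average route length:", round(average_route.length, 2))
```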
Procedia PDF Downloads 176
5184 Adaptation of Hough Transform Algorithm for Text Document Skew Angle Detection
Authors: Kayode A. Olaniyi, Olabanji F. Omotoye, Adeola A. Ogunleye
Abstract:
Skew detection and correction form an important part of digital document analysis, because uncompensated skew can deteriorate document features and complicate further document image processing steps. Efficient text document analysis and digitization can rarely be achieved when a document is skewed even at a small angle, so once a document has been digitized through the scanning system and binarization has been achieved, skew correction is required before further image analysis. Research effort has been put into this area and algorithms have been developed to eliminate document skew. Skew angle correction algorithms can be compared on performance criteria, the most important being the accuracy of skew angle detection, the range of detectable skew angles, the processing speed, the computational complexity and, consequently, the memory space used. The standard Hough transform has successfully been applied to text document skew angle estimation. However, its accuracy depends largely on how fine the angular step size is, so higher accuracy consumes more time and memory space, especially when the number of pixels is considerably large; whenever the Hough transform is used there is a trade-off between accuracy and speed, and a more efficient solution that optimizes space as well as time is needed. In this paper, an improved Hough transform (HT) technique that optimizes space as well as time to robustly detect document skew is presented. The modified Hough transform algorithm resolves the contradiction between memory space, running time and accuracy. Our algorithm starts by estimating the angle to zero decimal places using the standard Hough transform, which achieves minimal running time and space but lacks accuracy; to increase accuracy, if the angle estimated by the basic Hough algorithm is x degrees, the basic algorithm is run again in the neighbourhood of x degrees with an accuracy of one decimal place, and the same process is iterated until the desired level of accuracy is achieved. The skew estimation and correction algorithm for text images is implemented using MATLAB. The memory space and processing time are also tabulated under the assumption that the skew angle lies between 0° and 45°. The simulation results, demonstrated in MATLAB, show the high performance of our algorithm, with less computational time and memory space used in detecting document skew for a variety of documents with different levels of complexity.
Keywords: Hough transform, skew detection, skew angle, skew correction, text document
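A NumPy re-implementation of the coarse-to-fine idea (not the paper's MATLAB code or exact scoring) is sketched below: a synthetic page with a known skew is scanned in 1° steps, and the estimate is then refined to 0.1° steps around the coarse maximum.

```python
import numpy as np

# Coarse-to-fine Hough-style skew estimation on a synthetic "document".
def synthetic_page(skew_deg, n=400):
    img = np.zeros((n, n), dtype=bool)
    img[::20, :] = True                       # horizontal "text lines"
    ys, xs = np.nonzero(img)
    t = np.deg2rad(skew_deg)
    xr = (xs - n / 2) * np.cos(t) - (ys - n / 2) * np.sin(t) + n / 2
    yr = (xs - n / 2) * np.sin(t) + (ys - n / 2) * np.cos(t) + n / 2
    keep = (xr >= 0) & (xr < n) & (yr >= 0) & (yr < n)
    out = np.zeros_like(img)
    out[yr[keep].astype(int), xr[keep].astype(int)] = True
    return out

def hough_score(points, theta_deg):
    # rho is constant along a line inclined at theta to the x-axis, so text
    # lines at the true skew angle collapse into a few sharp histogram bins.
    t = np.deg2rad(theta_deg)
    rho = points[:, 1] * np.sin(t) - points[:, 0] * np.cos(t)
    hist, _ = np.histogram(rho, bins=400)
    return hist.max()

def estimate_skew(img, lo=-45.0, hi=45.0, passes=2):
    pts = np.argwhere(img)
    step = 1.0
    for _ in range(passes):
        angles = np.arange(lo, hi + step, step)
        best = angles[np.argmax([hough_score(pts, a) for a in angles])]
        lo, hi, step = best - step, best + step, step / 10  # refine around best
    return best

page = synthetic_page(skew_deg=7.3)
print("estimated skew:", round(estimate_skew(page), 1), "degrees (true 7.3)")
```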
Procedia PDF Downloads 159
5183 Optimal Design of Multimachine Power System Stabilizers Using Improved Multi-Objective Particle Swarm Optimization Algorithm
Authors: Badr M. Alshammari, T. Guesmi
Abstract:
In this paper, the concept of a non-dominated sorting multi-objective particle swarm optimization with local search (NSPSO-LS) is presented for the optimal design of multimachine power system stabilizers (PSSs). The controller design is formulated as an optimization problem in order to shift the system electromechanical modes into a pre-specified region of the s-plane. A composite set of objective functions comprising the damping factor and the damping ratio of the undamped and lightly damped electromechanical modes is considered. The performance of the proposed optimization algorithm is verified on the 3-machine 9-bus system. Simulation results based on eigenvalue analysis and nonlinear time-domain simulation show the potential and superiority of the NSPSO-LS algorithm in tuning PSSs over a wide range of loading conditions and large disturbances compared to the classic PSO technique and genetic algorithms.
Keywords: multi-objective optimization, particle swarm optimization, power system stabilizer, low frequency oscillations
Procedia PDF Downloads 431
5182 A Protein-Wave Alignment Tool for Frequency Related Homologies Identification in Polypeptide Sequences
Authors: Victor Prevost, Solene Landerneau, Michel Duhamel, Joel Sternheimer, Olivier Gallet, Pedro Ferrandiz, Marwa Mokni
Abstract:
The search for homologous proteins is one of the ongoing challenges in biology and bioinformatics. Traditionally, a pair of proteins is considered homologous when they originate from the same ancestral protein; in such a case their sequences share similarities, and considerable scientific effort is spent investigating this question. On this basis, we propose the Protein-Wave Alignment Tool ("P-WAT"), developed within the framework of the France Relance 2030 plan. Our work takes into consideration the mass-related wave aspect of protein biosynthesis by associating a specific frequency with each amino acid according to its mass; amino acids are then regrouped within their mass category. This way, our algorithm produces specific alignments in addition to those obtained with a common amino acid coding system. For this purpose, we developed the original "P-WAT" algorithm, able to address large protein databases with different attributes such as species, protein names, etc., which allow us to align user requests with a set of specific protein sequences. The primary intent of this algorithm is to achieve efficient alignments, in this specific conceptual frame, by minimizing execution costs and information loss. Our algorithm identifies sequence similarities by searching for matches of sub-sequences of different sizes, referred to as primers. It relies on Boolean operations on a dot-plot matrix to identify primer amino acids common to both proteins that are likely to be part of a significant alignment of peptides. From those primers, dynamic-programming-like traceback operations generate alignments and alignment scores based on an adjusted PAM250 matrix.
Keywords: protein, alignment, homologous, Genodic
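Purely as an illustration of a Boolean dot-plot primer search, the sketch below compares two short sequences through a coarse mass-class encoding and reports diagonal runs of at least k matches. The residue grouping is invented for the example and is not the mass categorization used by P-WAT, nor does the sketch include the PAM250-based traceback.

```python
import numpy as np

# Hypothetical grouping of residues into mass classes (illustration only).
mass_class = {aa: 0 for aa in "GAS"}          # small residues
mass_class.update({aa: 1 for aa in "PVTC"})   # medium
mass_class.update({aa: 2 for aa in "LINDQKEM"})
mass_class.update({aa: 3 for aa in "HFRYW"})  # large / aromatic

def primers(seq_a, seq_b, k=4):
    a = np.array([mass_class[c] for c in seq_a])
    b = np.array([mass_class[c] for c in seq_b])
    dot = a[:, None] == b[None, :]            # Boolean dot-plot matrix
    hits = []
    for d in range(-len(a) + 1, len(b)):      # scan every diagonal
        diag = np.diagonal(dot, offset=d)
        run = 0
        for j, m in enumerate(diag):
            run = run + 1 if m else 0
            if run == k:                      # start of a k-long match
                i0 = (j - k + 1) - min(d, 0)  # row index of the run start
                hits.append((i0, i0 + d, k))
    return hits

print(primers("MKTAYIAKQR", "MKSAYLAKQR"))    # (row, col, length) candidates
```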
Procedia PDF Downloads 113
5181 An AI-Based Dynamical Resource Allocation Calculation Algorithm for Unmanned Aerial Vehicle
Authors: Zhou Luchen, Wu Yubing, Burra Venkata Durga Kumar
Abstract:
As networks become larger and more complex, the density of user devices is also increasing. Unmanned Aerial Vehicle (UAV) networks are able to collect and transfer data in an efficient way by using software-defined networking (SDN) technology. This paper proposes a three-layer, distributed and dynamic cluster architecture to manage UAVs using an AI-based resource allocation calculation algorithm to address the network overloading problem. By separating the services of each UAV, the UAV hierarchical cluster system performs the main function of reducing the network load and transferring user requests, with three sub-tasks: data collection, communication channel organization, and data relaying. In each cluster, a head node and a vice head node UAV are selected considering the Central Processing Unit (CPU), operational (RAM) and permanent (ROM) memory of the devices, battery charge, and capacity; the vice head node acts as a backup that stores all the data held by the head node. The k-means clustering algorithm is used to detect high-load regions and form the UAV layered clusters. The whole process of detecting high-load areas, forming and selecting UAV clusters, and moving the selected UAV cluster to the corresponding area is proposed as the offloading traffic algorithm.
Keywords: k-means, resource allocation, SDN, UAV network, unmanned aerial vehicles
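The high-load-region detection step can be sketched with scikit-learn's k-means as follows, clustering device positions weighted by their traffic demand. The device layout, demand values, and number of clusters are illustrative assumptions, not the paper's UAV network scenario.

```python
import numpy as np
from sklearn.cluster import KMeans

# Cluster user devices by position, weighted by per-device load, and treat
# the cluster centres as candidate positions for a UAV cluster.
rng = np.random.default_rng(42)
positions = np.vstack([
    rng.normal([2, 2], 0.5, size=(60, 2)),    # a dense hotspot
    rng.normal([8, 7], 0.7, size=(40, 2)),    # a second hotspot
    rng.uniform(0, 10, size=(30, 2)),         # background devices
])
demand = rng.uniform(0.1, 1.0, size=len(positions))   # per-device load

k = 3
km = KMeans(n_clusters=k, n_init=10, random_state=0)
labels = km.fit_predict(positions, sample_weight=demand)

for c in range(k):
    load = demand[labels == c].sum()
    print(f"region {c}: centre {np.round(km.cluster_centers_[c], 2)}, total load {load:.1f}")
```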
Procedia PDF Downloads 111
5180 Subband Coding and Glottal Closure Instant (GCI) Using SEDREAMS Algorithm
Authors: Harisudha Kuresan, Dhanalakshmi Samiappan, T. Rama Rao
Abstract:
In modern telecommunication applications, locating Glottal Closure Instants (GCIs) is important, and they are evaluated directly from the speech waveform. Here, we study GCI detection using the Speech Event Detection using Residual Excitation and the Mean-Based Signal (SEDREAMS) algorithm. Speech coding uses parameter estimation based on audio signal processing techniques to model the speech signal, combined with generic data compression algorithms, to represent the resulting model in a compact bit stream. This paper proposes a sub-band coder (SBC), which is a type of transform coding, and evaluates its performance for GCI detection using SEDREAMS. In an SBC, the speech signal is divided into two or more frequency bands and each sub-band signal is coded individually; after processing, the sub-bands are recombined to form the output signal, whose bandwidth covers the whole frequency spectrum. The signal is decomposed into low- and high-frequency components, and decimation and interpolation are performed in the frequency domain. The proposed structure significantly reduces error, and precise locations of Glottal Closure Instants (GCIs) are found using the SEDREAMS algorithm.
Keywords: SEDREAMS, GCI, SBC, GOI
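A minimal two-band sub-band coding sketch is given below: the signal is split with low- and high-pass filters, decimated by two, then upsampled and recombined. The Butterworth filters and cut-off are illustrative choices, not the analysis/synthesis filter bank used in the paper, so the reconstruction is only approximate.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 2500 * t)

b_lo, a_lo = butter(6, 0.5)                   # low-pass at fs/4
b_hi, a_hi = butter(6, 0.5, btype="highpass") # high-pass at fs/4

low = filtfilt(b_lo, a_lo, x)[::2]            # analysis: filter + decimate
high = filtfilt(b_hi, a_hi, x)[::2]

def upsample(band):
    up = np.zeros(2 * len(band))
    up[::2] = band                            # zero-stuffing
    return up

# Synthesis: interpolate each band back to the original rate and sum.
rec = filtfilt(b_lo, a_lo, upsample(low)) * 2 + filtfilt(b_hi, a_hi, upsample(high)) * 2

print("reconstruction RMS error:", np.sqrt(np.mean((x - rec) ** 2)))
```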
Procedia PDF Downloads 356
5179 Acid Injection PTFE Internal Lining in Raw Water System
Authors: Fikri Suwaileh
Abstract:
In the reverse osmosis (RO) water treatment plant, operations suffered from several leaks on the acid injection point spool and the downstream spools, due to insufficient injection monitoring and a coating failure that led to pinholes. The paper covers the background of the leaks at the acid injection point, the process in the RO plant, the material and coating used in the existing spools, the impact of these repeated leaks, and the type of damage mechanism that occurred in the system due to the manner of acid injection and the heat in the spools, which led to coating failure, leaks and water release. This paper also presents the analysis, the short- and long-term recommendations, and the utilization of a PTFE (Teflon) internal lining to stop the leaks. Sharing this case study highlights the importance of considering all the factors that can lead to leaks at acid injection points, along with the importance of selecting the appropriate internal lining material to protect the full system.
Keywords: corrosion, coating, raw water, lining
Procedia PDF Downloads 19
5178 Multi-Objective Random Drift Particle Swarm Optimization Algorithm Based on RDPSO and Crowding Distance Sorting
Authors: Yiqiong Yuan, Jun Sun, Dongmei Zhou, Jianan Sun
Abstract:
In this paper, we present a Multi-Objective Random Drift Particle Swarm Optimization algorithm (MORDPSO-CD) based on RDPSO and crowding distance sorting to improve convergence and distribution at a lower computational cost. MORDPSO-CD makes the most of RDPSO to approach the true Pareto-optimal solutions quickly, and adopts the crowding distance sorting technique to update and maintain the archived optimal solutions. Introducing the crowding distance technique into MORDPSO allows the leader particles to ultimately find the true Pareto solutions. The simulation results reveal that the proposed algorithm has better convergence and distribution.
Keywords: multi-objective optimization, random drift particle swarm optimization, crowding distance sorting, Pareto optimal solution
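The crowding distance measure used to keep the archive well spread can be computed as in the sketch below; the archived objective values are toy numbers, not results from MORDPSO-CD.

```python
import numpy as np

# Standard crowding-distance computation over an archive of objective vectors.
def crowding_distance(objs):
    n, m = objs.shape
    dist = np.zeros(n)
    for k in range(m):
        order = np.argsort(objs[:, k])
        f = objs[order, k]
        dist[order[0]] = dist[order[-1]] = np.inf      # keep boundary points
        span = f[-1] - f[0]
        if span == 0:
            continue
        dist[order[1:-1]] += (f[2:] - f[:-2]) / span   # neighbour gap per objective
    return dist

archive = np.array([[1.0, 9.0], [2.0, 6.0], [3.0, 4.0],
                    [4.5, 3.0], [7.0, 1.5], [9.0, 1.0]])
print(np.round(crowding_distance(archive), 3))   # larger = less crowded, preferred as leaders
```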
Procedia PDF Downloads 255
5177 The Impact of Geopolitical Risks and the Oil Price Fluctuations on the Kuwaiti Financial Market
Authors: Layal Mansour
Abstract:
The aim of this paper is to identify whether oil price volatility or geopolitical risk can predict future financial stress periods or economic recessions in Kuwait. We construct the first Financial Stress Index for Kuwait (FSIK), which includes informative vulnerability indicators of the main financial sectors: the banking sector, the equities market, and the foreign exchange market. The study covers the period from 2000 to 2020, so it includes the two most devastating recent world economic crises accompanied by oil price fluctuations: the Covid-19 pandemic crisis and the Russia-Ukraine war. All data are taken from the Central Bank of Kuwait, the World Bank, the IMF, DataStream, and the Federal Reserve Bank of St. Louis. The variables are computed as percentage growth rates, then standardized and aggregated into one index using the variance-equal-weights method, the one most frequently used in the literature. The graphical FSIK analysis provides detailed, dated information to policymakers on how internal financial stability depends on internal policy and events such as government elections or resignations; it also shows how decisions by monetary authorities or internal policymakers to relieve personal loans or to increase or decrease the public budget trigger internal financial instability. The empirical analysis under vector autoregression (VAR) models shows the dynamic causal relationship between oil price fluctuations and the Kuwaiti economy, which relies heavily on the oil price. Similarly, VAR models are used to assess the impact of global geopolitical risks on Kuwaiti financial stability, and the results reveal whether Kuwait is exposed to or sheltered from geopolitical risks. The Financial Stress Index serves as a guide for macroprudential regulators to understand the weaknesses of the overall Kuwaiti financial market and economy, regardless of the Kuwaiti dinar's strength and exchange rate stability, and it helps policymakers predict future stress periods and thus deploy alternative cushions to confront possible future financial threats.
Keywords: Kuwait, financial stress index, causality test, VAR, oil price, geopolitical risks
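The index construction described above (growth rates, standardization, equal weights after standardizing) can be sketched in pandas as follows. The three monthly series are random placeholders, not the actual banking, equity, and foreign exchange indicators behind the FSIK.

```python
import numpy as np
import pandas as pd

# Placeholder monthly indicators standing in for the FSIK inputs.
rng = np.random.default_rng(0)
idx = pd.date_range("2000-01-31", periods=252, freq="M")
raw = pd.DataFrame({
    "bank_credit": 100 * np.exp(np.cumsum(rng.normal(0.003, 0.02, 252))),
    "equity_index": 100 * np.exp(np.cumsum(rng.normal(0.004, 0.05, 252))),
    "fx_reserves": 100 * np.exp(np.cumsum(rng.normal(0.002, 0.03, 252))),
}, index=idx)

growth = raw.pct_change().dropna() * 100          # percentage growth rates
standardized = (growth - growth.mean()) / growth.std()
fsik = standardized.mean(axis=1)                  # equal weights after standardizing

print(fsik.tail())
print("stress episodes (index > 1 std):", (fsik > 1).sum())
```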
Procedia PDF Downloads 81
5176 Discrimination during a Resume Audit: The Impact of Job Context in Hiring
Authors: Alexandra Roy
Abstract:
Building on the literature on cognitive matching and social categorization and using the correspondence testing method, we test the interaction effect of person characteristics (gender combined with physical attractiveness) and job context (client contact, industry status, coworker contact) on hiring. As expected, the findings show a strong impact of gender combined with beauty on hiring chances, but job context characteristics also have a significant overall effect on this hiring outcome. Moreover, the rate of positive responses varies according to some of the recruiter's characteristics. The results are robust to various sensitivity checks. Implications of the results, limitations of the study, and directions for future research are discussed.
Keywords: correspondence testing, discrimination, hiring, physical attractiveness
Procedia PDF Downloads 209
5175 Cascaded Neural Network for Internal Temperature Forecasting in Induction Motor
Authors: Hidir S. Nogay
Abstract:
In this study, two systems were created to predict the internal temperature of an induction motor. One consisted of a simple ANN model with two layers, ten input parameters and one output parameter; the other consisted of eight ANN models connected to each other in a cascade, with 17 inputs. The main reason for using the cascaded system in this study is to achieve more accurate estimation by increasing the number of inputs to the ANN system, and the cascaded ANN system is compared with the simple conventional ANN model to demonstrate this advantage. The dataset was obtained from experimental applications; a small part of it was used to obtain more readable graphs. The number of data points is 329, and 30% of the data was used for testing and validation. Test and validation data were determined for each ANN model separately, and the reliability of each model was tested. As a result of this study, it was found that the cascaded ANN system produced more accurate estimates than the conventional ANN model.
Keywords: cascaded neural network, internal temperature, inverter, three-phase induction motor
Procedia PDF Downloads 345
5174 Global Optimization: The Alienor Method Mixed with Piyavskii-Shubert Technique
Authors: Guettal Djaouida, Ziadi Abdelkader
Abstract:
In this paper, we study a coupling of the Alienor method with the Piyavskii-Shubert algorithm. Classical multidimensional global optimization methods involve great implementation difficulties in high dimensions. The Alienor method transforms a multivariable function into a function of a single variable, for which efficient and rapid methods for computing the global optimum can be used. This simplification is based on the use of a reducing transformation called Alienor.
Keywords: global optimization, reducing transformation, α-dense curves, Alienor method, Piyavskii-Shubert algorithm
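A minimal one-dimensional Piyavskii-Shubert sketch is shown below: with a known Lipschitz constant, the saw-tooth lower bound is refined by always sampling where the bound is lowest. The test function, Lipschitz constant, and evaluation budget are illustrative, and the Alienor reduction that would produce the one-dimensional function from a multivariable one is not shown here.

```python
import numpy as np

# Piyavskii-Shubert global minimization of a univariate Lipschitz function.
def piyavskii_shubert(f, a, b, L, n_evals=40):
    xs = [a, b]
    fs = [f(a), f(b)]
    for _ in range(n_evals - 2):
        order = np.argsort(xs)
        x_s, f_s = np.array(xs)[order], np.array(fs)[order]
        x_new, bound = None, np.inf
        # On each interval the two Lipschitz cones intersect at the lowest
        # point of the saw-tooth lower bound; sample where that bound is lowest.
        for x1, x2, f1, f2 in zip(x_s[:-1], x_s[1:], f_s[:-1], f_s[1:]):
            xm = 0.5 * (x1 + x2) + (f1 - f2) / (2 * L)
            bm = 0.5 * (f1 + f2) - 0.5 * L * (x2 - x1)
            if bm < bound:
                x_new, bound = xm, bm
        xs.append(x_new)
        fs.append(f(x_new))
    k = int(np.argmin(fs))
    return xs[k], fs[k]

f = lambda x: np.sin(3 * x) + 0.3 * x          # Lipschitz constant <= 3.3 on [0, 6]
x_best, f_best = piyavskii_shubert(f, 0.0, 6.0, L=3.3)
print(f"x* ≈ {x_best:.4f}, f(x*) ≈ {f_best:.4f}")
```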
Procedia PDF Downloads 503