Search results for: sparse algorithm
2186 Arbitrarily Shaped Blur Kernel Estimation for Single Image Blind Deblurring
Authors: Aftab Khan, Ashfaq Khan
Abstract:
The research paper focuses on an interesting challenge faced in Blind Image Deblurring (BID): the estimation of arbitrarily shaped or non-parametric Point Spread Functions (PSFs) of motion blur caused by camera handshake. These PSFs exhibit much more complex shapes than their parametric counterparts, and deblurring in this case requires intricate ways to estimate the blur and effectively remove it. This research work introduces a novel blind deblurring scheme designed for images corrupted by arbitrarily shaped PSFs. It is based on a Genetic Algorithm (GA) and utilises the Blind/Reference-less Image Spatial QUality Evaluator (BRISQUE) measure as the fitness function for arbitrarily shaped PSF estimation. The proposed BID scheme has been compared with other single-image motion deblurring schemes as benchmarks. Validation has been carried out on various blurred images, and results for both benchmark and real images are presented. Non-reference image quality measures were used to quantify the deblurring results. For benchmark images, the proposed BID scheme using BRISQUE converges in the close vicinity of the original blurring functions.
Keywords: blind deconvolution, blind image deblurring, genetic algorithm, image restoration, image quality measures
Procedia PDF Downloads 443
2185 The Influence of Audio on Perceived Quality of Segmentation
Authors: Silvio Ricardo Rodrigues Sanches, Bianca Cogo Barbosa, Beatriz Regina Brum, Cléber Gimenez Corrêa
Abstract:
To evaluate the quality of a segmentation algorithm, researchers use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm; they require subjective experiments only during their development. Subjective experiments typically display to users some videos (generated from frames with segmentation errors) that simulate the environment of an application domain, and this user feedback is crucial information for metric definition. In the subjective experiments applied to develop some state-of-the-art metrics used to test segmentation algorithms, the videos displayed during the experiments did not contain audio. However, audio is an essential component in applications such as videoconferencing and augmented reality. If audio influences the user’s perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated from the data of these experiments. This work aims to identify whether audio influences the user’s perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by a user.
Keywords: background substitution, influence of audio, segmentation evaluation, segmentation quality
Procedia PDF Downloads 117
2184 A Comparative Analysis of Heuristics Applied to Collecting Used Lubricant Oils Generated in the City of Pereira, Colombia
Authors: Diana Fajardo, Sebastián Ortiz, Oscar Herrera, Angélica Santis
Abstract:
A problem is currently arising in Colombia related to collecting the used lubricant oils generated by the growing vehicle fleet. This situation does not allow a proper disposal of this type of waste, which in turn results in a negative impact on the environment. Therefore, through a comparative analysis of various heuristics, the best solution to the VRP (Vehicle Routing Problem) was selected by comparing costs and times for the collection of used lubricant oils in the city of Pereira, Colombia, since no management companies are engaged in the direct administration of the collection of this pollutant. To achieve this aim, six two-phase solution proposals were discussed. First, the previously identified generator points of the residue were assigned to groups: proposals one and four are based on the closeness of points, proposals two and five use the sweep (scanning) method, and proposals three and six consider the capacity restriction of the collection vehicle. Subsequently, the routes were developed, in the first three proposals by the Clarke and Wright savings algorithm and in the remaining proposals by the Traveling Salesman optimization mathematical model. After applying these techniques, a comparative analysis of the results was performed to determine which of the proposals presented the best values in terms of distance, cost and travel time.
Keywords: heuristics, optimization model, savings algorithm, used vehicular oil, VRP
Procedia PDF Downloads 414
2183 Conception of a Regulated, Dynamic and Intelligent Sewerage in Ostrevent
Authors: Rabaa Tlili Yaakoubi, Hind Nakouri, Olivier Blanpain
Abstract:
The current tools for real-time management of sewer systems are based on two kinds of software: weather forecasting software and hydraulic simulation software. The first is an important source of imprecision and uncertainty, while the second imposes long decision time steps because of its computation time; as a result, the obtained results generally differ from those expected. The major idea of the CARDIO project is to change this basic paradigm by approaching the problem from the control-engineering side rather than the hydrological side. The objective is to make possible a large number of simulations in a very short time (a few seconds), replacing weather forecasts by directly using the real-time measured rainfall (pluviometric) data. The aim is to reach a system where decision-making relies on reliable data and where error correction is permanent. A first model of control laws was realized and tested with rainfalls of different return periods; the gains obtained in rejected volume vary from 40 to 100%. A new algorithm was then developed to optimize calculation time and thus overcome the combinatorial problem of our first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The obtained gains are 60% of the total volume rejected to the natural environment and 80% in the number of discharges.
Keywords: RTC, paradigm, optimization, automation
Procedia PDF Downloads 284
2182 Algorithm for Predicting Cognitive Exertion and Cognitive Fatigue Using a Portable EEG Headset for Concussion Rehabilitation
Authors: Lou J. Pino, Mark Campbell, Matthew J. Kennedy, Ashleigh C. Kennedy
Abstract:
A concussion is complex and nuanced, with cognitive rest being a key component of recovery. Cognitive overexertion during rehabilitation from a concussion is associated with delayed recovery. However, daily living imposes cognitive demands that may be unavoidable and difficult to quantify. Therefore, a portable tool capable of alerting patients before cognitive overexertion occurs could allow patients to maintain their quality of life while preventing symptoms and recovery setbacks. EEG allows for a sensitive measure of cognitive exertion. Clinical 32-lead EEG headsets are not practical for day-to-day concussion rehabilitation management; however, there are now commercially available and affordable portable EEG headsets. These headsets can thus potentially be used to continuously monitor cognitive exertion during mental tasks and alert the wearer to overexertion, with the aim of preventing the occurrence of symptoms and speeding recovery. The objective of this study was to test an algorithm for predicting cognitive exertion from EEG data collected from a portable headset. EEG data were acquired from 10 participants (5 males, 5 females). Each participant wore a portable 4-channel EEG headband while completing 10 tasks: rest (eyes closed), rest (eyes open), three logic puzzles of increasing difficulty, three multiplication tasks of increasing difficulty, rest (eyes open), and rest (eyes closed). After each task, the participant was asked to report their perceived level of cognitive exertion using the NASA Task Load Index (TLX). Each participant then completed a second session on a different day. A customized machine learning model was created using data from the first session, and the performance of each model was then tested using data from the second session. The mean correlation coefficient between TLX scores and predicted cognitive exertion was 0.75 ± 0.16. The results support the efficacy of the algorithm for predicting cognitive exertion. This demonstrates that the algorithms developed in this study, used with portable EEG devices, have the potential to aid the concussion recovery process by monitoring and warning patients of cognitive overexertion. Preventing cognitive overexertion during recovery may reduce the number of symptoms a patient experiences and may help speed the recovery process.
Keywords: cognitive activity, EEG, machine learning, personalized recovery
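The abstract does not name the learner, so the following is only a minimal sketch of the per-participant evaluation loop it describes, assuming a ridge regression on per-task band-power features; the feature layout, regressor, and placeholder data are all assumptions:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

# Placeholder data: X_* are (n_tasks, n_features) band-power feature matrices
# from the 4-channel headband; y_* are the matching NASA-TLX scores.
rng = np.random.default_rng(0)
X_s1, y_s1 = rng.normal(size=(10, 16)), rng.uniform(0, 100, 10)  # session 1 (train)
X_s2, y_s2 = rng.normal(size=(10, 16)), rng.uniform(0, 100, 10)  # session 2 (test)

model = Ridge(alpha=1.0).fit(X_s1, y_s1)   # one personalized model per participant
y_pred = model.predict(X_s2)

r, _ = pearsonr(y_s2, y_pred)              # the per-participant correlation the study reports
print(f"TLX vs. predicted exertion: r = {r:.2f}")
```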
Procedia PDF Downloads 220
2181 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The PUSPATI TRIGA Reactor (RTP) in Malaysia reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in RTP. It is important to ensure that the core power is always stable and follows load tracking within an acceptable steady-state error and a minimum settling time to reach steady-state power. At present, the system's power tracking performance could be considered not well-posed; however, there is still potential to improve the current performance by developing a next-generation, novel nuclear core power control design. In this paper, the dual-mode predictions proposed in Optimal Model Predictive Control (OMPC) are presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm; the mathematical models of the reactor core were based on neutronic, thermal-hydraulic, and reactivity models. The dual-mode prediction in OMPC for the transient and terminal modes was based on the implementation of a Linear Quadratic Regulator (LQR) in designing the core power control. The combination of dual-mode prediction and a Lyapunov approach, which handles the summations in the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses of MPC. This paper shows the behaviour of OMPC in dealing with tracking, the regulation problem, disturbance rejection, and parameter uncertainty. The tracking and regulating performance of the conventional controller and OMPC are compared by numerical simulations. In conclusion, the proposed OMPC has shown significant performance in load tracking and regulating core power for a nuclear reactor with guaranteed closed-loop stability.
Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control
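The reactor model and OMPC details are beyond the abstract, but the dual-mode idea itself, free control moves over a finite horizon followed by an LQR law whose cost-to-go matrix caps the infinite tail, can be sketched compactly. Everything below, including the toy two-state plant, is an illustrative assumption rather than the paper's model:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Toy 2-state discrete-time plant standing in for the reactor core model.
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])

# Terminal mode: the LQR cost-to-go P summarizes the infinite tail of the
# cost function, which is the dual-mode trick for an infinite horizon.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # terminal-mode feedback gain

def dual_mode_cost(x0, U):
    """Finite-horizon stage costs plus LQR terminal cost x_N' P x_N."""
    x, J = x0, 0.0
    for u in U:  # transient mode: free control moves over the horizon
        J += (x.T @ Q @ x + u.T @ R @ u).item()
        x = A @ x + B @ u
    return J + (x.T @ P @ x).item()  # terminal mode: u = -K x thereafter
```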
Procedia PDF Downloads 162
2180 Adaptation of Projection Profile Algorithm for Skewed Handwritten Text Line Detection
Authors: Kayode A. Olaniyi, Tola. M. Osifeko, Adeola A. Ogunleye
Abstract:
Text line segmentation is an important step in document image processing. It represents a labeling process that assigns the same label, using a distance-metric probability, to spatially aligned units. Text line detection techniques have successfully been implemented mainly for printed documents; processing of handwritten text, especially in unconstrained documents, has remained a key problem. This is because unconstrained handwritten text lines are often not uniformly skewed, the spaces between text lines may not be obvious, and the problem is complicated by the nature of handwriting and the overlapping ascenders and/or descenders of some characters. Hence, text line detection and segmentation represent a leading challenge in handwritten document image processing. Text line detection methods that rely on the traditional global projection profile of the text document cannot efficiently cope with the problem of variable skew angles between different text lines, so formulating a single horizontal line as a separator is often not effective. This paper presents a technique to segment a handwritten document into distinct lines of text. The proposed algorithm starts by partitioning the initial text image, across its width, into vertical strips of about 5% each. At each vertical strip, the histogram of horizontal runs is projected, under the assumption that text appearing within a single strip is almost parallel. The algorithm provides a sliding window through the first vertical strip on the left side of the page and runs through it to identify each new minimum corresponding to a valley in the projection profile. Each valley represents the starting point of an orientation line, and the ending point is the minimum point on the projection profile of the next vertical strip. The derived text lines traverse around any obstructing handwritten connected component in a vertical strip by associating it with either the line above or the line below; the decision is made by the probability obtained from a distance metric. The technique outperforms the global projection profile for text line segmentation and is robust in handling skewed documents and those with lines running into each other.
Keywords: connected-component, projection-profile, segmentation, text-line
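A minimal sketch of the strip-wise projection profile described above; the 5% strip width follows the abstract, while the simple valley criterion and the binary-image convention are assumptions:

```python
import numpy as np

def detect_line_separators(binary_img, strip_frac=0.05):
    """Strip-wise horizontal projection profile (binary_img: 1 = ink,
    0 = background); returns candidate valley rows per vertical strip."""
    h, w = binary_img.shape
    strip_w = max(1, int(w * strip_frac))   # ~5% of page width per strip
    separators = []
    for x0 in range(0, w, strip_w):
        strip = binary_img[:, x0:x0 + strip_w]
        profile = strip.sum(axis=1)          # histogram of horizontal runs
        # valleys: rows with (near-)zero ink mark gaps between text lines
        valleys = np.where(profile <= profile.min() + 1)[0]
        separators.append((x0, valleys))
    # valleys are later joined across strips into piecewise separators
    return separators
```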
Procedia PDF Downloads 124
2179 Novel Adaptive Radial Basis Function Neural Networks Based Approach for Short-Term Load Forecasting of Jordanian Power Grid
Authors: Eyad Almaita
Abstract:
In this paper, a novel adaptive Radial Basis Function Neural Network (RBFNN) algorithm is used to forecast the hour-by-hour electrical load demand in Jordan. A small and effective RBFNN model forecasts the hourly total load demand based on a small number of features: the load in the previous day, the load in the same day of the previous week, the temperature in the same hour, the hour number, the day number, and the day type. The proposed adaptive RBFNN model can enhance the reliability of the conventional RBFNN after embedding the network in the system. This is achieved by introducing an adaptive algorithm that allows the weights of the RBFNN to change after the training process is completed, eliminating the need to retrain the RBFNN model. The data used in this paper are real data measured by the National Electrical Power Co. (Jordan): the data for the period Jan. 2012 - April 2013 are used to train the RBFNN models, and the data for the period May 2013 - Sep. 2013 are used to validate the models' effectiveness.
Keywords: load forecasting, adaptive neural network, radial basis function, short-term, electricity consumption
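The abstract does not state the post-training update rule, so the sketch below assumes the common choice of adapting only the linear output weights with a least-mean-squares step while the Gaussian centers stay fixed:

```python
import numpy as np

class AdaptiveRBFNN:
    """Gaussian RBF network whose linear output weights keep adapting
    online, so the model need not be retrained from scratch (a sketch;
    the centers, width, and LMS rule are assumptions, not the paper's)."""
    def __init__(self, centers, width, lr=0.01):
        self.c, self.s, self.lr = centers, width, lr
        self.w = np.zeros(len(centers))

    def _phi(self, x):
        d2 = ((self.c - x) ** 2).sum(axis=1)   # squared distance to centers
        return np.exp(-d2 / (2 * self.s ** 2))

    def predict(self, x):
        return self._phi(x) @ self.w

    def adapt(self, x, y_true):
        """Online LMS step on a newly measured load value."""
        phi = self._phi(x)
        err = y_true - phi @ self.w
        self.w += self.lr * err * phi
        return err
```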
Procedia PDF Downloads 344
2178 Integrating Process Planning, WMS Dispatching, and WPPW Weighted Due Date Assignment Using a Genetic Algorithm
Authors: Halil Ibrahim Demir, Tarık Cakar, Ibrahim Cil, Muharrem Dugenci, Caner Erden
Abstract:
Conventionally, process planning, scheduling, and due-date assignment functions are performed separately and sequentially, but the interdependence of these functions requires integration. Although integrated process planning and scheduling, and scheduling with due-date assignment, are popular research topics, only a few works address the integration of all three functions. This work focuses on the integration of process planning, WMS scheduling, and WPPW due-date assignment. Another novelty of this work is the use of weighted due-date assignment: in the literature, due dates are generally assigned without considering the importance of customers, whereas in this study more important customers get closer due dates. Typically, only tardiness is punished, but the JIT philosophy punishes both earliness and tardiness; in this study, all weighted earliness, tardiness, and due-date-related costs are penalized. As no customer desires distant due dates, such due dates are penalized as well. Various levels of integration of these three functions are tested, and genetic search and random search are compared both with each other and with ordinary solutions. Higher integration levels proved superior, while search is always useful, and genetic search outperformed random search.
Keywords: process planning, weighted scheduling, weighted due-date assignment, genetic algorithm, random search
Procedia PDF Downloads 394
2177 De-Novo Structural Elucidation from Mass/NMR Spectra
Authors: Ismael Zamora, Elisabeth Ortega, Tatiana Radchenko, Guillem Plasencia
Abstract:
The structure elucidation of unknown substances based on Mass Spectrometry (MS) data is an unresolved problem that affects many different fields of application. A recent overview of software available for structure elucidation of small molecules has shown the demand for an efficient computational tool able to perform structure elucidation of unknown small molecules and peptides. We developed an algorithm for de-novo fragment analysis based on MS data that proposes a set of scored and ranked structures compatible with the MS and MS/MS spectra. Several different algorithms were developed depending on the initial set of fragments and the structure-building processes, and in all cases several scores for the final molecule ranking were computed. They were validated with small and middle-sized databases (DB) using the eleven test-set compounds; similar results were obtained from any of the databases that contained the fragments of the expected compound. In summary, we presented an algorithm for de-novo fragment analysis based on mass spectrometry data only, which proposes a set of scored and ranked structures; it was validated on different types of databases and showed good results as a proof of concept. Moreover, the structures proposed from the mass spectra were submitted to NMR spectrum prediction in order to elucidate which of them was compatible with the collected NMR spectra.
Keywords: de novo, structure elucidation, mass spectrometry, NMR
Procedia PDF Downloads 295
2176 Numerical Studies for Standard Bi-Conjugate Gradient Stabilized Method and the Parallel Variants for Solving Linear Equations
Authors: Kuniyoshi Abe
Abstract:
Bi-conjugate gradient (Bi-CG) is a well-known method for solving linear equations Ax = b for x, where A is a given n-by-n matrix and b is a given n-vector. Typically, the dimension of the linear equation is high and the matrix is sparse. A number of hybrid Bi-CG methods such as conjugate gradient squared (CGS), Bi-CG stabilized (Bi-CGSTAB), BiCGStab2, and BiCGstab(l) have been developed to improve the convergence of Bi-CG. Bi-CGSTAB has been the method most often used for efficiently solving such linear equations, but its convergence behavior can exhibit a long stagnation phase. In such cases, it is important to have Bi-CG coefficients that are as accurate as possible, and a stabilization strategy, which stabilizes the computation of the Bi-CG coefficients, has been proposed; it may avoid stagnation and lead to faster computation. Motivated by the large number of processors in present petascale high-performance computing hardware, the scalability of Krylov subspace methods on parallel computers has recently become increasingly prominent. The main bottleneck for efficient parallelization is the inner products, which require a global reduction; the resulting global synchronization phases cause communication overhead on parallel computers. Parallel variants of Krylov subspace methods that reduce the number of global communication phases and hide the communication latency have been proposed. However, the numerical stability, and specifically the convergence speed, of the parallel variants of Bi-CGSTAB may become worse than that of the standard Bi-CGSTAB. In this paper, therefore, we compare the convergence speed of the standard Bi-CGSTAB and the parallel variants by numerical experiments and show that the convergence speed of the standard Bi-CGSTAB is faster than that of the parallel variants. Moreover, we propose a stabilization strategy for the parallel variants.
Keywords: bi-conjugate gradient stabilized method, convergence speed, Krylov subspace methods, linear equations, parallel variant
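For reference, a textbook sketch of the standard Bi-CGSTAB iteration the paper takes as its baseline; dense numpy arithmetic stands in for the sparse matrix-vector products, and no preconditioning is applied:

```python
import numpy as np

def bicgstab(A, b, x0=None, tol=1e-8, maxiter=1000):
    """Textbook Bi-CGSTAB (van der Vorst) for Ax = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    r_hat = r.copy()                      # shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros(n)
    for _ in range(maxiter):
        rho_new = r_hat @ r               # Bi-CG coefficient numerator
        beta = (rho_new / rho) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho_new / (r_hat @ v)
        s = r - alpha * v                 # intermediate residual
        t = A @ s
        omega = (t @ s) / (t @ t)         # stabilizing minimal-residual step
        x += alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho = rho_new
    return x
```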
Procedia PDF Downloads 164
2175 Numerical Solution of Momentum Equations Using Finite Difference Method for Newtonian Flows in Two-Dimensional Cartesian Coordinate System
Authors: Ali Ateş, Ansar B. Mwimbo, Ali H. Abdulkarim
Abstract:
The general transport equation has a wide range of application in fluid mechanics and heat transfer problems. When the variable φ, which represents a flow property, is taken to be a fluid velocity component, the general transport equation turns into the momentum equations, better known as the Navier-Stokes equations. For these non-linear differential equations, seeking numerical rather than analytic solutions is the more frequently used procedure, and the finite difference method is a commonly used numerical solution method. Using velocity and pressure gradients instead of stress tensors decreases the number of unknowns, and adding the continuity equation makes the number of equations equal to the number of unknowns. In this situation, velocity and pressure emerge as the two important parameters, and in the solution of the differential equation system they must be solved together. However, when pressure and velocity values are jointly solved at the same nodal points of the considered grid system, some problems confront us; to overcome this, using a staggered grid system is the preferred solution method. Various algorithms have been developed for computerized solutions on staggered grids, the two most commonly used being the SIMPLE and SIMPLER algorithms. In this study, the Navier-Stokes equations were numerically solved for a Newtonian flow, with mass and gravitational forces neglected, for an incompressible, laminar fluid in a hydrodynamically fully developed region, in a two-dimensional Cartesian coordinate system. The finite difference method was chosen as the solution method. This is a parametric study in which varying values of velocity components, pressure, and Reynolds numbers were used. The differential equations were discretized using the central difference and hybrid schemes, and the discretized equation system was solved by the Gauss-Seidel iteration method, with SIMPLE and SIMPLER used as solution algorithms. The results obtained with the central difference and hybrid discretization methods were compared, and the SIMPLE and SIMPLER solution algorithms were also compared to each other. As a result, it was observed that the hybrid discretization method gave better results over a larger area, and that, despite some disadvantages, the SIMPLER algorithm is more practical and gave results in a shorter time. For this study, a code was developed in the DELPHI programming language; the values obtained by the program were converted into graphs and discussed. During plotting, the quality of the graphs was increased by adding intermediate values to the obtained results using the Lagrange interpolation formula. The required numbers of grid points and nodes for the solution of the system were estimated, and to show that the obtained results are satisfactory, a grid-independence (GCI) analysis of the solution domain was performed for coarse, medium, and fine grid systems. When the graphs and program outputs were compared with similar studies, highly satisfactory results were achieved.
Keywords: finite difference method, GCI analysis, numerical solution of the Navier-Stokes equations, SIMPLE and SIMPLER algorithms
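The inner linear solves in both SIMPLE and SIMPLER reduce to sweeps of Gauss-Seidel iteration over the discretized equations; a generic dense sketch of that iteration is below (the pressure-velocity coupling itself is omitted):

```python
import numpy as np

def gauss_seidel(A, b, tol=1e-8, maxiter=10_000):
    """Gauss-Seidel iteration for the discretized momentum and
    pressure-correction equations (generic dense illustration)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(maxiter):
        x_old = x.copy()
        for i in range(n):
            # use the newest values as soon as they are computed
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            break
    return x
```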
Procedia PDF Downloads 391
2174 Optoelectronic Hardware Architecture for Recurrent Learning Algorithm in Image Processing
Authors: Abdullah Bal, Sevdenur Bal
Abstract:
This paper proposes a new type of hardware application for training cellular neural networks (CNN) using an optical joint transform correlation (JTC) architecture for image feature extraction. CNNs require much more computation during the training stage than during testing. Since optoelectronic hardware offers the possibility of parallel, high-speed processing for 2D data applications, the CNN training algorithm can be realized using Fourier optics techniques. JTC employs lenses and CCD cameras with a laser beam to realize 2D matrix multiplication and summation at the speed of light. Therefore, in each training iteration, JTC inherently carries most of the computation burden, and the rest of the mathematical computation is realized digitally. The bipolar data are encoded by phase, and the summation of correlation operations is realized using multi-object input joint images. The overlapping properties of JTC are then utilized for the summation of two cross-correlations, which reduces the computation required for the training stage. Phase-only JTC does not require data rearrangement, electronic pre-calculation, or strict system alignment. The proposed system can be incorporated simultaneously with various optical image processing or optical pattern recognition techniques within the same optical system.
Keywords: CNN training, image processing, joint transform correlation, optoelectronic hardware
Procedia PDF Downloads 506
2173 Ischemic Stroke Detection in Computed Tomography Examinations
Authors: Allan F. F. Alves, Fernando A. Bacchim Neto, Guilherme Giacomini, Marcela de Oliveira, Ana L. M. Pavan, Maria E. D. Rosa, Diana R. Pina
Abstract:
Stroke is a worldwide concern; in Brazil alone it accounts for 10% of all registered deaths. There are two stroke types, ischemic (87%) and hemorrhagic (13%), and early diagnosis is essential to avoid irreversible cerebral damage. Non-enhanced computed tomography (NECT) is one of the main diagnostic techniques used, due to its wide availability and rapid diagnosis. Detection depends on the size and severity of lesions and the time elapsed between the first symptoms and the examination. The Alberta Stroke Program Early CT Score (ASPECTS) is a subjective method that increases the detection rate. The aim of this work was to implement an image segmentation system to enhance ischemic stroke and to quantify the area of ischemic and hemorrhagic stroke lesions in CT scans. We evaluated 10 patients with NECT examinations diagnosed with ischemic stroke. Analyses were performed in two axial slices, one at the level of the thalamus and basal ganglia and one adjacent to the top edge of the ganglionic structures, with a window width between 80 and 100 Hounsfield Units. We used different image processing techniques such as morphological filters, the discrete wavelet transform, and Fuzzy C-means clustering. Subjective analyses were performed by a neuroradiologist according to the ASPECTS scale to quantify ischemic areas in the middle cerebral artery region, and these results were compared with the objective analyses performed by the computational algorithm. Preliminary results indicate that the morphological filters actually improve the ischemic areas for subjective evaluations. The comparison between the area of the ischemic region contoured by the neuroradiologist and the area defined by the computational algorithm showed no deviation greater than 12% in any of the 10 examinations, although the areas contoured by the neuroradiologist tend to be smaller than those obtained by the algorithm. These results show the importance of computer-aided diagnosis software in assisting neuroradiology decisions, especially in critical situations such as the choice of treatment for ischemic stroke.
Keywords: ischemic stroke, image processing, CT scans, Fuzzy C-means
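A bare-bones sketch of the Fuzzy C-means step on one-dimensional pixel intensities; the cluster count, the fuzzifier m = 2, and the intensity-only features are assumptions, and the paper's pipeline also includes morphological and wavelet preprocessing:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Plain Fuzzy C-means on 1-D pixel intensities (HU values); returns
    cluster centers and the fuzzy membership matrix U (clusters x pixels)."""
    rng = np.random.default_rng(seed)
    U = rng.random((n_clusters, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per pixel
    for _ in range(n_iter):
        Um = U ** m                          # fuzzified memberships
        centers = (Um @ X) / Um.sum(axis=1)  # membership-weighted means
        d = np.abs(centers[:, None] - X[None, :]) + 1e-12
        U = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U
```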
Procedia PDF Downloads 366
2172 Non-Destructive Evaluation for Physical State Monitoring of an Angle Section Thin-Walled Curved Beam
Authors: Palash Dey, Sudip Talukdar
Abstract:
In this work, a hybrid approach is presented for obtaining both the intensity and the location of damage existing in thin-walled members. This hybrid approach is developed based on response surface methodology (RSM) and a genetic algorithm (GA). A theoretical finite element (FE) model of a cracked angle-section thin-walled curved beam has been linked to the developed approach to carry out trial experiments and generate response surface functions (RSFs) of free, forced, and heterogeneous dynamic response data. Subsequently, the error between the computed response surface functions and the measured dynamic response data has been minimized using the GA to find the optimum damage parameters (damage intensity and location). A single crack of varying location and depth has been considered in this study. The presented approach has been found to predict the crack parameters with good accuracy and possesses great potential for crack detection, as it requires only the current response of a cracked beam.
Keywords: damage parameters, finite element, genetic algorithm, response surface methodology, thin walled curved beam
Procedia PDF Downloads 248
2171 Mobility-Aware Relay Selection in Two Hop Unmanned Aerial Vehicles Network
Authors: Tayyaba Hussain, Sobia Jangsher, Saqib Ali, Saqib Ejaz
Abstract:
Unmanned Aerial Vehicles (UAVs) have gained great popularity due to their remoteness, ease of deployment, and high maneuverability in applications like real-time surveillance, image capturing, atmospheric weather studies, and disaster site monitoring and mapping. These applications can involve real-time communication with the ground station; however, altitude and mobility pose a few challenges for the communication, since UAVs at high altitude usually require more transmit power. One possible solution is the use of multiple hops (UAVs acting as relays) while exploiting the mobility pattern of the UAVs. In this paper, we studied relay selection (UAVs acting as relays) for reliable transmission to a destination UAV. We exploit the mobility information of the UAVs to propose a Mobility-Aware Relay Selection (MARS) algorithm with the objective of achieving improved data rates. The results are compared with a non-mobility-aware relay selection scheme and with optimal values. Numerical results show that our proposed MARS algorithm gives 6% better achievable data rates for mobile UAVs compared with the non-mobility-aware relay selection scheme, while on average a 20.2% decrease in data rate is observed with MARS compared with the SDP solver in YALMIP.
Keywords: mobility aware, relay selection, time division multiple access, unmanned aerial vehicle
Procedia PDF Downloads 238
2170 Improve Divers Tracking and Classification in Sonar Images Using Robust Diver Wake Detection Algorithm
Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy
Abstract:
Harbor protection systems are of great importance, and the need for automatic protection systems has increased over recent years. Diver-detection active sonar is of particular significance: it is used to detect underwater threats such as divers and autonomous underwater vehicles. To detect such threats automatically, the sonar image is processed by algorithms that detect, track, and classify underwater objects. In this work, a diver tracking and classification algorithm is improved by proposing a robust wake detection method. To detect objects, the sonar image is normalized and then segmented based on a fixed threshold. Next, the centroids of the segments are found and clustered based on a distance metric, and a linear Kalman filter is applied to track the objects. To reduce the effect of noise and the creation of false tracks, the Kalman tracker is fine-tuned based on our active sonar specifications. After the tracks are initialized and updated, they are subjected to a filtering stage to eliminate noisy and unstable tracks, as well as objects with a speed outside the diver speed range, such as buoys and fast boats. Afterwards, the resulting tracks are subjected to a classification stage to decide the type of the tracked object, namely whether it is an open-circuit or a closed-circuit diver. At the classification stage, a small area around the object is extracted and a novel wake detection method is applied; the morphological features of the object together with its wake are extracted, and a support vector machine is used to find the best classifier. The sonar training and test images were collected by ARMELSAN Defense Technologies Company using the portable diver detection sonar ARAS-2023. After applying the algorithm to the test sonar data, we obtain fine and stable tracks of the divers. The total classification accuracy achieved for the diver type is 97%.
Keywords: harbor protection, diver detection, active sonar, wake detection, diver classification
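The tracking core is a standard linear Kalman filter; a constant-velocity sketch for 2-D centroid measurements is below, where the time step and the noise covariances stand in for the sonar-specific tuning mentioned in the abstract:

```python
import numpy as np

# Constant-velocity Kalman filter for 2-D sonar centroid tracks.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
              [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # only position is measured
Q = 0.01 * np.eye(4)                                # process noise (tuned)
R = 1.0 * np.eye(2)                                 # measurement noise (tuned)

def kf_step(x, P, z):
    """One predict/update cycle given segment centroid measurement z."""
    x, P = F @ x, F @ P @ F.T + Q                   # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain
    x = x + K @ (z - H @ x)                         # update
    P = (np.eye(4) - K @ H) @ P
    return x, P
```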
Procedia PDF Downloads 238
2169 A Recommender System for Dynamic Selection of Undergraduates' Elective Courses
Authors: Adewale O. Ogunde, Emmanuel O. Ajibade
Abstract:
The task of selecting a few elective courses from a variety of available elective courses has been a difficult one for many students over the years. In many higher institutions, guidance counselors or level advisers are usually employed to assist students in picking the right choice of courses. In reality, these counselors and advisers are often overloaded with too many students to attend to and sometimes do not have enough time for the students; moreover, the academic strength of the student, as reflected in past results, is usually not considered in the new choice of electives. Recommender systems implement advanced data analysis techniques to help users find items of interest by producing a predicted likeliness score or a list of top recommended items for a given active user. Therefore, in this work, a collaborative filtering-based recommender system was developed that dynamically recommends elective courses to undergraduate students based on their past grades in related courses. This approach employs the k-nearest neighbor algorithm to discover hidden relationships between the related courses passed by students in the past and the currently available elective courses. A dataset of real students' results was used to build and test the recommendation model. The developed system will not only improve the academic performance of students but will also help reduce the workload on level advisers and school counselors.
Keywords: collaborative filtering, elective courses, k-nearest neighbor algorithm, recommender systems
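A compact user-based k-NN collaborative filtering sketch over a students-by-courses grade matrix; the Pearson similarity, the NaN-for-not-taken convention, and the neighbor-mean scoring are illustrative assumptions, not details from the paper:

```python
import numpy as np

def recommend_electives(grades, student, k=5, top_n=3):
    """User-based k-NN collaborative filtering on a (students x courses)
    grade matrix, with NaN marking courses not taken."""
    target = grades[student]
    taken = ~np.isnan(target)
    sims = []
    for i, row in enumerate(grades):
        if i == student:
            continue
        common = taken & ~np.isnan(row)   # courses both students took
        if common.sum() < 2:
            continue
        r = np.corrcoef(target[common], row[common])[0, 1]  # Pearson similarity
        if not np.isnan(r):
            sims.append((r, i))
    neighbors = [i for _, i in sorted(sims, reverse=True)[:k]]
    # score unseen electives by the neighbors' mean grade in them
    scores = np.nanmean(grades[neighbors], axis=0)
    scores = np.where(np.isnan(scores), -np.inf, scores)  # no neighbor data
    scores[taken] = -np.inf               # never re-recommend taken courses
    return np.argsort(scores)[::-1][:top_n]
```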
Procedia PDF Downloads 164
2168 Friend or Foe: Decoding the Legal Challenges Posed by Artificial Intelligence in the Era of Intellectual Property
Authors: Latika Choudhary
Abstract:
“The potential benefits of Artificial Intelligence are huge. So are the dangers.” - Dave Waters. Artificial intelligence is a facet of the information technology domain which, despite several attempts, does not have a clear definition or ambit. It can, however, be understood as technology that solves problems via automated decisions and predictions. Artificial intelligence is essentially an algorithm-based technology that analyses large amounts of data and solves problems by detecting useful patterns. Owing to its automated nature, it would not be wrong to say that humans and AI together have more utility than humans alone or computers alone. For many decades AI experienced enthusiasm as well as setbacks, yet it has today become part and parcel of our everyday life, making it convenient or at times problematic. AI and related technology intersect with intellectual property in multiple ways, the most important being AI technology for the management of intellectual property, IP for protecting AI, and IP as a hindrance to the transparency of AI systems. The relationship between the two is thus one of reciprocity: IP influences AI and vice versa. While AI is a recent concept, the IP laws for protecting it, or even dealing with its challenges, are considerably older, raising the need for revision to keep up with the pace of technological advancement. This paper will analyze the relationship between AI and IP to determine how beneficial or conflictual it is, address how old concepts of IP are being stretched to their maximum limits to accommodate the unwanted consequences of artificial intelligence, and propose ways to mitigate the situation so that AI remains the friend it is and does not turn into the potential foe it appears to be.
Keywords: intellectual property rights, information technology, algorithm, artificial intelligence
Procedia PDF Downloads 87
2167 Automatic Content Curation of Visual Heritage
Authors: Delphine Ribes Lemay, Valentine Bernasconi, André Andrade, Lara DéFayes, Mathieu Salzmann, FréDéRic Kaplan, Nicolas Henchoz
Abstract:
Digitization and preservation of large heritage collections induce high maintenance costs to keep up with technical standards and ensure sustainable access, so creating impactful usage is instrumental to justifying the resources for long-term preservation. The Museum für Gestaltung of Zurich holds one of the biggest poster collections in the world, from which 52’000 posters were digitised. In the process of building a digital installation to valorize the collection, one objective was to develop an algorithm capable of predicting the next poster to show according to the ones already displayed. The work presented here describes the steps to build an algorithm able to automatically create sequences of posters reflecting associations performed by curators and professional designers. This challenge has similarities with the domain of song playlist algorithms, where artificial intelligence techniques, and more specifically deep learning algorithms, have recently been used to facilitate playlist generation; promising results were found with Recurrent Neural Networks (RNN) trained on manually generated playlists and paired with clusters of features extracted from songs. We applied the same principles to create the proposed algorithm, but for a more challenging medium: posters. First, a convolutional autoencoder was trained to extract features of the posters, using the 52’000 digital posters as the training set, and the poster features were then clustered. Next, an RNN learned to predict the next cluster according to the previous ones; its training set was composed of poster sequences extracted from a collection of books from the Gestaltung Museum of Zurich dedicated to displaying posters. Finally, within the predicted cluster, the poster with the best proximity to the previous poster is selected, where proximity is computed as the mean square distance between poster features. To validate the predictive model, we compared sequences of 15 posters produced by our model to randomly and manually generated sequences, the manual sequences being created by a professional graphic designer. We asked 21 professional graphic designers to sort the sequences from the one with the strongest graphic line to the one with the weakest and to motivate their answer with a short description. The sequences produced by the designer were ranked first 60%, second 25%, and third 15% of the time; the sequences produced by our predictive model were ranked first 25%, second 45%, and third 30% of the time; and the randomly produced sequences were ranked first 15%, second 29%, and third 55% of the time. Compared to the designer sequences, and as reported by participants, the model and random sequences lacked thematic continuity. According to the results, the proposed model is able to generate better poster sequencing than random sampling, and our algorithm sometimes even outperforms a professional designer. As a next step, the proposed algorithm should include the possibility of creating sequences according to a selected theme. To conclude, this work shows the potential of artificial intelligence techniques to learn from existing content and to provide a tool for curating large sets of data, with a permanent renewal of the presented content.
Keywords: artificial intelligence, digital humanities, serendipity, design research
Procedia PDF Downloads 184
2166 The Data-Driven Localized Wave Solution of the Fokas-Lenells Equation Using Physics-Informed Neural Network
Authors: Gautam Kumar Saharia, Sagardeep Talukdar, Riki Dutta, Sudipta Nandy
Abstract:
The physics-informed neural network (PINN) method opens up an approach for numerically solving nonlinear partial differential equations, leveraging the fast calculation speed and high precision of modern computing systems. We construct the PINN based on a strong universal approximation theorem, apply initial-boundary value data and residual collocation points to weakly impose the initial and boundary conditions on the neural network, and choose the optimization algorithms adaptive moment estimation (ADAM) and limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) to optimize the learnable parameters of the neural network. Next, we improve the PINN with a weighted loss function to obtain both the bright and dark soliton solutions of the Fokas-Lenells equation (FLE). We find that the proposed scheme of adjustable weight coefficients in the PINN has a better convergence rate and generalizability than the basic PINN algorithm. We believe that the PINN approach to solving the partial differential equations appearing in nonlinear optics will be useful in studying various optical phenomena.
Keywords: deep learning, optical soliton, physics informed neural network, partial differential equation
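Schematically, the weighted loss just scales the data and residual terms before summing; a PyTorch-style sketch is below, where fle_residual is a hypothetical helper standing in for the Fokas-Lenells residual and the weight values are assumptions, not the paper's coefficients:

```python
import torch

def pinn_loss(net, x_ic, u_ic, x_bc, u_bc, x_col,
              w_ic=10.0, w_bc=10.0, w_pde=1.0):
    """Weighted PINN loss: data terms weakly impose the initial/boundary
    conditions, and the residual term enforces the PDE at collocation points."""
    mse = torch.nn.functional.mse_loss
    loss_ic = mse(net(x_ic), u_ic)        # initial-condition data term
    loss_bc = mse(net(x_bc), u_bc)        # boundary-condition data term
    res = fle_residual(net, x_col)        # hypothetical FLE residual helper
    loss_pde = (res ** 2).mean()
    return w_ic * loss_ic + w_bc * loss_bc + w_pde * loss_pde
```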
Procedia PDF Downloads 70
2165 An Integrated Ecosystem Service-based Approach for the Sustainable Management of Forested Islands in South Korea
Authors: Jang-Hwan Jo
Abstract:
Implementing sustainable island forest management policies requires categorizing islands into groups based on key indicators and establishing a consistent management system. Building on the results of previous studies, a typology of forested islands was established: Type 1 – connected islands with high natural vegetation cover; Type 2 – connected islands with moderate natural vegetation cover; Type 3 – connected islands with low natural vegetation cover; Type 4 – unconnected islands with high natural vegetation cover; Type 5 – unconnected islands with moderate natural vegetation cover; and Type 6 – unconnected islands with low natural vegetation cover. An AHP analysis was conducted with island forest experts to identify priority ecosystem services (ESs) for the sustainable management of each island type. In connected islands, provisioning services (natural resources, natural medicines, etc.) assumed greater importance than regulating (erosion control) and supporting services (genetic diversity). In unconnected islands, particularly those with a small proportion of natural vegetation, regulating services (erosion control) require greater emphasis in management. Considering that Type 3 islands require urgent management, as connectivity to the mainland makes natural-vegetation-sparse island forest ecosystems vulnerable to anthropogenic activities, the land-use scoring method was carried out on Jin-do, a Type 3 forested island. Comparisons between AHP-derived expert demand for key island ESs and the spatial distribution of ES supply potential revealed mismatches between the supply and demand of erosion control, freshwater supply, and habitat provision. The framework developed in this study can help guide decisions and indicate where interventions should be focused to achieve sustainable island management.
Keywords: ecosystem service, sustainable management, forested islands, analytic hierarchy process
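AHP itself reduces to extracting a priority vector from each expert's pairwise comparison matrix and checking its consistency; a generic numpy sketch follows, where the 3x3 matrix values are illustrative, not the study's judgments:

```python
import numpy as np

# Pairwise comparisons: provisioning vs. regulating vs. supporting services.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # principal eigenvalue lambda_max
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # normalized priority vector

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1) # consistency index
CR = CI / 0.58                       # random index RI = 0.58 for n = 3
print(w, CR)                         # judgments acceptable when CR < 0.1
```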
Procedia PDF Downloads 75
2164 Embedded Digital Image System
Authors: Dawei Li, Cheng Liu, Yiteng Liu
Abstract:
This paper introduces an embedded digital image system for a Chinese space-environment vertical exploration sounding rocket. In order to record the flight status of the sounding rocket as well as the payloads, an onboard embedded image processing system based on the ADV212, a JPEG2000 compression chip, is designed in this paper. Since the sounding rocket is not designed to be recovered, all image data must be transmitted to the ground station before re-entry, while the downlink band available for image transmission is only about 600 kbps. At the same compression ratio, the JPEG2000 standard achieves better image quality than other algorithms, so JPEG2000 image compression is applied under this condition of a limited downlink data band. The embedded image system supports lossless to 200:1 real-time compression, with two cameras to monitor nose ejection and motor separation and two cameras to monitor boom deployment. The video decoder, an ADV7182, receives the PAL signal from the camera and outputs an ITU-R BT.656 signal to the ADV212, switching between the four input video channels according to the program sequence. Two SRAMs are used for ping-pong operation and one 512 Mb SDRAM for buffering high-frame-rate images. The whole image system has the characteristics of low power dissipation, low cost, small size, and high reliability, which makes it rather suitable for this sounding rocket application.
Keywords: ADV212, image system, JPEG2000, sounding rocket
Procedia PDF Downloads 421
2163 Homosexuality and Culture: A Case Study Depicting the Struggles of a Married Lady
Authors: Athulya Jayakumar, M. Manjula
Abstract:
Though there has been a shift in the understanding of homosexuality, no longer seen as a sin, crime, or pathology in medical and legal perspectives, its acceptance still remains very scanty in the Indian subcontinent. The present case study is of a 24-year-old female who has completed a diploma in polytechnic engineering and resides in the state of Kerala. She initially presented with her husband, with complaints of lack of sexual desire and non-cooperation on the part of the index client. After the initial few sessions, the client revealed, in an individual session, her homosexual orientation, which was unknown to her family. She has had multiple short-term relationships with females and never had any heterosexual orientation or interest. During her adolescence, she wondered whether she could change herself into a male; currently, however, she accepts her gender. She never wanted a heterosexual marriage but had to succumb to the pressure of her mother as a result of a series of unexpected incidents at home, agreeing to the marriage also in the hope that she might become bisexual. The client was able to bond with her husband emotionally, but the multiple attempts at sexual intercourse, at the insistence of the husband, were never pleasurable and induced a sense of disgust. Currently, and for several months, there has not been any sexual activity; she also actively avoids any chance of warm communication with him so as to avoid him approaching her sexually. The case study is an attempt to highlight the culture and the struggles of a homosexual individual who comes to therapy wanting to be a 'normal wife' despite having knowledge of the legal rights and scenario. There is a scarcity of Indian literature that has systematically investigated issues related to homosexuality; data on prevalence, emotional problems faced, and available clinical services are sparse, though such data are crucial for increasing the understanding of sexual behaviour, orientation, and the difficulties faced in India.
Keywords: case study, culture, cognitive behavior therapy, female homosexuality
Procedia PDF Downloads 345
2162 Non-intrusive Hand Control of Drone Using an Inexpensive and Streamlined Convolutional Neural Network Approach
Authors: Evan Lowhorn, Rocio Alba-Flores
Abstract:
The purpose of this work is to develop a method for classifying hand signals and using the output in a drone control algorithm. To achieve this, methods based on Convolutional Neural Networks (CNNs) were applied. CNNs are a subset of deep learning which allows grid-like inputs to be processed and passed through a neural network trained for classification. This type of neural network allows for classification via imaging, which is less intrusive than previous methods using biosensors such as EMG sensors. Classification CNNs operate purely on the pixel values in an image; therefore, they can be used without additional exteroceptive sensors. A development bench was constructed using a desktop computer connected to a high-definition webcam mounted on a scissor arm. This allowed the camera to be pointed downwards at the desk to provide a constant solid background for the dataset and a clear detection area for the user. A MATLAB script was created to automate dataset image capture at the development bench and save the images to the desktop, allowing the user to create a dataset of 12,000 images within three hours. These images were evenly distributed among seven classes: forward, backward, left, right, idle, and land, plus the drone's popular flip function, which was included as an additional class. To simplify control, the corresponding hand signals chosen were the numerical hand signs for one through five for the movements, a fist for land, and the universal "ok" sign for the flip command. Transfer learning with PyTorch (Python) was performed using a pre-trained 18-layer residual learning network (ResNet-18), retraining the network for the custom classification task. An algorithm was created to interpret the classification and send encoded messages to a Ryze Tello drone over its 2.4 GHz Wi-Fi connection. The drone's movements were performed in half-meter distance increments at a constant speed. When combined with the drone control algorithm, the classification performed as desired, with negligible latency compared to the delay in the drone's movement commands.
Keywords: classification, computer vision, convolutional neural networks, drone control
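The abstract names the stack (PyTorch, a pre-trained ResNet-18, seven classes); a minimal transfer-learning sketch along those lines follows, with the hyperparameters and the command mapping as assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_epoch(loader):
    model.train()
    for images, labels in loader:        # webcam frames and class indices
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

COMMANDS = ["forward", "backward", "left", "right", "idle", "land", "flip"]

def classify(frame):
    """frame: (1, 3, 224, 224) tensor; returns the drone command string."""
    model.eval()
    with torch.no_grad():
        return COMMANDS[model(frame).argmax(1).item()]
```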
Procedia PDF Downloads 210
2161 A Statistical-Algorithmic Approach for the Design and Evaluation of a Fresnel Solar Concentrator-Receiver System
Authors: Hassan Qandil
Abstract:
Using a statistical algorithm implemented in MATLAB, four types of non-imaging Fresnel lenses are designed: spot-flat, linear-flat, dome-shaped, and semi-cylindrical. The optimization employs a statistical ray-tracing methodology for the incident light, mainly considering the effects of chromatic aberration, varying focal lengths, solar inclination and azimuth angles, lens and receiver apertures, and the optimum number of prism grooves. While adopting an equal-groove-width assumption for the poly-methyl-methacrylate (PMMA) prisms, the main target is to maximize the ray intensity on the receiver's aperture and therefore achieve higher values of heat flux. The algorithm outputs prism angles and 2D sketches; 3D drawings are then generated via AutoCAD and linked to the COMSOL Multiphysics software to simulate the lenses under solar ray conditions, providing optical and thermal analysis at both the lens and receiver apertures, with conditions set per the Dallas, TX weather data. Once the lens characterization is finalized, receivers are designed based on the optimized aperture size. Several cavity shapes, including triangular, arc-shaped, and trapezoidal, are tested while coupled with a variety of receiver materials, working fluids, heat transfer mechanisms, and enclosure designs; a vacuum-reflective enclosure is also simulated for enhanced thermal absorption efficiency. Each receiver type is simulated via COMSOL while coupled with the optimized lens, and a lab-scale prototype of the optimum lens-receiver configuration is then fabricated for experimental evaluation. Application-based testing is also performed for the selected configuration, including a photovoltaic-thermal cogeneration system and a solar furnace system. Finally, some future research directions are pointed out, including the coupling of the collector-receiver system with an end-user power generator and the use of a multi-layered genetic algorithm for comparative studies.
Keywords: COMSOL, concentrator, energy, fresnel, optics, renewable, solar
Procedia PDF Downloads 155
2160 Accelerated Structural Reliability Analysis under Earthquake-Induced Tsunamis by Advanced Stochastic Simulation
Authors: Sai Hung Cheung, Zhe Shao
Abstract:
Recent earthquake-induced tsunamis in Padang (2004) and Tohoku (2011) brought huge losses of lives and property. Maintaining vertical evacuation systems is the most crucial strategy to effectively reduce casualties during a tsunami event, so it is of great interest to quantify the risk to structural dynamic systems due to earthquake-induced tsunamis. Despite continuous advancement in the computational simulation of tsunamis and wave-structure interaction modeling, it remains computationally challenging to evaluate the reliability (or its complement, the failure probability) of a structural dynamic system when uncertainties related to the system and its modeling are taken into account. Failure of the structure in a tsunami-wave-structure system is defined as any response quantity of the system exceeding specified thresholds while the structure is subjected to dynamic wave impact due to earthquake-induced tsunamis. In this paper, an approach is proposed based on a novel integration of the Subset Simulation algorithm and a recently proposed moving least squares response surface approach for stochastic sampling. The effectiveness of the proposed approach is discussed by comparing its results with those obtained from the Subset Simulation algorithm without the response surface approach.
Keywords: response surface model, subset simulation, structural reliability, tsunami risk
Procedia PDF Downloads 383
2159 Modeling Continuous Flow in a Curved Channel Using Smoothed Particle Hydrodynamics
Authors: Indri Mahadiraka Rumamby, R. R. Dwinanti Rika Marthanty, Jessica Sjah
Abstract:
Smoothed particle hydrodynamics (SPH) was originally created to simulate non-axisymmetric phenomena in astrophysics. However, the method still has several shortcomings, namely the high computational cost required to model values at high resolution and problems with boundary conditions. The difficulty of modeling boundary conditions arises because the SPH method suffers from particle deficiency where the integral of the kernel function is truncated by the boundary. This research aims to answer whether SPH modeling with a focus on boundary-layer interactions and continuous flow can produce quantifiably accurate values at low computational cost. The research combines algorithms and coding in the main meandering-river program, a continuous-flow algorithm, and a solid-fluid algorithm, with the aim of obtaining quantitatively accurate results on solid-fluid interactions with continuous flow in a meandering channel using the SPH method. The study uses the Fortran programming language for the SPH (Smoothed Particle Hydrodynamics) numerical model; the model takes the form of a 3D U-shaped meandering open channel, where the channel walls are soil particles, and uses continuous flow with a limited number of particles.
Keywords: smoothed particle hydrodynamics, computational fluid dynamics, numerical simulation, fluid mechanics
Procedia PDF Downloads 131
2158 Remote Assessment and Change Detection of GreenLAI of Cotton Crop Using Different Vegetation Indices
Authors: Ganesh B. Shinde, Vijaya B. Musande
Abstract:
Timely and accurate identification of cotton crop regions has significant implications for food supply, the economy, and the environment. Because of these advantages, the accurate detection of cotton crop regions using a supervised learning procedure is a challenging problem in remote sensing. Classifiers applied directly to the image play a major role here, but the results are not very satisfactory. To further improve effectiveness, a variety of vegetation indices have been proposed in the literature; the remaining challenge is to find the better vegetation indices for cotton crop identification through the proposed methodology. Accordingly, fuzzy c-means clustering is combined with a neural network trained by the Levenberg-Marquardt algorithm for cotton crop classification. To test the proposed method, five LISS-III satellite images were taken, and the experimentation was done with six vegetation indices: Simple Ratio, Normalized Difference Vegetation Index, Enhanced Vegetation Index, Green Atmospherically Resistant Vegetation Index, Wide-Dynamic-Range Vegetation Index, and Green Chlorophyll Index. Along with these indices, the Green Leaf Area Index is also considered for investigation. From the research outcome, the Green Atmospherically Resistant Vegetation Index outperformed all other indices, reaching an average accuracy of 95.21%.
Keywords: fuzzy c-means clustering (FCM), neural network, Levenberg-Marquardt (LM) algorithm, vegetation indices
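For reference, the band-ratio definitions behind several of the compared indices are standard; a short sketch is below, where the GARI constant gamma = 1.7 follows Gitelson et al. and is an assumption, since the abstract does not state the value used:

```python
import numpy as np

# Standard band-ratio vegetation indices; inputs are reflectance bands
# (numpy arrays or scalars) from the multispectral image.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def simple_ratio(nir, red):
    return nir / red

def gari(nir, green, blue, red, gamma=1.7):
    """Green Atmospherically Resistant Vegetation Index (Gitelson et al.)."""
    adj = green - gamma * (blue - red)   # atmospherically corrected green
    return (nir - adj) / (nir + adj)

def green_chlorophyll_index(nir, green):
    return nir / green - 1.0
```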
Procedia PDF Downloads 318
2157 Evolving Credit Scoring Models using Genetic Programming and Language Integrated Query Expression Trees
Authors: Alexandru-Ion Marinescu
Abstract:
There exists a plethora of methods in the scientific literature which tackle the well-established task of credit score evaluation. In its most abstract form, a credit scoring algorithm takes as input several credit applicant properties, such as age, marital status, employment status, loan duration, etc., and must output a binary response variable (i.e. "GOOD" or "BAD") stating whether the client is susceptible to payment return delays. Data imbalance is a common occurrence in financial institution databases, with the majority classified as "GOOD" clients (clients that respect the loan return calendar) alongside a small percentage of "BAD" clients. But it is the "BAD" clients we are interested in, since accurately predicting their behavior is crucial in preventing unwanted losses for loan providers. We add to this context the constraint that the algorithm must yield an actual, tractable mathematical formula, which is friendlier towards financial analysts. To this end, we have turned to genetic algorithms and genetic programming, aiming to evolve actual mathematical expressions using specially tailored mutation and crossover operators. As far as data representation is concerned, we employ a very flexible mechanism, LINQ expression trees, readily available in the C# programming language, which enable us to construct executable pieces of code at runtime. As the name implies, they model trees, with intermediate nodes being operators (addition, subtraction, multiplication, division) or mathematical functions (sin, cos, abs, round, etc.) and leaf nodes storing either constants or variables. There is a one-to-one correspondence between the client properties and the formula variables. The mutation and crossover operators work on a flattened version of the tree, obtained via a pre-order traversal. A consequence of our chosen technique is that we can identify and discard client properties which do not take part in the final score evaluation, effectively acting as a dimensionality reduction scheme. We compare ourselves with state-of-the-art approaches such as support vector machines, Bayesian networks, and extreme learning machines, to name a few. The data sets we benchmark against amount to a total of 8, of which we mention the well-known Australian credit and German credit data sets, and the performance indicators are the following: percentage correctly classified, area under curve, partial Gini index, H-measure, Brier score, and Kolmogorov-Smirnov statistic. Finally, we obtain encouraging results which, although placing us in the lower half of the hierarchy, drive us to further refine the algorithm.
Keywords: expression trees, financial credit scoring, genetic algorithm, genetic programming, symbolic evolution
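The paper builds its formulas as C# LINQ expression trees; as a language-neutral illustration of the same evolve-a-formula idea, here is a minimal Python sketch of the tree representation and its evaluation (mutation and crossover are omitted, and all details are assumptions rather than the authors' implementation):

```python
import random

# Operator symbols mapped to safe implementations (protected division).
OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
       "*": lambda a, b: a * b, "/": lambda a, b: a / b if b != 0 else 1.0}

def random_tree(n_vars, depth=3):
    """Grow a random expression tree: leaves are applicant attributes
    ('var', index) or constants; internal nodes are operators."""
    if depth == 0 or random.random() < 0.3:
        if random.random() < 0.5:
            return ("var", random.randrange(n_vars))
        return ("const", random.uniform(-1.0, 1.0))
    op = random.choice(list(OPS))
    return (op, random_tree(n_vars, depth - 1), random_tree(n_vars, depth - 1))

def evaluate(tree, x):
    kind = tree[0]
    if kind == "var":
        return x[tree[1]]
    if kind == "const":
        return tree[1]
    return OPS[kind](evaluate(tree[1], x), evaluate(tree[2], x))

def classify(tree, x, threshold=0.0):
    """Applicant is 'GOOD' when the evolved formula exceeds the threshold."""
    return "GOOD" if evaluate(tree, x) > threshold else "BAD"
```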
Procedia PDF Downloads 117