Search results for: elliptic curve digital signature algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4692

4092 A Novel Deinterlacing Algorithm Based on Adaptive Polynomial Interpolation

Authors: Seung-Won Jung, Hye-Soo Kim, Le Thanh Ha, Seung-Jin Baek, Sung-Jea Ko

Abstract:

In this paper, a novel deinterlacing algorithm is proposed. The proposed algorithm approximates the luminance distribution with a polynomial function. Instead of using one polynomial function for all pixels, different polynomial functions are used for the uniform, texture, and directional edge regions. The function coefficients for each region are computed by matrix multiplications. Experimental results demonstrate that the proposed method performs better than conventional algorithms.
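
As a rough illustration of the core idea, the minimal sketch below fits a least-squares polynomial to the available field lines of one column and evaluates it at a missing line position. The region classification and per-region polynomial choice described by the authors are not reproduced, and the function name, degree, and values are illustrative.

```python
import numpy as np

def interpolate_missing_pixel(column, missing_row, degree=3):
    """Estimate a pixel on a missing interlaced line by least-squares
    polynomial fitting over the available lines of the same column.

    column      : 1-D array of luminance values for the available lines
    missing_row : position (in the same coordinates) to reconstruct
    """
    rows = np.arange(len(column))                # positions of known lines
    coeffs = np.polyfit(rows, column, degree)    # least-squares fit
    return np.polyval(coeffs, missing_row)

# Toy example: reconstruct a value between two known field lines.
field_column = np.array([52.0, 60.0, 75.0, 90.0, 94.0])
print(interpolate_missing_pixel(field_column, 1.5))
```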

Keywords: Deinterlacing, polynomial interpolation.

4091 A Study of the Effectiveness of the Routing Decision Support Algorithm

Authors: Wayne Goodridge, Alexander Nikov, Ashok Sahai

Abstract:

Multi-criteria decision making (MCDM) methods such as the analytic hierarchy process, ELECTRE, and multi-attribute utility theory are critically studied. These methods show irregularities in the reliability with which they rank the best alternatives. The Routing Decision Support (RDS) algorithm attempts to remedy some of these deficiencies. This paper gives a mathematical verification that the RDS algorithm conforms to the test criteria for an effective MCDM method when a linear preference function is considered.
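
For readers unfamiliar with linear preference functions, the hedged sketch below ranks a few hypothetical routing alternatives with a simple weighted sum. It is not the RDS algorithm itself; the criteria, weights, and scores are invented for illustration.

```python
import numpy as np

# Decision matrix: rows = routing alternatives, columns = criteria
# (e.g. delay, bandwidth, loss), already scaled to [0, 1] with a
# "higher is better" orientation.  All values are illustrative.
scores = np.array([
    [0.8, 0.6, 0.9],   # route A
    [0.5, 0.9, 0.7],   # route B
    [0.9, 0.4, 0.6],   # route C
])
weights = np.array([0.5, 0.3, 0.2])   # decision maker's preferences, sum to 1

preference = scores @ weights          # linear (additive) preference function
ranking = np.argsort(-preference)      # best alternative first
print(preference, ranking)
```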

Keywords: Decision support systems, linear preference function, multi-criteria decision-making algorithm, analytic hierarchy process.

4090 A new Heuristic Algorithm for the Dynamic Facility Layout Problem with Budget Constraint

Authors: Parham Azimi, Hamid Reza Charmchi

Abstract:

In this research, we have developed a new efficient heuristic algorithm for the dynamic facility layout problem with budget constraint (DFLPB). The heuristic combines discrete-event simulation with linear integer programming (IP) to obtain a near-optimum solution. In the proposed algorithm, the non-linear model of the DFLP is first reformulated as a pure integer programming (PIP) model. The optimal solution of the PIP model is then used in a simulation model, designed to mirror the DFLP, to determine the probability of assigning a facility to a location. After a sufficient number of runs, the simulation model obtains near-optimum solutions. Finally, to verify the performance of the algorithm, several test problems have been solved. The results show that the proposed algorithm is more efficient, in terms of both speed and accuracy, than other heuristic algorithms reported in the literature.

Keywords: Budget constraint, Dynamic facility layout problem, Integer programming, Simulation

4089 Statistical Modeling of Mandarin Tone Sandhi: Neutralization of Underlying Pitch Targets

Authors: Si Chen, Caroline Wiltshire, Bin Li

Abstract:

This study statistically models the surface f0 contour and the underlying pitch target of a well-studied third-tone sandhi of Mandarin Chinese. Although growth curve analysis of the surface f0 contours indicates non-neutralization of this sandhi tone (T3) and the base T2, their underlying pitch targets do show neutralization. These results for Mandarin are also consistent with the perception of native speakers, who cannot distinguish the sandhi T3 from the base T2 once contextual variation is compensated for. The proposed statistical procedure for testing underlying pitch targets could be used to verify tone sandhi processes in other tonal languages.
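
As a very reduced illustration of curve-based tone comparison, the sketch below fits orthogonal (Legendre) polynomials to two hypothetical f0 contours so their shape coefficients can be compared. A full growth curve analysis would add the random-effects structure and statistical tests used in the paper, and the contour values here are invented.

```python
import numpy as np

def contour_coefficients(f0_contour, degree=2):
    """Fit an orthogonal-polynomial (Legendre) curve to one f0 contour,
    sampled on normalized time, and return its coefficients."""
    t = np.linspace(-1.0, 1.0, len(f0_contour))     # normalized time axis
    return np.polynomial.legendre.legfit(t, f0_contour, degree)

# Illustrative contours (Hz) for a sandhi T3 token and a base T2 token.
sandhi_t3 = np.array([180, 186, 195, 205, 216, 228])
base_t2   = np.array([178, 185, 196, 207, 218, 229])

print(contour_coefficients(sandhi_t3))
print(contour_coefficients(base_t2))
```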

Keywords: Growth curve analysis, tone sandhi, underlying pitch targets.

4088 CT Medical Images Denoising Based on New Wavelet Thresholding Compared with Curvelet and Contourlet

Authors: Amir Moslemi, Amir Movafeghi, Shahab Moradi

Abstract:

Noise is one of the most important challenging factors in medical imaging. Image denoising refers to the improvement of a digital medical image that has been corrupted by Additive White Gaussian Noise (AWGN). A digital medical image or video can be affected by different types of noise, including impulse noise, Poisson noise, and AWGN. Computed tomography (CT) images suffer from low quality due to noise. The quality of CT images depends directly on the dose absorbed by the patient: increasing the absorbed radiation, and consequently the absorbed dose to the patient (ADP), enhances CT image quality. Therefore, noise reduction techniques that enhance image quality without exposing patients to excess radiation are a challenging problem in CT image processing. In this work, noise reduction in CT images was performed using two directional two-dimensional (2D) transforms, the curvelet and the contourlet, together with the Discrete Wavelet Transform (DWT) thresholding methods BayesShrink and AdaptShrink, and the approaches were compared with each other. We also propose a new threshold in the wavelet domain that not only reduces noise but also retains edges; consequently, the proposed method preserves the significant coefficients, which results in good visual quality. Evaluations were carried out using two criteria, namely peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).
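
The hedged sketch below shows the general shape of wavelet-domain denoising plus a PSNR metric, using PyWavelets with the classic universal (VisuShrink) threshold as a stand-in; the paper's proposed threshold and the BayesShrink/AdaptShrink variants would replace the threshold line.

```python
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="db4", level=2):
    """Denoise a 2-D image by soft-thresholding its DWT detail coefficients.
    The universal (VisuShrink) threshold is used only for illustration."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Noise estimate from the finest diagonal detail sub-band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(image.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, thr, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```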

Keywords: Computed tomography (CT), noise reduction, curvelet, contourlet, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), absorbed dose to patient (ADP).

4087 Developing Vision-Based Digital Public Display as an Interactive Media

Authors: Adrian Samuel Limanto, Yunli Lee

Abstract:

Interactive public displays provide access to an innovative medium that promotes enhanced communication between people and information. However, digital public displays are subject to a few constraints, such as content presentation. Content presentation needs to be made more interesting to attract people's attention and motivate them to interact with the display. In this paper, we propose an approach that implements content with interaction elements for a vision-based digital public display. Vision-based techniques are applied as a sensor to detect passers-by, and themed content is suggested to attract their attention and encourage them to interact with the announcement content. Virtual objects, gesture detection, and a projection installation are used to attract the attention of passers-by. A preliminary study showed positive feedback on the interactive content design for the public display. This new trend would be a valuable innovation, as delivering announcement content and communicating information through this medium proves to be more engaging.

Keywords: Digital announcement, digital public display, human-information interaction, interactive media.

4086 A Variable Incremental Conductance MPPT Algorithm Applied to Photovoltaic Water Pumping System

Authors: S. Abdourraziq, R. El Bachtiri

Abstract:

The use of solar energy as a source for pumping water is one of the promising areas of photovoltaic (PV) application. The energy yield of photovoltaic pumping systems (PVPS) can be greatly improved by employing an MPPT algorithm, which consequently maximizes the electrical motor speed of the system. This paper presents a modified incremental conductance (IncCond) MPPT algorithm with a direct control method applied to a standalone PV pumping system. The influence of the algorithm parameters on system behavior is investigated and compared with the traditional (INC) method. The studied system consists of a PV panel, a DC-DC boost converter, and a PMDC motor-pump. The system is simulated in MATLAB/Simulink, and the simulation results are satisfactory.
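
For context, the sketch below shows one iteration of the textbook incremental-conductance rule with direct duty-cycle control; the paper's variable-step modification is not reproduced, and the sign convention, step size, and limits are assumptions.

```python
def incremental_conductance_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One iteration of the classic incremental-conductance MPPT rule with
    direct duty-cycle control.  Sign convention: increasing the boost
    converter duty cycle lowers the PV operating voltage."""
    if v <= 0:                       # panel not producing: leave duty unchanged
        return duty
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di > 0:                   # irradiance rose at constant voltage
            duty -= step
        elif di < 0:
            duty += step
    else:
        inc_cond = di / dv
        if inc_cond > -i / v:        # operating point left of the MPP
            duty -= step             # raise the PV voltage
        elif inc_cond < -i / v:      # operating point right of the MPP
            duty += step             # lower the PV voltage
    return min(max(duty, 0.0), 0.95) # keep the duty cycle in a safe range
```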

Keywords: Photovoltaic pumping system (PVPS), incremental conductance (INC), MPPT algorithm, boost converter.

4085 Fast 3D Collision Detection Algorithm using 2D Intersection Area

Authors: Taehyun Yoon, Keechul Jung

Abstract:

There has been much research on detecting collisions between real and virtual objects in 3D space. In general, these techniques require huge computing power, so many studies rely on cloud, network, or distributed computing. For this reason, this paper proposes a novel fast 3D collision detection algorithm between real and virtual objects using 2D intersection areas. The proposed algorithm uses four cameras and a coarse-to-fine method to improve the accuracy and speed of collision detection. In the coarse step, the system examines the intersection area between the real and virtual object silhouettes from all camera views; the result of this step is the set of virtual sensors that may be in collision in 3D space. To decide collisions accurately, in the fine step, the system performs collision detection in 3D space using the visual hull algorithm. The performance of the algorithm is verified by comparison with an existing algorithm. We believe the proposed algorithm can help many other research, study, and application fields such as HCI, augmented reality, and intelligent spaces.
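
A minimal sketch of the coarse step, assuming binary silhouette masks are already available per camera view: a virtual object stays a collision candidate only if its projected silhouette overlaps the real object's silhouette in every view. The fine visual-hull step is not shown, and all names are illustrative.

```python
import numpy as np

def intersection_area(real_mask, virtual_mask):
    """Pixel count of the overlap between a real object's silhouette and a
    virtual object's projected silhouette in one view (boolean images of
    the same size)."""
    return np.count_nonzero(np.logical_and(real_mask, virtual_mask))

def coarse_collision(real_masks, virtual_masks, min_pixels=1):
    """A virtual object remains a collision candidate only if its silhouette
    overlaps the real object's silhouette in every camera view."""
    return all(
        intersection_area(r, v) >= min_pixels
        for r, v in zip(real_masks, virtual_masks)
    )
```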

Keywords: Collision Detection, Computer Vision, Human Computer Interaction, Visual Hull

4084 A Novel Pareto-Based Meta-Heuristic Algorithm to Optimize Multi-Facility Location-Allocation Problem

Authors: Vahid Hajipour, Samira V. Noshafagh, Reza Tavakkoli-Moghaddam

Abstract:

This article proposes a novel Pareto-based multi-objective meta-heuristic algorithm named the non-dominated ranking genetic algorithm (NRGA) to solve the multi-facility location-allocation problem. In NRGA, a fitness value representing rank is assigned to each individual of the population. Moreover, a two-stage rank-based roulette wheel selection is utilized: first a front is selected, and then solutions are chosen from that front. The proposed solution methodology is validated using several examples taken from the specialized literature. The results show that the NRGA is able to generate true and well-distributed Pareto-optimal solutions.
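
The hedged sketch below illustrates the two building blocks named above: Pareto ranking into fronts and rank-based roulette selection of a front. Within the chosen front a member is picked uniformly here, whereas NRGA also uses a rank-based rule, and the objective vectors are invented.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(population):
    """Split a list of objective vectors into ranked fronts (front 0 = best)."""
    remaining = list(range(len(population)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(population[j], population[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def ranked_roulette_pick(fronts):
    """Pick a front with probability proportional to a rank-based weight,
    then pick one of its members uniformly."""
    weights = [len(fronts) - r for r in range(len(fronts))]  # better rank, larger weight
    front = random.choices(fronts, weights=weights, k=1)[0]
    return random.choice(front)

# Example: objective vectors (cost, coverage gap) for candidate layouts.
pop = [(3.0, 5.0), (2.5, 6.0), (4.0, 4.0), (3.5, 5.5), (2.0, 7.0)]
fronts = non_dominated_fronts(pop)
print(fronts, ranked_roulette_pick(fronts))
```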

Keywords: Non-dominated ranking genetic algorithm, Pareto solutions, Multi-facility location-allocation problem.

4083 A Novel Compression Algorithm for Electrocardiogram Signals based on Wavelet Transform and SPIHT

Authors: Sana Ktata, Kaïs Ouni, Noureddine Ellouze

Abstract:

An electrocardiogram (ECG) data compression algorithm is needed that reduces the amount of data to be transmitted, stored, and analyzed without losing the clinical information content. A wavelet ECG data codec based on the Set Partitioning In Hierarchical Trees (SPIHT) compression algorithm is proposed in this paper. The SPIHT algorithm has achieved notable success in still image coding. We modified the algorithm for the one-dimensional (1-D) case and applied it to the compression of ECG data. With this compression method, a small percent root mean square difference (PRD) and a high compression ratio with low implementation complexity are achieved. Experiments on selected records from the MIT-BIH arrhythmia database revealed that the proposed codec is significantly more efficient in compression and in computation than previously proposed ECG compression schemes. Compression ratios of up to 48:1 for ECG signals lead to acceptable results for visual inspection.
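
SPIHT itself is too involved to sketch here, but the two figures of merit quoted above are simple to compute. The sketch below uses the common uncentered definition of PRD, which may differ slightly from the exact variant used by the authors.

```python
import numpy as np

def prd(original, reconstructed):
    """Percent root-mean-square difference between the original and the
    reconstructed ECG (lower is better)."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    return 100.0 * np.sqrt(
        np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)
    )

def compression_ratio(original_bits, compressed_bits):
    """E.g. 48:1 means the coded stream is 48 times smaller than the input."""
    return original_bits / compressed_bits
```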

Keywords: Discrete Wavelet Transform, ECG compression, SPIHT.

4082 Data Mining Using Learning Automata

Authors: M. R. Aghaebrahimi, S. H. Zahiri, M. Amiri

Abstract:

In this paper, a data miner based on learning automata, called LA-miner, is proposed. The LA-miner extracts classification rules from data sets automatically. The proposed algorithm is built on function optimization using learning automata. The experimental results on three benchmarks indicate that the performance of the proposed LA-miner is comparable with (and sometimes better than) that of Ant-miner (a data mining algorithm based on ant colony optimization) and CN2 (a well-known data mining algorithm for classification).

Keywords: Data mining, Learning automata, Classification rules, Knowledge discovery.

4081 Consumer Load Profile Determination with Entropy-Based K-Means Algorithm

Authors: Ioannis P. Panapakidis, Marios N. Moschakis

Abstract:

With the continuous increase in smart meter installations across the globe, the need for processing the load data is evident. Clustering-based load profiling builds on unsupervised machine learning tools to formulate the typical load curves or load profiles. The most commonly used algorithm in the load profiling literature is K-means. While the algorithm has been successfully tested in a variety of applications, its drawback is its strong dependence on the initialization phase. This paper proposes a novel modified form of K-means that addresses this problem. Simulation results indicate the superiority of the proposed algorithm compared to K-means.
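
The hedged sketch below shows the standard clustering step of load profiling with scikit-learn's K-means on synthetic, peak-normalized daily load curves; the paper's entropy-based initialization is not reproduced and would be supplied through the init argument.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative daily load curves: one row per consumer, 24 hourly readings,
# each curve normalized by its own peak so that only the shape is clustered.
rng = np.random.default_rng(0)
loads = rng.random((200, 24))
loads = loads / loads.max(axis=1, keepdims=True)

# Plain K-means with k-means++ seeding; an entropy-based initialization
# would instead pass precomputed centers via the 'init' argument.
km = KMeans(n_clusters=6, init="k-means++", n_init=10, random_state=0)
labels = km.fit_predict(loads)

typical_profiles = km.cluster_centers_      # one typical load curve per cluster
print(typical_profiles.shape, np.bincount(labels))
```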

Keywords: Clustering, load profiling, load modeling, machine learning, energy efficiency and quality.

4080 A Selective 3-Anchor DV-Hop Algorithm Based On the Nearest Anchor for Wireless Sensor Network

Authors: Hichem Sassi, Tawfik Najeh, Noureddine Liouane

Abstract:

Information about node locations is important for many applications in Wireless Sensor Networks. In hop-based range-free localization methods, anchors broadcast localization messages carrying a hop count to the whole network. Each node receives these messages, calculates its distance to each anchor in hops, and then approximates its own position. However, the estimated distances can introduce large errors and affect the localization precision. To solve this problem, this paper proposes an algorithm in which each unknown node takes the nearest anchor as a reference and selects the two other most accurate anchors to estimate its location. Experimental results illustrate that, compared to the DV-Hop algorithm, the proposed algorithm has a lower average localization error and is more effective.
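
For reference, the sketch below implements the basic DV-Hop estimate (average hop size times hop count, followed by linearized least-squares multilateration) that the proposed 3-anchor selection improves on; the anchor layout and hop counts are invented.

```python
import numpy as np

def dvhop_position(anchors, hops_to_anchors, anchor_hop_matrix):
    """Estimate an unknown node's 2-D position with the basic DV-Hop scheme.

    anchors           : (m, 2) anchor coordinates
    hops_to_anchors   : (m,) hop counts from the unknown node to each anchor
    anchor_hop_matrix : (m, m) hop counts between anchors
    All anchors are used here; the paper's selection of the nearest anchor
    plus the two most accurate anchors is not reproduced.
    """
    anchors = np.asarray(anchors, dtype=float)
    # Average hop size per anchor: total distance / total hops to the others.
    dists = np.linalg.norm(anchors[:, None, :] - anchors[None, :, :], axis=2)
    hop_size = dists.sum(axis=1) / anchor_hop_matrix.sum(axis=1)
    d = hop_size * hops_to_anchors            # estimated node-to-anchor ranges

    # Linearized least-squares multilateration (subtract the last equation).
    x_n, y_n, d_n = anchors[-1, 0], anchors[-1, 1], d[-1]
    a_mat = 2.0 * (anchors[-1] - anchors[:-1])
    b_vec = (d[:-1] ** 2 - d_n ** 2
             - anchors[:-1, 0] ** 2 - anchors[:-1, 1] ** 2
             + x_n ** 2 + y_n ** 2)
    pos, *_ = np.linalg.lstsq(a_mat, b_vec, rcond=None)
    return pos

anchors = [(0, 0), (100, 0), (0, 100), (100, 100)]
hop_matrix = np.array([[0, 4, 4, 6], [4, 0, 6, 4], [4, 6, 0, 4], [6, 4, 4, 0]])
print(dvhop_position(anchors, np.array([3, 3, 3, 3]), hop_matrix))  # ~ (50, 50)
```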

Keywords: Wireless sensor networks, localization problem, average localization error, DV-Hop algorithm, MATLAB.

4079 Development of Integrated GIS Interface for Characteristics of Regional Daily Flow

Authors: Ju Young Lee, Jung-Seok Yang, Jaeyoung Choi

Abstract:

This paper primarily aims to develop a GIS interface for estimating sequences of streamflows at ungauged stations based on known flows at gauged stations. The integrated GIS interface is composed of three major steps. In the first step, statistical analysis of precipitation characteristics is used to build a multiple linear regression equation for the long-term mean daily flow at ungauged stations; the independent variables in the regression equation are mean daily flow and drainage area. Traditionally, mean flow data are generated using the Thiessen polygon method. However, the method for obtaining mean flow data can be selected by the user, such as Kriging, Inverse Distance Weighted (IDW), or spline methods, as well as other traditional methods. In the second step, the flow duration curve (FDC) at an ungauged station is computed from the FDCs of the gauged stations, and the mean annual daily flow is computed by a spatial interpolation algorithm. The third step is to obtain watershed and topographic characteristics, which are the most important factors governing streamflows. Finally, the simulated daily flow time series are compared with the observed time series; the results from the integrated GIS interface closely match the observations, and the relationship between the topographic/watershed characteristics and the streamflow time series is highly correlated.
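
Among the spatial interpolation options listed above, Inverse Distance Weighting is simple enough to sketch. The routine below is a generic IDW estimator, not the interface's implementation; the station coordinates and flow values are invented, and Kriging or spline methods would replace it when selected by the user.

```python
import numpy as np

def idw_interpolate(known_xy, known_values, query_xy, power=2.0):
    """Inverse Distance Weighted (IDW) estimate at one query point, e.g. a
    long-term mean daily flow transferred to an ungauged location."""
    known_xy = np.asarray(known_xy, dtype=float)
    known_values = np.asarray(known_values, dtype=float)
    d = np.linalg.norm(known_xy - np.asarray(query_xy, dtype=float), axis=1)
    if np.any(d == 0):                       # query coincides with a station
        return known_values[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * known_values) / np.sum(w)

stations = [(0, 0), (10, 0), (0, 10)]        # gauged stations (km)
mean_flow = [12.0, 18.0, 9.0]                # long-term mean daily flow (m3/s)
print(idw_interpolate(stations, mean_flow, (4, 4)))
```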

Keywords: Integrated GIS interface, spatial interpolation algorithm, FDC.

4078 User Behavior Based Enhanced Protocol (UBEP) for Secure Near Field Communication

Authors: Vinay Gautam, Vivek Gautam

Abstract:

With the increase in unauthorized user access, security in Near Field Communication (NFC) needs to be strengthened. In this paper, we propose a user behavior based enhanced protocol, entitled User Behavior based Enhanced Protocol (UBEP), to increase the security of NFC-enabled devices. The UBEP works on the history of a user's interaction with the system. The proposed protocol considers four factors of user behavior (touch, time, distance, and angle) to determine the authenticity or authorization of users; these factors tend to remain consistent for a given user while interacting with the system. The UBEP uses a two-phase user verification system to authenticate a user. First, the acquisition phase acquires and stores the user's interaction with the NFC device, and this information is later used to detect the authenticity of the user. The second (recognition) phase analyzes the current and previous user-interaction scenarios and uses a digital signature verification system to finally authenticate the user. The analysis of user-based input makes an NFC transaction more advanced and secure. This security is very practical because it depends entirely on the usage of the device.

Keywords: Security, Near Field Communication, NFC protocol.

4077 An Improved Method to Compute Sparse Graphs for Traveling Salesman Problem

Authors: Y. Wang

Abstract:

The traveling salesman problem (TSP) is an NP-hard problem in combinatorial optimization. Research shows that TSP algorithms on sparse graphs have shorter computation times than those on the corresponding complete graphs. We present an improved iterative algorithm that computes sparse graphs for the TSP from frequency graphs computed with frequency quadrilaterals. The iterative algorithm is enhanced by adjusting two of its parameters. The computation time of the algorithm is O(C·Nmax·n²), where C is the number of iterations, Nmax is the maximum number of frequency quadrilaterals containing each edge, and n is the scale of the TSP. The experimental results show that the computed sparse graphs generally have fewer than 5n edges for most of the Euclidean instances. Moreover, the maximum and minimum vertex degrees in the sparse graphs do not differ much. Thus, the computation time of methods that solve the TSP on these sparse graphs will be greatly reduced.

Keywords: Frequency quadrilateral, iterative algorithm, sparse graph, traveling salesman problem.

4076 Fast Calculation for Particle Interactions in SPH Simulations: Outlined Sub-domain Technique

Authors: Buntara Sthenly Gan, Naohiro Kawada

Abstract:

A simple algorithm is presented for the fast calculation of the kernel functions required in fluid simulations using the Smoothed Particle Hydrodynamics (SPH) method. The proposed algorithm improves on the linked-list algorithm and adopts the pair-wise interaction technique, both of which are widely used for evaluating kernel functions in SPH fluid simulations. The algorithm is easy to implement without any programming complexities. Some benchmark examples are used to show the simulation time saved by using the proposed algorithm. Parametric studies on the number of sub-domain divisions, the smoothing length, and the total number of particles are conducted to show the effectiveness of the present technique. A compact formulation is proposed for practical usage.
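
For background, the sketch below shows the standard cell (linked-list) neighbor search that such techniques build on: particles are binned into cells of size equal to the smoothing length so that only adjacent cells need to be compared. The outlined sub-domain refinement of the paper is not reproduced, and the particle data are random.

```python
from collections import defaultdict
import numpy as np

def cell_list_pairs(positions, h):
    """Return all 2-D particle pairs closer than the smoothing length h,
    using a uniform grid of cell size h so only particles in the same or
    adjacent cells are compared."""
    cells = defaultdict(list)
    keys = np.floor(positions / h).astype(int)
    for idx, key in enumerate(map(tuple, keys)):
        cells[key].append(idx)

    pairs = []
    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
    for (cx, cy), members in cells.items():
        for dx, dy in offsets:
            for j in cells.get((cx + dx, cy + dy), []):
                for i in members:
                    if i < j and np.linalg.norm(positions[i] - positions[j]) < h:
                        pairs.append((i, j))
    return pairs

pts = np.random.default_rng(1).random((500, 2))
print(len(cell_list_pairs(pts, 0.05)))
```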

Keywords: Outlined sub-domain technique, fluid simulation, smoothed particle hydrodynamics (SPH), particle interaction.

4075 An ICA Algorithm for Separation of Convolutive Mixture of Speech Signals

Authors: Rajkishore Prasad, Hiroshi Saruwatari, Kiyohiro Shikano

Abstract:

This paper describes an Independent Component Analysis (ICA) based fixed-point algorithm for the blind separation of convolutive mixtures of speech picked up by a linear microphone array. The proposed algorithm extracts independent sources by non-Gaussianizing the Time-Frequency Series of Speech (TFSS) in a deflationary way. The degree of non-Gaussianization is measured by negentropy. The relative performance of the algorithm under random initialization and Null Beamformer (NBF) based initialization is studied. It has been found that an NBF-based initial value gives speedy convergence as well as better separation performance.

Keywords: Blind signal separation, independent component analysis, negentropy, convolutive mixture.

4074 Puff Noise Detection and Cancellation for Robust Speech Recognition

Authors: Sangjun Park, Jungpyo Hong, Byung-Ok Kang, Yun-keun Lee, Minsoo Hahn

Abstract:

In this paper, an algorithm for detecting and attenuating the puff noises frequently generated in mobile environments is proposed. As a baseline, the puff detection system is designed using a Gaussian Mixture Model (GMM), and 39th-order Mel Frequency Cepstral Coefficients (MFCCs) are extracted as feature parameters. To improve the detection performance, effective acoustic features for puff detection are proposed. In addition, detected puff intervals are attenuated by high-pass filtering. The speech recognition rate was measured for evaluation, and a confusion matrix and an ROC curve are used to confirm the validity of the proposed system.

Keywords: Gaussian mixture model, puff detection and cancellation, speech enhancement.

4073 On the Computation of a Common n-finger Robotic Grasp for a Set of Objects

Authors: Avishai Sintov, Roland Menassa, Amir Shapiro

Abstract:

Industrial robotic arms utilize multiple end-effectors, each for a specific part and a specific task. We propose a novel algorithm that defines a single end-effector configuration able to grasp a given set of objects with different geometries. The algorithm would be of great benefit in production lines, allowing a single robot to grasp various parts and hence reducing the number of end-effectors needed. Moreover, the algorithm reduces end-effector design and manufacturing time and final product cost. The algorithm searches for a common grasp over the set of objects. It maps all possible grasps for each object that satisfy a quality criterion and takes into account possible external wrenches (forces and torques) applied to the object. The mapped grasps are represented by high-dimensional feature vectors that describe the shape of the gripper. We generate a database of all possible grasps for each object in the feature space, and then use a search and classification algorithm to intersect all possible grasps over all parts and find a single common grasp suitable for all objects. We present simulations of planar and spatial objects to validate the feasibility of the approach.

Keywords: Common Grasping, Search Algorithm, Robotic End-Effector.

4072 Detection and Classification of Power Quality Disturbances Using S-Transform and Wavelet Algorithm

Authors: Mohamed E. Salem Abozaed

Abstract:

Detection and classification of power quality (PQ) disturbances is an important consideration for electrical utilities and many industrial customers, so that diagnosis and mitigation of such disturbances can be implemented quickly. The S-transform algorithm and the continuous wavelet transform (CWT) are time-frequency algorithms, and both are powerful in the detection and classification of PQ disturbances. This paper presents the detection and classification of PQ disturbances using the S-transform and CWT algorithms. The results show that the S-transform is more accurate in detecting and classifying most PQ disturbances than the CWT algorithm, whereas the CWT algorithm is more powerful in detecting some disturbances, such as notching.

Keywords: CWT, Disturbances classification, Disturbances detection, Power quality, S-transform.

4071 Noise Performance Optimization of a Fast Wavelength Calibration Algorithm for OSAs

Authors: Thomas Fuhrmann

Abstract:

A new fast correlation algorithm for calibrating the wavelength of Optical Spectrum Analyzers (OSAs) was introduced in [1]. The minima of acetylene gas spectra were measured and correlated with saved theoretical data [2], so it is possible to find the correct wavelength calibration data using a noisy reference spectrum. First tests showed good algorithmic performance for gas line spectra with high noise. In this article, extensive performance tests were made to validate the noise resistance of this algorithm. The filter and correlation parameters of the algorithm were optimized for improved noise performance. With these parameters, the performance of the wavelength calibration was simulated to predict the resulting wavelength error in real OSA systems. Long-term simulations were made to evaluate the performance of the algorithm over the lifetime of a real OSA.
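
A minimal sketch of the correlation idea, not the optimized algorithm of the paper: the measured gas spectrum is cross-correlated with a stored reference to find the wavelength offset. The spectra, sample spacing, and noise level are invented.

```python
import numpy as np

def calibration_offset(measured, reference, sample_spacing_nm):
    """Estimate the wavelength offset of an OSA by cross-correlating a
    measured spectrum with a stored reference spectrum sampled on the same
    grid.  The mean is removed so the line features drive the correlation."""
    m = measured - np.mean(measured)
    r = reference - np.mean(reference)
    corr = np.correlate(m, r, mode="full")
    lag = np.argmax(corr) - (len(r) - 1)      # lag in samples
    return lag * sample_spacing_nm

# Toy example: the "measured" spectrum is the reference shifted by 3 samples.
reference = np.exp(-0.5 * ((np.arange(200) - 100) / 3.0) ** 2)
measured = np.roll(reference, 3) + 0.05 * np.random.default_rng(2).standard_normal(200)
print(calibration_offset(measured, reference, sample_spacing_nm=0.002))
```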

Keywords: correlation, gas reference, optical spectrum analyzer, wavelength calibration

4070 Fault-Tolerant Optimal Broadcast Algorithm for the Hypercube Topology

Authors: Lokendra Singh Umrao, Ravi Shankar Singh

Abstract:

This paper presents an optimal broadcast algorithm for hypercube networks. The main focus of the paper is the effectiveness of the algorithm in the presence of many node faults. For the optimal solution, our algorithm builds a spanning tree connecting all nodes of the network, through which messages are propagated from the source node to the remaining nodes. At any given time, a maximum of n − 1 nodes may fail due to crashing. We show that hypercube networks are strongly fault-tolerant. Simulation results are analyzed to establish the algorithm's characteristics under many node faults. We have compared the simulation results of our proposed method with Fu's method: Fu's approach cannot tolerate n − 1 faulty nodes in the worst case, whereas our approach can.
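
To make the setting concrete, the sketch below runs a plain dimension-ordered (binomial-tree) broadcast in an n-cube. It is not the fault-tolerant algorithm of the paper; the example shows how an unhandled faulty node cuts off part of its subtree, which is exactly what a fault-tolerant scheme must repair.

```python
def hypercube_broadcast(source, dimension, faulty=frozenset()):
    """Dimension-ordered broadcast in an n-cube: in round k, every node that
    already holds the message forwards it across dimension k.  Faulty nodes
    neither receive nor forward."""
    reached = {source} if source not in faulty else set()
    schedule = []                                  # (round, sender, receiver)
    for k in range(dimension):
        new = set()
        for node in sorted(reached):
            partner = node ^ (1 << k)              # flip the k-th address bit
            if partner not in faulty and partner not in reached:
                schedule.append((k, node, partner))
                new.add(partner)
        reached |= new
    return reached, schedule

reached, schedule = hypercube_broadcast(source=0, dimension=4, faulty={5})
print(sorted(set(range(16)) - reached))            # nodes 5 and 13 never receive
```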

Keywords: Fault tolerance, hypercube, broadcasting, link/node faults, routing.

4069 Determination of Sequential Best Replies in N-player Games by Genetic Algorithms

Authors: Mattheos K. Protopapas, Elias B. Kosmatopoulos

Abstract:

An iterative algorithm is proposed and tested on Cournot game models; it is based on the convergence of sequential best responses and the use of a genetic algorithm for determining each player's best response to a given strategy profile of its opponents. An extra outer loop is used to address the problem of finite accuracy, which is inherent in genetic algorithms, since the set of feasible values in such an algorithm is finite. The algorithm is tested on five Cournot models, three of which have a convergent best-replies sequence, one with a divergent sequence of best replies, and one with "local NE traps" [14], where classical local search algorithms fail to identify the Nash equilibrium. After a series of simulations, we conclude that the proposed algorithm converges to the Nash equilibrium, with any level of accuracy needed, in all but the case where the sequential best-replies process diverges.
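
As a toy illustration of sequential best replies in a Cournot model, the sketch below replaces the paper's genetic-algorithm search with an exhaustive search over a finite strategy grid (which shares the finite-accuracy issue that the outer loop addresses). The demand and cost parameters are invented.

```python
def profit(q_i, q_others, a=100.0, b=1.0, c=10.0):
    """Cournot profit with linear inverse demand p = a - b*(total quantity)
    and constant marginal cost c.  Parameters are illustrative."""
    price = max(a - b * (q_i + q_others), 0.0)
    return (price - c) * q_i

def best_reply(q_others, grid):
    """Best response found by exhaustive search over a finite strategy grid,
    standing in for a genetic-algorithm search."""
    return max(grid, key=lambda q: profit(q, q_others))

def sequential_best_replies(n_players=3, sweeps=50, grid_size=2001):
    grid = [90.0 * k / (grid_size - 1) for k in range(grid_size)]
    q = [10.0] * n_players                      # arbitrary starting profile
    for _ in range(sweeps):
        for i in range(n_players):              # players revise in turn
            q[i] = best_reply(sum(q) - q[i], grid)
    return q

print(sequential_best_replies())   # converges near the Nash quantity (a-c)/(b*(n+1)) = 22.5
```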

Keywords: Best response, Cournot oligopoly, genetic algorithms, Nash equilibrium.

4068 Multiple Sensors and JPDA-IMM-UKF Algorithm for Tracking Multiple Maneuvering Targets

Authors: Wissem Saidani, Yacine Morsly, Mohand Saïd Djouadi

Abstract:

In this paper, we consider the problem of tracking multiple maneuvering targets using switching multiple-target motion models. With this paper, we aim to contribute to solving the problem of model-based body motion estimation using data coming from visual sensors. The Interacting Multiple Model (IMM) algorithm is specially designed to accurately track targets whose state and/or measurement models (assumed to be linear) change during motion transitions. However, when these models are nonlinear, the IMM algorithm must be modified in order to guarantee an accurate track. In this paper, we propose to avoid the Extended Kalman filter because of its limitations and to substitute it with the Unscented Kalman filter, which seems to be more efficient, especially according to the simulation results obtained with the nonlinear IMM algorithm (IMM-UKF). To resolve the data association problem, the JPDA approach is combined with the IMM-UKF algorithm; the derived algorithm is denoted JPDA-IMM-UKF.

Keywords: Estimation, Kalman filtering, Multi-Target Tracking, Visual servoing, data association.

4067 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technology, and specific procedures used in the digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. This research paper aims at developing a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent are dependent on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The proposed framework is implemented using the Java Agent Development Framework, Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5% of the time was lost, as the File Path Agent prescribed deleting 1,510, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.

Keywords: Artificial intelligence, computer science, criminal investigation, digital forensics.

4066 Use of Hierarchical Temporal Memory Algorithm in Heart Attack Detection

Authors: Tesnim Charrad, Kaouther Nouira, Ahmed Ferchichi

Abstract:

In order to reduce the number of deaths due to heart problems, we propose the use of the Hierarchical Temporal Memory (HTM) algorithm, a real-time anomaly detection algorithm. HTM is a cortical learning algorithm modeled on the neocortex and used for anomaly detection; in other words, it is based on a conceptual theory of how the human brain may work. It is powerful for predicting unusual patterns, anomaly detection, and classification. In this paper, HTM has been implemented and tested on ECG datasets in order to detect cardiac anomalies. Experiments showed good performance in terms of specificity, sensitivity, and execution time.

Keywords: HTM, Real time anomaly detection, ECG, Cardiac Anomalies.

4065 Objective Assessment of Psoriasis Lesion Thickness for PASI Scoring using 3D Digital Imaging

Authors: M.H. Ahmad Fadzil, Hurriyatul Fitriyah, Esa Prakasa, Hermawan Nugroho, S.H. Hussein, Azura Mohd. Affandi

Abstract:

Psoriasis is a chronic inflammatory skin condition which affects 2-3% of the population around the world. The Psoriasis Area and Severity Index (PASI) is the gold standard for assessing psoriasis severity as well as treatment efficacy. Although a gold standard, PASI is rarely used because it is tedious and complex. In practice, the PASI score is determined subjectively by dermatologists; therefore, inter- and intra-rater variations in assessment can occur even among expert dermatologists. This research develops an algorithm to assess psoriasis lesions for PASI scoring objectively. The focus of this research is thickness assessment, one of the four PASI parameters besides area, erythema, and scaliness. Psoriasis lesion thickness is measured by averaging the total elevation from the lesion base to the lesion surface. The thickness values of 122 3D images taken from 39 patients are grouped into the 4 PASI thickness scores using K-means clustering. Validation of the lesion base construction is performed using twelve body curvature models and shows good results, with a coefficient of determination (R²) equal to 1.
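
A hedged sketch of the final grouping step: mean lesion elevations are clustered into four groups with K-means, and the groups are mapped to thickness scores by ordering the cluster centres. The thickness values below are invented, and the 3D surface reconstruction and base fitting are not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative mean lesion elevations (mm) derived from 3D surface scans;
# real values come from averaging surface height above the fitted lesion base.
thickness_mm = np.array([0.1, 0.15, 0.3, 0.45, 0.5, 0.8, 0.9, 1.3,
                         1.6, 1.7, 2.1, 2.6]).reshape(-1, 1)

# Group thickness values into 4 clusters corresponding to PASI thickness
# scores 1-4 (a score of 0 is reserved for no elevation).
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(thickness_mm)

# Order clusters by centre so that a higher centre maps to a higher score.
order = np.argsort(km.cluster_centers_.ravel())
score_of_cluster = {cluster: score + 1 for score, cluster in enumerate(order)}
print([score_of_cluster[c] for c in km.labels_])
```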

Keywords: 3D digital imaging, base construction, PASI, psoriasis lesion thickness.

4064 Grid–SVC: An Improvement in SVC Algorithm, Based On Grid Based Clustering

Authors: Farhad Hadinejad, Hasan Saberi, Saeed Kazem

Abstract:

Support vector clustering (SVC) is an important kernel-based clustering algorithm with many applications. It has two main bottlenecks: the high computational cost and the labeling step. In this paper, we present a modified SVC method, named Grid-SVC, to improve the original algorithm computationally. First, we normalize the data and then partition the interval on which the SVC operates using a novel grid-based clustering algorithm. The algorithm partitions the intervals based on the density function of the data set and then forms multi-dimensional grids via the Cartesian product. Having eliminated many outliers and much of the noise in this preprocessing, we apply an improved SVC method to each grid cell in parallel. The experimental results show improvements in both time complexity and accuracy.

Keywords: Grid–based clustering, SVC, Density function, Radial basis function.

4063 A Deterministic Polynomial-time Algorithm for the Clique Problem and the Equality of P and NP Complexity Classes

Authors: Zohreh O. Akbari

Abstract:

In this paper, a deterministic polynomial-time algorithm is presented for the Clique problem. The case is considered as the problem of omitting the minimum number of vertices from the input graph so that none of the zeroes of the graph's adjacency matrix (except the main diagonal entries) would remain in the adjacency matrix of the resulting subgraph. The existence of a deterministic polynomial-time algorithm for the Clique problem, an NP-complete problem, would prove the equality of the P and NP complexity classes.

Keywords: Clique problem, Deterministic Polynomial-time Algorithm, Equality of P and NP Complexity Classes.
