Search results for: Artificial Bee Colony algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4150

1810 Extracting Terrain Points from Airborne Laser Scanning Data in Densely Forested Areas

Authors: Ziad Abdeldayem, Jakub Markiewicz, Kunal Kansara, Laura Edwards

Abstract:

Airborne Laser Scanning (ALS) is one of the main technologies for generating high-resolution digital terrain models (DTMs). DTMs are crucial to several applications, such as topographic mapping, flood zone delineation, geographic information systems (GIS), hydrological modelling, spatial analysis, etc. A laser scanning system generates an irregularly spaced three-dimensional cloud of points. Raw ALS data consist mainly of ground points (that represent the bare earth) and non-ground points (that represent buildings, trees, cars, etc.). Removing all the non-ground points from the raw data is referred to as filtering. Filtering heavily forested areas is considered a difficult and challenging task, as the canopy stops laser pulses from reaching the terrain surface. This research presents an approach for removing non-ground points from raw ALS data in densely forested areas. Smoothing splines are exploited to interpolate and fit the noisy ALS data. The presented filter utilizes a weight function to allocate weights to each point of the data. Furthermore, unlike most existing methods, the presented filtering algorithm is designed to be automatic. Three different forested areas in the United Kingdom are used to assess the performance of the algorithm. The results show that the DTMs generated from the filtered data are accurate (when compared against reference terrain data) and the performance of the method is stable for all the heavily forested data samples. The average root mean square error (RMSE) value is 0.35 m.

Keywords: Airborne laser scanning, digital terrain models, filtering, forested areas.
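
The filtering strategy described above (a weighted smoothing-spline fit that down-weights points sitting above the estimated surface) can be illustrated with a minimal one-dimensional sketch. The weight function, number of passes, and classification tolerance below are illustrative assumptions, not the authors' exact algorithm.

```python
# Minimal 1-D sketch of iterative weighted spline ground filtering.
# Assumptions (not from the paper): Gaussian-style down-weighting of points
# lying above the current surface estimate, fixed smoothing factor, 5 passes.
import numpy as np
from scipy.interpolate import UnivariateSpline

def filter_ground_profile(x, z, n_iter=5, scale=0.5):
    """x: along-track coordinate, z: height. Returns a boolean ground mask."""
    order = np.argsort(x)
    x, z = x[order], z[order]
    w = np.ones_like(z)                                   # start with equal weights
    for _ in range(n_iter):
        spline = UnivariateSpline(x, z, w=w, s=len(x))    # weighted spline fit
        residual = z - spline(x)                          # + above surface, - below
        # points above the fitted surface (likely canopy) get exponentially small
        # weight; points on or below it keep a weight close to 1
        w = np.where(residual > 0.0, np.exp(-(residual / scale) ** 2), 1.0)
    ground = np.abs(z - spline(x)) < 3.0 * scale          # final classification
    return order, ground

# toy profile: gently sloping terrain with "vegetation" spikes
x = np.linspace(0.0, 100.0, 400)
z = 0.05 * x + np.random.normal(0.0, 0.05, x.size)
z[::17] += np.random.uniform(3.0, 15.0, z[::17].size)    # canopy returns
_, ground_mask = filter_ground_profile(x, z)
print("ground points kept:", int(ground_mask.sum()), "of", x.size)
```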

1809 Artifacts in Spiral X-ray CT Scanners: Problems and Solutions

Authors: Mehran Yazdi, Luc Beaulieu

Abstract:

Artifacts are among the most important factors degrading CT image quality and play an important role in diagnostic accuracy. In this paper, some artifacts that typically appear in spiral CT are introduced. The different factors that cause these artifacts, such as the patient, the equipment and the interpolation algorithm, are discussed, and new developments and image processing algorithms to prevent or reduce them are presented.

Keywords: CT artifacts, Spiral CT, Artifact removal.

1808 Some Separations in Covering Approximation Spaces

Authors: Xun Ge, Jinjin Li, Ying Ge

Abstract:

Adopting Zakowski's upper and lower covering approximation operators, this paper investigates granularity-wise separations in covering approximation spaces. Some characterizations of granularity-wise separations are obtained by means of Pawlak rough sets, and some relations among granularity-wise separations are established, which makes it possible to study covering approximation spaces by logical and mathematical methods in computer science. The results of this paper give further applications of Pawlak rough set theory in pattern recognition and artificial intelligence.

Keywords: Rough set, covering approximation space, granularity-wise separation.
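
The abstract works with covering approximation operators. A minimal sketch of one standard pair of covering-based lower and upper approximations is given below; the paper's exact Zakowski operators may be defined slightly differently, so this is only an illustration of the general construction.

```python
# Minimal sketch of covering-based approximations on a finite universe.
# lower = union of covering blocks contained in X, upper = union of blocks
# meeting X. This is one standard pair, not necessarily the paper's operators.
def lower_approx(cover, X):
    inside = [K for K in cover if K <= X]
    return set().union(*inside) if inside else set()

def upper_approx(cover, X):
    meets = [K for K in cover if K & X]
    return set().union(*meets) if meets else set()

U = {1, 2, 3, 4, 5}
cover = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4, 5}), frozenset({5})]
X = {1, 2, 3}

print("lower:", sorted(lower_approx(cover, X)))   # {1, 2, 3}
print("upper:", sorted(upper_approx(cover, X)))   # {1, 2, 3, 4, 5}
```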

1807 Comparative Study Using Weka for Red Blood Cells Classification

Authors: Jameela Ali Alkrimi, Hamid A. Jalab, Loay E. George, Abdul Rahim Ahmad, Azizah Suliman, Karim Al-Jashamy

Abstract:

Red blood cells (RBCs) are the most common type of blood cell and are the most intensively studied in cell biology. A lack of RBCs is a condition in which the hemoglobin level is lower than normal and is referred to as “anemia”. Abnormalities in RBCs will affect the exchange of oxygen. This paper presents a comparative study of various techniques for classifying RBCs as normal or abnormal (anemic) using WEKA. WEKA is an open-source suite of different machine learning algorithms for data mining applications. The algorithms tested are the Radial Basis Function neural network, the Support Vector Machine, and the K-Nearest Neighbors algorithm. Two sets of combined features were utilized for the classification of blood cell images. The first set, consisting exclusively of geometrical features, was used to identify whether the tested blood cell is spherical or non-spherical, while the second set, consisting mainly of textural features, was used to recognize the types of the spherical cells. We provide an evaluation based on applying these classification methods to our RBC image dataset, obtained from Serdang Hospital, Malaysia, and measuring the accuracy of the test results. The best classification rates achieved are 97%, 98%, and 79% for the Support Vector Machine, the Radial Basis Function neural network, and the K-Nearest Neighbors algorithm, respectively.

Keywords: K-Nearest Neighbors, Neural Network, Radial Basis Function, Red blood cells, Support vector machine.

1806 A Combined Neural Network Approach to Soccer Player Prediction

Authors: Wenbin Zhang, Hantian Wu, Jian Tang

Abstract:

An artificial neural network is a mathematical model inspired by biological neural networks. There are several kinds of neural networks, and they are widely used in many areas, such as prediction, detection, and classification. Meanwhile, in day-to-day life, people always have to make many difficult decisions. For example, the coach of a soccer club has to decide which offensive player should be selected to play in a certain game. This work describes a novel neural network using a combination of the General Regression Neural Network and the Probabilistic Neural Network to help a soccer coach make an informed decision.

Keywords: General Regression Neural Network, Probabilistic Neural Networks, Neural function.
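
Both building blocks named in the abstract are Parzen-window models, so they can be sketched compactly in NumPy. The way the two outputs are combined below (the PNN picks a category, the GRNN scores the candidate) is only an illustrative assumption, since the abstract does not spell out the combination.

```python
# Minimal NumPy sketch of a GRNN (kernel-weighted regression) and a PNN
# (kernel density classifier). The combination shown is an assumption.
import numpy as np

def rbf_kernel(X, x, sigma):
    d2 = np.sum((X - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def grnn_predict(X_train, y_train, x, sigma=1.0):
    w = rbf_kernel(X_train, x, sigma)
    return float(np.dot(w, y_train) / (np.sum(w) + 1e-12))

def pnn_predict(X_train, labels, x, sigma=1.0):
    classes = np.unique(labels)
    scores = [rbf_kernel(X_train[labels == c], x, sigma).sum() for c in classes]
    return classes[int(np.argmax(scores))]

# toy data: 2 features per player, a suitability class and a rating
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
labels = (X[:, 0] > 0).astype(int)                  # e.g. 1 = "suited to start"
ratings = 50 + 10 * X[:, 0] + rng.normal(scale=2, size=30)

candidate = np.array([0.8, -0.2])
chosen_class = pnn_predict(X, labels, candidate)
expected_rating = grnn_predict(X, ratings, candidate)
print("PNN class:", chosen_class, " GRNN expected rating: %.1f" % expected_rating)
```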

1805 An Artificial Neural Network Model Based Study of Seismic Wave

Authors: Hemant Kumar, Nilendu Das

Abstract:

A study based on an ANN structure gives us information to predict the size of a future event from past events. ANN, IMD (Indian Meteorological Department) data and remote sensing were used to derive a number of parameters for calculating the size of an event that may occur in the future. A threshold was selected specifically above the high-frequency level reached in the area during the selected seismic activity. In the field of human and local biodiversity, it remains to obtain the right parameters compared to the frequency of impact. During the study, the assumption is that predicting seismic activity is a difficult process, though not because of the parameters involved here, which can be analyzed and funded in research activity.

Keywords: ANN, Bayesian class, earthquakes, IMD.

1804 Maize Tolerance to Natural and Artificial Infestation with Diabrotica virgifera virgifera Eggs

Authors: Snežana T. Tanasković, Sonja M. Gvozdenac, Branka D. Popović, Vesna M. Đurović, Matthias Erb

Abstract:

Western corn rootworm (WCR; Diabrotica virgifera ssp. virgifera, Coleoptera, Chrysomelidae) is economically the most important pest of maize worldwide. The natural WCR population is already very abundant in Serbian fields and keeps increasing each year. Tolerance is recognized by larger root size and greater root regrowth. Severe larval injuries cause a lack of compensatory regrowth and lead to a reduction of plant growth and yield. The aim of this research was to evaluate the tolerance of the commercial Serbian maize hybrid NS 640 under natural WCR infestation and under conditions of artificial infestation, and to obtain information about its tolerance to WCR larval feeding in two consecutive years. Field experiments were conducted in 2015 and 2016 in Bečej (Vojvodina province, Serbia). In the experimental field, 96 plants were selected, marked and arranged in 48 pairs. Each pair consisted of two plants. The first plant was artificially infested with 4 mL of WCR egg suspension in agar (550 eggs plant-1) in the root zone (D plant). The second plant served as the control plant (C plant), with an injection of 4 mL of distilled water in the root zone. The experimental field was inspected weekly. Hybrid tolerance was assessed based on root injury level and root mass. Root injury was rated using the 1-6 node-injury scale during the last field inspection (September-October). Comparing the root injuries on D and C plants in 2015, more severe damage was recorded on D plants (12 plants - rate 5 and 17 plants - rate 6) compared to C plants (2 plants - rate 5 and 8 plants - rate 6). Also, the highest number of plants with healthy roots (rate 1) was registered in the control (25 plants), while only 4 D plants were rated as injury level 1. In 2016, root injuries caused by WCR larvae on D and C plants did not differ significantly. The reason is the difference in climatic conditions between the years: 2015 was extremely dry and more suitable for WCR larval development and movement in the soil compared to 2016. Thus, more severe damage appeared on the artificially infested plants (D plants). Root mass was strongly correlated with the level of root injury, but did not differ significantly between D and C plants in either year.

Keywords: D. v. virgifera, maize, root injury, tolerance.

1803 Decomposition Method for Neural Multiclass Classification Problem

Authors: H. El Ayech, A. Trabelsi

Abstract:

In this article, we discuss improving multiclass classification using the multilayer perceptron. The considered approach consists in breaking down the n-class problem into two-class subproblems. The training of each two-class subproblem is carried out independently; in the test phase, the vector to be classified is confronted with all the two-class models, and the elected class is the strongest one that does not lose any contest against the other classes. The recognition rates obtained with the multiclass approach by two-class decomposition are clearly better than those obtained with the simple multiclass approach.

Keywords: Artificial neural network, letter recognition, multiclass classification, multilayer perceptron.
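
A minimal sketch of the described decomposition: train one two-class perceptron per pair of classes, then elect the class that wins the most pairwise contests. scikit-learn's MLPClassifier stands in for the authors' multilayer perceptron, and the exact "loses no competition" rule is simplified here to a max-wins vote.

```python
# Minimal sketch of n-class decomposition into independently trained
# two-class MLPs with a pairwise-voting test phase (illustrative, not the
# authors' exact setup).
from itertools import combinations
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classes = np.unique(y_tr)
pairwise = {}
for a, b in combinations(classes, 2):
    mask = np.isin(y_tr, [a, b])
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X_tr[mask], y_tr[mask])            # independent two-class training
    pairwise[(a, b)] = clf

def predict_one(x):
    wins = {c: 0 for c in classes}
    for clf in pairwise.values():
        wins[clf.predict(x.reshape(1, -1))[0]] += 1
    return max(wins, key=wins.get)             # "elected" class with most wins

pred = np.array([predict_one(x) for x in X_te[:200]])
print("pairwise-decomposition accuracy: %.3f" % (pred == y_te[:200]).mean())
```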

1802 Forecasting of Grape Juice Flavor by Using Support Vector Regression

Authors: Ren-Jieh Kuo, Chun-Shou Huang

Abstract:

Research on juice flavor forecasting has become more important in China. Due to the fast economic growth in China, many different kinds of juice have been introduced to the market. If a beverage company understands its customers' preferences well, its juice can be made more attractive. Thus, this study introduces the basic theory and computing process of grape juice flavor forecasting based on support vector regression (SVR). Applying SVR, BPN, and LR to forecast the flavor of grape juice on real data shows that SVR is more suitable and effective in prediction performance.

Keywords: Flavor forecasting, artificial neural networks, support vector regression, grape juice flavor.

1801 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a new technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUIDs) and has considerable advantages over electrocardiography (ECG). It is difficult to extract the magnetocardiography (MCG) signal, which is buried in noise; this is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is proposed to denoise the MCG signal. The approach transforms the denoising problem into a minimization optimization problem, and the majorization-minimization algorithm is applied to iteratively solve it. However, the traditional TV regularization method tends to cause a step (staircase) effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising the MCG signal is proposed to improve the denoising precision. The improvement consists of three main parts. First, high-order TV is applied to reduce the step effect, and the corresponding second-derivative matrix is used in place of the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined based on the peak positions detected by the detection window. Finally, adaptive constraint parameters are defined to eliminate noise and preserve signal peak characteristics. Theoretical analysis and experimental results show that this algorithm can effectively improve the output signal-to-noise ratio and has superior performance.

Keywords: Constraint parameters, derivative matrix, magnetocardiography, regular term, total variation.
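
The core machinery named in the abstract, TV denoising solved by majorization-minimization with a second-order difference operator, can be sketched in 1-D as follows. The paper's peak-detection window and adaptive constraint parameters are not reproduced; a fixed regularization weight is assumed.

```python
# Minimal 1-D sketch of TV denoising via majorization-minimization (MM)
# with a second-order difference operator. Fixed lambda; the paper's
# peak-adaptive constraint parameters are omitted.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def second_diff_matrix(n):
    # (n-2) x n operator: (Dx)_i = x_i - 2 x_{i+1} + x_{i+2}
    e = np.ones(n - 2)
    return sparse.diags([e, -2 * e, e], offsets=[0, 1, 2], shape=(n - 2, n), format="csc")

def tv2_denoise_mm(y, lam=5.0, n_iter=30, eps=1e-8):
    n = len(y)
    D = second_diff_matrix(n)
    I = sparse.identity(n, format="csc")
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)              # MM weights from current estimate
        A = I + lam * (D.T @ sparse.diags(w) @ D)    # majorizer's normal equations
        x = spsolve(sparse.csc_matrix(A), y)         # solve (I + lam D^T W D) x = y
    return x

# toy "MCG-like" signal: smooth baseline plus a sharp peak, buried in noise
t = np.linspace(0, 1, 500)
clean = 0.3 * np.sin(2 * np.pi * 3 * t) + np.exp(-((t - 0.5) / 0.01) ** 2)
noisy = clean + np.random.normal(scale=0.1, size=t.size)
denoised = tv2_denoise_mm(noisy)
print("RMSE noisy: %.4f  denoised: %.4f"
      % (np.sqrt(np.mean((noisy - clean) ** 2)),
         np.sqrt(np.mean((denoised - clean) ** 2))))
```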

1800 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and separating several signal sources. In this scenario, the antenna array output model involves numerous parameters, including noise samples, signal waveform, signal directions, signal number, and signal-to-noise ratio (SNR), and thereby the methods of DoA estimation rely heavily on the generalization characteristic of establishing a large number of training data sets. Hence, we compare two different optimization models for DoA estimation: (1) the implementation of the decision directed acyclic graph (DDAG) for the multiclass least-squares support vector machine (LS-SVM), and (2) the optimization method of the deep neural network (DNN) radial basis function (RBF). We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of DoA estimation are still highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, and thereby the method may fail to deliver high precision for DoA estimation. Therefore, this work makes a further contribution by developing the DNN-RBF model for DoA estimation to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model confirm the better performance of DoA estimation compared with the LS-SVM algorithm. Consequently, we evaluate the performance of the two aforementioned optimization methods for DoA estimation using the mean squared error (MSE).

Keywords: DoA estimation, adaptive antenna array, Deep Neural Network, LS-SVM optimization model, radial basis function, MSE.

1799 Analysis of Histogram Asymmetry for Waste Recognition

Authors: Janusz Bobulski, Kamila Pasternak

Abstract:

Despite many years of effort and research, the problem of waste management is still current. There is a lack of fast and effective algorithms for classifying individual waste fractions. Many programs and projects improve statistics on the percentage of waste recycled every year. In these efforts, it is worth using modern computer vision techniques supported by artificial intelligence. In this article, we present a method of identifying plastic waste based on the asymmetry analysis of the histogram of the image containing the waste. The method is simple but effective (94% accuracy), which allows it to be implemented on devices with low computing power, in particular on microcomputers. Such devices will be used both at home and in waste sorting plants.

Keywords: Computer vision, environmental protection, image processing, waste management.
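
The core measurement, histogram asymmetry, reduces to computing the skewness of the grey-level histogram. A minimal sketch is given below; the decision threshold is an illustrative placeholder, not the value used in the paper.

```python
# Minimal sketch of histogram-asymmetry (skewness) measurement for a waste image.
# The "plastic" decision threshold is a hypothetical placeholder.
import numpy as np
from PIL import Image

def histogram_skewness(image_path):
    gray = np.asarray(Image.open(image_path).convert("L"), dtype=np.float64).ravel()
    hist, bin_edges = np.histogram(gray, bins=256, range=(0, 256), density=True)
    centers = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    mean = np.sum(centers * hist)                       # bin width is 1, so hist sums to 1
    std = np.sqrt(np.sum(((centers - mean) ** 2) * hist))
    return np.sum(((centers - mean) ** 3) * hist) / (std ** 3 + 1e-12)

def looks_like_plastic(image_path, threshold=0.5):      # hypothetical threshold
    return abs(histogram_skewness(image_path)) > threshold

# usage: print(looks_like_plastic("waste_sample.jpg"))
```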

1798 Resource Allocation and Task Scheduling with Skill Level and Time Bound Constraints

Authors: Salam Saudagar, Ankit Kamboj, Niraj Mohan, Satgounda Patil, Nilesh Powar

Abstract:

Task assignment and scheduling is a challenging operations research problem when there is a limited number of resources and a comparatively higher number of tasks. The Cost Management team at Cummins needs to assign tasks based on a deadline and must prioritize some of the tasks as per business requirements. Moreover, there is a constraint on the resources: tasks should be assigned based on individual skill levels, which may vary for different tasks. Another constraint for scheduling is that the tasks should be evenly distributed in terms of the number of working hours, which adds further complexity to this problem. The proposed greedy approach to solving the assignment and scheduling problem first assigns tasks based on management priority and then by the closest deadline. This is followed by an iterative selection of an available resource with the least allocated total working hours for a task, i.e., finding the locally optimal choice for each task with the goal of determining the global optimum. The greedy task allocation is compared with a variant of the Hungarian algorithm, and it is observed that the proposed approach gives an equal allocation of working hours among the resources. A comparative study of the proposed approach is also done with manual task allocation, and it is noted that the visibility of the task timeline has increased from 2 months to 6 months. An interactive dashboard app was created for the greedy assignment and scheduling approach, and the tasks with more than a 2-month horizon that were initially waiting in a queue without a delivery date are now analyzed effectively by the business, with expected timelines for completion.

Keywords: Assignment, deadline, greedy approach, Hungarian algorithm, operations research, scheduling.
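
A minimal sketch of the greedy policy as described: order tasks by management priority, then by closest deadline, and for each task pick a qualified resource with the lowest currently allocated hours. Field names and the "skill level at least equal to the required level" rule are illustrative assumptions.

```python
# Minimal sketch of the greedy task assignment/scheduling policy described above.
# Field names and the skill-qualification rule are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    priority: int        # lower number = higher management priority
    deadline: int        # e.g. day number
    hours: float
    required_skill: int

@dataclass
class Resource:
    name: str
    skill: int
    allocated_hours: float = 0.0
    tasks: list = field(default_factory=list)

def greedy_assign(tasks, resources):
    unassigned = []
    # 1) management priority first, 2) closest deadline next
    for task in sorted(tasks, key=lambda t: (t.priority, t.deadline)):
        qualified = [r for r in resources if r.skill >= task.required_skill]
        if not qualified:
            unassigned.append(task)
            continue
        # 3) locally optimal choice: least loaded qualified resource
        chosen = min(qualified, key=lambda r: r.allocated_hours)
        chosen.tasks.append(task.name)
        chosen.allocated_hours += task.hours
    return unassigned

tasks = [Task("cost-review", 1, 10, 16, 2), Task("forecast", 2, 5, 8, 1),
         Task("audit", 1, 7, 24, 3), Task("report", 3, 30, 4, 1)]
resources = [Resource("A", 3), Resource("B", 2), Resource("C", 1)]
left_over = greedy_assign(tasks, resources)
for r in resources:
    print(r.name, r.allocated_hours, r.tasks)
print("unassigned:", [t.name for t in left_over])
```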

1797 Towards a Computational Model of Consciousness: Global Abstraction Workspace

Authors: Halim Djerroud, Arab Ali Cherif

Abstract:

We assume that conscious functions are implemented automatically; in other words, consciousness, as well as the non-conscious aspects of human thought, planning and perception, is produced by biologically adaptive algorithms. We propose that the mechanisms of consciousness can be produced using adaptive algorithms similar to those executed by these mechanisms. In this paper, we present a computational model of consciousness, the "Global Abstraction Workspace", which is an internal environmental model conceived as a multi-agent system. This system is able to evolve and generate new data and processes, as well as actions in the environment.

Keywords: Artificial consciousness, cognitive architecture, global abstraction workspace, multi-agent system.

1796 Very Large Scale Integration Architecture of Finite Impulse Response Filter Implementation Using Retiming Technique

Authors: S. Jalaja, A. M. Vijaya Prakash

Abstract:

A recursive combination of an algorithm based on Karatsuba multiplication is exploited to design a generalized transpose and parallel Finite Impulse Response (FIR) filter. Mid-range Karatsuba multiplication and a carry-save adder based on Karatsuba multiplication reduce the time complexity of higher-order multiplication implemented up to n bits. As a result, we design a modified N-tap transpose and parallel symmetric FIR filter structure using the Karatsuba algorithm. The mathematical formulation of the FFA filter is derived. The proposed architecture involves a significantly lower area-delay product (ADP) than the existing block implementation. By adopting the retiming technique, the hardware cost is reduced further. The filter architecture is designed using a 90 nm technology library and is implemented using the Cadence EDA tool. The synthesized result shows better performance for different word lengths and block sizes. The design achieves switching activity reduction and low power consumption, with and without retiming, for different combinations of the circuit. The proposed structure achieves more than half the power reduction, with and without retiming, compared to the earlier design structure. As a proof of concept, for block size 16 and filter length 64, the CKA method achieves 51% and 70% less power by applying the retiming technique, and the CSA method achieves 57% and 77% less power by applying the retiming technique, compared to the previously proposed design.

Keywords: Carry save adder Karatsuba multiplication, mid-range Karatsuba multiplication, modified FFA, transposed filter, retiming.
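
The recursive Karatsuba combination the filter structure is built around reduces one n-bit multiplication to three roughly half-size multiplications. A software sketch of the plain recursion (not the mid-range or carry-save hardware variants of the paper) is:

```python
# Minimal software sketch of recursive Karatsuba multiplication
# (the plain algorithm, not the paper's mid-range/carry-save hardware mapping).
def karatsuba(x, y):
    if x < 16 or y < 16:                    # small operands: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    mask = (1 << half) - 1
    x_hi, x_lo = x >> half, x & mask
    y_hi, y_lo = y >> half, y & mask
    a = karatsuba(x_hi, y_hi)               # high parts
    b = karatsuba(x_lo, y_lo)               # low parts
    c = karatsuba(x_hi + x_lo, y_hi + y_lo) - a - b   # cross terms via one multiply
    return (a << (2 * half)) + (c << half) + b

assert karatsuba(123456789, 987654321) == 123456789 * 987654321
print(karatsuba(0xDEADBEEF, 0xCAFEBABE))
```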

1795 Variational Iteration Method for the Solution of Boundary Value Problems

Authors: Olayiwola M. O., Gbolagade A. W., Akinpelu F. O.

Abstract:

In this work, we present a reliable framework to solve boundary value problems with particular significance in solid mechanics. These problems are used as mathematical models of the deformation of beams. The algorithm rests mainly on a relatively new technique, the Variational Iteration Method. Some examples are given to confirm the efficiency and the accuracy of the method.

Keywords: Variational iteration method, boundary value problems, convergence, restricted variation.
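
For a general equation L u + N u = g(x), the variational iteration method builds successive approximations through the standard correction functional (the general form of the method, not an equation reproduced from this paper):

u_{n+1}(x) = u_n(x) + \int_0^{x} \lambda(s)\,\bigl[ L u_n(s) + N \tilde{u}_n(s) - g(s) \bigr]\, ds,

where \lambda is a general Lagrange multiplier identified via variational theory and \tilde{u}_n denotes a restricted variation, i.e. \delta \tilde{u}_n = 0.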

1794 Manipulator Development for Telediagnostics

Authors: Adam Kurnicki, Bartłomiej Stanczyk, Bartosz Kania

Abstract:

This paper presents the development of a lightweight manipulator with series elastic actuation for medical telediagnostics (USG examination). The general structure of the implemented impedance control algorithm is shown, and it is described how to perform force measurements based mainly on the elasticity of the manipulator links.

Keywords: Telediagnostics, elastic manipulator, impedance control, force measurement.
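
Two standard relations lie behind the ingredients named in the abstract and can be written out; these are the textbook forms and are assumptions in the sense that the paper's exact gains and structure are not given here. The target impedance imposed on the end-effector and the series-elastic force estimate from spring deflection are

M_d(\ddot{x} - \ddot{x}_d) + D_d(\dot{x} - \dot{x}_d) + K_d(x - x_d) = F_{ext}, \qquad \tau_{ext} \approx k_s\,(\theta_m - q),

where M_d, D_d, K_d are the desired inertia, damping and stiffness, F_{ext} the external (contact) force, and k_s the stiffness of the elastic element between motor position \theta_m and link position q.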

1793 Sociological Impact on Education: An Analytical Approach through Artificial Neural Network

Authors: P. R. Jayathilaka, K.L. Jayaratne, H.L. Premaratne

Abstract:

The research presented in this paper is an ongoing project applying neural network and fuzzy models to evaluate the sociological factors which affect the educational performance of students in Sri Lanka. One of its major goals is to prepare the grounds for devising a counseling tool which helps these students perform better in their examinations, especially the G.C.E. O/L (General Certificate of Education - Ordinary Level) examination. Closely related sociological factors are collected as raw data, the noise in these data is filtered through the fuzzy interface, and a supervised neural network is utilized to recognize the performance patterns against the chosen social factors.

Keywords: Education, Fuzzy, neural network, prediction, Sociology

1792 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms

Authors: J. Prakash, K. Rajesh

Abstract:

In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme which consists of eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. In this approach we use the fact that the large and small eigenvalues of covariance matrices are associated with the major and minor axial lengths of the ellipse. The centre location of the ellipse can be identified using the circular Hough transform (CHT). The sparse matrix technique is used to perform the CHT. Since sparse matrices squeeze out zero elements and contain a small number of nonzero elements, they provide an advantage in matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which uses the geometrical symmetry property. This method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature. The new method has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, along with comparisons with the Hough transform, its variants and other tangent-based methods, are reported.

Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.
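
The first step of the scheme, recovering the axial lengths from the eigenvalues of the covariance matrix of boundary points, can be sketched as follows. The factor of sqrt(2) assumes the points are spread uniformly in the angular parameter of the ellipse, which is an idealization rather than the paper's exact derivation.

```python
# Minimal sketch: estimate ellipse semi-axes and orientation from the
# eigenvalues/eigenvectors of the covariance matrix of boundary points.
# The sqrt(2) factor assumes points uniformly distributed in the angle parameter.
import numpy as np

def ellipse_axes_from_points(points):
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    semi_minor = np.sqrt(2.0 * eigvals[0])
    semi_major = np.sqrt(2.0 * eigvals[1])
    angle = np.arctan2(eigvecs[1, 1], eigvecs[0, 1])    # direction of the major axis
    return semi_major, semi_minor, angle

# synthetic test: ellipse with a=5, b=2, rotated by 30 degrees
theta = np.random.uniform(0, 2 * np.pi, 2000)
a, b, phi = 5.0, 2.0, np.radians(30)
x = a * np.cos(theta); y = b * np.sin(theta)
pts = np.column_stack([x * np.cos(phi) - y * np.sin(phi),
                       x * np.sin(phi) + y * np.cos(phi)])
print(ellipse_axes_from_points(pts))   # close to (5.0, 2.0, ~0.52 rad)
```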

1791 Half-Circle Fuzzy Number Threshold Determination via Swarm Intelligence Method

Authors: P.-W. Tsai, J.-W. Chen, C.-W. Chen, C.-Y. Chen

Abstract:

In recent years, many researchers have been involved in the field of fuzzy theory. However, there are still a lot of issues to be resolved, especially on topics related to controller design in fields such as robotics, artificial intelligence, and nonlinear systems. Besides fuzzy theory, swarm intelligence algorithms are also a popular field of research. In this paper, a concept of utilizing one of the swarm intelligence methods, called Bacterial-GA Foraging, to find the stabilized common P matrix for the fuzzy controller system is proposed. An example is given in the paper as well.

Keywords: Half-circle fuzzy numbers, predictions, swarm intelligence, Lyapunov method.

1790 Dynamic Programming Based Algorithm for the Unit Commitment of the Transmission-Constrained Multi-Site Combined Heat and Power System

Authors: A. Rong, P. B. Luh, R. Lahdelma

Abstract:

High penetration of intermittent renewable energy sources (RES) such as solar power and wind power into the energy system has caused temporal and spatial imbalances between electric power supply and demand in some countries and regions. This brings about the critical need for coordinating power production and power exchange between different regions. Compared with power-only systems, combined heat and power (CHP) systems can provide additional flexibility for utilizing RES by exploiting the interdependence of power and heat production in the CHP plant. In the CHP system, power production can be influenced by adjusting the heat production level, and electric power can be used to satisfy heat demand by an electric boiler or heat pump in conjunction with heat storage, which is much cheaper than electric storage. This paper addresses multi-site CHP systems without considering RES, which lays the foundation for handling the penetration of RES. The problem under study is the unit commitment (UC) of transmission-constrained multi-site CHP systems. We solve the problem by combining linear relaxation of ON/OFF states and sequential dynamic programming (DP) techniques, where relaxed states are used to reduce the dimension of the UC problem and DP is used to improve the solution quality. Numerical results for daily scheduling with realistic models and data show that the DP-based algorithm is from a few to a few hundred times faster than CPLEX (standard commercial optimization software) with good solution accuracy (less than 1% relative gap from the optimal solution on average).

Keywords: Dynamic programming, multi-site combined heat and power system, relaxed states, transmission-constrained generation unit commitment.
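
The overall DP idea (enumerate ON/OFF states stage by stage and keep the cheapest path) can be illustrated on a deliberately tiny single-unit, power-only example; the multi-site CHP coupling, heat balance, transmission limits, and linear relaxation of the paper are all left out of this sketch.

```python
# Deliberately tiny DP sketch: commit one generating unit over T hours so that
# generation plus purchased power covers demand at minimum fuel + start-up cost.
# The multi-site CHP coupling, heat balance, transmission constraints and the
# paper's linear relaxation are omitted; only the stage-by-stage DP recursion is shown.
INF = float("inf")
demand   = [0, 40, 80, 60, 0, 0]      # MW per hour
capacity = 100                        # MW when ON
fuel     = 20                         # cost per MWh produced
startup  = 500                        # cost of an OFF -> ON transition
buy      = 120                        # cost per MWh of unserved demand (imported)

def stage_cost(state, d):
    if state == 1:
        return fuel * min(d, capacity) + buy * max(0, d - capacity)
    return buy * d                    # unit OFF: all demand bought elsewhere

# best[s] = cheapest cost so far with the unit in state s (0 = OFF, 1 = ON)
best = {0: 0.0, 1: INF}               # unit assumed OFF before the first hour
for d in demand:
    new_best = {}
    for s in (0, 1):
        candidates = []
        for prev, cost in best.items():
            trans = startup if (prev == 0 and s == 1) else 0
            candidates.append(cost + trans + stage_cost(s, d))
        new_best[s] = min(candidates)
    best = new_best
print("minimum total cost:", min(best.values()))
```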

1789 Scheduling Method for Electric Heater in HEMS Considering User’s Comfort

Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim

Abstract:

The Home Energy Management System (HEMS), which enables residential consumers to contribute to demand response, has been attracting attention in recent years. An aim of HEMS is to minimize their electricity cost by controlling the use of their appliances according to the electricity price. The use of appliances in HEMS may be affected by conditions such as the external temperature and the electricity price. Therefore, the user's usage pattern of appliances should be modeled according to the external conditions, and the resultant usage pattern is related to the user's comfort in using each appliance. This paper proposes a methodology to model the usage pattern based on historical data with the copula function. Through the copula function, the usage range of each appliance can be obtained so as to satisfy the appropriate user comfort according to the external conditions for the next day. Within this usage range, optimal scheduling of the appliances is conducted so as to minimize the electricity cost while considering user comfort. Among home appliances, the electric heater (EH) is a representative appliance that is affected by the external temperature. In this paper, an optimal scheduling algorithm for an electric heater (EH) is addressed based on the branch-and-bound method. As a result, scenarios for EH usage are obtained according to the user's comfort levels, and the residential consumer then selects the best scenario. The case study shows the effects of the proposed algorithm compared with the traditional operation of the EH, and it shows the impact of the comfort level on the scheduling result.

Keywords: Load scheduling, usage pattern, user's comfort, copula function, branch and bound, electric heater.

1788 Multicriteria Synthesis of a Polycentric Knee Prosthesis For Transfemoral Amputees

Authors: Oleksandr Poliakov, Olena Chepenyuk, Yevgen Pashkov, Mykhaylo Kalinin, Vadym Kramar

Abstract:

In one of the prosthesis designs for lower-limb transfemoral amputations, artificial knee joints with polycentric mechanisms are used. Such prostheses are characterized by high stability during the stance phase of the movement. The existing variety of polycentric mechanisms indicates the possibility of finding an optimal prosthesis design satisfying several quality criteria. In this paper we present a multicriteria method for the synthesis of the artificial polycentric knee mechanism based on a uniform systematic study of the design parameter space and on the analysis of Pareto-optimal solutions.

Keywords: Optimal criteria, polycentric knee, prosthesis, synthesis, transfemoral amputee.
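
The second ingredient of the synthesis procedure, keeping only Pareto-optimal candidates from a uniformly sampled design-parameter space, is generic enough to sketch. The two criteria below are placeholders (both minimized), not the paper's actual objectives.

```python
# Minimal sketch of the Pareto-filtering step over a uniformly sampled
# design-parameter space. The two criteria are illustrative placeholders.
import numpy as np

def pareto_mask(costs):
    """costs: (n_designs, n_criteria) array, all criteria to be minimized."""
    n = costs.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # points that are at least as good everywhere and strictly better somewhere
        dominators = np.all(costs <= costs[i], axis=1) & np.any(costs < costs[i], axis=1)
        if dominators.any():
            keep[i] = False
    return keep

rng = np.random.default_rng(1)
designs = rng.uniform(0, 1, size=(200, 4))              # 4 design parameters
criterion_1 = designs[:, 0] ** 2 + designs[:, 1]        # e.g. instability measure
criterion_2 = (1 - designs[:, 0]) ** 2 + designs[:, 2]  # e.g. mass proxy
costs = np.column_stack([criterion_1, criterion_2])
front = pareto_mask(costs)
print("Pareto-optimal designs:", int(front.sum()), "of", len(designs))
```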

1787 Cellular Automata Based Robust Watermarking Architecture towards the VLSI Realization

Authors: V. H. Mankar, T. S. Das, S. K. Sarkar

Abstract:

In this paper, we propose a novel blind watermarking architecture oriented towards its hardware implementation in VLSI. In order to facilitate this hardware realization, the cellular automata (CA) concept is introduced. CA have already been accepted as an attractive structure for VLSI implementation because of their modularity, parallelism, high performance and reliability. Hardware-realizable multiresolution spread spectrum watermarking techniques are very few in number in spite of their excellent resiliency against signal impairments. This is because of the computational cost and complexity associated with their different filter banks and lifting techniques. The concept of cellular automata theory is incorporated in order to form a new transform-domain technique, i.e., the Cellular Automata Transform (CAT). Since CA provide spreading sequences having very low cross-correlation properties, a CA-based pseudorandom sequence generator is considered in the present work. Considering the watermarking technique as a digital communication process, error control coding (ECC) must be incorporated in the data hiding scheme. Besides the hardware implementation of the entire CA-based data hiding technique, the individual CA-based blocks of the algorithm provide better results than some other methods, irrespective of the hardware or software technique. The Cellular Automata Transform, the CA-based PN sequence generator, and the CA ECC are the requisite blocks that are developed not only to meet the reliable hardware requirements but also to provide the basic spread spectrum watermarking features. The proposed algorithm shows statistical invisibility and resiliency against various common signal-processing operations. This algorithmic design utilizes the existing allocated bandwidth in the data transmission channel in a more efficient manner.

Keywords: Cellular automata, watermarking, error control coding, PN sequence, VLSI.
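
One of the named building blocks, the CA-based pseudorandom (PN) sequence generator, can be sketched with an elementary cellular automaton. Rule 30 over a circular lattice is used here purely as a familiar example, since the abstract does not state which CA rule, neighbourhood, or tap position the design adopts.

```python
# Minimal sketch of a cellular-automaton PN sequence generator.
# Rule 30 on a circular 1-D lattice, tapping the centre cell, is an example
# choice and not necessarily the paper's configuration.
import numpy as np

def ca_pn_sequence(length, width=64, rule=30, seed=1):
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = np.zeros(width, dtype=np.uint8)
    state[seed % width] = 1                       # single seed cell
    out = np.empty(length, dtype=np.uint8)
    for t in range(length):
        out[t] = state[width // 2]                # tap the centre cell
        left, right = np.roll(state, 1), np.roll(state, -1)
        idx = (left << 2) | (state << 1) | right  # 3-cell neighbourhood code 0..7
        state = rule_bits[idx]
    return out

pn = 2 * ca_pn_sequence(1024).astype(int) - 1     # map {0,1} -> {-1,+1} chips
print("mean:", pn.mean(), " lag-1 autocorrelation:",
      float(np.corrcoef(pn[:-1], pn[1:])[0, 1]))
```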

1786 Design of an Intelligent Tutor using a Multiagent Approach

Authors: Kamel Khoualdi, Radia Benghezal

Abstract:

Research in distributed artificial intelligence and multiagent systems considers how a set of distributed entities can interact and coordinate their actions in order to solve a given problem. In this paper, an overview of this concept and its evolution is presented, particularly its application in the design of intelligent tutoring systems. An intelligent tutor based on the agent concept and centered specifically on the design of a pedagogue agent is illustrated. Our work has two goals: the first one concerns the architectural aspect and the design of a tutor using a multiagent approach; the second one deals particularly with the design of one part of the tutor system: the pedagogue agent.

Keywords: Intelligent tutoring systems, Multiagent systems, Pedagogue agent, Planning.

1785 The AI Application and Talent Demand of Taiwan High-Tech Manufacturing Industry

Authors: Shi-Yu Lu, Chung-Han Yeh, Li-Ping Chen, Yu-Cheng Chang

Abstract:

This paper uses both quantitative and qualitative approaches to survey the current status of AI-related applications and the structure of key AI jobs in Taiwan's high-tech manufacturing industry, as well as the demand for professional AI talents, skills, and training. The results show that AI applications and talent demand vary across industries in many aspects, including the technologies used, the talent structure, and the training methods. This paper serves as a reference for the government to establish appropriate talent training programs and to reduce the demand gap for professional AI talents among Taiwanese manufacturers.

Keywords: Artificial intelligence, manufacturing, talent, training.

1784 A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Authors: Muhammad Aqil, Ichiro Kita, Moses Macalinao

Abstract:

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study aims at investigating the suitability of the adaptive network-based fuzzy inference system for continuous water level modeling. A hybrid learning algorithm, which combines the least squares method and the back-propagation algorithm, is used to identify the parameters of the network. For this study, water level data are available for the hydrological year 2002 with a sampling interval of 1 hour. The number of antecedent water levels that should be included in the input variables is determined by two statistical methods, i.e., the autocorrelation function and the partial autocorrelation function of the variables. Forecasting was done for 1 hour up to 12 hours ahead in order to compare the models' generalization at longer horizons. The results demonstrate that the adaptive network-based fuzzy inference system model can be applied successfully and provides high accuracy and reliability for river water level estimation. In general, the adaptive network-based fuzzy inference system provides accurate and reliable water level prediction 1 hour ahead, where MAPE = 1.15% and a correlation of 0.98 were achieved. For prediction up to 12 hours ahead, the model still shows relatively good performance, with a prediction error of less than 9.65%. The information gathered from these preliminary results provides useful guidance or a reference for the design of flood early warning systems, in which the magnitude and the timing of a potential extreme flood are indicated.

Keywords: Neural Network, Fuzzy, River, Forecasting
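
The input-selection step described in the abstract, choosing how many antecedent water levels to feed the model from the autocorrelation and partial autocorrelation functions, can be sketched with statsmodels. The 95% confidence-band cut-off and the "keep a lag when both ACF and PACF are significant" rule below are common conventions and assumptions here, not necessarily the authors' criteria.

```python
# Minimal sketch of selecting antecedent water-level lags from the ACF/PACF.
# The 95% band and the joint ACF-and-PACF rule are illustrative assumptions.
import numpy as np
from statsmodels.tsa.stattools import acf, pacf

def significant_lags(series, max_lag=24):
    n = len(series)
    conf = 1.96 / np.sqrt(n)                    # approx. 95% band for white noise
    acf_vals = acf(series, nlags=max_lag)
    pacf_vals = pacf(series, nlags=max_lag)
    return [lag for lag in range(1, max_lag + 1)
            if abs(acf_vals[lag]) > conf and abs(pacf_vals[lag]) > conf]

# toy hourly water-level series standing in for the 2002 record
rng = np.random.default_rng(0)
level = np.zeros(24 * 365)
for t in range(2, level.size):                  # simple AR(2)-like process
    level[t] = 0.8 * level[t - 1] + 0.15 * level[t - 2] + rng.normal(scale=0.05)

lags = significant_lags(level)
print("antecedent lags to use as inputs:", lags[:6])
```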

1783 Case Based Reasoning Technology for Medical Diagnosis

Authors: Abdel-Badeeh M. Salem

Abstract:

The case-based reasoning (CBR) methodology presents a foundation for a new technology of building intelligent computer-aided diagnosis systems. This technology directly addresses the problems found in traditional Artificial Intelligence (AI) techniques, e.g., the problems of knowledge acquisition, remembering, robustness and maintenance. This paper discusses the CBR methodology and the research issues and technical aspects of implementing intelligent medical diagnosis systems. Successful applications in cancer and heart disease developed by the Medical Informatics Research Group at Ain Shams University are also discussed.

Keywords: Medical Informatics, Computer-Aided Medical Diagnoses, AI in Medicine, Case-Based Reasoning.

1782 A Contribution to the Application of the Structural Analysis Method in Entrepreneurial Practice

Authors: Kamila Janovská, Šárka Vilamová, Petr Besta, Iveta Vozňáková, Roman Kozel

Abstract:

Quantitative methods of economic decision-making, as the methodological base of so-called operational research, represent an important set of tools for managing complex economic systems, both at the microeconomic level and on the macroeconomic scale. Mathematical models of controlled and controlling processes allow, by means of artificial experiments, obtaining information for optimal or optimum-approaching managerial decision-making. The quantitative methods of economic decision-making usually include a methodology known as structural analysis - an analysis of interdisciplinary production-consumption relations.

Keywords: economic decision-making, mathematical methods, structural analysis, technical coefficient
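
Structural analysis of production-consumption relations is normally written with the technical-coefficient matrix of input-output analysis (the keyword "technical coefficient" points the same way). The standard relations, which the abstract itself does not spell out and are added here only as background, are

a_{ij} = \frac{x_{ij}}{X_j}, \qquad X = A X + y \;\Rightarrow\; X = (I - A)^{-1} y,

where x_{ij} is the flow from sector i to sector j, X_j the total output of sector j, A = (a_{ij}) the technical-coefficient matrix, y the final demand vector, and (I - A)^{-1} the Leontief inverse.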

1781 Screen of MicroRNA Targets in Zebrafish Using Heterogeneous Data Sources: A Case Study for Dre-miR-10 and Dre-miR-196

Authors: Yanju Zhang, Joost M. Woltering, Fons J. Verbeek

Abstract:

It has been established that microRNAs (miRNAs) play an important role in gene expression by post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between microRNAs and their target genes, in terms of numbers, types and biological relevance, remain largely unclear. Dissecting the miRNA-target relationships will render more insight into miRNA target identification and validation and therefore promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction for zebrafish. This algorithm is high-throughput but brings lots of false positives (noise). Since validation of a large set of targets through laboratory experiments is very time-consuming, computational methods for miRNA target validation should be developed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the pool of miRanda-predicted targets. This is achieved by using techniques ranging from statistical tests to clustering and association rules. Our research focuses on zebrafish. It was found that validated targets do not necessarily associate with the highest sequence matching. Besides, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, it was found that the predicted target genes hoxd13a, hoxd11a, hoxd10a and hoxc4a of dre-miR-10, and hoxa9a, hoxc8a and hoxa13a of dre-miR-196, have similar characteristics to validated target genes and therefore represent high-confidence target candidates.

Keywords: MicroRNA targets validation, microRNA-target relationships, dre-miR-10, dre-miR-196.
