Search results for: common vector approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18513

18363 A Tool for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation of an institutional risk profile for endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the set-up of risk factors with only the values that are most important for a particular organisation. Subsequently, the risk profile employs fuzzy models and associated configurations for the file format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level for file formats. Our goal is to make use of a domain expert knowledge base aggregated from a digital preservation survey in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation and analysis of risk factors along a required dimension. The proposed methods improve the visibility of risk factor information and the quality of a digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and automatically aggregated file format metadata from linked open data sources. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert. The sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.

Keywords: digital information management, file format, endangerment analysis, fuzzy models

Procedia PDF Downloads 378
18362 A Clustering Algorithm for Massive Texts

Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen

Abstract:

Internet users face massive amounts of textual data every day. Organizing texts into categories can help users dig useful information out of large-scale text collections. Clustering, in fact, is one of the most promising tools for categorizing texts due to its unsupervised nature. Unfortunately, most traditional clustering algorithms lose their quality on large-scale text collections. This is mainly attributable to the high-dimensional vectors generated from the texts. To cluster large-scale text collections effectively and efficiently, this paper proposes a vector reconstruction based clustering algorithm. Only the features that can represent the cluster are preserved in the cluster's representative vector. The algorithm alternately repeats two sub-processes until it converges. One is the partial tuning sub-process, where feature weights are fine-tuned iteratively. To accelerate clustering, an intersection based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, where the features are reallocated among different clusters. In this sub-process, features that are useless for representing a cluster are removed from its representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections.
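
The intersection-based similarity and the two tuning sub-processes can be illustrated with a short sketch. The Python code below is not the authors' implementation; the feature weighting, the pruning rule and the fixed number of iterations are simplified assumptions made only for this listing.

```python
from collections import defaultdict

def intersection_similarity(doc, rep):
    # Similarity is accumulated only over features shared by the document and the representative.
    return sum(min(w, rep[t]) for t, w in doc.items() if t in rep)

def prune(rep, keep=5):
    # Overall-tuning analogue: keep only the strongest features of a representative vector.
    return dict(sorted(rep.items(), key=lambda kv: kv[1], reverse=True)[:keep])

def cluster(docs, reps, iters=10):
    for _ in range(iters):
        members = defaultdict(list)
        for doc in docs:  # assign each document to the representative it intersects most strongly
            best = max(range(len(reps)), key=lambda k: intersection_similarity(doc, reps[k]))
            members[best].append(doc)
        for k, group in members.items():  # partial-tuning analogue: re-estimate feature weights
            acc = defaultdict(float)
            for doc in group:
                for t, w in doc.items():
                    acc[t] += w / len(group)
            reps[k] = prune(dict(acc))
    return reps

docs = [{"sport": 1.0, "goal": 0.8}, {"sport": 0.9, "team": 0.7},
        {"market": 1.0, "stock": 0.6}, {"stock": 0.9, "bank": 0.8}]
print(cluster(docs, reps=[dict(docs[0]), dict(docs[2])]))
```

Pruning keeps the representative vectors sparse, which is what allows the intersection similarity to remain cheap as the collection grows.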

Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process

Procedia PDF Downloads 406
18361 Employee Inventor Compensation: A New Quest for Comparative Law

Authors: Andrea Borroni

Abstract:

The evolution of technology, the global scale of the economy, and the new short-term employment contracts make one peculiar set of provisions of growing interest for the legal interpreter: employee inventor compensation. Around the globe, this issue is regulated differently according to the legal system; it is therefore extremely fragmented. Of course, employers with transnational businesses have to face this issue from a comparative perspective. Different legal regimes are available worldwide, awarding, as a consequence, diverse compensation to the inventor, each according to its own methodology. Given these premises, recourse to comparative law methodology (legal formants, diachronic and synchronic methodology, the common core approach) is best equipped to deal with these different national approaches in order to achieve a tidy systematization. This research therefore elaborates a map of the specific criteria for granting compensation to the inventor and shows the criteria used to calculate it. This finding has been the first step in identifying a common core of the discipline, given by the common features present in the different legal systems.

Keywords: comparative law, employee invention, intellectual property, legal transplant

Procedia PDF Downloads 314
18360 Intrusion Detection in MANET Using Game Theory

Authors: S. B. Kumbalavati, J. D. Mallapur, K. Y. Bendigeri

Abstract:

A mobile ad-hoc network (MANET) is a multihop wireless network in which nodes communicate with each other without any pre-deployed infrastructure. There is no central administrating unit. Hence, a MANET is generally prone to many attacks. These attacks may alter, release or deny data. Such attacks are nothing but intrusions. An intrusion is a set of actions that attempts to compromise the integrity, confidentiality and availability of resources. A major issue in the design and operation of an ad-hoc network is sharing the common spectrum or common channel bandwidth among all the nodes. We perform intrusion detection using a game theory approach. Game theory is a mathematical tool for analysing problems of competition and negotiation among players in any field, such as marketing, e-commerce and networking. In this paper, a mathematical model is developed using a game theory approach, and intruders are detected and removed. Bandwidth utilization is estimated, and a comparison is made between bandwidth utilization with and without the intrusion detection technique. The percentage of intruders and the efficiency of the network are analysed.

Keywords: ad-hoc network, IDS, game theory, sensor networks

Procedia PDF Downloads 356
18359 Hearing Conservation Program for Vector Control Workers: Short-Term Outcomes from a Cluster-Randomized Controlled Trial

Authors: Rama Krishna Supramanian, Marzuki Isahak, Noran Naqiah Hairi

Abstract:

Noise-induced hearing loss (NIHL) is one of the most frequently recorded occupational diseases, despite being preventable. A Hearing Conservation Program (HCP) is designed to protect workers' hearing and prevent them from developing hearing impairment due to occupational noise exposure. However, there is still a lack of evidence regarding the effectiveness of such programs. The purpose of this study was to determine the effectiveness of a Hearing Conservation Program (HCP) in preventing or reducing audiometric threshold changes among vector control workers. This study adopts a cluster randomized controlled trial design, with district health offices as the unit of randomization. Nine district health offices were randomly selected and 183 vector control workers were randomized to the intervention or control group. The intervention included a safety and health policy, noise exposure assessment, noise control, distribution of appropriate hearing protection devices, a training and education program and audiometric testing. The control group only underwent audiometric testing. Audiometric threshold changes observed in the intervention group showed improvement in the hearing threshold level for all frequencies except 500 Hz and 8000 Hz for the left ear. The hearing threshold changes range from 1.4 dB to 5.2 dB, with the largest improvement at higher frequencies, mainly 4000 Hz and 6000 Hz. Meanwhile, for the right ear, the mean hearing threshold level remained similar at 4000 Hz and 6000 Hz after 3 months of intervention. The Hearing Conservation Program (HCP) is effective in preserving the hearing of vector control workers involved in fogging activity as well as increasing their knowledge, attitude and practice towards noise-induced hearing loss (NIHL).

Keywords: adult, hearing conservation program, noise-induced hearing loss, vector control worker

Procedia PDF Downloads 127
18358 Automated Ultrasound Carotid Artery Image Segmentation Using Curvelet Threshold Decomposition

Authors: Latha Subbiah, Dhanalakshmi Samiappan

Abstract:

In this paper, we propose denoising of Common Carotid Artery (CCA) B-mode ultrasound images by a decomposition approach to curvelet thresholding, together with automatic segmentation of the intima-media thickness and adventitia boundary. Through decomposition, the local geometry of the image and its gradient directions are well preserved. The components are combined into a single vector-valued function, which removes noise patches. A double threshold is applied to inherently remove speckle noise from the image. The denoised image is segmented by active contours without specifying seed points. Combined with level set theory, they provide sub-regions with continuous boundaries. The deformable contours match the shapes and motion of objects in the images. A curve or a surface under constraints is developed from the image with the goal that it is pulled towards the necessary features of the image. Region-based and boundary-based information are integrated to achieve the contour. The method treats the multiplicative speckle noise in objective and subjective quality measurements and thus leads to better-segmented results. The proposed denoising method gives better performance metrics compared with other state-of-the-art denoising algorithms.

Keywords: curvelet, decomposition, levelset, ultrasound

Procedia PDF Downloads 311
18357 The Voiceless Dental-Alveolar Common Augment in Arabic and Other Semitic Languages: A Morphophonemic Comparison

Authors: Tarek Soliman Mostafa Soliman Al-Nana'i

Abstract:

There are non-steady voiced augments in the Semitic languages. In morphological and structural augmentation, two sounds serve as augments in all Semitic languages at the level of the spoken language, and two letters at the level of the written language: the hamza and the ta’. This research studies only the second of them; therefore, we define it as “the voiceless dental-alveolar common augment” (VDACA) to distinguish it from the glottal sound, the hamza, whether first, middle, or last, in a noun or in a verb, in Arabic and its equivalents in the Semitic languages. What is meant by “VDACA” is the ta’ that is added to the root of the word at the morphological level: the word “voiceless” excludes the voiced sounds that we studied before, “dental-alveolar common augment” excludes the laryngeal sound among them, which is the hamza, and the word “common” excludes the uncommon voiceless sounds, which are sīn, shīn, and hā’. The study is limited to the ta’ alone among the Arabic sounds, and this designation faces a problem in identifying it with the ta’, because the name of the ta’ is not the same in most Semitic languages. Hebrew, for example, has “tav”, pronounced with the voiced v, which does not exist in Arabic. It is called by different names in other Semitic languages, such as “taw” or “tAu” in Old Syriac, and so on. This goes hand in hand with the insistence on keeping a distance from the written level and referring to the phonetic aspect in this study, which is closely linked to the morphological level; therefore, the study is “morphophonemic”. The Semitic languages considered in this study are the following: Akkadian, Ugaritic, Hebrew, Syriac, Mandaean, Ge'ez, and Amharic. The problem of the study lies in the agreement or difference between these languages in the position of that augment, whether first, middle, or last, and in determining the characteristics that distinguish each language from the others. The study methodology is the comparative approach to Semitic languages, which is based on the descriptive approach for each language. The study is divided into an introduction, four sections, and a conclusion. Introduction: it includes the subject of the study, its importance, motives, problem, methodology, and division. The first section: VDACA as a non-common phoneme. The second: VDACA as a common phoneme. The third: VDACA as a functional morpheme. The fourth section: commentary and conclusion with the most important results. The positions of VDACA in Arabic and the other Semitic languages, in nouns and verbs, are limited to first, middle, and last. The research identifies the individual addition, which is shared with other augments, and proves that this augmentation is constant in all Semitic languages, although there are characteristics that distinguish each language from the others.

Keywords: voiceless, dental-alveolar, augment, Arabic, Semitic languages

Procedia PDF Downloads 40
18356 A Large Dataset Imputation Approach Applied to Country Conflict Prediction Data

Authors: Benjamin Leiby, Darryl Ahner

Abstract:

This study demonstrates an alternative stochastic imputation approach for large datasets when preferred commercial packages struggle to iterate due to numerical problems. A large country conflict dataset motivates the search to impute missing values well over a common threshold of 20% missingness. The methodology capitalizes on correlation while using model residuals to provide the uncertainty in estimating unknown values. Examination of the methodology provides insight toward choosing linear or nonlinear modeling terms. Static tolerances common in most packages are replaced with tailorable tolerances that exploit residuals to fit each data element. The methodology evaluation includes observing computation time, model fit, and the comparison of known values to replaced values created through imputation. Overall, the country conflict dataset illustrates promise with modeling first-order interactions while presenting a need for further refinement that mimics predictive mean matching.
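
The core idea, regressing each incomplete variable on the others and adding noise resampled from the model residuals so that the imputations carry realistic uncertainty, can be sketched as follows. This is a minimal illustration on invented data, not the authors' country-conflict pipeline; it uses plain linear regression and a crude mean fill of the predictors rather than the tailorable tolerances described in the abstract.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def stochastic_impute(X, rng=np.random.default_rng(0)):
    # Fill NaNs column by column: regress on the other columns (mean-filled),
    # then add noise resampled from the model residuals to keep the imputations stochastic.
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):
        miss = np.isnan(X[:, j])
        if not miss.any():
            continue
        others = np.delete(X, j, axis=1)
        others = np.where(np.isnan(others), np.delete(col_means, j), others)  # crude fill of predictors
        model = LinearRegression().fit(others[~miss], X[~miss, j])
        resid = X[~miss, j] - model.predict(others[~miss])
        X[miss, j] = model.predict(others[miss]) + rng.choice(resid, size=miss.sum())
    return X

data = np.array([[1.0, 2.1, np.nan], [2.0, np.nan, 6.2], [3.0, 6.1, 9.1], [4.0, 8.2, np.nan]])
print(stochastic_impute(data))
```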

Keywords: correlation, country conflict, imputation, stochastic regression

Procedia PDF Downloads 93
18355 Incorporation of Safety into Design by Safety Cube

Authors: Mohammad Rajabalinejad

Abstract:

Safety is often treated as a requirement or a performance indicator through the design process, and this does not always result in optimally safe products or systems. This paper suggests integrating best safety practices with the design process to enrich the exploration experience for designers and add extra value for customers. For this purpose, commonly practiced safety standards and design methods have been reviewed and their common blocks merged, forming the Safety Cube. The Safety Cube combines common blocks for design, hazard identification, risk assessment and risk reduction through an integral approach. An example application presents the use of the Safety Cube for the design of machinery.

Keywords: safety, safety cube, product, system, machinery, design

Procedia PDF Downloads 215
18354 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder

Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi

Abstract:

With changing lifestyles and environments, the prevalence of critical and incurable diseases has proliferated. One such condition is neurological disorder, which is rampant among the older population and increasing at an unstoppable rate. Most neurological disorder patients suffer from some movement disorder affecting the movement of their body parts. Tremor is the most common movement disorder prevalent in such patients, affecting the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson's disease patients, but tremor can also occur on its own (essential tremor). Patients suffering from tremor face enormous trouble in performing daily activities and always need a caretaker for assistance. In clinics, the assessment of tremor is done through manual clinical rating tasks such as the Unified Parkinson's Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also reported difficulty in differentiating a Parkinsonian tremor from a pure (essential) tremor, which is essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that keeps checking their health condition and coordinates with clinicians and caretakers for early diagnosis and assistance in performing daily activities. In our research, we focus on developing a system for automatic classification of tremor that can accurately differentiate pure tremor from Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the correct patient. In this research, a study was conducted in a neuro-clinic to assess the upper wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device. Four tasks were designed in accordance with the Unified Parkinson's Disease motor rating scale, which is used to assess rest, postural, intentional and action tremor in such patients. Various features, such as time-frequency domain, wavelet-based and fast Fourier transform based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector space for different supervised and unsupervised learning tools for quantification of tremor severity. A minimum covariance maximum correlation energy comparison index was also developed and used as the input feature for various classification tools for distinguishing the PT and ET tremor types. An automatic system for efficient classification of tremor was developed using these feature extraction methods, and superior performance was achieved using K-nearest neighbours and Support Vector Machine classifiers.
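
The overall pipeline, extracting frequency-domain features from tri-axial accelerometer signals and feeding them to SVM and K-nearest-neighbour classifiers, can be sketched as below. The signals are synthetic, the two features per axis are simplified stand-ins for the paper's feature set, the sampling rate is an assumption, and the 4-6 Hz / 7-10 Hz bands are only typical values used to label the toy data; none of this reproduces the authors' clinical study or their comparison index.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
FS = 100  # Hz, assumed sampling rate of the wrist-worn accelerometer

def synth_tremor(freq, n=1000):
    # Synthetic tri-axial burst: a sinusoid at the tremor frequency plus noise (illustration only).
    t = np.arange(n) / FS
    return np.stack([np.sin(2 * np.pi * freq * t + rng.uniform(0, np.pi))
                     + 0.3 * rng.standard_normal(n) for _ in range(3)])

def features(sig):
    # Per-axis dominant frequency and total band power, concatenated into one feature vector.
    feats = []
    for axis in sig:
        f, p = welch(axis, fs=FS, nperseg=256)
        feats += [f[np.argmax(p)], p.sum()]
    return feats

# Parkinsonian rest tremor is typically around 4-6 Hz and essential tremor around 7-10 Hz;
# the bands are used here only to label the synthetic data.
X = np.array([features(synth_tremor(rng.uniform(4, 6))) for _ in range(40)]
             + [features(synth_tremor(rng.uniform(7, 10))) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)

for clf in (make_pipeline(StandardScaler(), SVC(kernel="rbf")),
            make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))):
    print(clf.steps[-1][0], round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```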

Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor

Procedia PDF Downloads 133
18353 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods

Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo

Abstract:

The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, have driven the development of computational methods for this task. One such approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proved to be an NP-hard problem, due to the complexity of the process, as explained by the Levinthal paradox. An alternative solution is the prediction of intermediary structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANN) and support vector machines (SVM), among others, have been used to predict protein secondary structure. Due to their good results, artificial neural networks have become a standard method for predicting protein secondary structure. Recently published methods that use this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, support vector machine prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behaviour of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method. The chosen SVM protein secondary structure prediction method is the one proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013). The developed ANN method uses the same training and testing process that Huang used to validate his method, comprising the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can directly compare the statistical results of each method.
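
The Q3 accuracy quoted above is simply the fraction of residues whose predicted secondary-structure state (helix H, strand E or coil C) matches the observed state, pooled over all chains. A minimal sketch of the metric, on invented toy chains:

```python
import numpy as np

def q3_accuracy(predicted, observed):
    # Q3: fraction of residues whose predicted secondary-structure state (H, E or C)
    # matches the observed state, pooled over all chains.
    pred = np.array(list("".join(predicted)))
    obs = np.array(list("".join(observed)))
    return float((pred == obs).mean())

# Toy example: two short chains, predicted vs. observed states (10 of 11 residues correct).
print(q3_accuracy(["HHHECC", "CCEEH"], ["HHHCCC", "CCEEH"]))
```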

Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines

Procedia PDF Downloads 583
18352 Multiclass Support Vector Machines with Simultaneous Multi-Factors Optimization for Corporate Credit Ratings

Authors: Hyunchul Ahn, William X. S. Wong

Abstract:

Corporate credit rating prediction is one of the most important topics studied by researchers in the last decade. Over that period, researchers have pushed the limits to enhance the accuracy of corporate credit rating prediction models by applying several data-driven tools, including statistical and artificial intelligence methods. Among them, the multiclass support vector machine (MSVM) has been widely applied due to its good predictability. However, reliance on heuristics, for example for choosing the parameters of a kernel function and appropriate feature and instance subsets, has become the main criticism of MSVM, as these choices dictate the MSVM architectural variables. This study presents a hybrid MSVM model that is intended to optimize all of these factors: feature selection, instance selection, and kernel parameters. Our model adopts a genetic algorithm (GA) to simultaneously optimize multiple heterogeneous design factors of the MSVM.

Keywords: corporate credit rating prediction, Feature selection, genetic algorithms, instance selection, multiclass support vector machines

Procedia PDF Downloads 266
18351 Automatic Moment-Based Texture Segmentation

Authors: Tudor Barbu

Abstract:

An automatic moment-based texture segmentation approach is proposed in this paper. First, we describe the related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors. For each image pixel, a texture feature vector is computed as a sequence of area moments. Second, an automatic pixel classification approach is proposed. The feature vectors are clustered using an unsupervised classification algorithm, the optimal number of clusters being determined using a measure based on validation indexes. From the resulting pixel classes, one easily determines the desired texture regions of the image.
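
The per-pixel moment features and their unsupervised clustering can be sketched as follows. This is a simplified illustration, assuming local window moments (mean, variance, an approximate third central moment) and a fixed number of clusters; the validation-index-based selection of the cluster count described above is not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def moment_features(img, size=7):
    # Per-pixel vector of local area moments over a size x size window:
    # mean, second central moment (variance) and an approximate third central moment.
    m1 = uniform_filter(img, size)
    m2 = uniform_filter(img ** 2, size) - m1 ** 2
    m3 = uniform_filter((img - m1) ** 3, size)
    return np.stack([m1, m2, m3], axis=-1).reshape(-1, 3)

# Synthetic two-texture image: a smooth region on the left, a rough region on the right.
rng = np.random.default_rng(0)
img = np.hstack([0.5 + 0.02 * rng.standard_normal((64, 32)),
                 0.5 + 0.25 * rng.standard_normal((64, 32))])

feats = StandardScaler().fit_transform(moment_features(img))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats).reshape(img.shape)
print("left-half label counts:", np.bincount(labels[:, :32].ravel()))
print("right-half label counts:", np.bincount(labels[:, 32:].ravel()))
```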

Keywords: image segmentation, moment-based, texture analysis, automatic classification, validation indexes

Procedia PDF Downloads 388
18350 Anomaly Detection with ANN and SVM for Telemedicine Networks

Authors: Edward Guillén, Jeisson Sánchez, Carlos Omar Ramos

Abstract:

In recent years, a wide variety of applications have been developed with Support Vector Machine (SVM) methods and Artificial Neural Networks (ANN). In general, these methods depend on intrusion knowledge databases such as KDD99, ISCX, and CAIDA, among others. New classes of detectors are generated by machine learning techniques, trained and tested on network databases. Thereafter, the detectors are employed to detect anomalies in network communication scenarios according to users' connection behavior. The first detector, based on the training dataset, is deployed in different real-world networks with mobile and non-mobile devices to analyze the performance and accuracy of static detection. The vulnerabilities are based on previous work on telemedicine apps that were developed by the research group. This paper presents the differences in detection results between several network scenarios when applying traditional detectors deployed with artificial neural networks and support vector machines.

Keywords: anomaly detection, back-propagation neural networks, network intrusion detection systems, support vector machines

Procedia PDF Downloads 317
18349 Producing Outdoor Design Conditions based on the Dependency between Meteorological Elements: Copula Approach

Authors: Zhichao Jiao, Craig Farnham, Jihui Yuan, Kazuo Emura

Abstract:

It is common to use outdoor design weather data to select the air-conditioning capacity in the building design stage. The outdoor design weather data usually comprise multiple meteorological elements given separately for a 24-hour period, but the dependency between the elements is not well considered, which may cause an overestimation of the selected air-conditioning capacity. Considering the dependency between air temperature and global solar radiation, we used the copula approach to model the joint distribution of these two weather elements and suggest a new method of selecting more credible outdoor design conditions based on the specified simultaneous occurrence probability of air temperature and global solar radiation. In this paper, 10 years of hourly weather data from 2001 to 2010 in Osaka, Japan, were used to analyze the dependency structure and joint distribution; the results show that the Joe-Frank copula fits almost all hourly data. By calculating the simultaneous occurrence probability and the common exceedance probability of air temperature and global solar radiation, the results show that the maximum differences in the design air temperature and global solar radiation of the day are about 2 degrees Celsius and 30 W/m2, respectively.
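
The Joe-Frank copula fit itself is not reproduced here (it is not available in the common Python libraries), but the quantity the design condition rests on, the simultaneous (joint) exceedance probability of temperature and solar radiation, can be computed empirically. The sketch below uses synthetic, correlated hourly data as a stand-in for the Osaka record; all magnitudes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10 * 365 * 24  # ten years of hourly records (synthetic stand-in for the Osaka data)
z = rng.standard_normal(n)
temp = 20 + 8 * z + 2 * rng.standard_normal(n)                         # air temperature, deg C
solar = np.clip(400 + 150 * z + 80 * rng.standard_normal(n), 0, None)  # global solar radiation, W/m2

def joint_exceedance(temp, solar, t_th, s_th):
    # Empirical probability that both thresholds are exceeded in the same hour.
    return float(np.mean((temp > t_th) & (solar > s_th)))

# Marginal design values chosen independently at the 0.4% exceedance level.
p = 0.004
t_ind, s_ind = np.quantile(temp, 1 - p), np.quantile(solar, 1 - p)
print("independent design values:", round(t_ind, 1), "degC,", round(s_ind, 1), "W/m2")
print("their simultaneous exceedance probability:", joint_exceedance(temp, solar, t_ind, s_ind))
```

Because the two marginal extremes rarely occur in the same hour, their joint exceedance probability comes out well below 0.4%, which is the overestimation that a copula-based design condition is meant to avoid.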

Keywords: energy conservation, design weather database, HVAC, copula approach

Procedia PDF Downloads 224
18348 Integrated Navigation System Using Simplified Kalman Filter Algorithm

Authors: Othman Maklouf, Abdunnaser Tresh

Abstract:

GPS and inertial navigation systems (INS) have complementary qualities that make them ideal for sensor fusion. The limitations of GPS include occasional high noise content, outages when satellite signals are blocked, interference and low bandwidth. The strengths of GPS include its long-term stability and its capacity to function as a stand-alone navigation system. In contrast, an INS is not subject to interference or outages, has high bandwidth and good short-term noise characteristics, but has long-term drift errors and requires external information for initialization. A combined system of GPS and INS subsystems can exhibit the robustness, higher bandwidth and better noise characteristics of the inertial system together with the long-term stability of GPS. The most common estimation algorithm used in integrated INS/GPS is the Kalman filter (KF). The KF is able to take advantage of these characteristics to provide a common integrated navigation implementation with performance superior to that of either subsystem (GPS or INS). This paper presents a simplified KF algorithm for a land vehicle navigation application. In this integration scheme, the GPS-derived positions and velocities are used as the update measurements for the INS-derived position, velocity and attitude (PVA). The KF error state vector in this case includes the navigation parameters as well as the accelerometer and gyroscope error states.
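
A one-dimensional, loosely coupled toy version of this scheme is sketched below: the INS double-integrates biased acceleration, and a small error-state Kalman filter uses GPS positions as update measurements and feeds the estimated error back into the INS solution. The state vector, noise levels and update rate are illustrative assumptions, not the paper's full PVA formulation.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 0.1, 600
true_acc = 0.2 * np.sin(0.05 * np.arange(n))              # true 1-D acceleration profile
bias = 0.05                                                # constant accelerometer bias (unknown to the filter)
meas_acc = true_acc + bias + 0.02 * rng.standard_normal(n)

# Truth and INS mechanization (simple double integration of the measured acceleration).
true_v = np.cumsum(true_acc) * dt
true_p = np.cumsum(true_v) * dt
ins_v = np.cumsum(meas_acc) * dt
ins_p = np.cumsum(ins_v) * dt
gps_p = true_p + 2.0 * rng.standard_normal(n)              # GPS position: noisy but drift-free

# Error-state Kalman filter: x = [position error, velocity error, accelerometer bias].
F = np.array([[1, dt, 0.5 * dt**2], [0, 1, dt], [0, 0, 1]])
H = np.array([[1.0, 0.0, 0.0]])                            # the INS-minus-GPS position observes the position error
Q = np.diag([1e-4, 1e-4, 1e-6])
R = np.array([[4.0]])
x, P = np.zeros(3), np.eye(3)
corrected = np.empty(n)
for k in range(n):
    x, P = F @ x, F @ P @ F.T + Q                          # propagate the error state
    if k % 10 == 0:                                        # GPS update at 1 Hz
        innov = (ins_p[k] - gps_p[k]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ innov).ravel()
        P = (np.eye(3) - K @ H) @ P
    corrected[k] = ins_p[k] - x[0]                         # feed the estimated error back into the INS solution

print("raw INS position error:", round(abs(ins_p[-1] - true_p[-1]), 2),
      "m;  integrated GPS/INS error:", round(abs(corrected[-1] - true_p[-1]), 2), "m")
```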

Keywords: GPS, INS, Kalman filter, inertial navigation system

Procedia PDF Downloads 450
18347 Texture-Based Image Forensics from Video Frame

Authors: Li Zhou, Yanmei Fang

Abstract:

With current technology, images and videos can be obtained more easily than ever. It is easy to manipulate this digital multimedia information once obtained, and the content or source of an image or video can be easily tampered with. In this paper, we propose to identify images and video frames by a texture-based approach, namely the Markov Transition Probability (MTP), computed in the spatial domain, the DCT domain and the DWT domain, respectively. In the experiments, an image and video frame database is constructed and used to train and test a Support Vector Machine (SVM) classifier. Experimental results show that the texture-based approach has good performance. In order to verify the experimental results and test the universality and robustness of the algorithm, we build a random testing dataset; the random testing results are consistent with the experiments above.
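
A minimal sketch of the spatial-domain variant of the MTP feature is given below: pixel differences are clipped to a small range and the transition probabilities between horizontally adjacent difference values form the feature vector. The threshold T, the horizontal direction and the synthetic patches are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def mtp_features(img, T=3):
    # Markov transition probability matrix of horizontally adjacent difference values,
    # with differences clipped to [-T, T]; the flattened matrix is the texture feature vector.
    diff = np.clip(img[:, :-1].astype(int) - img[:, 1:].astype(int), -T, T) + T  # shift to 0..2T
    a, b = diff[:, :-1].ravel(), diff[:, 1:].ravel()                             # adjacent difference pairs
    counts = np.zeros((2 * T + 1, 2 * T + 1))
    np.add.at(counts, (a, b), 1)
    rows = counts.sum(axis=1, keepdims=True)
    return (counts / np.where(rows == 0, 1, rows)).ravel()                       # row-normalised transitions

rng = np.random.default_rng(4)
camera_like = rng.integers(0, 256, (64, 64))           # stand-in for a camera image patch
smooth_like = np.tile(np.arange(64), (64, 1))          # stand-in for a rendered or tampered smooth patch
print(mtp_features(camera_like)[:5], mtp_features(smooth_like)[:5])
```

The resulting (2T+1) x (2T+1) transition probabilities would be fed to the SVM; the DCT- and DWT-domain variants apply the same construction to transform coefficients.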

Keywords: multimedia forensics, video frame, LBP, MTP, SVM

Procedia PDF Downloads 401
18346 A Bayesian Classification System for Facilitating an Institutional Risk Profile Definition

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the easy creation and classification of institutional risk profiles supporting endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the set-up of the most important risk factors. Subsequently, risk profiles employ a risk factor classifier and associated configurations to support digital preservation experts with a semi-automatic estimation of the endangerment group for file format risk profiles. Our goal is to make use of an expert knowledge base, acquired through a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation of risk factors for a required analysis dimension. Using the naive Bayes method, the decision support system recommends to an expert the matching risk profile group for the previously selected institutional risk profile. The proposed methods improve the visibility of risk factor values and the quality of a digital preservation process. The presented approach is designed to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and the values of file format risk profiles. To facilitate decision-making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert and to define its profile group. The sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
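
The naive Bayes recommendation step can be sketched in a few lines: categorical risk-factor vectors are used to train a classifier, and a new institutional profile is assigned to the matching risk group. The factor names, codes and group labels below are invented for illustration, and scikit-learn's CategoricalNB stands in for the paper's classifier.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

# Toy institutional risk-factor vectors: columns are hypothetical factors such as
# format popularity, software support and migration cost, each coded 0=low, 1=medium, 2=high.
X = np.array([[2, 2, 0], [2, 1, 0], [1, 1, 1], [0, 0, 2], [0, 1, 2], [1, 0, 1]])
y = np.array(["low risk", "low risk", "medium risk", "high risk", "high risk", "medium risk"])

model = CategoricalNB().fit(X, y)
new_profile = np.array([[0, 1, 2]])            # a new institution's aggregated risk-factor vector
print(model.predict(new_profile), model.predict_proba(new_profile).round(2))
```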

Keywords: linked open data, information integration, digital libraries, data mining

Procedia PDF Downloads 399
18345 Climatic and Environmental Variables Do Not Affect the Diversity of Possible Phytoplasma Vector Insects Associated with Quercus humboldtii Oak Trees in Bogota, Colombia

Authors: J. Lamilla-Monje, C. Solano-Puerto, L. Franco-Lara

Abstract:

Trees play an essential role in cities due to their ability to provide multiple ecosystem goods and services. Bogota's trees are threatened by factors such as pests, pathogens and contamination, among others. Among the pathogens, phytoplasmas are a potential risk for urban trees, generating symptoms that affect the ecosystem services these trees provide in Bogota; an example of this is the infection of Q. humboldtii by phytoplasmas. These bacteria are transmitted by insects of the order Hemiptera, which is why the objective of this work was to determine whether the climatic variables (humidity, precipitation, and temperature) and environmental variables (PM10 and PM2.5) could be related to the distribution of the oak Quercus entomofauna, and specifically to the phytoplasma vector insects, in Bogota. For this study, the sampling points were distributed in areas of the city with contrasting variables in two types of location: parks and streets. A total of 68 trees were sampled, and the associated insects were collected using two methodologies: sweep netting and agitation traps. The results show that insects of the order Hemiptera were the most abundant, with a total of 1682 individuals represented by 29 morphotypes; within this order, individuals from eight families were collected (Aphidae, Aradidae, Berytidae, Cicadellidae, Issidae, Membracidae, Miridae, and Psyllidae), with the families Cicadellidae, Membracidae, and Psyllidae identified as possible vectors with 959, 8 and 14 individuals, respectively. Within the family Cicadellidae, 21 morphotypes were found, of which the following are reported as vectors in the literature: Amplicephalus, Exitianus atratus, Haldorus sp., Xestocephalus desertorum, Idiocerinae sp., and Scaphytopius sp. The family Membracidae was represented by two morphotypes and the Psyllidae by one. These results suggest that there is no correlation between the climatic and environmental variables and the diversity of insects associated with the oak. Knowing the phytoplasma vector insects in oak trees will help complete the pathosystem and support effective vector control.

Keywords: vector insects, diversity, phytoplasmas, Cicadellidae

Procedia PDF Downloads 126
18344 Data Mining in the Medicine Domain Using Decision Trees and Support Vector Machines

Authors: Djamila Benhaddouche, Abdelkader Benyettou

Abstract:

In this paper, we use data mining to extract biomedical knowledge. In general, complex biomedical data collected in population studies are treated with statistical methods; although these are robust, they are not sufficient in themselves to harness the potential wealth of the data. For that purpose, we used two learning algorithms: Decision Trees and Support Vector Machines (SVM). These supervised classification methods are used to make the diagnosis of thyroid disease. In this context, we propose to promote the study and use of symbolic data mining techniques.

Keywords: biomedical data, learning, classifier, decision tree algorithms, knowledge extraction

Procedia PDF Downloads 515
18343 Experimental Implementation of Model Predictive Control for Permanent Magnet Synchronous Motor

Authors: Abdelsalam A. Ahmed

Abstract:

Fast speed drives for Permanent Magnet Synchronous Motors (PMSM) are crucial for electric traction systems. In this paper, a PMSM is driven with a Model-based Predictive Control (MPC) technique. Fast speed tracking is achieved through optimization of the DC source utilization using MPC. The technique is based on predicting the optimum voltage vector applied to the driver. The control technique is investigated by comparing it to cascaded PI control based on Space Vector Pulse Width Modulation (SVPWM). MPC and SVPWM-based FOC are implemented with the TMS320F2812 DSP and its power driver circuits. The designed MPC for a PMSM drive is experimentally validated on a laboratory test bench. The performances are compared with those obtained by a conventional PI-based system in order to highlight the improvements, especially regarding the speed tracking response.
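
A minimal finite-control-set reading of "predicting the optimum voltage vector" is sketched below: the dq currents are predicted one step ahead for each of the eight switching states of a two-level inverter, and the state with the lowest tracking cost is selected. The motor parameters, sample time and cost function are illustrative assumptions; the paper's DC-utilization objective and DSP implementation are not reproduced.

```python
import numpy as np

# Illustrative PMSM parameters (dq frame): stator resistance, inductances, PM flux linkage, DC link.
Rs, Ld, Lq, psi_f, Ts, Vdc = 0.5, 8e-3, 8e-3, 0.1, 1e-4, 300.0

def predict(id_, iq, vd, vq, we):
    # One-step Euler prediction of the dq currents from the PMSM voltage equations.
    did = (vd - Rs * id_ + we * Lq * iq) / Ld
    diq = (vq - Rs * iq - we * (Ld * id_ + psi_f)) / Lq
    return id_ + Ts * did, iq + Ts * diq

def candidate_voltages(theta):
    # The 8 voltage vectors of a two-level inverter, mapped to the dq frame at rotor angle theta.
    vs = []
    for s in range(8):
        sa, sb, sc = (s >> 2) & 1, (s >> 1) & 1, s & 1
        valpha = Vdc * (2 * sa - sb - sc) / 3
        vbeta = Vdc * (sb - sc) / np.sqrt(3)
        vd = valpha * np.cos(theta) + vbeta * np.sin(theta)
        vq = -valpha * np.sin(theta) + vbeta * np.cos(theta)
        vs.append((vd, vq))
    return vs

def mpc_step(id_, iq, id_ref, iq_ref, we, theta):
    # Pick the inverter state whose predicted currents minimise the tracking cost.
    costs = [(pid - id_ref) ** 2 + (piq - iq_ref) ** 2
             for pid, piq in (predict(id_, iq, vd, vq, we) for vd, vq in candidate_voltages(theta))]
    return int(np.argmin(costs))

print("selected switching state:", mpc_step(id_=0.0, iq=2.0, id_ref=0.0, iq_ref=5.0, we=314.0, theta=0.3))
```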

Keywords: permanent magnet synchronous motor, model-based predictive control, DC source utilization, cascaded PI control, space vector pulse width modulation, TMS320F2812 DSP

Procedia PDF Downloads 615
18342 Utilizing Google Earth for Internet GIS

Authors: Alireza Derambakhsh

Abstract:

The objective of this study is to explore the capability of utilizing Google Earth for Internet GIS applications. The study particularly analyzes the use of vector and attribute data and the potential for displaying and processing these data in new ways using the Google Earth platform. It has increasingly been recognized that future developments in GIS will centre on Internet GIS, in three major areas: GIS data access, spatial data dissemination and GIS visualization/processing. Google Earth is one of a family of geobrowsers that offer a free and easy-to-use service enabling data with a spatial component to be overlain on top of a 3-D model of the Earth. This study builds a methodological framework to achieve its objective, consisting of three major parts: a database level, an application level and a client level. As proof of concept, a web prototype has been developed, which incorporates a diverse range of datasets and lets users direct queries and create visualizations of this custom data. The results reveal that both vector and attribute data can be effectively represented and visualized using Google Earth. In addition, the functionality to query custom data and visualize the results has been added to the Google Earth platform.

Keywords: Google Earth, internet GIS, vector data, attribute data

Procedia PDF Downloads 277
18341 Time-Frequency Feature Extraction Method Based on Micro-Doppler Signature of Ground Moving Targets

Authors: Ke Ren, Huiruo Shi, Linsen Li, Baoshuai Wang, Yu Zhou

Abstract:

Since discriminative features are required for ground moving target classification, we propose a new feature extraction method based on the micro-Doppler signature. Firstly, the time-frequency analysis of measured data indicates that the time-frequency spectrograms of the three kinds of ground moving targets, i.e., a single walking person, two people walking and a moving wheeled vehicle, are discriminative. Then, a three-dimensional time-frequency feature vector is extracted from the time-frequency spectrograms to depict these differences. Finally, a Support Vector Machine (SVM) classifier is trained with the proposed three-dimensional feature vector. The classification accuracy in categorizing the measured data into the three kinds of ground moving targets is found to be over 96%, which demonstrates the good discriminative ability of the proposed micro-Doppler feature.
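
The extraction of a three-dimensional time-frequency feature vector from a spectrogram can be sketched as below. The synthetic radar returns, the assumed pulse repetition frequency and the particular three features (mean Doppler bandwidth, its variability over time, and total spectral energy) are illustrative choices, not the paper's measured data or exact feature set.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 1000.0  # assumed pulse repetition frequency of the radar returns

def micro_doppler_features(x):
    # Three-dimensional time-frequency feature vector (illustrative choice):
    # mean Doppler bandwidth, bandwidth variability over time, and total spectral energy.
    f, t, S = spectrogram(x, fs=FS, nperseg=128, noverlap=96)
    centroid = (f[:, None] * S).sum(axis=0) / S.sum(axis=0)                 # per-frame Doppler centroid
    bw = np.sqrt(((f[:, None] - centroid) ** 2 * S).sum(axis=0) / S.sum(axis=0))
    return np.array([bw.mean(), bw.std(), S.sum()])

# Synthetic returns: a walking person produces a periodic micro-Doppler modulation,
# a wheeled vehicle a nearly constant Doppler tone (crude stand-ins for measured data).
rng = np.random.default_rng(5)
t = np.arange(2048) / FS
person = np.cos(2 * np.pi * (60 * t + 15 * np.sin(2 * np.pi * 2 * t))) + 0.1 * rng.standard_normal(t.size)
vehicle = np.cos(2 * np.pi * 120 * t) + 0.1 * rng.standard_normal(t.size)
print("person:", micro_doppler_features(person).round(3))
print("vehicle:", micro_doppler_features(vehicle).round(3))
```

Feature vectors of this kind, one per dwell, would then be used to train the SVM classifier described above.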

Keywords: micro-doppler, time-frequency analysis, feature extraction, radar target classification

Procedia PDF Downloads 380
18340 Super-ellipsoidal Potential Function for Autonomous Collision Avoidance of a Teleoperated UAV

Authors: Mohammed Qasim, Kyoung-Dae Kim

Abstract:

In this paper, we present the design of a super-ellipsoidal potential function (SEPF) that can be used for autonomous collision avoidance of an unmanned aerial vehicle (UAV) in 3-dimensional space. In the design of the SEPF, we have full control over the shape and size of the potential function. In particular, we can adjust the length, width, height, and the amount of flattening at the tips of the potential function so that the collision avoidance motion vector generated from the potential function can be adjusted accordingly. Based on the idea of the SEPF, we also propose an approach for the local autonomy of a UAV for collision avoidance when the UAV is teleoperated by a human operator. In our proposed approach, a teleoperated UAV can not only avoid collisions autonomously with surrounding objects but also track the operator's control input as closely as possible. As a result, an operator can always be in control of the UAV for high-level guidance and navigation tasks without worrying too much about the UAV's collision avoidance while it is being teleoperated. The effectiveness of the proposed approach is demonstrated through a human-in-the-loop simulation of quadrotor UAV teleoperation using the virtual robot experimentation platform (V-REP) and Matlab programs.
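
The potential-function idea can be illustrated with a short sketch: a super-ellipsoid inside-outside function whose exponent controls the flattening at the tips, a repulsive velocity taken from its numerical gradient, and a blend with the operator's command so that the UAV tracks the command when it is clear of obstacles. The semi-axes, exponent, gains and blending rule below are illustrative assumptions, not the paper's actual SEPF design.

```python
import numpy as np

# Illustrative super-ellipsoid parameters: semi-axes (a, b, c) and exponent p controlling
# how flattened the shape is at its tips (p = 2 gives an ordinary ellipsoid).
a, b, c, p, gain = 2.0, 1.0, 1.0, 4.0, 0.5

def inside_outside(q, obstacle):
    # Super-ellipsoid inside-outside function: < 1 inside, 1 on the surface, > 1 outside.
    dx, dy, dz = q - obstacle
    return abs(dx / a) ** p + abs(dy / b) ** p + abs(dz / c) ** p

def repulsive_velocity(q, obstacle, eps=1e-3, h=1e-4):
    # Numerical negative gradient of U = gain / (F - 1); zero once the UAV is far from the obstacle.
    if inside_outside(q, obstacle) > 4.0:        # outside the influence region of the potential
        return np.zeros(3)
    grad = np.zeros(3)
    for i in range(3):
        dq = np.zeros(3); dq[i] = h
        u_plus = gain / max(inside_outside(q + dq, obstacle) - 1.0, eps)
        u_minus = gain / max(inside_outside(q - dq, obstacle) - 1.0, eps)
        grad[i] = (u_plus - u_minus) / (2 * h)
    return -grad

# Blend the operator's command with the avoidance vector so the UAV tracks the command when clear.
operator_cmd = np.array([1.0, 0.0, 0.0])
uav_pos, obstacle_pos = np.array([-2.5, 0.2, 0.0]), np.array([0.0, 0.0, 0.0])
print("commanded velocity:", operator_cmd + repulsive_velocity(uav_pos, obstacle_pos))
```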

Keywords: artificial potential function, autonomous collision avoidance, teleoperation, quadrotor

Procedia PDF Downloads 376
18339 Data Hiding by Vector Quantization in Color Image

Authors: Yung Gi Wu

Abstract:

With the growth of computers and networks, digital data can be spread anywhere in the world quickly. In addition, digital data can be copied or tampered with easily, so security has become an important topic in the protection of digital data. A digital watermark is a method to protect the ownership of digital data. Embedding the watermark inevitably influences image quality. In this paper, Vector Quantization (VQ) is used to embed the watermark into the image to fulfill the goal of data hiding. This kind of watermarking is invisible, which means that users will not be conscious of the existence of the embedded watermark even though the embedded image differs only slightly from the original image. Meanwhile, VQ imposes a heavy computational burden, so we adopt a fast VQ encoding scheme based on partial distortion search (PDS) and a mean approximation scheme to speed up the data hiding process. The watermarks we hide in the image can be gray-level, bi-level or color images; text can also be regarded as a watermark to embed. In order to test the robustness of the system, we use Photoshop to apply sharpening, cropping and alteration to check whether the extracted watermark is still recognizable. Experimental results demonstrate that the proposed system can resist the above three kinds of tampering in general cases.
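
The partial distortion search used to speed up VQ encoding can be sketched in a few lines: while summing the squared error between an image block and a codeword, the candidate is rejected as soon as the running sum exceeds the best distortion found so far. The codebook and block below are random placeholders; the mean approximation pre-check and the actual watermark embedding rule are omitted.

```python
import numpy as np

def pds_encode(block, codebook):
    # Nearest-codeword search with partial distortion search (PDS): the running squared error
    # is compared against the best distortion so far and the candidate is rejected early.
    best_idx, best_dist = 0, float("inf")
    for idx, codeword in enumerate(codebook):
        dist = 0.0
        for x, c in zip(block, codeword):
            dist += (x - c) ** 2
            if dist >= best_dist:          # early rejection: this codeword cannot win
                break
        else:
            best_idx, best_dist = idx, dist
    return best_idx

rng = np.random.default_rng(6)
codebook = rng.integers(0, 256, (64, 16)).astype(float)   # 64 codewords for 4x4 image blocks
block = rng.integers(0, 256, 16).astype(float)
print("selected codeword index:", pds_encode(block, codebook))
```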

Keywords: data hiding, vector quantization, watermark, color image

Procedia PDF Downloads 335
18338 Signal Processing Techniques for Adaptive Beamforming with Robustness

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

Adaptive beamforming using an antenna array of sensors is useful for adaptively detecting and preserving the presence of a desired signal while suppressing interference and background noise. Conventional adaptive array beamforming requires prior information about either the impinging direction or the waveform of the desired signal to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to maintain a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. It is well known in the literature that the performance of an adaptive beamformer deteriorates with any steering angle error encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem caused by steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer that is robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information and the received array data are utilized to iteratively estimate the actual direction vector of the desired signal. The estimated direction vector of the desired signal is then used to appropriately find the quiescent weight vector. The other projection matrix is set to be the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided for evaluating and comparing the proposed technique with existing robust techniques.
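
The baseline that motivates this work, a steered-beam (MVDR-type) beamformer built from a presumed steering vector, can be sketched as follows to show how a few degrees of steering error already suppress the desired signal. The array geometry, angles and noise levels are illustrative assumptions; the paper's projection-based direction estimation is not reproduced here.

```python
import numpy as np

def steering_vector(theta_deg, n_sensors, spacing=0.5):
    # Uniform linear array steering vector (element spacing in wavelengths).
    n = np.arange(n_sensors)
    return np.exp(-2j * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

rng = np.random.default_rng(7)
N, snapshots = 8, 500
true_doa, presumed_doa, interferer_doa = 10.0, 13.0, 40.0       # 3 degrees of steering error

# Simulated array snapshots: desired signal, a stronger interferer, and sensor noise.
s = (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
i = 3 * (rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((N, snapshots)) + 1j * rng.standard_normal((N, snapshots)))
X = np.outer(steering_vector(true_doa, N), s) + np.outer(steering_vector(interferer_doa, N), i) + noise

R = X @ X.conj().T / snapshots
a = steering_vector(presumed_doa, N)            # presumed (erroneous) steering vector
w = np.linalg.solve(R, a)
w /= a.conj() @ w                               # unit response enforced at the presumed direction

def gain(theta):
    return float(abs(w.conj() @ steering_vector(theta, N)))

print("response at true DOA:", round(gain(true_doa), 3), " at interferer:", round(gain(interferer_doa), 3))
```

Because the constraint is enforced at the wrong angle, the beamformer partially nulls the desired signal along with the interferer, which is precisely the degradation the proposed robust technique is designed to avoid.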

Keywords: adaptive beamforming, robustness, signal blocking, steering angle error

Procedia PDF Downloads 99
18337 Expression of Human Papillomavirus Type 18 L1 Virus-Like Particles in the Methylotrophic Yeast, Pichia Pastoris

Authors: Hossein Rassi, Marjan Moradi Fard, Samaneh Niko

Abstract:

Human papillomavirus types 16 and 18 are closely associated with the development of human cervical carcinoma, which is one of the most common causes of cancer death in women worldwide. At present, HPV type 18 accounts for about 34% of all HPV infections in Iran, and the most promising vaccine against HPV infection is based on the L1 major capsid protein. The L1 protein of HPV18 has the capacity to self-assemble into capsomers or virus-like particles (VLPs) that are non-infectious and highly immunogenic, allowing their use in vaccine production. The methylotrophic yeast Pichia pastoris is an efficient and inexpensive expression system used to produce high levels of heterologous proteins. In this study, we expressed HPV18 L1 VLPs in P. pastoris. The gene encoding the major capsid protein L1 of the high-risk HPV type 18 was isolated from an Iranian patient by PCR and inserted into the pTG19-T vector to obtain the recombinant expression vector pTG19-HPV18-L1. Then, pTG19-HPV18-L1 was transformed into E. coli strain DH5α, and the recombinant HPV18 L1 protein was expressed in soluble form under IPTG induction. The HPV18 L1 gene was excised from the recombinant plasmid with XhoI and EcoRI enzymes, ligated into the yeast expression vector pPICZα linearized with the same enzymes, and transformed into P. pastoris. Induction and expression of the HPV18 L1 protein were demonstrated in BMGY/BMMY cultures and by RT-PCR. The parameters for induced cultivation of the P. pastoris KM71 strain with HPV16 L1 were investigated in shaking flask cultures. After induced cultivation in BMMY (pH 7.0) medium supplemented with methanol to a final concentration of 1.0% every 24 h at 37 degrees C for 96 h, the recombinant strain produced 78.6 mg/L of L1 protein. This work offers the possibility of producing a prophylactic vaccine against cervical carcinoma from the HPV-18 L1 gene in P. pastoris. VLP-based HPV vaccines can prevent persistent HPV18 infections and cervical cancer in Iran. The HPV-18 L1 gene was expressed successfully in E. coli, which provides the necessary basis for preparing an HPV-18 L1 vaccine for humans. Also, HPV type 6 L1 proteins expressed in Pichia pastoris will facilitate HPV vaccine development and structure-function studies.

Keywords: Pichia pastoris, L1 virus-like particles, human papillomavirus type 18, biotechnology

Procedia PDF Downloads 382
18336 Utilization of an Object Oriented Tool to Perform Model-Based Safety Analysis According to Extended Failure System Models

Authors: Royia Soliman, Salma ElAnsary, Akram Amin Abdellatif, Florian Holzapfel

Abstract:

Model-Based Safety Analysis (MBSA) is an approach in which the system and safety engineers share a common system model created using a model-based development process. The model can also be extended with the failure modes of the system components. There are two well-known approaches for adding fault behaviors to system models. The first is to embed the failure behavior directly into the system design. The second is to develop a fault model separately from the system model and then combine the two independent models for safety analysis. This paper introduces a hybrid MBSA approach. The approach uses informal abstracted models to investigate failure behaviors and combines various concepts such as directed graph traversal, event lists and Constraint Satisfaction Problems (CSP). The approach is implemented using an object-oriented programming language. Each component is abstracted to its failure logic and its relationships with connected components. The implemented approach is tested on various flight control systems, including electrical and multi-domain examples. The various tests are analyzed, and a comparison with different approaches is presented.
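
The directed-graph-traversal ingredient of such an approach can be sketched very simply: components are abstracted to their connectivity, and a traversal from a failed component yields the set of components a fault can propagate to. The flight-control component names below are invented for illustration, and the reachability check stands in for evaluating each component's actual failure logic.

```python
from collections import defaultdict, deque

# Hypothetical flight-control connectivity: edges point from a component to the components it feeds.
edges = {
    "sensor_A": ["voter"], "sensor_B": ["voter"],
    "voter": ["flight_computer"],
    "power_bus": ["flight_computer", "actuator"],
    "flight_computer": ["actuator"],
    "actuator": ["elevator_surface"],
}

def affected_by(failure, edges):
    # Directed-graph traversal: every component reachable from the failed one is potentially affected
    # (a stand-in for evaluating each component's failure logic along the path).
    graph = defaultdict(list, edges)
    seen, queue = {failure}, deque([failure])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return sorted(seen - {failure})

print("loss of power_bus propagates to:", affected_by("power_bus", edges))
```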

Keywords: flight control systems, model based safety analysis, safety assessment analysis, system modelling

Procedia PDF Downloads 132
18335 Using Probe Person Data for Travel Mode Detection

Authors: Muhammad Awais Shafique, Eiji Hato, Hideki Yaginuma

Abstract:

Recently, GPS data have been used in many studies to automatically reconstruct travel patterns for trip surveys. The aim is to minimize the use of questionnaire surveys and travel diaries so as to reduce their negative effects. In this paper, data acquired from the GPS and accelerometer embedded in smartphones are utilized to predict the mode of transportation used by the phone carrier. For prediction, Support Vector Machines (SVM) and Adaptive Boosting (AdaBoost) are employed. Moreover, a unique method to improve the prediction results from these algorithms is also proposed. Results suggest that the prediction accuracy of AdaBoost after improvement is relatively better than the rest.
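
The basic prediction step, per-segment features from GPS speed and accelerometer variability fed to SVM and AdaBoost, can be sketched as follows. The segments are synthetic, the three features and the typical magnitudes used to label them are assumptions, and the paper's improvement method is not reproduced.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)

def segment_features(mean_speed, speed_sd, acc_sd, n):
    # Per-segment features: mean speed (m/s), speed variability, accelerometer variability.
    return np.column_stack([rng.normal(mean_speed, 1.0, n),
                            np.abs(rng.normal(speed_sd, 0.3, n)),
                            np.abs(rng.normal(acc_sd, 0.2, n))])

# Synthetic walk / bicycle / car segments (typical magnitudes, used only to label the toy data).
X = np.vstack([segment_features(1.4, 0.3, 1.5, 60),
               segment_features(4.5, 1.0, 1.0, 60),
               segment_features(12.0, 4.0, 0.5, 60)])
y = np.repeat(["walk", "bicycle", "car"], 60)

for clf in (SVC(kernel="rbf", gamma="scale"), AdaBoostClassifier(n_estimators=100)):
    print(type(clf).__name__, round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```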

Keywords: accelerometer, AdaBoost, GPS, mode prediction, support vector machine

Procedia PDF Downloads 328
18334 Post-Earthquake Road Damage Detection by SVM Classification from Quickbird Satellite Images

Authors: Moein Izadi, Ali Mohammadzadeh

Abstract:

Detection of damaged parts of roads after an earthquake is essential for coordinating rescuers. In this study, an approach is presented for the semi-automatic detection of damaged roads in a city using pre-event vector maps and both pre- and post-earthquake QuickBird satellite images. Damage is defined in this study as the debris of damaged buildings adjacent to the roads. Several spectral and texture features are considered in an SVM classification step to detect damage. Finally, the proposed method is tested on QuickBird pan-sharpened images from the Bam City earthquake, and the results show that an overall accuracy of 81% and a kappa coefficient of 0.71 are achieved for the damage detection. The obtained results indicate the efficiency and accuracy of the proposed approach.

Keywords: SVM classifier, disaster management, road damage detection, quickBird images

Procedia PDF Downloads 595