Search results for: computer aided classifier.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1671

1041 Applications of Support Vector Machines on Smart Phone Systems for Emotional Speech Recognition

Authors: Wernhuar Tarng, Yuan-Yuan Chen, Chien-Lung Li, Kun-Rong Hsie, Mingteh Chen

Abstract:

An emotional speech recognition system for the applications on smart phones was proposed in this study to combine with 3G mobile communications and social networks to provide users and their groups with more interaction and care. This study developed a mechanism using the support vector machines (SVM) to recognize the emotions of speech such as happiness, anger, sadness and normal. The mechanism uses a hierarchical classifier to adjust the weights of acoustic features and divides various parameters into the categories of energy and frequency for training. In this study, 28 commonly used acoustic features including pitch and volume were proposed for training. In addition, a time-frequency parameter obtained by continuous wavelet transforms was also used to identify the accent and intonation in a sentence during the recognition process. The Berlin Database of Emotional Speech was used by dividing the speech into male and female data sets for training. According to the experimental results, the accuracies of male and female test sets were increased by 4.6% and 5.2% respectively after using the time-frequency parameter for classifying happy and angry emotions. For the classification of all emotions, the average accuracy, including male and female data, was 63.5% for the test set and 90.9% for the whole data set.
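
As a hedged illustration of the hierarchical SVM idea described in the abstract, the sketch below assumes the 28 acoustic features per utterance have already been extracted; the random data, the two-stage energy/frequency split, and the class labels are placeholders, not the authors' implementation.

```python
# Two-stage (hierarchical) SVM sketch: stage 1 separates high-arousal from
# low-arousal speech, stage 2 separates happy from angry. Features here are
# random stand-ins for the 28 acoustic features named in the abstract.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 28))          # 28 acoustic features per utterance
y = rng.integers(0, 4, size=200)        # 0=happy, 1=angry, 2=sad, 3=normal

# Stage 1: happy/angry (high arousal) vs. sad/normal, driven mainly by energy features.
stage1 = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
stage1.fit(X, np.isin(y, [0, 1]).astype(int))

# Stage 2: happy vs. angry, driven mainly by frequency/time-frequency features.
high = np.isin(y, [0, 1])
stage2 = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
stage2.fit(X[high], y[high])

print(stage1.predict(X[:3]), stage2.predict(X[high][:3]))
```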

Keywords: Smart phones, emotional speech recognition, social networks, support vector machines, time-frequency parameter, Mel-scale frequency cepstral coefficients (MFCC).

1040 The Size Effects of Keyboards (Keycaps) on Computer Typing Tasks

Authors: Chih-Chun Lai, Jun-Yu Wang

Abstract:

The keyboard is the most important piece of equipment for computer tasks. However, improper keyboard design can cause symptoms such as ulnar and/or radial deviation. The research goal of this study was to investigate the optimal size(s) of keycaps to increase typing efficiency. As shown in the questionnaire pre-study with 49 participants aged from 20 to 44, the most commonly used keyboards were 101-key standard keyboards, and most of the keycap sizes (W×L) were 1.3×1.5 cm and 1.5×1.5 cm. The fingertip breadth of most participants was 1.2 cm. Therefore, in the main study with 18 participants, a standard keyboard with each set of three keycap sizes (1.2×1.4 cm, 1.3×1.5 cm, and 1.5×1.5 cm) was used to investigate typing efficiency. The results revealed that the difference between the operating times for the 1.3×1.5 cm and 1.2×1.4 cm keycaps was insignificant, while the operating times for the 1.5×1.5 cm keycaps were significantly longer than for the 1.2×1.4 cm or 1.3×1.5 cm keycaps. As for typing error rate, there was no significant difference.

Keywords: Keyboard, Keycap size, Typing efficiency.

1039 Fitness Action Recognition Based on MediaPipe

Authors: Zixuan Xu, Yichun Lou, Yang Song, Zihuai Lin

Abstract:

MediaPipe is an open-source machine learning computer vision framework that can be ported to multiple platforms, which makes it convenient for recognizing human activity. Many human recognition systems have been built on this framework, but the fundamental issue remains the recognition of human behavior and posture. In this paper, two methods are proposed to recognize human gestures based on MediaPipe: the first uses the Adaptive Boosting algorithm to recognize a series of fitness gestures, and the second uses the Fast Dynamic Time Warping algorithm to recognize 413 continuous fitness actions. Both methods are also applicable to the recognition of any human posture movement.
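
A minimal sketch of the second method (Fast DTW over MediaPipe pose landmarks) is given below; the video file names, the normalized distance threshold and the OpenCV capture loop are illustrative assumptions, and the first method would analogously feed per-frame landmark vectors to scikit-learn's AdaBoostClassifier.

```python
# Compare a recorded pose-landmark sequence against a reference exercise
# using Fast DTW; one 33*3-dimensional vector is extracted per frame.
import cv2
import numpy as np
import mediapipe as mp
from fastdtw import fastdtw
from scipy.spatial.distance import euclidean

mp_pose = mp.solutions.pose

def landmark_sequence(video_path):
    """Return one flattened (x, y, z) pose vector per frame."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False) as pose:
        ok, frame = cap.read()
        while ok:
            res = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if res.pose_landmarks:
                frames.append([c for lm in res.pose_landmarks.landmark
                               for c in (lm.x, lm.y, lm.z)])
            ok, frame = cap.read()
    cap.release()
    return np.array(frames)

template = landmark_sequence("reference_squat.mp4")   # hypothetical files
attempt = landmark_sequence("user_attempt.mp4")
dist, _ = fastdtw(template, attempt, dist=euclidean)
print("matched" if dist / len(template) < 0.5 else "not matched")
```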

Keywords: Computer Vision, MediaPipe, Adaptive Boosting, Fast Dynamic Time Warping.

1038 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has advanced in recent years, predicting failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little valuable insight into the features that contribute most to the failure. By analyzing and quantifying feature importance in predictive maintenance models, cost saving can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict the multiple classes of failures. Second, predictive performance with features provided by different feature selection algorithms is further analyzed. Third, features selected by different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in the predictive models while others have significantly less impact on the overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost saving by prioritizing sensor deployment, data collection, and data processing of the more important features over the less important ones.
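
The workflow described above can be sketched at toy scale as below: cross-validated classifier comparison followed by a SHAP linear explainer. The sensor-like columns, the binary failure label used for the explainer, and the model choices are assumptions for illustration only.

```python
# Step 1: compare classifiers with cross-validation.
# Step 4: rank features with the SHAP linear explainer on a fitted linear model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))            # stand-ins for sensor features (temperature, torque, ...)
y_multi = rng.integers(0, 4, size=500)   # multi-class failure label
y_bin = (y_multi > 0).astype(int)        # failure vs. no failure, used for the explainer

for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
            LogisticRegression(max_iter=1000)):
    print(type(clf).__name__, cross_val_score(clf, X, y_multi, cv=5).mean())

lin = LogisticRegression(max_iter=1000).fit(X, y_bin)
explainer = shap.LinearExplainer(lin, X)
shap_values = explainer.shap_values(X)
print("global importance per feature:", np.abs(shap_values).mean(axis=0))
```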

Keywords: Automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation.

1037 Computer Based Medicine: I - The Future

Authors: Essam Abd-Elrazek

Abstract:

Throughout thirty years of local, national and international experience in medicine as a medical student, junior doctor and eventually Consultant and Professor in Anaesthesia, Intensive Care and Pain Management, I have noted significant generalised dissatisfaction among medical students and doctors regarding their medical education and practice. We repeatedly hear complaints from patients about the dysfunctional health care system they are dealing with and, subsequently, the poor medical service that they are receiving. Medical students are bombarded with lectures, tutorials, clinical rounds and various exams. Clinicians are weighed down with a never-ending array of competing duties. Patients are extremely unhappy about the long waiting lists, the loss of their records and the continuous deterioration of the health care service. This problem has been reported in different countries by several authors [1,2,3]. In an attempt to solve this dilemma, it has been suggested that computer technology be implemented in medicine [2,3]. Computers in medicine are a medium of international communication of the revolutionary advances being made in the application of the computer to the fields of bioscience and medicine [4,5]. Awareness about using computers in medicine has recently increased all over the world. At Misr University for Science & Technology (MUST), Egypt, medical students are now given hand-held computers (laptops) with Internet access, making their medical education accessible, convenient and up to date. However, this trial still needs to be validated. To help readers catch up with the ongoing rapid development in this interesting field, the author has decided to continue reviewing the literature, exploring the state of the art in computer-based medicine and updating medical professionals, especially the local trainee doctors in Egypt. In part I of this review article we give a general background discussing the potential use of computer technology in the various aspects of the medical field, including education, research, clinical practice and the health care service given to patients. We hope this will help start changing the culture and promote awareness about the importance of implementing information technology (IT) in medicine, a field in which such help is needed. International collaboration is recommended to support the emerging countries in achieving this target.

Keywords: Medical Informatics, telemedicine, e-health systems.

1036 The Analysis of Internet and Social Media Behaviors of the Students in the Higher School of Vocational and Technical Sciences

Authors: Mehmet Balci, Sakir Tasdemir, Mustafa Altin, Ozlem Bozok

Abstract:

Our globalizing world has become almost a small village, and everyone can access any information at any time. People constantly let each other know who is doing what, and where, and we can learn which social events occur in which part of the world. From the perspective of education, the course notes that a lecturer uses in lessons at a university in any state of America can be examined by a student studying in a city in Africa or the Far East. This dizzying level of communication has come about thanks to fast developments in computer and internet technologies. While these developments occur around the world, Turkey, which has a very large young population and whose electronic infrastructure is rapidly improving, has also been affected by them. Nowadays, mobile devices have become common, which increases data traffic in social networks. This study was carried out on students of different age groups in Selcuk University Vocational School of Technical Sciences, Department of Computer Technology. Students’ opinions about the use of the internet and social media were obtained. Features such as internet and social media skills, purposes of use, frequency of use, means and tools of access, social life, and effects on vocational education were explored. The positive and negative effects of internet and social media use on the students in this department were evaluated from different perspectives. In addition, relations and differences were determined statistically.

Keywords: Computer technologies, internet use, social network, higher vocational school.

1035 An Optimized Multi-block Method for Turbulent Flows

Authors: M. Goodarzi, P. Lashgari

Abstract:

In many turbulent flows, a major part of the flow field involves no complicated turbulent behavior. In this research work, in order to reduce the required memory and CPU time, the flow field was decomposed into several blocks, each block including its own turbulence model. A two-dimensional backward-facing step was considered here. Four combinations of the Prandtl mixing length and standard k-ε models were implemented as well. Computer memory and CPU time consumption, in addition to numerical convergence and accuracy of the obtained results, were mainly investigated. Observations showed that a suitable combination of turbulence models in different blocks led to results with the same accuracy as the high-order turbulence model applied to all of the blocks, in addition to reductions in memory and CPU time consumption.

Keywords: Computer memory, CPU time, Multi-block method, Turbulence modeling.

1034 Evaluation Factors of Clinical Decision Support System in u-Healthcare Service

Authors: Sun K. Yoo, Ki-Chang Nam, Hyun-Young Shin, Ho-Seong Moon, Hee Cheol Kang

Abstract:

Automated, intelligent clinical decision support systems generally help or assist physicians and patients with the prevention of diseases or the treatment of illnesses using computer-represented knowledge and information. In this paper, assessment factors affecting the proper design of a clinical decision support system were investigated. The procedural steps required for gathering data from clinical trials and extracting information from large volumes of healthcare repositories, which are necessary for the validation and verification of an evidence-based implementation of a clinical decision support system, were listed. The goal of this paper is to extract useful evaluation factors affecting the quality of the clinical decision support system in the design, development, and implementation of a computer-based decision support system.

Keywords: Evaluation, Clinical Decision Support System.

1033 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics

Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo

Abstract:

Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories: maximum likelihood hypothesis testing methods based on decision theory, and statistical pattern recognition methods based on feature extraction. The most commonly used is the statistical pattern recognition method, which includes feature extraction and classifier design. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at a low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature, and an extreme learning machine (ELM) is used to classify the extracted features, addressing the real-time requirements of modern warfare. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, so the performance of the algorithm is improved. This algorithm addresses the problem that a simple feature extraction algorithm based on the Holder coefficient is difficult to use for recognition at low SNR, and it also achieves better recognition accuracy. Simulation results show that the approach still gives a good classification result at low SNR; even when the SNR is -15 dB, the recognition accuracy reaches 76%.
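
For the classification stage, a minimal numpy sketch of an extreme learning machine is given below; the Holder/cloud-model feature extraction itself is not reproduced, so the two-dimensional features and four modulation classes are placeholders.

```python
# Extreme learning machine: random hidden layer, closed-form output weights.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=100):
    n_classes = y.max() + 1
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                 # random hidden-layer mapping
    T = np.eye(n_classes)[y]               # one-hot targets
    beta = np.linalg.pinv(H) @ T           # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

X = rng.normal(size=(300, 2))     # e.g., two Holder-coefficient features per signal
y = rng.integers(0, 4, size=300)  # modulation classes
model = elm_fit(X, y)
print("training accuracy:", (elm_predict(X, *model) == y).mean())
```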

Keywords: Communication signal, feature extraction, holder coefficient, improved cloud model.

1032 Widening Students' Perspective: Empowering Them with Systems Methodologies

Authors: Albertus G. Joubert, Roelien Goede

Abstract:

Benefits to the organisation are just as important as technical ability when it comes to software success. The challenge is to provide industry with professionals who understand this. In other words: How to teach computer engineering students to look beyond technology, and at the benefits of software to organizations? This paper reports on the conceptual design of a section of the computer networks module aimed to sensitize the students to the organisational context. Checkland focuses on different worldviews represented by various role players in the organisation. He developed the Soft Systems Methodology that guides purposeful action in organisations, while incorporating different worldviews in the modeling process. If we can sensitize students to these methods, they are likely to appreciate the wider context of application of system software. This paper will provide literature on these concepts as well as detail on how the students will be guided to adopt these concepts.

Keywords: Checkland, Soft Systems Methodology, Systems Approach, System Software.

1031 Security Design of Root of Trust Based on RISC-V

Authors: Kang Huang, Wanting Zhou, Shiwei Yuan, Lei Li

Abstract:

As information technology develops rapidly, security has become increasingly critical for computer systems. In particular, as cloud computing and the Internet of Things (IoT) continue to gain widespread adoption, computer systems need to withstand new security threats and attacks. The Root of Trust (RoT) is the foundation for providing basic trusted computing and is used to verify the security and trustworthiness of other components. Designing a reliable RoT and guaranteeing its own security are essential for improving the overall security and credibility of computer systems. In this paper, we discuss the implementation of self-security technology based on a RISC-V RoT at the hardware level. To effectively safeguard the security of the RoT, security safeguard technologies for the RoT are studied. First, a lightweight and secure boot framework is proposed as a security mechanism. Second, two kinds of memory protection mechanisms are built to defend against memory attacks. Moreover, the hardware implementation of the proposed method is also investigated. A series of experiments and tests were carried out to verify the effectiveness of the proposed method. The experimental results demonstrate that the proposed approach is effective in verifying the integrity of the RoT’s own boot ROM, user instructions, and data, ensuring authenticity and enabling the secure boot of the RoT’s own system. Additionally, our approach provides memory protection against certain types of memory attacks, such as cache leaks and tampering, and ensures the security of root-of-trust sensitive information, including keys.
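
The boot and memory-protection hardware cannot be reproduced in a few lines, but the integrity check at the core of secure boot can be illustrated. The sketch below is only a host-side analogy (hashing an image and comparing against a reference digest); the file name and digest are hypothetical, and a real RoT would verify a signature in hardware.

```python
# Conceptual integrity check behind secure boot: hash the boot image and
# compare with a reference value anchored in the root of trust.
import hashlib

def verify_image(image_path: str, expected_digest_hex: str) -> bool:
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(4096), b""):
            h.update(chunk)
    return h.hexdigest() == expected_digest_hex

# Hypothetical file and digest, for illustration only.
if verify_image("boot_rom.bin", "ab" * 32):
    print("boot image intact: continue boot")
else:
    print("integrity check failed: halt")
```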

Keywords: Root of Trust, secure boot, memory protection, hardware security.

1030 Computer-based Alarm Processing and Presentation Methods in Nuclear Power Plants

Authors: Jung-Woon Lee, Jung-Taek Kim, Jae-Chang Park, In-Koo Hwang, Sung-Pil Lyu

Abstract:

Computerized alarm systems have been applied increasingly to nuclear power plants. For existing plants, an add-on computer alarm system is often installed in the control rooms. Alarm avalanches during plant transients are a major problem with the alarm systems in nuclear power plants. Computerized alarm systems can process alarms to reduce the number of alarms during plant transients. This paper describes various alarm processing methods, an alarm cause tracking function, and various alarm presentation schemes for showing alarm information to the operators effectively. These were considered during the development of several computerized alarm systems for Korean nuclear power plants and were found to be helpful to the operators.

Keywords: Alarm processing, Alarm presentation, Alarm cause tracking, Alarm logic diagram computerization, Alarm pattern recognition.

1029 Localization of Geospatial Events and Hoax Prediction in the UFO Database

Authors: Harish Krishnamurthy, Anna Lafontant, Ren Yi

Abstract:

Unidentified Flying Objects (UFOs) have been an interesting topic for most enthusiasts, and hence people all over the United States report such findings online at the National UFO Report Center (NUFORC). Some of these reports are hoaxes, and among those that seem legitimate, our task is not to establish that these events are indeed related to flying objects from aliens in outer space. Rather, we intend to identify whether a report is a hoax, as identified by the UFO database team with their existing curation criteria. The database provides a wealth of information that can be exploited for various analyses and insights, such as social reporting, identifying real-time spatial events, and much more. We perform analysis to localize these time-series geospatial events and correlate them with known real-time events. This paper does not confirm any legitimacy of alien activity, but rather attempts to gather information from likely legitimate reports of UFOs by studying the online reports. These events happen in geospatial clusters and are also time-based. We look at cluster density and data visualization to search the space of various cluster realizations and decide on the most probable clusters, which provide information about the proximity of such activity. A random forest classifier is also presented that is used to identify true events and hoax events, using the best features available, such as region, week, time period and duration. Lastly, we show the performance of the scheme on various days and correlate it with real-time events, where one of the UFO reports correlates strongly with a missile test conducted in the United States.
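
A small sketch of the hoax classifier is shown below, assuming the NUFORC reports have already been reduced to the features named in the abstract; the column names, toy data and train/test split are illustrative.

```python
# Random forest over report-level features; the hoax label comes from the
# curated database in the paper, here it is a placeholder column.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

reports = pd.DataFrame({
    "region": ["CA", "WA", "CA", "TX"] * 50,
    "week": [1, 2, 3, 4] * 50,
    "time_period": ["night", "day", "night", "dusk"] * 50,
    "duration_s": [30, 600, 45, 120] * 50,
    "hoax": [0, 1, 0, 1] * 50,
})
X = pd.get_dummies(reports.drop(columns="hoax"))
y = reports["hoax"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```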

Keywords: Time-series clustering, feature extraction, hoax prediction, geospatial events.

1028 The Use of KREISIG Computer Simulation Program to Optimize Signalized Roundabout

Authors: Ahmad Munawar

Abstract:

KREISIG is a computer simulation program, first developed by Munawar (1994) in Germany, for optimizing signalized roundabouts. The traffic movement is based on car-following theory, and the turbine method has been implemented for signal setting. The program has since been further developed in Indonesia to match Indonesian traffic characteristics by adjusting driver sensitivity. A trial-and-error method was used to adjust the saturation flow, and the saturation flow output was also compared with the calculation method of the 1997 Indonesian Highway Capacity Manual. The program was then applied to optimize the signalized Kleringan roundabout in the Malioboro area, Yogyakarta, Indonesia. It was found that this method can optimize the signal setting of this roundabout; therefore, the program is recommended for optimizing signalized roundabouts.

Keywords: KREISIG, signalized roundabout, traffic.

1027 The Use of Computer Simulation as Technological Education for Crisis Management Staff

Authors: Jiří Barta, Josef Krahulec, Jiří F. Urbánek

Abstract:

Education and practical training of crisis management members are a topical issue nowadays. This paper deals with the perspectives and possibilities of "smart solutions" for educating crisis management staff. Currently, there are a large number of simulation tools that are suitable for the practical training of crisis management staff. The first part of the paper introduces these simulation tools. The simulators' aim is to create a realistic environment for the practical training of crisis staff units. The second part of the paper concerns the possibilities of applying simulation technology to the education process; its aim is to introduce the practical capabilities and potential of simulation programs for the practical training of crisis management staff.

Keywords: Crisis management staff, computer simulation, software, technological education.

1026 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain-computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the preprocessing module to remove noise and artifacts, using the Common Average Reference method; (iii) the feature extraction module, using the Wavelet Packet Transform (WPT); (iv) the classification module, based on a one-hidden-layer artificial neural network. The present study compares the recognition accuracy for 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with an accuracy of 67.5%. This reduction appears particularly important for improving the design of a low-cost, simple-to-use BCI trained for several words.
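
A compact sketch of the feature/classification pipeline is given below, with synthetic EEG in place of the recorded data; the choice of the 'db4' wavelet, the signal length and the hidden-layer size are assumptions, and the subband-selection step is reduced to taking all level-8 packet energies.

```python
# Wavelet-packet energies per channel feed a one-hidden-layer neural network.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wpt_energies(signal, wavelet="db4", level=8):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    return np.array([np.sum(node.data ** 2)
                     for node in wp.get_level(level, order="freq")])

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 100, 4, 1024   # 4 electrodes near the Wernicke area
eeg = rng.normal(size=(n_trials, n_channels, n_samples))
labels = rng.integers(0, 5, size=n_trials)       # 5 unspoken Arabic words (placeholder)

X = np.array([np.concatenate([wpt_energies(trial[ch]) for ch in range(n_channels)])
              for trial in eeg])
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```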

Keywords: Brain-computer interface, speech recognition, electroencephalography EEG, Wernicke area, artificial neural network.

1025 The Use of Ontology Framework for Automation Digital Forensics Investigation

Authors: Ahmad Luthfi

Abstract:

One of the main goals of a computer forensic analyst is to determine the cause and effect of the acquisition of digital evidence in order to obtain relevant information on the case being handled. In order to get fast and accurate results, this paper discusses the approach known as the Ontology Framework. This model uses a structured hierarchy of layers that creates connectivity between the variants and the search across investigative activities, so that computer forensic analysis activities can be carried out automatically. Two main layers are used, namely Analysis Tools and Operating System. By using the concept of ontology, the second layer is automatically designed to help the investigator perform the acquisition of digital evidence. The automation methodology of this research utilizes forward chaining, where the system performs a search over investigative steps that are automatically structured in accordance with the rules of the ontology.
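
The forward-chaining idea can be illustrated with a toy rule base, as below: investigative steps fire once their preconditions hold, so the acquisition plan unfolds automatically. The facts and rules are hypothetical, not the ontology used in the paper.

```python
# Forward chaining over (preconditions -> conclusion) rules until no new
# facts can be derived.
rules = [
    ({"device_seized"}, "image_acquired"),
    ({"image_acquired"}, "hash_verified"),
    ({"image_acquired", "hash_verified"}, "filesystem_parsed"),
    ({"filesystem_parsed"}, "artifacts_extracted"),
]

def forward_chain(initial_facts):
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"device_seized"}))
```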

Keywords: Ontology, Framework, Automation, Forensics.

1024 Hi-Fi Traffic Clearance Technique for Life Saving Vehicles using Differential GPS System

Authors: N. Yuvaraj, V. B. Prakash, D. Venkatraj

Abstract:

This paper may be considered a combination of pervasive computing and differential GPS (global positioning system), which relates to controlling automatic traffic signals in such a way as to pre-empt normal signal operation and give lifesaving vehicles priority. Because the arrival of a lifesaving vehicle is known before it reaches the signal, there is a chance to clear the traffic in advance. The traffic signal preemption system includes a vehicle equipped with an onboard computer system capable of capturing diagnostic information and the estimated location of the lifesaving vehicle, using the information provided by a GPS receiver connected to the onboard computer system, and transmitting this information with a wireless transmitter via a wireless network. The fleet management system, connected to a wireless receiver, is capable of receiving the information transmitted by the lifesaving vehicle. A computer located at the intersection uses corrected vehicle position, speed and direction measurements, in conjunction with previously recorded data defining approach routes to the intersection, to determine the optimum time to switch a traffic light controller to preemption mode so that lifesaving vehicles can pass safely. For the case when the ambulance needs to take a U-turn in a heavy-traffic area, we suggest a solution: a computerized median that uses removable linked blocks.
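
The intersection-side preemption decision can be illustrated as below: estimate the vehicle's arrival time from its (DGPS-corrected) position and speed, and switch to preemption mode when arrival is imminent. The coordinates, lead-time threshold and function names are illustrative assumptions, not the paper's implementation.

```python
# Haversine distance to the intersection, ETA from current speed, and a
# simple threshold decision for switching the controller to preemption.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi, dlmb = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def should_preempt(vehicle_fix, intersection, speed_mps, lead_time_s=20):
    dist = haversine_m(*vehicle_fix, *intersection)
    eta = dist / max(speed_mps, 0.1)
    return eta <= lead_time_s

print(should_preempt((13.0827, 80.2707), (13.0850, 80.2720), speed_mps=12.0))
```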

Keywords: Ubiquitous computing, differential GPS, fleet management system, wireless transmitter and receiver, computerized median (removable linked blocks).

1023 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). Especially, we introduced a texture analysis approach, called Law’s texture filter, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCCs). MATLAB technical computing language was employed in the extraction of 51 features by using first order statistics (FOS), gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), and Laws’ texture filters. The feature selection method employed was the sequential forward selection (SFS). Selected textural features were used in the automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with k-NN classifier (k=3) and 69% with SVM (with one versus one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with SVM one vs. one. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters as an objective tool to assess tumor histopathological characteristics and in automatic classification of tumor stage and subtype.
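
As an illustration of one part of the feature set described above, the sketch below computes a few GLCM properties from quantized tumor regions and feeds them to an SVM; FOS, GLRLM and Laws'-filter features would be added analogously. The regions and labels are synthetic, and the function names follow current scikit-image (graycomatrix/graycoprops).

```python
# GLCM texture features per segmented ROI, classified with an SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(roi_uint8):
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

rng = np.random.default_rng(0)
rois = [rng.integers(0, 256, size=(32, 32), dtype=np.uint8) for _ in range(40)]
X = np.array([glcm_features(r) for r in rois])
y = rng.integers(0, 2, size=40)                    # e.g., ADC vs. SqCC (placeholder labels)
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```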

Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.

1022 Detection of Moving Images Using Neural Network

Authors: P. Latha, L. Ganesan, N. Ramaraj, P. V. Hari Venkatesh

Abstract:

Motion detection is a basic operation in the selection of significant segments of video signals. For effective human-computer intelligent interaction, the computer needs to recognize the motion and track the moving object. Here, an efficient neural network system is proposed for motion detection against a static background. The method mainly consists of four parts: frame separation, rough motion detection, network formation and training, and object tracking. The approach can perform real-time detection, making it applicable to defense applications, biomedical applications and robotics. It can also be used to obtain detection information related to the size, location and direction of motion of moving objects for assessment purposes. The time taken for video tracking by this neural network is only a few seconds.
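
The first two stages (frame separation and rough motion detection) can be sketched with simple differencing against the static background, as below; the video file, threshold values and minimum-area test are illustrative, and the network formation/training and tracking stages are not reproduced.

```python
# Frame differencing against a static background as a rough motion detector.
import cv2

cap = cv2.VideoCapture("surveillance.avi")        # hypothetical input video
ok, background = cap.read()
bg_gray = cv2.cvtColor(background, cv2.COLOR_BGR2GRAY)

ok, frame = cap.read()
while ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, bg_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) > 500:              # rough motion flag
        x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
        print("motion at", (x, y), "size", (w, h))
    ok, frame = cap.read()
cap.release()
```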

Keywords: Frame separation, Correlation Network, Neural network training, Radial Basis Function, object tracking, Motion Detection.

1021 Error-Robust Nature of Genome Profiling Applied for Clustering of Species Demonstrated by Computer Simulation

Authors: Shamim Ahmed, Koichi Nishigaki

Abstract:

Genome profiling (GP), a genotype-based technology that exploits random PCR and temperature gradient gel electrophoresis, has been successful in the identification/classification of organisms. In this technology, spiddos (species identification dots) and PaSS (pattern similarity score) are employed for measuring the closeness (or distance) between genomes. Based on this closeness (PaSS), we can build up phylogenetic trees of the organisms. We noticed that the topology of the tree is rather robust against the experimental fluctuation conveyed by spiddos. This fact was confirmed quantitatively in this study by computer simulation, providing the limits of the reliability of this highly powerful methodology. As a result, we could demonstrate the effectiveness of the GP approach for the identification/classification of organisms.

Keywords: Fluctuation, Genome profiling (GP), Pattern similarity score (PaSS), Robustness, Spiddos-shift.

1020 Skills Development: The Active Learning Model of a French Computer Science Institute

Authors: N. Paparisteidi, D. Rodamitou

Abstract:

This article focuses on the skills development and path planning of students studying computer science at EPITECH, a French private institute of higher education. We examine students’ points of view and experience in a blended learning model based on a skills development curriculum. The study is based on the collection of four main categories of data: semi-participant observation, distribution of questionnaires, interviews, and analysis of internal school databases. The findings seem to indicate that a skills-based program built on active learning enables students to develop their learning strategies as well as their personal skills and to actively engage in the creation of their career path. The findings also provide additional information to curriculum planners and decision-makers about learning design in higher education.

Keywords: Active learning, blended learning, higher education, skills development.

1019 Markov Random Field-Based Segmentation Algorithm for Detection of Land Cover Changes Using Uninhabited Aerial Vehicle Synthetic Aperture Radar Polarimetric Images

Authors: Mehrnoosh Omati, Mahmod Reza Sahebi

Abstract:

Information on land use/land cover change plays an essential role in environmental assessment, planning and management in regional development. Remotely sensed imagery is widely used to provide information in many change detection applications. Polarimetric synthetic aperture radar (PolSAR) imagery, with its capability to discriminate between different scattering mechanisms, is a powerful tool for environmental monitoring applications. This paper proposes a new boundary-based segmentation algorithm as a fundamental step for land cover change detection. In this method, first, two PolSAR images are segmented using an integration of the marker-controlled watershed algorithm and a coupled Markov random field (MRF). Then, object-based classification is performed to determine changed/unchanged image objects. Compared with a pixel-based support vector machine (SVM) classifier, this novel segmentation algorithm significantly reduces the speckle effect in PolSAR images and improves the accuracy of binary classification at the object-based level. The experimental results on Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) polarimetric images show a 3% and 6% improvement in overall accuracy and kappa coefficient, respectively. Also, the proposed method can correctly distinguish homogeneous image parcels.

Keywords: Coupled Markov random field, environment, object-based analysis, Polarimetric SAR images.

1018 Identification of Spam Keywords Using Hierarchical Category in C2C E-commerce

Authors: Shao Bo Cheng, Yong-Jin Han, Se Young Park, Seong-Bae Park

Abstract:

Consumer-to-Consumer (C2C) e-commerce has been growing at a very high speed in recent years. Since identical or nearly identical kinds of products compete with one another through keyword search in C2C e-commerce, some sellers describe their products with spam keywords that are popular but are not related to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead consumers and waste their time. This problem has been reported in many commercial services like eBay and Taobao, but there has been little research to solve it. As a solution to this problem, this paper proposes a method to classify whether the keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is commonly observed in the specifications of products that are the same as, or of the same kind as, the given product. This is because the hierarchical category of a product is, in general, determined precisely by the seller of the product, and so is the specification of the product. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined differently according to the layer. Hence, reliability degrees from different layers of a hierarchical category become features for keywords, and they are used together with features from specifications alone for the classification of the keywords. Support vector machines are adopted as the basic classifier using these features, since they are powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated with a gold-standard dataset from Yi-han-wang, a Chinese C2C e-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which demonstrates that spam keywords are effectively identified by the hierarchical category in C2C e-commerce.
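
A toy version of the keyword classifier is sketched below: each keyword is described by its reliability degree at several layers of the product's category hierarchy plus a specification-based feature, and an SVM separates spam from legitimate keywords. The numbers are invented for illustration.

```python
# SVM over per-keyword reliability features from the category hierarchy.
import numpy as np
from sklearn.svm import SVC

# columns: [reliability in leaf category, parent category, grandparent, spec-based feature]
X = np.array([
    [0.80, 0.65, 0.40, 0.9],    # keyword common among same-kind products -> legitimate
    [0.02, 0.05, 0.10, 0.1],    # popular but unrelated keyword           -> spam
    [0.70, 0.50, 0.30, 0.8],
    [0.01, 0.03, 0.08, 0.0],
])
y = np.array([0, 1, 0, 1])      # 1 = spam keyword

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([[0.03, 0.04, 0.09, 0.2]]))     # likely flagged as spam
```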

Keywords: Spam Keyword, E-commerce, keyword features, spam filtering.

1017 Virtual Training, Human-Computer and Software Interactions, and Social-Based Embodiness

Authors: Philippe Fauquet-Alekhine

Abstract:

For professions in high-risk industries, simulation training has always been thought of in terms of a high degree of fidelity to the real operational situation. Due to recent progress, this way of training is changing, modifying human-computer and software interactions: the interactions between trainees during a simulation training session tend to become virtual, transforming social-based embodiness (the way subjects integrate social skills for interpersonal relationships with co-workers). On the basis of the analysis of training for eight different professions, a categorization of interactions has helped to produce an analytical tool, the social interactions table. This tool may be very valuable for pointing out the changes in social interactions when training sessions switch from a high-fidelity simulator to a virtual simulator. In this case, it helps the designers of professional training to analyze and assess the consequences of a potential lack of social-based embodiness.

Keywords: Interface, interaction, simulator, virtual training.

1016 Modeling of Bio Scaffolds: Structural and Fluid Transport Characterization

Authors: Sahba Sadir, M. R. A. Kadir, A. Öchsner, M. N. Harun

Abstract:

Scaffolds play a key role in tissue engineering and can be produced in many different ways depending on the applications and the materials used. Most researchers have used an experimental trial-and-error approach to new biomaterials, but computer simulation applied to tissue engineering can offer a more exhaustive approach to testing and screening biomaterials. This paper develops models of scaffolds and computational fluid dynamics simulations that show the value of computer simulation in determining the influence of the geometrical scaffold parameters (porosity, pore size and shape) on the permeability of scaffolds, the velocity magnitude, the pressure drop, and the shear stress distribution and level, as well as in the proper design of the scaffold geometry. This creates a need for more advanced studies in which the dynamic conditions of a microfluid passing through the scaffold are characterized for tissue engineering applications and for the differentiation of tissues within scaffolds.

Keywords: Scaffold engineering, Tissue engineering, Cellular structure, Biomaterial, Computational fluid dynamics.

1015 Automatic Removal of Ocular Artifacts using JADE Algorithm and Neural Network

Authors: V Krishnaveni, S Jayaraman, A Gunasekaran, K Ramadoss

Abstract:

The electroencephalogram (EEG) is useful for clinical diagnosis and biomedical research. EEG signals often contain strong electrooculogram (EOG) artifacts produced by eye movements and eye blinks, especially in EEG recorded from frontal channels. These artifacts obscure the underlying brain activity, making its visual or automated inspection difficult. The goal of ocular artifact removal is to remove ocular artifacts from the recorded EEG, leaving the underlying background signals due to brain activity. In recent times, Independent Component Analysis (ICA) algorithms have demonstrated superior potential in obtaining the least dependent source components. In this paper, the independent components are obtained by using the JADE algorithm (the best separating algorithm) and are classified as either artifact components or neural components. A neural network is used for the classification of the obtained independent components. A neural network requires input features that represent the true character of the input signals, so that it can classify the signals based on the key characteristics that differentiate between various signals. In this work, autoregressive (AR) coefficients are used as the input features for classification. Two neural network approaches are used to learn classification rules from the EEG data: first, a polynomial neural network (PNN) trained by the GMDH (Group Method of Data Handling) algorithm, and second, a feed-forward neural network classifier trained by a standard back-propagation algorithm. The results show that JADE-FNN performs better than JADE-PNN.
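
A compact sketch of the pipeline is given below with synthetic data; scikit-learn's FastICA stands in for the JADE algorithm, the AR order is assumed to be 6, and placeholder labels replace the expert labelling of artifact versus neural components.

```python
# ICA decomposition, AR coefficients per component, feed-forward classifier.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
eeg = rng.normal(size=(2000, 8))                 # samples x channels (synthetic)

sources = FastICA(n_components=8, random_state=0).fit_transform(eeg)

def ar_features(x, order=6):
    return AutoReg(x, lags=order).fit().params   # constant + 6 AR coefficients

X = np.array([ar_features(sources[:, i]) for i in range(sources.shape[1])])
y = np.array([1, 0, 0, 1, 0, 0, 1, 0])           # 1 = ocular artifact (placeholder labels)

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000).fit(X, y)
print(clf.predict(X))
```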

Keywords: Auto Regressive (AR) Coefficients, Feed Forward Neural Network (FNN), Joint Approximation Diagonalisation of Eigen matrices (JADE) Algorithm, Polynomial Neural Network (PNN).

1014 Distributed Motion Control Real-Time Contouring Algorithm Implementation and Performance Test

Authors: Francisco J. Lopez-Jaquez, Sandra E. Ramirez-Jara

Abstract:

This paper presents an implementation and performance test of a distributed motion control system based on a master-slave configuration used to move a plasma-cutting torch over a predefined trajectory. The master is a general-purpose computer running an open-source operating system platform and development software. Software running on the master computer generates commands in real time, and we measure performance based on a selected set of differences between expected and observed distances. We test the null hypothesis that the outcome trajectory is identical to the input against the alternative hypothesis that there is a shift to the right or left of the input trajectory. The Wilcoxon signed-rank test was used for the hypothesis test.
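
The hypothesis test itself can be illustrated directly with SciPy, as below; the expected and observed positions are placeholders for the measured deviations along the trajectory.

```python
# Wilcoxon signed-rank test on paired expected/observed positions.
import numpy as np
from scipy.stats import wilcoxon

expected = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0])
observed = np.array([0.1, 5.2, 9.9, 15.3, 20.1, 25.2, 29.8, 35.4])

stat, p = wilcoxon(expected, observed)           # H0: no systematic shift
print(f"W={stat:.2f}, p={p:.3f}",
      "-> no significant shift" if p > 0.05 else "-> systematic shift")
```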

Keywords: Distributed, motion, control, real-time, contouring.

1013 Implementation of Virtual Reality in the Conceptual Design of a Tractor Trailer

Authors: Arunesh Chandra, Pankaj Chandna

Abstract:

Virtual reality (VR) is a rapidly emerging computer interface that attempts to immerse the user completely within an experimental recreation, thereby greatly enhancing the overall impact and providing a much more intuitive link between the computer and the human participants. The main objective of this study is to design a tractor trailer capable of meeting customers’ requirements and suitable for rough conditions, to be used in combination with a farm tractor in India. The final concept provides arrangements for attaching the trailer to the tractor easily by a pickup hitch, a stronger and lighter supporting frame, the option of a spare tyre, etc. Furthermore, the resulting product design can be sent via the Internet to customers for comments or marketing purposes. The virtual prototyping (VP) system therefore facilitates advanced product design and helps reduce product development time and cost significantly.

Keywords: Conceptual design, Trailer, Virtual prototyping, Virtual reality.

1012 Experimental Parallel Architecture for Rendering 3D Model into MPEG-4 Format

Authors: Ajay Joshi, Surya Ismail

Abstract:

This paper presents the initial findings of research into distributed computer rendering. The goal of the research is to create a distributed computer system capable of rendering a 3D model into an MPEG-4 stream. This paper outlines the initial design, software architecture and hardware setup for the system. Distributed computing means designing and implementing programs that run on two or more interconnected computing systems, and it is often used to speed up the rendering of graphical imagery. Distributed computing systems are used to generate images for movies, games and simulations. A topic of interest is the application of distributed computing to the MPEG-4 standard. During the course of the research, a distributed system will be created that can render a 3D model into an MPEG-4 stream. It is expected that applying distributed computing principles will speed up rendering, thus improving the usefulness and efficiency of the MPEG-4 standard.

Keywords: Cluster, parallel architecture, rendering, MPEG-4.
