Search results for: learning ability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2981

431 A Local Decisional Algorithm Using Agent-Based Management in Constrained Energy Environment

Authors: C. Adam, G. Henri, T. Levent, J-B Mauro, A-L Mayet

Abstract:

Energy efficiency management lies at the heart of a worldwide problem. The capability of a multi-agent system, as a technology, to manage micro-grid operation has already been proved. This paper deals with the implementation of a decisional pattern applied to a multi-agent system which provides intelligence to a distributed local energy network considered at the local consumer level. Development of a multi-agent application involves agent specification, analysis, design, and realization, and it can be implemented by following several decisional patterns. The purpose of the present article is to suggest a new approach for a decisional pattern involving a multi-agent system to control a distributed local energy network in a decentralized competitive system. The proposed solution is the result of a dichotomous approach based on environment observation. It uses an iterative process to solve automatic learning problems and converges monotonically and very quickly to the system's attracting operating point.
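
As a rough illustration of the dichotomous, observation-driven iteration described above, the following Python sketch halves a search interval around a hypothetical setpoint until the observed imbalance vanishes; the `observe_imbalance` function and all numbers are illustrative placeholders, not part of the paper.

```python
def observe_imbalance(setpoint_kw: float) -> float:
    """Hypothetical environment observation: positive means over-production,
    negative means under-production at this local setpoint."""
    demand_kw = 7.3  # stand-in for the (unknown) local demand
    return setpoint_kw - demand_kw

def dichotomous_search(lo: float, hi: float, tol: float = 1e-3) -> float:
    """Halve the search interval until it is narrower than tol."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if observe_imbalance(mid) > 0:
            hi = mid          # producing too much: lower the setpoint
        else:
            lo = mid          # producing too little: raise the setpoint
    return (lo + hi) / 2.0

print(dichotomous_search(0.0, 20.0))  # shrinks monotonically toward ~7.3 kW
```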

Keywords: Energy Efficiency Management, Distributed Smart Grid, Multi-Agent System, Decisional Decentralized Competitive System.

430 In vitro Effects of Salvia officinalis on Bovine Spermatozoa

Authors: Eva Tvrdá, Boris Botman, Marek Halenár, Tomáš Slanina, Norbert Lukáč

Abstract:

In vitro storage and processing of animal semen represents a risk factor to spermatozoa vitality, potentially leading to reduced fertility. A variety of substances isolated from natural sources may exhibit protective or antioxidant properties on the spermatozoon, thus extending the lifespan of stored ejaculates. This study compared the effects of different concentrations of Salvia officinalis extract on the motility, mitochondrial activity, viability and reactive oxygen species (ROS) production of bovine spermatozoa during different time periods (0, 2, 6 and 24 h) of in vitro culture. Spermatozoa motility was assessed using the Computer-Assisted Sperm Analysis (CASA) system. Cell viability was examined using the metabolic activity MTT assay, the eosin-nigrosin staining technique was used to evaluate sperm viability, and ROS generation was quantified using luminometry. The CASA analysis revealed that the motility in the experimental groups supplemented with 0.5-2 µg/mL Salvia extract was significantly lower in comparison with the control (P<0.05; Time 24 h). At the same time, long-term exposure of spermatozoa to concentrations ranging between 0.05 µg/mL and 2 µg/mL had a negative impact on the mitochondrial metabolism (P<0.05; Time 24 h). The viability staining revealed that 0.001-1 µg/mL Salvia extract had no effects on bovine male gametes; however, 2 µg/mL Salvia had a persisting negative effect on spermatozoa (P<0.05). Furthermore, 0.05-2 µg/mL Salvia exhibited an immediate ROS-promoting effect on the sperm culture (P>0.05; Time 0 h and 2 h), which remained significant throughout the entire in vitro culture (P<0.05; Time 24 h). Our results point to the necessity of examining the specific effects that the biomolecules present in Salvia officinalis may have, individually or collectively, on in vitro sperm vitality and oxidative profile.

Keywords: Bulls, CASA, MTT test, reactive oxygen species, sage, Salvia officinalis, spermatozoa.

429 Inter-Organizational Knowledge Transfer Through Malaysia E-government IT Outsourcing: A Theoretical Review

Authors: Nor Aziati Abdul Hamid, Juhana Salim

Abstract:

The main objective of this paper is to contribute to the existing knowledge transfer and IT outsourcing literature, specifically in the context of Malaysia, by reviewing the current practices of e-government IT outsourcing in Malaysia, including the issues and challenges faced by public agencies in transferring knowledge during the engagement. This paper discusses various factors and different theoretical models of knowledge transfer, from the traditional models to the recent models suggested by scholars. The present paper attempts to align organizational knowledge with the knowledge-based view (KBV) and organizational learning (OL) lenses. This review could help shape the direction of both future theoretical and empirical studies on inter-firm knowledge transfer, specifically on how the KBV and OL perspectives could play a significant role in explaining the complex relationships between the client and vendor in inter-firm knowledge transfer, and on the role of organizational management information systems and the Transactive Memory System (TMS) in facilitating the organizational knowledge transfer process. A conclusion is drawn and further research is suggested.

Keywords: E-government, IT Outsourcing, Knowledge Management, Knowledge Transfer

428 Probiotic Potential and Antimicrobial Activity of Enterococcus faecium Isolated from Chicken Caecal and Fecal Samples

Authors: Salma H. Abu Hafsa, A. Mendonca, B. Brehm-Stecher, A. A. Hassan, S. A. Ibrahim

Abstract:

Enterococci are important inhabitants of the animal intestine and are widely used in probiotic products. A probiotic strain is expected to possess several desirable properties in order to exert beneficial effects. Therefore, the objective of this study was to isolate, characterize and identify Enterococcus sp. from chicken cecal and fecal samples to determine potential probiotic properties. Enterococci were isolated from the ceca and feces of thirty-three clinically healthy chickens from a local farm. In vitro studies were performed to assess the antibacterial activity of the isolated lactic acid bacteria (using the agar well diffusion and cell-free supernatant broth techniques against Salmonella enterica serotype Enteritidis), survival in acidic conditions, resistance to bile salts, and survival in simulated gastric juice at pH 2.5. Isolates were identified by biochemical carbohydrate fermentation patterns using API 50 CHL and API ZYM kits and by sequencing of 16S rDNA. An isolate belonging to the E. faecium species exhibited an inhibitory effect against S. Enteritidis. This isolate produced a clear zone of 10.30 mm or greater and was able to grow in the co-culture medium while inhibiting the growth of S. Enteritidis. In addition, E. faecium exhibited significant resistance under highly acidic conditions at pH 2.5 for 8 h, survived well in 0.2% bile salt for 24 h, and showed the ability to survive in simulated gastric juice at pH 2.5. Based on these results, the E. faecium isolate fulfills some of the criteria to be considered a probiotic strain and, therefore, could be used as a feed additive with good potential for controlling S. Enteritidis in chickens. However, in vivo studies are needed to determine the safety of the strain.

Keywords: Acid tolerance, antimicrobial activity, Enterococcus faecium, probiotic.

427 Investigation on Performance of Change Point Algorithm in Time Series Dynamical Regimes and Effect of Data Characteristics

Authors: Farhad Asadi, Mohammad Javad Mollakazemi

Abstract:

In this paper, Bayesian online inference in models of data series is constructed by a change-point algorithm, which separates the observed time series into independent segments and studies the change and variation of the regime of the data together with its related statistical characteristics. Variation in the statistical characteristics of time series data often represents separate phenomena in a dynamical system, such as a change in brain state reflected in measured EEG signal data, or a change in an important regime of the data in many other dynamical systems. In this paper, a prediction algorithm for studying change-point locations in time series data is simulated. It is verified that the pattern of the proposed data distribution is an important factor for a simpler and smoother fluctuation of the hazard rate parameter and also for better identification of change-point locations. Finally, the conditions under which the time series distribution affects the factors in this approach are explained and validated with different time series databases for several dynamical systems.
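
To make the change-point machinery concrete, the sketch below implements a minimal Bayesian online change-point detector with a constant hazard rate and Gaussian observations (conjugate prior on the mean); the hazard value, prior, and toy data are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from scipy import stats

def bocpd(data, hazard=1 / 100, mu0=0.0, var0=10.0, var_x=1.0):
    """Minimal Bayesian online change-point detection (constant hazard,
    Gaussian data with known variance, conjugate prior on the mean)."""
    T = len(data)
    R = np.zeros((T + 1, T + 1))        # R[t, r] = P(run length = r after t points)
    R[0, 0] = 1.0
    post_mean = np.array([mu0])         # posterior mean for every active run length
    post_var = np.array([var0])
    for t, x in enumerate(data, start=1):
        # Predictive probability of x under each run-length hypothesis.
        pred = stats.norm.pdf(x, post_mean, np.sqrt(post_var + var_x))
        growth = R[t - 1, :t] * pred * (1 - hazard)   # run length grows by one
        cp = np.sum(R[t - 1, :t] * pred * hazard)     # a change point occurs
        R[t, 1:t + 1] = growth
        R[t, 0] = cp
        R[t] /= R[t].sum()
        # Conjugate update; a fresh run starts again from the prior.
        new_var = 1.0 / (1.0 / post_var + 1.0 / var_x)
        new_mean = new_var * (post_mean / post_var + x / var_x)
        post_mean = np.concatenate(([mu0], new_mean))
        post_var = np.concatenate(([var0], new_var))
    return R

# Toy series with a mean shift halfway through.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)])
R = bocpd(data)
print(np.argmax(R[-1]))   # most probable current run length (close to 100)
```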

Keywords: Time series, fluctuation in statistical characteristics, optimal learning.

426 The Application of Line Balancing Technique and Simulation Program to Increase Productivity in Hard Disk Drive Components

Authors: Alonggot Limcharoen, Jintana Wannarat, Vorawat Panich

Abstract:

This study aims to investigate the balancing of the number of operators (line balancing technique) in the production line of hard disk drive components in order to increase efficiency. At present, the trend of using hard disk drives has continuously declined, limiting the company's revenue potential. It is important to improve and develop the production process to create market share and to be able to compete with competitors on value and quality; therefore, an effective tool is needed to support such matters. In this research, the Arena program was applied to analyze the results both before and after the improvement, and the simulated result was used as a guideline before proceeding with the real process. There were 14 work stations with 35 operators altogether in the RA production process where this study was conducted. In the actual process, the average production time was 84.03 seconds per piece (obtained by timing each work station 30 times), along with a rating assessment applying the Westinghouse principles. The rating was 123%, under an assumption of 5% allowance time; consequently, the standard time was 108.53 seconds per piece. The takt time, calculated as the available working duration in one day divided by customer demand, was 3.66 seconds per piece. From these figures, the proper number of operators was 30 people, meaning that five operators should be eliminated in order to improve the efficiency of the production process. After that, a production model was created from the actual process using the Arena program; to confirm model reliability, the simulation outputs were compared with the actual process, and their agreement indicated that the model was reliable. Then, worker numbers and their job responsibilities were remodeled in the Arena program. Lastly, the efficiency of the production process was enhanced from 70.82% to 82.63%, meeting the target.
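
The operator calculation described above can be reproduced with a few lines of arithmetic; the formulas below follow standard time-study practice and use the figures quoted in the abstract, while the exact company conventions are assumed.

```python
# Standard time and minimum operator count from the abstract's figures.
observed_time = 84.03          # s/piece, average of the time study
rating = 1.23                  # Westinghouse performance rating (123%)
allowance = 0.05               # 5% allowance time

normal_time = observed_time * rating              # ~103.36 s/piece
standard_time = normal_time * (1 + allowance)     # ~108.5 s/piece

takt_time = 3.66               # s/piece = available working time / daily demand
required_operators = standard_time / takt_time    # ~29.7 -> round up to 30

print(round(standard_time, 2), round(required_operators, 2))
```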

Keywords: Hard disk drive, line balancing, simulation, Arena program.

425 A Study on Early Prediction of Fault Proneness in Software Modules using Genetic Algorithm

Authors: Parvinder S. Sandhu, Sunil Khullar, Satpreet Singh, Simranjit K. Bains, Manpreet Kaur, Gurvinder Singh

Abstract:

Fault-proneness of a software module is the probability that the module contains faults. To predict the fault-proneness of modules, different techniques have been proposed, including statistical methods, machine learning techniques, neural network techniques and clustering techniques. The aim of the proposed study is to explore whether metrics available in the early lifecycle (i.e., requirement metrics), metrics available in the late lifecycle (i.e., code metrics), and the combination of early-lifecycle requirement metrics with late-lifecycle code metrics can be used to identify fault-prone modules using a Genetic Algorithm technique. This approach has been tested on real-time defect datasets of NASA software projects written in the C programming language. The results show that the fusion of requirement and code metrics is the best prediction model for detecting faults, compared with the commonly used code-based model.

Keywords: Genetic Algorithm, Fault Proneness, Software Fault and Software Quality.

424 A Real-Time Bayesian Decision-Support System for Predicting Suspect Vehicle’s Intended Target Using a Sparse Camera Network

Authors: Payam Mousavi, Andrew L. Stewart, Huiwen You, Aryeh F. G. Fayerman

Abstract:

We present a decision-support tool to assist an operator in the detection and tracking of a suspect vehicle traveling to an unknown target destination. Multiple data sources, such as traffic cameras, traffic information, weather, etc., are integrated and processed in real-time to infer a suspect’s intended destination chosen from a list of pre-determined high-value targets. Previously, we presented our work in the detection and tracking of vehicles using traffic and airborne cameras. Here, we focus on the fusion and processing of that information to predict a suspect’s behavior. The network of cameras is represented by a directional graph, where the edges correspond to direct road connections between the nodes and the edge weights are proportional to the average time it takes to travel from one node to another. For our experiments, we construct our graph based on the greater Los Angeles subset of the Caltrans’s “Performance Measurement System” (PeMS) dataset. We propose a Bayesian approach where a posterior probability for each target is continuously updated based on detections of the suspect in the live video feeds. Additionally, we introduce the concept of ‘soft interventions’, inspired by the field of Causal Inference. Soft interventions are herein defined as interventions that do not immediately interfere with the suspect’s movements; rather, a soft intervention may induce the suspect into making a new decision, ultimately making their intent more transparent. For example, a soft intervention could be temporarily closing a road a few blocks from the suspect’s current location, which may require the suspect to change their current course. The objective of these interventions is to gain the maximum amount of information about the suspect’s intent in the shortest possible time. Our system currently operates in a human-on-the-loop mode where at each step, a set of recommendations are presented to the operator to aid in decision-making. In principle, the system could operate autonomously, only prompting the operator for critical decisions, allowing the system to significantly scale up to larger areas and multiple suspects. Once the intended target is identified with sufficient confidence, the vehicle is reported to the authorities to take further action. Other recommendations include a selection of road closures, i.e., soft interventions, or to continue monitoring. We evaluate the performance of the proposed system using simulated scenarios where the suspect, starting at random locations, takes a noisy shortest path to their intended target. In all scenarios, the suspect’s intended target is unknown to our system. The decision thresholds are selected to maximize the chances of determining the suspect’s intended target in the minimum amount of time and with the smallest number of interventions. We conclude by discussing the limitations of our current approach to motivate a machine learning approach, based on reinforcement learning in order to relax some of the current limiting assumptions.
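
A minimal sketch of the posterior update over candidate targets is given below; the toy travel-time table and the exponential "detour" likelihood are illustrative assumptions, not the paper's exact model built on PeMS data.

```python
import math

travel_time = {                      # symmetric pairwise travel times (minutes), toy graph
    ("A", "B"): 5, ("B", "C"): 7, ("A", "C"): 11, ("A", "T1"): 20,
    ("B", "T1"): 15, ("C", "T1"): 9, ("A", "T2"): 9, ("B", "T2"): 12,
    ("C", "T2"): 18,
}
def tt(u, v):
    return 0 if u == v else travel_time.get((u, v), travel_time.get((v, u)))

targets = ["T1", "T2"]
posterior = {t: 1.0 / len(targets) for t in targets}   # uniform prior over targets

def update(posterior, prev_node, node, beta=0.3):
    """Bayes update after a detection at `node`, having last seen `prev_node`.
    Likelihood decays with the extra detour the move implies for each target."""
    for t in posterior:
        detour = tt(prev_node, node) + tt(node, t) - tt(prev_node, t)
        posterior[t] *= math.exp(-beta * max(detour, 0.0))
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

# Suspect detected at A, then B, then C (C is close to target T1).
path = ["A", "B", "C"]
for prev, cur in zip(path, path[1:]):
    posterior = update(posterior, prev, cur)
    print(cur, posterior)          # probability mass shifts toward T1
```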

Keywords: Autonomous surveillance, Bayesian reasoning, decision-support, interventions, patterns-of-life, predictive analytics, predictive insights.

423 Preparation and Characterization of Calcium Phosphate Cement

Authors: W. Thepsuwan, N. Monmaturapoj

Abstract:

Calcium phosphate cement (CPC) is one of the most attractive bioceramics due to its moldability and its ability to fill complicated bony cavities or small dental defect positions. In this study, CPC was produced using a mixture of tetracalcium phosphate (TTCP, Ca4O(PO4)2) and dicalcium phosphate anhydrous (DCPA, CaHPO4) in an equimolar ratio (1/1) with aqueous solutions of acetic acid (C2H4O2) and disodium hydrogen phosphate dihydrate (Na2HPO4.2H2O), in combination with sodium alginate in order to improve their moldable characteristics. The concentrations of the aqueous solutions and of sodium alginate were varied to investigate the effect of different aqueous solutions and alginate on the properties of the cements. The cement paste was prepared by mixing cement powder (P) with aqueous solution (L) at a P/L ratio of 1.0 g/0.35 ml. X-ray diffraction (XRD) was used to analyze the phase formation of the cements. Setting time and compressive strength of the set CPCs were measured using the Gillmore apparatus and a universal testing machine, respectively. The results showed that CPCs could be produced using both basic (Na2HPO4.2H2O) and acidic (C2H4O2) solutions. XRD results show the precipitation of hydroxyapatite in all cement samples, with no change in phase formation among cements using different concentrations of the Na2HPO4.2H2O solution. With increasing concentration of the acidic solution, the samples contained less hydroxyapatite and more dicalcium phosphate dihydrate, which led to a shorter setting time. Samples with sodium alginate exhibited higher crystallization of hydroxyapatite than those without alginate, resulting in a shorter setting time in the basic solution but a longer setting time in the acidic solution. The strongest cement among the acidic-solution samples was attained with sodium alginate; however, its strength was lower than that obtained using the basic solution.

Keywords: Calcium phosphate cements, TTCP, DCPA, hydroxyapatite, properties.

422 Suitability of Entry into the Euro Area: An Excursion in Selected Economies

Authors: Luděk Benada, Jindřiška Šedová

Abstract:

The current situation in the eurozone raises a number of topics for discussion and helps in finding an answer to the question of whether a common currency is a more suitable means of coping with the impact of the financial crisis or whether national currencies are better suited to this. The economic situation in the EU is now considerably volatile and, due to problems with the fulfilment of the Maastricht convergence criteria, it is now being considered whether, in their further development, new member states will decide to distance themselves from the euro or will, in an attempt to overcome the crisis, speed up the adoption of the euro. The Czech Republic is one country with little interest in adopting the euro, justified by the fact that a better alternative for dealing with this crisis is an independent monetary policy and its ability to respond flexibly to the economic situation not only in Europe, but around the world. One attribute of the crisis in the Czech Republic and its mitigation is the freely floating exchange rate of the national currency. It is not only the Czech Republic that is attempting to alleviate the impact of the crisis, but also new EU member countries facing fresh questions to which theory has yet to provide wholly satisfactory answers. These questions undoubtedly include the problem of inflation targeting and the choice of appropriate instruments for achieving financial stability. The difficulty lies in the fact that these objectives may be contradictory and may require more than one means of achieving them. In this respect we may assume that membership of the eurozone might not in itself mitigate the development of the recession or protect the nation from future crises. We are of the opinion that the decisive factor in the development of any economy will continue to be the domestic economic policy and the operability of market economic mechanisms. We attempt to document this fact using selected countries as examples, these being the Czech Republic, Poland, Hungary, and Slovakia.

Keywords: Currency exchange rate, Maastricht convergence criteria, monetary union, public finances.

421 Hybrid Anomaly Detection Using Decision Tree and Support Vector Machine

Authors: Elham Serkani, Hossein Gharaee Garakani, Naser Mohammadzadeh, Elaheh Vaezpour

Abstract:

Intrusion detection systems (IDS) are the main components of network security. These systems analyze network events for intrusion detection. An IDS is designed by training on normal traffic data or attack data, and machine learning methods are the best ways to design IDSs. In the method presented in this article, the pruning algorithm of the C5.0 decision tree is used to reduce the features of the traffic data, and the IDS is trained with the least squares support vector machine algorithm (LS-SVM). The remaining features are then ranked according to the predictor importance criterion, and the least important features are eliminated in that order. The features remaining at the stage that produces the highest accuracy in the LS-SVM are selected as the final features. Compared to other similar articles that have examined selected features in a least squares support vector machine model, the features obtained here give better accuracy, true positive rate, and false positive rate. The results are tested on the UNSW-NB15 dataset.
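
The feature-elimination loop can be sketched as follows with scikit-learn, where an ordinary decision tree and an RBF SVM stand in for C5.0 pruning and the LS-SVM, and a synthetic data set stands in for UNSW-NB15.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Toy stand-in for UNSW-NB15: 40 features, binary normal/attack labels.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: rank features by decision-tree importance (stand-in for C5.0 pruning).
tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
order = np.argsort(tree.feature_importances_)[::-1]     # most important first

# Step 2: drop the least important features one by one and keep the subset that
# maximizes accuracy of an RBF SVM (stand-in for the LS-SVM).
best_acc, best_k = 0.0, len(order)
for k in range(len(order), 0, -1):
    cols = order[:k]
    svm = SVC(kernel="rbf", gamma="scale").fit(X_tr[:, cols], y_tr)
    acc = accuracy_score(y_te, svm.predict(X_te[:, cols]))
    if acc >= best_acc:
        best_acc, best_k = acc, k
print(f"best subset size: {best_k}, accuracy: {best_acc:.3f}")
```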

Keywords: Intrusion detection system, decision tree, support vector machine, feature selection.

420 An Evaluation on the Effectiveness of a 3D Printed Composite Compression Mold

Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg

Abstract:

The applications of composite materials within the aviation industry have been increasing at a rapid pace. However, the growing applications of composite materials have also led to growing demand for more tooling to support their manufacturing processes. Tooling and tooling maintenance represent a large portion of the composite manufacturing process and cost. Therefore, the industry's adaptability to new techniques for fabricating high quality tools quickly and inexpensively will play a crucial role in composite materials' growing popularity in the aviation industry. One popular tool fabrication technique currently being developed involves additive manufacturing such as 3D printing. Although additive manufacturing and 3D printing are not entirely new concepts, the technique has been gaining popularity due to its ability to quickly fabricate components, maintain low material waste, and keep costs low. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite compression mold. A 3D printed composite compression mold was fabricated by 3D scanning a steel valve cover of an aircraft reciprocating engine, and it was used to fabricate carbon fiber versions of the valve cover. The 3D printed composite compression mold was evaluated for its performance, durability, and dimensional stability, while the fabricated carbon fiber valve covers were evaluated for their accuracy and quality. The results and data gathered from this study will determine the effectiveness of the 3D printed composite compression mold in a mass production environment and provide valuable information for future understanding, improvements, and design considerations of 3D printed composite molds.

Keywords: Additive manufacturing, carbon fiber, composite tooling, molds.

419 Towards an Enhanced Quality of IPTV Media Server Architecture over Software Defined Networking

Authors: Esmeralda Hysenbelliu

Abstract:

The aim of this paper is to present an enhanced Quality of Experience (QoE) IPTV SDN-based media streaming server architecture for configuring, controlling, managing and provisioning improved delivery of the IPTV service application with low cost, low bandwidth, and high security. Furthermore, a virtual QoE IPTV SDN-based topology is given to provide an improved IPTV service based on QoE control and management of multimedia service functionalities. Inside the OpenFlow SDN controller, service load-balancing systems are enabled with high flexibility and efficiency, based on a load-balancing module and on a GeoIP service. These two load-balancing systems greatly improve the IPTV end-users' Quality of Experience with optimal management of resources. Through the key functionalities of the OpenFlow SDN controller, this approach produced several important features and opportunities for overcoming critical QoE metrics for the IPTV service, such as achieving a very fast zapping time (channel switching time) of less than 0.1 seconds. The approach also enables an easy and powerful transcoding system via the FFmpeg encoder, with the ability to customize streaming dimensions, bitrates, latency management and maximum transfer rates, ensuring delivery of IPTV streaming services (audio and video) with high flexibility, low bandwidth and the required performance. Unlike other architectures, this QoE IPTV SDN-based media streaming architecture provides the possibility of channel exchange between several IPTV service providers all over the world. This new functionality brings many benefits, such as increasing the number of TV channels received by end-users at low cost, decreasing the stream failure time (channel failure time below 0.1 seconds) and improving the quality of streaming services.

Keywords: Improved QoE, OpenFlow SDN controller, IPTV service application, softwarization.

418 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kr. Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visible image (VI) fusion for various applications including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (IR) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image and a thermal IR camera acquires the thermal source image. In this paper, image fusion algorithms based on a Multi-Scale Transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes an implementation of the proposed image fusion algorithm in MATLAB along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are used to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. While deploying our image fusion approach, we observed several challenges with popular image fusion methods: although their high computational cost and complex processing steps provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time operation, high flexibility and low computational capability. The methods presented in this paper therefore offer good results with minimum time complexity.
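
A minimal Python sketch of multi-scale fusion is shown below, using a Laplacian pyramid with a per-pixel maximum-absolute rule on synthetic stand-in images; the paper's MATLAB implementation, region-based rule and consistency verification are not reproduced here.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid; the last element keeps the coarsest Gaussian level."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        h, w = gauss[i].shape[:2]
        up = cv2.pyrUp(gauss[i + 1], dstsize=(w, h))
        lap.append(gauss[i] - up)
    lap.append(gauss[-1])
    return lap

def fuse(vis, ir, levels=4):
    """Per-pixel max-absolute coefficient rule on detail levels, average on the base."""
    pv, pi = laplacian_pyramid(vis, levels), laplacian_pyramid(ir, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pv[:-1], pi[:-1])]
    fused.append((pv[-1] + pi[-1]) / 2.0)
    img = fused[-1]
    for lap in reversed(fused[:-1]):          # reconstruct from coarse to fine
        h, w = lap.shape[:2]
        img = cv2.pyrUp(img, dstsize=(w, h)) + lap
    return np.clip(img, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
vis = rng.integers(0, 256, (256, 256), dtype=np.uint8)   # stand-ins for registered
ir = rng.integers(0, 256, (256, 256), dtype=np.uint8)    # visible and thermal images
print(fuse(vis, ir).shape)
```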

Keywords: Image fusion, IR thermal imager, multi-sensor, Multi-Scale Transform.

417 A Web-Based Self-Learning Grammar for Spoken Language Understanding

Authors: S. M. Biondi, V. Catania, R. Di Natale, A. R. Intilisano, D. Panno

Abstract:

One of the major goals of Spoken Dialog Systems (SDS) is to understand what the user utters. In the SDS domain, the Spoken Language Understanding (SLU) module classifies user utterances by means of pre-defined conceptual knowledge, and it can only recognize meanings previously included in its knowledge base. Due to the vastness of that knowledge, storing the information is a very expensive process. Updating and managing the knowledge base are time-consuming and error-prone processes because of the rapidly growing number of entities such as proper nouns and domain-specific nouns. This paper proposes a solution to the problem of Named Entity Recognition (NER) applied to an SDS domain. The proposed solution attempts to automatically recognize the meaning associated with an utterance by using the PANKOW (Pattern-based Annotation through Knowledge On the Web) method at runtime. The proposed method extracts information from the Web to augment the SLU knowledge module and reduces the development effort. In particular, the Google Search Engine is used to extract information from the Facebook social network.

Keywords: Spoken Dialog System, Spoken Language Understanding, Semantic Web, Named Entity Recognition.

416 Machine Learning Approach for Identifying Dementia from MRI Images

Authors: S. K. Aruna, S. Chitra

Abstract:

This research paper presents a framework for classifying Magnetic Resonance Imaging (MRI) images for dementia. Dementia, an age-related cognitive decline, is indicated by degeneration of cortical and sub-cortical structures. Characterizing the morphological changes helps in understanding disease development and contributes to early prediction and prevention of the disease. Modelling that captures the brain's structural variability and that is valid for disease classification and interpretation is very challenging. Features are extracted using Gabor filters with 0°, 30°, 60° and 90° orientations and the Gray Level Co-occurrence Matrix (GLCM). It is proposed to normalize and fuse the features, and Independent Component Analysis (ICA) is used to select features. A Support Vector Machine (SVM) classifier with different kernels is evaluated for its efficiency in classifying dementia. This study evaluates the presented framework using MRI images from the OASIS dataset for identifying dementia. Results showed that the proposed feature fusion classifier achieves higher classification accuracy.
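
A compact sketch of the feature pipeline (Gabor responses, GLCM statistics, normalization, ICA, SVM) is given below on toy 2-D slices; it assumes a recent scikit-image/scikit-learn and does not reproduce the OASIS preprocessing or the paper's exact parameters.

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import graycomatrix, graycoprops
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

def extract_features(img_uint8):
    feats = []
    for theta in (0, 30, 60, 90):                       # Gabor orientations in degrees
        real, _ = gabor(img_uint8, frequency=0.2, theta=np.deg2rad(theta))
        feats += [real.mean(), real.var()]
    glcm = graycomatrix(img_uint8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats.append(graycoprops(glcm, prop)[0, 0])
    return np.array(feats)

rng = np.random.default_rng(0)
slices = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)   # toy "MRI" slices
labels = rng.integers(0, 2, size=40)                               # demented / non-demented

X = np.vstack([extract_features(s) for s in slices])
X = (X - X.mean(0)) / (X.std(0) + 1e-9)               # normalize the fused features
X = FastICA(n_components=6, random_state=0).fit_transform(X)
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```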

Keywords: Magnetic resonance imaging, dementia, Gabor filter, gray level co-occurrence matrix, support vector machine.

415 Predicting the Three Major Dimensions of the Learner's Emotions from Brainwaves

Authors: Alicia Heraz, Claude Frasson

Abstract:

This paper investigates how the use of machine learning techniques can significantly predict the three major dimensions of a learner's emotions (pleasure, arousal and dominance) from brainwaves. This study adopted an experiment in which participants were exposed to a set of pictures from the International Affective Picture System (IAPS) while their electrical brain activity was recorded with an electroencephalogram (EEG). The pictures had already been rated in a previous study via the affective rating system Self-Assessment Manikin (SAM) to assess the three dimensions of pleasure, arousal, and dominance. For each picture, we took the mean of these values for all subjects used in that previous study and associated them with the recorded brainwaves of the participants in our study. Correlation and regression analyses confirmed the hypothesis that brainwave measures could significantly predict emotional dimensions. This can be very useful in the case of impassive, taciturn or disabled learners. Standard classification techniques were used to assess the reliability of the automatic detection of the learners' three major dimensions from the brainwaves. We discuss the results and the pertinence of such a method to assess a learner's emotions and to integrate it into a brainwave-sensing Intelligent Tutoring System.
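
A minimal sketch of how band-power features extracted from EEG epochs could be regressed onto the three rated dimensions is shown below; the synthetic epochs, sampling rate, frequency bands and ratings are placeholders for the recorded data and SAM values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

fs = 128                                   # sampling rate (Hz), assumed
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Average spectral power in each EEG band for one epoch."""
    freqs = np.fft.rfftfreq(epoch.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    return [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands.values()]

rng = np.random.default_rng(0)
epochs = rng.standard_normal((120, fs * 4))          # 120 synthetic 4-second epochs
pad = rng.uniform(1, 9, size=(120, 3))               # pleasure, arousal, dominance ratings

X = np.array([band_powers(e) for e in epochs])
model = LinearRegression().fit(X, pad)               # multi-output regression
print(model.predict(X[:2]))                          # predicted P, A, D values
```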

Keywords: Algorithms, brainwaves, emotional dimensions, performance.

414 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection

Authors: Hamidullah Binol, Abdullah Bal

Abstract:

Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for detecting the safety status of goods. Hyperspectral Imaging (HSI) is an attractive technique for researchers inspecting food quality and safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection of apples, quality analysis and grading of citrus fruits, bruise detection of strawberries, visualization of the sugar distribution of melons, measuring the ripening of tomatoes, defect detection of pickling cucumbers, and classification of wheat kernels. HSI can be used to concurrently collect large amounts of spatial and spectral data on the objects being observed. This technique yields exceptional detection capability, which otherwise cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for the detection of fat content in ground meat using HSI. The KFKT, the nonlinear version of the FKT, is one of the most effective techniques for solving problems involving a two-pattern nature. The conventional FKT method has been improved with kernel machines to increase the nonlinear discrimination ability and to capture higher-order statistics of the data. The approach proposed in this paper aims to segment the fat content of the ground meat by regarding the fat as the target class, which is to be separated from the remaining classes (treated as clutter). We have applied the KFKT on visible and near-infrared (VNIR) hyperspectral images of ground meat to determine the fat percentage. The experimental studies indicate that the proposed technique produces high detection performance for the fat ratio in ground meat.
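
For intuition, the sketch below implements the linear Fukunaga-Koontz transform on toy spectra; the kernel version used in the paper replaces these correlation matrices with Gram matrices in a feature space, which is not reproduced here, and the toy data are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(0.0, 1.0, (200, 30))
target[:, :5] *= 4.0                      # target energy concentrated in bands 0-4 (toy)
clutter = rng.normal(0.0, 1.0, (200, 30))
clutter[:, 5:15] *= 4.0                   # clutter energy concentrated in bands 5-14 (toy)

R1 = target.T @ target / len(target)      # class correlation matrices
R2 = clutter.T @ clutter / len(clutter)

d, Phi = np.linalg.eigh(R1 + R2)          # whiten the summed correlation matrix
W = Phi @ np.diag(1.0 / np.sqrt(d))
lam, V = np.linalg.eigh(W.T @ R1 @ W)     # shared eigenvectors; eigenvalues lie in [0, 1]
F = W @ V                                 # FKT operator

top = np.argsort(lam)[::-1][:5]           # directions where the target class dominates

def target_score(x):
    proj = F[:, top].T @ x                # energy of x in the target-dominant subspace
    return float(proj @ proj)

# A target sample typically scores much higher than a clutter sample.
print(target_score(target[0]), target_score(clutter[0]))
```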

Keywords: Food (Ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods.

413 Reform-Oriented Teaching of Introductory Statistics in the Health, Social and Behavioral Sciences – Historical Context and Rationale

Authors: Rossi A. Hassad

Abstract:

There is widespread emphasis on reform in the teaching of introductory statistics at the college level. Underpinning this reform is a consensus among educators and practitioners that traditional curricular materials and pedagogical strategies have not been effective in promoting statistical literacy, a competency that is becoming increasingly necessary for effective decision-making and evidence-based practice. This paper explains the historical context of, and rationale for reform-oriented teaching of introductory statistics (at the college level) in the health, social and behavioral sciences (evidence-based disciplines). A firm understanding and appreciation of the basis for change in pedagogical approach is important, in order to facilitate commitment to reform, consensus building on appropriate strategies, and adoption and maintenance of best practices. In essence, reform-oriented pedagogy, in this context, is a function of the interaction among content, pedagogy, technology, and assessment. The challenge is to create an appropriate balance among these domains.

Keywords: Reform-oriented, reform, introductory statistics, health, behavioral sciences, evidence-based, psychology, teaching, learning.

412 Hearing Aids Maintenance Training for Hearing-Impaired Preschool Children with the Help of Motion Graphic Tools

Authors: M. Mokhtarzadeh, M. Taheri Qomi, M. Nikafrooz, A. Atashafrooz

Abstract:

The purpose of the present study was to investigate the effectiveness of using motion graphics as a learning medium for training hearing-impaired children in hearing aid maintenance skills. The statistical population of this study consisted of all children with hearing loss in Ahvaz city, aged 4 to 7 years. The sample of 60 children, selected by multistage random sampling, was randomly assigned to two groups: an experimental group (30 children) and a control group (30 children). The research method was experimental and the design was pretest-posttest with a control group. The intervention consisted of a 2-minute motion graphics clip training hearing aid maintenance skills. Data were collected using a 9-question researcher-made questionnaire and were analyzed using one-way analysis of covariance. Results showed that training hearing aid maintenance skills with motion graphics was significantly effective for these children. The results of this study can be used by educators, teachers, professionals, and parents to train children with disabilities or normal students.

Keywords: Hearing-impaired children, hearing aids, hearing aids maintenance skill, and motion graphics.

411 Grid-Connected Inverter Experimental Simulation and Droop Control Implementation

Authors: Nur Aisyah Jalalludin, Arwindra Rizqiawan, Goro Fujita

Abstract:

In this study, we aim to demonstrate a microgrid system experimental simulation for an easy understanding of a large-scale microgrid system. This model is required for industrial training and learning environments. However, in order to create an exact representation of a microgrid system, the laboratory-scale system must fulfill the requirements of a grid-connected inverter, in which power values are assigned to the system to cope with the intermittent output from renewable energy sources. Aside from that, during fluctuations in load capacity, the grid-connected system must be able to supply power from the utility grid side and the microgrid side in a balanced manner. Therefore, droop control is installed in the inverter's control board to maintain balanced power sharing on both sides. This power control in a stand-alone condition and droop control in a grid-connected condition must be implemented in order to maintain a stabilized system. Based on the experimental results, power control and droop control can both be applied in the system by comparing the experimental and reference values.
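
The conventional P-f and Q-V droop characteristics behind such a scheme can be written as f = f_nom - m_p (P - P_ref) and V = V_nom - n_q (Q - Q_ref); the sketch below evaluates them with illustrative coefficients and set-points, not the values used on the experimental control board.

```python
# Conventional P-f / Q-V droop characteristics with illustrative parameters.
F_NOM, V_NOM = 50.0, 230.0      # nominal frequency (Hz) and voltage (V)
P_REF, Q_REF = 1000.0, 0.0      # active/reactive power set-points (W, var)
M_P, N_Q = 1e-3, 5e-3           # droop slopes (Hz/W, V/var)

def droop_setpoints(p_measured, q_measured):
    """Frequency and voltage references computed from measured P and Q."""
    f_ref = F_NOM - M_P * (p_measured - P_REF)
    v_ref = V_NOM - N_Q * (q_measured - Q_REF)
    return f_ref, v_ref

# Exporting 200 W more than the set-point lowers the frequency reference slightly,
# which is how parallel units share the extra load.
print(droop_setpoints(1200.0, 50.0))   # -> (49.8, 229.75)
```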

Keywords: Droop control, droop characteristic, grid-connected inverter, microgrid, power control.

410 Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent

Authors: Zhifeng Kong

Abstract:

Over-parameterized neural networks have attracted a great deal of attention in recent deep learning theory research, as they challenge the classic perspective that a model with excessive parameters over-fits, and they have gained empirical success in various settings. While a number of theoretical works have been presented to demystify the properties of such models, their convergence properties are still far from being thoroughly understood. In this work, we study the convergence properties of training two-hidden-layer, partially over-parameterized, fully connected networks with the Rectified Linear Unit activation via gradient descent. To our knowledge, this is the first theoretical work to understand the convergence properties of deep over-parameterized networks without the equally-wide-hidden-layer assumption and other unrealistic assumptions. We provide a probabilistic lower bound on the widths of the hidden layers and prove a linear convergence rate for gradient descent. We also conducted experiments on synthetic and real-world datasets to validate our theory.
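
The training setup studied (not the proof itself) can be sketched as follows: a two-hidden-layer ReLU network trained by full-batch gradient descent on a least-squares loss; the widths, learning rate and synthetic data here are illustrative choices, not the paper's experimental configuration.

```python
import torch

torch.manual_seed(0)
n, d, m = 128, 10, 1024                      # samples, input dim, hidden width
X, y = torch.randn(n, d), torch.randn(n, 1)  # synthetic regression data

net = torch.nn.Sequential(
    torch.nn.Linear(d, m), torch.nn.ReLU(),
    torch.nn.Linear(m, m), torch.nn.ReLU(),   # second (over-parameterized) hidden layer
    torch.nn.Linear(m, 1),
)
opt = torch.optim.SGD(net.parameters(), lr=1e-2)   # plain full-batch gradient descent

for step in range(2001):
    loss = torch.nn.functional.mse_loss(net(X), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        print(step, loss.item())             # the loss typically decreases steadily
```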

Keywords: Over-parameterization, Rectified Linear Units (ReLU), convergence, gradient descent, neural networks.

409 Knowledge Reactor: A Contextual Computing Work in Progress for Eldercare

Authors: Scott N. Gerard, Aliza Heching, Susann M. Keohane, Samuel S. Adams

Abstract:

The world-wide population of people over 60 years of age is growing rapidly. The explosion is placing increasingly onerous demands on individual families, multiple industries and entire countries. Current, human-intensive approaches to eldercare are not sustainable, but IoT and AI technologies can help. The Knowledge Reactor (KR) is a contextual, data fusion engine built to address this and other similar problems. It fuses and centralizes IoT and System of Record/Engagement data into a reactive knowledge graph. Cognitive applications and services are constructed with its multi-agent architecture. The KR can scale up and scale down, because it exploits container-based, horizontally scalable services for graph store (JanusGraph) and pub-sub (Kafka) technologies. While the KR can be applied to many domains that require IoT and AI technologies, this paper describes how the KR specifically supports the challenging domain of cognitive eldercare. Rule-based and machine learning-based analytics infer activities of daily living from IoT sensor readings. KR scalability, adaptability, flexibility and usability are demonstrated.

Keywords: Ambient sensing, AI, artificial intelligence, eldercare, IoT, internet of things, knowledge graph.

408 Comparison of Composite Programming and Compromise Programming for Aircraft Selection Problem Using Multiple Criteria Decision Making Analysis Method

Authors: C. Ardil

Abstract:

In this paper, the comparison of composite programming and compromise programming for the aircraft selection problem is discussed using the multiple criteria decision analysis method. The decision making process requires the prior definition and fulfillment of certain factors, especially when it comes to complex areas such as aircraft selection problems. The proposed technique gives more efficient results by extending composite programming and compromise programming, which are widely used in modeling multiple criteria decisions. The proposed model is applied to a practical decision problem for evaluating and selecting aircraft. The selection of aircraft was made based on the proposed approach developed in the field of multiple criteria decision making. The model presented is solved using the following methods: composite programming and compromise programming. The importance values of the weight coefficients of the criteria are calculated using the mean weight method, and the evaluation and ranking of aircraft are carried out using the composite programming and compromise programming methods. In order to determine the stability of the model and the applicability of the developed composite programming and compromise programming approach, the paper analyzes its sensitivity. The first part of the sensitivity analysis involves changing the values of the coefficients λ and q. The second part relates to the application of the different multiple criteria decision making methods, composite programming and compromise programming. In the third part of the sensitivity analysis, the Spearman correlation coefficient of the ranks obtained was calculated, which confirms the applicability of all the proposed approaches.
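
One common form of the compromise-programming ranking, the weighted L_p distance to the ideal solution, is sketched below with an illustrative decision matrix and mean weights; the paper's actual criteria and data are not reproduced.

```python
import numpy as np

decision = np.array([           # rows: aircraft A1..A4, columns: criteria (toy values)
    [7.2, 850, 3.1, 0.82],
    [6.8, 920, 2.7, 0.90],
    [7.9, 780, 3.5, 0.76],
    [7.0, 900, 2.9, 0.88],
])
benefit = np.array([True, True, False, True])   # False marks a cost criterion
weights = np.full(4, 0.25)                      # mean weight method

best = np.where(benefit, decision.max(0), decision.min(0))    # ideal values
worst = np.where(benefit, decision.min(0), decision.max(0))   # anti-ideal values
norm = np.abs(best - decision) / np.abs(best - worst)         # normalized regret

def compromise_scores(p=2.0):
    """Weighted L_p distance of each alternative to the ideal (smaller is better)."""
    return np.power((weights * norm ** p).sum(axis=1), 1.0 / p)

for p in (1.0, 2.0):
    scores = compromise_scores(p)
    print(f"p={p}: ranking = {np.argsort(scores) + 1}, scores = {scores.round(3)}")
```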

Keywords: Composite programming, compromise programming, additive weighted model, multiplicative weighted model, multiple criteria decision making analysis, MCDMA, aircraft selection.

407 A Study of Primary School Parents' Interaction with Teachers in Malaysia

Authors: Shireen Simon

Abstract:

This study explores the interactions between primary school parents and teachers in Malaysia. Schools in the country are organized to promote participation between parents and teachers. Dialogue between parents and teachers is highly valued because teachers are in daily contact with pupils and are the first line of communication with parents, and teachers are considered by parents to be the most important connection for improving children's learning and well-being. Without good communication, interaction or involvement between parent and teacher, a pupil's performance in school might suffer. This study tries to uncover the multiple emotions, whether estranged or cordial, that arise when primary school parents and teachers communicate in Malaysia's multi-cultured society. Important issues related to parent-teacher interactions are discussed further, since efforts to provide better education in school are significantly more effective with parents' involvement. Lastly, this article proposes some suggestions for parents and teachers to build a positive relationship through effective communication and to establish a more democratic open-door policy.

Keywords: Multi-cultured society, parental involvement, parent-teacher relationships, parents’ interaction.

406 Preferred Character Size for Oblique Angles

Authors: Photjanat Phimnom, Haruetai Lohasiriwat

Abstract:

In today's world, LED displays are used to present visual information under various circumstances, and such information is an important intermediary in human information processing. Researchers have investigated diverse factors that influence the effectiveness of this process. The letter size is undoubtedly one major factor that has been tested and recommended by many standards and guidelines. However, viewing information on the display from a direct perpendicular position is a typical assumption, whereas many actual situations require viewing from an angle. The current research aims to study the effect of oblique viewing angle and viewing distance on the ability to recognize alphabets, numbers, and English words. A total of ten participants volunteered for our 3 x 4 x 4 within-subject study. The independent variables were three distance levels (2, 6, and 12 m), four oblique angles (0, 45, 60, and 75 degrees), and four target types (alphabet, number, short word, and long word). Following the method of constant stimuli, our study suggests that a larger oblique angle, ranging from 0 to 75 degrees from the line of sight, results in a significantly higher legibility threshold, i.e., a larger font size is required (p-value < 0.05). The viewing distance factor also shows a significant effect on the threshold (p-value < 0.05); however, the effect of the distance factor is expected to be confounded by the quality of the screen used in our experiment. Lastly, our results show that a single alphabet as well as a single number is recognized at a significantly lower threshold (smaller font size) compared to both short and long words (p-value < 0.05). Therefore, it is recommended that, when designing information to be presented on an LED display, the understanding of all possible ranges of oblique angles should be taken into account in order to specify the preferred letter size. Additionally, the recommended letter sizes for 100% legibility in our tested conditions are provided in the paper.

Keywords: Letter Size, Oblique Angle, Viewing Distance, Legibility Threshold.

405 Low Cost Real Time Robust Identification of Impulsive Signals

Authors: R. Biondi, G. Dys, G. Ferone, T. Renard, M. Zysman

Abstract:

This paper describes an automated, implementable system for the detection and recognition of impulsive signals. The system uses a Digital Signal Processing device for the detection and identification process, analysing the signals in real time in order to produce a specific output if needed. Detection is achieved by normalizing the inputs and comparing the read signals to a dynamic threshold, thus avoiding detections linked to loud or fluctuating environmental noise. Identification is done through neural network algorithms. As a setup, our system can receive signals in order to "learn" certain patterns; through this learning the system can recognize signals faster, providing flexibility for new patterns similar to those already known. Sound is captured through a simple jack input and could be replaced by an enhanced recording device such as a wide-area recorder. Furthermore, a communication module can be added to the apparatus to send alerts to another interface if needed.
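
A minimal sketch of the detection stage, normalizing the input and flagging samples that exceed a dynamic threshold tracking the background noise, is given below; the window length and threshold factor are illustrative choices rather than the system's actual settings.

```python
import numpy as np

def detect_impulses(signal, fs, window_s=0.05, k=8.0):
    """Return indices where the signal envelope exceeds k times a running
    background-noise estimate (median absolute level over a sliding window)."""
    x = signal / (np.max(np.abs(signal)) + 1e-12)      # normalize the input
    win = max(1, int(window_s * fs))
    env = np.abs(x)
    hits = []
    for i in range(win, len(x)):
        noise = np.median(env[i - win:i])              # dynamic noise estimate
        if env[i] > k * (noise + 1e-6):                # dynamic threshold
            hits.append(i)
    return np.array(hits)

# Toy test: low-level background noise plus two clicks.
fs = 8000
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(fs)
x[2000] += 1.0
x[6000] -= 0.8
print(detect_impulses(x, fs))   # detects the two clicks at samples 2000 and 6000
```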

Keywords: Sound Detection, Impulsive Signal, Background Noise, Neural Network.

404 Using Speech Emotion Recognition as a Longitudinal Biomarker for Alzheimer’s Disease

Authors: Yishu Gong, Liangliang Yang, Jianyu Zhang, Zhengyu Chen, Sihong He, Xusheng Zhang, Wei Zhang

Abstract:

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that affects millions of people worldwide and is characterized by cognitive decline and behavioral changes. People living with Alzheimer’s disease often find it hard to complete routine tasks. However, there are limited objective assessments that aim to quantify the difficulty of certain tasks for AD patients compared to non-AD people. In this study, we propose to use speech emotion recognition (SER), especially the frustration level as a potential biomarker for quantifying the difficulty patients experience when describing a picture. We build an SER model using data from the IEMOCAP dataset and apply the model to the DementiaBank data to detect the AD/non-AD group difference and perform longitudinal analysis to track the AD disease progression. Our results show that the frustration level detected from the SER model can possibly be used as a cost-effective tool for objective tracking of AD progression in addition to the Mini-Mental State Examination (MMSE) score.

Keywords: Alzheimer’s disease, Speech Emotion Recognition, longitudinal biomarker, machine learning.

403 Multi-Modal Visualization of Working Instructions for Assembly Operations

Authors: Josef Wolfartsberger, Michael Heiml, Georg Schwarz, Sabrina Egger

Abstract:

Growing individualization and higher numbers of variants in industrial assembly products raise the complexity of manufacturing processes. Technical assistance systems considering both procedural and human factors allow for an increase in product quality and a decrease in required learning times by supporting workers with precise working instructions. Due to varying needs of workers, the presentation of working instructions leads to several challenges. This paper presents an approach for a multi-modal visualization application to support assembly work of complex parts. Our approach is integrated within an interconnected assistance system network and supports the presentation of cloud-streamed textual instructions, images, videos, 3D animations and audio files along with multi-modal user interaction, customizable UI, multi-platform support (e.g. tablet-PC, TV screen, smartphone or Augmented Reality devices), automated text translation and speech synthesis. The worker benefits from more accessible and up-to-date instructions presented in an easy-to-read way.

Keywords: Assembly, assistive technologies, augmented reality, manufacturing, visualization.

402 Text Mining Technique for Data Mining Application

Authors: M. Govindarajan

Abstract:

Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. The decision tree approach is most useful in classification problems: with this technique, a tree is constructed to model the classification process, and there are two basic steps, building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that applies rulesets, cross-validation and boosting to the original C5.0 in order to reduce the error ratio. The feasibility and the benefits of the proposed approach are demonstrated by means of a medical data set, hypothyroid. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate of its true accuracy; whether by sampling or by using a separate test file, the classifier should be evaluated on cases that were not used to build it, and both sets should be large. If the cases in hypothyroid.data and hypothyroid.test were to be shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset here has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation, in which the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
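
The two evaluation ideas discussed, f-fold cross-validation and boosting, can be sketched with scikit-learn standing in for See5/C5.0 and a synthetic, imbalanced data set standing in for hypothyroid.data / hypothyroid.test.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 3772 cases (2772 + 1000), rare positive class, 25 attributes.
X, y = make_classification(n_samples=3772, n_features=25, n_informative=8,
                           weights=[0.95, 0.05], random_state=0)

single_tree = DecisionTreeClassifier(random_state=0)
boosted = AdaBoostClassifier(n_estimators=10, random_state=0)   # 10-classifier boosting

for name, clf in (("single tree", single_tree), ("10-classifier boosting", boosted)):
    acc = cross_val_score(clf, X, y, cv=10)        # f-fold cross-validation with f = 10
    print(f"{name}: error rate = {1 - acc.mean():.3%} (std {acc.std():.3%})")
```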

Keywords: C5.0, Error Ratio, text mining, training data, test data.
