Search results for: Cognitive complexity metric
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1291

991 A Survey on Performance Tools for OpenMP

Authors: Mubarak S. Mohsen, Rosni Abdullah, Yong M. Teo

Abstract:

Advances in processor architecture, such as multicore, increase the complexity of parallel computer systems. With multi-core architectures there are different parallel languages that can be used to run parallel programs. One of these languages is OpenMP, which is embedded in C/C++ or FORTRAN. Because of this new architecture and the associated complexity, it is very important to evaluate the performance of OpenMP constructs, kernels, and application programs on multi-core systems. Performance analysis is the activity of collecting information about the execution characteristics of a program. Performance tools consist of at least three interfacing software layers: instrumentation, measurement, and analysis. The instrumentation layer defines the measured performance events. The measurement layer determines which performance events are actually captured and how they are measured by the tool. The analysis layer processes the performance data and summarizes it into a form that can be displayed by the tool. In this paper, a number of OpenMP performance tools are surveyed, explaining how each is used to collect, analyse, and display performance data.

Keywords: Parallel performance tools, OpenMP, multi-core.

990 Exploration of Autistic Children using Case Based Reasoning System with Cognitive Map

Authors: Ebtehal Alawi Alsaggaf, Shehab A. Gamalel-Din

Abstract:

Exploring an autistic child in elementary school is a difficult task that must be fully thought out, and teachers should be aware of the many challenges they face raising such a child, especially the behavioral problems of autistic children. Hence there arises a need for developing contemporary Artificial Intelligence (AI) techniques to help diagnose and identify autistic people. In this research, we propose an expert system architecture that combines Cognitive Maps (CM) with the Case Based Reasoning technique (CBR) in order to reduce the time and cost of the traditional diagnosis process for the early detection of autistic children. The teacher enters the child's information, which is analyzed by the CM module. The reasoning processor then translates the output into a case, and the current problem is solved by the CBR module. We will implement a prototype of the model as a proof of concept using Java and MySQL. This provides a new hybrid approach that achieves new synergies and improves problem-solving capabilities in AI. We predict that it will reduce time, costs, and the number of human errors, and make expertise available to more people who want to serve autistic children and their families.

Keywords: Autism, Cognitive Maps (CM), Case Based Reasoning technique (CBR).

989 Towards an AS Level Network Performance Model

Authors: Huan Xiong, Ming Chen

Abstract:

In order to study the Internet quantitatively and better model network performance, this paper proposes a novel AS-level network performance model (MNPM). It takes the autonomous system (AS) as the basic modeling unit, measures E2E performance between any two egress points (outdegrees) of an AS, and organizes the measurement results into a matrix called the performance matrix (PM). Inter-AS performance calculation is defined according to the performance information stored in the PM. A simulation has been implemented to verify the correctness of MNPM, and a practical application of MNPM (network congestion detection) is given.

Keywords: AS, network performance, model, metric, congestion.

988 Physical Activity and Cognitive Functioning Relationship in Children

Authors: Comfort Mokgothu

Abstract:

This study investigated the relation between information processing and fitness level of active (fit) and sedentary (unfit) children drawn from rural and urban areas in Botswana. It was hypothesized that fit children would display faster simple reaction time (SRT), choice reaction time (CRT), and simple movement time (SMT). Sixty third-grade children (7.0-9.0 years) were initially selected and, based upon fitness testing, 45 participated in the study (15 each of fit urban, unfit urban, and fit rural). All children completed anthropometric measures, skinfold testing, and submaximal cycle ergometer testing. The cognitive testing included SRT, CRT, SMT, choice movement time (CMT), and memory sequence length. Results indicated that the rural fit group exhibited faster SMT than the urban fit and unfit groups. For CRT, both fit groups were faster than the unfit group. Collectively, the study shows that the relationship that exists between physical fitness and cognitive function among the elderly can tentatively be extended to the pediatric population. Physical fitness could be a factor in the speed at which we process information, including decision making, even in children.

Keywords: Decision making, fitness, information processing, reaction time, cognition, movement time.

987 The Mechanism Underlying Empathy-Related Helping Behavior: An Investigation of Empathy-Attitude-Action Model

Authors: Wan-Ting Liao, Angela K. Tzeng

Abstract:

Empathy has been an important issue in psychology, education, as well as cognitive neuroscience. Empathy has two major components: cognitive and emotional. The cognitive component refers to the ability to understand others' perspectives, thoughts, and actions, whereas the emotional component refers to understanding how others feel. Empathy can be induced, attitude can then be changed, and with enough attitude change, helping behavior can occur. This finding leads us to two questions: is attitude change really necessary for prosocial behavior, and what roles do cognitive and affective empathy play? For the second question, participants with different psychopathic personality (PP) traits are critical, because people high in PP were found to suffer only an affective empathy deficit; their cognitive empathy shows no significant difference from the control group. 132 college students voluntarily participated in the current three-stage study. Stage 1 collected basic information including the Interpersonal Reactivity Index (IRI), the Psychopathic Personality Inventory-Revised (PPI-R), an Attitude Scale, a Visual Analogue Scale (VAS), and demographic data. Stage 2 was an empathy induction with three controversial scenarios, namely domestic violence, depression with a suicide attempt, and an ex-offender. Participants read all three stories and then rewrote the stories from one of two perspectives (empathetic vs. objective). They then completed the VAS and Attitude Scale one more time for their post-attitude and emotional status. Three IVs were introduced for data analysis: PP (high vs. low), responsibility (whether or not the character is responsible for what happened), and perspective-taking (empathic vs. objective). Stage 3 was the action stage: participants were instructed to freely use the 17 tokens they received as donations. They were debriefed and interviewed at the end of the experiment. The major findings were that people with higher empathy tend to take more action in helping, and that attitude change is not necessary for prosocial behavior. The controversy of the scenarios and how familiar participants are with the target groups play very important roles. Finally, people with high PP tend to show more public prosocial behavior due to their affective empathy deficit. Pre-existing values and beliefs as well as recent dramatic social events seem to have a big impact and possibly reduce the effect of the independent variables (IVs) in our paradigm.

Keywords: Affective empathy, attitude, cognitive empathy, prosocial behavior, psychopathic traits.

986 Quantitative Ranking Evaluation of Wine Quality

Authors: A. Brunel, A. Kernevez, F. Leclere, J. Trenteseaux

Abstract:

Today, wine quality is evaluated only by wine experts with their own, differing personal tastes, even if they may agree on some common features. Producers therefore do not have any unbiased way to independently assess the quality of their products. A tool is proposed here to evaluate wine quality by an objective ranking based upon the variables entering wine elaboration, analysed through the principal component analysis (PCA) method. Actual climatic data are compared by measuring the relative distance between each considered wine, out of which the general ranking is performed.
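
For illustration, a minimal sketch of a PCA-based ranking of this kind, assuming a purely hypothetical table of per-wine climatic variables and using distance to the centroid in PC space (the paper's actual variables, reference point and distance definition are not specified here):

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical climatic/elaboration variables per wine (rows = wines, columns = variables).
rng = np.random.default_rng(0)
wines = ["A", "B", "C", "D", "E"]
X = rng.normal(size=(5, 6))            # e.g. temperature, rainfall, sunshine hours, ...

# Standardize, then project onto the leading principal components.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
scores = PCA(n_components=2).fit_transform(Xs)

# Rank wines by their distance to a chosen reference point in PC space
# (here the centroid; the paper's reference and metric may differ).
dist = np.linalg.norm(scores - scores.mean(axis=0), axis=1)
for name, d in sorted(zip(wines, dist), key=lambda t: t[1]):
    print(f"wine {name}: distance {d:.3f}")
```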

Keywords: Wine, grape, vine, weather conditions, rating, climate, principal component analysis, metric analysis.

985 A Portable Cognitive Tool for Engagement Level and Activity Identification

Authors: T. Teo, S. W. Lye, Y. F. Li, Z. Zakaria

Abstract:

Wearable devices such as Electroencephalography (EEG) headsets hold immense potential for monitoring and assessing a person's task engagement, especially at remote or online sites. Research into their use in measuring an individual's cognitive state while performing task activities is therefore expected to increase. Despite the growing number of EEG studies into the brain functioning activities of a person, key challenges remain in adopting EEG for real-time operations. These include limited portability, long preparation time, high channel dimensionality, intrusiveness, as well as the level of accuracy in acquiring neurological data. This paper proposes an approach using 4-6 EEG channels to determine the cognitive states of a subject undertaking a set of passive and active monitoring tasks. Air traffic controller (ATC) dynamic tasks are used as a proxy. The work found that, using a developed channel reduction and identifier algorithm, good trend adherence of 89.1% can be obtained between a commercially available brain computer interface (BCI) 14-channel Emotiv EPOC+ EEG headset and a carefully selected reduced set of 4-6 channels. The approach can also identify different levels of engagement activity, ranging from general monitoring to ad hoc and repeated active monitoring activities involving information search, extraction, and memory.

Keywords: Neurophysiology, monitoring, EEG, outliers, electroencephalography.

984 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs based on Machine Learning Algorithms

Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios

Abstract:

Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity and aflatoxinogenic capacity of the strains, and the topography, soil and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for aflatoxin contamination of dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P and trace elements (B, Fe, Mn, Zn and Cu), by employing machine learning methods. In particular, the proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (Principal Component Analysis), metric learning (Mahalanobis Metric for Clustering) and the K-nearest Neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% in terms of the Pearson Correlation Coefficient (PCC) between observed and predicted values.
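
A minimal sketch of a pipeline in this spirit (PCA for dimensionality reduction followed by a KNN predictor, evaluated by the Pearson correlation), assuming entirely synthetic soil/topography features; the paper's Mahalanobis metric-learning step is omitted and the plain Euclidean metric is used instead:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for orchard features (pH, EC, cations, altitude, ...) and aflatoxin level.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 15))
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=200)   # toy target, not real data

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale -> PCA -> KNN regression (Euclidean metric; MMC metric learning omitted).
model = make_pipeline(StandardScaler(), PCA(n_components=5), KNeighborsRegressor(n_neighbors=5))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

# Pearson correlation between observed and predicted values, as in the paper's evaluation.
pcc = np.corrcoef(y_te, pred)[0, 1]
print(f"PCC = {pcc:.2f}")
```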

Keywords: Aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction.

983 Evaluation of Video Quality Metrics and Performance Comparison on Contents Taken from Most Commonly Used Devices

Authors: Pratik Dhabal Deo, Manoj P.

Abstract:

With the increasing number of social media users, the amount of available video content has also significantly increased. The number of smartphone users is currently at its peak, and many increasingly use their smartphones as their main photography and recording devices. There have been many developments in the field of video quality assessment in recent years, and more research on various other aspects of video and images is being done. Datasets that contain a huge number of videos from different high-end devices make it difficult to analyze the performance of the metrics on content from the most used devices, even if they contain content taken in poor lighting conditions using lower-end devices. These devices face a lot of distortions due to various factors, since the spectrum of content recorded on them is huge. In this paper, we present an analysis of objective Video Quality Assessment (VQA) metrics on content taken only from the most used devices and their performance on it, focusing on full-reference metrics. To carry out this research, we created a custom dataset containing a total of 90 videos taken on the three most commonly used devices: an Android smartphone, an iOS smartphone and a Digital Single-Lens Reflex (DSLR) camera. On the videos taken on each of these devices, the six most common types of distortions that users face have been applied, in addition to the already existing H.264 compression, based on four reference videos. These six applied distortions have three levels of degradation each. The five most popular VQA metrics have been evaluated on this dataset, and the highest and lowest values of each metric on the distortions have been recorded. It was found that blur is the artifact on which most of the metrics did not perform well. Thus, in order to understand the results better, the amount of blur in the dataset was calculated, and an additional evaluation of the metrics was done using the High Efficiency Video Coding (HEVC) codec, the successor of H.264 compression, on the camera that proved to be the sharpest among the devices. The results show that as the resolution increases, the performance of the metrics tends to become more accurate. The best performing metric among them is VQM, with very few inconsistencies and inaccurate results when the compression applied is H.264; when the compression applied is HEVC, the Structural Similarity (SSIM) metric and Video Multimethod Assessment Fusion (VMAF) perform significantly better.
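
As an illustration of the kind of full-reference comparison evaluated here, a minimal per-frame SSIM sketch over synthetic frames standing in for a decoded reference clip and a distorted version of it (the paper's actual metrics, codecs and dataset are not reproduced):

```python
import numpy as np
from skimage.metrics import structural_similarity

# Synthetic stand-ins for a grayscale reference clip and a degraded version of it.
rng = np.random.default_rng(1)
ref_frames = rng.random((10, 240, 320))                      # 10 frames, values in [0, 1]
dist_frames = np.clip(ref_frames + 0.05 * rng.normal(size=ref_frames.shape), 0.0, 1.0)

# Full-reference score: average frame-wise SSIM over the clip.
scores = [
    structural_similarity(r, d, data_range=1.0)
    for r, d in zip(ref_frames, dist_frames)
]
print(f"mean SSIM over clip: {np.mean(scores):.3f}")
```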

Keywords: Distortion, metrics, recording, frame rate, video quality assessment.

982 Optimizing the Capacity of a Convolutional Neural Network for Image Segmentation and Pattern Recognition

Authors: Yalong Jiang, Zheru Chi

Abstract:

In this paper, we study the factors that determine the capacity of a Convolutional Neural Network (CNN) model and propose ways to evaluate and adjust the capacity of a CNN model so that it best matches a specific pattern recognition task. Firstly, a scheme is proposed to adjust the number of independent functional units within a CNN model to make it better fit a task. Secondly, the number of independent functional units in the capsule network is adjusted to fit it to the training dataset. Thirdly, a method based on Bayesian GAN is proposed to enrich the variances in the current dataset to increase its complexity. Experimental results on the PASCAL VOC 2010 Person Part dataset and the MNIST dataset show that, in both conventional CNN models and capsule networks, the number of independent functional units is an important factor that determines the capacity of a network model. By adjusting the number of functional units, the capacity of a model can better match the complexity of a dataset.

Keywords: CNN, capsule network, capacity optimization, character recognition, data augmentation, semantic segmentation.

981 MPSO based Model Order Formulation Technique for SISO Continuous Systems

Authors: S. N. Deepa, G. Sugumaran

Abstract:

This paper proposes a new variant of Particle Swarm Optimization (PSO), namely Modified PSO (MPSO), for model order formulation of Single Input Single Output (SISO) linear time-invariant continuous systems. In the general PSO, the movement of a particle is governed by three behaviors, namely inertia, cognitive and social. The cognitive behavior helps the particle remember its previously visited best position. The Modified PSO technique splits the cognitive behavior into two parts: the previously visited best position and the previously visited worst position. This modification helps the particle search the target very effectively. The MPSO approach is proposed for model order formulation. The method is based on minimizing the error between the transient responses of the original higher order model and the reduced order model for a unit step input. The results obtained are compared with earlier techniques to validate its ease of computation. The proposed method is illustrated through a numerical example from the literature.
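
A minimal sketch of the kind of velocity update described here, with the cognitive term split between a particle's best and worst visited positions (the coefficient values and the sphere test function are illustrative assumptions, not the paper's settings or its step-response error objective):

```python
import numpy as np

def sphere(x):                       # toy objective; the paper minimizes a step-response error
    return float(np.sum(x ** 2))

rng = np.random.default_rng(0)
n_particles, dim, iters = 20, 4, 200
w, c1b, c1w, c2 = 0.7, 1.2, 0.6, 1.5        # inertia, cognitive-best, cognitive-worst, social

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pworst = x.copy(), x.copy()
pbest_f = np.array([sphere(p) for p in x])
pworst_f = pbest_f.copy()
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2, r3 = rng.random((3, n_particles, dim))
    # Split cognitive term: attract toward the personal best, repel from the personal worst.
    v = (w * v
         + c1b * r1 * (pbest - x)
         - c1w * r2 * (pworst - x)
         + c2 * r3 * (gbest - x))
    x = x + v
    f = np.array([sphere(p) for p in x])
    better, worse = f < pbest_f, f > pworst_f
    pbest[better], pbest_f[better] = x[better], f[better]
    pworst[worse], pworst_f[worse] = x[worse], f[worse]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best value found:", sphere(gbest))
```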

Keywords: Continuous System, Model Order Formulation, Modified Particle Swarm Optimization, Single Input Single Output, Transfer Function Approach

980 An Improved Quality Adaptive Rate Filtering Technique Based on the Level Crossing Sampling

Authors: Saeed Mian Qaisar, Laurent Fesquet, Marc Renaudin

Abstract:

Most systems deal with time-varying signals. Power efficiency can be achieved by adapting the system activity to the input signal variations. In this context, an adaptive rate filtering technique based on level-crossing sampling is devised. It adapts the sampling frequency and the filter order by following the local variations of the input signal, thus correlating the processing activity with the signal variations. Interpolation is required in the proposed technique; a drastic reduction in the interpolation error is achieved by exploiting symmetry during the interpolation process. The processing error of the proposed technique is calculated. The computational complexity of the proposed filtering technique is deduced and compared to the classical one. Results promise a significant gain in computational efficiency and hence in power consumption.
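
A minimal sketch of level-crossing sampling itself, on a synthetic signal with a uniformly spaced grid of quantization levels (the paper's adaptive filter-order selection and interpolation are not reproduced):

```python
import numpy as np

# Dense "analog" reference: a slowly varying test signal.
t = np.linspace(0.0, 1.0, 10_000)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 20 * t)

# Level-crossing sampling: emit a sample whenever the signal crosses a quantization level.
levels = np.arange(-1.5, 1.6, 0.25)           # assumed uniform level grid
samples = []                                   # (time, level) pairs
last_level = levels[np.argmin(np.abs(levels - x[0]))]
for ti, xi in zip(t, x):
    crossed = levels[(levels > min(last_level, xi)) & (levels < max(last_level, xi))]
    if crossed.size:
        last_level = crossed[-1] if xi > last_level else crossed[0]
        samples.append((ti, last_level))

print(f"{len(samples)} level-crossing samples from {len(t)} uniform points")
# The local density of samples tracks the signal's local slope, which is what an
# adaptive-rate filter can exploit to adjust its sampling frequency and order.
```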

Keywords: Level Crossing Sampling, Activity Selection, Rate Filtering, Computational Complexity, Interpolation Error.

979 Generalized Chaplygin Gas and Varying Bulk Viscosity in Lyra Geometry

Authors: A. K. Sethi, R. N. Patra, B. Nayak

Abstract:

In this paper, we have considered the Friedmann-Robertson-Walker (FRW) metric with a generalized Chaplygin gas possessing bulk viscosity in the context of Lyra geometry. The viscosity is considered in two different ways (i.e., zero viscosity and a non-constant, ρ-dependent bulk viscosity) using a constant deceleration parameter, which leads to the conclusion that, for a special case, the viscous generalized Chaplygin gas reduces to the modified Chaplygin gas. The presented model indicates the presence of Chaplygin gas in the Universe. Observational constraints are applied and discussed regarding the physical and geometrical nature of the Universe.

Keywords: Bulk viscosity, Lyra geometry, generalized Chaplygin gas, cosmology.

978 Evaluation of Cognitive Benefits among Differently Abled Subjects with Video Game as Intervention

Authors: H. Nagendra, Vinod Kumar, S. Mukherjee

Abstract:

In this study, the potential benefits of playing an action video game among congenitally deaf and dumb subjects are reported in terms of EEG ratio indices. The frontal and occipital lobes are associated with the development of motor skills, cognition, visual information processing and color recognition. Sixteen hours of first-person shooter action video game play resulted in an increase of the ratios β/(α+θ) and β/θ in the frontal and occipital lobes. This can be attributed to the enhancement of certain aspects of cognition among deaf and dumb subjects.
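
A minimal sketch of how such EEG band-power ratio indices can be computed from a single channel using Welch's method (synthetic data and nominal band edges are assumptions; this is not the study's recording or preprocessing pipeline):

```python
import numpy as np
from scipy.signal import welch

fs = 256                                      # assumed sampling rate in Hz
rng = np.random.default_rng(0)
eeg = rng.normal(size=fs * 60)                # one minute of synthetic single-channel "EEG"

f, psd = welch(eeg, fs=fs, nperseg=fs * 2)
df = f[1] - f[0]

def band_power(lo, hi):
    """Sum the PSD over the frequency band [lo, hi) Hz."""
    mask = (f >= lo) & (f < hi)
    return np.sum(psd[mask]) * df

theta = band_power(4, 8)                      # assumed nominal band edges
alpha = band_power(8, 13)
beta = band_power(13, 30)

print("beta/(alpha+theta) =", beta / (alpha + theta))
print("beta/theta         =", beta / theta)
```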

Keywords: Cognitive enhancement, video games, EEG band powers, Deaf and Dumb subjects.

977 AC Signals Estimation from Irregular Samples

Authors: Predrag B. Petrović

Abstract:

The paper deals with the estimation of the amplitude and phase of an analogue multi-harmonic band-limited signal from irregularly spaced sampling values. To this end, assuming the signal fundamental frequency is known in advance (i.e., estimated at an independent stage), a complexity-reduced algorithm for signal reconstruction in the time domain is proposed. The reduction in complexity is achieved owing to completely new analytical and summarized expressions that enable a quick estimation at a low numerical error. The proposed algorithm for the calculation of the unknown parameters requires O((2M+1)²) flops, while the straightforward solution of the obtained equations takes O((2M+1)³) flops (M is the number of harmonic components). It can be applied in signal reconstruction, spectral estimation, system identification, as well as in other important signal processing problems. The proposed processing method can be used for precise RMS measurements (for power and energy) of a periodic signal based on the presented signal reconstruction. The paper investigates the errors related to the signal parameter estimation, and a computer simulation demonstrates the accuracy of the algorithm.
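
For illustration, a minimal least-squares baseline for this estimation problem (the straightforward O((2M+1)³) solve, not the paper's complexity-reduced algorithm), assuming the fundamental frequency is known and using synthetic irregularly spaced samples:

```python
import numpy as np

rng = np.random.default_rng(0)
f0, M = 50.0, 3                                # known fundamental (Hz), number of harmonics

# True parameters of a multi-harmonic signal and irregular sampling instants.
amps = np.array([1.0, 0.4, 0.15])
phases = np.array([0.3, -1.0, 2.0])
t = np.sort(rng.uniform(0.0, 0.1, 40))         # irregularly spaced samples over 0.1 s
y = sum(a * np.cos(2 * np.pi * (k + 1) * f0 * t + p)
        for k, (a, p) in enumerate(zip(amps, phases)))

# Design matrix: a DC column plus cos/sin columns per harmonic -> 2M+1 unknowns.
cols = [np.ones_like(t)]
for k in range(1, M + 1):
    cols += [np.cos(2 * np.pi * k * f0 * t), np.sin(2 * np.pi * k * f0 * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Recover amplitude and phase of each harmonic from its cos/sin coefficients.
for k in range(1, M + 1):
    c, s = coef[2 * k - 1], coef[2 * k]
    print(f"harmonic {k}: amplitude {np.hypot(c, s):.3f}, phase {np.arctan2(-s, c):.3f} rad")
```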

Keywords: Band-limited signals, Fourier coefficient estimation, analytical solutions, signal reconstruction, time.

976 Study of Qualitative and Quantitative Metric for Pixel Factor Mapping and Extended Pixel Mapping Method

Authors: Indradip Banerjee, Souvik Bhattacharyya, Gautam Sanyal

Abstract:

In this paper, an approach is presented to investigate the performance of Pixel Factor Mapping (PFM) and the Extended Pixel Mapping Method (PMM) through qualitative and quantitative analysis. These methods are tested against a number of well-known image similarity metrics and statistical distribution techniques. PFM has been performed in the spatial domain as well as the frequency domain, and the Extended PMM has been performed in the spatial domain, using a large set of images available on the internet.

Keywords: Qualitative, quantitative, PFM, extended PMM.

975 Creating a Space for Teaching Problem Solving Skills to Engineering Students through English Language Teaching

Authors: Mimi N. A. Mohamed

Abstract:

The complexity of teaching English in higher institutions by non-native speakers within a second/foreign language setting has created continuous discussion and research about teaching approaches and teaching practices, professional identities and challenges. In addition, there is a growing awareness that teaching English within discipline-specific contexts adds to the existing complexity. This awareness leads to reassessments, discussions and suggestions on course design, content, and teaching approaches and techniques. In meeting the expectations of teaching at a university specialized in a particular discipline such as engineering, English language educators are required not only to teach students to communicate effectively in English but also to teach soft skills such as problem solving. This paper is part of a research project conducted to investigate how English language educators negotiate the complexities of teaching problem solving skills through English language teaching at a technical university. This paper reports the way an English language educator identified himself and the way he approached his teaching in this institutional context.

Keywords: English Language Teaching, Teacher Agency, Problem Solving Skills, Professional Identities.

974 Using the Technology Acceptance Model to Examine Seniors’ Attitudes toward Facebook

Authors: Chien-Jen Liu, Shu Ching Yang

Abstract:

Using the technology acceptance model (TAM), this study examined the external variable of technological complexity (TC) to acquire a better understanding of the factors that influence the acceptance of computer application courses by learners at Active Aging Universities. After the learners in this study had completed a 27-hour Facebook course, 44 learners responded to a modified TAM survey. Data were collected to examine the path relationships among the variables that influence the acceptance of Facebook-mediated community learning. The partial least squares (PLS) method was used to test the measurement and structural models. The results demonstrated that attitudes toward Facebook use directly influence behavioral intention (BI) with respect to Facebook use, evincing a high prediction rate of 58.3%. In addition to the perceived usefulness (PU) and perceived ease of use (PEOU) measures proposed in the TAM, other external variables, such as TC, also indirectly influence BI. These four variables can explain 88% of the variance in BI and demonstrate a high level of predictive ability. Finally, limitations of this investigation and implications for further research are discussed.

Keywords: Technology acceptance model (TAM), technological complexity, partial least squares (PLS), perceived usefulness.

973 A Framework for Early Differential Diagnosis of Tropical Confusable Diseases Using the Fuzzy Cognitive Map Engine

Authors: Faith-Michael E. Uzoka, Boluwaji A. Akinnuwesi, Taiwo Amoo, Flora Aladi, Stephen Fashoto, Moses Olaniyan, Joseph Osuji

Abstract:

The overarching aim of this study is to develop a soft-computing system for the differential diagnosis of tropical diseases. These conditions are of concern to health bodies, physicians, and the community at large because of their mortality rates and the difficulty of early diagnosis, owing to the fact that they present with overlapping symptoms and thus become 'confusable'. We report on the first phase of our study, which focuses on the development of a fuzzy cognitive map (FCM) model for early differential diagnosis of tropical diseases. We used malaria as a case disease to show the effectiveness of the FCM technology as an aid to the medical practitioner in the diagnosis of tropical diseases. Our model takes cognizance of manifested symptoms and other non-clinical factors that could contribute to symptom manifestation. The model showed 85% accuracy in diagnosis, as against the physicians' initial hypothesis, which stood at 55% accuracy. It is expected that the next stage of our study will provide a multi-disease, multi-symptom model that also improves efficiency by utilizing a decision support filter that works on an algorithm which mimics the physician's diagnosis process.
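
A minimal sketch of fuzzy cognitive map inference of the kind used here (the concepts, weight matrix and threshold function below are illustrative assumptions, not the paper's model):

```python
import numpy as np

# Hypothetical concepts: three symptoms/factors plus one diagnosis concept.
concepts = ["fever", "chills", "recent_travel", "malaria_risk"]

# Signed causal weight matrix W[i, j]: influence of concept i on concept j (assumed values).
W = np.array([
    [0.0, 0.0, 0.0, 0.7],
    [0.0, 0.0, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.4],
    [0.0, 0.0, 0.0, 0.0],
])

def sigmoid(x, lam=2.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

# Initial activations from observed findings; iterate the FCM update until it stabilizes.
a = np.array([0.9, 0.8, 0.3, 0.0])
for _ in range(50):
    a_next = sigmoid(a @ W + a)        # common FCM rule: previous activation kept in the sum
    if np.max(np.abs(a_next - a)) < 1e-4:
        break
    a = a_next

print(dict(zip(concepts, np.round(a, 3))))
```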

Keywords: Medical diagnosis, tropical diseases, fuzzy cognitive map, decision support filters, malaria differential diagnosis.

972 Increasing the Efficiency of Rake Receivers for Ultra-Wideband Applications

Authors: Aimilia P. Doukeli, Athanasios S. Lioumpas, George K. Karagiannidis, Panayiotis V. Frangos, P. Takis Mathiopoulos

Abstract:

In diversity-rich environments, such as Ultra-Wideband (UWB) applications, the a priori determination of the number of strong diversity branches is difficult because of the considerably large number of diversity paths, which are characterized by a variety of power delay profiles (PDPs). Several Rake implementations have been proposed in the past in order to reduce the number of estimated and combined paths. To this aim, we introduce two adaptive Rake receivers, which combine a subset of the resolvable paths by considering simultaneously the quality of both the total combining output signal-to-noise ratio (SNR) and the individual SNR of each path. These schemes achieve better adaptation to channel conditions compared to other known receivers, without further increasing the complexity. Their performance is evaluated in different practical UWB channels whose models are based on extensive propagation measurements. The proposed receivers compromise between power consumption, complexity and the performance gain from the additional paths, resulting in important savings in power and computational resources.

Keywords: Adaptive Rake receivers, diversity techniques, fading channels, UWB channel.

971 Tree Based Data Aggregation to Resolve Funneling Effect in Wireless Sensor Network

Authors: G. Rajesh, B. Vinayaga Sundaram, C. Aarthi

Abstract:

In a wireless sensor network, sensor nodes periodically transmit the sensed data to the sink node in multi-hop communication. This high traffic induces congestion at the nodes located one hop from the sink node. The packet transmission and reception rate of these nodes must be very high when compared to other sensor nodes in the network; therefore their energy consumption is very high, and this effect is known as the 'funneling effect'. The tree-based data aggregation technique (TBDA) is used to reduce the energy consumption of these nodes. The overall throughput shows a considerable decrease in the number of packet transmissions to the sink node. The proposed scheme, TBDA, avoids the funneling effect and extends the lifetime of the wireless sensor network. The average-case time complexity for inserting a node in the tree is O(n log n) and the worst-case time complexity is O(n²).
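
A minimal sketch of the tree-based aggregation idea, where readings are combined at each parent so only one aggregated packet per subtree travels toward the sink (the tree layout and aggregation function are illustrative assumptions, not the TBDA algorithm itself):

```python
from collections import defaultdict

# Hypothetical aggregation tree rooted at the sink: parent -> children.
children = defaultdict(list, {
    "sink": ["n1", "n2"],
    "n1": ["n3", "n4"],
    "n2": ["n5"],
})
readings = {"n1": 21.0, "n2": 23.5, "n3": 20.0, "n4": 22.0, "n5": 24.0}

def aggregate(node):
    """Post-order aggregation: each node forwards one value summarizing its subtree."""
    subtree = [readings.get(node, 0.0)] + [aggregate(c) for c in children[node]]
    return sum(subtree) / len(subtree)        # e.g. an average as the aggregate

# The sink receives one value per child instead of one packet per descendant,
# which relieves the one-hop "funnel" nodes.
print({child: round(aggregate(child), 2) for child in children["sink"]})
```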

Keywords: Data Aggregation, Funneling Effect, Traffic Congestion, Wireless Sensor Network.

970 Fast Intra Prediction Algorithm for H.264/AVC Based on Quadratic and Gradient Model

Authors: A. Elyousfi, A. Tamtaoui, E. Bouyakhf

Abstract:

The H.264/AVC standard uses intra prediction with 9 directional modes for 4x4 and 8x8 luma blocks, and 4 directional modes for 16x16 macroblocks and 8x8 chroma blocks, respectively. This means that, for a macroblock, 736 different RDO calculations have to be performed before the best RDO mode is determined. With this multiple intra-mode prediction, intra coding in H.264/AVC offers a considerably higher improvement in coding efficiency compared to other compression standards, but the computational complexity is increased significantly. This paper presents a fast intra prediction algorithm for H.264/AVC based on a characterization of homogeneity information. In this study, the gradient prediction method is used to predict homogeneous areas and the quadratic prediction function is used to predict non-homogeneous areas. Based on the correlation between homogeneity and block size, the smaller blocks are predicted by both gradient prediction and quadratic prediction, while the bigger blocks are predicted by gradient prediction alone. Experimental results are presented to show that the proposed method reduces the complexity by up to 76.07% while maintaining similar PSNR quality, with about a 1.94% bit rate increase on average.

Keywords: Intra prediction, H.264/AVC, video coding, encoder complexity.

969 Another Formal Proposal For Stealth

Authors: Adrien Derock, Pascal Veron

Abstract:

Taking into account the link between the efficiency of a detector and the complexity of a stealth mechanism, we propose in this paper a new formalism for stealth using graph theory.

Keywords: Detection, eradication, graph, rootkit, stealth.

968 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information

Authors: A. Preetha Priyadharshini, S. B. M. Priya

Abstract:

In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue with imperfect CSI is to keep each user's rate outage probability below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods are used to solve the transmit optimization problem under imperfect CSI. Here, the decomposition-based large deviation inequality and the Bernstein-type inequality convex restriction methods are used to handle the optimization problem under imperfect CSI. These methods are used to achieve improved output quality at lower complexity, and they provide safe tractable approximations of the original rate outage constraints. Based on implementations of these methods, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.

Keywords: Imperfect channel state information, outage probability, multiuser multi-input single-output.

967 Routing Algorithm for a Clustered Network

Authors: Hemanth Kumar A. R., Sudhakara G., Satyanarayana B. S.

Abstract:

The cluster dimension of a network is defined as the minimum cardinality of a subset S of the set of nodes having the property that for any two distinct nodes x and y, there exist nodes s1, s2 (not necessarily distinct) in S such that |d(x,s1) − d(y,s1)| ≥ 1 and d(x,s2) < d(x,s) for all s ∈ S − {s2}. In this paper, strictly non-overlapping clusters are constructed. The concept of a LandMarks for Unique Addressing and Clustering (LMUAC) routing scheme is developed. With the help of the LMUAC routing scheme, upper bounds on the path length, the maximum memory space requirement of the network, and the maximum link utilization factor are derived and shown to compare favourably with those of existing schemes.

Keywords: Metric dimension, Cluster dimension, Cluster.

966 A Cooperative Weighted Discriminator Energy Detector Technique in Fading Environment

Authors: Muhammad R. Alrabeiah, Ibrahim S. Alnomay

Abstract:

The need in cognitive radio systems for a simple, fast, and independent technique to sense spectrum occupancy has led to the energy detection approach. The energy detector is known for its dependence on the noise variance of the system, which is one of its major drawbacks. In this paper, we aim to improve its performance by utilizing weighted collaborative spectrum sensing, similar to the collaborative spectrum sensing methods introduced previously in the literature. These weighting methods give more improvement for collaborative spectrum sensing as compared to the unweighted case. Two methods are proposed in this paper: the first depends on the channel status between each sensor and the primary user, while the second depends on the value of the energy measured at each sensor.
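
A minimal sketch of weighted cooperative energy detection (soft combining of per-sensor energies at a fusion centre; the weights, threshold and signal model are illustrative assumptions, not the paper's two weighting rules):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_samples = 5, 200
noise_var = np.array([1.0, 1.0, 1.5, 2.0, 1.2])       # assumed per-sensor noise variances
snr = np.array([0.8, 0.5, 0.2, 0.1, 0.6])             # assumed per-sensor SNRs

# Simulate one sensing interval with the primary user present.
signal = np.sqrt(snr * noise_var)[:, None] * rng.normal(size=(n_sensors, n_samples))
noise = np.sqrt(noise_var)[:, None] * rng.normal(size=(n_sensors, n_samples))
received = signal + noise

# Each sensor reports its measured energy, normalized by its own noise variance.
energy = np.sum(received ** 2, axis=1) / (noise_var * n_samples)

# Fusion centre: weight the sensors (here by assumed known SNR) and compare to a threshold.
weights = snr / snr.sum()
statistic = float(weights @ energy)
threshold = 1.2                                        # illustrative value, not from the paper
print(f"test statistic = {statistic:.2f} -> primary user "
      f"{'present' if statistic > threshold else 'absent'}")
```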

Keywords: Cognitive radio, Spectrum sensing, Collaborative sensors, Weighted Decisions.

965 Distribution Centers Reliability Cost in Capacitated Facility Location Problem

Authors: Mehdi Seifbarghy, Sajjad Jalali, Seyed Habib A. Rahmati

Abstract:

Recent studies in the area of supply chain networks (SCN) have focused on disruption issues in distribution systems. This paper extends the previous literature by providing a new bi-objective model for cost minimization in designing a three-echelon SCN across normal and failure scenarios, considering multiple capacity options for manufacturers and distribution centers. Moreover, in order to solve the problem by means of the LINGO software, the model is reformulated through a variant of the LP-metric method called the min-max approach.

Keywords: Scenario programming, Distribution, Multi-echelon supply chain design, Reliable facility

964 Effective Personal Knowledge Management: A Proposed Online Framework

Authors: Shahrinaz Ismail, Mohd Sharifuddin Ahmad

Abstract:

This paper presents an analytical framework for effective online personal knowledge management (PKM) by knowledge workers. The development of this framework is prompted by our qualitative research on the PKM processes and cognitive enablers of knowledge workers in eight organisations selected from three main industries in Malaysia. This multiple-case research identifies the relationships between the effectiveness of four online PKM processes: get/retrieve, understand/analyse, share, and connect. It also establishes the importance of the cognitive enablers that mediate this relationship, namely method, identify, decide and drive. The qualitative analysis is presented as the findings, supported by a preceding quantitative analysis based on an exploratory questionnaire survey.

Keywords: Bottom-up approach, knowledge organisation, organisational knowledge management, personal knowledge management, software agent technology.

963 Pathogenetic Mechanism of Alcohol's Effect on Academic Performance

Authors: M. O. Welcome, E. V. Pereverzeva, V. A. Pereverzev

Abstract:

The regulatory competence of blood glucose homeostasis might determine the degree of academic performance. The aim of this study was to produce a model of students' alcohol use based on glucose homeostasis control and cognitive functions that might define the pathogenetic mechanism of alcohol's effect on academic performance. The study lasted six hours and thirty minutes under fasting conditions and involved thirteen male students. Disturbances in cognitive functions, specifically a decrease in the effectiveness of active attention and a faster development of fatigue after four to six hours of mental work in alcohol users compared to abstainers, were statistically demonstrated. These disturbances in alcohol users were retained even after seven to ten days of moderate alcohol use and might be the reason for the low academic performance among students who use alcoholic beverages.

Keywords: Alcohol, academic performance, pathogenetic mechanism.

962 A Hybrid Multi Objective Algorithm for Flexible Job Shop Scheduling

Authors: Parviz Fattahi

Abstract:

Scheduling for the flexible job shop is very important in both production management and combinatorial optimization. However, it is quite difficult to achieve an optimal solution to this problem with traditional optimization approaches owing to the high computational complexity. Combining several optimization criteria induces additional complexity and new problems. In this paper, a Pareto approach to solve multi-objective flexible job shop scheduling problems is proposed. The objectives considered are to minimize the overall completion time (makespan) and the total weighted tardiness (TWT). An effective simulated annealing algorithm based on the proposed approach is presented to solve the multi-objective flexible job shop scheduling problem. An external memory of non-dominated solutions is used to save and update the non-dominated solutions during the solution process. Numerical examples are used to evaluate and study the performance of the proposed algorithm. The proposed algorithm can be applied easily under real factory conditions and for large-size problems. It should thus be useful to both practitioners and researchers.
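
A minimal sketch of the external-archive idea for a bi-objective search: a Pareto dominance check and an archive update over (makespan, TWT) pairs (the scheduling moves and annealing schedule of the paper are not reproduced):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep only non-dominated (makespan, TWT) pairs after offering a new candidate."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive                         # candidate is dominated, archive unchanged
    archive = [kept for kept in archive if not dominates(candidate, kept)]
    return archive + [candidate]

# Toy stream of (makespan, total weighted tardiness) values visited by the search.
visited = [(120, 40), (110, 55), (130, 30), (110, 35), (115, 34)]
archive = []
for point in visited:
    archive = update_archive(archive, point)

print(sorted(archive))    # the non-dominated front found so far
```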

Keywords: Flexible job shop, Scheduling, Hierarchical approach, simulated annealing, tabu search, multi objective.
