Search results for: inherent feature
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2212

1102 Vehicle Detection and Tracking Using Deep Learning Techniques in Surveillance Image

Authors: Abe D. Desta

Abstract:

This study proposes a deep learning-based method for detecting and tracking moving vehicles in surveillance video. The proposed method first detects vehicles using a fast regional convolutional neural network (F-RCNN) trained on a substantial dataset of vehicle images. A Kalman filter and a data-association technique based on the Hungarian algorithm are then used to track the detected vehicles over time. F-RCNN algorithms proved effective in achieving high detection accuracy and robustness in this study: the vehicle detection and tracking system achieved an accuracy of 97.4%. The F-RCNN algorithm was also compared with other popular object detection algorithms and was found to outperform them in both detection accuracy and speed. The presented system, which has application potential in real surveillance systems, demonstrates the usefulness of deep learning approaches for vehicle detection and tracking.
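
The tracking stage described above, Kalman prediction plus Hungarian data association, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the IoU cost, the box format, and the noise parameters `Q` and `R` are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Match predicted track boxes to new detections with the Hungarian algorithm."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)   # minimises total (1 - IoU)
    return [(r, c) for r, c in zip(rows, cols)
            if 1.0 - cost[r, c] >= iou_threshold]

# Constant-velocity Kalman filter over the box centre: state (cx, cy, vx, vy).
F = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)

def kalman_step(x, P, z, Q=1e-2, R=1.0):
    """One predict/update cycle for a single track given measurement z = (cx, cy)."""
    x, P = F @ x, F @ P @ F.T + Q * np.eye(4)          # predict
    S = H @ P @ H.T + R * np.eye(2)
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x = x + K @ (np.asarray(z, float) - H @ x)         # update with innovation
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Each frame, tracks predict forward with `kalman_step`, `associate` pairs predictions with new detections, and matched measurements feed the update.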

Keywords: artificial intelligence, computer vision, deep learning, fast-regional convolutional neural networks, feature extraction, vehicle tracking

Procedia PDF Downloads 126
1101 Investigating (Im)Politeness Strategies in Email Communication: The Case of Algerian PhD Supervisees and Irish Supervisors

Authors: Zehor Ktitni

Abstract:

In pragmatics, politeness is regarded as a feature of paramount importance to successful interpersonal relationships. Emails, meanwhile, have recently become one of the indispensable means of communication in educational settings. This research puts email communication at the core of the study and analyses it from a politeness perspective. More specifically, it endeavours to look closely at how the concept of (im)politeness is reflected in students’ emails. To this end, a corpus of Algerian supervisees’ email threads, exchanged with their Irish supervisors, was compiled. Leech’s model of politeness (2014) was selected as the main theoretical framework of this study, with additional reference to Brown and Levinson’s model (1987), one of the most influential models in the area of pragmatic politeness. Further, follow-up interviews are to be conducted with Algerian students to reinforce the results derived from the corpus. Initial findings suggest that Algerian PhD students’ emails tend to include more politeness markers than impoliteness ones; the students make heavy use of academic titles when addressing their supervisors (Dr. or Prof.) and rely on hedging devices in order to sound polite.

Keywords: politeness, email communication, corpus pragmatics, Algerian PhD supervisees, Irish supervisors

Procedia PDF Downloads 70
1100 Internal Auditing and the Performance of State-Owned Enterprises in Emerging Markets

Authors: Jobo Dubihlela, Kofi Boamah

Abstract:

The inimitable role of internal auditing, and the challenges and predicament of state-owned enterprises (SOEs) in emerging markets, are acknowledged. The study sought to address two inter-related questions: how does the internal audit function (IAF) complement the performance and sustainability of SOEs, and how can effective internal audit control systems be implemented to improve the performance results and culture of SOEs in Namibia? The weaknesses inherent in the SOE sector unfortunately impact the IAF’s ability to effectively support the SOEs. Despite these challenges, the study unearthed the IAF’s potential to contribute to SOE survival in Namibia by complementing the governance practices of the sector. Using a quantitative research approach, a dataset was collected and analysed from SOEs to confirm the role of the IAF as an indispensable concomitant of SOE performance. The study adopted a data-driven approach supported by the literary evidence, which enabled generalisation and connectedness of the issues being addressed. The outcome of the data analysis contributed to the results, which are discussed and eventually support the conclusions reached. Results show that the intractable task of internal auditing depends on the leadership of the boards of directors of the SOEs. The study also revealed critical priorities needed to influence policymakers and oversight bodies to overcome the iniquities affecting SOE operations, and to understand and embrace the IAF in order to salvage a sector that has much to offer and yet is severely mismanaged. The results support the literature on internal auditing’s contribution to SOE development from a developing country’s point of view and are the first of their kind in Namibia. The findings suggest ways to enhance the knowledge development of future researchers and whet their appetite for further research in emerging markets and on a global scale.

Keywords: internal auditing activity, state-owned enterprises, emerging markets, auditing function

Procedia PDF Downloads 103
1099 Bag of Words Representation Based on Fusing Two Color Local Descriptors and Building Multiple Dictionaries

Authors: Fatma Abdedayem

Abstract:

We propose an extension to the well-known Bag of Words (BOW) method, which has proved successful in image categorization. In essence, this method represents an image with visual words. In this work, we first extract features from images using Spatial Pyramid Representation (SPR) and two dissimilar color descriptors, opponent-SIFT and transformed-color-SIFT. Second, we fuse the color local features by joining the two histograms produced by these descriptors. Third, after collecting all features, we generate multiple dictionaries from n random feature subsets obtained by dividing the features into n random groups. Using these dictionaries separately, each image can then be represented by n histograms, which are concatenated horizontally to form the final histogram; this combines Multiple Dictionaries (MDBoW). In the final step, a Support Vector Machine (SVM) is applied to the generated histograms to classify the images. We tested the proposal experimentally on two dissimilar image datasets: Caltech 256 and PASCAL VOC 2007.
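
The multi-dictionary construction can be sketched with k-means as the visual-word quantizer (a common BOW choice; the abstract does not name the clustering method, so this is an assumption), using toy descriptors in place of the opponent-SIFT/transformed-color-SIFT features.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def build_dictionaries(features, n_dicts=3, words_per_dict=8):
    """Split the pooled descriptors into n random groups and cluster
    each group into its own visual-word dictionary."""
    idx = rng.permutation(len(features))
    groups = np.array_split(features[idx], n_dicts)
    return [KMeans(n_clusters=words_per_dict, n_init=4, random_state=0).fit(g)
            for g in groups]

def mdbow_histogram(image_descriptors, dictionaries):
    """Quantise the image's descriptors against every dictionary and
    concatenate the per-dictionary histograms horizontally (MDBoW)."""
    hists = []
    for km in dictionaries:
        words = km.predict(image_descriptors)
        h = np.bincount(words, minlength=km.n_clusters).astype(float)
        hists.append(h / h.sum())          # L1-normalise each partial histogram
    return np.concatenate(hists)

# toy pool of 128-D SIFT-like descriptors standing in for the real features
pool = rng.normal(size=(300, 128))
dicts = build_dictionaries(pool)
hist = mdbow_histogram(rng.normal(size=(40, 128)), dicts)
```

The concatenated `hist` (here 3 × 8 = 24 bins) is what would be fed to the SVM classifier in the final stage.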

Keywords: bag of words (BOW), color descriptors, multi-dictionaries, MDBoW

Procedia PDF Downloads 297
1098 Rules in Policy Integration, Case Study: Victoria Catchment Management

Authors: Ratri Werdiningtyas, Yongping Wei, Andrew Western

Abstract:

This paper contributes to ongoing attempts at bringing together land, water and environmental policy in catchment management. A tension remains in defining the boundaries of policy integration. Most Integrated Water Resource Management is valued as rhetorical policy: it is far from being achieved on the ground, because the socio-ecological system has not been understood and developed into a complete and coherent problem representation. To clarify the features of integration, this article draws on institutional fit for public policy integration and uses these insights in an empirical setting to identify mechanisms that can facilitate effective public integration for catchment management. The research is based on the journey of Victoria’s government from 1890 to 2016. A total of 274 Victorian Acts related to land, water and environment management published in that period have been investigated. Four conditions of integration were identified in their co-evolution: (1) integration policy based on reserves, (2) integration policy based on authority interest, (3) policy based on integrated information, and (4) policy based on coordinated resources, authority and information. The results suggest that, in the case of catchment management, policy coordination among policy instruments is superior to policy integration.

Keywords: catchment management, co-evolution, policy integration, phase

Procedia PDF Downloads 247
1097 The Impact of Cognitive Load on Deceit Detection and Memory Recall in Children’s Interviews: A Meta-Analysis

Authors: Sevilay Çankaya

Abstract:

The detection of deception in children’s interviews is essential for establishing statement veracity. A widely used method for deception detection is building cognitive load, which is the logic of the cognitive interview (CI), and its effectiveness for adults is well established. This meta-analysis delves into the effectiveness of inducing cognitive load as a means of enhancing veracity detection during interviews with children. The effectiveness of cognitive load on the total number of events children recall is assessed as a second part of the analysis. The meta-analysis includes ten effect sizes drawn from a database search. Effect sizes were calculated as Hedges’ g under a random-effects model using CMA version 2. A heterogeneity analysis was conducted to detect potential moderators. The overall result indicated that cognitive load had no significant effect on veracity outcomes (g = 0.052, 95% CI [-.006, 1.25]). However, a high level of heterogeneity was found (I² = 92%). Age, participants’ characteristics, interview setting, and characteristics of the interviewer were coded as possible moderators to explain the variance. Age was a significant moderator (β = .021, p = .03, R² = 75%), but the analysis did not reveal statistically significant effects for the other potential moderators: participants’ characteristics (Q = 0.106, df = 1, p = .744), interview setting (Q = 2.04, df = 1, p = .154), and characteristics of the interviewer (Q = 2.96, df = 1, p = .086). For the second outcome, the total number of events recalled, the overall effect was significant (g = 4.121, 95% CI [2.256, 5.985]): cognitive load was effective for total recalled events when interviewing children. All in all, while age plays a crucial role in determining the impact of cognitive load on veracity, the surrounding context, interviewer attributes, and inherent participant traits may not significantly alter the relationship. These findings highlight the need for more focused, age-specific methods when using cognitive load measures. Further studies in this field may improve the precision and dependability of deceit detection in children’s interviews.
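
The effect-size machinery used above, Hedges’ g pooled under a random-effects model, follows standard formulas that can be sketched numerically. This is a minimal illustration, not the CMA computation; the toy study statistics are invented.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Bias-corrected standardized mean difference (Hedges' g) and its variance."""
    df = n1 + n2 - 2
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)  # pooled SD
    j = 1 - 3 / (4 * df - 1)                                    # small-sample correction
    g = j * (m1 - m2) / sp
    v = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))          # sampling variance
    return g, v

def random_effects(gs, vs):
    """DerSimonian-Laird random-effects pooled effect and I-squared heterogeneity."""
    gs, vs = np.asarray(gs, float), np.asarray(vs, float)
    w = 1 / vs
    g_fixed = np.sum(w * gs) / np.sum(w)
    q = np.sum(w * (gs - g_fixed) ** 2)                         # Cochran's Q
    k = len(gs)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (vs + tau2)                                    # random-effects weights
    pooled = np.sum(w_star * gs) / np.sum(w_star)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, i2
```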

Keywords: deceit detection, cognitive load, memory recall, children interviews, meta-analysis

Procedia PDF Downloads 55
1096 MindFlow: A Collective Intelligence-Based System for Helping Stress Pattern Diagnosis

Authors: Andres Frederic

Abstract:

We present the MindFlow system, which supports the detection and diagnosis of stress. The heart of the system is a knowledge synthesis engine that allows occupational health stakeholders (psychologists, occupational therapists and human resource managers) to formulate queries related to stress, and that responds to users’ requests by recommending a stress pattern if one exists. Stress pattern diagnosis is based on expert knowledge stored in the MindFlow stress ontology, including a stress feature vector. Query processing may involve direct access to the MindFlow system by occupational health stakeholders, online communication between the MindFlow system and the MindFlow domain experts, or direct dialog between an occupational health stakeholder and a MindFlow domain expert. The MindFlow knowledge model is generic in the sense that it supports the needs of psychologists, occupational therapists and human resource managers. The system presented in this paper is currently under development as part of a Dutch-Japanese project and aims to assist organisations in the quick diagnosis of stress patterns.

Keywords: occupational stress, stress management, physiological measurement, accident prevention

Procedia PDF Downloads 430
1095 Adaptive Kalman Filter for Fault Diagnosis of Linear Parameter-Varying Systems

Authors: Rajamani Doraiswami, Lahouari Cheded

Abstract:

Fault diagnosis of Linear Parameter-Varying (LPV) systems using an adaptive Kalman filter is proposed. The LPV model comprises scheduling parameters and emulator parameters. The scheduling parameters are chosen such that they are capable of tracking variations in the system model as a result of changes in the operating regimes. The emulator parameters, on the other hand, simulate variations in the subsystems during the identification phase and have negligible effect during the operational phase. The nominal model and the influence vectors, which are the gradients of the feature vector with respect to the emulator parameters, are identified off-line from a number of emulator-parameter-perturbed experiments. A Kalman filter is designed using the identified nominal model. As the system varies, the Kalman filter model is adapted using the scheduling variables. The residual is employed for fault diagnosis. The proposed scheme is successfully evaluated on a simulated system as well as on a physical process control system.
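
The residual-based diagnosis step can be illustrated with a scalar example: a Kalman filter whose dynamics are scheduled on a time-varying parameter, and whose normalized innovations flag a fault once they exceed a chi-square threshold. This is a toy sketch, not the authors' scheme; the system, noise levels and fault are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def kf_residuals(y, a, c=1.0, q=1e-3, r=1e-2):
    """Scalar Kalman filter; returns normalized innovations for fault monitoring.
    `a` may vary with time (the scheduling parameter of the LPV model)."""
    x, p = 0.0, 1.0
    nis = []
    for k, yk in enumerate(y):
        x, p = a[k] * x, a[k] ** 2 * p + q     # predict with scheduled dynamics
        s = c**2 * p + r                       # innovation covariance
        e = yk - c * x                         # residual (innovation)
        nis.append(e**2 / s)                   # normalized innovation squared
        g = p * c / s
        x, p = x + g * e, (1 - g * c) * p      # update
    return np.array(nis)

# healthy system, then an additive sensor fault after sample 150
n = 300
a = 0.95 + 0.03 * np.sin(np.linspace(0, 6, n))   # slowly varying parameter
x_true, y = 0.0, []
for k in range(n):
    x_true = a[k] * x_true + rng.normal(scale=np.sqrt(1e-3))
    fault = 0.5 if k >= 150 else 0.0
    y.append(x_true + fault + rng.normal(scale=0.1))
nis = kf_residuals(np.array(y), a)
alarm = nis > 6.63                               # chi-square(1), 99% threshold
```

The innovation sequence of a well-tuned filter is white with unit normalized variance; a persistent excursion of `nis` above the threshold signals a fault.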

Keywords: identification, linear parameter-varying systems, least-squares estimation, fault diagnosis, Kalman filter, emulators

Procedia PDF Downloads 499
1094 Operating System Based Virtualization Models in Cloud Computing

Authors: Dev Ras Pandey, Bharat Mishra, S. K. Tripathi

Abstract:

Cloud computing is poised to transform the structure of businesses and learning by supplying real-time applications and providing immediate help for small to medium-sized businesses. The ability to run a hypervisor inside a virtual machine, called nested virtualization, is an important feature of virtualization. In today’s growing field of information technology, many virtualization models are available that provide a convenient approach to implementation, but selecting a single model is difficult. This paper examines the applications of operating-system-based virtualization in cloud computing and identifies an appropriate model given different specifications and users’ requirements. The most popular models were selected, based on container- and hypervisor-based virtualization, and compared against a wide range of users’ requirements, such as the number of CPUs, memory size, nested virtualization support, live migration and commercial support, in order to identify the most suitable virtualization model.

Keywords: virtualization, OS based virtualization, container based virtualization, hypervisor based virtualization

Procedia PDF Downloads 329
1093 A Feasibility Study of Waste(d) Potential: Synergistic Effect Evaluation by Co-Digesting Organic Wastes and Kinetics of Biogas Production

Authors: Kunwar Paritosh, Sanjay Mathur, Monika Yadav, Paras Gandhi, Subodh Kumar, Nidhi Pareek, Vivekanand Vivekanand

Abstract:

A significant fraction of energy is wasted every year through inadequate management of biodegradable organic waste, as if development and sustainability were inherent enemies. Managing these wastes is indispensable to boost their optimum utilization by converting them to a renewable energy resource (here, biogas) through anaerobic digestion, and to mitigate greenhouse gas emissions. Food and yard wastes may prove to be appropriate and potential feedstocks for anaerobic co-digestion for biogas production. The present study explores the synergistic effect of co-digesting food waste and yard trimmings from the MNIT campus in different ratios for enhanced biogas production in batch tests (37±1°C, 90 rpm, 45 days). The results showed that blending two different organic wastes in the proper ratio improved biogas generation considerably, with the highest biogas yield (2044±24 mL g⁻¹VS) achieved at a 75:25 food waste to yard waste ratio on a volatile solids (VS) basis. This yield was 1.7 and 2.2 folds higher than mono-digestion of food or yard waste (1172±34 and 1016±36 mL g⁻¹VS), respectively. The increase in biogas production may be credited to the optimum C/N ratio. Adding TiO₂ nanoparticles showed virtually no effect on biogas production, although nanoparticles have sometimes been reported to enhance it. ICP-MS and FTIR analyses were carried out to gain insight into the feedstocks. Modified Gompertz and logistic models were applied for the kinetic study of biogas production, where the modified Gompertz model showed goodness of fit (R² = 0.9978) with the experimental results.
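
The modified Gompertz kinetics can be reproduced in outline with a nonlinear least-squares fit of cumulative yield. A sketch on synthetic data only; the time grid, noise level and rate parameter are assumptions, with the ultimate yield set near the reported 2044 mL g⁻¹VS.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rm, lam):
    """Modified Gompertz model: P = ultimate biogas yield (mL/gVS),
    Rm = maximum production rate (mL/gVS/day), lam = lag phase (days)."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1))

# hypothetical cumulative-yield data shaped like the 75:25 blend
t = np.arange(0, 46, 3, dtype=float)          # sampling days over the 45-day run
rng = np.random.default_rng(0)
y = gompertz(t, 2044, 120, 2.5) + rng.normal(scale=15, size=t.size)

popt, _ = curve_fit(gompertz, t, y, p0=[2000, 100, 2])
P_fit, Rm_fit, lam_fit = popt
residuals = y - gompertz(t, *popt)
r2 = 1 - residuals.var() / y.var()            # goodness of fit
```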

Keywords: anaerobic co-digestion, biogas, kinetics, nanoparticle, organic waste

Procedia PDF Downloads 387
1092 Simplified INS/GPS Integration Algorithm in Land Vehicle Navigation

Authors: Othman Maklouf, Abdunnaser Tresh

Abstract:

Land vehicle navigation is a subject of great interest today. The Global Positioning System (GPS) is the main navigation system for positioning in such applications. GPS alone is incapable of providing continuous and reliable positioning because of its inherent dependency on external electromagnetic signals. Inertial navigation (INS) is the use of inertial sensors to determine the position and orientation of a vehicle. The availability of low-cost Micro-Electro-Mechanical-System (MEMS) inertial sensors now makes it feasible to develop an INS using an inertial measurement unit (IMU). An INS has unbounded error growth, since the error accumulates at each step. Usually, GPS and INS are integrated in a loosely coupled scheme. With the development of low-cost MEMS inertial sensors and GPS technology, integrated INS/GPS systems are beginning to meet the growing demands for lower cost, smaller size, and seamless navigation solutions for land vehicles. Although MEMS inertial sensors are very inexpensive compared to conventional sensors, their cost (especially that of MEMS gyros) is still not acceptable for many low-end civilian applications (for example, commercial car navigation or personal location systems). An efficient way to reduce the expense of these systems is to reduce the number of gyros and accelerometers, i.e., to use a partial IMU (ParIMU) configuration. For land vehicle use, the most important gyroscope is the vertical gyro that senses the heading of the vehicle, together with two horizontal accelerometers for determining its velocity. This paper presents a field experiment for a low-cost strapdown ParIMU/GPS combination, with data post-processing for the determination of the 2-D components of position (trajectory), velocity and heading. In the present approach, earth rotation and gravity variations are neglected because of the poor gyroscope sensitivities of the low-cost IMU and the relatively small area of the trajectory.
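
The 2-D post-processing described above, integrating heading from the vertical gyro and speed from the forward accelerometer, amounts to plain dead reckoning. A minimal sketch on a synthetic maneuver; the sample rate and signals are invented, and earth rotation and gravity variations are neglected as in the paper.

```python
import numpy as np

def dead_reckon(gyro_z, accel_fwd, dt=0.1, heading0=0.0):
    """2-D strapdown dead reckoning from a partial IMU:
    one vertical gyro (heading rate, rad/s) and one forward accelerometer (m/s^2)."""
    heading = heading0 + np.cumsum(gyro_z) * dt     # integrate heading rate
    velocity = np.cumsum(accel_fwd) * dt            # integrate forward acceleration
    vx, vy = velocity * np.cos(heading), velocity * np.sin(heading)
    x = np.cumsum(vx) * dt                          # integrate velocity to position
    y = np.cumsum(vy) * dt
    return x, y, heading, velocity

# straight acceleration for 5 s, then a steady 90-degree left turn while coasting
n = 100
gyro = np.r_[np.zeros(50), np.full(50, np.deg2rad(18))]   # 18 deg/s for 5 s
accel = np.r_[np.full(50, 1.0), np.zeros(50)]             # 1 m/s^2, then coast
x, y, heading, v = dead_reckon(gyro, accel)
```

In the actual loosely coupled scheme, the drift of this dead-reckoned trajectory would be periodically corrected by GPS position fixes.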

Keywords: GPS, IMU, Kalman filter, materials engineering

Procedia PDF Downloads 421
1091 Collaborative and Context-Aware Learning Approach Using Mobile Technology

Authors: Sameh Baccari, Mahmoud Neji

Abstract:

In recent years, rapid developments in mobile devices and wireless technologies have enabled new capabilities for the learning domain. This dimension facilitates people’s daily activities and shortens the distances between individuals. When these technologies are used in learning, a new paradigm emerges, giving birth to mobile learning. Because of the mobility feature, m-learning courses have to be adapted dynamically to the learner’s context. The main challenge in context-aware mobile learning is to develop an approach that builds the best learning resources according to dynamic learning situations. In this paper, we propose a context-aware mobile learning system called the Collaborative and Context-aware Mobile Learning System (CCMLS). It takes into account the requirements of mobility, collaboration and context-awareness. The system is based on semantic modeling of the learning context and the learning content. The adaptation part of this approach is made up of adaptation rules to propose and select relevant resources, learning partners and learning activities based not only on the user’s needs but also on their current context.

Keywords: mobile learning, mobile technologies, context-awareness, collaboration, semantic web, adaptation engine, adaptation strategy, learning object, learning context

Procedia PDF Downloads 308
1090 Criminal Laws Associated with Cyber-Medicine and Telemedicine in Current Law Systems in the World

Authors: Shahryar Eslamitabar

Abstract:

Currently, the internet plays an important role in various scientific, commercial and service practices. Thanks to information and communication technology, the healthcare industry can offer professional medical services via the internet, generally known as cyber-medicine, over a wider geographical area. With appealing benefits such as convenience in offering healthcare services, improved accessibility to those services, enhanced information exchange, cost-effectiveness and time-saving, tele-health has increasingly developed innovative models of healthcare delivery. However, it presents many potential hazards to cyber-patients that are inherent in the use of the system. First, there are legal issues associated with the communication and transfer of information on the internet. These include licensure, malpractice, liabilities and jurisdictions, as well as privacy, confidentiality and security of personal data, the most important challenges brought about by this system. Additional items of concern are technological and ethical. Although there are some rules to deal with the pitfalls of cyber-medicine practice in the USA and some European countries, for all these developments it is still practiced in a legal vacuum in many countries. In addition to domestic legislation to deal with potential problems arising from the system, it is imperative that international or regional agreements be developed to harmonize laws among countries and states. This article discusses some implications posed by the practice of cyber-medicine in the healthcare system according to the experience of some developed countries, using a comparative study of laws. It also reviews the status of tele-health laws in Iran. Finally, it is intended to pave the way for countries like Iran, with a newly established judicial system for health laws, to develop appropriate regulations through a set of recommendations.

Keywords: tele-health, cyber-medicine, telemedicine, criminal laws, legislations, time-saving

Procedia PDF Downloads 661
1089 The Impact of E-Commerce on the Physical Space of Traditional Retail System

Authors: Sumayya S.

Abstract:

Making cities adaptive and inclusive is one of the inherent goals and challenges for contemporary cities. This is a serious concern when urban transformations occur in varying magnitudes due to visible and invisible factors. One visibly invisible factor is e-commerce, whose expanding operation is understood to change the conventional spatial structure both positively and negatively. With the continued growth in e-commerce activities and their future potential, market analysts, the media, and even retailers have questioned the future importance of traditional brick-and-mortar stores in cities as a critical element, with some even referring to the repeated announcements of the closure of store chains as the end of the brick-and-mortar era. Essentially, this raises the question of how adaptive and inclusive cities are to the dynamics of transformative changes that are often unseen. People have become more comfortable staying inside and using door-delivery systems, and this has changed the usage of public spaces, especially commercial corridors. This research helps in presenting a new approach to planning and designing commercial activity centers, and it critically examines the impact of e-commerce on the urban fabric, such as the division and fragmentation of space, showroom syndrome, and the reconceptualization of space. The changes are understood by analyzing the e-commerce logistics process. Based on the inferences, the study concludes that an integrated approach is needed in the planning and design of public spaces for sustainable omnichannel retailing. The study was carried out with the following objectives: monitoring the impact of e-commerce on traditional shopping space; exploring the new challenges and opportunities faced by the urban form; and exploring how adaptive and inclusive our cities are to the dynamics of transformative changes caused by e-commerce.

Keywords: E-commerce, shopping streets, online environment, offline environment, shopping factors

Procedia PDF Downloads 88
1088 Exergy Analysis of a Vapor Absorption Refrigeration System Using Carbon Dioxide as Refrigerant

Authors: Samsher Gautam, Apoorva Roy, Bhuvan Aggarwal

Abstract:

Vapor absorption refrigeration systems can replace vapor compression systems in many applications, as they can operate on a low-grade heat source and are environment-friendly. Widely used refrigerants such as CFCs and HFCs cause significant global warming. Natural refrigerants can be an alternative to them, among which carbon dioxide is promising for use in automotive air conditioning systems. Its inherent safety, ability to withstand high pressure and high heat transfer coefficient, coupled with easy availability, make it a likely choice of refrigerant. Various properties of the ionic liquid [bmim][PF₆], such as non-toxicity, stability over a wide temperature range and the ability to dissolve gases like carbon dioxide, make it a suitable absorbent for a vapor absorption refrigeration system. In this paper, an absorption chiller consisting of a generator, condenser, evaporator and absorber was studied at an operating temperature of 70°C. A thermodynamic model was set up using the Peng-Robinson equation of state to predict the behavior of the refrigerant-absorbent pair at different points in the system. A MATLAB code was used to obtain the values of enthalpy and entropy at selected points in the system. The exergy destruction in each component and the exergetic coefficient of performance (ECOP) of the system were calculated by performing an exergy analysis based on the second law of thermodynamics. Graphs were plotted between varying operating conditions and the ECOP obtained in each case, and the effect of every component on the ECOP was examined. The exergetic coefficient of performance was found to be lower than the coefficient of performance based on the first law of thermodynamics.
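
The closing comparison, ECOP below the first-law COP, follows directly from the exergy definitions and can be checked with a minimal numeric sketch. The heat duties and the ambient and evaporator temperatures below are illustrative assumptions; only the 70°C generator temperature comes from the study.

```python
# Exergetic COP of an absorption chiller: ratio of the exergy of the cooling
# effect to the exergy of the driving heat input (a second-law efficiency).
T0 = 298.15   # ambient (dead-state) temperature, K
Te = 278.15   # evaporator temperature, K        (assumed)
Tg = 343.15   # generator temperature, K (70 C, as in the study)

Q_e = 10.0    # cooling load, kW                 (illustrative)
Q_g = 14.0    # generator heat input, kW         (illustrative)

cop = Q_e / Q_g                              # first-law COP
exergy_cooling = Q_e * (T0 / Te - 1.0)       # exergy of heat removed below T0
exergy_input = Q_g * (1.0 - T0 / Tg)         # exergy of driving heat above T0
ecop = exergy_cooling / exergy_input
```

Because the Carnot factors discount both heat flows, the ECOP comes out well below the first-law COP, consistent with the abstract's conclusion.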

Keywords: [bmim][PF₆] as absorbent, carbon dioxide as refrigerant, exergy analysis, Peng-Robinson equations of state, vapor absorption refrigeration

Procedia PDF Downloads 287
1087 Enhancement of X-Rays Images Intensity Using Pixel Values Adjustments Technique

Authors: Yousif Mohamed Y. Abdallah, Razan Manofely, Rajab M. Ben Yousef

Abstract:

X-ray images are very popular as a first diagnostic tool, and automating their analysis is important in order to assist physicians’ procedures. In this practice, segmentation of teeth from the radiographic images and feature extraction are essential steps. The main objective of this study was to investigate correction preprocessing of X-ray images using local adaptive filters, in order to evaluate contrast enhancement patterns in different X-ray images, such as grey-scale images, and to evaluate the usage of a new nonlinear approach for contrast enhancement of soft tissues in X-ray images. The data were analyzed using MATLAB to enhance the contrast within the soft tissues and to assess the grey levels in both enhanced and unenhanced images as well as the noise variance. The main enhancement techniques used in this study were contrast enhancement filtering and deblurring of images using the blind deconvolution algorithm. The prominent constraints are, firstly, preservation of the image’s overall look; secondly, preservation of the diagnostic content in the image; and thirdly, detection of small low-contrast details in the diagnostic content of the image.
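
The pixel-values adjustment idea in the title can be illustrated with a simple percentile-based linear stretch, a common contrast-enhancement building block. This is a hedged sketch on a synthetic low-contrast image, not the study's actual filter chain.

```python
import numpy as np

def stretch_contrast(img, low_pct=2, high_pct=98):
    """Linear pixel-value adjustment: map the [low, high] percentile range
    of the input intensities onto the full 8-bit range, clipping outliers."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(float) - lo) / (hi - lo)
    return (np.clip(out, 0, 1) * 255).astype(np.uint8)

# toy low-contrast "radiograph": values crowded into a narrow grey band
rng = np.random.default_rng(0)
xray = rng.integers(100, 140, size=(64, 64)).astype(np.uint8)
enhanced = stretch_contrast(xray)
```

Clipping a small percentile at each end keeps isolated outlier pixels from compressing the useful grey range, which matters for low-contrast soft-tissue detail.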

Keywords: enhancement, x-rays, pixel intensity values, MATLAB

Procedia PDF Downloads 485
1086 A Comparative Analysis on QRS Peak Detection Using BIOPAC and MATLAB Software

Authors: Chandra Mukherjee

Abstract:

This paper presents work in the field of ECG signal analysis using the MATLAB 7.1 platform. An accurate and simple ECG feature extraction algorithm is presented, and the developed algorithm is validated using BIOPAC software. To detect the QRS peak, the ECG signal is processed through the following stages: first derivative, second derivative, and then squaring of the second derivative. The efficiency of the developed algorithm is tested on ECG samples from different databases and on real-time ECG signals acquired using the BIOPAC system. First, a lead-wise threshold value is specified; the samples above that value are marked, and the points in the original signal where these marked samples undergo a change of slope are spotted as R-peaks. On the left and right sides of the R-peak, changes of slope are identified as the Q and S peaks, respectively. The built-in detection algorithm of the BIOPAC software is then run on the same samples and the two outputs are compared. ECG baseline modulation correction is done after detecting the characteristic points. The efficiency of the algorithm is tested using validation parameters such as sensitivity and positive predictivity, and satisfactory values of these parameters were obtained.
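
The derivative-and-squaring stages can be sketched directly. This is a toy illustration on a synthetic spike train, not the validated algorithm; the threshold ratio and refractory window are assumptions.

```python
import numpy as np

def qrs_feature(signal):
    """Derivative-based QRS emphasis: first derivative, second derivative,
    then squaring of the second derivative, as in the described pipeline."""
    d1 = np.diff(signal, n=1)
    d2 = np.diff(d1, n=1)
    return d2 ** 2

def find_r_peaks(signal, threshold_ratio=0.5, refractory=40):
    """Mark samples above a threshold on the squared second derivative,
    keeping one candidate per refractory window."""
    feat = qrs_feature(signal)
    thresh = threshold_ratio * feat.max()
    peaks, last = [], -refractory
    for i in np.flatnonzero(feat > thresh):
        if i - last >= refractory:        # skip samples inside refractory window
            peaks.append(i)
            last = i
    return peaks

# synthetic ECG-like trace: sharp R-wave spikes every 100 samples on noise
rng = np.random.default_rng(0)
ecg = 0.02 * rng.normal(size=500)
for r in (80, 180, 280, 380, 480):
    ecg[r] += 1.0
peaks = find_r_peaks(ecg)
```

On a real lead, each candidate index would then be refined by the slope-reversal check described above to place the exact R, Q and S points.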

Keywords: first derivative, variable threshold, slope reversal, baseline modulation correction

Procedia PDF Downloads 411
1085 Social Networking Applications: What Is Their Quality and How Can They Be Adopted in Open Distance Learning Environments?

Authors: Asteria Nsamba

Abstract:

Social networking applications and tools have become compelling platforms for generating and sharing knowledge across the world. They comprise a variety of social media platforms, including Facebook, Twitter, WhatsApp, blogs and wikis. The most popular of these platforms is Facebook, with 2.41 billion monthly active users, followed by WhatsApp with 1.6 billion users and Twitter with 330 million users. These communication platforms have not only impacted social lives but have also impacted students’ learning across different delivery modes in higher education: distance, conventional and blended learning. With this amount of interest in these platforms, knowledge sharing has gained importance within the context in which it is required. In open distance learning (ODL) contexts, social networking platforms can offer students and teachers a platform on which to create and share knowledge and form learning collaborations. Thus, they can serve as support mechanisms to increase interaction and reduce the isolation and loneliness inherent in ODL. Despite this potential and opportunity, research indicates that many ODL teachers are not inclined to use social media tools in learning. Although it is unclear why these tools are uncommon in such environments, concerns raised in the literature indicate that many teachers have not mastered the art of teaching with technology. Using technological pedagogical content knowledge (TPACK), product quality theory and Bloom’s taxonomy as lenses, this paper first assesses the quality of three social media applications, Facebook, Twitter and WhatsApp, to determine the extent to which they are suitable platforms for teaching and learning in terms of content generation, information sharing and learning collaborations. Second, the paper demonstrates the application of teaching, learning and assessment using Bloom’s taxonomy.

Keywords: distance education, quality, social networking tools, TPACK

Procedia PDF Downloads 124
1084 KSVD-SVM Approach for Spontaneous Facial Expression Recognition

Authors: Dawood Al Chanti, Alice Caplier

Abstract:

Sparse representations of signals have received a great deal of attention in recent years. In this paper, the interest of using sparse representation as a means of performing sparse discriminative analysis between spontaneous facial expressions is demonstrated. An automatic facial expression recognition system is presented. It uses a KSVD-SVM approach made up of three main stages: a pre-processing and feature extraction stage, which solves the problem of shared subspace distribution based on random projection theory in order to obtain low-dimensional discriminative and reconstructive features; a dictionary learning and sparse coding stage, which uses the KSVD model to learn discriminative under- or over-complete dictionaries for sparse coding; and finally a classification stage, which uses an SVM classifier for facial expression recognition. Our main concern is to be able to recognize non-basic affective states and non-acted expressions. Extensive experiments on the JAFFE database of static acted facial expressions, but also on the DynEmo database of dynamic spontaneous facial expressions, exhibit very good recognition rates.
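The three-stage pipeline can be sketched as follows. This is a hedged illustration, not the authors' code: scikit-learn's `MiniBatchDictionaryLearning` stands in for KSVD, and synthetic two-class features stand in for the JAFFE/DynEmo data.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import SVC

rng = np.random.RandomState(0)
# Synthetic stand-in for extracted facial-expression features (2 classes)
X = np.vstack([rng.normal(0, 1, (40, 100)), rng.normal(2, 1, (40, 100))])
y = np.array([0] * 40 + [1] * 40)

# Stage 1: random projection to a low-dimensional subspace
proj = GaussianRandomProjection(n_components=30, random_state=0)
X_low = proj.fit_transform(X)

# Stage 2: dictionary learning + sparse coding (KSVD stand-in)
dico = MiniBatchDictionaryLearning(n_components=20, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5, random_state=0)
codes = dico.fit(X_low).transform(X_low)

# Stage 3: SVM classification on the sparse codes
clf = SVC(kernel="linear").fit(codes, y)
acc = clf.score(codes, y)
print(codes.shape, acc)
```

In a real system, stage 1 would operate on registered face crops and stage 3 would be evaluated on held-out expressions rather than on the training codes.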

Keywords: dictionary learning, random projection, pose and spontaneous facial expression, sparse representation

Procedia PDF Downloads 305
1083 Intrusion Detection System Using Linear Discriminant Analysis

Authors: Zyad Elkhadir, Khalid Chougdali, Mohammed Benattou

Abstract:

Most existing intrusion detection systems work on quantitative network traffic data with many irrelevant and redundant features, which makes the detection process more time-consuming and inaccurate. Several feature extraction methods, such as linear discriminant analysis (LDA), have been proposed. However, LDA suffers from the small sample size (SSS) problem, which occurs when the number of training samples is small compared with the sample dimension. Hence, classical LDA cannot be applied directly to high-dimensional data such as network traffic data. In this paper, we propose two solutions to the SSS problem for LDA and apply them to a network IDS. The first method reduces the dimension of the original data using principal component analysis (PCA) and then applies LDA. In the second solution, we propose to use the pseudoinverse to avoid the singularity of the within-class scatter matrix caused by the SSS problem. After that, the KNN algorithm is used for the classification process. We have chosen two well-known datasets, KDDcup99 and NSL-KDD, for testing the proposed approaches. Results showed that the classification accuracy of the (PCA+LDA) method clearly outperforms the pseudoinverse LDA method when large training data are available.
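The first solution (PCA, then LDA, then KNN) can be sketched as below. This is an illustrative reconstruction, not the authors' implementation: synthetic few-sample, high-dimensional data stands in for KDDcup99/NSL-KDD features to reproduce the SSS regime.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(1)
# SSS regime: fewer samples (60) than feature dimensions (200)
n, d = 60, 200
X = np.vstack([rng.normal(0, 1, (n // 2, d)), rng.normal(1, 1, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

# PCA first, so LDA's within-class scatter matrix is no longer singular;
# LDA then acts as a supervised transform feeding a KNN classifier.
model = make_pipeline(PCA(n_components=20),
                      LinearDiscriminantAnalysis(),
                      KNeighborsClassifier(n_neighbors=3))
model.fit(X, y)
score = model.score(X, y)
print(score)
```

The second solution would instead replace the inverse of the within-class scatter matrix with its Moore-Penrose pseudoinverse (e.g., `np.linalg.pinv`) inside a hand-rolled LDA, which is why it needs no prior dimensionality reduction.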

Keywords: LDA, pseudoinverse, PCA, IDS, NSL-KDD, KDDcup99

Procedia PDF Downloads 226
1082 Tapered Double Cantilever Beam: Evaluation of the Test Set-up for Self-Healing Polymers

Authors: Eleni Tsangouri, Xander Hillewaere, David Garoz Gómez, Dimitrios Aggelis, Filip Du Prez, Danny Van Hemelrijck

Abstract:

The Tapered Double Cantilever Beam (TDCB) is the most commonly used test set-up to evaluate the self-healing feature of thermoset polymers autonomously activated in the presence of a crack. The TDCB is a modification of the established Double Cantilever Beam fracture mechanics set-up and is designed to provide a strain energy release rate that is constant with crack length under stable load evolution (mode I). In this study, the damage of virgin and autonomously healed TDCB polymer samples is evaluated considering the load-crack opening diagram, the strain maps provided by the Digital Image Correlation technique and the fractography maps given by optical microscopy. It is shown that the pre-crack introduced prior to testing (razor blade tapping), the loading rate and the length of the side groove are the features that dominate crack propagation and lead to an inconstant fracture energy release rate.
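For reference, the mode-I strain energy release rate of a contoured specimen is commonly written, following a Mostovoy-type compliance analysis (the exact prefactor depends on the compliance calibration used, so this should be read as the standard textbook form rather than the authors' calibration):

```latex
% Irwin-Kies relation and the TDCB compliance result.
% P: applied load, b: specimen width, E: Young's modulus,
% a: crack length, h(a): beam height at the crack tip.
G_I = \frac{P^2}{2b}\,\frac{\mathrm{d}C}{\mathrm{d}a}
    = \frac{4P^2}{E b^2}\left(\frac{3a^2}{h^3} + \frac{1}{h}\right),
\qquad m \equiv \frac{3a^2}{h^3} + \frac{1}{h} = \text{const.}
```

The taper profile h(a) is chosen so that the geometry factor m stays constant, which makes G_I depend on the applied load alone and decouples it from the hard-to-measure crack length; the paper's point is that pre-crack quality, loading rate and side-groove length can break this nominal constancy in practice.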

Keywords: polymers, autonomous healing, fracture, tapered double cantilever beam

Procedia PDF Downloads 351
1081 Contrast Enhancement of Color Images with Color Morphing Approach

Authors: Javed Khan, Aamir Saeed Malik, Nidal Kamel, Sarat Chandra Dass, Azura Mohd Affandi

Abstract:

Low contrast images can result from wrong image acquisition settings or poor illumination conditions. Such images may not be visually appealing and can be difficult for feature extraction. Contrast enhancement of color images can be useful in the medical area for visual inspection. In this paper, a new technique is proposed to improve the contrast of color images. The RGB (red, green, blue) color image is transformed into the normalized RGB color space. The adaptive histogram equalization technique is applied to each of the three channels of the normalized RGB color space. The corresponding channels in the original (low contrast) image and in the contrast-enhanced image obtained with adaptive histogram equalization (AHE) are then morphed together in proper proportions. The proposed technique is tested on seventy color images of acne patients. The results of the proposed technique are analyzed using the cumulative variance and contrast improvement factor measures. The results are also compared with decorrelation stretch. Both subjective and quantitative analyses demonstrate that the proposed technique outperforms the other techniques.
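A minimal sketch of the channel-wise equalize-and-morph idea follows. It is an assumption-laden illustration: a simple global histogram equalization stands in for the paper's adaptive (tile-based) AHE, and random low-contrast pixels stand in for the acne images.

```python
import numpy as np

def hist_equalize(channel):
    """Global histogram equalization for one uint8 channel
    (a simple stand-in for the tile-based AHE used in the paper)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    return (cdf[channel] * 255).astype(np.uint8)

def morph_contrast(img, alpha=0.5):
    """Blend each original channel with its equalized normalized-RGB
    counterpart in proportion alpha (the 'morphing' step)."""
    # Normalized RGB: each channel divided by the per-pixel channel sum
    s = np.maximum(img.sum(axis=2, keepdims=True).astype(float), 1.0)
    norm8 = (img / s * 255).astype(np.uint8)
    eq = np.dstack([hist_equalize(norm8[..., c]) for c in range(3)])
    return ((1 - alpha) * img + alpha * eq).astype(np.uint8)

rng = np.random.RandomState(0)
low_contrast = rng.randint(100, 140, (64, 64, 3), dtype=np.uint8)  # narrow range
out = morph_contrast(low_contrast, alpha=0.6)
print(out.shape, out.dtype)
```

The blending proportion `alpha` plays the role of the paper's "proper proportions"; in practice it would be tuned against the cumulative variance and contrast improvement factor measures.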

Keywords: contrast enhancement, normalized RGB, adaptive histogram equalization, cumulative variance

Procedia PDF Downloads 376
1080 Analyzing Mexican Adaptation of Shakespeare: A Study of Onstage Violence in Richard III and Its Impact on Mexican Viewers

Authors: Nelya Babynets

Abstract:

Shakespeare and Mexican theatregoers have enjoyed quite a complex relationship. Shakespearean plays have appeared on the Mexican stage with remarkable perseverance, yet with mixed success. Although Shakespeare has long been a part of the global cultural marketplace and his works are celebrated all around the world, the adaptation of his plays on the contemporary Mexican stage is always an adventure, since the works of this early modern author are frequently seen as the legacy of a ‘high’, but obsolete, culture, one that is quite distant from the present-day viewers’ daily experiences and concerns. Moreover, Mexican productions of Shakespeare are presented mostly in Peninsular Spanish, a language similar yet alien to the language spoken in Mexico, one that does not wholly fit into the viewers’ cultural praxis. This is the reason why Mexican dramatic adaptations of Shakespearean plays tend to replace the cultural references of the original piece with ones that are more significant and innate to Latin American spectators. This paper analyses the new Mexican production of Richard III adapted and directed by Mauricio Garcia Lozano, which employs onstage violence - a cultural force that is inherent to all human beings regardless of their beliefs, ethnic background or nationality - as the means to make this play more relevant to a present-day audience. Thus, this paper addresses how the bloody bombast of staged murders helps to avoid the tyranny of a rigid framework of fixed meanings that denies the possibility of an intercultural appropriation of this European play written over four hundred years ago. The impact of violence displayed in Garcia Lozano’s adaptation of Richard III on Mexican audiences will also be examined. This study is particularly relevant in Mexico where the term ‘tragedy’ has become a commonplace and where drug wars and state-sanctioned violence have already taken the lives of many people.

Keywords: audience, dramatic adaptation, Shakespeare, viewer

Procedia PDF Downloads 459
1079 Multimedia Data Fusion for Event Detection in Twitter by Using Dempster-Shafer Evidence Theory

Authors: Samar M. Alqhtani, Suhuai Luo, Brian Regan

Abstract:

Data fusion technology can be the best way to extract useful information from multiple sources of data, and it has been widely applied in various applications. This paper presents a multimedia data fusion approach for event detection in Twitter using Dempster-Shafer evidence theory. The methodology applies a mining algorithm to detect the event. There are two types of data in the fusion. The first is features extracted from text using the bag-of-words method, calculated with the term frequency-inverse document frequency (TF-IDF). The second is visual features extracted by applying the scale-invariant feature transform (SIFT). The Dempster-Shafer theory of evidence is applied in order to fuse the information from these two sources. Our experiments have indicated that, compared to approaches using an individual data source, the proposed data fusion approach can increase the prediction accuracy of event detection. The experimental results showed that the proposed method achieved a high accuracy of 0.97, compared with 0.93 with texts only and 0.86 with images only.
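Dempster's rule of combination, which performs the fusion step, can be sketched as below. The mass values are hypothetical stand-ins for the outputs of the text (TF-IDF) and image (SIFT) classifiers, not numbers from the paper.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets; conflicting mass is renormalised away."""
    combined, conflict = {}, 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2          # mass assigned to disjoint sets
    k = 1.0 - conflict                       # normalisation constant
    return {s: w / k for s, w in combined.items()}

E, N = frozenset({"event"}), frozenset({"no_event"})
theta = E | N                                # frame of discernment
# Hypothetical masses from the two modalities (some mass left on theta
# expresses each classifier's residual uncertainty)
m_text = {E: 0.7, N: 0.2, theta: 0.1}
m_image = {E: 0.6, N: 0.3, theta: 0.1}
fused = dempster_combine(m_text, m_image)
print(round(fused[E], 3))  # → 0.821
```

Because both sources lean towards "event", the fused belief in the event exceeds either individual source's, which is the mechanism behind the accuracy gain reported above.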

Keywords: data fusion, Dempster-Shafer theory, data mining, event detection

Procedia PDF Downloads 410
1078 “Ethical Porn” and the Right to Withdraw Consent

Authors: Nathan Elvidge

Abstract:

This paper offers a philosophical argument against the possibility of so-called “ethical porn,” that is, pornographic material produced in a way attempting to remain consistent with feminist principles and female empowerment. One key feature of such material is the requirement for the material to be consensual on the part of the actors or those involved in the material. However, in the contemporary pornography industry, this typically amounts to a single historic act of consent given in exchange for a lump-sum payment which grants the producer lifetime property rights over the explicit material. This paper argues that, by the lights of feminist principles, this situation is inherently unjust and that, as a consequence, the pornography industry requires a radical systematic upheaval before any material produced within it can be considered genuinely ethical. These feminist principles require that for the consumption of pornography to be genuinely ethical, the actors must consent not only to the acts recorded in the material but also to the consumption of that material. This paper argues that this consent to consumption should be treated as on par with other matters of sexual consent and, therefore, that actors should have the right to withdraw consent to the consumption of their material. From this, it is argued to follow that the system of third-party ownership of property rights over someone else’s sexually explicit material legally nullifies this right and therefore is inherently unjust.

Keywords: consent, feminism, pornography, sex work

Procedia PDF Downloads 116
1077 A Local Invariant Generalized Hough Transform Method for Integrated Circuit Visual Positioning

Authors: Wei Feilong

Abstract:

In this study, a local invariant generalized Hough transform (LI-GHT) method is proposed for integrated circuit (IC) visual positioning. The original generalized Hough transform (GHT) is robust to external noise; however, it is not suitable for the visual positioning of IC chips because the four-dimensionality (4D) of its parameter space leads to substantial storage requirements and high computational complexity. The proposed LI-GHT method can reduce the dimensionality of the parameter space to 2D thanks to the rotational invariance of the local invariant geometric feature, and it can estimate the position and rotation angle of IC chips accurately and in real time under the influence of noise and blur. The experimental results show that the proposed LI-GHT can estimate the position and rotation angle of IC chips with high accuracy and at high speed. The proposed LI-GHT algorithm was implemented in the IC visual positioning system of radio frequency identification (RFID) packaging equipment.
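A minimal sketch of the classical R-table GHT voting that LI-GHT builds on is given below. The edge points and gradient angles are a toy, hand-assigned example; the paper's local invariant feature and rotation handling are not reproduced here, only the 2D (x, y) accumulator that LI-GHT preserves instead of the 4D space.

```python
import numpy as np
from collections import defaultdict

def build_r_table(edge_points, gradients, centroid):
    """Classical GHT R-table: index displacement vectors to the shape
    centroid by quantised gradient orientation (degrees)."""
    table = defaultdict(list)
    for (x, y), g in zip(edge_points, gradients):
        phi = int(np.degrees(g)) % 360
        table[phi].append((centroid[0] - x, centroid[1] - y))
    return table

def vote(edge_points, gradients, table, shape):
    """Accumulate votes for the reference point over a 2D (x, y) space;
    keeping this accumulator 2D is the LI-GHT's key storage gain."""
    acc = np.zeros(shape, dtype=int)
    for (x, y), g in zip(edge_points, gradients):
        phi = int(np.degrees(g)) % 360
        for dx, dy in table.get(phi, []):
            cx, cy = x + dx, y + dy
            if 0 <= cx < shape[0] and 0 <= cy < shape[1]:
                acc[cx, cy] += 1
    return acc

# Toy model: 4 "edge points" of a square with distinct gradient angles
pts = [(10, 10), (10, 20), (20, 10), (20, 20)]
grads = [np.pi / 4, -np.pi / 4, 3 * np.pi / 4, -3 * np.pi / 4]
table = build_r_table(pts, grads, centroid=(15, 15))
acc = vote(pts, grads, table, (32, 32))
peak = np.unravel_index(acc.argmax(), acc.shape)
print(peak, acc[peak])  # all four points vote for the true centroid
```

In the full LI-GHT, the gradient orientation would be replaced by a rotation-invariant local geometric feature, so the same 2D voting locates the chip regardless of its rotation, with the angle recovered afterwards.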

Keywords: integrated circuit visual positioning, generalized Hough transform, local invariant generalized Hough transform, IC packaging equipment

Procedia PDF Downloads 264
1076 Gender and Sustainable Rural Tourism: A Study into the Experiences and the Roles of Local Women in the Sundarbans Area of Bangladesh

Authors: Jakia Rajoana

Abstract:

The key aim of this research is to achieve sustainable rural tourism (SRT) through women’s empowerment in the Sundarbans area of Bangladesh. Women in rural areas of developing countries depend on biomass for their survival and that of their families, yet they have unequal access to resources and decision making, which makes them more vulnerable to any changes in the environment. Women in developing countries experience culturally embedded gender inequality, resulting in women having less access to and control over financial and material resources and information, as well as a lack of recognition of their contribution compared to men. Their disadvantaged social position is augmented by their extreme poverty and the little or no power they have over their own lives, vis-à-vis the disproportionate burden they bear in reproduction and child-raising. Despite the significance of gender-related issues in sustainable rural tourism (SRT), research on them remains rather scant. For instance, there is very little research illustrating the role of women in tourism in the Sundarbans area. Thus, empirically, this research seeks to fill a significant gap by focusing on rural areas and, in particular, on a considerably under-researched group, namely Sundarbans women and their role in tourism. In order to fully comprehend their experiences and life stories, this research applies empowerment theory alongside the research on sustainable rural tourism, since women’s empowerment can act as a potential tool for SRT development; it also examines the role tourism plays in the lives of Sundarbans women. Methodologically, this study follows a qualitative research design using an ethnographic approach. Participant observation, semi-structured interviews and documentation will be the primary data collection instruments in four communities – Shayamnagar, Koyra, Mongla and Sarankhola – in the Sundarbans area. It is hoped that by focusing on the life stories of these invisible women, the research will be better able to engage with the nuances inherent in marginal and significantly under-researched communities.

Keywords: gender, sustainable rural tourism, women empowerment, Sundarbans

Procedia PDF Downloads 299
1075 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use in today's low background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and could potentially compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when the identification of radionuclides, as well as their activity concentrations, is practiced and high precision is a necessity. In measurements of this nature, in order to be able to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to account for and cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and well-described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of the models of two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, falling under an average statistical uncertainty of ∼ 4.6% for the XtRa and ∼ 1.8% for the BEGe detector within the energy ranges of 59.4−1836.1 keV and 59.4−1212.9 keV, respectively.
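The experimental FEP efficiency against which such simulated models are verified is typically ε = N / (t · A · p_γ), with the certified source activity decay-corrected to the measurement date. The sketch below uses standard Cs-137 nuclear data (30.08 y half-life, 661.7 keV line with emission probability ≈ 0.851); the activity and count numbers are hypothetical, not from the paper.

```python
import math

def decay_factor(half_life_days, elapsed_days):
    """Fraction of the certified activity remaining at measurement time."""
    return math.exp(-math.log(2) * elapsed_days / half_life_days)

def fep_efficiency(net_counts, live_time_s, activity_bq, emission_prob,
                   decay_corr=1.0):
    """Experimental full-energy-peak efficiency:
    epsilon = N / (t * A * p_gamma), with an optional decay correction
    applied to the certified source activity."""
    return net_counts / (live_time_s * activity_bq * emission_prob * decay_corr)

# Hypothetical Cs-137 point source measured 2 years after certification
A0 = 37000.0                                   # certified activity, Bq
f = decay_factor(30.08 * 365.25, 2 * 365.25)   # ~30.08 y half-life
eff = fep_efficiency(net_counts=5.0e5, live_time_s=3600.0,
                     activity_bq=A0, emission_prob=0.851, decay_corr=f)
print(round(eff, 5))
```

In the verification step, this experimental ε at each source line is compared against the Geant4-simulated FEP efficiency; the crystal-to-window distance and dead layer thicknesses are then adjusted until the two agree within the quoted uncertainties.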

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 119
1074 Improved Performance in Content-Based Image Retrieval Using Machine Learning Approach

Authors: B. Ramesh Naik, T. Venugopal

Abstract:

This paper presents a novel approach that improves the high-level semantics of images based on machine learning. Contemporary approaches for image retrieval and object recognition include Fourier transforms, wavelets, SIFT and HoG. Though these descriptors are helpful in a wide range of applications, they exploit zero-order statistics and thus lack high descriptiveness of image features. These descriptors usually take advantage of primitive visual features such as shape, color, texture and spatial location to describe images. Such features are not adequate to describe the high-level semantics of images. This leads to a semantic gap that causes unacceptable performance in image retrieval systems. A novel method, referred to as discriminative learning, has been proposed; it is derived from a machine learning approach that efficiently discriminates image features. The analysis and results of the proposed approach were validated thoroughly on the WANG and Caltech-101 databases. The results proved that this approach is very competitive in content-based image retrieval.
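As a hedged illustration of what a supervised discriminative transform can add over raw descriptors, the sketch below uses scikit-learn's `NeighborhoodComponentsAnalysis` as a stand-in for the paper's discriminative learning method, with synthetic vectors in place of HoG/SIFT descriptors from WANG or Caltech-101.

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis, NearestNeighbors

rng = np.random.RandomState(0)
# Synthetic stand-in for low-level descriptors of 3 semantic classes
X = np.vstack([rng.normal(c, 1.0, (30, 50)) for c in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 30)

# Supervised discriminative transform: pulls same-class images together,
# narrowing the gap between low-level features and semantic classes
nca = NeighborhoodComponentsAnalysis(n_components=5, random_state=0)
X_disc = nca.fit_transform(X, y)

# Retrieval: nearest neighbours of a query in the learned space
nn = NearestNeighbors(n_neighbors=5).fit(X_disc)
_, idx = nn.kneighbors(X_disc[:1])
# Fraction of retrieved images sharing the query's semantic class
precision = float(np.mean(y[idx[0]] == y[0]))
print(X_disc.shape, precision)
```

Retrieval precision in the learned space is the quantity a CBIR evaluation would report per query; the paper's method differs in how the discriminative objective is defined, but the pipeline shape is the same.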

Keywords: CBIR, discriminative learning, region weight learning, scale invariant feature transforms

Procedia PDF Downloads 181
1073 A Critical Exploration of Dominant Perspectives Regarding Inclusion and Disability: Shifts Toward Meaningful Approaches

Authors: Luigi Iannacci

Abstract:

This study critically explores how inclusion and disability are presently, and problematically, configured within education. As such, the pedagogies, discourses and practices that shape this configuration are examined in order to forward a reconceptualization of disability as it relates to education and the inclusion of students with special needs in mainstream classroom contexts. The study examines how the dominant medical/deficit model of disability positions students with special needs and advocates a shift towards a social/critical model of disability as applied to education and classrooms. This is demonstrated through a critical look at how language, processes and ‘interventions’ name and address the deficits people who have a disability are presumed to have and, as such, conceptualize these deficits as inherent flaws in need of ‘fixing.’ The study will demonstrate the shifts in thinking, language and practice required to forward a critical/social model of disability. The ultimate aim of this research is to offer a much-needed reconceptualization of inclusion that recognizes disability as epistemology, identity and diversity through a critical exploration of the dominant discourses that impact language, policy, instruction and, ultimately, the experiences students with disabilities have within mainstream classrooms. The presentation seeks to explore disability as neurodiversity and therefore elucidate how people with disabilities can demonstrate these ways of knowing within inclusive education that avoids superficial approaches unresponsive to their needs. This research is therefore of interest and use to educators teaching at the elementary, secondary and in-service levels, as well as graduate students and scholars working in the areas of inclusion, special education and literacy. Ultimately, the presentation attempts to foster a social justice- and human rights-focused approach to inclusion that is responsive to students with disabilities and, as such, ensures a reconceptualization of the present language, understandings and practices that continue to configure disability in problematic ways.

Keywords: inclusion, disability, critical approach, social justice

Procedia PDF Downloads 75