Search results for: computer video game
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3634

574 Perceived and Performed E-Health Literacy: Survey and Simulated Performance Test

Authors: Efrat Neter, Esther Brainin, Orna Baron-Epel

Abstract:

Background: Connecting end-users to newly developed ICT technologies and channeling patients to new products requires an assessment of compatibility. End users' assessment is conveyed in the concept of eHealth literacy. The study examined the association between perceived and performed eHealth literacy (EHL) in an age-heterogeneous sample in Israel. Methods: Participants included 100 Israeli adults (mean age 43, SD 13.9) who were first interviewed by phone and then tested on a computer simulation of health-related Internet tasks. Performed, perceived, and evaluated EHL were assessed. Levels of successful task completion represented performed EHL, while evaluated EHL included observed motivation, confidence, and the amount of help provided. Results: The skills of accessing, understanding, appraising, applying, and generating new information showed decreasing rates of successful completion as task complexity increased. Generating new information, though highly correlated with all other skills, was the skill least correlated with the rest. Perceived and performed EHL were correlated (r=.40, P=.001), while facets of performance (i.e., digital literacy and EHL) were highly correlated (r=.89, P<.001). Participants low and high in performed EHL differed significantly: low performers were older, had attained less education, had used the Internet for less time, and perceived themselves as less healthy. They also encountered more difficulties, required more assistance, were less confident in their conduct, and exhibited less motivation than high performers. Conclusions: The association in this age-heterogeneous sample was larger than in previous age-homogeneous samples. The moderate association between perceived and performed EHL indicates that the two are associated yet distinct, the latter requiring separate assessment. Features of future rapid tools for assessing performed EHL are discussed.

Keywords: eHealth, health literacy, performance, simulation

Procedia PDF Downloads 214
573 Hybrid Approach for Face Recognition Combining Gabor Wavelet and Linear Discriminant Analysis

Authors: A. Annis Fathima, V. Vaidehi, S. Ajitha

Abstract:

Face recognition systems find many applications in surveillance and human-computer interaction. Since these applications are of much importance and demand high accuracy, more robustness is expected of the face recognition system, with less computation time. In this paper, a hybrid approach for face recognition combining Gabor Wavelet and Linear Discriminant Analysis (HGWLDA) is proposed. The normalized input grayscale image is approximated and reduced in dimension to lower the processing overhead for the Gabor filters. This image is convolved with a bank of Gabor filters with varying scales and orientations. LDA, a subspace analysis technique, is used to reduce the intra-class space and maximize the inter-class space. The variants used are 2-dimensional Linear Discriminant Analysis (2D-LDA), 2-dimensional bidirectional LDA ((2D)²-LDA), and weighted 2-dimensional bidirectional Linear Discriminant Analysis (Wt(2D)²-LDA). LDA reduces the feature dimension by extracting the features with greater variance. A k-Nearest Neighbour (k-NN) classifier is used to classify and recognize the test image by comparing its features with each of the training set features. The HGWLDA approach is robust against illumination conditions, as the Gabor features are illumination invariant. This approach also aims at a better recognition rate using fewer features for varying expressions. The performance of the proposed HGWLDA approach is evaluated using the AT&T database, the MIT-India face database, and the faces94 database. It is found that the proposed HGWLDA approach provides better results than the existing Gabor approach.
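
For illustration, a minimal sketch of the Gabor-to-LDA-to-k-NN pipeline described above, assuming scikit-image and scikit-learn are available; the paper's 2D-LDA variants are replaced here by classical vectorized LDA, and the filter-bank scales and downsampling factor are illustrative assumptions:

```python
# Minimal sketch: Gabor filter bank -> LDA subspace -> k-NN matching.
# Classical (vectorized) LDA stands in for the paper's 2D-LDA variants.
import numpy as np
from skimage.filters import gabor_kernel
from scipy import ndimage
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def gabor_features(image, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Convolve a float grayscale image with a Gabor bank and return the
    concatenated, downsampled responses."""
    feats = []
    for frequency in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            kernel = np.real(gabor_kernel(frequency, theta=theta))
            response = ndimage.convolve(image, kernel, mode='wrap')
            feats.append(response[::4, ::4].ravel())   # reduce dimension
    return np.concatenate(feats)

def train_and_classify(X_train, y_train, X_test):
    """X_*: iterables of same-sized normalized grayscale face images."""
    F_train = np.array([gabor_features(img) for img in X_train])
    F_test = np.array([gabor_features(img) for img in X_test])
    lda = LinearDiscriminantAnalysis()        # shrinks intra-class scatter
    F_train = lda.fit_transform(F_train, y_train)
    F_test = lda.transform(F_test)
    knn = KNeighborsClassifier(n_neighbors=1) # nearest-neighbour matching
    knn.fit(F_train, y_train)
    return knn.predict(F_test)
```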

Keywords: face recognition, Gabor wavelet, LDA, k-NN classifier

Procedia PDF Downloads 449
572 Graphene Based Materials as Novel Membranes for Water Desalination and Boron Separation

Authors: Francesca Risplendi, Li-Chiang Lin, Jeffrey C. Grossman, Giancarlo Cicero

Abstract:

Desalination is one of the most widely employed approaches to supplying water in the context of a rapidly growing global water shortage. However, reverse osmosis (RO), the most popular water filtration method available, still suffers from important drawbacks, such as large energy demands and high process costs. In addition, some serious limitations have been discovered recently; among them, the boron problem seems particularly critical. Boron has been found to have a dual effect on living systems on Earth, and the difference between boron deficiency and boron toxicity levels is quite small. The aim of this project is to develop a new generation of RO membranes based on porous graphene or reduced graphene oxide (rGO), able to remove salts from seawater and to reduce boron concentrations in the permeate to a level that meets drinking or process water requirements, by means of a theoretical approach based on density functional theory and classical molecular dynamics. Computer simulations have been employed to investigate the relationship between the atomic structure of a nanoporous graphene or rGO monolayer and its membrane properties in RO applications (i.e., water permeability and resilience at RO pressures). In addition, emphasis has been given to membranes based on multilayer nanoporous rGO and rGO flakes. By means of non-equilibrium MD simulations, we investigated the transport mechanism of water permeating through such multilayer membranes, focusing on the effect of slit widths and sheet geometries. These simulations allowed us to establish the potential of these graphene-based materials as promising membranes for desalination plants and for boron filtration.
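
As an aside on how such non-equilibrium MD results are typically reduced to a membrane figure of merit, the following back-of-envelope sketch converts a molecule-crossing count into a permeability; every input number is a made-up placeholder, not a value from the paper:

```python
# Illustrative conversion from an NEMD observable (water molecules crossing
# the membrane) to a permeability in L m^-2 h^-1 bar^-1. All inputs below
# are hypothetical placeholders.
N_A = 6.022e23            # Avogadro's number, 1/mol
M_WATER = 18.015e-3       # molar mass of water, kg/mol
RHO_WATER = 1000.0        # density of liquid water, kg/m^3

crossings = 250           # permeated water molecules (hypothetical)
sim_time_ns = 10.0        # simulated time, ns (hypothetical)
area_nm2 = 25.0           # membrane area, nm^2 (hypothetical)
delta_p_bar = 1000.0      # applied pressure difference, bar (typical NEMD value)

volume_m3 = crossings / N_A * M_WATER / RHO_WATER   # permeated volume
time_h = sim_time_ns * 1e-9 / 3600.0
area_m2 = area_nm2 * 1e-18
permeability = (volume_m3 * 1000.0) / (area_m2 * time_h * delta_p_bar)
print(f"permeability ~ {permeability:.0f} L m^-2 h^-1 bar^-1")
```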

Keywords: boron filtration, desalination, graphene membrane, reduced graphene oxide membrane

Procedia PDF Downloads 272
571 Hydrological Modelling of Geological Behaviours in Environmental Planning for Urban Areas

Authors: Sheetal Sharma

Abstract:

Runoff, decreasing water levels, and poor recharge in urban areas have become a complex issue nowadays, pointing to defective urban design and increasing demography as causes. Very little has been discussed or analysed regarding water-sensitive urban master plans or local area plans. Land use planning deals with land transformation from natural areas into developed ones, which leads to changes in the natural environment. Elaborated knowledge of the relationship between existing patterns of land use-land cover and recharge, with respect to the prevailing soil below, is scarce compared to the speed of development. The parameters of incompatibility between urban functions and the functions of the natural environment are multiplying. Changes in land patterns due to built-up areas, pavements, roads, and similar land cover seriously affect surface water flow. They also change the permeability and absorption characteristics of the soil. Urban planners need to know natural processes along with modern means and the best technologies available, as there is a huge gap between basic knowledge of natural processes and its application in development planning that aims at minimum impact on water recharge. The present paper analyzes the variations in land use-land cover and their impacts on surface flows and sub-surface recharge in the study area. The methodology adopted was to analyse the changes in land use and land cover using GIS and AutoCAD Civil 3D. The variations were used in computer modeling with the Storm Water Management Model (SWMM) to find the runoff for various soil groups and the resulting recharge, observing water levels in POW data for the last 40 years of the study area. Results were analyzed again to find the best correlations for sustainable recharge in urban areas.
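
By way of illustration, runoff for different hydrologic soil groups can be sketched with the SCS curve-number relation, one of the infiltration options available in SWMM; the curve numbers and storm depth below are textbook-style assumptions, not values from the study:

```python
# Runoff depth for different hydrologic soil groups via the SCS curve-number
# relation (one infiltration option in SWMM). CN values are typical textbook
# figures for urban land covers, chosen only as an example.
def scs_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff depth (mm) from rainfall depth p_mm for curve number cn."""
    s = 25400.0 / cn - 254.0          # potential maximum retention, mm
    ia = 0.2 * s                      # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm + 0.8 * s)

# Example: a 75 mm storm over three land covers
for label, cn in [("open lawn, soil group B", 61),
                  ("residential, soil group C", 83),
                  ("paved/impervious", 98)]:
    print(f"{label:28s} CN={cn:2d} -> runoff = {scs_runoff(75.0, cn):5.1f} mm")
```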

Keywords: geology, runoff, urban planning, land use-land cover

Procedia PDF Downloads 289
570 Competency and Strategy Formulation in Automobile Industry

Authors: Chandan Deep Singh

Abstract:

These days, companies face rapid competition in terms of customer requirements to be satisfied, new technologies to be integrated into future products, new safety regulations to be followed, and new computer-based tools to be introduced into design activities, which become more scientific. In today's highly competitive market, survival focuses on various factors such as quality, innovation, adherence to standards, and rapid response as the basis for competitive advantage. For competitive advantage, companies have to develop various competencies: for improving the capability of suppliers and for strengthening the process of integrating technology. For more competitiveness, organizations should operate in a strategy-driven way and have a strategic architecture for developing core competencies. Traditional ways of gaining such experience and developing competencies tend to take a lot of time and are expensive. A new learning environment, built around a gaming engine, supports the development of competencies in specific subject areas. Technology competencies have a significant role in firm innovation and competitiveness; they interact with the competitive environment. Technological competencies vary according to the type of competitive environment, thus enhancing firm innovativeness. Technological competency is gained through extensive experimentation and learning in research, development, and employment in manufacturing. This is a review paper on competency and the strategic success of the automobile industry. The aim here is to study strategy formulation and competency tools in the industry, through a review of 34 papers from the literature on competency and strategy in the automobile industry.

Keywords: manufacturing competency, strategic success, competitiveness, strategy formulation

Procedia PDF Downloads 288
569 Orthogonal Metal Cutting Simulation of Steel AISI 1045 via Smoothed Particle Hydrodynamic Method

Authors: Seyed Hamed Hashemi Sohi, Gerald Jo Denoga

Abstract:

Machining, or metal cutting, is one of the most widely used production processes in industry. The quality of the process and of the resulting machined product depends on parameters like tool geometry, material, and cutting conditions. However, the relationships of these parameters to the cutting process are often based mostly on empirical knowledge. In this study, computer modeling and simulation using LS-DYNA software and a Smoothed Particle Hydrodynamic (SPH) methodology were performed on the orthogonal metal cutting process to analyze the three-dimensional deformation of AISI 1045 medium carbon steel during machining. The simulation was performed using the following constitutive models: the Power Law model, the Johnson-Cook model, and the Zerilli-Armstrong (Z-A) model. The outcomes were compared against the simulated results obtained by Cenk Kiliçaslan using the Finite Element Method (FEM) and the empirical results of Jaspers and Filice. The analysis shows that the SPH method combined with the Zerilli-Armstrong constitutive model is a viable alternative for simulating the metal cutting process. The tangential force was overestimated by 7%, and the normal force was underestimated by 16% when compared with empirical values. The simulated values of flow stress versus strain at various temperatures were also validated against empirical values. The SPH method using the Z-A model has also proven to be robust against issues of time-scaling. Experimental work was also done to investigate the effects of friction, rake angle, and tool tip radius on the simulation.
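
Of the three constitutive models compared, the Johnson-Cook model has a compact closed form; a minimal sketch follows, with AISI 1045 parameters taken as typical literature values (an assumption, not necessarily the set used in the study):

```python
# Flow stress from the Johnson-Cook constitutive model, one of the three
# models compared above. The AISI 1045 parameters are typical literature
# values and are assumptions here.
import math

def johnson_cook_stress(strain, strain_rate, T,
                        A=553.1e6, B=600.8e6, n=0.234,   # Pa, Pa, -
                        C=0.0134, m=1.0,
                        eps0=1.0, T_room=293.0, T_melt=1733.0):
    """sigma = (A + B*eps^n) * (1 + C*ln(epsdot/eps0)) * (1 - T*^m), in Pa,
    with homologous temperature T* = (T - T_room) / (T_melt - T_room)."""
    T_star = (T - T_room) / (T_melt - T_room)
    return ((A + B * strain ** n)
            * (1.0 + C * math.log(strain_rate / eps0))
            * (1.0 - T_star ** m))

# Example: flow stress at 20% strain, 1000 1/s strain rate, 500 K
print(f"{johnson_cook_stress(0.2, 1e3, 500.0) / 1e6:.0f} MPa")
```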

Keywords: metal cutting, smoothed particle hydrodynamics, constitutive models, experimental, cutting forces analyses

Procedia PDF Downloads 237
568 The Morphological and Morphometrical Evaluation of the Bores That Transmit Emissary Veins in Terms of Surgery

Authors: Fikri Turk, Sahika Pinar Akyer, Mevci Ozdemir, Mehmet Bulent Ozdemir, Ilgaz Akdogan

Abstract:

Complications such as bleeding, thrombosis, and air embolism, which depend on injuries to emissary veins, are often encountered in surgery. Detailed descriptions of the mastoid foramen, occipital foramen, parietal foramen, posterior condylar canal, and foramen vesalius are lacking in the literature. For this reason, the purpose of our study was to explore and represent the morphology and morphometry of these emissary foramina in order to prevent complications and to guide surgeons. The present study was made on 60 dry human skulls in the laboratories of Pamukkale University, Faculty of Medicine, Department of Anatomy. After the emissary foramina were photographed with a Canon 650D professional camera, they were evaluated and measured on a computer with the MATLAB program. The overall prevalence of the mastoid foramen was 90.52%, of the occipital foramen 72.52%, of the parietal foramen 42.85%, of the posterior condylar canal 91.25%, and of the foramen vesalius 78.26%. The mean diameter of the mastoid foramen was 1.81±0.76 mm, of the occipital foramen 1.20±0.25 mm, of the parietal foramen 1.49±0.46 mm, of the posterior condylar canal 2.83±1.33 mm, and of the foramen vesalius 1.74±0.60 mm. Distances between the emissary foramina and fixed bony landmarks were measured. Emissary veins are important in clinical practice and surgical procedures because they act as a route for the spread of extracranial infection to intracranial structures, they may be a source of significant bleeding during surgery of the skull, and they can be a source of thrombosis and air embolism. Detailed anatomical knowledge of these veins and foramina may help to prevent complications and to guide surgeons.

Keywords: emissary foramina, mastoid foramen, occipital foramen, parietal foramen, posterior condylar canal, foramen vesalius, morphology, morphometry

Procedia PDF Downloads 340
567 The Influence of Cultural Perceptions in the Preference and Choice of STEM Programs

Authors: Priscilla Adoley Moffat

Abstract:

This study explored perceptions rooted in and acquired from the cultures of many developing countries and how they impact applicants' preferences and choices of STEM programs. The context of developing countries was chosen because gender role socialization continues to hold an important place in most of these cultures. The study's relevance rests in the fact that, as the world takes steps to encourage and promote the choice and study of STEM programs, especially among females, there is a need to understand cultural perceptions of particular programs of study, especially STEM programs, which carry diverse gender attributions in many developing cultures. Also, as the world strives to achieve gender equity in education, such a study provides a useful understanding of the underlying cultural factors that affect the study program preferences of applicants, particularly in developing countries like Ghana and others in Africa. The study analyzed the admission application data of five public universities in Ghana. 1600 randomly sampled final-year students of 32 randomly selected senior high schools from the 16 regions of Ghana were interviewed. Since parents and teachers often guide and influence the study program choices of applicants, the study also examined the perceptions of 180 teachers and 360 parents. The study found, among other things, that STEM programs are commonly perceived to pose much more difficulty to females than to males. As a result, many female applicants are discouraged from choosing these programs. While nursing programs are perceived more as programs for females, with the justification that females are better caregivers, males are perceived to make better medical doctors, engineers, and computer technicians. Thus, many females are less encouraged to choose Technology and Engineering programs.

Keywords: culture, perceptions, STEM, choice, preference

Procedia PDF Downloads 55
566 A Relative Entropy Regularization Approach for Fuzzy C-Means Clustering Problem

Authors: Ouafa Amira, Jiangshe Zhang

Abstract:

Clustering is an unsupervised machine learning technique; its aim is to extract the data structures, in which similar data objects are grouped in the same cluster, whereas dissimilar objects are grouped in different clusters. Clustering methods are widely utilized in different fields, such as image processing, computer vision, and pattern recognition. Fuzzy c-means clustering (FCM) is one of the best known fuzzy clustering methods. It is based on solving an optimization problem in which a given cost function is minimized. This minimization aims to decrease the dissimilarity inside clusters, where dissimilarity is measured by the distances between data objects and cluster centers. The degree of belonging of a data point to a cluster is measured by a membership function whose values lie in the interval [0, 1]. In FCM clustering, the membership degree is constrained by the condition that the sum of a data object's memberships in all clusters must equal one. This constraint can cause several problems, especially when the data objects lie in a noisy space. Regularization approaches have become part of the fuzzy c-means clustering technique; they introduce additional information in order to solve an ill-posed optimization problem. In this study, we focus on regularization by the relative entropy approach, where in our optimization problem we aim to minimize the dissimilarity inside clusters. Finding an appropriate membership degree for each data object is our objective, because an appropriate membership degree leads to an accurate clustering result. Our clustering results on synthetic data sets, Gaussian-based data sets, and real-world data sets show that the proposed model achieves good accuracy.
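
A compact sketch of entropy-regularized fuzzy c-means follows; with a relative entropy term weighted by a coefficient gamma added to the FCM cost, the membership update takes a softmax-like closed form. The value of gamma and the toy data are assumptions for illustration:

```python
# Entropy-regularized fuzzy c-means: alternating closed-form updates of
# memberships (softmax of negative squared distances) and centers
# (membership-weighted means). gamma is a tunable assumption.
import numpy as np

def entropy_fcm(X, n_clusters, gamma=1.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        # squared distances between every point and every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        # membership update: u_ik proportional to exp(-d_ik^2 / gamma),
        # shifted by the row minimum for numerical stability
        u = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / gamma)
        u /= u.sum(axis=1, keepdims=True)      # rows sum to one
        centers = (u.T @ X) / u.sum(axis=0)[:, None]
    return u, centers

# toy example: two Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
u, c = entropy_fcm(X, n_clusters=2)
print(np.round(c, 2))
```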

Keywords: clustering, fuzzy c-means, regularization, relative entropy

Procedia PDF Downloads 242
565 An End-to-end Piping and Instrumentation Diagram Information Recognition System

Authors: Taekyong Lee, Joon-Young Kim, Jae-Min Cha

Abstract:

A piping and instrumentation diagram (P&ID) is an essential design drawing describing the interconnection of process equipment and the instrumentation installed to control the process. P&IDs are modified and managed throughout the whole life cycle of a process plant. For ease of data transfer, P&IDs are generally handed over from a design company to an engineering company in portable document format (PDF), which is hard to modify. Therefore, engineering companies have to devote a great deal of time and human resources solely to manually converting P&ID images into a computer-aided design (CAD) file format. To reduce the inefficiency of P&ID conversion, the various symbols and texts in P&ID images should be recognized automatically. However, recognizing information in P&ID images is not an easy task. A P&ID image usually contains hundreds of symbol and text objects. Most objects are quite small compared to the size of the whole image and are densely packed together. Traditional recognition methods based on geometrical features are not capable enough to recognize every element of a P&ID image. To overcome these difficulties, state-of-the-art deep learning models, RetinaNet and the connectionist text proposal network (CTPN), were used to build a system for recognizing symbols and texts in a P&ID image. Using the RetinaNet and CTPN models, carefully modified and tuned for a P&ID image dataset, the developed system recognizes texts, equipment symbols, piping symbols, and instrumentation symbols from an input P&ID image and saves the recognition results in a pre-defined extensible markup language (XML) format. In a test using a commercial P&ID image, the P&ID information recognition system correctly recognized 97% of the symbols and 81.4% of the texts.
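
A rough sketch of the symbol-detection stage is given below, using torchvision's stock RetinaNet; the authors' modified and tuned network, their class list, confidence threshold, and exact XML schema are not given in the abstract, so those details are assumptions here:

```python
# Symbol detection with torchvision's stock RetinaNet plus a minimal XML dump.
# The paper's tuned network, class list, threshold and schema are not public;
# the input filename, 0.5 threshold and XML layout below are assumptions.
import torch, torchvision
from torchvision.transforms.functional import to_tensor
from xml.etree.ElementTree import Element, SubElement, tostring
from PIL import Image

model = torchvision.models.detection.retinanet_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("pid_sheet.png").convert("RGB")   # hypothetical input file
with torch.no_grad():
    pred = model([to_tensor(image)])[0]              # boxes, labels, scores

root = Element("pid_symbols")
for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
    if score < 0.5:                                  # confidence cut (assumed)
        continue
    e = SubElement(root, "symbol", {"class": str(int(label)),
                                    "score": f"{float(score):.3f}"})
    e.text = ",".join(f"{v:.1f}" for v in box.tolist())  # x1,y1,x2,y2
print(tostring(root, encoding="unicode"))
```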

Keywords: object recognition system, P&ID, symbol recognition, text recognition

Procedia PDF Downloads 126
564 Developing an Exhaustive and Objective Definition of Social Enterprise through Computer Aided Text Analysis

Authors: Deepika Verma, Runa Sarkar

Abstract:

One of the prominent debates in the social entrepreneurship literature has been whether entrepreneurial work for social well-being by for-profit organizations can be classified as social entrepreneurship. Of late, the scholarship has reached a consensus: there seems little sense in confining social entrepreneurship to non-profit organizations alone. Boosted by this research, a growing number of businesses engaged in filling the social infrastructure gaps in developing countries are calling themselves social enterprises. These organizations are diverse in their ownership, size, objectives, operations, and business models. The lack of a comprehensive definition of social enterprise leads to three issues. Firstly, researchers may find it difficult to create a database of social enterprises, because the choice of an entity as a social enterprise becomes subjective or rests on pre-defined parameters set by the researcher, which is not replicable. Secondly, practitioners who use 'social enterprise' in their vision or mission statement(s) may find it difficult to adjust their business models accordingly, especially when they face the dilemma of choosing social well-being over business viability. Thirdly, social enterprise and social entrepreneurship attract a lot of donor funding and venture capital; in the absence of a comprehensive definitional guide, donors and investors may find it difficult to assign grants and investments. It therefore becomes necessary to develop an exhaustive and objective definition of social enterprise and to examine whether the understandings of academicians and practitioners match. This paper develops a dictionary of words often associated with social enterprise and/or social entrepreneurship. It further compares two lexicographic definitions of social enterprise imputed from the abstracts of academic journal papers and trade publications extracted from the EBSCO database, using the 'tm' package in the R software.
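
The study builds its dictionary with the 'tm' package in R; purely as an analogue, the sketch below imputes a lexicographic definition as the highest-frequency terms of each corpus, using scikit-learn in Python, with two tiny placeholder corpora standing in for the EBSCO abstracts:

```python
# Python analogue of the corpus comparison: impute a "lexicographic
# definition" as each corpus's top terms. The two corpora are placeholders.
from sklearn.feature_extraction.text import CountVectorizer

academic_abstracts = ["social enterprise blends social mission with trading revenue",
                      "hybrid organizations pursue social value and financial viability"]
trade_articles = ["social enterprises reinvest profit into community impact",
                  "a social enterprise sells products to fund its social mission"]

def top_terms(corpus, k=5):
    vec = CountVectorizer(stop_words="english")
    counts = vec.fit_transform(corpus).sum(axis=0).A1   # total term frequencies
    terms = vec.get_feature_names_out()
    return sorted(zip(terms, counts), key=lambda t: -t[1])[:k]

print("academic:", top_terms(academic_abstracts))
print("trade:   ", top_terms(trade_articles))
```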

Keywords: EBSCO database, lexicographic definition, social enterprise, text mining

Procedia PDF Downloads 362
563 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant

Authors: Michael Smalenberger

Abstract:

Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate the parameters necessary for the implementation of the artificial intelligence component and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and the probability of mastery, none directly and reliably asks the user to self-assess them. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage, while transaction-level data are scant. Once a user's transaction-level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings relate to the number of exercises necessary to reach mastery of a knowledge component, the associated implications for learning curves, and their relevance to instruction time.
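
For reference, the standard BKT update with guess and slip parameters is sketched below; the study's idea of seeding the prior from a learner's self-assessment, before transaction data accumulate, appears as the starting value of p. All parameter values are illustrative assumptions:

```python
# Standard Bayesian Knowledge Tracing update after one observed response.
# p_L: prior P(skill learned); p_T: P(learning on this step);
# p_G: P(guess | unlearned); p_S: P(slip | learned). Values are illustrative.
def bkt_update(p_L, correct, p_T=0.1, p_G=0.2, p_S=0.1):
    if correct:
        posterior = p_L * (1 - p_S) / (p_L * (1 - p_S) + (1 - p_L) * p_G)
    else:
        posterior = p_L * p_S / (p_L * p_S + (1 - p_L) * (1 - p_G))
    return posterior + (1 - posterior) * p_T   # chance of learning this step

p = 0.3   # prior seeded, e.g., from the student's self-assessment (assumed)
for outcome in [True, True, False, True, True]:
    p = bkt_update(p, outcome)
    print(f"after {'correct' if outcome else 'wrong':7s}: P(mastery) = {p:.3f}")
```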

Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation

Procedia PDF Downloads 150
562 Aerodynamic Analysis by Computational Fluid Dynamics in Building: Case Study

Authors: Javier Navarro Garcia, Narciso Vazquez Carretero

Abstract:

Eurocode 1, part 1-4, wind actions, includes in its article 1.5 the possibility of using numerical calculation methods to obtain information on the loads acting on a building. On the other hand, analysis using computational fluid dynamics (CFD) is already in widespread use in aerospace, aeronautical, and industrial applications. Applying CFD-based techniques to a building to study its aerodynamic behavior now opens a whole alternative field of possibilities for civil engineering and architecture: optimization of the results with respect to those obtained by applying the regulations, the possibility of obtaining pressures and velocities at any point of the model at any moment, the analysis of turbulence, and the possibility of modeling any geometry or configuration. The present work compares the aerodynamic results obtained on a building from a mathematical model based on CFD analysis with the results obtained by applying Eurocode 1, part 1-4, wind actions. It is verified that the results obtained by CFD techniques represent an optimization of the wind action acting on the building with respect to the wind action obtained by applying Eurocode 1, part 1-4, wind actions. To carry out this verification, a 45 m high truncated-pyramid building with a square base has been taken. The mathematical CFD model, based on finite volumes, has been calculated with the commercial ANSYS Fluent application, using a scale-resolving simulation (SRS) of the large eddy simulation (LES) type as the turbulence model, for an atmospheric boundary layer wind with a turbulent component in the direction of the flow.
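
The regulatory side of such a comparison reduces to the EN 1991-1-4 peak velocity pressure; a minimal sketch at the 45 m roof height follows, where the basic wind speed and terrain parameters are illustrative assumptions, not values from the paper:

```python
# Peak velocity pressure q_p(z) per EN 1991-1-4. Basic wind speed v_b and
# terrain roughness z0 below are illustrative assumptions.
import math

def peak_velocity_pressure(z, v_b, z0=0.3, z0_II=0.05, rho=1.25, c0=1.0):
    k_r = 0.19 * (z0 / z0_II) ** 0.07          # terrain factor
    v_m = k_r * math.log(z / z0) * c0 * v_b    # mean wind velocity, m/s
    I_v = 1.0 / (c0 * math.log(z / z0))        # turbulence intensity (k_I = 1)
    return (1 + 7 * I_v) * 0.5 * rho * v_m ** 2   # Pa

q_p = peak_velocity_pressure(z=45.0, v_b=26.0)    # v_b = 26 m/s assumed
print(f"q_p(45 m) ~ {q_p / 1000:.2f} kN/m^2")
```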

Keywords: aerodynamic, CFD, computational fluid dynamics, computational mechanics

Procedia PDF Downloads 120
561 Intrusion Detection in Cloud Computing Using Machine Learning

Authors: Faiza Babur Khan, Sohail Asghar

Abstract:

With the emergence of distributed environments, cloud computing is proving to be the most stimulating paradigm shift in computer technology, resulting in spectacular expansion of the IT industry. Many companies have augmented their technical infrastructure by adopting cloud resource sharing architecture. Cloud computing has opened doors to unlimited opportunities, from application and platform availability to expandable storage and the provision of computing environments. However, from a security viewpoint, clouds introduce an added level of risk, weakening protection mechanisms and hampering the availability of privacy, data security, and on-demand service. Issues of trust, confidentiality, and integrity are elevated due to the multitenant resource sharing architecture of the cloud. Trust, or reliability, of the cloud refers to its capability to provide the needed services precisely and unfailingly. Confidentiality is the ability of the architecture to ensure that only the relevant party is authorized to access its private data. It also guarantees integrity, protecting the data from being fabricated by an unauthorized user. So, in order to assure the provision of a secure cloud, a roadmap or model is required to analyze a security problem, design mitigation strategies, and evaluate solutions. The aim of the paper is twofold: first, to highlight the factors which make cloud security critical, along with alleviation strategies, and secondly, to propose an intrusion detection model that identifies attackers in a preventive way using a machine learning Random Forest classifier, with an accuracy of 99.8%. This model uses a smaller number of features. A comparison with other classifiers is also presented.
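
A minimal sketch of the classification stage with scikit-learn is shown below; the abstract does not name the dataset or the selected features, so a generic feature matrix with binary labels and a simple univariate feature selector are assumptions here:

```python
# Random Forest intrusion detection with a reduced feature set. The feature
# matrix X and binary labels y (0 = normal, 1 = attack) are assumed inputs;
# SelectKBest stands in for the paper's unspecified feature reduction.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_ids(X, y, n_features=10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=42)
    selector = SelectKBest(f_classif, k=n_features)   # keep few features
    X_tr = selector.fit_transform(X_tr, y_tr)
    X_te = selector.transform(X_te)
    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_tr, y_tr)
    print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.4f}")
    return clf, selector
```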

Keywords: cloud security, threats, machine learning, random forest, classification

Procedia PDF Downloads 296
560 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in everyday life, computer vision, and especially deep convolutional neural network models, has developed rapidly. As real-world visual target detection tasks have grown in complexity and recognition accuracy has improved, target detection network models have also become very large. Huge deep neural network models are not conducive to deployment on edge devices with limited resources, and the timeliness of network model inference is poor. In this paper, knowledge distillation is used to compress a huge and complex deep neural network model, and the knowledge contained in the complex network model is comprehensively transferred to another, lightweight network model. Different from traditional knowledge distillation methods, we propose a novel knowledge distillation approach that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for target detection, the knowledge of the soft-target output of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the hidden layers of the teacher network are all transferred to the student network as knowledge. At the same time, we also introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to make up for the huge difference between them. Finally, this paper adds an exploration module to the traditional knowledge distillation teacher-student network model, so that the student network model not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics. Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that our proposed network model achieves substantial improvements in both speed and accuracy.
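
Of the transferred knowledge, the soft-target component has a classic closed form; a PyTorch sketch follows. M-KD additionally transfers layer relations and hidden-layer attention maps, which are omitted here, and the temperature and weighting are illustrative assumptions:

```python
# Soft-target distillation loss: temperature-scaled KL divergence between
# teacher and student outputs, mixed with ordinary cross-entropy. Only this
# classic term is sketched; M-KD's other terms are omitted.
import torch
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  F.softmax(teacher_logits / T, dim=1),
                  reduction="batchmean") * (T * T)   # T^2 restores gradient scale
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# toy usage
s = torch.randn(8, 10, requires_grad=True)   # student logits
t = torch.randn(8, 10)                       # teacher logits (no gradient)
y = torch.randint(0, 10, (8,))
print(soft_target_loss(s, t, y).item())
```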

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 251
559 The Effect of Using Mobile Listening Applications on Listening Skills of Iranian Intermediate EFL Learners

Authors: Mahmoud Nabilu

Abstract:

The present study explored the effect of using mobile listening applications on the development of listening skills by Iranian intermediate EFL learners. Fifty male intermediate English learners, aged between 15 and 20, participated in the study. The participants were placed in two groups on the basis of their scores on a placement test. They were thus homogenized in terms of general proficiency, and the groups were assigned as one experimental group and one control group. The experimental group received the treatment, which was using mobile applications to develop their listening skills, while the control group received traditional methods. The research data were obtained from 40-item multiple-choice tests administered as a pre-test and a post-test. The results of the t-test clearly revealed that the learners in the experimental group performed better on the post-test than on the pre-test. This implies that using a mobile application for developing listening skills as a treatment was effective in helping the language learners perform better on the post-test. Moreover, a statistically significant difference was found between the post-test scores of the two groups: the mean of the experimental group was greater than that of the control group. The participants were Iranian and from one Iranian language institute, so care should be taken when generalizing the results to learners of other nationalities. Nevertheless, in the researcher's view, the findings of this study have valuable implications for teachers and learners, methodologists and syllabus designers, linguists, and MALL/CALL (mobile/computer-assisted language learning) experts, raising awareness of an effective technique for developing listening skills and making language learning more efficient for learners.
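
The reported comparisons rest on independent-samples t-tests; a minimal SciPy illustration follows, with fabricated score lists standing in for the study's data:

```python
# Independent-samples t-test on post-test scores, as used in such designs.
# The score lists are fabricated stand-ins, not the study's data.
from scipy import stats

experimental_post = [28, 31, 27, 33, 30, 29, 32, 26, 31, 30]   # hypothetical
control_post      = [24, 26, 23, 27, 25, 22, 26, 24, 25, 23]   # hypothetical

t, p = stats.ttest_ind(experimental_post, control_post)
print(f"t = {t:.2f}, p = {p:.4f}")   # p < .05 -> groups differ significantly
```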

Keywords: mobile listening applications, intermediate EFL learners, MALL, CALL

Procedia PDF Downloads 165
558 Text Mining of Veterinary Forums for Epidemiological Surveillance Supplementation

Authors: Samuel Munaf, Kevin Swingler, Franz Brülisauer, Anthony O’Hare, George Gunn, Aaron Reeves

Abstract:

Web scraping and text mining are popular computer science methods deployed by public health researchers to augment traditional epidemiological surveillance. However, within veterinary disease surveillance, such techniques are still in the early stages of development and have not yet been fully utilised. This study presents an exploration into the utility of incorporating internet-based data to better understand the smallholder farming communities within Scotland, using online text extraction and the subsequent mining of these data. Web scraping of livestock fora was conducted in conjunction with text mining of the data in search of common themes, words, and topics found within the text. Results from bi-grams and topic modelling uncover four main topics of interest within the data pertaining to aspects of livestock husbandry: feeding, breeding, slaughter, and disposal. These topics were found amongst both the poultry and pig sub-forums. Topic modelling appears to be a useful method of unsupervised classification for this form of data, as it has produced clusters that relate to biosecurity and animal welfare. Internet data can be a very effective tool in aiding traditional veterinary surveillance methods, but the requirement for human validation of such data is crucial. This opens avenues of research via the incorporation of other dynamic social media data, namely Twitter and Facebook/Meta, in addition to time series analysis to highlight temporal patterns.
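
A rough sketch of the bi-gram extraction and topic-modelling steps follows, using scikit-learn's LDA implementation; the three placeholder posts stand in for the scraped livestock-fora text, and the topic count simply mirrors the four themes reported:

```python
# Bi-gram counting plus LDA topic modelling on forum text. The posts are
# placeholders for the scraped data; four topics mirror the reported themes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = ["advice on feeding layers through winter, best poultry feed mix",
         "home slaughter rules for pigs and safe carcass disposal",
         "breeding stock selection for smallholder pig keepers"]

vec = CountVectorizer(ngram_range=(1, 2), stop_words="english")  # uni/bi-grams
X = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=4, random_state=0)
lda.fit(X)
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```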

Keywords: veterinary epidemiology, disease surveillance, infodemiology, infoveillance, smallholding, social media, web scraping, sentiment analysis, geolocation, text mining, NLP

Procedia PDF Downloads 70
557 Development of a Telemedical Network Supporting an Automated Flow Cytometric Analysis for the Clinical Follow-up of Leukaemia

Authors: Claude Takenga, Rolf-Dietrich Berndt, Erling Si, Markus Diem, Guohui Qiao, Melanie Gau, Michael Brandstoetter, Martin Kampel, Michael Dworzak

Abstract:

In patients with acute lymphoblastic leukaemia (ALL), treatment response is increasingly evaluated with minimal residual disease (MRD) analyses. Flow cytometry (FCM) is a fast and sensitive method to detect MRD. However, the interpretation of these multi-parametric data requires intensive operator training and experience. This paper presents pipeline software as a ready-to-use FCM-based MRD-assessment tool for daily clinical practice with ALL patients. The new tool increases the accuracy of FCM-MRD assessment in samples which are difficult to analyse by conventional operator-based gating, since computer-aided analysis potentially has a superior resolution due to utilization of the whole multi-parametric FCM data space at once, instead of step-wise, two-dimensional plot-based visualization. The system, developed as a telemedical network, reduces the workload and laboratory costs, as well as the staff time needed for training, continuous quality control, and operator-based data interpretation. It allows dissemination of automated FCM-MRD analysis to medical centres which have no established expertise, for the benefit of an even larger community of diseased children worldwide. We established a telemedical network system for the analysis, clinical follow-up, and treatment monitoring of leukaemia. The system is scalable and adapted to link several centres and laboratories worldwide.

Keywords: data security, flow cytometry, leukaemia, telematics platform, telemedicine

Procedia PDF Downloads 950
556 Exploring Utility and Intrinsic Value among UAE Arabic Teachers in Integrating M-Learning

Authors: Dina Tareq Ismail, Alexandria A. Proff

Abstract:

The United Arab Emirates (UAE) is a nation seeking to advance in all fields, particularly education. One area of focus of the UAE 2021 agenda is to restructure UAE schools and universities by equipping them with highly developed technology. The agenda also advises educational institutions to prepare students with applicable and transferrable Information and Communication Technology (ICT) skills. Despite the emphasis on ICT and computer literacy skills, there exist limited empirical data on the use of M-Learning in the literature. This qualitative study explores the motivation of higher-primary Arabic teachers in private schools toward implementing and integrating M-Learning apps in their classrooms. The research employs a phenomenological approach through the use of semi-structured interviews with nine purposefully selected Arabic teachers. The data were analyzed using content analysis via multiple stages of coding: open, axial, and thematic. Findings reveal three primary themes: (1) Arabic teachers with high levels of procedural knowledge in ICT are more motivated to implement M-Learning; (2) Arabic teachers' perceptions of self-efficacy influence their motivation toward implementation of M-Learning; (3) Arabic teachers implement M-Learning when they find high utility and/or intrinsic value in these applications. These findings indicate a strong need for further training, equipping, and creating buy-in among Arabic teachers to enhance their ICT skills in implementing M-Learning. Further, given the limited availability of M-Learning apps designed for use in the Arabic language on the market, it is imperative that developers consider designing M-Learning tools that Arabic teachers, and Arabic-speaking students, can use and access more readily. This study contributes to closing the knowledge gap on teacher motivation for implementing M-Learning in their classrooms in the UAE.

Keywords: ICT skills, m-learning, self-efficacy, teacher-motivation

Procedia PDF Downloads 86
555 Vibration Analysis of Stepped Nanoarches with Defects

Authors: Jaan Lellep, Shahid Mubasshar

Abstract:

A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and that the mechanical behaviour of the nanoarch can be modeled by Eringen's non-local theory of elasticity. Physical and thermal properties are sensitive to changes of dimensions at the nano level, and the classical theory of elasticity is unable to describe such changes in material properties. This is because, during the development of the classical theory of elasticity, consideration of the molecular structure of matter was avoided. Therefore, the non-local theory of elasticity is applied to study the vibration of nanostructures, and it has been accepted by many researchers. In the non-local theory of elasticity, it is assumed that the stress state of the body at a given point depends on the stress state of every point of the structure, whereas within the classical theory of elasticity, the stress state of the body depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations, and constitutive equations with boundary and intermediate conditions. The system of equations is solved using the method of separation of variables. Consequently, the governing differential equations are converted into a system of algebraic equations whose solution exists if the determinant of the coefficient matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is captured with the aid of an additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is developed with the help of computer software. The effects of various physical and geometrical parameters are recorded and plotted graphically.
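
The eigenfrequency search described above can be sketched generically: scan the characteristic determinant for sign changes and refine each bracketed root by bisection. The toy determinant below (the classic clamped-clamped beam equation) stands in for the nanoarch system, whose determinant follows from its boundary and intermediate conditions:

```python
# Generic eigenfrequency search: scan the characteristic determinant for
# sign changes, then refine each bracketed root with brentq. The toy
# determinant is a stand-in for the nanoarch's coefficient-matrix determinant.
import numpy as np
from scipy.optimize import brentq

def char_det(omega):
    # placeholder: classic clamped-clamped beam characteristic equation
    return np.cos(omega) * np.cosh(omega) - 1.0

def eigenfrequencies(f, omega_max=20.0, n_scan=2000):
    grid = np.linspace(1e-6, omega_max, n_scan)
    vals = [f(w) for w in grid]
    roots = []
    for a, b, fa, fb in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if fa * fb < 0:                  # sign change brackets a root
            roots.append(brentq(f, a, b))
    return roots

print([round(w, 4) for w in eigenfrequencies(char_det)])
# -> approx. 4.7300, 7.8532, 10.9956, 14.1372, 17.2788
```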

Keywords: crack, nanoarches, natural frequency, step

Procedia PDF Downloads 100
554 Zarit Burden Interview among Informal Caregivers of Persons with Dementia: A Systematic Review and Meta-Analysis

Authors: Nuraisyah H. Zulkifley, Suriani Ismail, Rosliza Abdul Manaf, Poh Y. Lim

Abstract:

Taking care of a person with dementia (PWD) is one of the most problematic and challenging caregiving situations. Without proper support, caregivers must deal with impacts of caregiving that can lead to caregiver burden. One of the most common tools used to measure caregiver burden among caregivers of PWD is the Zarit Burden Interview (ZBI). A systematic review was conducted by searching the Medline, Science Direct, Cochrane Library, Embase, PsycINFO, ProQuest, and Scopus databases to identify relevant articles that elaborate on interventions and ZBI outcomes among informal caregivers of PWD. The articles were searched in October 2019 with no restriction on language or publication status. Inclusion criteria were: randomized controlled trial (RCT) studies, participants who were informal caregivers of PWD, ZBI measured as an outcome, and an intervention group compared with a no-intervention or usual-care control. Two authors reviewed and extracted the data from the full-text articles. From a total of 344 records, nine studies were selected and included in this narrative review, and eight studies were included in the meta-analysis. The types of interventions implemented to ease caregiver burden were psychoeducation, physical activity, psychosocial, and computer-based interventions. The meta-analysis showed a significant difference in the mean ZBI score (p = 0.006) in the intervention group compared to the control group after the intervention. In conclusion, interventions such as psychoeducation, psychosocial interventions, and physical activity can help to reduce the burden experienced by caregivers of PWD.
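
For orientation, the pooling behind such a meta-analysis is sketched below with DerSimonian-Laird random effects on mean differences; the per-study effects and variances are fabricated examples, not the eight included studies:

```python
# Inverse-variance pooling of mean differences with DerSimonian-Laird
# random effects. Effects and variances are fabricated illustrations.
import math

effects   = [-3.1, -1.8, -4.2, -2.5, -0.9, -3.6, -2.0, -2.8]  # ZBI mean diffs
variances = [1.2, 0.8, 2.0, 1.0, 0.9, 1.5, 1.1, 1.3]

w = [1 / v for v in variances]
fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
tau2 = max(0.0, (Q - (len(effects) - 1))
           / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))   # heterogeneity

w_star = [1 / (v + tau2) for v in variances]
pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
se = math.sqrt(1 / sum(w_star))
print(f"pooled MD = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```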

Keywords: dementia, informal caregiver, randomized control trial, Zarit burden interview

Procedia PDF Downloads 158
553 An Investigation into the Impacts of High-Frequency Electromagnetic Fields Utilized in the 5G Technology on Insects

Authors: Veriko Jeladze, Besarion Partsvania, Levan Shoshiashvili

Abstract:

This paper addresses a highly topical issue. The frequency range 2.5-100 GHz contains frequencies that are already used or will be used in modern 5G technologies. The wavelengths used in 5G systems are close to the body dimensions of small biological objects, particularly insects. Because the dimensions of insect bodies and body parts are comparable with the wavelength at these frequencies, high absorption of EMF energy in the body tissues can occur (body resonance) and can therefore cause harmful effects, possibly the extinction of some species. An investigation into the impact of radio-frequency non-ionizing electromagnetic fields (EMFs) utilized in future 5G on insects is of great importance, as a very high number of 5G network components will increase the total EMF exposure in the environment. All ecosystems of the Earth are interconnected: if one component of an ecosystem is disrupted, the whole system will be affected, which could cause cascading effects. The study of these problems is an important challenge for scientists today because the existing studies are incomplete and insufficient. Consequently, the purpose of this proposed research is to investigate the possible hazardous impact of RF-EMFs (including 5G EMFs) on insects. The project will study the effects of these EMFs on various insects of different body sizes through computer modeling at frequencies from 2.5 to 100 GHz. The selected insects are the honey bee, the wasp, and the ladybug. For this purpose, detailed 3D discrete models of the insects are created for EM and thermal modeling through FDTD, and whole-body Specific Absorption Rates (SAR) will be evaluated at selected frequencies. All these studies represent a novelty. The proposed study will promote new investigations of the bio-effects of 5G EMFs and will contribute to the harmonization of safe exposure levels and frequencies of 5G EMFs.
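
Once the FDTD fields are available, the local SAR evaluation itself is a one-line relation, SAR = σ|E_rms|²/ρ; the tissue parameters in the sketch below are illustrative assumptions, not dielectric data for the insects studied:

```python
# Point SAR from FDTD field output: SAR = sigma * E_rms^2 / rho.
# All tissue parameters below are illustrative assumptions.
sigma = 2.0      # tissue conductivity, S/m (assumed)
rho = 1000.0     # tissue mass density, kg/m^3 (assumed)
E_rms = 50.0     # local RMS electric field, V/m (assumed)

sar = sigma * E_rms ** 2 / rho
print(f"local SAR ~ {sar:.2f} W/kg")
```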

Keywords: electromagnetic field, insect, FDTD, specific absorption rate (SAR)

Procedia PDF Downloads 66
552 Experimental and Numerical Performance Analysis for Steam Jet Ejectors

Authors: Abdellah Hanafi, G. M. Mostafa, Mohamed Mortada, Ahmed Hamed

Abstract:

Steam ejectors are the heart of most desalination systems that employ vacuum. Systems that employ low-grade thermal energy sources, like solar and geothermal energy, use the ejector to drive the system instead of high-grade electric energy. The jet ejector is used to create vacuum by employing the flow of steam or air and exploiting the severe pressure drop at the outlet of the main nozzle. The present work involves developing a one-dimensional mathematical model for designing jet ejectors and transforming it into computer code using the Engineering Equation Solver (EES) software. The model receives the required operating conditions at the inlets and outlet of the ejector as inputs and produces the corresponding dimensions required to reach these conditions. The one-dimensional model has been validated against an existing model operating at the Abu-Qir power station. A prototype has been designed according to the one-dimensional model and attached to a special test bench to be tested before use in the solar desalination pilot plant. The tested ejector will be responsible for the start-up evacuation of the system and for adjusting the vacuum of the evaporating effects. The tested prototype has shown good agreement with the results of the code. In addition, a numerical analysis has been applied to one of the designed geometries, on the one hand to give an image of the pressure and velocity distributions inside the ejector, and on the other hand to show the difference in results between the two-dimensional ideal gas model and the real prototype. The commercial edition of the ANSYS Fluent v.14 software is used to solve the two-dimensional axisymmetric case.

Keywords: solar energy, jet ejector, vacuum, evaporating effects

Procedia PDF Downloads 591
551 Voting Representation in Social Networks Using Rough Set Techniques

Authors: Yasser F. Hassan

Abstract:

Social networking involves the use of an online platform or website that enables people to communicate, usually for a social purpose, through a variety of services, most of which are web-based and offer opportunities for people to interact over the internet, e.g., via e-mail and instant messaging. This paper analyzes the voting behavior and ratings of judges on popular comments in social networks. While most of the party literature omits the electorate, this paper presents a model where elites and parties are emergent consequences of the behavior and preferences of voters. Research in artificial intelligence and psychology has provided powerful illustrations of the way in which the emergence of intelligent behavior depends on the development of representational structure. As opposed to the classical voting system (one person, one decision, one vote), a new voting system is designed in which agents with opposed preferences are endowed with a given number of votes to distribute freely among some issues. The paper uses ideas from machine learning, artificial intelligence, and soft computing to provide a model of the development of a voting system's response in a simulated agent. The modeled development process involves (simulated) processes of evolution, learning, and representation development. The main value of the model is that it provides an illustration of how simple learning processes may lead to the formation of structure. We employ agent-based computer simulation to demonstrate the formation and interaction of coalitions that arise from individual voter preferences. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior.

Keywords: voting system, rough sets, multi-agent, social networks, emergence, power indices

Procedia PDF Downloads 373
550 A Preparatory Method for Building Construction Implemented in a Case Study in Brazil

Authors: Aline Valverde Arroteia, Tatiana Gondim do Amaral, Silvio Burrattino Melhado

Abstract:

During the last twenty years, the construction field in Brazil has evolved significantly in response to market growth and competitiveness. However, this evolution has faced many obstacles, such as cultural barriers and the lack of effort to achieve quality at the construction site. At the same time, much of the information generated in the design or construction phases is lost due to the lack of effective coordination of these activities. Facing this problem, the aim of this research was to implement a method of French origin known by its Portuguese acronym PEO (preparation for building construction), seeking to understand the design management process and its interface with the building construction phase. The research method applied was qualitative, carried out through two case studies in the city of Goiania, in Goias, Brazil. The research was divided into two stages: a pilot study at Company A and the implementation of PEO at Company B. After the implementation, the results demonstrated the PEO method's effectiveness and feasibility as a booster of design management quality. The analysis showed that the method serves to improve the design and to reduce the failures, errors, and rework commonly found in the production of buildings. Therefore, it can be concluded that PEO is feasible to apply in real estate and building companies. However, companies need to believe in the contribution they can make to the discovery of design failures in conjunction with the other stakeholders forming a construction team. The results of PEO can be maximized by adopting the principles of simultaneous engineering and by inserting new computer technologies that use a three-dimensional model of the building within a BIM process.

Keywords: communication, design and construction interface management, preparation for building construction (PEO), proactive coordination (CPA)

Procedia PDF Downloads 137
549 Technical Aspects of Closing the Loop in Depth-of-Anesthesia Control

Authors: Gorazd Karer

Abstract:

When performing a diagnostic procedure or surgery in general anesthesia (GA), proper introduction and dosing of anesthetic agents is one of the main tasks of the anesthesiologist. Depth of anesthesia (DoA), however, also seems to be a suitable process for closed-loop control implementation. To implement such a system, one must be able to acquire the relevant signals online and in real time, as well as stream the calculated control signal to the infusion pump. However, during a procedure, patient monitors and infusion pumps are purposely unable to connect to an external (possibly medically unapproved) device for safety reasons, thus preventing closed-loop control. This paper proposes a conceptual solution to the aforementioned problem. First, it presents some important aspects of contemporary clinical practice. Next, it introduces the closed-loop control system structure and the relevant information flow. Focusing on transferring data from the patient to the computer, it presents a non-invasive, image-based system for signal acquisition from a patient monitor for online depth-of-anesthesia assessment. Furthermore, it introduces a UDP-based communication method that can be used for transmitting the calculated anesthetic inflow to the infusion pump. The proposed system is independent of any medical device manufacturer and is implemented in Matlab-Simulink, which can be conveniently used for DoA control implementation. The proposed scheme has been tested in a simulated GA setting and is ready to be evaluated in an operating theatre. However, the proposed system is only a step towards a proper closed-loop control system for DoA that could routinely be used in clinical practice.
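
A minimal sketch of the UDP leg of the loop is given below; the paper implements this in Matlab-Simulink, so the Python stand-in, the port number, and the plain-text message format are all assumptions:

```python
# UDP transport of the calculated infusion rate. The paper uses
# Matlab-Simulink; this Python stand-in, the address/port and the plain-text
# mL/h message format are assumptions for illustration.
import socket

PUMP_ADDR = ("192.168.0.42", 5005)   # hypothetical infusion-pump gateway

def send_infusion_rate(rate_ml_h: float) -> None:
    """Controller side: stream one calculated control value to the pump."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(f"{rate_ml_h:.2f}".encode("ascii"), PUMP_ADDR)

def receive_infusion_rate(port: int = 5005) -> float:
    """Pump side: block until one rate datagram arrives, return it as float."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", port))
        data, _ = sock.recvfrom(64)
        return float(data.decode("ascii"))

send_infusion_rate(12.5)   # e.g. 12.5 mL/h anesthetic inflow
```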

Keywords: closed-loop control, depth of anesthesia (DoA), modeling, optical signal acquisition, patient state index (PSi), UDP communication protocol

Procedia PDF Downloads 195
548 The Challenges of Cloud Computing Adoption in Nigeria

Authors: Chapman Eze Nnadozie

Abstract:

Cloud computing, a technology made possible through virtualization within networks, represents a shift from the traditional ownership of infrastructure and other resources by distinct organizations to a more scalable pattern in which computer resources are rented online to organizations, either on a pay-as-you-use basis or by subscription. In other words, cloud computing entails the renting of computing resources (such as storage space, memory, servers, applications, networks, etc.) by a third party to its clients on a pay-as-you-go basis. It is a new, innovative technology that is globally embraced because of its renowned benefits, foremost of which is its cost-effectiveness for the organizations engaged with its services. In Nigeria, the services are provided either directly to companies, mostly by key IT players such as Microsoft, IBM, and Google, or in partnership with other players such as Infoware, Descasio, and Sunnet. This enables organizations to rent IT resources on a pay-as-you-go basis, thereby saving them the waste that accrues from the acquisition and maintenance of IT resources such as the ownership of a separate data centre. This paper appraises the challenges of cloud computing adoption in Nigeria, bearing in mind the country's peculiarities in terms of infrastructural development. The methodologies used include research questionnaires, the formulation of hypotheses, and the testing of the formulated hypotheses. The major findings include the fact that there are addressable challenges to the adoption of cloud computing in Nigeria, and that the country will gain significantly if the challenges, especially in the area of infrastructural development, are well addressed. This is because the research established that there are significant gains derivable from the adoption of cloud computing by organizations in Nigeria and that these challenges can be overcome by concerted efforts on the part of government and other stakeholders.

Keywords: cloud computing, data centre, infrastructure, it resources, virtualization

Procedia PDF Downloads 329
547 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping

Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORBSLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame in order to improve the speed and accuracy of feature matching. However, in practice the constant-velocity assumption is often hard to satisfy, which may lead to a large deviation between the obtained initial pose and the true value and to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. To better describe the acceleration of the camera pose, we decoupled the pose transformation matrix and calculated the rotation matrix and the translation vector separately, with the rotation matrix represented by a rotation vector. We assume that, over a short period of time, the changes in angular velocity and in the translation increment remain the same. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model was analyzed theoretically. Finally, we applied our proposed approach to the ORBSLAM3 system and evaluated two sets of sequences from the TUM dataset. The results show that our proposed method provides a more accurate initial pose estimate, improving the accuracy of the ORBSLAM3 system by 6.61% and 6.46% on the two test sequences, respectively.
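
A sketch of the decoupled prediction step follows, assuming SciPy: given the poses of the three most recent frames, the change in the rotation-vector increment and in the translation increment are held constant and the next initial pose is extrapolated. The exact update in the paper may differ; this illustrates the stated assumption:

```python
# Decoupled constant-"acceleration" pose prediction: extrapolate rotation via
# rotation-vector increments and translation via its last two increments.
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_pose(R0, t0, R1, t1, R2, t2):
    """Poses 0,1,2 are the three most recent frames (rotation matrix,
    translation). Returns the predicted pose of the next frame."""
    # relative rotations between consecutive frames, as rotation vectors
    w1 = R.from_matrix(R1 @ R0.T).as_rotvec()
    w2 = R.from_matrix(R2 @ R1.T).as_rotvec()
    w_next = w2 + (w2 - w1)              # constant change in rotation increment
    R_next = R.from_rotvec(w_next).as_matrix() @ R2
    # translation: repeat the last increment plus its change
    v1, v2 = t1 - t0, t2 - t1
    t_next = t2 + v2 + (v2 - v1)
    return R_next, t_next

# toy check: uniformly accelerating translation, no rotation
I = np.eye(3)
_, t = predict_pose(I, np.zeros(3), I, np.array([1., 0, 0]), I, np.array([3., 0, 0]))
print(t)   # -> [6. 0. 0.] (increments 1, 2, then predicted 3)
```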

Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM

Procedia PDF Downloads 66
546 A Two Server Poisson Queue Operating under FCFS Discipline with an ‘m’ Policy

Authors: R. Sivasamy, G. Paulraj, S. Kalaimani, N. Thillaigovindan

Abstract:

For profitable businesses, queues are double-edged swords, and the pain of long wait times in a queue often frustrates customers. This paper suggests a technical way of reducing the pain of lines through a Poisson M/M1, M2/2 queueing system operated by two heterogeneous servers, with the objective of minimising the mean sojourn time of customers served under the queue discipline 'First Come First Served with an m policy' (FCFS-m policy). Arrivals to the system form a Poisson process of rate λ and are served by two exponential servers. The service times of successive customers at server j are independent and identically distributed (i.i.d.) random variables, each exponentially distributed with rate parameter μj (j=1, 2). The primary condition for implementing the FCFS-m policy on these service rates μj (j=1, 2) is that either (m+1)µ2 > µ1 > mµ2 or (m+1)µ1 > µ2 > mµ1 must be satisfied. Further, waiting customers prefer server-1 whenever it becomes available for service, and server-2 should be engaged if and only if the queue length exceeds the threshold value m. Steady-state results on the queue length and waiting time distributions have been obtained. A simple way of tracing the optimal service rate μ*2 of server-2 is illustrated in a specific numerical exercise by equalizing the average queue length cost with the service cost. Assuming that server-1 dynamically adjusts the service rate to μ1 (with μ2=0) while the system size is strictly less than T=(m+2), and to μ1+μ2 (with μ2>0) when the system size is greater than or equal to T, the corresponding steady-state results of M/M1+M2/1 queues have been deduced from those of M/M1, M2/2 queues. To show that this investigation has a viable application, the results for M/M1+M2/1 queues have been used to model the processing of waiting messages in a single computer node and to measure the power consumption of the node.

Keywords: two heterogeneous servers, M/M1, M2/2 queue, service cost and queue length cost, M/M1+M2/1 queue

Procedia PDF Downloads 343
545 Estimation of Particle Size Distribution Using Magnetization Data

Authors: Navneet Kaur, S. D. Tiwari

Abstract:

Magnetic nanoparticles possess fascinating properties which make their behavior unique in comparison to the corresponding bulk materials. Superparamagnetism is one such interesting phenomenon, exhibited only by small particles of magnetic materials. In this state, the thermal energy of the particles becomes greater than their magnetic anisotropy energy, so the particle magnetic moment vectors fluctuate between states of minimum energy. This situation is similar to the paramagnetism of non-interacting ions and is termed superparamagnetism. The magnetization of such systems has been described by the Langevin function. However, the fit parameters estimated in this way are found to be unphysical, because the particle size distribution is not taken into account. In this work, an analysis of magnetization data on NiO nanoparticles is presented that considers the effect of the particle size distribution. NiO nanoparticles of two different sizes are prepared by heating freshly synthesized Ni(OH)₂ at different temperatures. Room-temperature X-ray diffraction patterns confirm the formation of a single phase of NiO. The diffraction lines are quite broad, indicating the nanocrystalline nature of the samples. The average crystallite sizes are estimated to be about 6 and 8 nm. The samples are also characterized by transmission electron microscopy. The magnetization of both samples is measured as a function of temperature and applied magnetic field. Zero-field-cooled and field-cooled magnetization are measured as a function of temperature to determine the bifurcation temperature. The magnetization is also measured at several temperatures in the superparamagnetic region. The data are fitted to an appropriate expression that incorporates a distribution in particle size, following a least-squares fit procedure. The computer codes are written in Python. The presented analysis is found to be very useful for estimating the particle size distribution present in the samples. The estimated distributions are compared with those determined from transmission electron micrographs.
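
A sketch of such an analysis (the authors' codes are in Python) is given below: M(B) is fitted to a Langevin function averaged over a lognormal distribution of particle moments. The synthetic data, starting values, and bounds are illustrative assumptions:

```python
# Fit M(B) to a Langevin function averaged over a lognormal moment
# distribution. Synthetic data stand in for the measured superparamagnetic
# magnetization; all numeric values are illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import quad

K_B = 1.380649e-23   # Boltzmann constant, J/K
MU_B = 9.274e-24     # Bohr magneton, J/T

def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x

def magnetization(B, M_s, mu0, sigma, T=300.0):
    """M(B) for particle moments mu (in Bohr magnetons) with a lognormal
    distribution of median mu0 and width sigma; M_s scales the result."""
    def integrand(mu, b):
        p = (np.exp(-np.log(mu / mu0) ** 2 / (2 * sigma ** 2))
             / (mu * sigma * np.sqrt(2.0 * np.pi)))     # lognormal pdf
        return p * langevin(mu * MU_B * b / (K_B * T))
    return np.array([M_s * quad(integrand, mu0 / 50, mu0 * 50, args=(b,))[0]
                     for b in np.atleast_1d(B)])

# synthetic "measurement", then a least-squares fit for (M_s, mu0, sigma)
B = np.linspace(0.02, 5.0, 20)                     # applied field, tesla
rng = np.random.default_rng(0)
M_obs = magnetization(B, 5.0, 2000.0, 0.5) + rng.normal(0, 0.02, B.size)
popt, _ = curve_fit(magnetization, B, M_obs, p0=[4.0, 1500.0, 0.4],
                    bounds=([0.1, 10.0, 0.05], [100.0, 1e5, 2.0]))
print("M_s, mu0 (Bohr magnetons), sigma =", np.round(popt, 2))
```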

Keywords: anisotropy, magnetization, nanoparticles, superparamagnetism

Procedia PDF Downloads 107