World Academy of Science, Engineering and Technology
[Computer and Information Engineering]
Online ISSN : 1307-6892
2478 A Review of Serious Games Characteristics: Common and Specific Aspects
Authors: B. Ben Amara, H. Mhiri Sellami
Abstract:
Serious games adoption is increasing in multiple fields, including health, education, and business. Likewise, many studies have examined serious games (SGs) for various purposes such as classification, positive impacts, or learning outcomes. Although most of these studies examine SG characteristics (SGCs) to conduct their analyses, to the authors' best knowledge there is no consensus about these features, neither in number nor in description. In this paper, we conduct a literature review to collect essential game attributes regardless of the application area and the study objectives. Firstly, we aimed to define Common SGCs (CSGCs) that characterize the game aspect, by gathering features having the same meaning. Secondly, we tried to identify specific features related to the application area or to the study purpose as the serious aspect. The findings suggest that any type of SG can be defined by a number of CSGCs depicting the gaming side, such as adaptability and rules. In addition, we outlined a number of specific SGCs describing the 'serious' aspect, including specific needs of the domain and intended outcomes. In conclusion, our review showed that it is possible to bridge the research gap due to the lack of consensus by using CSGCs. Moreover, these features facilitate the design and development of successful serious games in any domain and provide a foundation for further research in this area.
Keywords: serious game characteristics, serious games common aspects, serious games features, serious games outcomes
Procedia PDF Downloads 136
2477 Frequency Decomposition Approach for Sub-Band Common Spatial Pattern Methods for Motor Imagery Based Brain-Computer Interface
Authors: Vitor M. Vilas Boas, Cleison D. Silva, Gustavo S. Mafra, Alexandre Trofino Neto
Abstract:
Motor imagery (MI) based brain-computer interfaces (BCI) use event-related (de)synchronization (ERS/ERD), typically recorded using electroencephalography (EEG), to translate brain electrical activity into control commands. To mitigate undesirable artifacts and measurement noise in EEG signals, methods based on band-pass filters defined over a specific frequency band (i.e., 8-30 Hz), such as Infinite Impulse Response (IIR) filters, are typically used. Spatial techniques, such as Common Spatial Patterns (CSP), are also used to estimate the variance of the filtered signal and extract features that characterize the imagined movement. The effectiveness of CSP depends on the subject's discriminative frequency, and approaches based on decomposing the band of interest into sub-bands with smaller frequency ranges (SBCSP) have been suggested for EEG signal classification. However, despite providing good results, the SBCSP approach generally increases the computational cost of the filtering step in MI-based BCI systems. This paper proposes the use of the Fast Fourier Transform (FFT) algorithm in the filtering stage of MI-based BCIs that implement SBCSP. The goal is to apply the FFT algorithm to reduce the computational cost of the processing step of these systems and to make them more efficient without compromising classification accuracy. The proposal is based on representing the EEG signals as a matrix of coefficients resulting from the frequency decomposition performed by the FFT, which is then submitted to the SBCSP process. The SBCSP structure divides the band of interest, initially defined between 0 and 40 Hz, into a set of 33 sub-bands spanning specific frequency ranges, which are processed in parallel, each by a CSP filter and an LDA classifier. A Bayesian meta-classifier is then used to represent the LDA outputs of each sub-band as scores and organize them into a single vector, which is used as a training vector for a global SVM classifier. Initially, the public EEG dataset IIa of BCI Competition IV is used to validate the approach. The first contribution of the proposed method is that, in addition to being more compact, because it has a 68% smaller dimension than the original signal, the resulting FFT matrix maintains the signal information relevant to class discrimination. In addition, the results showed an average reduction of 31.6% in the computational cost compared to filtering methods based on IIR filters, suggesting FFT efficiency when applied in the filtering step. Finally, the frequency decomposition approach significantly improves the overall system classification rate compared to the commonly used filtering, going from 73.7% using IIR to 84.2% using FFT. The accuracy improvement above 10% and the computational cost reduction denote the potential of the FFT in EEG signal filtering applied to the context of MI-based BCIs implementing SBCSP. Tests with other datasets are currently being performed to reinforce these conclusions.
Keywords: brain-computer interfaces, fast Fourier transform algorithm, motor imagery, sub-band common spatial patterns
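Illustrative sketch (not from the paper): the following Python code shows the core of an FFT-based sub-band CSP pipeline, in which epochs are reduced to FFT coefficients inside a sub-band, CSP spatial filters are computed per band, and log-variance features feed an LDA. Data shapes, band limits and the number of CSP pairs are assumptions; the 33-band layout, the Bayesian meta-classifier and the global SVM of the proposed method are omitted.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def fft_subband(epochs, fs, f_lo, f_hi):
    """Keep only FFT coefficients inside [f_lo, f_hi) Hz for each epoch/channel."""
    coeffs = np.fft.rfft(epochs, axis=-1)                  # (n_epochs, n_channels, n_bins)
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return np.real(coeffs[..., mask]), np.imag(coeffs[..., mask])

def csp_filters(X, y, n_pairs=2):
    """Classic two-class CSP via a generalized eigenvalue problem."""
    covs = []
    for c in np.unique(y):
        Xc = X[y == c]
        C = np.mean([x @ x.T / np.trace(x @ x.T) for x in Xc], axis=0)
        covs.append(C)
    w, V = eigh(covs[0], covs[0] + covs[1])                # eigenvalues in ascending order
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return V[:, idx].T                                     # (2*n_pairs, n_channels)

def subband_features(epochs, y, fs, band):
    """Compact FFT representation of one sub-band followed by CSP log-variance features."""
    re, im = fft_subband(epochs, fs, *band)
    X = np.concatenate([re, im], axis=-1)                  # coefficients play the role of samples
    W = csp_filters(X, y)
    Z = np.einsum('fc,ecs->efs', W, X)
    return np.log(np.var(Z, axis=-1))

# Hypothetical per-band usage; in the proposed method the per-band LDA scores are
# fused by a Bayesian meta-classifier and a global SVM, which are omitted here.
# feats = subband_features(epochs, y, fs=250, band=(8, 12))
# lda = LinearDiscriminantAnalysis().fit(feats, y)
```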
Procedia PDF Downloads 128
2476 Automatic Number Plate Recognition System Based on Deep Learning
Authors: T. Damak, O. Kriaa, A. Baccar, M. A. Ben Ayed, N. Masmoudi
Abstract:
In the last few years, Automatic Number Plate Recognition (ANPR) systems have become widely used in safety, security, and commercial applications. Accordingly, several methods and techniques have been developed to achieve better levels of accuracy and real-time execution. This paper proposes a computer vision algorithm for Number Plate Localization (NPL) and Character Segmentation (CS). In addition, it proposes an improved method for Optical Character Recognition (OCR) based on Deep Learning (DL) techniques. To identify the number on the detected plate after the NPL and CS steps, a Convolutional Neural Network (CNN) algorithm is proposed. A DL model is developed using four convolution layers, two max-pooling layers, and six fully connected layers. The model was trained on a number-image database on the Jetson TX2 NVIDIA target. The achieved accuracy is 95.84%.
Keywords: ANPR, CS, CNN, deep learning, NPL
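Illustrative sketch (not from the paper): a Keras model with the stated layout of four convolution layers, two max-pooling layers and six fully connected layers. The filter counts, kernel sizes, the 32x32 grayscale input and the 36-class output (digits plus letters) are assumptions, since the abstract does not specify them.

```python
from tensorflow.keras import layers, models

def build_plate_cnn(input_shape=(32, 32, 1), n_classes=36):
    """Character classifier with four convolution layers, two max-pooling layers
    and six fully connected layers; all hyper-parameters are illustrative."""
    m = models.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.Conv2D(32, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(512, activation='relu'),
        layers.Dense(256, activation='relu'),
        layers.Dense(128, activation='relu'),
        layers.Dense(64, activation='relu'),
        layers.Dense(64, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),   # sixth fully connected layer
    ])
    m.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return m

# model = build_plate_cnn()
# model.fit(x_train, y_train, epochs=20, validation_split=0.1)
```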
Procedia PDF Downloads 306
2475 Adaptive Multiple Transforms Hardware Architecture for Versatile Video Coding
Authors: T. Damak, S. Houidi, M. A. Ben Ayed, N. Masmoudi
Abstract:
The Versatile Video Coding (VVC) standard is currently under development by the Joint Video Exploration Team (JVET). An Adaptive Multiple Transforms (AMT) approach was announced; it is based on different transform modules that provide efficient coding. However, the AMT solution raises several issues, especially regarding the complexity of the selected set of transforms. This can be an important issue, particularly for future industrial adoption. This paper proposes an efficient hardware implementation of the most used transform in the AMT approach: the DCT-II. The developed circuit is adapted to different block sizes and can reach a minimum frequency of 192 MHz, allowing an optimized execution time.
Keywords: adaptive multiple transforms, AMT, DCT II, hardware, transform, versatile video coding, VVC
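Illustrative sketch (not from the paper): a floating-point software reference model of the N-point DCT-II, of the kind commonly used to verify a hardware DCT-II circuit across block sizes. The actual VVC/AMT transforms use scaled integer approximations, which are not reproduced here.

```python
import numpy as np

def dct2_matrix(N):
    """Orthonormal N-point DCT-II transform matrix (floating-point reference model)."""
    k = np.arange(N).reshape(-1, 1)      # frequency index
    n = np.arange(N).reshape(1, -1)      # sample index
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    C[0, :] /= np.sqrt(2.0)              # DC row scaling for orthonormality
    return C

def dct2_2d(block):
    """Separable 2-D DCT-II of a square residual block (e.g. 4x4 up to 32x32)."""
    C = dct2_matrix(block.shape[0])
    return C @ block @ C.T

# Example: transform an assumed 8x8 residual block.
residual = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dct2_2d(residual)
```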
Procedia PDF Downloads 146
2474 Analysis of Network Performance Using Aspect of Quantum Cryptography
Authors: Nisarg A. Patel, Hiren B. Patel
Abstract:
Quantum cryptography is described as a point-to-point secure key generation technology that has emerged in recent times to provide absolute security. Researchers have started studying new innovative approaches to exploit the security of Quantum Key Distribution (QKD) for large-scale communication systems. A number of approaches and models for the utilization of QKD for secure communication have been developed. The uncertainty principle in quantum mechanics created a new paradigm for QKD. One of the approaches for the use of QKD involved network-fashioned security. The main goal was a point-to-point quantum network that exploited QKD technology for end-to-end network security via high-speed QKD. Other approaches and models equipped with QKD in a network fashion have also been introduced in the literature. A different approach, which this paper deals with, is using QKD in existing protocols that are widely used on the Internet to enhance security, with the main objective of unconditional security. Our work is towards the analysis of QKD in mobile ad-hoc networks (MANET).
Keywords: cryptography, networking, quantum, encryption and decryption
Procedia PDF Downloads 184
2473 Development and Evaluation of Virtual Basketball Game Using Motion Capture Technology
Authors: Shunsuke Aoki, Taku Ri, Tatsuya Yamazaki
Abstract:
These days, along with the development of e-sports, video games as a competitive sport are attracting attention. However, in many cases the action on the screen does not match the player's real motion. Incorporating player motion is needed to increase the reality and excitement of sports games. Therefore, in this study, the authors propose a method to recognize player motion by using motion capture technology and develop a virtual basketball game. The virtual basketball game consists of a screen with nine targets, players, depth sensors, and no ball. The players mimic a two-handed basketball shot without a ball, aiming at one of the nine targets on the screen. Time-series data of the three-dimensional coordinates of player joints are captured by the depth sensor. Twenty joints are measured for each player to estimate the shooting motion in real time. The trajectory of the thrown virtual ball is calculated based on the time-series data, and hitting the target is judged as success or failure. The virtual basketball game can be played by two to four players as a competition among the players. The developed game was exhibited to the public for evaluation on the authors' university open campus days. A total of 339 visitors participated in the exhibition and enjoyed the virtual basketball game over the two days. A questionnaire survey on the developed game was conducted among the visitors who experienced it. As a result of the survey, about 97.3% of the players found the game interesting, regardless of whether they had experienced actual basketball before or not. In addition, it was found that women felt comfortable performing the shooting motion. The virtual game with motion capture technology has the potential to become a universal form of entertainment bridging e-sports and actual sports.
Keywords: basketball, motion capture, questionnaire survey, video game
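Illustrative sketch (not from the paper): one simple way to turn captured joint data into a virtual ball flight is to estimate a release velocity from the last wrist samples and propagate an ideal projectile, then test proximity to the chosen target. The gravity model, the hit radius and all variable names are assumptions.

```python
import numpy as np

G = np.array([0.0, -9.81, 0.0])  # gravity (m/s^2), y axis pointing up

def release_velocity(wrist_xyz, dt):
    """Estimate the release velocity from the last two wrist samples (finite difference)."""
    return (wrist_xyz[-1] - wrist_xyz[-2]) / dt

def ball_position(p0, v0, t):
    """Ideal projectile motion of the virtual ball."""
    return p0 + v0 * t + 0.5 * G * t ** 2

def hits_target(p0, v0, target_center, radius=0.25, t_max=3.0, dt=0.01):
    """Judge the shot a success if the trajectory passes within `radius` of the target."""
    for t in np.arange(0.0, t_max, dt):
        if np.linalg.norm(ball_position(p0, v0, t) - target_center) < radius:
            return True
    return False
```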
Procedia PDF Downloads 126
2472 Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation
Authors: Hamed Alqahtani, Manolya Kavakli-Thorne
Abstract:
Large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, conventional approaches for pose-independent representation fail to provide quality results when recognizing faces with large pose variations. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information, followed by synthesis of a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space for carrying out factorization on the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and comparison with the state of the art in the field prove that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses.
Keywords: disentanglement, face detection, generative adversarial networks, video surveillance
Procedia PDF Downloads 129
2471 Non-Invasive Data Extraction from Machine Display Units Using Video Analytics
Authors: Ravneet Kaur, Joydeep Acharya, Sudhanshu Gaur
Abstract:
Artificial Intelligence (AI) has the potential to transform manufacturing by improving shop floor processes such as production, maintenance and quality. However, industrial datasets are notoriously difficult to extract in a real-time, streaming fashion, thus negating potential AI benefits. A main example is specialized industrial controllers that are operated by custom software, which complicates connecting them to an Information Technology (IT) based data acquisition network. Security concerns may also limit direct physical access to these controllers for data acquisition. To connect the Operational Technology (OT) data stored in these controllers to an AI application in a secure, reliable and available way, we propose a novel Industrial IoT (IIoT) solution in this paper. In this solution, we demonstrate how video cameras can be installed on a factory shop floor to continuously obtain images of the controller HMIs. We propose image pre-processing to segment the HMI into regions of streaming data and regions of fixed meta-data. We then evaluate the performance of multiple Optical Character Recognition (OCR) technologies such as Tesseract and Google Vision to recognize the streaming data, and test them for typical factory HMIs and realistic lighting conditions. Finally, we use the meta-data to match the OCR output with the temporal, domain-dependent context of the data to improve the accuracy of the output. Our IIoT solution enables reliable and efficient data extraction, which will improve the performance of subsequent AI applications.
Keywords: human machine interface, industrial internet of things, internet of things, optical character recognition, video analytics
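Illustrative sketch (not from the paper): reading one streaming-data region of an HMI frame with OpenCV preprocessing and Tesseract OCR via pytesseract. The region coordinates and preprocessing parameters are assumptions; the meta-data matching step described above is only hinted at in the comments.

```python
import cv2
import pytesseract

def read_hmi_value(frame, roi):
    """Crop one streaming-data region of the HMI, clean it up and run OCR on it.
    `roi` = (x, y, w, h) would come from a one-off segmentation of the HMI layout."""
    x, y, w, h = roi
    patch = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    patch = cv2.medianBlur(patch, 3)                                  # suppress camera/compression noise
    _, patch = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(patch, config='--psm 7')       # treat as a single text line
    return text.strip()

# Hypothetical usage: one fixed ROI per numeric field; the raw OCR string would then be
# validated against the fixed meta-data (expected units, value ranges) before being
# pushed to the IIoT pipeline.
# frame = cv2.imread('hmi_snapshot.png')
# value = read_hmi_value(frame, roi=(120, 80, 200, 40))
```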
Procedia PDF Downloads 109
2470 Reframing Service Oriented Architecture Design Principles in Software Design Quality
Authors: Purnomo Yustianto, Robin Doss, Novianto B. Kurniawan Suhardi
Abstract:
Since its inception, the design activities of Service Oriented Architecture (SOA) have been guided by aspects of the Service Design Principles (SDP), such as cohesion, granularity, loose coupling, discoverability, and autonomy. The goal of this paper is twofold. The first is to examine the position of the SDP within the context of software quality, and the second is to reframe the aspects of the SDP into more concise terms and relations. This paper is divided into four parts: after the introduction, a review of related software quality is provided to determine the quality context of the SDP. The third part reviews the original SDP and offers a relation model among the SDP aspects. The fourth part explores the design quality metrics available for SOA and proposes a relationship representing the design quality. Among the aspects of the design principles, cohesion and coupling are determined to be the two most important aspects for achieving reusability of a service.
Keywords: SOA, software quality, service design principle, reusability, cohesion, coupling
Procedia PDF Downloads 171
2469 Identifying Critical Success Factors for Data Quality Management through a Delphi Study
Authors: Maria Paula Santos, Ana Lucas
Abstract:
Organizations support their operations and decision making on the data they have at their disposal, so the quality of these data is remarkably important, and Data Quality (DQ) is currently a relevant issue; the literature is unanimous in pointing out that poor DQ can result in large costs for organizations. The literature review identified and described 24 Critical Success Factors (CSF) for Data Quality Management (DQM), which were presented to a panel of experts who ordered them according to their degree of importance, using the Delphi method with the Q-sort technique, based on an online questionnaire. The study shows that the five most important CSF for DQM are: definition of appropriate policies and standards, control of inputs, definition of a strategic plan for DQ, an organizational culture focused on data quality, and obtaining top management commitment and support.
Keywords: critical success factors, data quality, data quality management, Delphi, Q-sort
Procedia PDF Downloads 217
2468 Fast Adjustable Threshold for Uniform Neural Network Quantization
Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev
Abstract:
Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop of the model, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require simplification and acceleration of the quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile neural network architectures like MobileNet-v1, MobileNet-v2 and MNAS. Here we present a method to significantly optimize the training-with-quantization procedure by introducing trained scale factors for the discretization thresholds that are separate for each filter. Using the proposed technique, we quantize these modern mobile architectures with a training set of only ∼10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size and the small number of trainable parameters allow fine-tuning the network for several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available in the GitHub repository.
Keywords: distillation, machine learning, neural networks, quantization
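Illustrative sketch (not from the paper): uniform symmetric quantization with one scale (threshold) per output filter, expressed in PyTorch so that the scales could be made trainable. The bit width, the initialization from the per-filter maximum and the straight-through fine-tuning loop are assumptions and are not the authors' released code.

```python
import torch

def quantize_per_filter(weight, scale, n_bits=8):
    """Uniform symmetric quantization of a conv weight tensor with one
    (potentially trainable) scale per output filter.
    weight: (out_channels, in_channels, kH, kW), scale: (out_channels,)"""
    qmax = 2 ** (n_bits - 1) - 1
    s = scale.view(-1, 1, 1, 1)
    q = torch.clamp(torch.round(weight / s), -qmax, qmax)
    return q * s                                # de-quantized ("fake quantized") weights

# Initialize scales from the per-filter maximum and refine them by fine-tuning
# (a straight-through estimator would be used for the rounding step) on a small subset.
w = torch.randn(64, 3, 3, 3)
scale = torch.nn.Parameter(w.abs().amax(dim=(1, 2, 3)) / 127.0)
w_q = quantize_per_filter(w, scale)
```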
Procedia PDF Downloads 325
2467 Effects of Repetitive Strain/Stress Injury on the Human Body
Authors: Mohd Abdullah
Abstract:
This review describes some of the effects of repetitive strain/stress injury (RSI) on the human body, especially among computer professionals who today spend extended hours of prolonged sitting in front of a computer day in and day out. The review briefly introduces the main factors that contribute to the increase of RSI among such computer professionals, and discusses how the human spinal column and knees are mainly affected by the onset of RSI, resulting in poor posture. The root and secondary causes and effects of RSI are reviewed. The importance and value of various breathing techniques are reviewed as a way to alleviate some of the effects of RSI. The review concludes with a small sample of suggested office stretches and poses geared towards reducing RSI. Readers will learn about the effects of RSI, as well as ways to cope with it. A better understanding of coping strategies may lead to well-being and a healthier overall lifestyle. Ultimately, the investment of time to connect with oneself through the poses and the power of the breath would promote an overall healthier well-being, resulting in a better ability to cope with and manage life stresses.
Keywords: health, wellness, repetitive, chairs
Procedia PDF Downloads 105
2466 Cybersecurity Protection Structures: The Case of Lesotho
Authors: N. N. Mosola, K. F. Moeketsi, R. Sehobai, N. Pule
Abstract:
The Internet brings increasing use of Information and Communications Technology (ICT) services and facilities. Consequently, new computing paradigms emerge to provide services over the Internet. Although there are several benefits stemming from these services, they pose several risks inherited from the Internet, for example cybercrime, identity theft, and malware. To thwart these risks, this paper proposes a holistic approach involving multidisciplinary interactions. The paper proposes a top-down and bottom-up approach to deal with cybersecurity concerns in developing countries. These concerns range from regulatory and legislative areas, cyber awareness, and research and development to technical dimensions. The main focus areas are highlighted and a cybersecurity model solution is proposed. The paper concludes by combining all relevant solutions into a proposed cybersecurity model to assist developing countries in enhancing a cyber-safe environment and to instill and promote a culture of cybersecurity.
Keywords: cybercrime, cybersecurity, computer emergency response team, computer security incident response team
Procedia PDF Downloads 156
2465 Internet Optimization by Negotiating Traffic Times
Authors: Carlos Gonzalez
Abstract:
This paper describes a system to optimize the use of the Internet by clients requiring downloading of videos at peak hours. The system consists of a web server belonging to a provider of video content, a provider of internet communications, and a software application running on a client's computer. The client using the application software communicates to the video provider a list of the client's future video demands. The video provider calculates which videos are going to be most in demand for download in the immediate future and requests from the internet provider the most optimal hours to do the downloading. The download times are sent to the application software, which uses the information on pre-established hours negotiated between the video provider and the internet provider to download those videos. The videos are saved in a special protected section of the user's hard disk, which is only accessed by the application software on the client's computer. When the client is ready to watch a video, the application searches the list of videos currently existing in that area of the hard disk; if the video exists, it is used directly without the need for internet access. We found that the best way to optimize the download traffic of videos is by negotiation between the internet communication provider and the video content provider.
Keywords: internet optimization, video download, future demands, secure storage
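Illustrative sketch (not from the paper): a toy scheduler that assigns forecast video demands to the off-peak hour slots negotiated between the video provider and the internet provider. The slot capacity, the priority values and the time window are assumptions.

```python
from datetime import datetime, timedelta

def schedule_downloads(video_demands, off_peak_hours, per_hour_capacity=5):
    """Assign forecast video downloads to negotiated off-peak hours.
    `video_demands` is a list of (video_id, priority) pairs,
    `off_peak_hours` a list of datetime slots agreed with the internet provider."""
    ranked = sorted(video_demands, key=lambda d: d[1], reverse=True)
    schedule, slot = [], 0
    for i, (video_id, _) in enumerate(ranked):
        if i and i % per_hour_capacity == 0:
            slot += 1
        if slot >= len(off_peak_hours):
            break                              # remaining videos wait for the next negotiation round
        schedule.append((video_id, off_peak_hours[slot]))
    return schedule

# Hypothetical usage with tomorrow's negotiated 02:00-04:00 window.
start = datetime.now().replace(hour=2, minute=0, second=0, microsecond=0) + timedelta(days=1)
slots = [start + timedelta(hours=h) for h in range(2)]
plan = schedule_downloads([("v17", 0.9), ("v03", 0.7), ("v42", 0.4)], slots)
```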
Procedia PDF Downloads 136
2464 Youth Involvement in Cybercrime in Nigeria: A Case Study of Ikeja Local Government Area
Authors: Niyi Adegoke, Saanumi Jimmy Omolou
Abstract:
The prevalence rate of youths involved in cybercrime is alarming, which calls for concern among the government, parents, NGOs and religious bodies; hence this paper aims at examining youth involvement in cybercrime in Nigeria. Achievement motivation theory was used to explain the activities of cyber-criminals in Nigerian society. A descriptive survey method was adopted for the study. The sample for the study was one hundred and fifty (150) respondents randomly selected from the population of the study. A questionnaire was used to gather information and data from the respondents. Data collected through the questionnaire were analyzed using percentages for the respondents' bio-data, while chi-square was employed to test the hypotheses. Findings from the study revealed that parental negligence, unemployment, peer influence, and the quest for materialism were responsible for cybercrimes in Nigeria. The study concludes with recommendations, among which are creating employment opportunities for the youths and ensuring good governance and accountability, which, among other things, will go a long way towards solving the problem of cybercrime in our society.
Keywords: cybercrime, youth, Nigeria, unemployment, information communication technology
Procedia PDF Downloads 228
2463 Performance Evaluation of Fingerprint, Auto-Pin and Password-Based Security Systems in Cloud Computing Environment
Authors: Emmanuel Ogala
Abstract:
Cloud computing has been envisioned as the next-generation architecture of the Information Technology (IT) enterprise. In contrast to traditional solutions, where IT services are under physical, logical and personnel controls, cloud computing moves the application software and databases to large data centres, where the management of the data and services may not be fully trustworthy. This is due to the fact that the systems are open to the whole world, and as people try to gain access to the system, many others are also trying, day in and day out, to gain unauthorized access. This research contributes to the improvement of cloud computing security for better operation. The work is motivated by two problems: first, the observed easy access to cloud computing resources and the complexity of attacks on vital cloud computing data systems require that dynamic security mechanisms evolve to stay capable of preventing illegitimate access; second, the lack of a good methodology for the performance testing and evaluation of biometric security algorithms for securing records in a cloud computing environment. The aim of this research was to evaluate the performance of an integrated security system (ISS) for securing exam records in a cloud computing environment. In this research, we designed and implemented an ISS consisting of three security mechanisms (biometric fingerprint, auto-PIN and password) combined into one stream of access control and used for securing examination records at Kogi State University, Anyigba. Conclusively, the system we built has been able to overcome the guessing abilities of hackers who guess people's passwords or PINs. We are certain about this because the added security mechanism (fingerprint) requires the presence of the user of the software before login access can be granted, based on the placement of the user's finger on the fingerprint biometric scanner for capture and verification to confirm the user's authenticity. The study adopted a quantitative design, and an object-oriented design methodology was followed. In the analysis and design, PHP, HTML5, CSS, JavaScript, and Web 2.0 technologies were used to implement the model of the ISS for the cloud computing environment: PHP, HTML5 and CSS were used in conjunction with Visual Studio front-end design tools, MySQL and Access 7.0 were used for the back-end engine, and JavaScript was used for object arrangement and validation of user input for security checks. Finally, the performance of the developed framework was evaluated by comparing it with two other existing security systems (auto-PIN and password) within the school, and the results showed that the developed approach (fingerprint) overcomes the two main weaknesses of the existing systems and will work perfectly well if fully implemented.
Keywords: performance evaluation, fingerprint, auto-pin, password-based, security systems, cloud computing environment
Procedia PDF Downloads 140
2462 WhatsApp as Part of a Blended Learning Model to Help Programming Novices
Authors: Tlou J. Ramabu
Abstract:
Programming is one of the challenging subjects in the field of computing. In the higher education sphere, some programming novices' performance, retention rate, and success rate are not improving. Most of the time, the problem is caused by the slow pace of learning, difficulty in grasping the syntax of the programming language and poor logical skills. More importantly, programming forms part of major subjects within the field of computing. As a result, specialized pedagogical methods and innovation are highly recommended. Little research has been done on the potential productivity of the WhatsApp platform as part of a blended learning model. In this article, the authors discuss a WhatsApp group as part of a blended learning model incorporated for a group of programming novices. We discuss possible administrative activities for productive utilisation of the WhatsApp group in the blended learning overview. The aim is to take advantage of the popularity of WhatsApp and the time students spend on it for their educational purposes. We believe that blended learning featuring a WhatsApp group may ease novices' cognitive load and strengthen their foundational programming knowledge and skills. This is a work in progress, as the proposed blended learning model with WhatsApp incorporated is yet to be implemented.
Keywords: blended learning, higher education, WhatsApp, programming, novices, lecturers
Procedia PDF Downloads 172
2461 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has certain drawbacks in that the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy, and this learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides a strong foundation to test our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity and distance from the side pavement. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: when using PPO, the reward is greater, and the acceleration, steering angle and braking are more stable compared to the other algorithms, which means that the agent learns to drive in a better and more efficient way in this case. Additionally, we have produced a dataset taken from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training in the form: (all input values, acceleration, steering angle, brake, loss, reward). This study can serve as a base for further complex road scenarios. Furthermore, it can be extended into the field of computer vision, using the images to find the best policy.
Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
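Illustrative sketch (not from the paper): the tabular Q-learning update rule, one of the three compared algorithms, written against a generic environment interface. TORCS itself has continuous states and actions, so the study necessarily relies on function approximation; this minimal loop only illustrates the temporal-difference update with epsilon-greedy exploration.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Tabular Q-learning over discrete, hashable states; `env` is assumed to expose
    reset() -> state, step(action) -> (next_state, reward, done) and a list `actions`."""
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:                      # epsilon-greedy exploration
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            best_next = max(Q[(next_state, a)] for a in env.actions)
            # Temporal-difference update toward the bootstrapped target
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```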
Procedia PDF Downloads 148
2460 The Tourist Satisfaction on Brand Identity Design of Creative Agriculture Community Enterprise, Bang Khonthi District, Samut Songkhram Province
Authors: Panupong Chanplin, Kathaleeya Chanda., Wilailuk Mepracha
Abstract:
The aims of this research were twofold: 1) to design the brand identity of the Creative Agriculture Community Enterprise, Bang Khonthi District, Samut Songkhram Province, and 2) to study the level of tourist satisfaction towards the brand identity design of the Creative Agriculture Community Enterprise, Bang Khonthi District, Samut Songkhram Province. Tourist satisfaction was measured using six criteria: clear brand positioning, likeable brand personality, memorable logo, attractive color palette, professional typography and on-brand supporting graphics. The researcher utilized a probability sampling method via simple random sampling. The sample consisted of 30 tourists at the Creative Agriculture Community Enterprise. Statistics utilized for data analysis were percentage, mean, and standard deviation. The results suggest that tourists had high levels of satisfaction towards all six criteria of the brand identity design that was designed to target them. This study proposes that the brand identity designed for the Creative Agriculture Community Enterprise could also be implemented with other real media already available on the market.
Keywords: satisfaction, brand identity, logo, creative agriculture community enterprise
Procedia PDF Downloads 242
2459 Designing an Integrated Platform for Real-Time Recommendations Sharing among the Aged and People Living with Cancer
Authors: Adekunle O. Afolabi, Pekka Toivanen
Abstract:
The world is expected to experience growth in its ageing population, and this will bring about a high cost of providing care for these valuable citizens. In addition, many of them live with chronic diseases that come with old age. Providing adequate care in the face of rising costs and dwindling personnel can be challenging. However, advances in technologies and the emergence of the Internet of Things are providing a way to address these challenges while improving caregiving. This study proposes the integration of recommendation systems into homecare to provide real-time recommendations for effective management of people receiving care at home and those living with chronic diseases. Using the simplified Training Logic Concept, stakeholders and requirements were identified. Specific requirements were gathered from people living with cancer. The solution designed has two components, home and community, to enhance recommendations sharing for effective caregiving. The community component of the design was implemented through the development of a mobile app called Recommendations Sharing Community for Aged and Chronically Ill People (ReSCAP). This component has illustrated the possibility of real-time recommendations and improved recommendations sharing among care receivers and between a physician and care receivers. Full implementation will increase access to health data for better care decision making.
Keywords: recommendation systems, Internet of Things, healthcare, homecare, real-time
Procedia PDF Downloads 154
2458 A Bacterial Foraging Optimization Algorithm Applied to the Synthesis of Polyacrylamide Hydrogels
Authors: Florin Leon, Silvia Curteanu
Abstract:
The Bacterial Foraging Optimization (BFO) algorithm is inspired by the behavior of bacteria such as Escherichia coli or Myxococcus xanthus when searching for food, more precisely their chemotaxis behavior. Bacteria perceive chemical gradients in the environment, such as nutrients, and also other individual bacteria, and move toward or away from those signals. The application example considered as a case study consists of establishing the dependency between the reaction yield of hydrogels based on polyacrylamide and the working conditions, such as time, temperature, monomer, initiator, crosslinking agent and inclusion polymer concentrations, as well as the type of polymer added. This process is modeled with a neural network, which is included in an optimization procedure based on BFO. An experimental study of the BFO parameters is performed. The results show that the algorithm is quite robust and can obtain good results for diverse combinations of parameter values.
Keywords: bacterial foraging, hydrogels, modeling and optimization, neural networks
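Illustrative sketch (not from the paper): the chemotaxis phase of BFO (tumble, then swim while the cost improves) in NumPy. Population size, step size and bounds are assumptions, and the reproduction and elimination-dispersal phases of the full algorithm are omitted; in the paper the cost would be derived from the neural network model of the reaction yield.

```python
import numpy as np

def bfo_chemotaxis(cost, dim, n_bacteria=20, n_steps=100, step_size=0.05,
                   swim_len=4, bounds=(0.0, 1.0)):
    """Minimal chemotaxis phase of Bacterial Foraging Optimization: each bacterium
    tumbles to a random direction and keeps swimming while the cost keeps improving."""
    lo, hi = bounds
    pop = np.random.uniform(lo, hi, size=(n_bacteria, dim))
    fitness = np.array([cost(p) for p in pop])
    for _ in range(n_steps):
        for i in range(n_bacteria):
            direction = np.random.uniform(-1, 1, dim)
            direction /= np.linalg.norm(direction)            # tumble
            for _ in range(swim_len):                         # swim while improving
                candidate = np.clip(pop[i] + step_size * direction, lo, hi)
                f = cost(candidate)
                if f < fitness[i]:
                    pop[i], fitness[i] = candidate, f
                else:
                    break
    best = np.argmin(fitness)
    return pop[best], fitness[best]

# Hypothetical usage: `cost` would be the negated reaction yield predicted by the
# trained neural network as a function of the normalized working conditions.
```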
Procedia PDF Downloads 153
2457 Distorted Document Images Dataset for Text Detection and Recognition
Authors: Ilia Zharikov, Philipp Nikitin, Ilia Vasiliev, Vladimir Dokholyan
Abstract:
With the increasing popularity of document analysis and recognition systems, text detection (TD) and optical character recognition (OCR) in document images have become challenging tasks. However, to the best of our knowledge, no publicly available datasets for these particular problems exist. In this paper, we introduce the Distorted Document Images dataset (DDI-100) and provide a detailed analysis of DDI-100 in its current state. To create the dataset we collected 7000 unique document pages and extended them by applying different types of distortions and geometric transformations. In total, DDI-100 contains more than 100,000 document images together with binary text masks and text and character locations in terms of bounding boxes. We also present an analysis of several state-of-the-art TD and OCR approaches on the presented dataset. Lastly, we demonstrate the usefulness of DDI-100 for improving the accuracy and stability of the considered TD and OCR models.
Keywords: document analysis, open dataset, optical character recognition, text detection
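Illustrative sketch (not from the paper): one example of the kind of geometric transformation used to extend the page set, a random perspective warp applied with OpenCV. The distortion ranges are assumptions; the actual DDI-100 pipeline applies many more distortion types and keeps masks and bounding boxes consistent with the warped image.

```python
import cv2
import numpy as np

def random_perspective(page, max_shift=0.08, seed=None):
    """Apply a random perspective warp to a document page image,
    one of the geometric transformations used to extend the dataset."""
    rng = np.random.default_rng(seed)
    h, w = page.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    jitter = rng.uniform(-max_shift, max_shift, size=(4, 2)) * [w, h]
    dst = (src + jitter).astype(np.float32)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(page, M, (w, h), borderValue=(255, 255, 255))

# The same homography M must also be applied to the binary text masks and to the
# character/word bounding boxes so that the ground truth stays aligned.
```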
Procedia PDF Downloads 173
2456 Robust Processing of Antenna Array Signals under Local Scattering Environments
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, the knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with massive antenna array sensors. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems due to local scattering situations. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vector of the signal sources is first created. Then projection matrices related to this matrix are generated and are utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms the existing robust techniques.
Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch
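Illustrative sketch (not from the paper): the standard GSC structure in NumPy, with a quiescent weight built from the presumed steering vector, a blocking matrix spanning its null space, and an LMS-adapted lower branch. The iterative steering-vector re-estimation and the signal subspace projection proposed in the paper are not reproduced here.

```python
import numpy as np

def steering_vector(theta_deg, n_sensors, d=0.5):
    """Uniform linear array steering vector (element spacing d in wavelengths)."""
    n = np.arange(n_sensors)
    return np.exp(-2j * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

def gsc_beamformer(snapshots, a, mu=1e-3):
    """Generalized sidelobe canceller with an LMS-adapted lower branch.
    snapshots: (n_sensors, n_snapshots) array data, a: presumed steering vector."""
    a = a[:, None]
    w_q = a / (a.conj().T @ a)                    # quiescent (upper-branch) weight
    _, _, Vh = np.linalg.svd(a.conj().T)
    B = Vh[1:, :].conj().T                        # blocking matrix: columns span null(a^H)
    w_a = np.zeros((B.shape[1], 1), dtype=complex)
    outputs = []
    for x in snapshots.T:
        x = x[:, None]
        d = (w_q.conj().T @ x).item()             # desired-branch output
        u = B.conj().T @ x                        # blocked (interference-only) data
        e = d - (w_a.conj().T @ u).item()         # beamformer output / error
        w_a = w_a + mu * u * np.conj(e)           # LMS update of the adaptive weights
        outputs.append(e)
    return np.array(outputs)

# a = steering_vector(theta_deg=10.0, n_sensors=8)
# y = gsc_beamformer(received_snapshots, a)
```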
Procedia PDF Downloads 112
2455 Morphology Operation and Discrete Wavelet Transform for Blood Vessels Segmentation in Retina Fundus
Authors: Rita Magdalena, N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Sofia Saidah, Bima Sakti
Abstract:
Vessel segmentation of the retinal fundus is important for the biomedical sciences in diagnosing ailments related to the eye. Segmentation can help medical experts diagnose the state of a retinal fundus image. Therefore, in this study, we designed software using MATLAB which enables the segmentation of the retinal blood vessels in retinal fundus images. There are two main steps in the segmentation process. The first step is image preprocessing, which aims to improve the quality of the image so that it is optimally segmented. The second step is image segmentation, in which the extraction process retrieves the retina's blood vessels from the eye fundus image. The image segmentation methods analyzed in this study are Morphology Operation, Discrete Wavelet Transform and a combination of both. The dataset used in this project consists of 40 retinal images and 40 manually segmented images. After several testing scenarios, the average accuracy for the Morphology Operation method is 88.46%, while for the Discrete Wavelet Transform it is 89.28%. By combining the two methods, the average accuracy increased to 89.53%. The result of this study is an image processing system that can segment the blood vessels in the retinal fundus with high accuracy and low computation time.
Keywords: discrete wavelet transform, fundus retina, morphology operation, segmentation, vessel
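Illustrative sketch (not from the paper, which uses MATLAB): the morphology-operation branch expressed with OpenCV in Python, using the green channel, CLAHE contrast enhancement, a black-hat transform and Otsu thresholding, plus a pixel-wise accuracy check against the manual segmentation. Kernel sizes and parameters are assumptions, and the Discrete Wavelet Transform branch is not shown.

```python
import cv2
import numpy as np

def segment_vessels_morphology(fundus_bgr):
    """Rough retinal-vessel segmentation using only morphology operations:
    green channel -> contrast enhancement -> black-hat -> Otsu threshold."""
    green = fundus_bgr[:, :, 1]                               # vessels contrast best in green
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    blackhat = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)  # dark, thin structures
    blackhat = cv2.medianBlur(blackhat, 3)
    _, mask = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def accuracy(pred_mask, gt_mask):
    """Pixel-wise accuracy against the manually segmented ground truth."""
    return np.mean((pred_mask > 0) == (gt_mask > 0))

# fundus = cv2.imread('fundus_01.png')
# gt = cv2.imread('manual_01.png', cv2.IMREAD_GRAYSCALE)
# print(accuracy(segment_vessels_morphology(fundus), gt))
```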
Procedia PDF Downloads 195
2454 Principle Component Analysis on Colon Cancer Detection
Authors: N. K. Caecar Pratiwi, Yunendah Nur Fuadah, Rita Magdalena, R. D. Atmaja, Sofia Saidah, Ocky Tiaramukti
Abstract:
Colon cancer, or colorectal cancer, is a type of cancer that attacks the last part of the human digestive system. Lymphoma and carcinoma are types of cancer that attack the human colon. Colon cancer causes the deaths of about half a million people every year. In Indonesia, colon cancer is the third most common cancer in women and the second in men. Unhealthy lifestyles, such as low fiber consumption, lack of exercise and lack of awareness of early detection, are factors behind the high number of colon cancer cases. The aim of this project is to produce a system that can detect and classify images into the colon cancer types lymphoma and carcinoma, or normal. The designed system uses 198 colon cancer tissue pathology images, consisting of 66 images of lymphoma, 66 images of carcinoma and 66 of a normal/healthy colon condition. The system classifies colon cancer starting from image preprocessing, feature extraction using Principal Component Analysis (PCA) and classification using the K-Nearest Neighbor (K-NN) method. The preprocessing stages are resizing, RGB-to-grayscale conversion, edge detection and, finally, histogram equalization. Tests were done by trying several K-NN input parameter settings. The result of this project is an image processing system that can detect and classify the type of colon cancer with high accuracy and low computation time.
Keywords: carcinoma, colorectal cancer, k-nearest neighbor, lymphoma, principle component analysis
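Illustrative sketch (not from the paper): a PCA plus K-NN pipeline with scikit-learn over pre-processed, flattened tissue images. The number of principal components, the value of k and the train/test split are assumptions, not the paper's settings.

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

# X: (198, n_pixels) pre-processed, flattened tissue images
# y: labels in {"lymphoma", "carcinoma", "normal"} (66 samples each)
def train_colon_classifier(X, y, n_components=40, k=5):
    """PCA feature extraction followed by k-NN classification."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    model = Pipeline([
        ("pca", PCA(n_components=n_components)),     # project onto the principal components
        ("knn", KNeighborsClassifier(n_neighbors=k)),
    ])
    model.fit(X_tr, y_tr)
    return model, model.score(X_te, y_te)            # model and held-out accuracy
```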
Procedia PDF Downloads 205
2453 Foot Recognition Using Deep Learning for Knee Rehabilitation
Authors: Rakkrit Duangsoithong, Jermphiphut Jaruenpunyasak, Alba Garcia
Abstract:
Foot recognition can be applied in many medical fields, such as gait pattern analysis and the knee exercises of patients in rehabilitation. Generally, a camera-based foot recognition system is intended to capture a patient image in a controlled room and background to recognize the foot in limited views. However, such a system can be inconvenient for monitoring knee exercises at home. In order to overcome these problems, this paper proposes to use a deep learning method based on Convolutional Neural Networks (CNNs) for foot recognition. The results are compared with a traditional classification method using LBP and HOG features with kNN and SVM classifiers. According to the results, the deep learning method provides better accuracy, but with higher complexity, in recognizing foot images from online databases than the traditional classification method.
Keywords: foot recognition, deep learning, knee rehabilitation, convolutional neural network
Procedia PDF Downloads 161
2452 Dido: An Automatic Code Generation and Optimization Framework for Stencil Computations on Distributed Memory Architectures
Authors: Mariem Saied, Jens Gustedt, Gilles Muller
Abstract:
We present Dido, a source-to-source auto-generation and optimization framework for multi-dimensional stencil computations. It enables a large programmer community to easily and safely implement stencil codes on distributed-memory parallel architectures with Ordered Read-Write Locks (ORWL) as an execution and communication back-end. ORWL provides inter-task synchronization for data-oriented parallel and distributed computations. It has been proven to guarantee equity, liveness, and efficiency for a wide range of applications, particularly for iterative computations. Dido consists mainly of an implicitly parallel domain-specific language (DSL) implemented as a source-level transformer. It captures domain semantics at a high level of abstraction and generates parallel stencil code that leverages all ORWL features. The generated code is well-structured and lends itself to different possible optimizations. In this paper, we enhance Dido to handle both Jacobi and Gauss-Seidel grid traversals. We integrate temporal blocking into the Dido code generator in order to reduce the communication overhead and minimize data transfers. To increase data locality and improve intra-node data reuse, we couple the code generation technique with the polyhedral parallelizer Pluto. The accuracy and portability of the generated code are guaranteed thanks to a parametrized solution. The combination of ORWL features, the code generation pattern and the suggested optimizations makes Dido a powerful code generation framework for stencil computations in general, and for distributed-memory architectures in particular. We present a wide range of experiments over a number of stencil benchmarks.
Keywords: stencil computations, ordered read-write locks, domain-specific language, polyhedral model, experiments
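Illustrative sketch (not from the paper): the serial kernel of a 2-D 5-point Jacobi stencil in NumPy, i.e. the kind of computation Dido generates and distributes. Block decomposition, ORWL-based halo exchange and temporal blocking are what the generated code adds and are not shown here.

```python
import numpy as np

def jacobi_step(grid):
    """One Jacobi sweep of a 2-D 5-point stencil (average of the four neighbours).
    A distributed version would split `grid` into per-task blocks and exchange
    halo regions through ORWL locks; this serial version only shows the kernel."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

grid = np.zeros((256, 256))
grid[0, :] = 100.0                      # an assumed fixed boundary condition
for _ in range(500):                    # time loop (the candidate for temporal blocking)
    grid = jacobi_step(grid)
```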
Procedia PDF Downloads 127
2451 Blockchain Security in MANETs
Authors: Nada Mouchfiq, Ahmed Habbani, Chaimae Benjbara
Abstract:
The security aspect of the IoT is of great importance, especially given the recent evolution of this field, because it must take into account new transformations and applications. Blockchain is a new technology dedicated to data sharing. However, it does not work the same way in different systems with different operating principles. This article discusses network security using blockchain to facilitate the sending of messages and information, enabling the use of new processes and autonomous coordination of devices. To do this, we review solutions proposed in the work of other researchers to ensure a high level of security in these networks. Finally, our article proposes a security method better adapted to our needs as a team working on ad hoc networks; this method is based on the principle of blockchain, and we have named it "MPR Blockchain".
Keywords: ad hoc networks, blockchain, MPR, security
Procedia PDF Downloads 185
2450 Improved Feature Extraction Technique for Handling Occlusion in Automatic Facial Expression Recognition
Authors: Khadijat T. Bamigbade, Olufade F. W. Onifade
Abstract:
The field of automatic facial expression analysis has been an active research area in the last two decades. Its vast applicability in various domains has drawn much attention to developing techniques and datasets that mirror real-life scenarios. Many techniques, such as Local Binary Patterns and their variants (CLBP, LBP-TOP) and, lately, deep learning techniques, have been used for facial expression recognition. However, the problem of occlusion has not been sufficiently handled, making their results not applicable to real-life situations. This paper develops a simple yet highly efficient method, tagged Local Binary Pattern-Histogram of Gradients (LBP-HOG), with occlusion detection in face images, using a multi-class SVM for Action Unit and, in turn, expression recognition. Our method was evaluated on three publicly available datasets: JAFFE, CK, and SFEW. Experimental results showed that our approach performed considerably well when compared with state-of-the-art algorithms and gave insight into occlusion detection as a key step to handling expressions in the wild.
Keywords: automatic facial expression analysis, local binary pattern, LBP-HOG, occlusion detection
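Illustrative sketch (not from the paper): concatenating a uniform-LBP histogram with a HOG descriptor and training a multi-class linear SVM with scikit-image and scikit-learn. The LBP and HOG parameters are assumptions, and the occlusion detection step of the proposed method is not reproduced.

```python
import numpy as np
from skimage.feature import local_binary_pattern, hog
from sklearn.svm import SVC

def lbp_hog_features(gray, P=8, R=1.0):
    """Concatenate a uniform-LBP histogram with a HOG descriptor for one face image."""
    lbp = local_binary_pattern(gray, P, R, method='uniform')
    lbp_hist, _ = np.histogram(lbp, bins=int(lbp.max()) + 1, density=True)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    return np.concatenate([lbp_hist, hog_vec])

# faces: list of aligned grayscale face crops of equal size; labels: AU / expression labels
def train_expression_svm(faces, labels):
    """Train a one-vs-rest linear SVM on the combined LBP-HOG features."""
    X = np.stack([lbp_hog_features(f) for f in faces])
    clf = SVC(kernel='linear', decision_function_shape='ovr')
    return clf.fit(X, labels)
```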
Procedia PDF Downloads 169
2449 Parallel Querying of Distributed Ontologies with Shared Vocabulary
Authors: Sharjeel Aslam, Vassil Vassilev, Karim Ouazzane
Abstract:
Ontologies and various semantic repositories have become a convenient approach for implementing model-driven architectures of distributed systems on the Web, and SPARQL is the standard language for querying them. However, although SPARQL is a well-established standard for querying semantic repositories in RDF and OWL format, and there are commonly used APIs which support it, such as Jena for Java, a parallel option is not incorporated in them. This article presents a complete framework consisting of an object algebra for parallel RDF and an index-based implementation of the parallel query engine capable of dealing with distributed RDF ontologies which share a common vocabulary. It has been implemented in Java, and for validation of the algorithms it has been applied to the problem of organizing virtual exhibitions on the Web.
Keywords: distributed ontologies, parallel querying, semantic indexing, shared vocabulary, SPARQL
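Illustrative sketch (not from the paper, which is implemented in Java on top of an object algebra): fanning the same shared-vocabulary SPARQL query out over several RDF sources with rdflib and a thread pool, then merging the rows. The file names and the query itself are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
from rdflib import Graph

QUERY = """
PREFIX dc: <http://purl.org/dc/elements/1.1/>
SELECT ?work ?title WHERE { ?work dc:title ?title . }
"""

def query_one(source):
    """Load one RDF ontology fragment and run the shared-vocabulary query on it."""
    g = Graph()
    g.parse(source)                       # format inferred from the file extension
    return [(str(r.work), str(r.title)) for r in g.query(QUERY)]

def parallel_query(sources, max_workers=4):
    """Fan the same SPARQL query out over distributed ontologies and merge the rows."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(query_one, sources)
    return [row for part in results for row in part]

# rows = parallel_query(["exhibit_a.ttl", "exhibit_b.ttl", "exhibit_c.ttl"])
```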
Procedia PDF Downloads 204