Search results for: computer aided detection
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5569

3859 Quick Reference: Cyber Attacks Awareness and Prevention Method for Home Users

Authors: Haydar Teymourlouei

Abstract:

It is important to take security measures to protect your computer information, reduce identity theft, and guard against malicious cyber-attacks. With cyber-attacks continuously on the rise, people need to understand and learn ways to prevent them. Understanding cyber-attacks is an important factor in being able to protect oneself from malicious attacks. Without proper security measures, most computer technology would hinder home users more than it would help them. Knowledge of how cyber-attacks operate, and of the protective steps that can be taken to reduce the chances of their occurrence, is key to increasing these security measures. The purpose of this paper is to inform home users of the importance of identifying and taking preventive steps to avoid cyber-attacks. Throughout this paper, many aspects of cyber-attacks are discussed: what a cyber-attack is, the effects of cyber-attacks on home users, the different types of cyber-attacks, and the preventive methods and steps home users can take to fortify the security of their computers.

Keywords: cyber-attacks, home user, prevention, security, technology

Procedia PDF Downloads 373
3858 A Comparative Study of the Challenges of E-Learning in Nigerian Universities

Authors: J. N. Anene, A. A. Bello, C. C. Anene

Abstract:

The paper carried out a comparative study of the challenges of e-learning in Nigerian universities. The purpose of the study was to determine whether there was a significant difference in the challenges faced by students in e-learning in Nigerian universities. A total of two hundred and twenty-eight students from nine universities constituted the sample for the study. A simple random sampling technique was employed in selecting thirty-two students from each of the universities, drawn from the six geo-political zones of Nigeria. A questionnaire based on 'yes or no' responses constituted the instrument employed in the study; percentages were used to analyse the 'yes or no' responses, while column charts were used to compare the students' responses. The findings of the study revealed that the majority of students in all the universities under study claimed that their universities lacked appropriate software, that good-quality educational content online was lacking, that the sustainability of e-learning was not prioritized, that they had no access to appropriate content for ICT-enhanced learning and training, and that they lacked access to affordable and reliable computers. For lecturers, computer certification should be first on the list of promotion requirements. Specifically, students from seven out of the nine universities confirmed that their universities lacked appropriate software, whereas the other two claimed that they had appropriate software. Also, out of the nine universities, two disagreed that good-quality educational content online was lacking, whereas seven agreed that it was. The findings also revealed that most of the respondents in almost all the universities under study agreed that the sustainability of e-learning was not prioritized. The study recommended, among other things, that the Nigerian Government should make a concerted effort to enable all lecturers and students to become computer literate. This should be done within a time frame; at the end of the computer course, certificates should be issued, and no student should graduate in his or her field of study without passing the computer course.

Keywords: e-learning, developing countries, computer literacy, ICT

Procedia PDF Downloads 316
3857 Proposal Method of Prediction of the Early Stages of Dementia Using IoT and Magnet Sensors

Authors: João Filipe Papel, Tatsuji Munaka

Abstract:

With society aging and the number of elderly people with dementia rising, researchers have been actively studying how to support the elderly in the early stages of dementia, with the objective of allowing them a better quality of life and as much independence as possible. To make this possible, most researchers in this field use the Internet of Things (IoT) to monitor the activities of the elderly and assist them in performing them. The sensor most commonly used to monitor these activities is the camera, due to its easy installation and configuration; the other commonly used sensor is the sound sensor. However, privacy must be considered when using these sensors. This research aims to develop a system capable of predicting the early stages of dementia based on monitoring and controlling the elderly's activities of daily living. To make this system possible, some issues need to be addressed. The first is elderly privacy when detecting and monitoring their activities of daily living, which is a serious concern. One of the purposes of this research is to achieve this detection and monitoring without putting the privacy of the elderly at risk; to make this possible, the study focuses on an approach based on magnet sensors that collect only binary data. The second issue is how to use the data collected by monitoring activities of daily living to predict the early stages of dementia. To this end, the research team suggests developing a proprietary ontology combined with both data-driven and knowledge-driven approaches.

Keywords: dementia, activity recognition, magnet sensors, ontology, data-driven and knowledge-driven, IoT, activities of daily living

Procedia PDF Downloads 80
3856 Transformational Justice for Employees' Job Satisfaction

Authors: Hassan Barau Singhry

Abstract:

Purpose: Leadership, or the absence of it, is an important behaviour affecting employees’ job satisfaction. Although there are many models of leadership, one that stands out in a period of change is transformational leadership. The aim of this study is to investigate the mediating role of organizational justice in the relationship between transformational leadership and employee job satisfaction. The study is based on the assumption that change begins with leaders and that leaders should be fair and just. Methodology: A cross-sectional survey using a structured questionnaire was employed to collect the data for this study. The population was drawn from the three tiers of government in Nigeria: local, state, and federal. The sampling method used in this research was stratified random sampling. 418 middle managers of public organizations responded to the questionnaire. Multiple regression, aided by structural equation modeling, was employed to test four hypothesized relationships. Findings: The regression results support the mediating role of organizational justice, in its distributive, procedural, interpersonal, and informational dimensions, in the link between transformational leadership and job satisfaction. Originality/value: This study adds to the human resource management literature by empirically validating and integrating transformational leadership behaviour with the four dimensions of organizational justice theory. The study is expected to be beneficial to top- and middle-level administrators as well as to theory building and testing.
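
As an illustration of the mediation logic tested here, the following Python sketch runs the classic two-regression check with a Sobel test on synthetic data. It is not the authors' SEM analysis; the variable names, effect sizes, and data are illustrative assumptions.

```python
# Minimal sketch of the mediation question: does organizational justice (M)
# mediate the effect of transformational leadership (X) on job satisfaction (Y)?
# Synthetic data and coefficients are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 418                                      # sample size reported in the abstract
X = rng.normal(size=n)                       # transformational leadership
M = 0.6 * X + rng.normal(size=n)             # organizational justice (mediator)
Y = 0.5 * M + 0.2 * X + rng.normal(size=n)   # job satisfaction

# Path a: X -> M
m_model = sm.OLS(M, sm.add_constant(X)).fit()
a, se_a = m_model.params[1], m_model.bse[1]

# Paths b and c': X + M -> Y
XM = sm.add_constant(np.column_stack([X, M]))
y_model = sm.OLS(Y, XM).fit()
b, se_b = y_model.params[2], y_model.bse[2]

# Sobel test for the indirect (mediated) effect a*b
sobel_z = (a * b) / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
print(f"indirect effect a*b = {a*b:.3f}, Sobel z = {sobel_z:.2f}")
```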

Keywords: distributive justice, job satisfaction, organizational justice, procedural justice, transformational leadership

Procedia PDF Downloads 150
3855 The Effectiveness of Gamified Learning on Student Learning in Computer Science Education: A Systematic Review (2010-2018)

Authors: Shurui Bai, Biyun Huang, Khe Foon Hew

Abstract:

Gamification is defined as the use of game design elements in non-game contexts. The primary purpose of using gamification in an educational context is to engage students in school activities such that their likelihood of completing them is increased. But how effective is gamification actually at improving student learning? In order to answer this question, this paper provides a systematic review of prior research studies on gamification in K-12 and university contexts, limited to the computer science discipline. Unlike other published gamification reviews, we specifically analyzed comparison-based studies using quasi-experimental, historical-control, and randomized designs, rather than studies with merely anecdotal or phenomenological results. The main purpose of this is to discuss the possible causal effects of gamified practices on student performance, behavior change, and perceptual skills, following an integrative model. Implications for practice are discussed, along with several suggestions for future research.

Keywords: computer science, gamification, learning performance, systematic review

Procedia PDF Downloads 115
3854 Diagnosis of Induction Machine Faults by DWT

Authors: Hamidreza Akbari

Abstract:

In this paper, for the detection of inclined eccentricity in an induction motor, a time-frequency analysis of the stator startup current is carried out. For this purpose, the discrete wavelet transform (DWT) is used. Data are obtained from simulations using the winding function approach. The results show the validity of the approach for detecting the fault and discriminating it from other faults.
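
As a rough illustration of the approach, the sketch below decomposes a simulated startup current with the discrete wavelet transform and reports the energy per detail band. The signal model, wavelet ('db8'), and decomposition depth are assumptions, not the authors' exact parameters.

```python
# Illustrative DWT analysis of a stator startup current (assumed setup).
import numpy as np
import pywt

fs = 5000                                     # sampling frequency [Hz]
t = np.arange(0, 2.0, 1 / fs)                 # 2 s startup window
# 50 Hz supply component plus a decaying low-frequency term standing in
# for a fault-related signature during startup
current = np.sin(2 * np.pi * 50 * t) + 0.3 * np.exp(-2 * t) * np.sin(2 * np.pi * 8 * t)

coeffs = pywt.wavedec(current, 'db8', level=6)   # [cA6, cD6, cD5, ..., cD1]
for i, d in enumerate(coeffs[1:]):               # detail coefficients only
    level = 6 - i                                # cD6 first, cD1 last
    band = (fs / 2 ** (level + 1), fs / 2 ** level)
    print(f"D{level}: ~{band[0]:.0f}-{band[1]:.0f} Hz, energy = {np.sum(d ** 2):.2f}")
```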

Keywords: induction machine, fault, DWT, electric

Procedia PDF Downloads 335
3853 An Improved Data Aided Channel Estimation Technique Using Genetic Algorithm for Massive Multi-Input Multiple-Output

Authors: M. Kislu Noman, Syed Mohammed Shamsul Islam, Shahriar Hassan, Raihana Pervin

Abstract:

With the increasing number of wireless devices and high-bandwidth operations, wireless networking and communications are becoming overcrowded. To cope with such congestion, massive MIMO is designed to work with hundreds of low-cost serving antennas at a time while also improving spectral efficiency. TDD is used to obtain the beamforming gain, a major part of massive MIMO, and to transmit and receive pilot sequences. All these benefits are only possible if the channel state information, i.e., the channel estimate, is obtained properly. The common methods used so far to estimate the channel matrix are LS, MMSE, and a linear version of MMSE (LMMSE), also proposed in many research works. We have optimized these methods using a genetic algorithm (GA) to minimize the mean squared error (MSE) and find the best channel matrix among the existing algorithms with less computational complexity. Our simulation results show that the GA works well with the existing algorithms in a Rayleigh slow-fading channel in the presence of additive white Gaussian noise. We found that the GA-optimized LS is better than the existing algorithms, as the GA provides a near-optimal result in a few iterations in terms of MSE with respect to SNR and computational complexity.
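
The sketch below illustrates the core idea on a toy scale: compute an LS channel estimate from noisy pilots, then let a simple genetic algorithm refine it by minimizing the pilot residual. The dimensions, GA operators, and hyperparameters are assumptions, not the authors' settings.

```python
# Toy sketch: GA refinement of an LS channel estimate (assumed parameters).
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt, P = 8, 4, 16                       # rx antennas, tx antennas, pilot length
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)  # Rayleigh
X = (rng.normal(size=(Nt, P)) + 1j * rng.normal(size=(Nt, P))) / np.sqrt(2)    # pilots
noise = 0.1 * (rng.normal(size=(Nr, P)) + 1j * rng.normal(size=(Nr, P)))
Y = H @ X + noise                          # received pilots under AWGN

H_ls = Y @ X.conj().T @ np.linalg.inv(X @ X.conj().T)   # LS estimate

def fitness(Hc):
    return -np.linalg.norm(Y - Hc @ X) ** 2             # lower residual = fitter

# Seed the population around the LS solution, then evolve it
pop = [H_ls + 0.05 * (rng.normal(size=H_ls.shape) + 1j * rng.normal(size=H_ls.shape))
       for _ in range(40)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                   # truncation selection
    children = []
    while len(children) < 30:
        a, b = rng.choice(10, size=2, replace=False)
        w = rng.uniform()
        child = w * parents[a] + (1 - w) * parents[b]    # blend crossover
        child += 0.01 * (rng.normal(size=child.shape) + 1j * rng.normal(size=child.shape))
        children.append(child)                           # Gaussian mutation
    pop = parents + children

best = max(pop, key=fitness)
print("LS MSE :", np.mean(np.abs(H - H_ls) ** 2))
print("GA MSE :", np.mean(np.abs(H - best) ** 2))
```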

Keywords: channel estimation, LMMSE, LS, MIMO, MMSE

Procedia PDF Downloads 174
3852 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene

Authors: Jigg Pelayo, Ricardo Villar

Abstract:

Following the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one application, characterized as fast and real-time. This paper presents an application of a robust pattern-matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, improving the hierarchical class feature pattern and allowing unnecessary calculations to be skipped. Since detection is executed on an object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively counter the influence of noise, small distortions, and fluctuating image saturation, which affect the recognition rate of features. Furthermore, the scheme is evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
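
A minimal NumPy sketch of the NCC criterion at the heart of the matcher is given below; the image and template are synthetic placeholders, and the exhaustive sliding-window search stands in for the hierarchy-accelerated search described in the abstract.

```python
# Normalized cross-correlation template matching on synthetic data.
import numpy as np

def ncc(window, template):
    """Normalized cross-correlation between an image window and a template."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w ** 2).sum() * (t ** 2).sum())
    return (w * t).sum() / denom if denom > 0 else 0.0

def match_template(image, template):
    """Slide the template over the image and return the NCC score map."""
    th, tw = template.shape
    H, W = image.shape
    scores = np.zeros((H - th + 1, W - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            scores[y, x] = ncc(image[y:y + th, x:x + tw], template)
    return scores

rng = np.random.default_rng(0)
image = rng.random((60, 60))
template = image[20:30, 35:45].copy()        # plant the target at (20, 35)
scores = match_template(image, template)
print("best match at", np.unravel_index(scores.argmax(), scores.shape))  # (20, 35)
```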

Keywords: algorithm, LiDAR, object recognition, OBIA

Procedia PDF Downloads 228
3851 Investigating Dynamic Transition Process of Issues Using Unstructured Text Analysis

Authors: Myungsu Lim, William Xiu Shun Wong, Yoonjin Hyun, Chen Liu, Seongi Choi, Dasom Kim, Namgyu Kim

Abstract:

The amount of real-time data generated through various mass media has been increasing rapidly. In this study, we performed topic analysis using unstructured text data distributed through news articles. As one of the most prevalent applications of topic analysis, the issue tracking technique investigates changes in the social issues identified through topic analysis. Traditional issue tracking is conducted by identifying the main topics of documents covering an entire period at once and analyzing the occurrence of each topic by period. However, this traditional approach has the limitation that it cannot discover the dynamic mutation process of complex social issues. The purpose of this study is to overcome this limitation of the existing issue tracking method. We first derive the core issues of each period and then discover the dynamic mutation process of the various issues. We further analyze the mutation process from the perspective of issue categories, in order to figure out the pattern of issue flow, including the frequency and reliability of the pattern. In other words, this study allows us to understand the components of complex issues by tracking their dynamic history. This methodology can facilitate a clearer understanding of complex social phenomena by providing the mutation history and related category information of the phenomena.
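
As a toy illustration of period-wise topic extraction, the sketch below fits a small LDA model per period with scikit-learn; the corpus, periods, and parameters are assumptions, and sklearn's LatentDirichletAllocation stands in for whatever topic model the authors used.

```python
# Per-period topic extraction on a toy news corpus (assumed data and model).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

periods = {
    "2015-Q1": ["stock market crash fears", "market regulators probe banks",
                "flu outbreak hits schools"],
    "2015-Q2": ["banks fined over market probe", "schools reopen after flu",
                "election campaign begins"],
}

for period, docs in periods.items():
    vec = CountVectorizer(stop_words="english")
    dtm = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
    vocab = vec.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [vocab[i] for i in topic.argsort()[-3:][::-1]]  # top words per topic
        print(f"{period} topic {k}: {top}")
```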

Keywords: data mining, issue tracking, text mining, topic analysis, topic detection, trend detection

Procedia PDF Downloads 386
3850 Flexible 3D Virtual Desktop Using Handles for Cloud Environments

Authors: J. K. Lee, S. L. Lee

Abstract:

Due to improvements in the performance of computer hardware and the development of operating systems, multi-tasking across several programs has become one of the basic functions for computer users. It is natural for computer users to want more functional, convenient, and visual GUI (graphical user interface) functions. In this paper, a 3D virtual desktop system is proposed to meet users’ requirements for cloud environments, such as a virtual desktop function in the Windows environment. The proposed system uses the handles of the windows to hide or restore several windows. It connects the lists of task spaces using circular doubly linked lists to manage the handles, and each handle list is registered in the corresponding task space being executed. The 3D virtual desktop is efficient and flexible in handling numbers of task spaces and can help users to work in more comfortable environments. Acknowledgment: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korea government (MSIP) (NRF-2015R1D1A1A01057680).
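
The sketch below illustrates the handle-management idea with a toy circular doubly linked list per task space; the Handle fields and operations are illustrative assumptions, not the system's actual data structures.

```python
# Toy circular doubly linked list of window handles per task space.
class Handle:
    def __init__(self, window_id):
        self.window_id = window_id
        self.prev = self
        self.next = self        # a lone node points to itself (circular)

class TaskSpace:
    """Each task space keeps its registered window handles in a circular
    doubly linked list, so hide/restore can walk the ring in either direction."""
    def __init__(self, name):
        self.name = name
        self.head = None

    def register(self, window_id):
        node = Handle(window_id)
        if self.head is None:
            self.head = node
        else:
            tail = self.head.prev       # splice the new node in before head
            tail.next, node.prev = node, tail
            node.next, self.head.prev = self.head, node
        return node

    def cycle(self, steps=1):
        """Rotate the ring, e.g. to bring the next window to the foreground."""
        for _ in range(steps):
            self.head = self.head.next
        return self.head.window_id

space = TaskSpace("workspace-1")
for wid in ("editor", "browser", "terminal"):
    space.register(wid)
print(space.cycle())   # browser
print(space.cycle())   # terminal
print(space.cycle())   # editor (wrapped around)
```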

Keywords: virtual desktop, GUI, cloud, virtualization

Procedia PDF Downloads 194
3849 A Sensitive Approach on Trace Analysis of Methylparaben in Wastewater and Cosmetic Products Using Molecularly Imprinted Polymer

Authors: Soukaina Motia, Nadia El Alami El Hassani, Alassane Diouf, Benachir Bouchikhi, Nezha El Bari

Abstract:

Parabens are antimicrobial molecules widely used in cosmetic products as preservative agents. Among them, methylparaben (MP) is the ingredient most frequently used in cosmetic preparations. Nevertheless, their potential dangers have led to the development of sensitive and reliable methods for their determination in environmental samples. Firstly, a sensitive and selective molecularly imprinted polymer (MIP) based on a screen-printed gold electrode (Au-SPE), assembled on a polymeric layer of carboxylated poly(vinyl chloride) (PVC-COOH), was developed. After template removal, the obtained material was able to rebind MP and discriminate it among other interfering species such as glucose, sucrose, and citric acid. The behavior of the molecularly imprinted sensor was characterized by cyclic voltammetry (CV), differential pulse voltammetry (DPV), and electrochemical impedance spectroscopy (EIS). The biosensor was found to have a linear detection range from 0.1 pg.mL-1 to 1 ng.mL-1 and low limits of detection of 0.12 fg.mL-1 and 5.18 pg.mL-1 by DPV and EIS, respectively. For applications, this biosensor was employed to determine the MP content in four wastewater samples from Meknes city and in two cosmetic products (a shower gel and a shampoo). The operational reproducibility and stability of this biosensor were also studied. Secondly, another MIP biosensor, based on tungsten trioxide (WO3) functionalized by gold nanoparticles (Au-NPs) assembled on a polymeric layer of PVC-COOH, was developed with the main goal of increasing the sensitivity of the biosensor. The developed MIP biosensor was successfully applied to MP determination in wastewater samples and cosmetic products.

Keywords: cosmetic products, methylparaben, molecularly imprinted polymer, wastewater

Procedia PDF Downloads 299
3848 Two Years Retrospective Study of Body Fluid Cultures Obtained from Patients in the Intensive Care Unit of General Hospital of Ioannina

Authors: N. Varsamis, M. Gerasimou, P. Christodoulou, S. Mantzoukis, G. Kolliopoulou, N. Zotos

Abstract:

Purpose: Body fluids (pleural, peritoneal, synovial, pericardial, cerebrospinal) are an important element in the detection of microorganisms. For this reason, it is important to examine them in Intensive Care Unit (ICU) patients. Material and Method: Body fluids are transported in sterile containers and enriched as soon as possible with Tryptic Soy Broth (TSB). After one day of incubation, the broth is poured onto selective media: Blood, MacConkey No. 2, Chocolate, Mueller-Hinton, Chapman, and Sabouraud agar. These selective media are incubated for 2 days. After this period, if microbial colonies are detected, Gram staining is performed, and the isolated organisms are identified by biochemical techniques in the automated MicroScan system (Siemens), followed by a sensitivity test on the same system using the minimum inhibitory concentration (MIC) technique. The sensitivity test is verified by a Kirby-Bauer-based plate test. Results: In 2017, the Laboratory of Microbiology received 60 samples of body fluids from the ICU: 6 peritoneal fluid specimens, 18 pleural fluid specimens, and 36 cerebrospinal fluid specimens. 36 positive cultures were identified: S. epidermidis in 18 specimens, S. haemolyticus in 6, and E. faecium in 12. Conclusions: The results show a low detection rate of microorganisms in body fluid cultures.

Keywords: body fluids, culture, intensive care unit, microorganisms

Procedia PDF Downloads 186
3847 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field has created a need for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, 'CombinedScore', as the evaluation criterion. This proposed score was created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through an equality-of-opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
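
The sketch below reconstructs the scoring idea from the abstract alone: a normalized entropy score on unlabeled OOD predictions is combined with labeled-source information via the harmonic mean. The exact definition of 'CombinedScore' belongs to the authors; this is an assumed illustration.

```python
# Assumed reconstruction of an entropy-plus-source score combined by harmonic mean.
import numpy as np

def predictive_entropy(probs):
    """Mean entropy of predicted class probabilities on (unlabeled) OOD data."""
    eps = 1e-12
    return float(np.mean(-np.sum(probs * np.log(probs + eps), axis=1)))

def combined_score(ood_probs, source_accuracy):
    # Map entropy to a [0, 1] "confidence" score (low entropy -> high score)
    max_entropy = np.log(ood_probs.shape[1])
    entropy_score = 1.0 - predictive_entropy(ood_probs) / max_entropy
    # Harmonic mean penalizes models that are good on only one criterion
    return 2 * entropy_score * source_accuracy / (entropy_score + source_accuracy)

rng = np.random.default_rng(0)
# Two candidate models: confident vs. diffuse predictions on OOD data
confident = rng.dirichlet(alpha=[0.2] * 5, size=1000)
diffuse = rng.dirichlet(alpha=[5.0] * 5, size=1000)
print("model A:", combined_score(confident, source_accuracy=0.90))
print("model B:", combined_score(diffuse, source_accuracy=0.92))
```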

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 109
3846 Visual Inspection of Road Conditions Using Deep Convolutional Neural Networks

Authors: Christos Theoharatos, Dimitris Tsourounis, Spiros Oikonomou, Andreas Makedonas

Abstract:

This paper focuses on the problem of visually inspecting and recognizing the road conditions in front of moving vehicles, targeting automotive scenarios. The goal of road inspection is to identify whether the road is slippery or not, as well as to detect possible anomalies on the road surface, like potholes or body bumps/humps. Our work is based on an artificial intelligence methodology for real-time monitoring of road conditions in autonomous driving scenarios, using state-of-the-art deep convolutional neural network (CNN) techniques. Initially, the road and ego lane are segmented within the field of view of a camera integrated into the front part of the vehicle. A novel classification CNN is utilized to distinguish between plain and slippery road textures (e.g., wet, snow, etc.). Simultaneously, a robust detection CNN identifies severe surface anomalies within the ego lane, such as potholes and speed bumps/humps, within a distance of 5 to 25 meters. The overall methodology is illustrated under the scope of an integrated application (or system) that can be integrated into complete Advanced Driver-Assistance Systems (ADAS) providing a full range of functionalities. The proposed techniques achieve state-of-the-art detection and classification results and real-time performance, running on AI accelerator devices like Intel’s Myriad 2/X Vision Processing Unit (VPU).

Keywords: deep learning, convolutional neural networks, road condition classification, embedded systems

Procedia PDF Downloads 113
3845 Phenomena-Based Approach for Automated Generation of Process Options and Process Models

Authors: Parminder Kaur Heer, Alexei Lapkin

Abstract:

Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may be able to attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all the intensification options can be described by their enhancement. The objective of the current work is thus the generation of numerous process alternatives based on phenomena and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. For example, separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or can enhance its effectiveness, are added to the list. For instance, a catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the meaningless ones; for example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e., it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process. This creates a series of options for carrying out each function, and combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, each formed by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example product purity, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the higher product yield. The current methodology can identify, produce, and evaluate process intensification options, from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
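
A toy sketch of the generate-and-screen step follows: enumerate phenomena combinations, discard infeasible ones with a screening rule, and encode the survivors as binaries for the model-generation stage. The phenomena list and the single rule shown are illustrative assumptions.

```python
# Generate-and-screen over phenomena combinations, then binary-encode survivors.
from itertools import combinations

phenomena = ["reaction", "vapour-liquid equilibrium", "liquid-liquid equilibrium",
             "phase change", "energy transfer", "mixing"]

def feasible(combo):
    # Screening rule from the abstract: phase change needs energy transfer
    if "phase change" in combo and "energy transfer" not in combo:
        return False
    return True

options = []
for r in range(1, len(phenomena) + 1):
    for combo in combinations(phenomena, r):
        if feasible(combo):
            options.append(combo)

# Encode each surviving option as a binary vector for the model generator
encoded = [[1 if p in combo else 0 for p in phenomena] for combo in options]
print(f"{len(options)} feasible options out of {2 ** len(phenomena) - 1}")
print("example option:", options[0], "->", encoded[0])
```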

Keywords: phenomena, process intensification, process models, process options

Procedia PDF Downloads 220
3844 Iterative Segmentation and Application of Hausdorff Dilation Distance in Defect Detection

Authors: S. Shankar Bharathi

Abstract:

Inspection of surface defects on metallic components has always been challenging due to their specular property. Defects such as scratches, rust, and pitting occur very commonly on metallic surfaces during the manufacturing process. These defects, if unchecked, can hamper performance and reduce the lifetime of such components. Many conventional image processing algorithms for detecting surface defects involve segmentation techniques based on thresholding, edge detection, watershed segmentation, and textural segmentation; they later employ other suitable algorithms based on morphology, region growing, shape analysis, or neural networks for classification purposes. In this paper, the work focuses only on detecting scratches. Global and other thresholding techniques were used to extract the defects, but they proved inaccurate in extracting the defects alone. However, this paper does not focus on a comparison of different segmentation techniques; rather, it describes a novel approach to segmentation combined with the Hausdorff dilation distance. The proposed algorithm is based on the distribution of intensity levels, that is, whether a certain gray level is concentrated or evenly distributed, and works by extracting such concentrated pixels. Defective images showed a high concentration of some gray levels, whereas in non-defective images the gray levels were evenly distributed, with no concentration. This formed the basis for detecting the defects in the proposed algorithm. The Hausdorff dilation distance, based on mathematical morphology, was used to strengthen the segmentation of the defects.
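
The sketch below illustrates the two ingredients in miniature: flagging gray levels whose pixels are densely packed in a small region, and a Hausdorff dilation distance computed by repeated unit dilations. The toy image, thresholds, and the exact distance definition are assumptions.

```python
# Concentrated gray levels plus a morphological dilation distance (assumed details).
import numpy as np
from scipy.ndimage import binary_dilation

def concentrated_levels(gray, min_count=30):
    """Gray levels whose pixels cluster densely in a small region (scratch-like)."""
    levels = []
    for g in np.unique(gray):
        ys, xs = np.nonzero(gray == g)
        if len(ys) >= min_count:
            box_area = (np.ptp(ys) + 1) * (np.ptp(xs) + 1)
            dense = len(ys) / box_area > 0.5           # packed within its box
            local = box_area < gray.size * 0.25        # and the box is small
            if dense and local:
                levels.append(int(g))
    return levels

def hausdorff_dilation_distance(A, B):
    """Smallest number of unit dilations of B needed for B to cover A."""
    d, covered = 0, B.copy()
    while (A & ~covered).any():
        covered = binary_dilation(covered)
        d += 1
    return d

gray = np.full((40, 40), 120, dtype=np.uint8)
gray[18:22, 5:35] = 60                       # a scratch: one concentrated gray level
print("concentrated gray levels:", concentrated_levels(gray))
A = gray == 60                               # segmented defect
B = np.zeros_like(A); B[20, 20] = True       # reference marker
print("dilation distance:", hausdorff_dilation_distance(A, B))
```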

Keywords: metallic surface, scratches, segmentation, hausdorff dilation distance, machine vision

Procedia PDF Downloads 409
3843 Vehicle Gearbox Fault Diagnosis Based on Cepstrum Analysis

Authors: Mohamed El Morsy, Gabriela Achtenová

Abstract:

Research on damage to gears and gear pairs using vibration signals remains very attractive, because vibration signals from a gear pair are complex in nature and not easy to interpret. Predicting gear pair defects by analyzing changes in the vibration signals of gear pairs in operation is a very reliable method. Therefore, a suitable vibration signal processing technique is necessary to extract defect information that is generally obscured by noise from the dynamic factors of other gear pairs. This article presents the value of cepstrum analysis in vehicle gearbox fault diagnosis. The cepstrum represents the overall power content of a whole family of harmonics and sidebands, even when more than one family of sidebands is present at the same time. The concepts behind the measurement and analysis involved in using the technique are briefly outlined. Cepstrum analysis is used for the detection of an artificial pitting defect in a vehicle gearbox loaded at different speeds and torques. The test stand is equipped with three dynamometers: the input dynamometer serves as the internal combustion engine, while the output dynamometers introduce the load on the flanges of the output joint shafts. The pitting defect is manufactured on the tooth side of a gear of the fifth speed on the secondary shaft. A method for the diagnosis of gear faults based on the order cepstrum is also presented, and the procedure is illustrated with experimental vibration data from the vehicle gearbox. The results show the effectiveness of cepstrum analysis in the detection and diagnosis of the gear condition.
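
A short sketch of the power cepstrum follows, applied to a synthetic signal in which a mesh tone is amplitude-modulated at a fault frequency; the sideband family then appears as a cepstral peak at the matching quefrency. All frequencies and amplitudes are illustrative.

```python
# Power cepstrum of a synthetic gearbox-like signal (assumed frequencies).
import numpy as np

fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
f_mesh, f_fault = 1200, 30                     # gear mesh and fault frequencies [Hz]
signal = (1 + 0.5 * np.sin(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_mesh * t)
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Power cepstrum: inverse FFT of the log power spectrum
spectrum = np.fft.rfft(signal)
log_power = np.log(np.abs(spectrum) ** 2 + 1e-12)
cepstrum = np.fft.irfft(log_power)
quefrency = np.arange(cepstrum.size) / fs      # seconds

# The sideband family spaced at f_fault appears near quefrency 1/f_fault
lo, hi = int(fs / f_fault * 0.9), int(fs / f_fault * 1.1)
peak = lo + np.abs(cepstrum[lo:hi]).argmax()
print(f"cepstral peak at {quefrency[peak] * 1e3:.1f} ms (expected ~{1e3 / f_fault:.1f} ms)")
```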

Keywords: cepstrum analysis, fault diagnosis, gearbox, vibration signals

Procedia PDF Downloads 362
3842 Fabrication and Analysis of Simplified Dragonfly Wing Structures Created Using Balsa Wood and Red Prepreg Fibre Glass for Use in Biomimetic Micro Air Vehicles

Authors: Praveena Nair Sivasankaran, Thomas Arthur Ward, Rubentheren Viyapuri

Abstract:

This paper describes a methodology to fabricate a simplified dragonfly wing structure using balsa wood and red prepreg fibre glass. These simplified wing structures were created for use in biomimetic micro air vehicles (BMAV). Dragonfly wings are highly corrugated and possess complex vein structures. In order to mimic the wing's function and retain its properties, a simplified version of the wing was designed using a method called spatial network analysis, which utilizes the Canny edge detection method. The vein structure of the wing was carved out in balsa wood and red prepreg fibre glass. Balsa wood and red prepreg fibre glass were chosen for their ultra-lightweight properties, which make them highly suitable for this application. The fabricated structure was then immersed in a nanocomposite solution containing chitosan as a film matrix, reinforced with chitin nanowhiskers, and tannic acid as a crosslinking agent. These materials closely mimic the membrane of a dragonfly wing. Finally, the wings were subjected to a bending test, and comparisons were made with previous research for verification. The results had a margin of difference of about 3%, and thus the structure was validated.
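
A minimal sketch of the vein-extraction step is shown below: Canny edge detection (with a smoothing pre-step) applied to a wing image. The file names and thresholds are placeholders, not the authors' values.

```python
# Canny-based vein extraction from a wing image (placeholder file and thresholds).
import cv2

img = cv2.imread("dragonfly_wing.png", cv2.IMREAD_GRAYSCALE)  # assumed input image
img = cv2.GaussianBlur(img, (5, 5), 0)         # suppress noise before edge detection
edges = cv2.Canny(img, threshold1=50, threshold2=150)
cv2.imwrite("wing_veins.png", edges)           # binary vein map for the design step
```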

Keywords: dragonfly wings, simplified, Canny edge detection, balsa wood, red prepreg, chitin, chitosan, tannic acid

Procedia PDF Downloads 312
3841 Helicopter Exhaust Gases Cooler in Terms of Computational Fluid Dynamics (CFD) Analysis

Authors: Mateusz Paszko, Ksenia Siadkowska

Abstract:

Due to their low-altitude and relatively low-speed flight, helicopters are easy targets for modern combat assets, e.g., infrared-guided missiles. Current techniques aim to increase the combat effectiveness of military helicopters. Protection of the helicopter in flight from early detection, tracking, and finally destruction can be realized in many ways. One of them is cooling the hot exhaust gases emitted from the engines to the atmosphere in special heat exchangers. Nowadays, this process is realized in ejective coolers, where a strong heat and momentum exchange takes place between the hot exhaust gases and cold air ejected from the atmosphere. The flow of air, of the exhaust gases, and of the mixture of the two, as well as the heat transfer between the cold air and hot exhaust gases, are described by differential equations of mass transport (flow continuity), ejection of cold air by the expanding exhaust gases, conservation of momentum and energy, and physical relationship equations. Calculating these processes in an ejective cooler by means of classical mathematical analysis is extremely hard or even impossible, so it is necessary to apply a numerical approach using modern computer programs. The paper discusses the general usability of Computational Fluid Dynamics (CFD) in the process of designing an ejective exhaust gas cooler cooperating with a helicopter turbine engine. In this work, CFD calculations have been performed for an ejective-based cooler cooperating with the PA W3 helicopter's engines.

Keywords: aviation, CFD analysis, ejective-cooler, helicopter techniques

Procedia PDF Downloads 310
3840 GA3C for Anomalous Radiation Source Detection

Authors: Chia-Yi Liu, Bo-Bin Xiao, Wen-Bin Lin, Hsiang-Ning Wu, Liang-Hsun Huang

Abstract:

In order to reduce the risk of radiation damage that personnel may suffer during operations in a radiation environment, the use of automated guided vehicles to assist or replace on-site personnel has become a key technology and an important trend. In this paper, we demonstrate our proof of concept for an autonomous, self-learning radiation source searcher operating in an unknown environment without a map. The research uses the GPU version of the Asynchronous Advantage Actor-Critic network (GA3C), a deep reinforcement learning method, to search for radiation sources. The searcher network, based on the GA3C architecture, learned in a self-directed way how to search for an anomalous radiation source, improving over 1 million training episodes across three simulation environments. In each training episode, the radiation source position, the radiation source intensity, and the starting position are all set randomly within one simulation environment. The input to the searcher network is the fused data from a 2D laser scanner and an RGB-D camera, together with the reading of the radiation detector; the output actions are the linear and angular velocities. The searcher network is trained in a simulation environment to accelerate the learning process. The well-performing searcher network is then deployed on a real unmanned vehicle, a Dashgo E2, which mounts a YDLIDAR G4 LIDAR, an Intel D455 RGB-D camera, and a radiation detector made by the Institute of Nuclear Energy Research. In the field experiment, the unmanned vehicle was able to search out an 18.5 MBq Na-22 radiation source by itself while avoiding obstacles, without human interference.
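
The PyTorch sketch below shows the actor-critic loss at the heart of A3C/GA3C on a dummy batch: a policy-gradient term weighted by the advantage, a value-regression term, and an entropy bonus. Network sizes, loss coefficients, and the generic feature input (standing in for the fused LIDAR/RGB-D/detector data) are assumptions.

```python
# Illustrative actor-critic loss, as used in A3C/GA3C (assumed sizes and coefficients).
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, obs_dim=32, n_actions=5):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.policy = nn.Linear(64, n_actions)    # action logits
        self.value = nn.Linear(64, 1)             # state-value estimate

    def forward(self, obs):
        h = self.body(obs)
        return self.policy(h), self.value(h).squeeze(-1)

net = ActorCritic()
obs = torch.randn(16, 32)                 # dummy fused-sensor features
actions = torch.randint(0, 5, (16,))      # actions taken in the rollout
returns = torch.randn(16)                 # discounted returns from the rollout

logits, values = net(obs)
dist = torch.distributions.Categorical(logits=logits)
advantage = returns - values.detach()     # A(s, a) ~ R - V(s)

policy_loss = -(dist.log_prob(actions) * advantage).mean()
value_loss = (returns - values).pow(2).mean()
entropy_bonus = dist.entropy().mean()     # encourages exploration
loss = policy_loss + 0.5 * value_loss - 0.01 * entropy_bonus
loss.backward()                           # gradients for the (asynchronous) update
print(float(loss))
```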

Keywords: deep reinforcement learning, GA3C, source searching, source detection

Procedia PDF Downloads 96
3839 A Deep Learning Approach to Online Social Network Account Compromisation

Authors: Edward K. Boahen, Brunel E. Bouya-Moko, Changda Wang

Abstract:

The major threat to online social network (OSN) users is account compromise. Spammers now spread malicious messages by exploiting the trust relationship established between account owners and their friends. The challenge for service providers in detecting a compromised account is validating the trust relationship established between the account owners, their friends, and the spammers. Another challenge is the increasing human interaction required for feature selection. The available research on supervised (machine) learning has limitations with respect to feature selection and to accounts that cannot be profiled, such as application programming interfaces (APIs). Therefore, this paper discusses the various behaviours of OSN users and the current approaches to detecting a compromised OSN account, emphasizing their limitations and challenges. We propose a deep learning approach that addresses and resolves the constraints faced by previous schemes. We detail our proposed optimized nonsymmetric deep auto-encoder (OPT_NDAE) for unsupervised feature learning, which reduces the level of human interaction required for the selection and extraction of features. We evaluated our proposed classifier using the NSL-KDD and KDDCUP'99 datasets in a GUI-enabled Weka application. The results obtained indicate that our proposed approach outperforms most traditional schemes in compromised OSN account detection, with an accuracy rate of 99.86%.
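
The sketch below gives a hedged impression of the non-symmetric auto-encoder idea: a deeper, narrowing encoder paired with a shallow decoder, trained on reconstruction and then used to emit compact features for a downstream classifier. Layer sizes and training details are assumptions, not the OPT_NDAE configuration.

```python
# Assumed sketch of a non-symmetric deep auto-encoder for feature learning.
import torch
import torch.nn as nn

class NonSymmetricAE(nn.Module):
    def __init__(self, in_dim=41):                 # e.g., NSL-KDD has 41 features
        super().__init__()
        self.encoder = nn.Sequential(              # deeper, narrowing encoder
            nn.Linear(in_dim, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 8), nn.ReLU(),
        )
        self.decoder = nn.Linear(8, in_dim)        # shallow decoder (non-symmetric)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

x = torch.rand(256, 41)                            # dummy normalized records
model = NonSymmetricAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                             # toy reconstruction training
    recon, _ = model(x)
    loss = nn.functional.mse_loss(recon, x)
    opt.zero_grad(); loss.backward(); opt.step()

_, features = model(x)                             # compact features for a classifier
print(features.shape)                              # torch.Size([256, 8])
```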

Keywords: computer security, network security, online social network, account compromisation

Procedia PDF Downloads 100
3838 Gender Differences in Adolescent Avatars: Gender Consistency and Masculinity-Femininity of Nicknames and Characters

Authors: Monika Paleczna, Małgorzata Holda

Abstract:

Choosing an avatar's gender in a computer game is one of the key elements in the process of creating an online identity. The selection of a male or female avatar can define the entirety of subsequent decisions regarding both appearance and behavior. However, when the most popular games available for the Nintendo console in 1998 were analyzed, it turned out that 41% of computer games did not have female characters. Nowadays, players create their avatars based mainly on binary gender classification, with male and female characters to choose from. The main aim of the poster is to explore gender differences in adolescent avatars. 130 adolescents aged 15-17 participated in the study. They created their avatars and then played a computer game. The creation of the avatar was based on the choice of gender, then physical and mental characteristics. Data on gender consistency (consistency between participant’s sex and gender selected for the avatar) and masculinity-femininity of avatar nicknames and appearance will be presented. The masculinity-femininity of avatar nicknames and appearance was assessed by expert raters on a very masculine to very feminine scale. Additionally, data on the relationships of the perceived levels of masculinity-femininity with hostility-friendliness and the intelligence of avatars will be shown. The dimensions of hostility-friendliness and intelligence were also assessed by expert raters on scales ranging from very hostile to very friendly and from very low intelligence to very high intelligence.

Keywords: gender, avatar, adolescence, computer games

Procedia PDF Downloads 197
3837 Model-Based Diagnostics of Multiple Tooth Cracks in Spur Gears

Authors: Ahmed Saeed Mohamed, Sadok Sassi, Mohammad Roshun Paurobally

Abstract:

Gears are important machine components that are widely used to transmit power and change speed in many rotating machines. Any breakdown of these vital components may cause severe disturbance to production and incur heavy financial losses. One of the most common causes of gear failure is the tooth fatigue crack, and early detection of tooth cracks is still a challenging task for engineers and maintenance personnel. So far, to analyze the vibration behavior of gears, different approaches have been tried based on theoretical developments, numerical simulations, or experimental investigations. The objective of this study was to develop a numerical model that could be used to simulate the effect of tooth cracks on the resulting vibrations and hence to permit early fault detection for gear transmission systems. Unlike the majority of published papers, where only a single crack has been considered, this work is more realistic, since it incorporates the possibility of multiple simultaneous cracks with different lengths. As cracks significantly alter the gear mesh stiffness, we performed a finite element analysis using SolidWorks software to determine the stiffness variation with respect to the angular position for different combinations of crack lengths. A simplified six-degrees-of-freedom non-linear lumped-parameter model of a one-stage gear system is proposed to study the vibration of a pair of spur gears, with and without tooth cracks. The model takes several physical properties into account, including the variable gear mesh stiffness and the effect of friction, but ignores lubrication effects. The vibration simulation results for the gearbox were obtained via Matlab and Simulink and were found to be consistent with the results of previously published works. The effect of a single crack at different severity levels was studied: the resulting changes in the total mesh stiffness and the vibration response were very similar to those found in previous studies. The effect of the crack length on various statistical time-domain parameters was then considered, and the results show that these parameters were not equally sensitive to the crack percentage. Finally, multiple cracks were introduced at different locations, and the vibration response and the statistical parameters were obtained.
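
A heavily reduced sketch of the modeling idea follows: a single-degree-of-freedom mesh equation with a time-varying stiffness that drops while a cracked tooth is engaged, integrated numerically and summarized by the kind of statistical time-domain indicators the study compares. The 1-DOF reduction and all parameter values are illustrative assumptions, not the paper's 6-DOF model.

```python
# 1-DOF reduction of a gear-mesh model with crack-induced stiffness drop (assumed values).
import numpy as np
from scipy.integrate import solve_ivp

m, c = 0.5, 50.0                     # equivalent mass [kg], damping [N·s/m]
k0, f_mesh = 2e7, 400.0              # nominal mesh stiffness [N/m], mesh frequency [Hz]

def mesh_stiffness(t, crack_drop=0.25):
    """Periodic mesh stiffness; one tooth per revolution (of 20) is cracked."""
    tooth = (t * f_mesh) % 20.0                           # tooth index in a revolution
    k = k0 * (1 + 0.2 * np.cos(2 * np.pi * f_mesh * t))   # normal mesh variation
    if tooth < 1.0:                                       # cracked tooth engaged
        k *= (1 - crack_drop)                             # crack reduces stiffness
    return k

def rhs(t, y):
    x, v = y
    F = 100.0                        # static transmitted load [N]
    return [v, (F - c * v - mesh_stiffness(t) * x) / m]

sol = solve_ivp(rhs, (0, 0.2), [0.0, 0.0], max_step=1e-5)
x = sol.y[0]
# Statistical time-domain indicators of the kind compared in the paper
rms = np.sqrt(np.mean(x ** 2))
kurtosis = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
print(f"RMS = {rms:.3e} m, kurtosis = {kurtosis:.2f}")
```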

Keywords: dynamic simulation, gear mesh stiffness, simultaneous tooth cracks, spur gear, vibration-based fault detection

Procedia PDF Downloads 195
3836 Bank Failures: A Question of Leadership

Authors: Alison L. Miles

Abstract:

Almost all major financial institutions in the world suffered losses due to the financial crisis of 2007, but the extent varied widely. The causes of the crash of 2007 are well documented and predominantly focus on the role and complexity of the financial markets. The dominant theme of the literature suggests the causes of the crash were a combination of globalization, financial sector innovation, moribund regulation, and short-termism. While these arguments are undoubtedly true, they do not tell the whole story. A key weakness in the current analysis is the lack of consideration of those leading the banks before and during the crisis. The purpose of this study is to examine the possible link between the leadership styles and characteristics of the CEO, CFO, and chairman and the financial institutions that failed or needed recapitalization. As such, it contributes to the literature and debate on international financial crises and systemic risk, as well as to the debate on risk management and regulatory reform in the banking sector. In order to test the first proposition (P1), that there are prevalent leadership characteristics or traits in financial institutions, an initial study was conducted using a sample of the 65 largest global banks and financial institutions according to The Banker Top 1000 Banks 2014. Secondary data from publicly available and official documents, annual reports, treasury and parliamentary reports, together with a selection of press articles and analyst meeting transcripts, was collected longitudinally for the period 1998 to 2013. A computer-aided keyword search was used to identify the leadership styles and characteristics of the chairman, CEO, and CFO. The results were then compared with leadership models to form a picture of leadership in the sector during the research period. As this produced separate results that needed combining, the SPSS data editor was used to aggregate the results across the studies using the variables 'leadership style' and 'company financial performance', together with the size of the company. In order to test the proposition (P2) that there was a prevalent leadership style in the banks that failed, and the proposition (P3) that this style was different from that in the banks that did not, further quantitative analysis was carried out on the leadership styles of the chairman, CEO, and CFO of banks that needed recapitalization, were taken over, or required government bail-out assistance during 2007-8. These included Lehman Bros, Merrill Lynch, Royal Bank of Scotland, HBOS, Barclays, Northern Rock, Fortis, and Allied Irish. The findings show that although regulatory reform has been a key mechanism for controlling behavior in the banking sector, consideration of the leadership characteristics of those running the board is a key factor. The findings add weight to the argument that if each crisis is met with the same pattern of popular fury with the financier, increased regulation, and then a return to business as usual, the cycle of failure will always be repeated; viewed through a different lens, new paradigms can be formed and future clashes avoided.

Keywords: banking, financial crisis, leadership, risk

Procedia PDF Downloads 305
3835 Advantages of Computer Navigation in Knee Arthroplasty

Authors: Mohammad Ali Al Qatawneh, Bespalchuk Pavel Ivanovich

Abstract:

Computer navigation has been introduced in total knee arthroplasty to improve the accuracy of the procedure. It improves the accuracy of bone resection in the coronal and sagittal planes, normalizes the rotational alignment of the femoral component, and allows the deformation of soft tissues in the coronal plane to be fully assessed and balanced. This work is devoted to the advantages of using computer navigation technology in total knee arthroplasty in 62 patients (11 men and 51 women) suffering from gonarthrosis, aged 51 to 83 years, operated on using a computer navigation system and followed up for up to 3 years from the moment of surgery. During the examination, the deformity variant was determined, radiometric parameters of the knee joints were measured, and outcomes were assessed using the Knee Society Score (KSS), Functional Knee Society Score (FKSS), and Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) scales. Functional stress tests were also performed to assess the stability of the knee joint in the frontal plane, together with functional indicators of the range of motion. After surgery, improvement was observed on all scales: firstly, the WOMAC values decreased by 5.90 times, with the median value falling to 11 points (p < 0.001); secondly, the KSS increased by 3.91 times, reaching 86 points (p < 0.001); and thirdly, the FKSS increased by 2.08 times, reaching 94 points (p < 0.001). After TKA, an axis deviation of the lower limbs of more than 3 degrees was observed in 4 patients (6.5%), and frontal instability of the knee joint in just 2 cases (3.2%); the incidence of sagittal instability of the knee joint after the operation was 9.6%. The range of motion increased by 1.25 times, with the volume of movement averaging 125 degrees (p < 0.001). Computer navigation increases the accuracy of the spatial orientation of the endoprosthesis components in all planes, keeps the variability of the axis of the lower limbs within ± 3°, allows the best results of surgical interventions to be achieved, and can be used to solve most basic tasks, yielding excellent and good outcomes in 100% of cases according to the WOMAC scale. With diaphyseal deformities of the femur and/or tibia, as well as with obstruction of their medullary canal, the use of computer navigation is the method of choice. It also prevents the occurrence of flexion contracture and hyperextension of the knee joint during the distal cut of the femur. Using the navigation system achieves high-precision implantation of the endoprosthesis and an adequate balance of the ligaments, which contributes to the stability of the joint, reduces pain, and allows a good functional result of the treatment to be achieved.

Keywords: knee joint, arthroplasty, computer navigation, advantages

Procedia PDF Downloads 66
3834 Disease Level Assessment in Wheat Plots Using a Residual Deep Learning Algorithm

Authors: Felipe A. Guth, Shane Ward, Kevin McDonnell

Abstract:

The assessment of disease levels in crop fields is an important and time-consuming task that generally relies on the expert knowledge of trained individuals. Image classification for agricultural problems has historically been based on classical machine learning strategies that make use of hand-engineered features on top of a classification algorithm. This approach tends not to produce highly accurate results that generalize across the classes classified by the system when the nature of the elements has significant variability. The advent of deep convolutional neural networks has revolutionized the field of machine learning, especially in computer vision tasks. These networks have a great capacity for learning and have been applied successfully to image classification and object detection tasks in recent years. The objective of this work was to propose a new method, based on deep convolutional neural networks, for the task of disease level monitoring. Common RGB images of winter wheat were obtained during a growing season. Five categories of disease level presence were produced, in collaboration with agronomists, for the algorithm to classify. Disease level assessments performed by experts provided ground truth data for the disease scores of the same winter wheat plots where the RGB images were acquired. The system had an overall accuracy of 84% in the discrimination of the disease level classes.
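
For readers unfamiliar with the residual architecture referenced in the title, the sketch below shows the basic residual building block with its identity shortcut; channel counts and sizes are illustrative, not the exact network used in the study.

```python
# Minimal residual block: the skip connection adds the input back to the conv path.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)       # identity shortcut: learn the residual

x = torch.randn(4, 64, 56, 56)           # e.g., RGB crops after a stem convolution
print(ResidualBlock()(x).shape)           # torch.Size([4, 64, 56, 56])
```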

Keywords: crop disease assessment, deep learning, precision agriculture, residual neural networks

Procedia PDF Downloads 306
3833 Fabrication of a New Electrochemical Sensor Based on New Nanostructured Molecularly Imprinted Polypyrrole for Selective and Sensitive Determination of Morphine

Authors: Samaneh Nabavi, Hadi Shirzad, Arash Ghoorchian, Maryam Shanesaz, Reza Naderi

Abstract:

Morphine (MO), the most effective painkiller, is considered the reference by which analgesics are assessed. For biomedical applications, it is necessary to detect MO and maintain its concentrations in blood and urine within safe ranges. To date, the existing techniques for detecting MO are mostly expensive. Recently, many electrochemical sensors for the direct determination of MO have been constructed. A molecularly imprinted polymer (MIP) is a polymeric material with built-in functionality for the recognition of a particular chemical substance by its complementary cavity. This paper reports a sensor for MO using a combination of a molecularly imprinted polymer (MIP) and differential pulse voltammetry (DPV). Electropolymerization of MO-doped polypyrrole initially yielded poor quality, but a well-doped nanostructure with increased impregnation was obtained at pH 12; above a pH of 11, MO is in its anionic form. The effects of various experimental parameters, including pH, scan rate, and accumulation time, on the voltammetric response of MO were investigated. Under the optimum conditions, the concentration of MO was determined using DPV in a linear range of 7.07 × 10−6 to 2.1 × 10−4 mol L−1, with a correlation coefficient of 0.999 and a detection limit of 13.3 × 10−8 mol L−1. The effect of common interferents, namely ascorbic acid (AA) and uric acid (UA), on the current response of MO was studied. The modified electrode can be used for the determination of MO spiked into urine samples, for which excellent recovery results were obtained. The nanostructured polypyrrole films were characterized by field emission scanning electron microscopy (FESEM) and Fourier transform infrared (FTIR) spectroscopy.

Keywords: morphine detection, sensor, polypyrrole, nanostructure, molecularly imprinted polymer

Procedia PDF Downloads 403
3832 Limestone Briquette Production and Characterization

Authors: André C. Silva, Mariana R. Barros, Elenice M. S. Silva, Douglas. Y. Marinho, Diego F. Lopes, Débora N. Sousa, Raphael S. Tomáz

Abstract:

Modern agriculture requires productivity, efficiency, and quality. Therefore, there is a need for agricultural limestone that provides adequate amounts of calcium and magnesium carbonates in order to correct soil acidity. During limestone processing, fine particles (with an average size under 400#) are generated. These particles have no economic value in the agricultural and metallurgical sectors due to their size, and when limestone is used for agricultural purposes they can easily be carried by the wind, generating air pollution. Therefore, briquetting, a mineral processing technique, was used to mitigate this problem, resulting in an agglomerated product suitable for agricultural use. Briquetting uses compressive pressure to agglomerate fine particles. It can be aided by agglutination agents, allowing adjustments to the shape, size, and mechanical parameters of the mass. Briquettes can generate extra profits for the mineral industry, serving as a distinct product for agriculture, and can reduce the environmental liabilities of fine-particle storage or disposal. The produced limestone briquettes were subjected to shatter and water-action resistance tests. The results show that after six minutes completely submerged in water, the briquettes were fully dissolved, a highly favorable result considering their use for soil acidity correction.

Keywords: agglomeration, briquetting, limestone, soil acidity correction

Procedia PDF Downloads 372
3831 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes

Authors: Madushani Rodrigo, Banuka Athuraliya

Abstract:

In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions, so accurately identifying fracture locations and types is necessary to ensure proper treatment and enhance the bone healing process. Interpreting X-ray images relies on the expertise and experience of medical professionals, and radiographic images are sometimes of low quality, leading to potential issues. Therefore, a proper approach is needed to accurately localize and classify fractures in real time. The research revealed that the optimal approach needs to employ appropriate radiographic image processing techniques and object detection algorithms that can effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented, incorporating radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16; in parallel, a fracture segmentation model has been implemented using an enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the 12 available fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system also generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the U-Net architecture, achieved a high accuracy level of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, using the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize the various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring the timely and accurate identification of bone fractures for the best treatment outcomes.
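
The sketch below illustrates one plausible form of the classification ensemble: ResNet18 and VGG16 heads over the 12 fracture-pattern classes, soft-voting their softmax outputs and reporting the top class with a confidence score. The averaging rule and untrained weights are assumptions about the ensemble's exact construction.

```python
# Assumed soft-voting ensemble of ResNet18 and VGG16 over 12 fracture classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 12                                   # fracture patterns listed above

resnet = models.resnet18(weights=None)
resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)
vgg = models.vgg16(weights=None)
vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)
resnet.eval(); vgg.eval()                          # inference mode

@torch.no_grad()
def ensemble_predict(x):
    p1 = torch.softmax(resnet(x), dim=1)
    p2 = torch.softmax(vgg(x), dim=1)
    probs = (p1 + p2) / 2                          # soft-voting ensemble
    conf, cls = probs.max(dim=1)
    return cls, conf                               # class index + confidence score

x = torch.randn(1, 3, 224, 224)                    # a preprocessed radiograph
cls, conf = ensemble_predict(x)
print(f"predicted class {int(cls)} with confidence {float(conf):.2f}")
```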

Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16

Procedia PDF Downloads 67
3830 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option for tackling the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which in most cases conflicts with the time constraints provided for the quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning to the generation of automated test cases. It is based on supervised learning that analyzes test specifications and existing test implementations; the analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, this approach exploits historical data from test automation projects. The identified patterns form the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small real-world systems. The most prominent EC is 'subset accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 (multi-class) relationship between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than current state-of-the-art results. The prediction quality is expected to increase for larger systems with more historical data. Consequently, this technique reduces the time needed to establish test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in cases where labelled historical data is scarce.
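
The toy sketch below shows the multi-label formulation in scikit-learn: each test-step specification maps to one or more automation components, and subset accuracy scores a prediction only when all labels match. The corpus, component names, and classifier choice are assumptions.

```python
# Multi-label mapping of test-step text to automation components (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import accuracy_score   # on binarized labels = subset accuracy

steps = [
    "open ignition and check dashboard warning lamps",
    "send CAN message and verify response frame",
    "open ignition and send CAN message",
    "verify response frame against specification",
]
components = [["IgnitionCtrl", "DashboardCheck"], ["CanBus"],
              ["IgnitionCtrl", "CanBus"], ["CanBus"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(components)            # one binary column per component
X = TfidfVectorizer().fit_transform(steps)

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(X)
print("components:", mlb.classes_)
print("subset accuracy:", accuracy_score(Y, pred))   # 1.0 only if all labels match
```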

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 114