Search results for: computer navigation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2673


1683 Navigating Rapids And Collecting Medical Insights: A Data Collection Of Athletes Presenting To The Medical Team At The International Canoe Federation Canoe Slalom World Championships 2023

Authors: Grace Scaplehorn, Muhammad Adeel Akhtar, Jane Gibson

Abstract:

Background: Canoe Slalom entails the skilful navigation of a carbon composite canoe or kayak through a series of 18-25 hanging gates, strategically positioned along the course, either upstream or downstream, amidst currents of whitewater rapids in natural and man-made river settings. Athletes compete individually in timed trials, racing for the fastest course time, typically around 80 to 120 seconds. In the new discipline of Kayak Cross, descents of the course are initiated by groups of four athletes freefalling simultaneously from a starting platform situated 3m above the river. Kayak Cross athletes, in contrast to Canoe Slalom, can make physical contact with suspended gates without incurring time penalties and are required to perform a kayak roll halfway down the course. The Canoe Slalom World Championships were held at Lee Valley Whitewater Centre, London, from 19th to 24th September 2023. The event comprised 299 international athletes competing for 10 World Championship titles in Canoe/Kayak Slalom events (Olympic debut Munich 1972) and the new Kayak Cross discipline (Olympic debut Paris 2024). The inaugural appearance of Kayak Cross at the World Championships occurred in 2017, in Pau, France. There is limited literature on Kayak Cross and the incidence of athlete injuries compared to traditional Canoe Slalom, hence it was felt important to undertake this review to address the perception that the event is dangerous. Aim: The study aimed to quantify and collate data collected from athletes presenting to the event medical centre. Methods: Athletes’ details were collected at initial assessments from the start of the practice period (16th–18th September) and throughout the event. Demographics such as age, sex and nationality were recorded along with presenting complaints, treatment, medication administered and outcome. Injuries were then sub-classified into body regions. The data do not include athletes who sought medical attention from their own governing body’s medical team. Results: During the 8-day period, there were 11 individual presentations to the medical centre, 3.7% of the athlete population (n=299). The mean age was 23.9 years (n=7); 6 were male (n=10). The most common presentation was minor injury (n=9), with 6 being musculoskeletal and 3 comprising skin damage, followed by insect sting/allergy (n=1) and pain relief requests (n=1). Five presentations were event-related, all being musculoskeletal injuries: 2 shoulder/arm, 1 head/neck, 1 hand/wrist and 1 other (data not recorded). Of these injuries, the only intervention was 2 doses of 400 mg ibuprofen, given for the two shoulder/arm injuries. Four of the 11 presentations were pre-existing injuries that had been exacerbated by the increased intensity of practice. Two patients were advised to return for review, with 100% compliance. There were no unplanned re-presentations and no emergency transfers to secondary care. The Kayak Cross and Canoe Slalom competitions each resulted in 1 new event-related athlete presentation. Conclusion: The event resulted in a negligible incidence of presentations at the medical centre, for both Kayak Cross and Canoe Slalom. These data hold significance in informing the risk assessments and medical protocols necessary for the organisation of canoe slalom events.

Keywords: canoe slalom, kayak cross, athlete injuries, event injuries

Procedia PDF Downloads 57
1682 Exploring Chess Game AI Features Application

Authors: Bashayer Almalki, Mayar Bajrai, Dana Mirah, Kholood Alghamdi, Hala Sanyour

Abstract:

This research aims to investigate the features of an AI chess app that are most preferred by users. A questionnaire was used as the methodology to gather responses from a varied group of participants. The questionnaire consisted of several questions related to the features of the AI chess app. The responses were analyzed using descriptive statistics and factor analysis. The findings indicate that the most preferred features of an AI chess app are the ability to play against the computer, the option to adjust the difficulty level, and the availability of tutorials and puzzles. The results of this research could be useful for developers of AI chess apps to enhance the user experience and satisfaction.

Keywords: chess, game, application, computics

Procedia PDF Downloads 72
1681 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine-learning-based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high-quality, labeled training image data. The training images may come from different sources: they may be acquired on radiography machines from different manufacturers or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. In the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching to normalize the pixel intensities of the digital radiographs to the intensity distribution of the digitized ones. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified off-the-shelf CXR dataset composed of radiographs from both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm adequately preserves the important information in the lung fields, the local structures, and the global visual effect. The proposed method can be used to augment training and testing image datasets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in medical imaging applications.
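As an illustration of the two processing steps described above, the following minimal Python sketch (not the authors' implementation; it assumes NumPy and scikit-image and grayscale inputs) removes one minimal-energy vertical seam jointly from a CXR and its lung mask, and matches the histogram of a digital radiograph to a digitized reference.

```python
# Minimal sketch, not the authors' code: joint seam removal plus histogram matching.
import numpy as np
from skimage import filters
from skimage.exposure import match_histograms

def remove_vertical_seam(image, mask):
    """Remove one minimal-energy vertical seam from the image and its mask together."""
    energy = filters.sobel(image)                      # simple gradient-based energy map
    h, w = image.shape
    cum = energy.copy()
    for r in range(1, h):                              # dynamic-programming cumulative energy
        for c in range(w):
            lo, hi = max(c - 1, 0), min(c + 2, w)
            cum[r, c] += cum[r - 1, lo:hi].min()
    seam = np.empty(h, dtype=int)                      # backtrack the cheapest seam
    seam[-1] = int(np.argmin(cum[-1]))
    for r in range(h - 2, -1, -1):
        c = seam[r + 1]
        lo, hi = max(c - 1, 0), min(c + 2, w)
        seam[r] = lo + int(np.argmin(cum[r, lo:hi]))
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False                   # drop the same pixels from image and mask
    return image[keep].reshape(h, w - 1), mask[keep].reshape(h, w - 1)

def normalize_intensities(digital_cxr, digitized_reference):
    """Match the digital CXR histogram to a digitized-film reference."""
    return match_histograms(digital_cxr, digitized_reference)
```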

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 228
1680 Self-Selected Intensity and Discounting Rates of Exercise in Comparison with Food and Money in Healthy Adults

Authors: Tamam Albelwi, Robert Rogers, Hans-Peter Kubis

Abstract:

Background: Exercise is widely acknowledged as a highly important health behavior, which reduces risks related to lifestyle diseases such as type 2 diabetes and cardiovascular disease. However, exercise adherence is low in high-risk groups, and a sedentary lifestyle is more the norm than the exception. Expressed reasons for exercise participation are often based on delayed outcomes related to health threats and benefits, but also on enjoyment. That exercise is perceived as rewarding is well established in the animal literature, but the evidence in humans is sparse. Additionally, how stably a reward's value is perceived across time delays is an important question influencing decision-making (for or against a behavior). For the modality of exercise, this has not been examined before. We therefore investigated the discounting of pre-established self-selected exercise compared with the established rewards of food and money, using a computer-based discounting paradigm. We hypothesized that exercise would be discounted like an established reward (food and money) and expected its discounting rate to resemble that of a consumable reward like food. Additionally, we expected that individuals’ characteristics such as preferred intensity, physical activity and body characteristics would be associated with discount rates. Methods: 71 participants took part in four sessions. The sessions were designed to let participants select their preferred exercise intensity on a treadmill. Participants were asked to adjust their speed to optimize pleasantness over an exercise period of up to 30 minutes; heart rate and pleasantness ratings were measured. In further sessions, the established exercise intensity was modified and tested for perceptual validity. In the last exercise session, the rating of perceived exertion was measured at the preferred intensity. Furthermore, participants filled in questionnaires related to physical activity, mood, craving, and impulsivity, and answered choice questions on a bespoke computer task to establish discounting rates for their preferred exercise (kex), their favorite food (kfood) and a value-matching amount of money (kmoney). Results: Participants’ self-selected preferred speed was 5.5±2.24 km/h, at a heart rate of 120.7±23.5 bpm and a rating of perceived exertion of 10.13±2.06. This shows that participants preferred a light exercise intensity with low to moderate cardiovascular strain based on perceived pleasantness. Computer assessment of discounting rates revealed that exercise was discounted quickly, like a consumable reward, with no significant difference between kfood and kex (kfood=0.322±0.263; kex=0.223±0.203). However, kmoney (kmoney=0.080±0.02) was significantly lower than the rates for exercise and food. Moreover, significant associations were found between preferred speed and kex (r=-0.302) and between physical activity levels and preferred speed (r=0.324). The outcomes show that participants perceived and discounted self-selected exercise like an established reward (food and money), but that it was discounted more like a consumable reward. Moreover, exercise discounting was quicker in individuals who preferred lower speeds and were less physically active. This may show that in a choice conflict between exercise and food, the delay of exercise (because of distance) might disadvantage exercise as the chosen behavior, particularly in sedentary people. Conclusion: Exercise can be perceived as a reward and is discounted quickly over time, like food. A pleasant exercise experience is connected to low to moderate cardiovascular and perceptual strain.
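For readers unfamiliar with discounting rates such as kex, kfood and kmoney, the following sketch (illustrative only; the delays and indifference points are made-up placeholders, not study data) shows how a hyperbolic discount rate k is typically fitted to choice data, where V = A / (1 + kD) gives the subjective value V of a reward A after delay D.

```python
# Illustrative sketch of fitting a hyperbolic discount rate k; values are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(delay, k):
    return 1.0 / (1.0 + k * delay)          # subjective value of a reward of size 1

delays = np.array([1, 7, 30, 90, 180], dtype=float)        # delays in days (hypothetical)
indifference = np.array([0.95, 0.80, 0.55, 0.35, 0.25])    # fraction of full reward (hypothetical)

(k_est,), _ = curve_fit(hyperbolic, delays, indifference, p0=[0.1])
print(f"estimated discount rate k = {k_est:.3f} per day")
# A larger k (e.g. kex or kfood versus kmoney) means the reward loses subjective value faster with delay.
```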

Keywords: delay discounting, exercise, temporal discounting, time perspective

Procedia PDF Downloads 272
1679 Further Investigation of Core Degradation Using Quench Test Facility Results

Authors: Antoaneta Stefanova, Rositsa Gencheva, Pavlin Groudev

Abstract:

This paper presents an application of the ASTEC V2r3p3 computer code to the simulation of the QUENCH-12 experiment. The test was performed to investigate the behavior of VVER-type fuel assemblies under severe accident conditions. The analyses assessed the mass of hydrogen generated during the flooding of the overheated core in the experiment. The comparison of the ASTEC V2r3p3 calculated results with the measured test data shows good agreement.

Keywords: hydrogen production, VVER, QUENCH facility, severe accident, reactor core

Procedia PDF Downloads 234
1678 3D Multimedia Model for Educational Design Engineering

Authors: Mohanaad Talal Shakir

Abstract:

This paper proposes an educational design using multimedia technology for Computer Technology Engineering at Alma'ref University College in Iraq. It evaluates students' acceptance, cognition, and interactivity with the proposed model, using statistical relationships to determine the stage of the model. The objectives of the proposed educational design are to develop user-friendly software for educational purposes using multimedia technology and to develop an animation for a 3D model simulating the assembling and disassembling process for high-speed flow.

Keywords: CAL, multimedia, shock tunnel, interactivity, engineering education

Procedia PDF Downloads 624
1677 'Sextually' Active: Teens, 'Sexting' and Gendered Double Standards in the Digital Age

Authors: Annalise Weckesser, Alex Wade, Clara Joergensen, Jerome Turner

Abstract:

Introduction: Digital mobile technologies afford Generation M a number of opportunities in terms of communication, creativity and connectivity in their social interactions. Yet these young people’s use of such technologies is often the source of moral panic, with accordant social anxiety especially prevalent in media representations of teen ‘sexting,’ or the sending of sexually explicit images via smartphones. Thus far, most responses to youth sexting have been ineffective or unjust, with adult authorities sometimes blaming victims of non-consensual sexting, using child pornography laws to paradoxically criminalise those they are designed to protect, and/or advising teenagers to simply abstain from the practice. Prevention strategies are further skewed, with sex education initiatives often targeted at girls, implying that they shoulder the responsibility of minimising the risks associated with sexting (e.g. revenge porn and sexual predation). Purpose of Study: Despite increasing public interest and concern about ‘teen sexting,’ there remains a dearth of research with young people regarding their experiences of navigating sex and relationships in the current digital media landscape. Furthermore, young people's views on sexting are rarely solicited in the policy and educational strategies aimed at them. To address this research-policy-education gap, an interdisciplinary team of four researchers (from anthropology, media, sociology and education) have undertaken a peer-to-peer research project to co-create a sexual health intervention. Methods: In the winter of 2015-2016, the research team conducted serial group interviews with four cohorts of students (aged 13 to 15) from a secondary school in the West Midlands, UK. To facilitate open dialogue, girls and boys were interviewed separately, and each group consisted of no more than four pupils. The team employed a range of participatory techniques to elicit young people’s views on sexting, its consequences, and its interventions. A final focus group session was conducted with all 14 male and female participants to explore developing a peer-to-peer ‘safe sexting’ education intervention. Findings: This presentation will highlight the ongoing, ‘old school’ sexual double standards at work within this new digital frontier. In the sharing of ‘nudes’ (teens’ preferred term to ‘sexting’) via social media apps (e.g. Snapchat and WhatsApp), girls felt sharing images was inherently risky and feared being blamed and ‘slut-shamed.’ In contrast, boys were seen to gain in social status if they accumulated nudes of female peers. Further, if boys had nudes of themselves shared without consent, they felt they were expected to simply ‘tough it out.’ The presentation will also explore what forms of support teens desire to help them in their day-to-day navigation of these digitally mediated, heteronormative performances of teen femininity and masculinity expected of them. Conclusion: This is the first research project within the UK conducted with, rather than about, teens and the phenomenon of sexting. It marks a timely and important contribution to the nascent but growing body of knowledge on gender, sexual politics and the digital mobility of sexual images created by and circulated amongst young people.

Keywords: teens, sexting, gender, sexual politics

Procedia PDF Downloads 238
1676 Learning-Teaching Experience about the Design of Care Applications for Nursing Professionals

Authors: A. Gonzalez Aguna, J. M. Santamaria Garcia, J. L. Gomez Gonzalez, R. Barchino Plata, M. Fernandez Batalla, S. Herrero Jaen

Abstract:

Background: Computer Science is a field that transcends other disciplines of knowledge because it can support all kinds of physical and mental tasks. Health centres have an increasing number and complexity of technological devices, and the population consumes and demands services derived from technology. Also, nursing education plans have included competencies related to new technologies, and courses about them are even offered to health professionals. However, nurses still limit their performance to the use and evaluation of products previously built. Objective: Develop a teaching-learning methodology for acquiring skills in designing applications for care. Methodology: Blended learning teaching with a group of graduate nurses through official training within a Master's Degree. The study sample was selected by intentional sampling without exclusion criteria. The study covers 2015 to 2017. The teaching sessions included a four-hour face-to-face class and between one and three tutorials. The assessment was carried out by a written test consisting of the preparation of an IEEE 830 Standard Specification document, where the subject chosen by the student had to be a problem in the area of care. Results: The sample is made up of 30 students: 10 men and 20 women. Nine students had a degree in nursing, 20 a diploma in nursing, and one a degree in Computer Engineering. Two students had a degree in nursing specialty through residence and two by exceptional equivalent recognition. Except for the engineer, no subject had previously received training in this regard. All the students enrolled in the course received the classroom teaching session, had access to the teaching material through a virtual area, and attended at least one tutorial. The maximum number of tutorials was three, totalling one hour. Among the material available for consultation was an example of a document drawn up based on the IEEE Standard on an issue not related to care. The test to measure competence was completed by the whole group and evaluated by a multidisciplinary teaching team of two computer engineers and two nurses. The engineers evaluated the correctness of the characteristics of the document and the degree of comprehension in the elaboration of the problem and solution; the nurses assessed the relevance of the chosen problem statement, the foundation, originality and correctness of the proposed solution, and the validity of the application for clinical practice in care. The results showed an average grade of 8.1 out of 10 points, with a range between 6 and 10. The selected topics rarely coincided among the students. Examples of care areas selected are care plans, family and community health, delivery care, administration and even robotics for care. Conclusion: The applied learning-teaching methodology for the design of technologies demonstrates success in the training of nursing professionals. The role of the expert is essential to create applications that satisfy the needs of end users. Nursing has the possibility, the competence and the duty to participate in the process of construction of technological tools that will impact the care of people, families and the community.

Keywords: care, learning, nursing, technology

Procedia PDF Downloads 138
1675 Developing a Cloud Intelligence-Based Energy Management Architecture Facilitated with Embedded Edge Analytics for Energy Conservation in Demand-Side Management

Authors: Yu-Hsiu Lin, Wen-Chun Lin, Yen-Chang Cheng, Chia-Ju Yeh, Yu-Chuan Chen, Tai-You Li

Abstract:

Demand-Side Management (DSM) has the potential to reduce electricity costs and the carbon emissions associated with electricity use in modern society. A home Energy Management System (EMS), commonly used by residential consumers in the downstream sector of a smart grid to monitor, control, and optimize the energy efficiency of domestic appliances, is a system of computer-aided functionalities serving as an energy audit for residential DSM. Implementing fault detection and classification for the domestic appliances that are monitored, controlled, and optimized is one of the most important steps towards preventive maintenance, such as preventive maintenance of residential air conditioning and heating, in residential/industrial DSM. In this study, a cloud intelligence-based green EMS built on an Internet of Things (IoT) technology stack for residential DSM is developed. In the EMS, smart sockets based on Arduino MEGA Ethernet communication, which include a Real-Time Clock chip to keep track of the current time as timestamps via the Network Time Protocol, are designed and implemented to capture readings of load phenomena reflected in the sensed voltage and current signals. Also, a Network-Attached Storage device providing data access to a heterogeneous group of IoT clients via Hypertext Transfer Protocol (HTTP) methods is configured as the data store for parsed sensor readings. Lastly, a desktop computer with a WAMP software bundle (the Microsoft® Windows operating system, Apache HTTP Server, MySQL relational database management system, and the PHP programming language) serves as a data science analytics engine providing a dynamic web app/RESTful web service for the residential DSM with Artificial Intelligence (AI)/Computational Intelligence. An abstract computing machine, the Java Virtual Machine, enables the desktop computer to run Java programs, and a mash-up of Java, the R language, and Python is configured for the AI in this study. To send real-time push notifications to IoT clients, the desktop computer implements Google-maintained Firebase Cloud Messaging to engage IoT clients across Android/iOS devices and provide a mobile notification service for residential/industrial DSM. In order to realize edge intelligence - where edge devices, avoiding network latency and the need for constant Internet connectivity, can support secure access to data stores and provide immediate analytical and real-time actionable insights at the edge of the network - we upgrade the designed and implemented smart sockets to embedded AI Arduino ones (called embedded AIduino). For the edge analytics realized by the proposed embedded AIduino, an Arduino Ethernet shield (WizNet W5100) with a micro SD card connector is used. The SD library is included for reading parsed data from and writing parsed data to the SD card, and an Artificial Neural Network library for the Arduino MEGA, ArduinoANN, is imported and used for the locally embedded AI implementation. The embedded AIduino in this study can be developed further for applications in manufacturing-industry energy management and sustainable energy management, where, for example, rotating machinery diagnostics identifies energy loss from gross misalignment and unbalance of rotating machines in power plants.
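As a simple illustration of the data flow described above - a timestamped power reading pushed over HTTP to the data store for later analytics - consider the following Python sketch. The endpoint URL, payload fields and device name are assumptions made for illustration and are not the authors' actual interface.

```python
# Illustrative sketch only: pushing a timestamped socket reading to an assumed HTTP data store.
import time
import requests

DATA_STORE_URL = "http://nas.local/api/readings"   # hypothetical HTTP endpoint on the NAS

def push_reading(socket_id, voltage_v, current_a):
    payload = {
        "socket": socket_id,
        "timestamp": int(time.time()),   # an NTP-disciplined clock is assumed on the device
        "voltage": voltage_v,
        "current": current_a,
        "power": voltage_v * current_a,
    }
    resp = requests.post(DATA_STORE_URL, json=payload, timeout=5)
    resp.raise_for_status()

push_reading("aiduino-01", 229.8, 0.42)
```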

Keywords: demand-side management, edge intelligence, energy management system, fault detection and classification

Procedia PDF Downloads 251
1674 Characterization of InP Semiconductor Quantum Dot Laser Diode after Am-Be Neutron Irradiation

Authors: Abdulmalek Marwan Rajkhan, M. S. Al Ghamdi, Mohammed Damoum, Essam Banoqitah

Abstract:

This paper concerns the Am-Be neutron source irradiation of an InP quantum dot (QD) laser diode (LD). A QD LD was irradiated for 24 hours and for 48 hours. The laser underwent I-V characterization experiments before and after the first and second irradiations. A computer simulation using GAMOS helped in analyzing the results obtained from the I-V curves. The results showed an improvement in the QD LD series resistance, current density, and overall ideality factor at all measured temperatures. This is explained by the activation of the indium in the QD LD composition to strontium, the ionization of the compound QD LD materials, and the energy deposited in the QD LD.

Keywords: quantum dot laser diode irradiation, effect of radiation on QD LD, Am-Be irradiation effect on SC QD LD

Procedia PDF Downloads 66
1673 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method

Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota

Abstract:

The Schlieren method, which has been conventionally used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of refraction of light. The Background Oriented Schlieren (BOS) method proposed by Meier is one of the measurement methods that solves the problems mentioned above. Like the Schlieren method, the BOS method exploits the refraction of light. The BOS method is characterized by using a digital camera to capture images of the background behind the observation area. The images are later analyzed by a computer to quantitatively detect the amount of shift of the background image. The experimental setup for BOS does not require concave mirrors, pinholes, or color filters, which are necessary in the conventional Schlieren method, thus simplifying the experimental setup. However, the BOS method causes defocusing of the observation results, since focusing the camera on the background defocuses the observed object. The defocusing of the object becomes greater as the distance between the background and the object increases; on the other hand, a higher sensitivity is obtained. Therefore, it is necessary to adjust the distance between the background and the object to be appropriate for the experiment, considering the relation between the defocus and the sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, a visualization experiment on an underexpanded jet has been performed using the BOS measurement system we constructed, with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, and the distance between the background and the jet axis was varied as the parameter. The images were later analyzed on a personal computer to quantitatively detect the shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The measured shifts were reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. From the experimental results, it is found that the reconstructed density image becomes more blurred, and the noise decreases, as the distance between the background and the axis of the underexpanded jet increases. Consequently, it is clarified that the sensitivity constant should be greater than 20, and the circle of confusion diameter should be less than 2.7 mm, at least in this experimental setup.
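The reconstruction step mentioned above can be illustrated with a simplified sketch: an onion-peeling inverse Abel transform recovers the radial refractive-index profile of the axisymmetric jet from a line-of-sight integrated signal, and the Gladstone-Dale relation n - 1 = K·ρ converts it to density. This is an illustrative stand-in rather than the authors' code; the ring spacing and the Gladstone-Dale constant for air are assumed values.

```python
# Simplified sketch: onion-peeling inverse Abel transform plus Gladstone-Dale conversion.
import numpy as np

def inverse_abel_onion_peeling(projection, dr):
    """Recover the radial profile f(r) from its Abel projection P(y), peeling outermost rings first."""
    n = len(projection)
    r = np.arange(n + 1) * dr                      # ring boundaries
    f = np.zeros(n)
    for i in range(n - 1, -1, -1):
        y = i * dr
        acc = projection[i]
        for j in range(i + 1, n):                  # subtract the contribution of outer rings
            L = 2.0 * (np.sqrt(r[j + 1]**2 - y**2) - np.sqrt(r[j]**2 - y**2))
            acc -= f[j] * L
        L_ii = 2.0 * (np.sqrt(r[i + 1]**2 - y**2) - np.sqrt(max(r[i]**2 - y**2, 0.0)))
        f[i] = acc / L_ii                          # path length through the ring itself
    return f

K_AIR = 2.26e-4                                    # Gladstone-Dale constant for air, m^3/kg (typical value)

def density_from_refractive_index(n_minus_1):
    return n_minus_1 / K_AIR                       # rho = (n - 1) / K
```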

Keywords: BOS method, underexpanded jet, abel transformation, density field visualization

Procedia PDF Downloads 81
1672 Complete Chloroplast DNA Sequences of Georgian Endemic Polyploid Wheats

Authors: M. Gogniashvili, I. Maisaia, A. Kotorashvili, N. Kotaria, T. Beridze

Abstract:

Three types of plasmon (A, B and G) are typical of the genus Triticum. In the polyploid species Triticum turgidum L. and Triticum aestivum L., plasmon B is detected. In this paper, the complete nucleotide sequence of the chloroplast DNA of 11 representatives of Georgian polyploid wheat species carrying plasmon B was determined. Sequencing of chloroplast DNA was performed on an Illumina MiSeq platform. Chloroplast DNA molecules were assembled using the SOAPdenovo computer program. All contigs were aligned to the reference chloroplast genome sequence using BLASTN. For SNP and indel detection and phylogenetic tree construction, the computer programs Mafft and BLAST were used. Using Triticum aestivum L. subsp. macha (Dekapr. & Menabde) Mackey var. paleocolchicum Dekapr. et Menabde as a reference, 5 SNPs can be identified in the chloroplast DNA of Georgian endemic polyploid wheat. The number of noncoding substitutions is 2 and of coding substitutions 3. In comparison with the reference DNA, two inversions (38 bp and 56 bp) were observed in the paleocolchicum subspecies. There were six 1 bp indels detected in Georgian polyploid wheats, all of them at microsatellite stretches. The phylogenetic tree shows that the subspecies macha, carthlicum and paleocolchicum occupy different positions. According to the simplified scheme based on SNP and indel data, the ancestral, female parent of all the studied polyploid wheats is an unknown X predecessor, from which four lines were formed. One SNP and two inversions (38 bp and 56 bp) caused the formation of subsp. paleocolchicum. The three other lines are the macha, durum and carthlicum lines. The macha line is further divided into two sublines (M_1 and M_4). The carthlicum line includes subsp. carthlicum and T. aestivum (C_1 - C_2 - A_1). One of the central questions of wheat domestication is which people(s) participated in it. It is proposed that the predecessors of the Georgian peoples (Proto-Kartvelians) must be placed, on the evidence of archaic lexical and toponymic data, in the mountainous regions of the western and central part of the Little Caucasus (the Transcaucasian foothills) at least 4,000 years ago. One possibility to explain the ‘wheat puzzle’ is that Kartvelian speakers brought domesticated wheat species and subspecies from the Fertile Crescent further north to the South Caucasus.
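The SNP and indel detection step can be illustrated by a toy sketch (not the authors' pipeline, which relied on SOAPdenovo, BLASTN and Mafft): given two already-aligned sequences, it walks the alignment and reports substitutions and gaps relative to the reference.

```python
# Toy sketch of variant calling from an alignment; the example sequences are made up.
def call_variants(reference_aln, sample_aln):
    """Both inputs are equal-length aligned strings; '-' marks a gap."""
    snps, indels = [], []
    ref_pos = 0
    for ref_base, alt_base in zip(reference_aln, sample_aln):
        if ref_base != "-":
            ref_pos += 1                       # position on the ungapped reference
        if ref_base == alt_base:
            continue
        if ref_base == "-" or alt_base == "-":
            indels.append((ref_pos, ref_base, alt_base))
        else:
            snps.append((ref_pos, ref_base, alt_base))
    return snps, indels

snps, indels = call_variants("ATGC-TTACG", "ATGCATTTCG")
print(snps, indels)   # one SNP and one 1 bp insertion in this toy example
```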

Keywords: chloroplast DNA, sequencing, SNP, triticum

Procedia PDF Downloads 154
1671 Real-Time Control of Grid-Connected Inverter Based on LabVIEW

Authors: L. Benbaouche, H. E. , F. Krim

Abstract:

In this paper, we propose a flexible and efficient real-time control of a grid-connected single-phase inverter. The first step is devoted to the study and design of the controller through simulation, conducted with the LabVIEW software on the 'host' computer. The second step is running the application from the PXI 'target'. LabVIEW, combined with NI-DAQmx, provides the tools to easily build applications that use the digital-to-analog converter to generate the PWM control signals. Experimental results show the effectiveness of LabVIEW software applied to power electronics.
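As a conceptual illustration of the PWM control signals mentioned above, the following pure-Python sketch models the sine-triangle comparison commonly used for a single-phase inverter; the reference and carrier frequencies and the modulation index are arbitrary example values, and the actual controller in the paper is implemented in LabVIEW/NI-DAQmx on the PXI target.

```python
# Conceptual sketch of sinusoidal PWM (sine-triangle comparison); parameters are illustrative.
import math

def spwm_sample(t, f_ref=50.0, f_carrier=5000.0, m=0.8):
    """Return the gate state (1/0) at time t for sinusoidal PWM with modulation index m."""
    reference = m * math.sin(2 * math.pi * f_ref * t)              # grid-frequency reference
    carrier = 2.0 * abs((f_carrier * t) % 1.0 - 0.5) * 2.0 - 1.0   # triangle carrier in [-1, 1]
    return 1 if reference >= carrier else 0

# Average duty over one 20 ms grid cycle is close to 0.5 for a symmetric sinusoidal reference.
duty = sum(spwm_sample(n / 1e6) for n in range(20000)) / 20000
print(duty)
```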

Keywords: real-time control, labview, inverter, PWM

Procedia PDF Downloads 511
1670 Application of Regularized Low-Rank Matrix Factorization in Personalized Targeting

Authors: Kourosh Modarresi

Abstract:

The Netflix problem has brought the topic of “Recommendation Systems” into the mainstream of computer science, mathematics, and statistics. Though much progress has been made, the available algorithms do not obtain satisfactory results; their success rate is rarely above 5%. This work is based on the belief that the main challenge is to come up with “scalable personalization” models. This paper uses an adaptive regularization of inverse singular value decomposition (SVD) that applies adaptive penalization on the singular vectors. The results show far better matching for recommender systems when compared to those from state-of-the-art models in the industry.
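The following Python sketch illustrates the general idea of penalizing an SVD for matrix completion, using a simple soft-thresholding of the singular values on a toy ratings matrix; it is an illustrative stand-in, not the adaptive penalization algorithm of the paper.

```python
# Illustrative sketch: SVD-based matrix completion with soft-thresholded singular values.
import numpy as np

def regularized_svd_complete(ratings, lam=1.0, n_iter=50):
    """Fill missing entries (np.nan) with a soft-thresholded low-rank SVD reconstruction."""
    mask = ~np.isnan(ratings)
    X = np.where(mask, ratings, np.nanmean(ratings))       # crude initial fill of missing cells
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s_pen = np.maximum(s - lam, 0.0)                    # penalize (shrink) the singular values
        X_low = (U * s_pen) @ Vt
        X = np.where(mask, ratings, X_low)                  # keep observed ratings fixed
    return X_low

R = np.array([[5, np.nan, 3], [4, 2, np.nan], [np.nan, 1, 4]], dtype=float)   # toy ratings
print(regularized_svd_complete(R, lam=0.5))
```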

Keywords: convex optimization, LASSO, regression, recommender systems, singular value decomposition, low rank approximation

Procedia PDF Downloads 459
1669 An Empirical Investigation of the Challenges of Secure Edge Computing Adoption in Organizations

Authors: Hailye Tekleselassie

Abstract:

Edge computing is a distributed computing paradigm that brings enterprise applications closer to data sources such as IoT devices or local edge servers, and potential incidents could stall the adoption of such new technologies. This investigation was undertaken to examine the awareness of technology and communications organization workers and computer users who support the cloud service. Surveys were used to achieve these objectives. Enquiries about confidence were also a key question. Problems such as data privacy, integrity, and availability are the factors affecting organizations' acceptance of the cloud service.

Keywords: IoT, data, security, edge computing

Procedia PDF Downloads 84
1668 Non-intrusive Hand Control of Drone Using an Inexpensive and Streamlined Convolutional Neural Network Approach

Authors: Evan Lowhorn, Rocio Alba-Flores

Abstract:

The purpose of this work is to develop a method for classifying hand signals and using the output in a drone control algorithm. To achieve this, methods based on Convolutional Neural Networks (CNNs) were applied. CNNs are a subset of deep learning, which allows grid-like inputs to be processed and passed through a neural network to be trained for classification. This type of neural network allows for classification via imaging, which is less intrusive than previous methods using biosensors, such as EMG sensors. Classification CNNs operate purely on the pixel values in an image; therefore, they can be used without additional exteroceptive sensors. A development bench was constructed using a desktop computer connected to a high-definition webcam mounted on a scissor arm. This allowed the camera to be pointed downwards at the desk to provide a constant solid background for the dataset and a clear detection area for the user. A MATLAB script was created to automate dataset image capture at the development bench and save the images to the desktop. This allowed the user to create their own dataset of 12,000 images within three hours. These images were evenly distributed among seven classes. The defined classes include forward, backward, left, right, idle, and land. The drone has a popular flip function, which was also included as an additional class. To simplify control, the corresponding hand signals chosen were the numerical hand signs for one through five for movements, a fist for land, and the universal “ok” sign for the flip command. Transfer learning with PyTorch (Python) was performed using a pre-trained 18-layer residual learning network (ResNet-18) to retrain the network for custom classification. An algorithm was created to interpret the classification and send encoded messages to a Ryze Tello drone over its 2.4 GHz Wi-Fi connection. The drone’s movements were performed in half-meter distance increments at a constant speed. When combined with the drone control algorithm, the classification performed as desired, with negligible latency compared to the delay in the drone’s movement commands.
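A minimal sketch of the transfer-learning step described above is given below; the directory layout, image size and hyperparameters are assumptions rather than the authors' exact training script, and a recent torchvision release is assumed for the pretrained-weights API.

```python
# Minimal sketch: retraining a pre-trained ResNet-18 for 7 hand-signal classes (assumed setup).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("dataset/train", transform=tfm)   # 7 class folders assumed
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)   # pre-trained ResNet-18
model.fc = nn.Linear(model.fc.in_features, 7)                      # replace head for 7 classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```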

Keywords: classification, computer vision, convolutional neural networks, drone control

Procedia PDF Downloads 213
1667 Lotus Mechanism: Validation of Deployment Mechanism Using Structural and Dynamic Analysis

Authors: Parth Prajapati, A. R. Srinivas

Abstract:

The purpose of this paper is to validate the concept of the Lotus Mechanism using Computer Aided Engineering (CAE) tools considering the statics and dynamics through actual time dependence involving inertial forces acting on the mechanism joints. For a 1.2 m mirror made of hexagonal segments, with simple harnesses and three-point supports, the maximum diameter is 400 mm, minimum segment base thickness is 1.5 mm, and maximum rib height is considered as 12 mm. Manufacturing challenges are explored for the segments using manufacturing research and development approaches to enable use of large lightweight mirrors required for the future space system.

Keywords: dynamics, manufacturing, reflectors, segmentation, statics

Procedia PDF Downloads 375
1666 ANAC-id - Facial Recognition to Detect Fraud

Authors: Giovanna Borges Bottino, Luis Felipe Freitas do Nascimento Alves Teixeira

Abstract:

This article presents a case study of the National Civil Aviation Agency (ANAC) in Brazil, ANAC-id. ANAC-id is an artificial intelligence algorithm developed for image analysis that recognizes standard images of an unobstructed, upright face without sunglasses, allowing potential inconsistencies to be identified. It combines the YOLO architecture and three Python libraries - face recognition, face comparison, and deep face - providing robust analysis with a high level of accuracy.
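The face-comparison part of such a pipeline can be sketched with the deepface package's verify call, as below; the image paths are placeholders, and this is only an illustration of the kind of check described, not ANAC's production code.

```python
# Illustrative sketch only: comparing two face images with the deepface library.
from deepface import DeepFace

result = DeepFace.verify(img1_path="document_photo.jpg",   # hypothetical file names
                         img2_path="selfie.jpg")
if result["verified"]:
    print("faces match")
else:
    print("potential inconsistency - flag for review")
```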

Keywords: artificial intelligence, deepface, face compare, face recognition, YOLO, computer vision

Procedia PDF Downloads 158
1665 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations with no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency, but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
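A highly simplified 2D illustration of the scan-line idea follows (in Python rather than the paper's GLSL compute shader): equidistant horizontal scan-lines traverse a triangle's interior and each covered cell is recorded at most once; the 3D method extends the same principle to voxels and adds the Gap Detection pass.

```python
# Simplified 2D illustration of equidistant scan-line coverage of a triangle's interior.
import numpy as np

def scanline_rasterize(tri, cell=1.0):
    """tri: three (x, y) vertices; returns the set of covered (col, row) cells."""
    (x0, y0), (x1, y1), (x2, y2) = sorted(tri, key=lambda p: p[1])   # sort vertices by y
    edges = [((x0, y0), (x1, y1)), ((x1, y1), (x2, y2)), ((x0, y0), (x2, y2))]
    covered = set()
    y = np.floor(y0 / cell) * cell + cell / 2.0        # centre of the first scan-line
    while y <= y2:
        xs = []
        for (ax, ay), (bx, by) in edges:
            if (ay <= y < by) or (by <= y < ay):       # edge crosses this scan-line
                t = (y - ay) / (by - ay)
                xs.append(ax + t * (bx - ax))
        if len(xs) >= 2:                               # span between the two crossings
            row = int(y // cell)
            for col in range(int(min(xs) // cell), int(max(xs) // cell) + 1):
                covered.add((col, row))                # each cell recorded at most once
        y += cell                                      # equidistant spacing
    return covered

print(len(scanline_rasterize([(0.0, 0.0), (8.0, 1.0), (3.0, 6.0)])))
```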

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 75
1664 Neural Rendering Applied to Confocal Microscopy Images

Authors: Daniel Li

Abstract:

We present a novel application of neural rendering methods to confocal microscopy. Neural rendering and implicit neural representations have developed at a remarkable pace, and are prevalent in modern 3D computer vision literature. However, they have not yet been applied to optical microscopy, an important imaging field where 3D volume information may be heavily sought after. In this paper, we employ neural rendering on confocal microscopy focus stack data and share the results. We highlight the benefits and potential of adding neural rendering to the toolkit of microscopy image processing techniques.

Keywords: neural rendering, implicit neural representations, confocal microscopy, medical image processing

Procedia PDF Downloads 661
1663 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks

Authors: Tesfaye Mengistu

Abstract:

Power cables play a crucial role in the transmission and distribution of electrical energy. As the electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we have developed models that employ a custom dataset collected locally to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques to classify defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the utilization of similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds. We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
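A minimal sketch of a CNN classifier of the kind described is shown below; the architecture, image size, class count and directory names are assumptions made for illustration, not the authors' exact model.

```python
# Minimal sketch: a small TensorFlow/Keras CNN for defective vs. non-defective cable images.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "cable_images/train", image_size=(128, 128), batch_size=32)  # defective / non_defective folders assumed

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),   # defective vs. non-defective
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```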

Keywords: artificial intelligence, computer vision, defect detection, convolutional neural net

Procedia PDF Downloads 115
1662 The Effectiveness of a Courseware in 7th Grade Chemistry Lesson

Authors: Oguz Ak

Abstract:

In this study, courseware for the learning unit 'Properties of Matter' in the chemistry course is developed. The courseware was applied to fifteen 7th-grade students (about age 14) in real settings. As a result of the study, it is found that the students' grades in the learning unit increased significantly when they studied the courseware themselves. In addition, the score improvements of the students who found the courseware usable were not significantly higher than those of the students who did not find it usable.

Keywords: computer based instruction, effect of courseware and usability of courseware, 7th grade

Procedia PDF Downloads 461
1661 Study of Mixing Conditions for Different Endothelial Dysfunction in Arteriosclerosis

Authors: Sara Segura, Diego Nuñez, Miryam Villamil

Abstract:

In this work, we studied the microscale interaction of foreign substances with blood inside an artificial transparent artery system that represents medium and small muscular arteries. This artery system had channels ranging from 75 μm to 930 μm and was fabricated using glass and transparent polymer blends such as Phenylbis(2,4,6-trimethylbenzoyl) phosphine oxide, Poly(ethylene glycol) and PDMS in order to be monitored in real time. The setup was performed using a computer-controlled precision micropump and a high-resolution optical microscope capable of tracking fluids at fast capture rates. Observation and analysis were performed using real-time software that reconstructs the fluid dynamics, determining the flow velocity, injection dependency, turbulence and rheology. All experiments were carried out with fully computer-controlled equipment. Interactions between substances such as water, serum (0.9% sodium chloride and electrolyte with a ratio of 4 ppm) and blood cells were studied at the microscale at resolutions as fine as 400 nm, and the analysis was performed using frame-by-frame observation and HD video capture. These observations lead us to understand the fluid and mixing behavior of the substance of interest in the bloodstream and shed light on the use of implantable devices for drug delivery in arteries with different endothelial dysfunctions. Several substances were tested using the artificial artery system. Initially, Milli-Q water was used as a control substance for the study of the basic fluid dynamics of the artificial artery system. Then, serum and other low-viscosity substances were pumped into the system in the presence of other liquids to study the mixing profiles and behaviors. Finally, mammalian blood was used for the final test while serum was injected. Different flow conditions, pumping rates, and time rates were evaluated for the determination of the optimal mixing conditions. Our results suggest the use of a finely controlled microinjection, at an approximate rate of 135.000 μm³/s, for better mixing profiles in the administration of drugs inside arteries.

Keywords: artificial artery, drug delivery, microfluidics dynamics, arteriosclerosis

Procedia PDF Downloads 298
1660 A Multi Cordic Architecture on FPGA Platform

Authors: Ahmed Madian, Muaz Aljarhi

Abstract:

Coordinate Rotation Digital Computer (CORDIC) is a unique digital computing unit intended for the computation of mathematical operations and functions. This paper presents a multi-CORDIC processor that integrates different CORDIC architectures on a single FPGA chip and allows the user to select the CORDIC architecture to proceed with based on what they want to calculate and their needs. Synthesis results show that the radix-2 CORDIC has the lowest clock delay, the radix-8 CORDIC has the highest LUT usage and the lowest register usage, while the hybrid radix-4 CORDIC has the highest clock delay.
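For reference, the following Python sketch models a radix-2 CORDIC in rotation mode (a software reference model for illustration, not the FPGA implementation): the vector (1, 0) is rotated towards the target angle using only shift-and-add style updates, converging to cosine and sine after the gain correction.

```python
# Reference-model sketch of a radix-2 CORDIC in rotation mode.
import math

def cordic_radix2(angle_rad, iterations=16):
    atan_table = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))    # accumulated gain correction
    x, y, z = 1.0, 0.0, angle_rad
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0                    # rotate towards the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atan_table[i]
    return x * K, y * K                                # approximately (cos(angle), sin(angle))

print(cordic_radix2(math.pi / 6))   # approx (0.866, 0.5)
```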

Keywords: multi, CORDIC, FPGA, processor

Procedia PDF Downloads 470
1659 A Development of Personalized Edutainment Contents through Storytelling

Authors: Min Kyeong Cha, Ju Yeon Mun, Seong Baeg Kim

Abstract:

Recently, ‘play of learning’ has become important and is emphasized as a useful learning tool. Therefore, interest in edutainment content is growing. Storytelling is considered first as a method to improve the transmission of information and learners' interest when planning edutainment content. In this study, we designed edutainment content in the form of an adventure game that applies the storytelling method. This content provides dynamically constituted questions and items, and reorganizes the learning content through analysis of test results. It allows learners to solve various questions through effective iterative learning. As a result, learners can reach mastery learning.

Keywords: storytelling, edutainment, mastery learning, computer operating principle

Procedia PDF Downloads 320
1658 Factors Influencing the Usage of ERP in Enterprise Systems

Authors: Mohammad Reza Babaei, Sanaz Kamrani

Abstract:

The main problems that arise in adopting most Enterprise Resource Planning (ERP) strategies are organizational; complex information systems like ERP integrate the data of all business areas within the organization. The implementation of ERP is a difficult process, as it involves different types of end users. Based on the literature, we propose a conceptual framework and examine it to find the effect of some individual, organizational, and technological factors on the usage of ERP and its impact on the end user. The results of the analysis suggest that computer self-efficacy, organizational support, training, and compatibility have a positive influence on ERP usage, which in turn has a significant influence on panoptic empowerment and individual performance.

Keywords: factor, influencing, enterprise, system

Procedia PDF Downloads 371
1657 DOS and DDOS Attacks

Authors: Amin Hamrahi, Niloofar Moghaddam

Abstract:

A denial-of-service attack is a type of attack on a network designed to bring the network to its knees by flooding it with useless traffic. Denial of Service (DoS) attacks have become a major threat to current computer networks. Many recent DoS attacks were launched via a large number of distributed attacking hosts on the Internet. These attacks are called distributed denial of service (DDoS) attacks. To provide a better understanding of DoS attacks, this article gives an overview of existing DoS and DDoS attacks and of the major defense technologies on the Internet.

Keywords: denial of service, distributed denial of service, traffic, flooding

Procedia PDF Downloads 394
1656 Community Structure Detection in Networks Based on Bee Colony

Authors: Bilal Saoud

Abstract:

In this paper, we propose a new method to find the community structure in networks. Our method is based on a bee colony algorithm and the maximization of modularity to find the community structure. We use a bee colony algorithm to find a first community structure that has a good value of modularity. To improve the community structure that was found, we merge communities until we obtain a community structure with a high value of modularity. We provide a general framework for implementing our approach. We tested our method on computer-generated and real-world networks, in comparison with well-known community detection methods. The obtained results show the effectiveness of our proposal.
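The merging stage can be sketched as below using networkx (the bee colony search that produces the initial partition is not shown); starting from an initial community structure, pairs of communities are greedily merged as long as a merge increases modularity.

```python
# Illustrative sketch of the modularity-increasing merging stage only.
import networkx as nx
from networkx.algorithms.community import modularity
from itertools import combinations

def merge_communities(G, communities):
    """Greedily merge communities while a merge increases modularity."""
    communities = [set(c) for c in communities]
    improved = True
    while improved and len(communities) > 1:
        improved = False
        base = modularity(G, communities)
        best = None
        for i, j in combinations(range(len(communities)), 2):
            trial = [c for k, c in enumerate(communities) if k not in (i, j)]
            trial.append(communities[i] | communities[j])
            q = modularity(G, trial)
            if q > base and (best is None or q > best[0]):
                best = (q, i, j)
        if best is not None:
            _, i, j = best
            merged = communities[i] | communities[j]
            communities = [c for k, c in enumerate(communities) if k not in (i, j)]
            communities.append(merged)
            improved = True
    return communities

G = nx.karate_club_graph()
initial = [{n} for n in G.nodes]        # stand-in for a bee-colony initial partition
print(len(merge_communities(G, initial)))
```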

Keywords: bee colony, networks, modularity, normalized mutual information

Procedia PDF Downloads 410
1655 A Longitudinal Exploration into Computer-Mediated Communication Use (CMC) and Relationship Change between 2005-2018

Authors: Laurie Dempsey

Abstract:

Relationships are considered to be beneficial for emotional wellbeing, happiness and physical health. However, they are also complicated: individuals engage in a multitude of complex and volatile relationships during their lifetime, where the change to or ending of these dynamics can be deeply disruptive. As the internet is further integrated into everyday life and relationships are increasingly mediated, Media Studies’ and Sociology’s research interests intersect and converge. This study longitudinally explores how relationship change over time corresponds with the developing UK technological landscape between 2005-2018. Since the early 2000s, the use of computer-mediated communication (CMC) in the UK has dramatically reshaped interaction. Its use has compelled individuals to renegotiate how they consider their relationships: some argue it has allowed for vast networks to be accumulated and strengthened; others contend that it has eradicated the core values and norms associated with communication, damaging relationships. This research collaborated with UK media regulator Ofcom, utilising the longitudinal dataset from their Adult Media Lives study to explore how relationships and CMC use developed over time. This is a unique qualitative dataset covering 2005-2018, where the same 18 participants partook in annual in-home filmed depth interviews. The interviews’ raw video footage was examined year-on-year to consider how the same people changed their reported behaviour and outlooks towards their relationships, and how this coincided with CMC featuring more prominently in their everyday lives. Each interview was transcribed, thematically analysed and coded using NVivo 11 software. This study allowed for a comprehensive exploration into these individuals’ changing relationships over time, as participants grew older, experienced marriages or divorces, conceived and raised children, or lost loved ones. It found that as technology developed between 2005-2018, everyday CMC use was increasingly normalised and incorporated into relationship maintenance. It played a crucial role in altering relationship dynamics, even factoring in the breakdown of several ties. Three key relationships were identified as being shaped by CMC use: parent-child; extended family; and friendships. Over the years there were substantial instances of relationship conflict: for parents renegotiating their dynamic with their child as they tried to both restrict and encourage their child’s technology use; for estranged family members ‘forced’ together in the online sphere; and for friendships compelled to publicly display their relationship on social media, for fear of social exclusion. However, it was also evident that CMC acted as a crucial lifeline for these participants, providing opportunities to strengthen and maintain their bonds via previously unachievable means, both over time and distance. A longitudinal study of this length and nature utilising the same participants does not currently exist, thus provides crucial insight into how and why relationship dynamics alter over time. This unique and topical piece of research draws together Sociology and Media Studies, illustrating how the UK’s changing technological landscape can reshape one of the most basic human compulsions. This collaboration with Ofcom allows for insight that can be utilised in both academia and policymaking alike, making this research relevant and impactful across a range of academic fields and industries.

Keywords: computer mediated communication, longitudinal research, personal relationships, qualitative data

Procedia PDF Downloads 123
1654 Alphabet Recognition Using Pixel Probability Distribution

Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay

Abstract:

Our project topic is “Alphabet Recognition using pixel probability distribution”. The project uses techniques of Image Processing and Machine Learning in Computer Vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is used in banks and other high-security buildings. One popular mobile application reads a visiting card and stores it directly to the contacts. OCRs are also used in radar systems for reading speeding drivers' license plates, among other things. The implementation of our project has been done using Visual Studio and OpenCV (Open Source Computer Vision). Our algorithm is based on Neural Networks (machine learning). The project was implemented in three modules: (1) Training: This module aims at database generation. The database was generated using two methods: (a) Run-time generation, which included database generation at compilation time using the inbuilt fonts of the OpenCV library. Human intervention is not necessary for generating this database. (b) Contour detection, in which a ‘jpeg’ template containing different fonts of an alphabet is converted to a weighted matrix using specialized OpenCV functions (contour detection and blob detection). The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to be stored (119 kB precisely). (2) Preprocessing: The input image is pre-processed using image processing operations such as adaptive thresholding, binarization and dilation, and is made ready for segmentation. “Segmentation” includes the extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: The extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated using the database and the weight matrix of the segmented image.
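The preprocessing and segmentation steps described above can be sketched with a few OpenCV calls, as below; this is an illustrative simplification (the neural-network classification and weight matrix are not shown), and the 28x28 patch size is an assumed value.

```python
# Illustrative sketch of preprocessing and letter extraction with OpenCV.
import cv2

def extract_letters(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 2)    # adaptive thresholding
    dilated = cv2.dilate(binary, None, iterations=1)                 # join broken strokes
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)          # ideally one contour per letter
    letters = []
    for c in sorted(contours, key=lambda c: cv2.boundingRect(c)[0]): # left-to-right order
        x, y, w, h = cv2.boundingRect(c)
        letters.append(cv2.resize(binary[y:y + h, x:x + w], (28, 28)))  # normalized patch
    return letters   # each patch would then be classified by the trained network
```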

Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix

Procedia PDF Downloads 390