Search results for: trained athletes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1383

1053 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology: a non-invasive, pain-free procedure that measures the heart’s electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies from alterations in the ECG waveform, but human interpretation is subjective and prone to error. Moreover, ECG records can span long periods, which further complicates visual diagnosis and can significantly delay disease detection. In this context, deep learning methods have emerged as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the processing of large datasets and can provide early and precise diagnoses. The cardiology field is therefore one of the areas that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model’s ability to generalize. Performance of the model for R-peak detection in both clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences from the normal cardiac activity of their patients.
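For intuition, the task can be approximated by a classical amplitude-threshold detector with a refractory period, the kind of baseline that deep models such as IncResU-Net aim to outperform on noisy recordings. A minimal sketch on a synthetic ECG-like trace (the sampling rate, threshold, and beat timing are illustrative assumptions, not parameters from the study):

```python
import numpy as np

fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)

# Synthetic ECG-like trace: one narrow Gaussian "R-peak" per second plus noise.
rng = np.random.default_rng(0)
beat_times = np.arange(0.5, 10, 1.0)
signal = sum(np.exp(-((t - b) ** 2) / (2 * 0.01 ** 2)) for b in beat_times)
signal = signal + 0.05 * rng.standard_normal(t.size)

# Classical baseline: local maximum above a threshold, with a 200 ms
# refractory period so each heartbeat is counted only once.
threshold, refractory = 0.5, int(0.2 * fs)
peaks, last = [], -refractory
for i in range(1, t.size - 1):
    if (signal[i] > threshold and signal[i] >= signal[i - 1]
            and signal[i] >= signal[i + 1] and i - last >= refractory):
        peaks.append(i)
        last = i
print(len(peaks))  # 10 beats detected
```

A learned model replaces the hand-tuned threshold and refractory period, which is what allows it to generalize across noise types and recording sources.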

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 186
1052 Training During Emergency Response to Build Resiliency in Water, Sanitation, and Hygiene

Authors: Lee Boudreau, Ash Kumar Khaitu, Laura A. S. MacDonald

Abstract:

In April 2015, a magnitude 7.8 earthquake struck Nepal, killing, injuring, and displacing thousands of people. The earthquake also damaged water and sanitation service networks, leading to a high risk of diarrheal disease and the associated negative health impacts. In response to the disaster, the Environment and Public Health Organization (ENPHO), a Kathmandu-based non-governmental organization, worked with the Centre for Affordable Water and Sanitation Technology (CAWST), a Canadian education, training and consulting organization, to develop two training programs to educate volunteers on water, sanitation, and hygiene (WASH) needs. The first training program was intended for acute response, with the second focusing on longer term recovery. A key focus was to equip the volunteers with the knowledge and skills to formulate useful WASH advice in the unanticipated circumstances they would encounter when working in affected areas. Within the first two weeks of the disaster, a two-day acute response training was developed, which focused on enabling volunteers to educate those affected by the disaster about local WASH issues, their link to health, and their increased importance immediately following emergency situations. Between March and October 2015, a total of 19 training events took place, with over 470 volunteers trained. The trained volunteers distributed hygiene kits and liquid chlorine for household water treatment. They also facilitated health messaging and WASH awareness activities in affected communities. A three-day recovery phase training was also developed and has been delivered to volunteers in Nepal since October 2015. This training focused on WASH issues during the recovery and reconstruction phases. The interventions and recommendations in the recovery phase training focus on long-term WASH solutions, and so form a link between emergency relief strategies and long-term development goals. 
ENPHO has trained 226 volunteers during the recovery phase, with training ongoing as of April 2016. In the aftermath of the earthquake, ENPHO found that its existing pool of volunteers was more than willing to help those in their communities who were most in need. By training these and new volunteers, ENPHO was able to reach many more communities in the immediate aftermath of the disaster; together they reached 11 of the 14 earthquake-affected districts. The collaboration between ENPHO and CAWST in developing the training materials was a highly iterative process, which enabled the materials to be developed within a short response time. By training volunteers on basic WASH topics during both the immediate response and the recovery phase, ENPHO and CAWST have been able to link immediate emergency relief to long-term developmental goals. While the recovery phase training continues in Nepal, CAWST is planning to decontextualize the training used in both phases so that it can be applied to other emergency situations in the future. The training materials will become part of the open content materials available on CAWST’s WASH Resources website.

Keywords: water and sanitation, emergency response, education and training, building resilience

Procedia PDF Downloads 305
1051 Embodied Neoliberalism and the Mind as Tool to Manage the Body: A Descriptive Study Applied to Young Australian Amateur Athletes

Authors: Alicia Ettlin

Abstract:

Amid the rise of neoliberalism to the leading economic policy model in Western societies in the 1980s, people have started to internalise a neoliberal way of thinking, whereby the human body has become an entity that can and needs to be precisely managed through free yet rational decision-making processes. The neoliberal citizen has consequently become an entrepreneur of the self who is free, independent, rational, productive and responsible for themselves, their health and wellbeing as well as their appearance. The focus on individuals as entrepreneurs who manage their bodies through the rationally thinking mind has, however, become increasingly criticised for viewing the social actor as ‘disembodied’, as a detached, social actor whose powerful mind governs over the passive body. On the other hand, the discourse around embodiment seeks to connect rational decision-making processes to the dominant neoliberal discourse which creates an embodied understanding that the body, just as other areas of people’s lives, can and should be shaped, monitored and managed through cognitive and rational thinking. This perspective offers an understanding of the body regarding its connections with the social environment that reaches beyond the debates around mind-body binary thinking. Hence, following this argument, body management should not be thought of as either solely guided by embodied discourses nor as merely falling into a mind-body dualism, but rather, simultaneously and inseparably as both at once. The descriptive, qualitative analysis of semi-structured in-depth interviews conducted with young Australian amateur athletes between the age of 18 and 24 has shown that most participants are interested in measuring and managing their body to create self-knowledge and self-improvement. The participants thereby connected self-improvement to weight loss, muscle gain or simply staying fit and healthy. 
Self-knowledge refers to body measurements, including weight, BMI, or body fat percentage. Self-management and self-knowledge, which rely on one another for rational and well-thought-out decisions, are both characteristic values of the neoliberal doctrine. Many participants also connected a neoliberal way of thinking about and looking after the body to rewarding themselves for their discipline, hard work, or achievement of specific body management goals (e.g., eating chocolate for reaching a daily step count goal). A few participants, however, showed resistance against these neoliberal values, and in particular against the precise monitoring and management of the body with the help of self-tracking devices. Ultimately, however, it seems that most participants have internalised the dominant discourses around self-responsibility and, by association, a sense of duty to discipline their body in normative ways. Even those who indicated resistance against body work and body management practices that follow neoliberal thinking and measurement systems are aware of, and have internalised, the concept of the rationally operating mind that needs to, or should, decide how to look after the body in terms of health as well as appearance ideals. The discussion of the collected data thereby shows that embodiment and the mind-body dualism constitute two connected, rather than separate or opposing, concepts.

Keywords: dualism, embodiment, mind, neoliberalism

Procedia PDF Downloads 163
1050 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks

Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang

Abstract:

Chest X-rays (CXR) are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance diagnostic accuracy, artificial intelligence (AI) algorithms, particularly deep learning models such as convolutional neural networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative adversarial networks (GANs) can be employed to create new data, supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. To overcome these challenges and advance the detection and classification of normal and abnormal CXR images, this study introduces a technique called the Diverse Conditional Wasserstein GAN (DCWGAN) for generating synthetic CXR images. The study evaluates the effectiveness of the DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate promising potential for the early detection of diseases in CXR images using this approach.
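The reported accuracy, precision, recall, and F1-measure all derive from the same four confusion-matrix counts. A minimal sketch of those relationships, using illustrative counts rather than the study's data:

```python
# Illustrative confusion-matrix counts (not the study's data):
# tp = abnormal correctly flagged, fp = normal flagged as abnormal,
# fn = abnormal missed, tn = normal correctly passed.
tp, fp, fn, tn = 97, 4, 3, 96

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)                      # of flagged, how many were right
recall = tp / (tp + fn)                         # of abnormal, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f1, 3))
```

Because recall counts missed disease and precision counts false alarms, reporting all four (as the study does) gives a fuller picture than accuracy alone.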

Keywords: CNN, classification, deep learning, GAN, ResNet50

Procedia PDF Downloads 88
1049 Identification of Training Topics for the Improvement of the Relevant Cognitive Skills of Technical Operators in the Railway Domain

Authors: Giulio Nisoli, Jonas Brüngger, Karin Hostettler, Nicole Stoller, Katrin Fischer

Abstract:

Technical operators in the railway domain are experts responsible for the supervisory control of the railway power grid as well as of railway tunnels. The technical systems used to master these demanding tasks are constantly increasing in their degree of automation, which makes it difficult for technical operators to maintain control over the systems and processes of their job. In particular, operators must have the necessary experience and knowledge to deal with malfunctions and unexpected events. It is therefore of growing importance that the skills relevant to the job are maintained and developed beyond the basic training operators receive, in which they are educated in technical knowledge and the work with guidelines. Training methods aimed at improving the cognitive skills needed by technical operators are still missing and must be developed. The goals of the present study were to identify the relevant cognitive skills of technical operators in the railway domain and to define the topics that training of these skills should address. Observational interviews were conducted to identify the main tasks and the organization of the work of technical operators, as well as the technical systems used in their job. Based on this analysis, the most demanding tasks of technical operators were identified and described. The cognitive skills involved in executing these tasks are those that need to be trained. To identify and analyze these cognitive skills, a cognitive task analysis (CTA) was carried out. CTA specifically aims at identifying the cognitive skills that employees apply when performing their tasks. The identified cognitive skills of technical operators were summarized and grouped into training topics. For every training topic, specific goals were defined, covering three main categories: the knowledge, skills, and attitudes to be trained in that topic. Based on the results of this study, it is possible to develop specific training methods to train the relevant cognitive skills of technical operators.

Keywords: cognitive skills, cognitive task analysis, technical operators in the railway domain, training topics

Procedia PDF Downloads 153
1048 Bodybuilding, Gender and Age: A Qualitative Exploration of the Perspectives of Older Canadian Females

Authors: Amy Matharu

Abstract:

Existing literature on older athletes in competitive sports is often male-dominated and limited. This study explores how age and gender shape the experiences of older female bodybuilders in Canada, using the social theories of deviance and intersectionality. Qualitative, semi-structured interviews were conducted with 11 Canadian female bodybuilders over the age of 45. Interviews were transcribed, coded, and thematically analysed, and the study was approached from a phenomenological perspective. The participants deviated from the perceived social norms for women their age. They exhibited deviance in their actions, such as prioritising themselves and following extreme dieting practices, and in their aesthetics, such as maintaining a muscular appearance. Participants received both positive and negative reactions from society, resulting in admiration as well as stigmatisation. These reactions varied based on the environment, audience, and context of the situation. Overall, the intersection of age and gender places older female bodybuilders in a unique position within society and within the sport.

Keywords: age, bodybuilding, gender, females

Procedia PDF Downloads 126
1047 Outcome Evaluation of a Blended-Learning Mental Health Training Course in South African Public Health Facilities

Authors: F. Slaven, M. Uys, Y. Erasmus

Abstract:

The South African National Mental Health Education Programme (SANMHEP) was a National Department of Health (NDoH) initiative to strengthen mental health services in South Africa in collaboration with the Foundation for Professional Development (FPD), SANOFI, and the various provincial departments of health. The programme was implemented against the backdrop of a number of challenges in the management of mental health in the country related to staff shortages and infrastructure, the intersection of mental health with the growing burden of non-communicable diseases and various forms of violence, and challenges around substance abuse and its relationship with mental health. The Mental Health Care Act (No. 17 of 2002) prescribes that mental health should be integrated into general health services, including at primary, secondary, and tertiary levels, to improve access to services and reduce the stigma associated with mental illness. For the provisions of the Act to become a reality, and for the journey of mental health patients through the system to improve, sufficient and skilled health care providers are critical. SANMHEP specifically targeted medical doctors and professional nurses working within the facilities listed to conduct 72-hour assessments, as well as district hospitals. The aim of the programme was to improve the clinical diagnosis and management of mental disorders/conditions and the understanding of and compliance with the Mental Health Care Act and related regulations and guidelines in the care, treatment, and rehabilitation of mental health care users. The course used a blended-learning approach and trained 1,120 health care providers through 36 workshops between February and November 2019. Of those trained, 689 (61.52%) were professional nurses, 337 (30.09%) were medical doctors, and 91 (8.13%) indicated their occupation as ‘other’ (of these, more than half were psychologists).
The pre- and post-evaluation of the face-to-face training sessions indicated a marked improvement in knowledge and confidence scores (both clinical and legislative) in the care, treatment, and rehabilitation of mental health care users across all the training sessions. On average, participants’ ratings of their ability to perform certain mental health activities increased by 2.72 (27%), and their ratings of their ability to manage certain mental health conditions increased by 2.55 (25%). The course also required that participants score 70% or higher in the formal assessments forming part of the online component; the 337 participants who completed and passed the course scored 90% on average, illustrating that participants who attempted and completed the course did very well. To further assess the effect of the course on the knowledge and behaviour of the trained mental health care practitioners, a mixed-method outcome evaluation is currently underway, consisting of a survey of participants three months after completion, follow-up interviews with participants, and key informant interviews with department of health officials and course facilitators. This will enable a more detailed assessment of the impact of the training on participants’ perceived ability to manage and treat mental health patients.

Keywords: mental health, public health facilities, South Africa, training

Procedia PDF Downloads 119
1046 Need of Trained Clinical Research Professionals Globally to Conduct Clinical Trials

Authors: Tambe Daniel Atem

Abstract:

Background: Clinical research is organized research on human beings intended to provide adequate information on a drug’s use as a therapeutic agent, in particular its safety and efficacy. The significance of the study is to educate global health and life science graduates in clinical research in depth so that they perform better, as the work involves testing drugs on human beings. Objectives: to provide an overall understanding of the scientific approach to the evaluation of new and existing medical interventions, and to apply ethical and regulatory principles appropriate to any individual research. Methodology: the study is based on primary and secondary data analysis. Primary data analysis: a survey was conducted in which clinical research professionals were interviewed with a questionnaire to understand the training needed to perform clinical trials globally. The questionnaire collected details of the professionals and their expertise, and covered the areas of clinical research that require intensive training before entering the hardcore clinical research domain. Secondary data analysis: collection of data from journals, the internet, and other online sources. Results: the clinical trials market worldwide is worth over USD 26 billion, and the industry employs an estimated 210,000 people in the US and over 70,000 in the UK, forming one-third of total research and development staff. There are more than 250,000 vacant positions globally, with regional salary variations for a Clinical Research Coordinator. R&D spending on new drug development is estimated at USD 70-85 billion, and the cost of conducting clinical trials for a new drug is USD 200-250 million. Due to an increase in trained clinical research professionals, India has emerged as a global hub for clinical research: the global clinical trial outsourcing opportunity in the Indian pharmaceutical industry grew to more than USD 2 billion in 2014 owing to increased outsourcing from the US and Europe.
Conclusion: an assessment of training needs is recommended for newer clinical research professionals and trial sites, especially prior to the conduct of larger confirmatory clinical trials.

Keywords: clinical research, clinical trials, clinical research professionals

Procedia PDF Downloads 452
1045 Audio-Visual Co-Data Processing Pipeline

Authors: Rita Chattopadhyay, Vivek Anand Thoutam

Abstract:

Speech is the most natural means of communication, allowing us to quickly exchange feelings and thoughts. Quite often, people can communicate orally but cannot interact or work with computers or devices. It is easier and quicker to give speech commands than to type commands into a computer, and likewise easier to listen to audio played from a device than to read output from it. With robotics being an emerging market with applications in warehouses, the hospitality industry, consumer electronics, assistive technology, etc., speech-based human-machine interaction is emerging as a lucrative feature for robot manufacturers. Considering this, the objective of this paper is to design an “Audio-Visual Co-Data Processing Pipeline.” This pipeline integrates automatic speech recognition, a natural language model for text understanding, object detection, and text-to-speech modules. There are many deep learning models for each of these modules, but OpenVINO Model Zoo models are used because the OpenVINO toolkit covers both computer vision and non-computer-vision workloads across Intel hardware, maximizes performance, and accelerates application development. A speech command is given as input that specifies the target objects to be detected and the start and end times delimiting the interval to extract from the video. Speech is converted to text using the QuartzNet automatic speech recognition model. A summary is extracted from the text using the Generative Pre-trained Transformer-3 (GPT-3) natural language model. Based on the summary, the relevant frames are extracted from the video, and the You Only Look Once (YOLO) model detects objects in these frames. Frame numbers that contain target objects (the objects specified in the speech command) are saved as text.
Finally, this text (the frame numbers) is converted to speech using a text-to-speech model and played from the device. The project is developed for the 80 YOLO labels, and the user can extract frames based on one or two target labels; the pipeline can easily be extended to more than two target labels by making appropriate changes in the object detection module. The project supports four different speech command formats by including sample examples in the prompt used by the GPT-3 model; based on user preference, a new speech command format can be added by including examples of that format in the prompt. This pipeline can be used in many projects, such as human-machine interfaces, human-robot interaction, and surveillance through speech commands. Any object detection project can be upgraded with this pipeline so that speech commands are accepted as input and the output is played from the device.
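The data flow described above can be sketched end to end with stub functions standing in for each model (QuartzNet ASR, GPT-3 summarization, YOLO detection, TTS); everything below, including the command format and the toy "video", is an illustrative assumption rather than the authors' implementation:

```python
# Each stage is a stub standing in for the real model in the pipeline.

def asr(audio):              # speech -> text (QuartzNet in the paper)
    return "find the dog between second 2 and second 5"

def summarize(text):         # text -> structured command (GPT-3 in the paper)
    return {"target": "dog", "start": 2, "end": 5}

def detect(frames, target):  # frames -> frame numbers containing target (YOLO)
    return [i for i, labels in frames if target in labels]

def tts(text):               # text -> speech; here it just returns the string
    return text

# Toy "video": (frame number, set of labels detected in that frame).
video = [(1, {"cat"}), (2, {"dog"}), (3, {"dog", "cat"}), (4, set()), (5, {"dog"})]

command = summarize(asr(b"raw-audio-bytes"))
in_interval = [(i, l) for i, l in video if command["start"] <= i <= command["end"]]
hits = detect(in_interval, command["target"])
print(tts(f"target found in frames {hits}"))
```

Swapping any stub for its real model leaves the rest of the flow unchanged, which is what makes the pipeline easy to extend to new command formats or label sets.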

Keywords: OpenVINO, automatic speech recognition, natural language processing, object detection, text to speech

Procedia PDF Downloads 80
1044 Earthquake Identification to Predict Tsunami in Andalas Island, Indonesia Using Back Propagation Method and Fuzzy TOPSIS Decision Seconder

Authors: Muhamad Aris Burhanudin, Angga Firmansyas, Bagus Jaya Santosa

Abstract:

Earthquakes are natural hazards that can trigger the most dangerous hazard of all: the tsunami. On 26 December 2004, a giant earthquake occurred north-west of Andalas Island. It generated a giant tsunami that devastated Sumatra, Bangladesh, India, Sri Lanka, Malaysia, and Singapore; more than twenty thousand people died. The occurrence of earthquakes and tsunamis cannot be avoided, but the hazard can be mitigated by earthquake forecasting, since early preparation is the key to reducing damage and consequences. We aim to investigate earthquake patterns quantitatively in order to identify trends, studying the earthquakes that occurred around Andalas Island, Indonesia over the last decade. Andalas is an island of high seismicity, with more than a thousand events per year, because it lies in the tectonic subduction zone between the Indian Ocean plate and the Eurasian plate. A tsunami forecasting method is therefore needed for mitigation, and one is presented in this work. Neural networks have been used widely in research to estimate earthquakes, and it is argued that earthquakes can be predicted using the backpropagation method. First, an artificial neural network (ANN) is trained to predict the tsunami of 26 December 2004 using earthquake data recorded before it; the trained ANN is then applied to predict subsequent earthquakes. Not every earthquake triggers a tsunami; certain characteristics of an earthquake determine whether it can. A wrong decision can cause serious problems for society, so a method is needed to reduce the possibility of wrong decisions. Fuzzy TOPSIS is a statistical method widely used as a decision-support tool over a set of given parameters, and it can make the best decision as to whether an earthquake causes a tsunami or not. This work combines earthquake prediction using the neural network method with fuzzy TOPSIS to decide whether a predicted earthquake triggers a tsunami wave. The neural network model is capable of capturing non-linear relationships, and fuzzy TOPSIS is capable of making better decisions than other statistical methods in tsunami prediction.
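For reference, the crisp (non-fuzzy) TOPSIS ranking step can be sketched as follows. The decision matrix, criteria, and weights are illustrative assumptions; a fuzzy variant such as the paper's replaces the crisp scores with fuzzy numbers before applying the same distance-to-ideal ranking:

```python
import numpy as np

# Rows = candidate earthquakes, columns = tsunami-relevant criteria
# (e.g. magnitude, shallowness, coastal proximity) -- all values assumed.
X = np.array([[7.8, 0.9, 0.8],
              [6.1, 0.4, 0.3],
              [7.0, 0.7, 0.6]], dtype=float)
w = np.array([0.5, 0.3, 0.2])            # criteria weights, assumed to sum to 1

V = w * X / np.linalg.norm(X, axis=0)    # vector-normalize columns, then weight
best = V.max(axis=0)                     # ideal solution (all criteria "benefit")
worst = V.min(axis=0)                    # anti-ideal solution
d_best = np.linalg.norm(V - best, axis=1)
d_worst = np.linalg.norm(V - worst, axis=1)
closeness = d_worst / (d_best + d_worst)  # 1 = closest to the ideal

print(closeness.argmax())  # index of the alternative ranked most tsunami-prone
```

In the combined system, such a ranking would act as the decision layer on top of the ANN's earthquake prediction.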

Keywords: earthquake, fuzzy TOPSIS, neural network, tsunami

Procedia PDF Downloads 495
1043 Impediments to Female Sports Management and Participation: The Experience in the Selected Nigeria South West Colleges of Education

Authors: Saseyi Olaitan Olaoluwa, Osifeko Olalekan Remigious

Abstract:

The study was meant to identify the impediments to female sports management and participation in the selected colleges. Seven colleges of education in the south-west part of the country were selected for the study. A total of one hundred and five subjects were sampled to supply data, but only one hundred adequately completed and returned copies of the questionnaire were used for data analysis. The collected data were analysed descriptively. The results showed that inadequate funding, personnel, facilities, equipment, supplies, sports management, supervision, and coaching were among the impediments to female sports management and participation, and that athletes were not encouraged to participate. Based on the findings, it was recommended that the government come to the aid of the colleges by providing funding and meeting other needs that will make sports attractive and enhance participation.

Keywords: female sports, impediments, management, Nigeria, south west, colleges

Procedia PDF Downloads 409
1042 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images

Authors: Elham Bagheri, Yalda Mohsenzadeh

Abstract:

Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is finetuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. 
The reconstruction error of each image, the reduction in that error after fine-tuning, and the image's distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate a strong correlation between memorability scores and both the reconstruction error and the distinctiveness of images. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacity are inherently more memorable. There is also a negative correlation between memorability scores and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably because they have features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
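The two quantities correlated with memorability, per-image reconstruction error and latent-space distinctiveness, can be computed as in the following sketch on toy arrays (random data standing in for the MemCat images and the VGG autoencoder's latent codes):

```python
import numpy as np

rng = np.random.default_rng(1)
latents = rng.standard_normal((5, 8))       # toy latent codes for 5 "images"

# Distinctiveness: Euclidean distance to the nearest *other* latent code.
pairwise = np.linalg.norm(latents[:, None, :] - latents[None, :, :], axis=-1)
np.fill_diagonal(pairwise, np.inf)          # ignore each image's self-distance
distinctiveness = pairwise.min(axis=1)

# Reconstruction error: per-image mean squared error (one of several
# possible loss choices; the study also considers perceptual losses).
originals = rng.random((5, 16))             # toy flattened images
recons = originals + 0.1 * rng.standard_normal(originals.shape)
recon_error = ((originals - recons) ** 2).mean(axis=1)

# Each array would then be correlated against the memorability scores.
print(distinctiveness.shape, recon_error.shape)
```

With real data, both vectors are simply correlated (e.g. Pearson or Spearman) against the per-image memorability scores from the dataset.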

Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception

Procedia PDF Downloads 91
1041 Influence of Strength Training on the Self-Efficacy of Sports Performance: National Collegiate Athletic Association Student-Athletes Experience of a Strength Training Program

Authors: Alfred M. Caronia

Abstract:

The aim of this pilot study was to explore NCAA Division 1 female volleyball players' experience of a strength and conditioning program and the effect this has on self-efficacy of sport skill performance. This phenomenological study comprised 10 college-aged participants with strength training program experience. Data were collected using semi-structured interviews and a reflective journal; the transcribed interviews were analyzed using qualitative content analysis. From the analysis, four themes emerged: performance enhancement, injury prevention, motivational experience, and learning experience. From the players' perspective, care needs to be taken to explain the purpose of an exercise and the benefit it will have for a player's performance. Other factors that play an important role in a strength training program are team motivation, individual goal setting, bonding, and communication with the strength coach, as all these items appear to be fundamentals of coaching.

Keywords: self-efficacy, skill performance, sports performance, strength training

Procedia PDF Downloads 93
1040 Esports: A Biomechanics and Performance Perspective

Authors: Alex S. Talan

Abstract:

The introduction of scientific terminology for esports can directly affect the quality of the training process. This is a critically important scientific task, since esports is a rapidly developing global sport that has only recently begun to receive scientific and methodological consideration. In this report, we evaluate esports from a biomechanical perspective. First, we examine the relationship between physical performance and esports gaming techniques, with consideration toward engineering more effective physical and in-game training methodologies for amateur and professional esports competitors. In addition, we advocate that applying biomechanical research methodologies has the added potential to improve physical performance and endurance in esports athletes. Given the growing global attention on the esports enterprise, scientific research into esports would benefit from standardized terminologies and methodological approaches specifically tailored to assess esports training efficacy and to enhance individual and team performance within the esports community.

Keywords: cybersport, esports, biomechanics, sports technique, training standards, dental occlusion, sports engineering, sitting pose

Procedia PDF Downloads 87
1039 Census and Mapping of Oil Palms Over Satellite Dataset Using Deep Learning Model

Authors: Gholba Niranjan Dilip, Anil Kumar

Abstract:

Accurate and reliable mapping of oil palm plantations and a census of individual palm trees is a huge challenge. This study addresses this challenge with an optimized solution implementing deep learning techniques on remote sensing data. The oil palm is a very important tropical crop. To improve its productivity and land management, it is imperative to have an accurate census over large areas. Since manual census is costly and prone to approximations, a methodology for automated census using panchromatic images from the Cartosat-2, SkySat, and WorldView-3 satellites is demonstrated. Two different study sites in Indonesia were selected. A customized set of training data and ground-truth data was created for this study from Cartosat-2 images. The pre-trained Single Shot MultiBox Detector (SSD) Lite MobileNet V2 Convolutional Neural Network (CNN) from the TensorFlow Object Detection API was subjected to transfer learning on this customized dataset. The SSD model is able to generate bounding boxes for each oil palm and count the palms with good accuracy on the panchromatic images. The detection yielded an F-score of 83.16% on seven different images. The detections are buffered and dissolved to generate polygons demarcating the boundaries of the oil palm plantations. This provided the area under the plantations and maps of their locations, thereby completing the automated census with fairly high accuracy (≈100%). The trained CNN was found competent enough to detect oil palm crowns in images obtained from multiple satellite sensors and of varying temporal vintage. It helped to estimate the increase in oil palm plantations from 2014 to 2021 in the study area. The study proved that high-resolution panchromatic satellite images can successfully be used to undertake a census of oil palm plantations using CNNs.
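The reported F-score of 83.16% is presumably computed from detection counts aggregated over the test images; a minimal sketch of the generic formula (not the authors' code):

```python
def f_score(tp, fp, fn):
    """F1 score for object detection: harmonic mean of precision
    (TP / all detections) and recall (TP / all ground-truth palms)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, summing true positives, false positives, and missed palms across the seven evaluation images and passing the totals to `f_score` yields the aggregate detection score.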

Keywords: object detection, oil palm tree census, panchromatic images, single shot multibox detector

Procedia PDF Downloads 160
1038 Syntax and Words as Evolutionary Characters in Comparative Linguistics

Authors: Nancy Retzlaff, Sarah J. Berkemer, Trudie Strauss

Abstract:

In the last couple of decades, the digitalization of all kinds of data was probably one of the major advances across fields of study. This paves the way for analysing data even when they come from disciplines where there was no initial computational necessity to do so. Linguistics, especially, has a rather manual tradition. Still, when considering studies that involve the history of language families, it is hard to overlook the striking similarities to bioinformatic (phylogenetic) approaches. Alignments of words are a fairly well-studied example of an application of bioinformatics methods to historical linguistics. In this paper we consider not only alignments of strings, i.e., words in this case, but also alignments of syntax trees of selected Indo-European languages. Based on initial, crude alignments, a sophisticated scoring model is trained on both letters and syntactic features. The aim is to gain a better understanding of which features in two languages are related, i.e., most likely to have the same root. Initially, all words in two languages are pre-aligned with a basic scoring model that primarily selects consonants and adjusts them before fitting in the vowels. Mixture models are subsequently used to filter 'good' alignments depending on the alignment length and the number of inserted gaps. Using these selected word alignments, it is possible to perform tree alignments of the given syntax trees and consequently find sentences that correspond rather well to each other across languages. The syntax alignments are then filtered for meaningful scores: 'good' scores contain evolutionary information and are therefore used to train the sophisticated scoring model. Further iterations of alignment and training steps are performed until the scoring model saturates, i.e., barely changes anymore.
An evaluation of the trained scoring model and of how well it captures evolutionarily meaningful information will be given. An assessment of sentence alignment compared to possible phrase structure will also be provided. The method described here may have its flaws because of limited prior information. It may, however, offer a good starting point to study languages where only little prior knowledge is available and a detailed, unbiased study is needed.
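The consonant-first pre-alignment step described above can be sketched as a standard Needleman-Wunsch global alignment with a consonant-biased scoring function (the score values and gap penalty here are illustrative assumptions, not the trained model's parameters):

```python
def align(a, b, score, gap=-1):
    """Needleman-Wunsch global alignment of two words; returns the
    gapped versions of both strings."""
    n, m = len(a), len(b)
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j - 1] + score(a[i - 1], b[j - 1]),
                          F[i - 1][j] + gap,
                          F[i][j - 1] + gap)
    # traceback from the bottom-right corner
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + score(a[i - 1], b[j - 1]):
            out_a.append(a[i - 1])
            out_b.append(b[j - 1])
            i -= 1
            j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            out_a.append(a[i - 1])
            out_b.append("-")
            i -= 1
        else:
            out_a.append("-")
            out_b.append(b[j - 1])
            j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b))

VOWELS = set("aeiou")

def basic_score(x, y):
    """Basic scoring model: consonant matches outweigh vowel matches."""
    if x == y:
        return 3 if x not in VOWELS else 1
    return -1
```

For example, `align("wasser", "water", basic_score)` anchors the alignment on the shared consonants before placing the vowels and gaps.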

Keywords: alignments, bioinformatics, comparative linguistics, historical linguistics, statistical methods

Procedia PDF Downloads 154
1037 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution

Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone

Abstract:

The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach utilizes autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of training data and reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both image dimensions and neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
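The detection principle, flagging inputs whose reconstruction error is abnormally high, can be sketched as a threshold calibrated on benign reconstruction errors (the mean-plus-k-standard-deviations rule below is an assumption for illustration, not the authors' exact procedure):

```python
import statistics

def calibrate_threshold(benign_errors, k=3.0):
    """Set the detection threshold from reconstruction errors measured
    on benign images: mean + k standard deviations."""
    return statistics.mean(benign_errors) + k * statistics.stdev(benign_errors)

def is_adversarial(error, threshold):
    """Flag an input whose autoencoder reconstruction error exceeds
    the calibrated threshold."""
    return error > threshold
```

In practice the errors would be MSE values between each input image and its autoencoder reconstruction; adversarially perturbed inputs, which the benign-trained autoencoder reconstructs poorly, land above the threshold.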

Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder

Procedia PDF Downloads 113
1036 Identifying Common Sports Injuries in Karate and Presenting a Model for Preventing Identified Injuries (A Case Study of East Azerbaijan, Iranian Karatekas)

Authors: Nadia Zahra Karimi Khiavi, Amir Ghiami Rad

Abstract:

Due to the high likelihood of injuries in karate, karatekas' injuries warrant special attention. This study explores the prevalence of karate injuries in East Azerbaijan, Iran, and provides a model for karatekas to use in the prevention of such injuries. This study employs a descriptive approach. Male and female participants with a brown belt or above in either control or non-control styles in East Azerbaijan province are included in the study's statistical population. A statistical sample size of 100 people was computed using the tools employed (SmartPLS), and the samples were drawn at random from all clubs in the province with the assistance of the Karate Board in order to give a model for the prevention of karate injuries. Information was gathered by means of a survey that made use of the Standard Questionnaire for Australian Sports Medicine Injury Reports. The information is presented in the form of tables and samples, and descriptive statistics were used to organise and summarise the data. Control and non-control independent t-tests were conducted using SPSS version 20, and structural equation modelling (PLS) was utilised for injury prevention modelling at a 0.05 level of significance. The results showed that the most common areas of injury among the control kumite practitioners were the upper limbs (46.15%), lower limbs (34.61%), trunk (15.38%), and head and neck (3.84%). The most common types of injuries were broken bones (34.61%), sprain or strain (23.13%), bruising and contusions (23.13%), trauma to the face and mouth (11.53%), and damage to the nerves (7.69%). Non-control kumite practitioners are most likely to sustain injuries to the head and neck (33.33%), trunk (25.92%), upper limbs (22.22%), and lower limbs (18.51%). The most common injuries were to the mouth and face (33.33%), dislocations and fractures (22.22%), sprain or strain (22.22%), bruises and contusions (18.51%), and nerves (3.70%), in that order.
Among those who practice control kata, injuries to the upper limb account for 45.83%, the lower limb for 41.67%, the trunk for 8.33%, and the head and neck for 4.17%. The most common types of injuries are dislocations and fractures (41.66%), sprain or strain (29.16%), bruising and contusions (16.66%), and nerves (12.5%). Injuries to the face and mouth were not reported among those practising the control kata. By far, the most common sites of injury for those practising non-control kata were the lower limb (43.74%), upper limb (39.13%), trunk (13.14%), and head and neck (4.34%). The most common types of injuries were dislocations and fractures (34.82%), sprain or strain (26.08%), bruises and contusions (21.73%), mouth and face (13.14%), and nerves. Teaching the concepts of cooling and warming (0.591) and enhancing the degree of safety in the sports environment (0.413) were shown to play the most essential roles in reducing sports injuries among karate practitioners of control and non-control styles, respectively. Other influential factors were the use of proper sports gear (0.390), modification of training programme principles (0.341), formulation of an effective diet plan for athletes (0.284), and evaluation of athletes' physical anatomy, physiology, chemistry, and physics (0.247).

Keywords: sports injuries, karate, prevention, cooling and warming

Procedia PDF Downloads 101
1035 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment to take effective countermeasures. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain. Moreover, these methods often struggle to detect stealthy and camouflaged intruders. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most similar systems offer only two types of intrusion detection capability, viz., human or vehicle. In our work, we could categorize further to identify types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging, and vehicular movements. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a distance of 15 meters each. The signals received from these geophones are then processed to find unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, logistic regression, recursive feature elimination, chi-squared, and Pearson ratio, were used to identify the best features for training the machine learning models. The models were developed using algorithms such as the supervised support vector machine (SVM) classifier, kNN, decision tree, logistic regression, naïve Bayes, and artificial neural networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained with 1940 training events, and results were evaluated with 831 test events.
It was observed that using the weighted ensemble voting increased the efficiency of predictions. In this study we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, do not radiate any energy and are installed underground, it is impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low costs, hidden deployment and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors for creating better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study the virtual fence achieved an intruder detection efficiency of over 97%.
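The weighted ensemble voting step can be sketched as follows (a generic implementation; in practice each model's weight would be derived from its validation performance, which the abstract does not specify):

```python
from collections import defaultdict

def weighted_vote(predictions):
    """predictions: list of (predicted_class, model_weight) pairs, one per
    classifier. Returns the class with the largest total weight."""
    totals = defaultdict(float)
    for label, weight in predictions:
        totals[label] += weight
    return max(totals, key=totals.get)
```

For example, if SVM and kNN (combined weight 1.6) vote "walking" while decision tree and naïve Bayes (combined weight 1.4) vote "running", the ensemble reports "walking".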

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 78
1034 The Influence of Training on the Special Aerial Gymnastics Instruments on Selected C-Reactive Proteins in Cadets’ Serum

Authors: Z. Wochyński, K. A. Sobiech, Z. Kobos

Abstract:

The C-reactive proteins considered here include the metalloproteins ferritin, transferrin, and ceruloplasmin. The study aimed at assessing the effect of training on the Special Aerial Gymnastics Instruments (SAGI) on changes in serum ferritin, transferrin, and ceruloplasmin and on cadets' physical fitness in comparison with a control group. Fifty-five cadets with a mean age of 20 years were included in this study. They were divided into two groups: Group A (N=41), trained on SAGI, and Group B (N=14), trained according to the standard program of physical education (control group). In both groups, blood served as the material for assays. Samples were collected before and after training at the start of the program (training I), during the program (training II), and after completion of the education program (training III). Commercially available kits were used to assay blood serum ferritin, transferrin, and ceruloplasmin. Cadets' physical fitness was evaluated with exercise tests before and after completion of the education program. In Group A, post-exercise serum ferritin decreased statistically insignificantly in trainings I and II and increased in training III in comparison with pre-exercise values. In Group B, post-exercise serum ferritin decreased statistically insignificantly in trainings I and III and significantly increased in training II in comparison with pre-exercise values. In Group A, serum transferrin decreased statistically insignificantly in training I, increased significantly in training II, and increased insignificantly in training III in comparison with pre-exercise values. In Group B, post-exercise serum transferrin increased statistically significantly in trainings I, II, and III in comparison with pre-exercise values. In Group A, serum ceruloplasmin decreased in all three series in comparison with pre-exercise values. In Group B, serum ceruloplasmin increased significantly in training II.
It was shown that training on SAGI significantly decreased serum ceruloplasmin in Group A in all three series of assays, did not produce significant changes in serum ferritin, and significantly increased serum transferrin.

Keywords: special aerial gymnastics instruments, ferritin, ceruloplasmin, transferrin

Procedia PDF Downloads 463
1033 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes

Authors: Madushani Rodrigo, Banuka Athuraliya

Abstract:

In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. Interpretation of X-ray images relies on the expertise and experience of medical professionals. Sometimes, radiographic images are of low quality, leading to potential issues. Therefore, it is necessary to have a proper approach to accurately localize and classify fractures in real time. The research has revealed that the optimal approach needs to address the stated problem by employing appropriate radiographic image processing techniques and object detection algorithms. These algorithms should effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using the enhanced U-Net architecture.
Combining the results of these two implemented models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the available 12 different fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The fracture segmentation model, based on the enhanced U-Net architecture, achieved a high accuracy of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, built on ResNet18 and VGG16, achieved an accuracy of 81.0%, showcasing its ability to categorize various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert is a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
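One common way to combine the ResNet18 and VGG16 outputs and derive a confidence score is soft voting over class-probability vectors; the sketch below assumes this combination rule, which the abstract does not specify:

```python
def soft_vote(prob_vectors, classes):
    """Average class-probability vectors from several models; return
    (predicted class, confidence = averaged probability of that class)."""
    n = len(prob_vectors)
    avg = [sum(p[i] for p in prob_vectors) / n for i in range(len(classes))]
    best = max(range(len(classes)), key=lambda i: avg[i])
    return classes[best], avg[best]
```

Here the second return value plays the role of the confidence score reported to the user alongside the predicted fracture pattern.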

Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16

Procedia PDF Downloads 120
1032 Robustness of the Deep Chroma Extractor and Locally-Normalized Quarter Tone Filters in Automatic Chord Estimation under Reverberant Conditions

Authors: Luis Alvarado, Victor Poblete, Isaac Gonzalez, Yetzabeth Gonzalez

Abstract:

In MIREX 2016 (http://www.music-ir.org/mirex), the deep neural network (DNN)-based Deep Chroma Extractor, proposed by Korzeniowski and Widmer, reached the highest score in an audio chord recognition task. In the present paper, this tool is assessed under reverberant acoustic environments and distinct source-microphone distances. The evaluation dataset comprises The Beatles and Queen datasets. These datasets are sequentially re-recorded with a single microphone in a real reverberant chamber at four reverberation times (0 [anechoic], 1, 2, and 3 s, approximately), as well as four source-microphone distances (32, 64, 128, and 256 cm). It is expected that the performance of the trained DNN will dramatically decrease under these acoustic conditions, with signals degraded by room reverberation and distance to the source. Recently, the effect of the bio-inspired Locally-Normalized Cepstral Coefficients (LNCC) has been assessed in a text-independent speaker verification task using speech signals degraded by additive noise at different signal-to-noise ratios with variations of recording distance, and it has also been assessed under reverberant conditions with variations of recording distance. LNCC showed a performance as high as the state-of-the-art Mel-frequency cepstral coefficient filters. Based on these results, this paper proposes a variation of locally-normalized triangular filters called Locally-Normalized Quarter Tone (LNQT) filters. By using the LNQT spectrogram, robustness improvements of the trained Deep Chroma Extractor are expected, compared with classical triangular filters, thus compensating for the music signal degradation and improving the accuracy of the chord recognition system.
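The quarter-tone part of the proposed LNQT filters amounts to a bank of triangular filters whose centers are spaced 24 per octave. A sketch of such a filterbank over FFT bin frequencies (the local normalization step is omitted, and the parameters below are illustrative assumptions):

```python
def quarter_tone_centers(fmin, fmax):
    """Center frequencies spaced one quarter tone (24 per octave) apart."""
    k, centers = 0, []
    while True:
        f = fmin * 2 ** (k / 24)
        if f > fmax:
            break
        centers.append(f)
        k += 1
    return centers

def triangular_filterbank(centers, n_fft, sr):
    """One triangular filter per quarter-tone band, evaluated at the
    FFT bin frequencies of an n_fft-point transform at sample rate sr."""
    freqs = [i * sr / n_fft for i in range(n_fft // 2 + 1)]
    # pad the center list by a quarter tone on each side for the edge filters
    pts = [centers[0] / 2 ** (1 / 24)] + centers + [centers[-1] * 2 ** (1 / 24)]
    bank = []
    for m in range(1, len(pts) - 1):
        lo, c, hi = pts[m - 1], pts[m], pts[m + 1]
        filt = []
        for f in freqs:
            if lo <= f <= c:
                filt.append((f - lo) / (c - lo))  # rising edge
            elif c < f <= hi:
                filt.append((hi - f) / (hi - c))  # falling edge
            else:
                filt.append(0.0)
        bank.append(filt)
    return bank
```

Applying such a filterbank to the magnitude spectrogram, followed by local normalization, would produce the LNQT spectrogram fed to the Deep Chroma Extractor.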

Keywords: chord recognition, deep neural networks, feature extraction, music information retrieval

Procedia PDF Downloads 232
1031 Comparative Study of Expository and Simulation Method of Teaching Woodwork at Federal University of Technology, Minna, Nigeria

Authors: Robert Ogbanje Okwori

Abstract:

The research studied the expository and simulation methods of teaching woodwork at the Federal University of Technology, Minna, Niger State, Nigeria. The purpose of the study was to compare the expository and simulation methods of teaching woodwork and determine which method is more effective in improving the performance of students in woodwork. Two research questions and two hypotheses were formulated to guide the study. Fifteen objective questions and two theory questions were used for data collection. The questions set were on the structure of timber. The study used a quasi-experimental design. The population of the study consisted of 25 woodwork students of the Federal University of Technology, Minna, Niger State, Nigeria, and three hundred (300) level students were used for the study. The lesson plans for the expository method and the questions were validated by three lecturers in the Department of Industrial and Technology Education, Federal University of Technology, Minna, Nigeria. The validators checked the appropriateness of the test items, and all corrections and inputs were effected before administration of the instrument. Data obtained were analyzed using mean, standard deviation, and the t-test statistical tool. The null hypotheses were tested using t-test statistics at the 0.05 level of significance. The findings of the study showed that the simulation method of teaching improved students' performance in woodwork and that the performance of the students was not influenced by gender. Based on the findings, it was concluded that there was a significant difference in the mean achievement scores of students taught woodwork using the simulation method. This implies that the simulation method is more effective than the expository method of teaching woodwork. Therefore, woodwork teachers should adopt the simulation method of teaching woodwork towards better performance.
It was recommended that the simulation method should be used by woodwork lecturers to teach woodwork, since students perform better using the method, and that teachers need to be trained and re-trained in using the simulation method for teaching woodwork. Teachers should be encouraged to use the simulation method for their instructional delivery because it will allow them to identify their areas of strength and weakness when imparting knowledge to woodwork students. Government and different agencies should assist in procuring materials and equipment for wood workshops to enable students to effectively practice what they have been taught using the simulation method.

Keywords: comparative, expository, simulation, woodwork

Procedia PDF Downloads 425
1030 Examining a Volunteer-Tutoring Program for Students with Special Education Needs

Authors: David Dean Hampton, William Morrison, Mary Rizza, Jan Osborn

Abstract:

This evaluation examined the effects of a supplemental reading intervention for students with specific learning disabilities in reading who presented with below-grade-level fall benchmark scores on DIBELS 6th ed. Revised. Participants consisted of a condition group, those who received supplemental reading instruction in addition to core + special education services, and a comparison group of students who were at grade level on their fall benchmark scores. The students in the condition group received 26 weeks of Project MORE instruction delivered multiple times each week by trained volunteer tutors. Using a regression-discontinuity design, condition and comparison groups were compared on reading development growth using DIBELS ORF. Significant findings were reported for grades 2, 3, and 4.

Keywords: special education, evidence-based practices, curriculum, tutoring

Procedia PDF Downloads 67
1029 Concussion Prediction for Speed Skater Impacting on Crash Mats by Computer Simulation Modeling

Authors: Yilin Liao, Hewen Li, Paula McConvey

Abstract:

Concussion for speed skaters often occurs when skaters fall on the ice and impact the crash mats during practices and competition races. Gaining insight into these impact interactions is of essential interest, as they are directly related to skaters' potential health risks and injuries. Precise concussion measurements are very difficult to obtain, making computer simulation the only reliable way to analyze such accidents. This research aims to create a multi-body model of the crash mat and skater using SolidWorks, develop a computer simulation model for the skater-mat impact using ANSYS software, and predict the skater's degree of concussion by evaluating the head injury criterion (HIC) from the resulting accelerations. The developed method and results help in understanding the relationship between impact parameters and concussion risk for speed skaters, and inform the design of crash mats and skating rink layouts more specifically by considering athletes' health risks.
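The HIC evaluation over a simulated acceleration trace can be sketched as follows, using the standard definition with a 36 ms window (the sampling rate and window length here are assumptions, not values from the study):

```python
def hic(accel_g, fs, max_window=0.036):
    """Head Injury Criterion: max over windows [t1, t2] (t2 - t1 <= max_window)
    of (t2 - t1) * (average acceleration over the window, in g) ** 2.5.
    accel_g: acceleration samples in g; fs: sampling rate in Hz."""
    n = len(accel_g)
    dt = 1.0 / fs
    wmax = max(1, round(max_window * fs))  # window length in samples
    # cumulative trapezoidal integral of a(t) dt
    cum = [0.0]
    for k in range(1, n):
        cum.append(cum[-1] + 0.5 * (accel_g[k - 1] + accel_g[k]) * dt)
    best = 0.0
    for i in range(n - 1):
        for j in range(i + 1, min(i + wmax, n - 1) + 1):
            T = (j - i) * dt
            avg = (cum[j] - cum[i]) / T
            if avg > 0:
                best = max(best, T * avg ** 2.5)
    return best
```

The simulated head accelerations from the ANSYS model would be sampled into `accel_g`, and the resulting HIC value compared against published injury thresholds.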

Keywords: computer simulation modeling, concussion, impact, speed skater

Procedia PDF Downloads 141
1028 Artificial Neural Networks and Hidden Markov Model in Landslides Prediction

Authors: C. S. Subhashini, H. L. Premaratne

Abstract:

Landslides are the most recurrent and prominent disaster in Sri Lanka. Sri Lanka has been subjected to a number of extreme landslide disasters that resulted in a significant loss of life, material damage, and distress. It is therefore necessary to explore a solution towards preparedness and mitigation to reduce the recurrent losses associated with landslides. Artificial Neural Networks (ANNs) and Hidden Markov Models (HMMs) are now widely used in many computer applications spanning multiple domains. This research examines the effectiveness of using Artificial Neural Networks and the Hidden Markov Model in landslide prediction and the possibility of applying this technology to predict landslides in a prominent geographical area in Sri Lanka. A thorough survey was conducted with the participation of resource persons from several national universities in Sri Lanka to identify and rank the influencing factors for landslides. A landslide database was created using existing topographic, soil, drainage, and land cover maps and historical data. The landslide-related factors, which include external factors (rainfall and number of previous occurrences) and internal factors (soil material, geology, land use, curvature, soil texture, slope, aspect, soil drainage, and soil effective thickness), are extracted from the landslide database. These factors are used to recognize the possibility of landslide occurrence using an ANN and an HMM. The models acquire the relationship between the landslide factors and the hazard index during the training session. These models, with landslide-related factors as the inputs, will be trained to predict three classes, namely, 'landslide occurs', 'landslide does not occur', and 'landslide likely to occur'. Once trained, the models will be able to predict the most likely class for the prevailing data.
Finally, the two models were compared with regard to prediction accuracy, false acceptance rate, and false rejection rate. This research indicates that the Artificial Neural Network can be used as a stronger decision support system to predict landslides more efficiently and effectively than the Hidden Markov Model.
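The comparison metrics above can be sketched as follows, reducing the task to a binary landslide / no-landslide decision for illustration (the three-class evaluation in the study would compute these rates per class):

```python
def far_frr(y_true, y_pred, positive="landslide occurs"):
    """False acceptance rate (false alarms / actual negatives) and
    false rejection rate (missed events / actual positives)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    return fp / (fp + tn), fn / (fn + tp)
```

Running both the ANN and HMM predictions through `far_frr` on a held-out set gives the two rates used, alongside accuracy, to rank the models.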

Keywords: landslides, influencing factors, neural network model, hidden markov model

Procedia PDF Downloads 384
1027 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system that can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. A peaberry is neither a defective bean nor a normal one: it develops as a single, relatively round seed in a coffee cherry instead of the usual flat-sided pair of beans, and it has its own value and flavor. To improve the taste of the coffee, it is necessary to separate peaberries from normal beans before roasting; otherwise, the two types are roasted together and the overall taste suffers. During roasting, all beans should be uniform in shape, size, and weight; otherwise, larger beans take longer to roast through. Peaberries differ from normal beans in size and shape even when they have the same weight, and they roast more slowly than normal beans, so neither weight-based nor size-based sorting provides a good way to select them. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. The peaberry, on the other hand, is very difficult to pick out even for trained specialists, because its shape and color are similar to those of normal beans. In this study, we use image processing and machine learning techniques to discriminate between normal beans and peaberries as part of a sorting system. As the first step, we applied deep Convolutional Neural Networks (CNNs) and a Support Vector Machine (SVM) to discriminate peaberries from normal beans. Better performance was obtained with the CNN than with the SVM. The artificial neural network, trained in this work on a high-performance CPU and GPU, will then be installed on an inexpensive, computationally limited Raspberry Pi system. 
We assume that this system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.
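A minimal sketch (not the authors' pipeline) of the SVM side of the comparison: flattened grayscale bean images used as feature vectors for a binary peaberry/normal classifier. All data here is synthetic, with peaberries modelled as slightly brighter blobs purely so the example runs; in the paper, a CNN trained on real bean images outperformed this kind of SVM baseline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic 16x16 grayscale "bean" images, flattened to 256-dim vectors.
# The intensity difference between classes is an illustrative assumption.
n = 200
normal = rng.normal(0.4, 0.1, size=(n, 16 * 16))
peaberry = rng.normal(0.6, 0.1, size=(n, 16 * 16))
X = np.vstack([normal, peaberry])
y = np.array([0] * n + [1] * n)  # 0 = normal bean, 1 = peaberry

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

On real bean images the two classes overlap heavily in color and shape, which is exactly why the convolutional features of a CNN gave the better discrimination.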

Keywords: convolutional neural networks, coffee bean, peaberry, sorting, support vector machine

Procedia PDF Downloads 144
1026 Outcomes of the Gastrocnemius Flap Performed by Orthopaedic Surgeons in Salvage Revision Knee Arthroplasty: A Retrospective Study at a Tertiary Orthopaedic Centre

Authors: Amirul Adlan, Robert McCulloch, Scott Evans, Michael Parry, Jonathan Stevenson, Lee Jeys

Abstract:

Background and Objectives: The gastrocnemius myofascial flap is used to manage soft-tissue defects over the anterior aspect of the knee in patients presenting with a sinus and periprosthetic joint infection (PJI) or extensor mechanism failure. The aim of this study was twofold: firstly, to evaluate the outcomes of gastrocnemius flaps performed by appropriately trained orthopaedic surgeons in the context of PJI and, secondly, to evaluate the infection-free survival of this patient group. Methods: We retrospectively reviewed 30 patients who underwent gastrocnemius flap reconstruction during staged revision total knee arthroplasty for PJI. All flaps were performed by an orthopaedic surgeon with orthoplastics training. Patients had a mean age of 68.9 years (range 50–84) and were followed up for a mean of 50.4 months (range 2–128 months). A total of 29 patients (97%) were categorized as Musculoskeletal Infection Society (MSIS) local extremity grade 3 (greater than two compromising factors), and 52% of PJIs were polymicrobial. The primary outcome measure was flap failure, and the secondary outcome measure was recurrent infection. Results: Flap survival was 100%, with no failures or early returns to theatre for flap problems such as necrosis or haematoma. Overall infection-free survival during the study period was 48% (13 of 27 infected cases). Using limb salvage as the outcome, 77% (23 of 30 patients) retained the limb. Infection recurrence occurred in 48% (10 patients) of the type B3 cohort and 67% (4 patients) of the type C3 cohort (p = 0.65). Conclusion: The surgical technique for a gastrocnemius myofascial flap is reliable and reproducible when performed by appropriately trained orthopaedic surgeons, even in high-risk groups. However, the risks of recurrent infection and amputation remain high within our series due to poor host and extremity factors.

Keywords: gastrocnemius flap, limb salvage, revision arthroplasty, outcomes

Procedia PDF Downloads 111
1025 Effects of Preparation Caused by Ischemic-Reperfusion along with Sodium Bicarbonate Supplementation on Submaximal Dynamic Force Production

Authors: Sara Nasiri Semnani, Alireza Ramzani

Abstract:

Background and Aims: Sodium bicarbonate is a supplement used to reduce fatigue and increase power output in short-term training. The Ischemic Reperfusion Preconditioning (IRPC), in turn, is an appropriate stimulus for increasing the submaximal contractile response. Materials and Methods: Nine female student-athletes, in a double-blind randomized crossover design, completed three conditions: sodium bicarbonate + IRPC, sodium bicarbonate alone, and placebo + IRPC. Participants performed single-arm forward dumbbell raises with a 2 kg weight for as many repetitions as possible. Results: The results showed that plasma lactate concentration and repetition records in the sodium bicarbonate + IRPC and sodium bicarbonate conditions differed significantly from the placebo + IRPC condition (p = 0.001 and p = 0.02, respectively). Conclusion: According to these findings, bicarbonate supplementation under IRPC training conditions increased force production and delayed fatigue in submaximal dynamic contraction.

Keywords: ischemic reperfusion, preconditioning, sodium bicarbonate, submaximal dynamic force

Procedia PDF Downloads 303
1024 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute Type A aortic dissection is well known for its extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Requirements for tedious pre-processing and demanding calibration procedures further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network, with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate the relevant biomarkers: aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and thereby avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as Cleveland Clinic, with more clinical samples, to further improve the model’s clinical applicability.
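A conceptual NumPy sketch (not the authors' model) of the two ingredients named above: a residual block, which defines a deep residual network, and a physics-informed composite loss that adds a physics-consistency term to the ordinary data-fit term. The "physics residual" here is a simple stand-in for the hemodynamic constraints (e.g. PDE residuals) a real physics-informed network would enforce, and all shapes are illustrative.

```python
import numpy as np

def residual_block(x, W1, W2):
    """y = x + W2 @ relu(W1 @ x): the identity skip connection that
    distinguishes a *residual* network from a plain feed-forward one."""
    h = np.maximum(W1 @ x, 0.0)  # ReLU activation
    return x + W2 @ h            # add the shortcut back in

def physics_informed_loss(pred, target, physics_residual, lam=0.1):
    """Data-fit term plus a weighted physics-consistency term."""
    data_loss = np.mean((pred - target) ** 2)
    physics_loss = np.mean(physics_residual ** 2)
    return data_loss + lam * physics_loss

rng = np.random.default_rng(0)
x = rng.normal(size=4)                      # e.g. 4D MRI displacement features
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 8))
pred = residual_block(x, W1, W2)

target = rng.normal(size=4)                 # measured biomarkers (illustrative)
residual = pred - target                    # stand-in for a true PDE residual
loss = physics_informed_loss(pred, target, residual)
print(loss)
```

In a real implementation the physics residual would be computed by differentiating the network outputs with respect to space and time (via automatic differentiation) and substituting them into the governing flow equations, rather than reusing the data mismatch as done here.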

Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence

Procedia PDF Downloads 89