Search results for: cohesion metrics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 842

182 Design and Validation of the 'Teachers' Resilience Scale' for Assessing Protective Factors

Authors: Athena Daniilidou, Maria Platsidou

Abstract:

Resilience is considered to greatly affect the personal and occupational wellbeing and efficacy of individuals; therefore, it has been widely studied in the social and behavioral sciences. Given its significance, several scales have been created to assess the resilience of children and adults. However, most of these scales focus on examining only the internal protective or risk factors that affect the levels of resilience. The aim of the present study is to create a reliable scale that assesses both the internal and the external protective factors that affect Greek teachers’ levels of resilience. Participants were 136 secondary school teachers (89 females, 47 males) from urban areas of Greece. The Connor-Davidson Resilience Scale (CD-Risc) and the Resilience Scale for Adults (RSA) were used to collect the data. First, exploratory factor analysis was employed to investigate the inner structure of each scale. For both scales, the analyses revealed a factor solution different from the ones proposed by the scales' creators. This prompted us to create a scale that would combine the best-fitting subscales of the CD-Risc and the RSA. To this end, the items of the four factors with the best fit and highest reliability were used to create the ‘Teachers' Resilience Scale’. Exploratory factor analysis revealed that the scale assesses the following protective/risk factors: Personal Competence and Strength (9 items, α=.83), Family Cohesion (7 items, α=.80), Social Competence and Peer Support (7 items, α=.78) and Spiritual Influence (3 items, α=.58). This four-factor model explained 49.50% of the total variance. In the next step, a confirmatory factor analysis was performed on the 26 items of the derived scale to test the above factor solution. The fit of the model to the data was good (χ²/df = 1.245 with df = 292, CFI = .921, GFI = .829, SRMR = .074, RMSEA = .043, 90% CI [.026, .056]), indicating that the proposed scale can validly measure the aforementioned four aspects of teachers' resilience, thus confirming its factorial validity. Finally, analyses of variance were performed to check for individual differences in the levels of teachers' resilience in relation to their gender, age, marital status, level of studies, and teaching specialty. Results were consistent with previous findings, thus providing an indication of discriminant validity for the instrument. This scale has the advantage of assessing both the internal and the external protective factors of resilience in a brief yet comprehensive way, since it consists of 26 items instead of the 58 items of the CD-Risc and RSA combined. Its internal factor structure is supported by the relevant literature on resilience, as it captures the major protective factors of resilience identified in previous studies.
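
As an illustration of the scale-validation steps described above, the sketch below computes Cronbach's alpha and runs a four-factor exploratory factor analysis in Python with the factor_analyzer package. The data file, item layout, and the 'PC_' column prefix are assumptions made for illustration only; the authors' analyses were not necessarily run this way.

```python
# Minimal sketch of the reliability and EFA steps, assuming an item-level table
# (rows = teachers, columns = the 26 scale items). Column names are hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (one column per item)."""
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    k = df.shape[1]
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

items = pd.read_csv("trs_items.csv")  # assumed file: one row per teacher, one column per item

# Exploratory factor analysis with four factors and an oblique rotation
efa = FactorAnalyzer(n_factors=4, rotation="promax")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))
print("Proportion of variance per factor:", efa.get_factor_variance()[1].round(3))

# Reliability of one (hypothetical) subscale whose items start with "PC_"
personal_competence = items[[c for c in items.columns if c.startswith("PC_")]]
print("alpha =", round(cronbach_alpha(personal_competence), 2))
```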

Keywords: protective factors, resilience, scale development, teachers

Procedia PDF Downloads 278
181 From Vegetarian to Cannibal: A Literary Analysis of a Journey of Innocence in ‘Life of Pi’

Authors: Visvaganthie Moodley

Abstract:

Language use and aesthetic appreciation are integral to meaning-making in prose, as they are in poetry. However, in comparison to poetic analysis, literary analysis of prose that focuses on linguistics and stylistics is somewhat scarce, as it generally requires the study of lengthy texts. Nevertheless, the effect of linguistic and stylistic features in prose, consciously designed by authors to create specific effects and convey preconceived messages, is drawing increasing attention from linguists and literary experts. A close examination of language use in prose can, among a host of literary purposes, convey emotive and cognitive values and contribute to interpretations of how fictional characters are represented to the imaginative reader. This paper provides a literary analysis of Yann Martel’s narrative of a 14-year-old Indian boy, Pi, who survived the wreck of a Japanese cargo ship, by focusing on his 227-day journey of tribulation on a lifeboat with a Bengal tiger. The study favours a pluralistic approach blending literary criticism, linguistic analysis and stylistic description. It adopts Leech and Short’s (2007) broad framework of linguistic and stylistic categories (lexical categories, grammatical categories, figures of speech, etc., and context and cohesion) as well as a range of other relevant linguistic phenomena to show how the narrator, Pi, and the author influence the reader’s interpretations of Pi’s character. Such interpretations are made using the lens of Freud’s psychoanalytical theory (which focuses on the interplay of the instinctual id, the ego and the moralistic superego) and Blake’s philosophy of innocence and experience (the two contrary states of the human soul). The paper traces Pi’s transformation from animal-loving, God-fearing vegetarian to brutal animal slayer and cannibal in his journey of survival. By a close examination of the linguistic and stylistic features of the narrative, it argues that, despite evidence of butchery and cannibalism, Pi’s gruesome behaviour is motivated by extreme physiological and psychological duress and not intentional malice. Finally, the paper concludes that the voice of the narrator, Pi, and that of the author, Martel, act as powerful persuasive agents in influencing the reader to respond with a sincere flow of sympathy for Pi and judge him as having retained his innocence in his instinctual need for survival.

Keywords: foregrounding, innocence and experience, lexis, literary analysis, psychoanalytical lens, style

Procedia PDF Downloads 138
180 Quality of Service Based Routing Algorithm for Real Time Applications in MANETs Using Ant Colony and Fuzzy Logic

Authors: Farahnaz Karami

Abstract:

Routing is an important and challenging task in mobile ad hoc networks due to node mobility, lack of central control, unstable links, and limited resources. Ant colony optimization has been found to be an attractive technique for routing in Mobile Ad Hoc Networks (MANETs). However, existing swarm-intelligence-based routing protocols find an optimal path by considering only one or two route selection metrics, without considering correlations among such parameters, which makes them unsuitable on their own for routing real-time applications. Fuzzy logic can combine multiple route selection parameters that are uncertain or imprecise in nature, but it does not naturally provide multipath routing for load balancing. The objective of this paper is to design a routing algorithm using fuzzy logic and ant colony optimization that can solve several routing problems in mobile ad hoc networks, such as optimizing node energy consumption to increase network lifetime, reducing the link failure rate to increase packet delivery reliability, and providing load balancing to make better use of available bandwidth. In the proposed algorithm, ants deliver path information to a fuzzy inference system. Based on the available path information and the parameters required for quality of service (QoS), the fuzzy cost of each path is calculated and the optimal paths are selected. The NS-2.35 simulator is used, and the results are compared and evaluated against the newest QoS-based algorithms for MANETs according to the packet delivery ratio, end-to-end delay, and routing overhead ratio criteria. The simulation results show a significant improvement in the performance of these networks: lower end-to-end delay and routing overhead ratio and a higher packet delivery ratio.
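
A toy sketch of the central idea, combining ant-collected path statistics into a single fuzzy cost, is given below. The membership functions, linguistic terms, and weights are illustrative assumptions, not the paper's actual inference rules.

```python
# Illustrative sketch (not the authors' implementation) of mapping ant-reported path
# statistics to one fuzzy cost. Memberships and weights are assumptions chosen for clarity.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_path_cost(delay_ms, residual_energy, hop_count):
    # Degree to which each metric is "bad" for real-time traffic
    bad_delay = tri(delay_ms, 40, 120, 200) + (1.0 if delay_ms >= 200 else 0.0)
    low_energy = tri(residual_energy, 0.0, 0.0, 0.5)      # residual energy in [0, 1]
    long_path = tri(hop_count, 4, 10, 16) + (1.0 if hop_count >= 16 else 0.0)
    # Simple weighted aggregation of the "bad" degrees; lower cost = better path
    return 0.5 * min(bad_delay, 1.0) + 0.3 * min(low_energy, 1.0) + 0.2 * min(long_path, 1.0)

# Each ant reports (delay, energy, hops) for a discovered path; rank paths by cost
paths = {"P1": (60, 0.8, 5), "P2": (150, 0.9, 7), "P3": (45, 0.2, 4)}
costs = {p: fuzzy_path_cost(*m) for p, m in paths.items()}
for p in sorted(costs, key=costs.get):
    print(p, round(costs[p], 3))
```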

Keywords: mobile ad hoc networks, routing, quality of service, ant colony, fuzzy logic

Procedia PDF Downloads 37
179 Improving Chest X-Ray Disease Detection with Enhanced Data Augmentation Using Novel Approach of Diverse Conditional Wasserstein Generative Adversarial Networks

Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Daniyal Haider, Xiaodong Yang

Abstract:

Chest X-rays are instrumental in the detection and monitoring of a wide array of diseases, including viral infections such as COVID-19, tuberculosis, pneumonia, lung cancer, and various cardiac and pulmonary conditions. To enhance the accuracy of diagnosis, artificial intelligence (AI) algorithms, particularly deep learning models like Convolutional Neural Networks (CNNs), are employed. However, these deep learning models demand a substantial and varied dataset to attain optimal precision. Generative Adversarial Networks (GANs) can be employed to create new data, thereby supplementing the existing dataset and enhancing the accuracy of deep learning models. Nevertheless, GANs have their limitations, such as issues related to stability, convergence, and the ability to distinguish between authentic and fabricated data. To overcome these challenges and advance the detection and classification of normal and abnormal CXR images, this study introduces a technique called DCWGAN (Diverse Conditional Wasserstein GAN) for generating synthetic chest X-ray (CXR) images. The study evaluates the effectiveness of the DCWGAN technique using the ResNet50 model and compares its results with those obtained using the traditional GAN approach. The findings reveal that the ResNet50 model trained on the DCWGAN-generated dataset outperformed the model trained on the classic GAN-generated dataset. Specifically, the ResNet50 model utilizing DCWGAN synthetic images achieved an accuracy of 0.961, precision of 0.955, recall of 0.970, and F1-measure of 0.963. These results indicate the promising potential of this approach for the early detection of diseases in CXR images.
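
The abstract does not detail the DCWGAN internals, but the sketch below shows the generic conditional Wasserstein critic update with gradient penalty that such a model builds on. The critic and generator interfaces, label handling, and hyperparameters are assumptions.

```python
# Generic sketch of a conditional WGAN critic update with gradient penalty, the kind of
# objective a conditional Wasserstein GAN for labeled CXR synthesis would use. Network
# definitions and hyperparameters are assumptions; the DCWGAN internals are not given here.
import torch

def gradient_penalty(critic, real, fake, labels, device):
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp, labels)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    return ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()

def critic_step(critic, generator, real, labels, opt_c, z_dim=128, lam=10.0):
    """One critic update on a batch of real images with class labels (assumed interfaces)."""
    device = real.device
    z = torch.randn(real.size(0), z_dim, device=device)
    fake = generator(z, labels).detach()
    loss = (critic(fake, labels).mean() - critic(real, labels).mean()
            + lam * gradient_penalty(critic, real, fake, labels, device))
    opt_c.zero_grad()
    loss.backward()
    opt_c.step()
    return loss.item()
```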

Keywords: CNN, classification, deep learning, GAN, Resnet50

Procedia PDF Downloads 55
178 Role of Organizational Culture in Building Sustainable Employee’s Performance in Organizations: A Case Study of Zenith Bank PLC Jalingo Taraba State Nigeria

Authors: Jerome Nyameh

Abstract:

The most valuable asset of an organization is its employees and their ability to maintain an appreciable level of performance in support of organizational goals; this ability depends largely on organizational culture, which is currently regarded as a factor that relates positively to organizational excellence and sustainable employee performance over time. An employee engagement program will not go far without first establishing the organizational culture required to support sustainability. This means integrating sustainability into overall employee performance, with clear vision, goals and metrics. It means having a strong culture and a collaborative governance structure, developed as the organization's way of doing things, for decision-making and resource allocation. It requires a rewards and recognition program to support and reinforce sustainability behaviors. With such a culture in place, an organization will be able to develop a strategy that fully engages employees, while fully realizing the benefits of their contributions. The study empirically investigated the role of organizational culture in building sustainable employee performance using Zenith Bank PLC as a case, and a model in which organizational culture builds a sustainable employee performance strategy for the lasting actualization of the organization was developed. To achieve the research objectives, namely (i) to assess how organizational culture can build sustainable employee performance and (ii) to analyze the gap that exists between organizational culture and sustainable employee performance in the organization, a 20-item survey questionnaire was administered to sixty respondents. The findings of this study have practical implications for organizational leaders, managers and employees, and their organizations, particularly commercial banks in Nigeria, besides offering scope for further research in the area of organizational culture and sustainable employee performance. The study also shows a significant positive relationship between organizational culture and sustainable employee performance, as a means of building a viable organization with cultural uniqueness and excellent performance in a competitive world.

Keywords: organizational culture, sustainable employee’s performance, organizations, Zenith Bank PLC Nigeria

Procedia PDF Downloads 492
177 Multi-Sensor Image Fusion for Visible and Infrared Thermal Images

Authors: Amit Kumar Happy

Abstract:

This paper is motivated by the importance of multi-sensor image fusion, with a specific focus on infrared (IR) and visual image (VI) fusion for various applications, including military reconnaissance. Image fusion can be defined as the process of combining two or more source images into a single composite image with extended information content that improves visual perception or feature extraction. These images can come from different modalities, such as a visible camera and an IR thermal imager. While visible images are captured from reflected radiation in the visible spectrum, thermal images are formed from thermal (infrared) radiation that may be reflected or self-emitted. A digital color camera captures the visible source image, and a thermal infrared camera acquires the thermal source image. In this paper, image fusion algorithms based upon a multi-scale transform (MST) and a region-based selection rule with consistency verification are proposed and presented. This research includes the implementation of the proposed image fusion algorithm in MATLAB, along with a comparative analysis to decide the optimum number of levels for the MST and the coefficient fusion rule. The results are presented, and several commonly used evaluation metrics are applied to assess the suggested method's validity. Experiments show that the proposed approach is capable of producing good fusion results. In deploying our image fusion approaches, we observed several challenges common to popular image fusion methods: while the high computational cost and complex processing steps of image fusion algorithms can provide accurate fused results, they also make such algorithms hard to deploy in systems and applications that require real-time operation, high flexibility, and low computational capacity. The methods presented in this paper therefore aim to offer good results with minimal time complexity.
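
A minimal sketch of multi-scale-transform fusion is shown below, using a Laplacian pyramid with a max-absolute-coefficient rule on a registered visible/IR pair. The pyramid depth and the simple selection rule are illustrative stand-ins for the paper's region-based rule with consistency verification, and the MATLAB implementation is approximated here in Python.

```python
# Sketch of Laplacian-pyramid fusion with a max-abs rule; inputs are assumed to be
# pre-registered grayscale images of the same size. Depth and rule are illustrative.
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = [gauss[i] - cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
           for i in range(levels)]
    lap.append(gauss[-1])  # coarsest approximation level
    return lap

def fuse(visible, infrared, levels=4):
    lv, li = laplacian_pyramid(visible, levels), laplacian_pyramid(infrared, levels)
    # Detail levels: keep the coefficient with the larger magnitude
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(lv[:-1], li[:-1])]
    fused.append(0.5 * (lv[-1] + li[-1]))  # average the coarsest approximation
    out = fused[-1]
    for level in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=level.shape[1::-1]) + level
    return np.clip(out, 0, 255).astype(np.uint8)

vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # assumed registered source images
ir = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)
cv2.imwrite("fused.png", fuse(vis, ir))
```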

Keywords: image fusion, IR thermal imager, multi-sensor, multi-scale transform

Procedia PDF Downloads 87
176 Comparative Study of Static and Dynamic Representations of the Family Structure and Its Clinical Utility

Authors: Marietta Kékes Szabó

Abstract:

Patterns of personality (mal)function and the individual's psychosocial environment jointly influence health status and may underlie psychosomatic disorders. Although patients with such diversified symptoms usually do not have any organic problems, the experienced complaint, the fear of serious illness and the lack of social support often lead to increased anxiety and further enigmatic symptoms. The role of the family system and its atmosphere seems to be very important in this process. Several studies have explored the characteristics of dysfunctional family organization: an inflexible family structure, hidden conflicts that are not spoken about by the family members during their daily interactions, undefined role boundaries, neglect or overprotection of the children by the parents, and coalitions between generations. However, questionnaires used to measure the properties of the family system can examine only the family as a unit and cannot capture dyadic interactions, whereas representing the family structure with a figure-placement test offers a new perspective for better understanding the organization of its (sub)system(s). Furthermore, its dynamic form opens new perspectives for exploring the family members' joint representations, which allows us to learn more about the flexibility of cohesion and hierarchy in the given family system. In this way, communication among the family members can also be examined. The aim of this study was to collect detailed information about the organization of psychosomatic families. We used Gehring's Family System Test (FAST) in both its static and dynamic forms to mobilize the family members' mental representations of their family and to obtain data on their individual representations as well as their cooperation. Four families took part in the study, each including a young adult: two families with healthy participants and two families with at least one asthmatic patient. The family members' behavior observed during the dynamic situation was recorded on video for further analysis with the Noldus Observer XT 8.0 software. In accordance with previous studies, our results show that the structure of families with at least one psychosomatic patient is more rigid than that found in the control group, and the dynamic representations (typical, ideal, and conflict) mainly reflected the most dominant family member's individual concept. The behavior analysis also confirmed the intensified role of the dominant person(s) in family life, influencing family decisions, the place of the other family members, and the atmosphere of the interactions, all of which could be grasped well by the applied methods. However, further research is needed to learn more about this phenomenon, which could open the door to new therapeutic approaches.

Keywords: psychosomatic families, family structure, family system test (FAST), static and dynamic representations, behavior analysis

Procedia PDF Downloads 367
175 Building Resilient Communities: The Traumatic Effect of Wildfire on Mati, Greece

Authors: K. Vallianou, T. Alexopoulos, V. Plaka, M. K. Seleventi, V. Skanavis, C. Skanavis

Abstract:

The present research addresses the role of place attachment and emotions in community resiliency and recovery within the context of a disaster. Natural disasters represent a disruption in the normal functioning of a community, leading to a general feeling of disorientation. This study draws on the trauma caused by a natural hazard, a forest fire; changes in the sense of togetherness are assessed, and the research determines how the inhabitants' place attachment was affected during the community's reorientation process. The case study area is Mati, a small coastal town in eastern Attica, Greece. The fire broke out on July 23rd, 2018. A quantitative study was conducted through questionnaires administered via phone interviews one year after the disaster to address community resiliency in the long run. The sample was composed of 159 participants from the community of Mati plus 120 from Skyros Island, which served as a control group. Inhabitants were prompted to answer items gauging their emotions related to the event, group identification and emotional significance of their community, and place attachment before and a year after the fire took place. Importantly, community recovery and reorientation were examined within the context of a relative absence of government backing and official support. Emotions related to the event were aggregated into four clusters: activation/vigilance, distress/disorientation, indignation, and helplessness. The findings revealed a decrease in the level of place attachment in the impacted area of Mati as compared to the control group of Skyros Island. Importantly, the initial distress caused by the fire prompted the residents to identify more with their community and to report more positive feelings toward it. Moreover, a mediation analysis indicated that the positive effect of community cohesion on place attachment one year after the disaster was mediated by positive feelings toward the community. Finally, place attachment contributes to enhanced optimism and a more positive perspective concerning Mati's future prospects. Despite insufficient state support for the affected area, the findings suggest an important role of emotions and place attachment during the process of recovery. Implications concerning the role of emotions and social dynamics in shaping place attachment during the disaster recovery process, as well as community resiliency, are discussed.
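
For readers interested in the mediation step reported above (community cohesion acting on place attachment through positive feelings toward the community), a minimal sketch of how such a model could be run is given below. The data file, column names, and use of the pingouin package are assumptions, not the authors' actual analysis.

```python
# One possible way to run a simple mediation model with bootstrapped indirect effects,
# assuming a DataFrame with one row per respondent and hypothetical composite-score columns.
import pandas as pd
import pingouin as pg  # pip install pingouin

df = pd.read_csv("mati_survey.csv")  # assumed file with the three composite scores

results = pg.mediation_analysis(
    data=df,
    x="community_cohesion",      # predictor
    m="positive_feelings",       # mediator
    y="place_attachment_post",   # outcome measured one year after the fire
    n_boot=5000,
    seed=42,
)
# The 'Indirect' row gives the bootstrapped indirect effect and its confidence interval
print(results)
```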

Keywords: community resilience, natural disasters, place attachment, wildfire

Procedia PDF Downloads 76
174 A Qualitative Study of Unmet Needs of Families of Children with Cerebral Palsy in Bangladesh

Authors: Reshma Parvin Nuri, Heather Michelle Aldersey, Setareh Ghahari

Abstract:

Objectives: Worldwide, it is well known that taking care of children with disabilities (CWD) can have a significant impact on the entire family unit. Over the last few decades, an increasing number of studies have been conducted on families of CWD in higher-income countries, and much of this research has identified family needs and strategies to meet those needs. However, family needs remain under-studied in developing countries. Therefore, the aims of this study were to: (a) explore the needs of families of children with cerebral palsy (CP) in Bangladesh; (b) investigate how some of these family needs have been met; and (c) identify the sources of support that might help the families meet their needs in the future. Methods: Face-to-face, semi-structured, in-depth interviews were conducted with 20 family members (12 mothers, 4 fathers, 1 sister, 2 grandmothers, and 1 aunt) who visited the Centre for the Rehabilitation of the Paralysed (CRP), Bangladesh, between June and August 2016. The constant comparison method of the grounded theory approach, within the broader spectrum of qualitative research, was used to analyze the data. Results: Participants identified five categories of needs: (a) financial needs, (b) access to disability-related services, (c) family and community cohesion, (d) informational needs, and (e) emotional needs. Participants overwhelmingly reported that financial need is their greatest family need. Participants noted that families encountered additional financial expenses for a child with CP, beyond what they would typically pay for their other children. Participants did not see education as a primary need, as they had little hope that their children would be physically able to go to school. Some participants also shared their needs for social inclusion and participation and for receiving emotional support. Participants further expressed the need to receive information related to the child's health condition and the availability and accessibility of governmental support programs. Besides unmet needs, participants also highlighted that some of their needs have been met through formal and informal support systems. Formal support systems were mainly institution-based and run by non-governmental organizations, whereas informal support came from family, friends and community members. Participants overwhelmingly reported that they receive little to no support from the government. However, they identified the government as the key stakeholder that can play a vital role in meeting their unmet needs. Conclusions: In the next phase of this research, the plan is to understand how the Government of the People's Republic of Bangladesh is working to meet the needs of families of CWD. There is also a need for further study of the needs of families of children with conditions other than CP and of those who live in the community and do not have access to CRP services, as well as a clear need to investigate ways to enable children with CP to have better access to education in Bangladesh.

Keywords: Bangladesh, children with cerebral palsy, family needs, support

Procedia PDF Downloads 356
173 Investigating the Effectiveness of Multilingual NLP Models for Sentiment Analysis

Authors: Othmane Touri, Sanaa El Filali, El Habib Benlahmar

Abstract:

Natural Language Processing (NLP) has gained significant attention lately and has proved its ability to analyze and extract insights from unstructured text data in various languages. One of the most popular NLP applications is sentiment analysis, which aims to identify the sentiment expressed in a piece of text, such as positive, negative, or neutral, in multiple languages. While there are several multilingual NLP models available for sentiment analysis, there is a need to investigate their effectiveness in different contexts and applications. In this study, we aim to investigate the effectiveness of different multilingual NLP models for sentiment analysis on a dataset of online product reviews in multiple languages. The performance of several NLP models, including Google Cloud Natural Language API, Microsoft Azure Cognitive Services, Amazon Comprehend, Stanford CoreNLP, spaCy, and Hugging Face Transformers, is compared. The models are evaluated on several metrics, including accuracy, precision, recall, and F1 score, and their performance is compared across different categories of product reviews. To run the study, the dataset was preprocessed by cleaning and tokenizing the text data in multiple languages. Each model was then trained and tested using a cross-validation approach, randomly dividing the dataset into training and testing sets and repeating the process multiple times. A grid search was applied to optimize the hyperparameters of each model and to select the best-performing model for each category of product reviews and each language. The findings of this study provide insights into the effectiveness of different multilingual NLP models for multilingual sentiment analysis and their suitability for different languages and applications. The strengths and limitations of each model were identified, and recommendations for selecting the most performant model based on the specific requirements of a project were provided. This study contributes to the advancement of research methods in multilingual NLP and provides a practical guide for researchers and practitioners in the field.
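
A sketch of the described evaluation protocol, cross-validation combined with a grid search over hyperparameters, is shown below using open-source scikit-learn baselines in place of the commercial APIs named above. The dataset path, column names, and parameter grids are assumptions.

```python
# Nested cross-validation with grid search for comparing sentiment classifiers on a
# multilingual review corpus (assumed CSV with 'text' and 'sentiment' columns).
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import GridSearchCV, cross_validate

reviews = pd.read_csv("multilingual_reviews.csv")   # assumed columns: text, sentiment, language
X, y = reviews["text"], reviews["sentiment"]

candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"clf__C": [0.1, 1, 10]}),
    "linear_svm": (LinearSVC(), {"clf__C": [0.1, 1, 10]}),
}
scoring = ["accuracy", "precision_macro", "recall_macro", "f1_macro"]

for name, (clf, grid) in candidates.items():
    pipe = Pipeline([("tfidf", TfidfVectorizer(ngram_range=(1, 2))), ("clf", clf)])
    search = GridSearchCV(pipe, grid, cv=5, scoring="f1_macro")   # inner loop: tuning
    scores = cross_validate(search, X, y, cv=5, scoring=scoring)  # outer loop: evaluation
    print(name, {m: round(scores[f"test_{m}"].mean(), 3) for m in scoring})
```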

Keywords: NLP, multilingual, sentiment analysis, texts

Procedia PDF Downloads 63
172 Data Mining Model for Predicting the Status of HIV Patients during Drug Regimen Change

Authors: Ermias A. Tegegn, Million Meshesha

Abstract:

Human Immunodeficiency Virus and Acquired Immunodeficiency Syndrome (HIV/AIDS) is a major cause of death in most African countries. Ethiopia is one of the most seriously affected countries in sub-Saharan Africa. Previously in Ethiopia, having HIV/AIDS was almost equivalent to a death sentence. With the introduction of Antiretroviral Therapy (ART), HIV/AIDS has become a chronic but manageable disease. The study focused on a data mining technique to predict the future living status of HIV/AIDS patients at the time of drug regimen change, when patients develop toxicity to the ART drug combination they are currently taking. The data are taken from the University of Gondar Hospital ART program database. A hybrid methodology is followed to explore the application of data mining to the ART program dataset. Data cleaning, handling of missing values and data transformation were used for preprocessing the data. WEKA 3.7.9 data mining tools, classification algorithms, and expertise are utilized as means to address the research problem. Using four different classification algorithms (J48, PART rule induction, Naïve Bayes and a neural network) and adjusting their parameters, thirty-two models were built on the pre-processed University of Gondar ART program dataset. The performance of the models was evaluated using the standard metrics of accuracy, precision, recall, and F-measure. The most effective model for predicting the status of HIV patients with drug regimen substitution is a pruned J48 decision tree with a classification accuracy of 98.01%. This study extracts interesting attributes such as Ever taking Cotrim, Ever taking TbRx, CD4 count, Age, Weight, and Gender so as to predict the status of drug regimen substitution. The outcome of this study can be used as an assistive tool for clinicians to help them make more appropriate drug regimen substitutions. Future research directions are suggested toward an applicable system in the area of the study.
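
As an open-source analogue of the best-performing step, the sketch below trains a pruned decision tree on the listed attributes. Note that WEKA's J48 implements C4.5 while scikit-learn's tree is CART with cost-complexity pruning, so this is an approximation rather than a re-implementation; the file and column names are assumed.

```python
# Analogous sketch: pruned decision tree predicting patient status at regimen change.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

data = pd.read_csv("art_regimen_change.csv")  # assumed export of the ART program database
features = ["ever_taking_cotrim", "ever_taking_tbrx", "cd4_count", "age", "weight", "gender"]
X = pd.get_dummies(data[features], drop_first=True)   # encode categorical attributes
y = data["status_after_substitution"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
tree = DecisionTreeClassifier(ccp_alpha=0.005, random_state=0)  # pruning via ccp_alpha
tree.fit(X_tr, y_tr)
print(classification_report(y_te, tree.predict(X_te)))
```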

Keywords: HIV drug regimen, data mining, hybrid methodology, predictive model

Procedia PDF Downloads 116
171 Safeguarding Product Quality through Pre-Qualification of Material Manufacturers: A Ship and Offshore Classification Society's Perspective

Authors: Sastry Y. Kandukuri, Isak Andersen

Abstract:

Despite recent advances in the manufacturing sector, quality issues remain a frequent occurrence and can result in fatal accidents, equipment downtime, and loss of life. Adequate quality is of high importance in high-risk industries such as sea-going vessels and offshore installations, in which third-party quality assurance and product control play an essential role in ensuring the manufacturing quality of critical components. Classification societies play a vital role in mitigating risk in these industries by making sure that all stakeholders, i.e., manufacturers, builders, and end users, are provided with adequate rules and standards that effectively ensure components are produced at a high level of quality appropriate to the area of application and the risk of failure. Quality issues have also been linked to a lack of competence or negligence of stakeholders in the supply value chain. However, continued actions and regulatory reforms through the modernization of rules and requirements have provided additional tools for purchasers and manufacturers to confront these issues. Included among these tools are updated 'approval of manufacturer' class programs aimed at developing and implementing a set of standardized manufacturing quality metrics for use by the manufacturer and verified by the classification society. The establishment and collection of manufacturing and testing requirements described in these programs could provide various stakeholders, from industry to vessel owners, with greater insight into the state of quality at a given manufacturing facility, and allow stakeholders to better anticipate and address quality issues while simultaneously reducing unnecessary failures that are costly to the industry. The paper introduces, explains and discusses the critical manufacturing and testing requirements set out in a leading classification society's approval of manufacturer regime, the rationale behind them, and some case studies.

Keywords: classification society, manufacturing, materials processing, materials testing, quality control

Procedia PDF Downloads 331
170 The Detection of Implanted Radioactive Seeds on Ultrasound Images Using Convolution Neural Networks

Authors: Edward Holupka, John Rossman, Tye Morancy, Joseph Aronovitz, Irving Kaplan

Abstract:

A common modality for the treatment of early-stage prostate cancer is the implantation of radioactive seeds directly into the prostate. The radioactive seeds are positioned inside the prostate under transrectal ultrasound imaging to achieve optimal radiation dose coverage of the gland. Once all of the planned seeds have been implanted, two-dimensional transaxial transrectal ultrasound images separated by 2 mm are obtained throughout the prostate, beginning at the base of the prostate up to and including the apex. A common deep neural network, DetectNet, was trained to automatically determine the position of the implanted radioactive seeds within the prostate under ultrasound imaging. The network was trained using 950 training ultrasound images and 90 validation ultrasound images. The commonly used metrics for successful training were used to evaluate the efficacy and accuracy of the trained deep neural network and resulted in loss_bbox (train) = 0.00, loss_coverage (train) = 1.89e-8, loss_bbox (validation) = 11.84, loss_coverage (validation) = 9.70, mAP (validation) = 66.87%, precision (validation) = 81.07%, and recall (validation) = 82.29%, where 'train' refers to the training image set and 'validation' to the validation image set. On the hardware platform used, the training took 12.8 seconds per epoch, and the network was trained for over 10,000 epochs. In addition, the seed locations as determined by the deep neural network were compared to the seed locations as determined by commercial software based on a CT acquired one to three months after the implant. The deep learning approach was within 2.29 mm of the seed locations determined by the commercial software. The deep learning approach to the determination of radioactive seed locations is robust, accurate, and fast, and in close spatial agreement with the gold standard of CT-determined seed coordinates.
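
The sketch below illustrates one way to score predicted seed positions against reference (e.g., CT-derived) positions by greedy matching within a distance tolerance and computing precision and recall. The tolerance, data layout, and demo coordinates are assumptions; it is not the authors' DetectNet evaluation pipeline.

```python
# Distance-based matching of predicted vs. reference seed coordinates (illustrative only).
import numpy as np

def match_seeds(pred, ref, tol_mm=3.0):
    """pred, ref: (N, 3) arrays of seed coordinates in mm. Returns (tp, fp, fn)."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    unmatched_ref = list(range(len(ref)))
    tp = 0
    for p in pred:
        if not unmatched_ref:
            break
        dists = np.linalg.norm(ref[unmatched_ref] - p, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= tol_mm:
            tp += 1
            unmatched_ref.pop(j)   # each reference seed can be matched only once
    fp = len(pred) - tp
    fn = len(unmatched_ref)
    return tp, fp, fn

# Random demo coordinates only; real use would load network and CT-derived positions
tp, fp, fn = match_seeds(pred=np.random.rand(80, 3) * 50, ref=np.random.rand(82, 3) * 50)
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2%}, recall={recall:.2%}")
```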

Keywords: prostate, deep neural network, seed implant, ultrasound

Procedia PDF Downloads 170
169 Phase Composition Analysis of Ternary Alloy Materials for Gas Turbine Applications

Authors: Mayandi Ramanathan

Abstract:

Gas turbine blades see the most aggressive thermal stress conditions within the engine due to turbine entry temperatures in the range of 1500 to 1600°C. The blades rotate at very high rates and remove a significant amount of thermal power from the gas stream. At high temperatures, the major component failure mechanism is creep. Over its service life under high thermal loads, a blade will deform, lengthen and eventually rupture. High strength and stiffness in the longitudinal direction up to elevated service temperatures are certainly the most needed properties of turbine blades and gas turbine components. The proposed advanced Ti alloy material needs a process that provides a strategic orientation of metallic ordering, uniformity in composition and high metallic strength. The chemical composition of the proposed Ti alloy material (25% Ta/(Al+Ta) ratio), unlike Ti-47Al-2Cr-2Nb, has less excess Al that could limit the service life of turbine blades. The properties and performance of Ti-47Al-2Cr-2Nb and Ti-6Al-4V will be compared with those of the proposed Ti alloy material to generalize the performance metrics of various gas turbine components. This paper summarizes the effects of additive manufacturing and heat treatment process conditions on changes in the phase composition, grain structure, lattice structure, tensile strength, creep strain rate, thermal expansion coefficient and fracture toughness of the material at different temperatures. Based on these results, additive manufacturing and heat treatment process conditions will be optimized to fabricate turbine blades with a Ti-43Al matrix alloyed with an optimized amount of refractory Ta metal. Improvements in the service temperature of the turbine blades and the dependence of corrosion resistance on the coercivity of the alloy material will be reported. A correlation of phase composition and creep strain rate will also be discussed.

Keywords: high temperature materials, aerospace, specific strength, creep strain, phase composition

Procedia PDF Downloads 86
168 Digital Survey to Detect Factors That Determine Successful Implementation of Cooperative Learning in Physical Education

Authors: Carolin Schulze

Abstract:

Characterized by a positive interdependence of learners, cooperative learning (CL) is one way of successfully dealing with the increasing heterogeneity of students. Various positive effects of CL on the mental, physical and social health of students have already been documented. However, this structure is still rarely used in physical education (PE). Moreover, there is a lack of information about the factors that determine the successful implementation of CL in PE. Therefore, the objective of the current study was to identify factors that determine the successful implementation of CL in PE using a digital questionnaire that was administered from November to December 2022. In addition to socio-demographic data (age, gender, teaching experience, and education level), the frequency of using CL, implementation strategies (theory-led, student-centred), and positive and negative effects of CL were measured. Furthermore, teachers were asked to rate the success of implementation on a 6-point rating scale (1 = very successful to 6 = not successful at all). For statistical analysis, multiple linear regression was performed, with the success of implementation as the dependent variable. A total of 224 teachers (mean age = 44.81±10.60 years; 58% male) took part in the current study. Overall, 39% of participants stated that they never use CL in their PE classes. The main reasons against implementing CL in PE were no time for preparation (74%) or for implementation (61%) and the high heterogeneity of students (55%). When CL is used, most of the reported difficulties relate to uncertainties about the correct procedure (54%) and the heterogeneous performance of students (54%). The most frequently mentioned positive effect was increased motivation of students (42%), followed by an improvement in psychological abilities (e.g., self-esteem, self-concept; 36%) and improved class cohesion (31%). Reported negative effects were unpredictability (29%), restlessness (24%), confusion (24%), and conflicts between students (17%). The successful use of CL is related to theory-based preparation (e.g., heterogeneous formation of groups, use of rules and rituals) and a flexible implementation tailored to the needs and conditions of students (e.g., the possibility of individual work, omission of CL phases). Compared to teachers who implemented CL solely theory-led or solely student-adapted, teachers who switched from theory-led preparation to student-centred implementation of CL reported more successful implementation (t=5.312; p<.001). Neither the frequency of using CL in PE nor the teacher's gender, age, teaching experience, or education level showed a significant association with the successful use of CL. In line with the results of the current study, it is advisable that teachers gather sufficient knowledge about CL during their training and that the need to adapt the learning structure to the diversity of their students is emphasized. In order to analyse teachers' implementation strategies more deeply, qualitative methods and guided interviews with teachers are needed.

Keywords: diversity, educational technology, physical education, teaching styles

Procedia PDF Downloads 60
167 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach

Authors: James Ladzekpo

Abstract:

Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using a support vector machine (SVM) for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier performed best, achieving a median recall of 92.17%, a median area under the receiver operating characteristic curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
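
A minimal sketch of the described pipeline, recursive feature elimination with cross-validation using a linear-kernel SVM followed by a cross-validated Gradient Boosting classifier, is given below. The data file and column names are assumptions.

```python
# RFECV (linear SVM) for feature selection, then Gradient Boosting evaluated by CV.
import pandas as pd
from sklearn.svm import SVC
from sklearn.feature_selection import RFECV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

data = pd.read_csv("epigenetic_features.csv")        # rows: subjects, columns: features + label
X, y = data.drop(columns=["diabetes"]), data["diabetes"]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
selector = RFECV(estimator=SVC(kernel="linear"), step=1, cv=cv, scoring="recall")
X_sel = selector.fit_transform(X, y)
print("selected features:", list(X.columns[selector.support_]))

gb = GradientBoostingClassifier(random_state=0)
scores = cross_validate(gb, X_sel, y, cv=cv,
                        scoring=["accuracy", "precision", "recall", "roc_auc"])
for metric in ["accuracy", "precision", "recall", "roc_auc"]:
    print(metric, round(scores[f"test_{metric}"].mean(), 3))
```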

Keywords: diabetes, machine learning, prediction, biomarkers

Procedia PDF Downloads 30
166 A Comparative Study of the Proposed Models for the Components of the National Health Information System

Authors: M. Ahmadi, Sh. Damanabi, F. Sadoughi

Abstract:

A national health information system plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, a national health information system can improve the quality of health data, information and knowledge used to support decision-making at all levels and in all areas of the health sector. Since full identification of the components of this system seems necessary for better planning and for managing the factors that influence its performance, this study comparatively explores different perspectives on its components. Methods: This is a descriptive, comparative study. The study population includes printed and electronic documents describing the components of the national health information system in three parts: input, process, and output. Information was gathered using library resources and internet searches, and the analysis is presented using comparative tables and qualitative data. Results: The findings showed that there are three different perspectives on the components of a national health information system: the Lippeveld, Sauerborn, and Bodart model (2000), the Health Metrics Network (HMN) model from the World Health Organization (2008), and Gattini's model (2009). In the input section (resources and structure), all three models require components of management and leadership, planning and program design, staffing, software and hardware facilities, and equipment. In the process section of the three models, we highlighted the actions that ensure the quality of the health information system; in the output section, except for the Lippeveld model, the two other models consider information products and the usage and distribution of information as components of the national health information system. Conclusion: The results showed that all three models briefly discuss the components of health information in the input section. However, the Lippeveld model overlooks the components of national health information in the process and output sections. Therefore, the Health Metrics Network model appears to offer a comprehensive presentation of the components of the health information system in all three sections: input, process, and output.

Keywords: National Health Information System, components of the NHIS, Lippeveld Model

Procedia PDF Downloads 393
165 Limiting Freedom of Expression to Fight Radicalization: The 'Silencing' of Terrorists Does Not Always Allow Rights to 'Speak Loudly'

Authors: Arianna Vedaschi

Abstract:

This paper addresses the relationship between freedom of expression, national security and radicalization. Is it still possible to talk about a balance between the first two elements? Or, due to the intrusion of the third, is it more appropriate to consider freedom of expression as “permanently disfigured” by securitarian concerns? In this study, both the legislative and the judicial levels are taken into account, and the comparative method is employed in order to provide the reader with a complete framework of relevant issues and a workable set of solutions. The analysis moves from the finding that the tension between free speech and national security has become a major issue in democratic countries, whose very essence is continuously endangered by the ever-changing and multi-faceted threat of international terrorism. In particular, a change in terrorist groups' recruiting patterns, attracting more and more people by way of a cutting-edge communicative strategy, often employing sophisticated technology as a radicalization tool, has called on law-makers to modify their approach to dangerous speech. While traditional constitutional and criminal law used to punish speech only if it explicitly and directly incited the commission of a criminal action (the “cause-effect” model), so-called glorification offences, punishing mere ideological support for terrorism, often on the web, are becoming commonplace in the comparative scenario. Although this is a direct, and even somewhat understandable, consequence of the impending terrorist menace, this research shows many problematic issues connected to such a preventive approach. First, from a predominantly theoretical point of view, this trend negatively impacts the already blurred line between permissible and prohibited speech. Second, from a pragmatic point of view, such legislative tools are not always able to keep up with ongoing developments of both terrorist groups and their use of technology. In other words, there is a risk that such measures become outdated even before their application. Indeed, it seems hard to still talk about a proper balance: what was previously clearly perceived as a balancing of values (freedom of speech v. public security) has turned, in many cases, into a hierarchy with security at its apex. In light of these findings, this paper concludes that such a complex issue would perhaps be better dealt with through a combination of policies: not only criminalizing ‘terrorist speech,’ which should be relegated to a last-resort tool, but also acting at an even earlier stage, i.e., trying to prevent dangerous speech itself. This might be done by promoting social cohesion and the inclusion of minorities, so as to reduce the probability of people considering terrorist groups a “viable option” to deal with the lack of identification within their social contexts.

Keywords: radicalization, free speech, international terrorism, national security

Procedia PDF Downloads 177
164 The Markers -mm and dämmo in Amharic: Developmental Approach

Authors: Hayat Omar

Abstract:

Languages provide speakers with a wide range of linguistic units to organize and deliver information. There are several ways to verbally express the mental representations of events. According to the linguistic tools they have acquired, speakers select the one that produces the greatest communicative effect to convey their message. Our study focuses on two markers, -mm and dämmo, in Amharic (an Ethiopian Semitic language). Our aim is to examine, from a developmental perspective, how they are used by speakers. We seek to distinguish the communicative and pragmatic functions indicated by means of these markers. To do so, we created a corpus of sixty narrative productions from children aged 5-6, 7-8 and 10-12 years and from adult Amharic speakers. The experimental material we used to collect our data is the textless picture series 'Frog, Where Are You?'. Although -mm and dämmo are each used in specific contexts, they are sometimes analyzed as being interchangeable. The suffix -mm is complex and multifunctional: it marks the end of the negative verbal structure, it is found in the relative structure of the imperfect, it creates new words such as adverbials or pronouns, and it also serves to coordinate words and sentences and to mark the link between macro-propositions within a larger textual unit. -mm has been analyzed as a marker of insistence, a topic-shift marker, an element of concatenation, a contrastive focus marker, and a 'bisyndetic' coordinator. On the other hand, dämmo has a more limited function and has attracted less attention; the only approach we could find analyzes it as a 'monosyndetic' coordinator. Setting these two elements side by side made it possible to understand their distinctive functions and refine their description. When it comes to marking a referent, the choice of -mm or dämmo is not neutral, depending on whether the tagged argument is newly introduced, maintained, promoted or reintroduced. The presence of these morphemes explains the inter-phrastic link. The information is grasped by anaphora or presupposition: -mm points upstream while dämmo points downstream; the latter requires new information. The speaker uses -mm or dämmo according to what he assumes to be known to his interlocutors. The results show that, although all speakers use both -mm and dämmo, their scope varies across speakers and with age. dämmo is mainly used to mark a contrastive topic and to signal the concomitance of events. It is more common in young children's narratives (F(3,56) = 3.82, p < .01). Some values of -mm (additive) are acquired very early, while others appear rather late and increase with age (F(3,56) = 3.2, p < .03). The difficulty is due not only to its synthetic structure but primarily to the fact that it is multi-purpose and requires memory work. It highlights the constituent on which it operates to clarify how the message should be interpreted.

Keywords: acquisition, cohesion, connection, contrastive topic, contrastive focus, discourse marker, pragmatics

Procedia PDF Downloads 112
163 Strengthening Social and Psychological Resources - Project "Herausforderung" as a (Sports-) Pedagogical Concept in Adolescence

Authors: Kristof Grätz

Abstract:

Background: Coping with crisis situations (e.g., the identity crisis in adolescence) is omnipresent in today's socialization, and the ability to cope should be fostered from childhood. For this reason, students should be given the opportunity to create, endure and manage such crisis situations in a sporting context within the project “Herausforderung.” They prove themselves by working on a self-assigned task, accompanied by ‚coaches’, in a place outside their hometown. The aim of the project is to observe this process from a resource-oriented perspective. Health promotion, as called for by the WHO in the Ottawa Charter since 1986, includes strengthening psychosocial resources. These include cognitive, emotional and social potentials that contribute to improving quality of life, provide favourable conditions for coping with health burdens and enable people to influence their physical performance and well-being self-confidently and actively. A systematic strengthening of psychosocial resources leads to an improvement in mental health and contributes decisively to the regular implementation and long-term maintenance of this health behavior. Previous studies have already shown significant increases in self-concept following experiential educational measures [Fengler, 2007; Eberle & Fengler, 2018] and positive effects of experience-based school trips on the social competence of students [Reuker, 2009]. Method: The research project examines the influence of the project “Herausforderung” on psychosocial resources such as self-efficacy, self-concept, social support, and group cohesion. The students participating in the project are tested in a pre-post design in the context of the challenge. The test includes specific questions to capture the different psychosocial resources. For the measurement, modifications of existing scales with good item selectivity and reliability are largely used, so that acceptable item and scale values can be expected. Where necessary, the scales were adapted or shortened to the specific context in order to ensure a balanced relationship between reliability and test economy. Specifically, these are already tested scales such as the FRKJ 8-16, FSKN, GEQ, and F-SozU. The aim is to achieve a sample size of n ≥ 100. Conclusion: The project will be reviewed with regard to its effectiveness, and implications for resource-enhancing applications in sports settings will be given. Conclusions are drawn as to the extent to which specific experiential educational content in physical education can have a health-promoting effect on participants.

Keywords: children, education, health promotion, psychosocial resources

Procedia PDF Downloads 124
162 Analysis of Biomarkers Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients

Authors: Bliss Singhal

Abstract:

Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. There are millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved effective in reducing noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, the long short-term memory (LSTM) network outperformed the RNN in accuracy, and the convolutional neural network (CNN) achieved the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
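
The preprocessing steps named above can be sketched as follows: a zero-phase Butterworth bandpass filter followed by independent component analysis on the multichannel EEG. The band limits, sampling rate, and array shapes are assumptions, and artifact components would still need to be identified before reconstruction.

```python
# Bandpass filtering plus ICA on a multichannel EEG segment (illustrative parameters).
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

def bandpass(eeg, fs, low=0.5, high=40.0, order=4):
    """eeg: (n_channels, n_samples). Zero-phase Butterworth bandpass applied per channel."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=1)

fs = 256                                   # assumed sampling rate in Hz
eeg = np.load("segment.npy")               # assumed (n_channels, n_samples) array
filtered = bandpass(eeg, fs)

ica = FastICA(n_components=filtered.shape[0], random_state=0)
sources = ica.fit_transform(filtered.T)    # (n_samples, n_components)
# ...inspect `sources`, zero out artifact components, then invert the unmixing:
cleaned = ica.inverse_transform(sources).T
```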

Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels

Procedia PDF Downloads 58
161 Personalization of Context Information Retrieval Model via User Search Behaviours for Ranking Document Relevance

Authors: Kehinde Agbele, Longe Olumide, Daniel Ekong, Dele Seluwa, Akintoye Onamade

Abstract:

One major problem of most existing information retrieval systems (IRS) is that they provide the same access and retrieval results to individual users, based solely on the query terms the user issued to the system. When using an IRS, users often present search queries made of ad-hoc keywords. It is then up to the IRS to obtain a precise representation of the user's information need and the context of the information. In effect, the volume and range of Internet documents are growing exponentially, which makes it difficult for a user to obtain information that precisely matches the user's interest. Diverse combination techniques are used to achieve this goal. This is due, firstly, to the fact that users often do not present queries to the IRS that optimally represent the information they want, and secondly, to the fact that the measure of a document's relevance is highly subjective across users. In this paper, we address the problem by investigating the optimization of an IRS for individual information needs, developing algorithms that optimize the ranking of documents retrieved from the IRS. A two-fold approach is taken in order to retrieve domain-specific documents. Firstly, the context of information is modeled: the context of a query determines the relevance of retrieved information through personalization and context-awareness, so executing the same query in diverse contexts often leads to diverse result rankings based on user preferences. Secondly, the relevant context aspects are incorporated in a way that supports the knowledge domain representing users' interests. The use of evolutionary algorithms is incorporated to improve the effectiveness of the IRS. A context-based information retrieval system that learns individual needs from user-provided relevance feedback is developed, and its retrieval effectiveness is evaluated using precision and recall metrics. The results demonstrate how attributes of user interaction behavior can be used to improve IR effectiveness.
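
A toy sketch of the evolutionary idea is given below: a small genetic algorithm that learns per-feature weights of a linear ranking function from relevance feedback, scored by precision@k. The feature representation, feedback format, and GA settings are assumptions rather than the authors' algorithm.

```python
# Toy GA that tunes ranking weights from relevance feedback (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def precision_at_k(weights, doc_features, relevant, k=10):
    """doc_features[q]: (n_docs, n_feats) matrix; relevant[q]: set of relevant doc ids."""
    hits, total = 0, 0
    for q, feats in doc_features.items():
        ranking = np.argsort(feats @ weights)[::-1][:k]
        hits += len(set(ranking) & relevant[q])
        total += k
    return hits / total

def evolve(doc_features, relevant, n_feats, pop=40, gens=50, mut=0.1):
    population = rng.normal(size=(pop, n_feats))
    for _ in range(gens):
        fitness = np.array([precision_at_k(w, doc_features, relevant) for w in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]          # keep the best half
        idx = rng.integers(0, len(parents), pop - len(parents))
        children = parents[idx] + rng.normal(scale=mut, size=(pop - len(parents), n_feats))
        population = np.vstack([parents, children])                    # elitism + mutation
    scores = [precision_at_k(w, doc_features, relevant) for w in population]
    return population[int(np.argmax(scores))]

# Tiny synthetic demo: 3 queries, 50 docs with 4 features each
doc_features = {q: rng.normal(size=(50, 4)) for q in range(3)}
relevant = {q: set(rng.choice(50, size=8, replace=False).tolist()) for q in range(3)}
w = evolve(doc_features, relevant, n_feats=4)
print("weights:", np.round(w, 2), "P@10 =", round(precision_at_k(w, doc_features, relevant), 3))
```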

Keywords: context, document relevance, information retrieval, personalization, user search behaviors

Procedia PDF Downloads 438
160 Hypertension and Obesity: A Cross-National Comparison of BMI and Waist-Height Ratio

Authors: Adam M. Yates, Julie E. Byles

Abstract:

Hypertension has been identified as a prominent co-morbidity of obesity. To improve clinical intervention for hypertension, it is critical to identify metrics that most accurately reflect the risk of increased morbidity. Two of the most relevant and accurate measures of increased hypertension risk due to excess adipose tissue are Body Mass Index (BMI) and Waist-Height Ratio (WHtR). Previous research has examined these measures in cross-national and cross-ethnic studies but has most often relied on secondary means, such as meta-analysis, to identify and evaluate the efficacy of individual body mass measures. In this study, we instead use cross-sectional analysis to assess the cross-ethnic discriminative power of BMI and WHtR in predicting the risk of hypertension. Using the WHO SAGE survey, which collected anthropometric and biometric data from respondents in six middle-income countries (China, Ghana, India, Mexico, Russia, South Africa), we implement logistic regression to examine the discriminative power of measured BMI and WHtR in a known population of hypertensive and non-hypertensive respondents. We control for gender and age to identify whether optimum cut-off points that are adequately sensitive as tests for hypertension risk differ between groups. We report odds ratios (OR), relative risks (RR), and receiver operating characteristic (ROC) curves for each of the six SAGE countries. Consistent with the existing literature, results demonstrate that both WHtR and BMI are significant predictors of hypertension (p < .01). For these six countries, we find that cut-off points for WHtR may depend on gender, age, and ethnicity. While an optimum omnibus cut-point for WHtR may be 0.55, results also suggest that the gender and age relationship with WHtR may warrant the development of individual cut-offs to optimize health outcomes. Trends across multiple countries show that the optimum cut-point for WHtR increases with age while the area under the curve (AUROC) decreases for both men and women. Comparison between BMI and WHtR indicates that BMI may remain more robust than WHtR. Implications for public health policy are discussed.
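A minimal sketch of the type of analysis described, using synthetic data rather than the SAGE survey: logistic regression of hypertension on WHtR, the corresponding odds ratio, AUROC, and a Youden's J cut-point. The numbers it prints are illustrative only.

```python
# Synthetic example only; real SAGE estimates and cut-points will differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(42)
n = 2000
whtr = rng.normal(0.55, 0.08, n)                      # synthetic waist-height ratios
p = 1 / (1 + np.exp(-(-8 + 14 * whtr)))               # toy underlying risk model
hypertensive = rng.binomial(1, p)

model = LogisticRegression().fit(whtr.reshape(-1, 1), hypertensive)
odds_ratio_per_001 = np.exp(0.01 * model.coef_[0, 0])  # OR per 0.01 increase in WHtR

fpr, tpr, thresholds = roc_curve(hypertensive, whtr)   # WHtR itself as the test score
best = np.argmax(tpr - fpr)                            # Youden's J statistic
print(f"OR per 0.01 WHtR: {odds_ratio_per_001:.3f}")
print(f"AUROC: {roc_auc_score(hypertensive, whtr):.3f}")
print(f"Optimal WHtR cut-point: {thresholds[best]:.3f}")
```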

Keywords: hypertension, obesity, waist-height ratio, SAGE

Procedia PDF Downloads 452
159 Leveraging Automated and Connected Vehicles with Deep Learning for Smart Transportation Network Optimization

Authors: Taha Benarbia

Abstract:

The advent of automated and connected vehicles has revolutionized the transportation industry, presenting new opportunities to enhance the efficiency, safety, and sustainability of our transportation networks. This paper explores the integration of automated and connected vehicles into a smart transportation framework, leveraging deep learning techniques to optimize overall network performance. The first aspect addressed is the deployment of automated vehicles (AVs) within the transportation system. AVs offer numerous advantages, such as reduced congestion, improved fuel efficiency, and increased safety through advanced sensing and decision-making capabilities. The paper delves into the technical aspects of AVs, including their perception, planning, and control systems, highlighting the role of deep learning algorithms in enabling intelligent and reliable AV operations. Furthermore, the paper investigates the potential of connected vehicles (CVs) to create a seamless communication network between vehicles, infrastructure, and traffic management systems. By harnessing real-time data exchange, CVs enable proactive traffic management, adaptive signal control, and effective route planning. Deep learning techniques play a pivotal role in extracting meaningful insights from the vast amount of data generated by CVs, empowering transportation authorities to make informed decisions for optimizing network performance. The integration of deep learning with automated and connected vehicles paves the way for advanced transportation network optimization. Deep learning algorithms can analyze complex transportation data, including traffic patterns, demand forecasts, and dynamic congestion scenarios, to optimize routing, reduce travel times, and enhance overall system efficiency. The paper presents case studies and simulations demonstrating the effectiveness of deep learning-based approaches in achieving significant improvements in network performance metrics.
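As a small, hypothetical illustration of the kind of deep learning model mentioned above, the sketch below trains a tiny LSTM to forecast the next value of a synthetic traffic-flow series from a short history window; it is not drawn from the paper's case studies.

```python
# Illustrative sketch only: a tiny LSTM forecasting the next traffic-flow value
# from a 24-step window of synthetic data (not from the paper's simulations).
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
flow = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)

window = 24
X = np.stack([flow[i:i + window] for i in range(len(flow) - window)])
y = flow[window:]
X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, window, 1)
y = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)   # (N, 1)

class FlowLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict from the last time step

model = FlowLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                     # a few full-batch epochs for illustration
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: mse={loss.item():.4f}")
```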

Keywords: automated vehicles, connected vehicles, deep learning, smart transportation network

Procedia PDF Downloads 48
158 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks

Authors: Mst Shapna Akter, Hossain Shahriar

Abstract:

One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or disclosed publicly. These flaws are highly likely to be exploited, leading to system compromise, data leakage, or denial of service. Large volumes of open-source C and C++ code are now available, making it possible to build a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove irrelevant components and shorten dependencies. Moreover, we retain semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning models such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model that can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and features containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when semantic and syntactic information is used as features, but they require longer execution time, as the word embedding algorithm adds complexity to the overall system.
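A highly simplified sketch of the classification stage, assuming tokenized functions and a trainable embedding layer in place of the GloVe/fastText vectors and minimal intermediate representation used in the paper; the vocabulary size, sequence length, and data are placeholders.

```python
# Simplified BiLSTM classifier over token IDs (hypothetical vocabulary and data);
# the paper's pipeline additionally uses pretrained embeddings and a minimal IR.
import torch
import torch.nn as nn

VOCAB_SIZE, MAX_LEN, EMBED_DIM, HIDDEN = 5000, 200, 100, 64

class BiLSTMVulnClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM, padding_idx=0)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * HIDDEN, 2)         # vulnerable vs. non-vulnerable

    def forward(self, token_ids):
        emb = self.embed(token_ids)                # (batch, seq, embed)
        out, _ = self.lstm(emb)                    # (batch, seq, 2*hidden)
        return self.fc(out[:, -1, :])              # logits from the last time step

model = BiLSTMVulnClassifier()
dummy_batch = torch.randint(1, VOCAB_SIZE, (8, MAX_LEN))  # 8 fake token sequences
logits = model(dummy_batch)
print(logits.shape)                                # torch.Size([8, 2])
```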

Keywords: cyber security, vulnerability detection, neural networks, feature extraction

Procedia PDF Downloads 56
157 Optimizing Wind Turbine Blade Geometry for Enhanced Performance and Durability: A Computational Approach

Authors: Nwachukwu Ifeanyi

Abstract:

Wind energy is a vital component of the global renewable energy portfolio, with wind turbines serving as the primary means of harnessing this abundant resource. However, the efficiency and stability of wind turbines remain critical challenges in maximizing energy output and ensuring long-term operational viability. This study proposes a comprehensive approach utilizing computational aerodynamics and aeromechanics to optimize wind turbine performance across multiple objectives. The proposed research aims to integrate advanced computational fluid dynamics (CFD) simulations with structural analysis techniques to enhance the aerodynamic efficiency and mechanical stability of wind turbine blades. By leveraging multi-objective optimization algorithms, the study seeks to simultaneously optimize aerodynamic performance metrics such as lift-to-drag ratio and power coefficient while ensuring structural integrity and minimizing fatigue loads on the turbine components. Furthermore, the investigation will explore the influence of various design parameters, including blade geometry, airfoil profiles, and turbine operating conditions, on the overall performance and stability of wind turbines. Through detailed parametric studies and sensitivity analyses, valuable insights into the complex interplay between aerodynamics and structural dynamics will be gained, facilitating the development of next-generation wind turbine designs. Ultimately, this research endeavours to contribute to the advancement of sustainable energy technologies by providing innovative solutions to enhance the efficiency, reliability, and economic viability of wind power generation systems. The findings have the potential to inform the design and optimization of wind turbines, leading to increased energy output, reduced maintenance costs, and greater environmental benefits in the transition towards a cleaner and more sustainable energy future.
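As a toy illustration of multi-objective design screening (not the CFD-based workflow proposed in the study), the sketch below samples hypothetical twist and chord parameters, evaluates two invented surrogate objectives, and keeps the Pareto-optimal designs.

```python
# Toy illustration only: the objective functions are invented surrogates, not
# aerodynamic or structural models from the study.
import numpy as np

rng = np.random.default_rng(1)
designs = rng.uniform([0.0, 0.5], [15.0, 3.0], size=(500, 2))  # [twist deg, chord m]

def objectives(d):
    twist, chord = d
    power_proxy = np.sin(np.radians(twist)) * chord     # maximize (toy stand-in)
    load_proxy = 0.05 * chord ** 2 + 0.01 * twist       # minimize (toy stand-in)
    return power_proxy, load_proxy

scores = np.array([objectives(d) for d in designs])

def is_pareto(scores):
    # Keep designs not dominated by any other (maximize col 0, minimize col 1).
    keep = np.ones(len(scores), dtype=bool)
    for i, (p_i, l_i) in enumerate(scores):
        dominated = np.any((scores[:, 0] >= p_i) & (scores[:, 1] <= l_i) &
                           ((scores[:, 0] > p_i) | (scores[:, 1] < l_i)))
        keep[i] = not dominated
    return keep

pareto_designs = designs[is_pareto(scores)]
print(f"{len(pareto_designs)} Pareto-optimal designs out of {len(designs)}")
```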

Keywords: computation, robotics, mathematics, simulation

Procedia PDF Downloads 23
156 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim N. Mohammed, Mohammed Y. Esmail

Abstract:

For decades, medical imaging was dominated by the use of costly film media for the review and archiving of medical examinations; however, developments in network technologies and the widespread adoption of the Digital Imaging and Communications in Medicine (DICOM) standard have enabled an alternative approach based on the World Wide Web. Web technologies have been used successfully in telemedicine applications, and here they are combined with DICOM to design a web-based, open-source DICOM viewer. The web server allows querying and retrieval of images, which are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing was created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute images within healthcare facilities. The system offers several advantages over conventional picture archiving and communication systems (PACS): it is easy to install and maintain, platform-independent, displays and manipulates images efficiently, and is user-friendly and easy to integrate with existing systems that already use web technologies. A wavelet-based image compression technique is used, in which a 2-D discrete wavelet transform decomposes the image and the thresholded wavelet coefficients are entropy-encoded for transmission, reducing transmission time, storage cost, and required capacity. Compression performance was estimated using image quality metrics such as mean square error (MSE), peak signal-to-noise ratio (PSNR), and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
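A minimal sketch of the compression step, assuming the 'coif3' filter named in the abstract, a synthetic image in place of a DICOM slice, and a simple percentile threshold; the MSE, PSNR, and compression figures it produces will differ from those reported.

```python
# Wavelet compression sketch on synthetic data; numbers differ from the paper's results.
import numpy as np
import pywt

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(256, 256)).astype(float)  # stand-in for a DICOM slice

coeffs = pywt.wavedec2(image, wavelet="coif3", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)

threshold = np.percentile(np.abs(arr), 90)         # keep roughly the largest 10% of coefficients
arr_t = pywt.threshold(arr, threshold, mode="hard")

reconstructed = pywt.waverec2(
    pywt.array_to_coeffs(arr_t, slices, output_format="wavedec2"), wavelet="coif3"
)[: image.shape[0], : image.shape[1]]

mse = np.mean((image - reconstructed) ** 2)
psnr = 10 * np.log10(255 ** 2 / mse)
zeroed = 100 * (1 - np.count_nonzero(arr_t) / arr.size)   # share of coefficients discarded
print(f"MSE={mse:.2f}  PSNR={psnr:.2f} dB  zeroed coefficients={zeroed:.1f}%")
```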

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN

Procedia PDF Downloads 139
155 A Review of Benefit-Risk Assessment over the Product Lifecycle

Authors: M. Miljkovic, A. Urakpo, M. Simic-Koumoutsaris

Abstract:

Benefit-risk assessment (BRA) is a valuable tool applied at multiple stages of a medicine's lifecycle, and it can be conducted in a variety of ways. The aim was to summarize current BRA methods used during approval decisions and in post-approval settings and to outline possible future directions. Relevant reviews, recommendations, and guidelines published in the medical literature and by regulatory agencies over the past five years were examined. BRA involves the review of two dimensions: the dimension of benefits (determined mainly by therapeutic efficacy) and the dimension of risks (comprising the safety profile of a drug). Regulators, industry, and academia have developed various approaches, ranging from descriptive textual (qualitative) to decision-analytic (quantitative) models, to facilitate the BRA of medicines during the product lifecycle (from Phase I trials, through the authorization procedure, to post-marketing surveillance and health technology assessment for inclusion in public formularies). These approaches can be classified into the following categories: stepwise structured approaches (frameworks); measures for benefits and risks that are usually endpoint-specific (metrics); simulation techniques and meta-analysis (estimation techniques); and utility survey techniques to elicit stakeholders' preferences (utilities). All these approaches share two common goals: to assist the analysis and to improve the communication of decisions, but each has its own specific strengths and limitations. Before using any method, its utility, its complexity, the extent to which it is established, and the ease of interpreting its results should be considered. Despite widespread and long-standing use, BRA is subject to debate, suffers from a number of limitations, and is still under development. The use of formal, systematic structured approaches to BRA for regulatory decision-making, and of quantitative methods to support BRA during the product lifecycle, is a standard practice in medicine that is subject to continuous improvement and modernization, not only in methodology but also in cooperation between organizations.
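As a purely hypothetical illustration of the decision-analytic (quantitative) end of the spectrum, the sketch below computes a simple weighted, MCDA-style benefit-risk score; the criteria, weights, and scores are invented and not taken from the reviewed literature.

```python
# Hypothetical MCDA-style weighted benefit-risk score; all values are invented.
criteria = {
    # name                    (weight, score 0-100, higher = more favorable)
    "symptom relief":         (0.35, 70),
    "quality of life":        (0.15, 60),
    "serious adverse events": (0.30, 45),   # risk criteria scored so that higher = safer
    "tolerability":           (0.20, 55),
}

total_weight = sum(w for w, _ in criteria.values())
assert abs(total_weight - 1.0) < 1e-9, "weights should sum to 1"

overall = sum(w * s for w, s in criteria.values())
print(f"Weighted benefit-risk score: {overall:.1f} / 100")
```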

Keywords: benefit-risk assessment, benefit-risk profile, product lifecycle, quantitative methods, structured approaches

Procedia PDF Downloads 119
154 Threat Modeling Methodology for Supporting Industrial Control Systems Device Manufacturers and System Integrators

Authors: Raluca Ana Maria Viziteu, Anna Prudnikova

Abstract:

Industrial control systems (ICS) have received much attention in recent years due to the convergence of information technology (IT) and operational technology (OT), which has increased the interdependence of the safety and security issues to be considered. These issues require ICS-tailored solutions, which led to the need to create a methodology for supporting ICS device manufacturers and system integrators in carrying out threat modeling of embedded ICS devices in a way that guarantees the quality of the identified threats and minimizes subjectivity in the threat identification process. To research the possibility of creating such a methodology, a set of existing standards, regulations, papers, and publications related to threat modeling in the ICS sector and other sectors was reviewed to identify the various existing methodologies and methods used in threat modeling. Furthermore, the most popular ones were tested in an exploratory phase on a specific PLC device. The outcome of this exploratory phase was used as a basis for defining the specific characteristics of ICS embedded devices and their deployment scenarios, identifying the factors that introduce subjectivity into the threat modeling process for such devices, and defining metrics for evaluating the minimum quality requirements of identified threats associated with the deployment of the devices in existing infrastructures. The threat modeling methodology was then created based on the results of these steps. The usability of the methodology was evaluated through a set of standardized threat modeling requirements and a standardized comparison method for threat modeling methodologies; the outcomes of these verification methods confirm that the methodology is effective. The full paper includes the outcome of the research on the different threat modeling methodologies that can be used in OT, their comparison, and the results of implementing each of them in practice on a PLC device. This research is then used to build a threat modeling methodology tailored to OT environments, described in detail. Moreover, the paper includes the results of evaluating the created methodology against a set of parameters specifically created to rate threat modeling methodologies.
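As a hypothetical illustration of what a minimum-quality check on identified threats might look like, the sketch below defines a minimal threat record and accepts it only when all fields are filled in; the field names and the rule are invented, not taken from the methodology.

```python
# Hypothetical illustration only: a minimal threat record with a completeness
# check, loosely inspired by the idea of minimum quality requirements for
# identified threats. Field names and acceptance rule are invented.
from dataclasses import dataclass, fields

@dataclass
class ThreatRecord:
    asset: str             # e.g. "PLC engineering port"
    entry_point: str       # how an attacker reaches the asset
    stride_category: str   # Spoofing, Tampering, Repudiation, ...
    impact: str            # consequence if the threat is realized
    mitigation: str        # proposed countermeasure

def meets_minimum_quality(threat: ThreatRecord) -> bool:
    # Accept a threat only if every field is filled in (non-empty).
    return all(str(getattr(threat, f.name)).strip() for f in fields(threat))

t = ThreatRecord("PLC engineering port", "exposed Ethernet service",
                 "Tampering", "unauthorized logic download", "")
print(meets_minimum_quality(t))   # False: the mitigation field is missing
```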

Keywords: device manufacturers, embedded devices, industrial control systems, threat modeling

Procedia PDF Downloads 58
153 Improving Cell Type Identification of Single Cell Data by Iterative Graph-Based Noise Filtering

Authors: Annika Stechemesser, Rachel Pounds, Emma Lucas, Chris Dawson, Julia Lipecki, Pavle Vrljicak, Jan Brosens, Sean Kehoe, Jason Yap, Lawrence Young, Sascha Ott

Abstract:

Advances in technology now make it possible to retrieve the genetic information of thousands of single cancerous cells. One of the key challenges in single-cell analysis of cancerous tissue is to determine the number of different cell types and their characteristic genes within the sample, to better understand tumors and their reaction to different treatments. For this analysis to be possible, it is crucial to filter out background noise, as it can severely blur the downstream analysis and give misleading results. In-depth analysis of state-of-the-art filtering methods for single-cell data showed that, in some cases, they do not separate noisy and normal cells sufficiently. We introduce an algorithm that filters and clusters single-cell data simultaneously, without relying on particular genes or thresholds chosen by eye. It detects communities in a shared nearest neighbor similarity network, which captures the similarities and dissimilarities of the cells, by optimizing the modularity, and then identifies and removes vertices with weak cluster membership. This strategy is based on the fact that noisy data instances are very likely to be similar to true cell types but do not match any of them well. Once the clustering is complete, we apply a set of evaluation metrics at the cluster level and accept or reject clusters based on the outcome. The performance of our algorithm was tested on three datasets and led to convincing results. We were able to replicate the results on a peripheral blood mononuclear cell dataset. Furthermore, we applied the algorithm to two samples of ovarian cancer from the same patient, taken before and after chemotherapy. Comparing the standard approach to our algorithm, we found a hidden cell type in the ovarian post-chemotherapy data with interesting marker genes that are potentially relevant for medical research.
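A simplified sketch of the general idea, not the authors' exact algorithm: build a shared-nearest-neighbor graph over cells, detect communities by modularity optimization, and flag cells that connect only weakly to their own community. The data, neighborhood size, community-detection routine, and threshold are assumptions.

```python
# Simplified SNN-graph clustering with noise flagging (illustrative parameters only).
import numpy as np
import networkx as nx
from sklearn.neighbors import NearestNeighbors
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
cells = rng.normal(size=(300, 50))         # stand-in for a cells x genes expression matrix

k = 15
knn = NearestNeighbors(n_neighbors=k).fit(cells)
neighbors = knn.kneighbors(cells, return_distance=False)

G = nx.Graph()
G.add_nodes_from(range(len(cells)))
for i in range(len(cells)):
    for j in neighbors[i]:
        if i == j:
            continue
        shared = len(set(neighbors[i]) & set(neighbors[j]))
        if shared > 0:
            G.add_edge(i, j, weight=shared / k)   # SNN similarity as edge weight

communities = greedy_modularity_communities(G, weight="weight")
membership = {node: c for c, comm in enumerate(communities) for node in comm}

# Flag cells whose edge weight mostly points outside their own community.
noisy = []
for node in G.nodes:
    w_in = sum(d["weight"] for _, v, d in G.edges(node, data=True)
               if membership[v] == membership[node])
    w_all = sum(d["weight"] for _, v, d in G.edges(node, data=True))
    if w_all == 0 or w_in / w_all < 0.5:          # arbitrary illustrative threshold
        noisy.append(node)

print(f"{len(communities)} communities, {len(noisy)} cells flagged as potential noise")
```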

Keywords: cancer research, graph theory, machine learning, single cell analysis

Procedia PDF Downloads 83