Search results for: automatic reporting
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1659

759 Requirements Management in Agile

Authors: Ravneet Kaur

Abstract:

The concept of Agile Requirements Engineering and Management is not new. However, the struggle to figure out how a traditional Requirements Management Process fits within an Agile framework remains complex. This paper describes a process that merges an organization’s traditional Requirements Management Process cleanly into the Agile Software Development Process. The process provides traceability from the Product Backlog to external documents on one hand and to User Stories on the other. It also gives sufficient evidence, in the form of various statistics and reports, that the system will deliver the right functionality with good quality. In a nutshell, by overlaying a process on top of Agile without disturbing its agility, we obtain synergistic benefits in terms of productivity, profitability, reporting, and end-to-end visibility for all stakeholders. The framework can be used for just-in-time requirements definition or to build a repository of requirements for future use. The goal is to make sure that the business (specifically, the product owner) can clearly articulate what needs to be built and define what constitutes high quality. To accomplish this, the requirements cycle follows a Scrum-like process that mirrors the development cycle but stays two to three steps ahead. The aim is to create a process by which requirements can be thoroughly vetted, organized, and communicated in a manner that is iterative, timely, and quality-focused. Agile is quickly becoming the most popular way of developing software because it fosters continuous improvement, time-boxed development cycles, and faster delivery of value to end users. That value is driven to a large extent by the quality and clarity of the requirements that feed the software development process. An agile, lean, and timely approach to requirements as the starting point helps to ensure that the process is optimized.

Keywords: requirements management, Agile

Procedia PDF Downloads 365
758 Lower Extremity Injuries and Landing Kinematics and Kinetics in University-Level Netball Players

Authors: Henriette Hammill

Abstract:

Background: Safe landing in netball is fundamental. Research on the biomechanics of multidirectional landings is lacking, especially among netball players. Furthermore, few studies have reported associations between lower extremity injuries and landing kinematics and kinetics in university-level netball players. Objectives: The aim is to determine the relationships between lower extremity injuries and landing kinematics and kinetics in university-level netball players during a single season. Methods: This cross-sectional repeated-measures study consisted of ten university-level female netball players. Injury prevalence data were collected during the 2022 netball season. Kinematic and kinetic data were collected during multidirectional single-leg landing trials. Results: Generally, the ankle strength of the netball players was below average. There was evidence of negative correlations between ankle range of motion (ROM) and muscle activity amplitudes. A lack of evidence precluded the conclusions that lower extremity dominance was a predisposing factor for injury and that any specific body part was most likely to be injured among netball players. Conclusion: Landing forces and muscle activity are direction-dependent, especially for the dominant extremity. Lower extremity strength and neuromuscular control (NMC) across multiple jump-landing directions should be an area of focus for female netball players.

Keywords: netball players, landing kinetics, landing kinematics, lower extremity

Procedia PDF Downloads 39
757 Digital Retinal Images: Background and Damaged Areas Segmentation

Authors: Eman A. Gani, Loay E. George, Faisel G. Mohammed, Kamal H. Sager

Abstract:

Digital retinal images are well suited to automatic screening systems for diabetic retinopathy. Unfortunately, a significant percentage of these images are of poor quality, which hinders further analysis, due to many factors (such as patient movement, inadequate or non-uniform illumination, acquisition angle, and retinal pigmentation). Retinal images of poor quality need to be enhanced before features and abnormalities are extracted. Segmentation of the retinal image is essential for this purpose: it is employed to smooth and strengthen the image by separating the background and damaged areas from the overall image, resulting in retinal image enhancement and less processing time. In this paper, methods for segmenting colored retinal images are proposed to improve the quality of retinal image diagnosis. The methods generate two segmentation masks: a background segmentation mask for extracting the background area and a poor-quality mask for removing the noisy areas from the retinal image. The standard retinal image databases DIARETDB0, DIARETDB1, STARE, and DRIVE, together with some images obtained from ophthalmologists, have been used to validate the proposed segmentation technique. Experimental results indicate that the introduced methods are effective and can lead to high segmentation accuracy.
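The two masks described here can be illustrated with simple intensity thresholding; this is a minimal sketch, not the paper's actual method, and the threshold values and function name are assumptions chosen for illustration only:

```python
def segmentation_masks(gray, bg_threshold=30, noise_threshold=240):
    """Generate a background mask (very dark pixels outside the fundus area)
    and a poor-quality mask (saturated, washed-out pixels) for a grayscale
    retinal image given as a list of rows of 0-255 intensities."""
    background = [[px < bg_threshold for px in row] for row in gray]
    poor_quality = [[px > noise_threshold for px in row] for row in gray]
    return background, poor_quality
```

Pixels flagged by either mask would be excluded before feature extraction, shrinking the region the enhancement step has to process.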

Keywords: retinal images, fundus images, diabetic retinopathy, background segmentation, damaged areas segmentation

Procedia PDF Downloads 395
756 An Accurate Brain Tumor Segmentation for High Graded Glioma Using Deep Learning

Authors: Sajeeha Ansar, Asad Ali Safi, Sheikh Ziauddin, Ahmad R. Shahid, Faraz Ahsan

Abstract:

Gliomas are among the most challenging and aggressive types of tumors, appearing in different sizes and locations and with scattered boundaries. Convolutional neural networks (CNNs) are an efficient deep learning approach with an outstanding capability for solving image analysis problems. A fully automatic deep-learning-based 2D-CNN model for brain tumor segmentation is presented in this paper. We used small convolution filters (3 x 3) to make the architecture deeper and increased the number of convolutional layers for efficient learning of complex features from a large dataset. We achieved better results by pushing the depth up to 16 convolutional layers for the HGG model, and obtained reliable, accurate results through fine-tuning of the dataset and hyper-parameters. Pre-processing for this model includes generation of a brain pipeline, intensity normalization, bias correction, and data augmentation. We used the BRATS-2015 dataset, and the Dice Similarity Coefficient (DSC) is used as the performance measure for evaluating the proposed method. Our method achieved DSC scores of 0.81 for the complete, 0.79 for the core, and 0.80 for the enhanced tumor regions. These results are comparable with methods that have already implemented 2D CNN architectures.
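The Dice Similarity Coefficient used here compares a predicted tumor mask against the ground truth; a minimal sketch over voxel index sets (the set-based representation is an illustrative choice, not taken from the paper):

```python
def dice_coefficient(pred_voxels, truth_voxels):
    """Dice Similarity Coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks,
    represented here as collections of voxel indices."""
    pred, truth = set(pred_voxels), set(truth_voxels)
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))
```

A score of 1.0 means perfect overlap, so the reported values of 0.79 to 0.81 indicate roughly 80% agreement between predicted and true tumor regions.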

Keywords: brain tumor segmentation, convolutional neural networks, deep learning, HGG

Procedia PDF Downloads 250
755 Stigmatising AIDS: A Content Analysis on HIV/AIDS-Related News Articles Published in Three Major Philippine Broadsheets

Authors: L. Dinco John Christian, C. Ramos Camille, C. Reyes Maria Eloisa

Abstract:

HIV/AIDS has been dubbed one of the most stigmatised diseases of the recent century. Nelson Mandela pointed out that PLWHA (People Living With HIV/AIDS) are not killed by the disease, but by the stigma surrounding it. Despite the numerous studies on HIV/AIDS stigmatisation globally, little is known about how evident and how powerful the media can be in framing the views of readers when it comes to print in the Philippine context. This study dealt with a quantitative content analysis of HIV/AIDS-related news articles published by the top three broadsheets, namely the Philippine Daily Inquirer, Manila Bulletin, and the Philippine Star, over the span of one year. The HIV/AIDS-related news articles were collected and coded according to their tones, stigmatising statements/terminologies, and news prominence. Analysis of the results supported the researchers’ objectives: (1) that there are different tones of HIV/AIDS-related news articles, (2) that there is a significant relation between stigmatising statements/terminologies and tone, and (3) that the technical properties of HIV/AIDS-related news articles determine news prominence. Results revealed that, although the broadsheets were overtly reporting HIV/AIDS in anti-stigma-toned articles, they were covertly suggesting stigma through the stigmatising statements/terminologies present in them, rather than plainly disseminating current medical knowledge about the transmission and treatments of the disease; the technical properties of the HIV/AIDS-related news articles determined their prominence.

Keywords: HIV, AIDS, newspaper, content analysis

Procedia PDF Downloads 430
754 Distributed Processing for Content Based Lecture Video Retrieval on Hadoop Framework

Authors: U. S. N. Raju, Kothuri Sai Kiran, Meena G. Kamal, Vinay Nikhil Pabba, Suresh Kanaparthi

Abstract:

There is a huge amount of lecture video data available for public use, and many more lecture videos are being created and uploaded every day. Searching for videos on required topics in this huge database is a challenging task, so an efficient method for video retrieval is needed. An approach for automated video indexing and video search in large lecture video archives is presented. As the amount of video lecture data is huge, it is very inefficient to do the processing in a centralized computation framework; hence, the Hadoop framework for distributed computing on big video data is used. The first step in the process is automatic video segmentation and key-frame detection to offer a visual guideline for video content navigation. In the next step, we extract textual metadata by applying video Optical Character Recognition (OCR) technology to key-frames. The OCR output and detected slide text line types are used for keyword extraction, by which both video- and segment-level keywords are extracted for content-based video browsing and search. The performance of the indexing process can be improved for a large database by using distributed computing on the Hadoop framework.
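The segment-level keyword indexing step fits the map/reduce pattern that Hadoop parallelizes; the following is a pure-Python simulation of that pattern under illustrative assumptions (the stopword list and function names are not the Hadoop API or the paper's code):

```python
def map_phase(segment_id, ocr_text):
    """Map: emit (keyword, segment_id) pairs from the OCR text of one key-frame."""
    stopwords = {"the", "a", "an", "of", "and", "in", "is", "to"}
    for word in ocr_text.lower().split():
        if word not in stopwords:
            yield word, segment_id

def reduce_phase(pairs):
    """Reduce: group segment ids by keyword to build an inverted index."""
    index = {}
    for word, seg in pairs:
        index.setdefault(word, set()).add(seg)
    return index

def build_index(segments):
    """segments: {segment_id: ocr_text} for one lecture video."""
    pairs = []
    for seg_id, text in segments.items():
        pairs.extend(map_phase(seg_id, text))
    return reduce_phase(pairs)
```

In an actual Hadoop job, the map calls would run in parallel across the cluster and the framework would handle the shuffle that groups pairs by keyword before the reduce.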

Keywords: video lectures, big video data, video retrieval, hadoop

Procedia PDF Downloads 527
753 Segmentation of Arabic Handwritten Numeral Strings Based on Watershed Approach

Authors: Nidal F. Shilbayeh, Remah W. Al-Khatib, Sameer A. Nooh

Abstract:

Arabic offline handwriting recognition is considered one of the most challenging topics. Arabic handwritten numeral strings are used to automate systems that deal with numbers, such as postal codes, bank account numbers, and numbers on car plates. Segmentation of connected numerals is the main bottleneck in a handwritten numeral recognition system; solving it can in turn increase the speed and efficiency of the recognition system. In this paper, we propose algorithms for automatic segmentation and feature extraction of Arabic handwritten numeral strings based on the watershed approach. The algorithms have been designed and implemented to achieve the main goal of segmenting and extracting strings of numeral digits written by hand, especially the courtesy amount on bank checks. The segmentation algorithm partitions the string into multiple regions that can be associated with the properties of one or more criteria. The numeral extraction algorithm separates the numeral string into individual digits. Both the segmentation and feature extraction algorithms have been tested successfully and efficiently on all types of numerals.

Keywords: handwritten numerals, segmentation, courtesy amount, feature extraction, numeral recognition

Procedia PDF Downloads 379
752 Air-Coupled Ultrasonic Testing for Non-Destructive Evaluation of Various Aerospace Composite Materials by Laser Vibrometry

Authors: J. Vyas, R. Kazys, J. Sestoke

Abstract:

Air-coupled ultrasonic testing is a contactless ultrasonic measurement approach that has become widespread for material characterization in the aerospace industry, where the lightest possible weight is always essential without compromising durability. To achieve these requirements, composite materials are widely used. This paper presents an analysis of air-coupled ultrasonics for composite materials such as CFRP (Carbon Fibre Reinforced Polymer), GLARE (Glass Fiber Metal Laminate), and honeycombs used in the design of modern aircraft. Laser vibrometry could be a key source of characterization for aerospace components. The fundamentals of air-coupled ultrasonics, including principles, working modes, and the transducer arrangements used for this purpose, are also recounted in brief. The emphasis of this paper is on NDT techniques developed around ultrasonic guided wave applications and the possibilities of using laser vibrometry on different materials with non-contact measurement of guided waves. A 3D assessment technique is described that employs a single-point laser head with automatic scanning relocation of the material to assess mechanical displacement, including the pros and cons of composite materials for aerospace applications with defects and delaminations.

Keywords: air-coupled ultrasonics, contactless measurement, laser interferometry, NDT, ultrasonic guided waves

Procedia PDF Downloads 235
751 Potential Use of Local Materials as Synthesizing One Part Geopolymer Cement

Authors: Areej Almalkawi, Sameer Hamadna, Parviz Soroushian, Nalin Darsana

Abstract:

The work on indigenous binders in this paper focused on the following indigenous raw materials: red clay, red lava and pumice (as primary aluminosilicate precursors), wood ash and gypsum (as supplementary minerals), and sodium sulfate and lime (as alkali activators). The experimental methods used for evaluation of these indigenous raw materials included laser granulometry, x-ray fluorescence (XRF) spectroscopy, and chemical reactivity. Formulations were devised for transforming these raw materials into alkali aluminosilicate-based hydraulic cements. These formulations were processed into hydraulic cements via simple heating and milling actions to render thermal activation, mechanochemical and size reduction effects. The resulting hydraulic cements were subjected to laser granulometry, heat of hydration and reactivity tests. These cements were also used to prepare mortar mixtures, which were evaluated via performance of compressive strength tests. The measured values of strength were correlated with the reactivity, size distribution and microstructural features of raw materials. Some of the indigenous hydraulic cements produced in this reporting period yielded viable levels of compressive strength. The correlation trends established in this work are being evaluated for development of simple and thorough methods of qualifying indigenous raw materials for use in production of indigenous hydraulic cements.

Keywords: one-part geopolymer cement, aluminosilicate precursors, thermal activation, mechanochemical

Procedia PDF Downloads 312
750 Data Modeling and Calibration of In-Line Pultrusion and Laser Ablation Machine Processes

Authors: David F. Nettleton, Christian Wasiak, Jonas Dorissen, David Gillen, Alexandr Tretyak, Elodie Bugnicourt, Alejandro Rosales

Abstract:

In this work, preliminary results are given for the modeling and calibration of two in-line processes, pultrusion and laser ablation, using machine learning techniques. The end product of the processes is the core of a medical guidewire, manufactured to comply with a user specification of diameter and flexibility. An ensemble approach is followed, which requires training several models. Two state-of-the-art machine learning algorithms are benchmarked: Kernel Recursive Least Squares (KRLS) and Support Vector Regression (SVR). The final objective is to build a precise digital model of the pultrusion and laser ablation processes in order to calibrate the resulting diameter and flexibility of the medical guidewire end product, while taking into account the friction on the forming die. The result is an ensemble of models whose output is within a strict required tolerance and which covers the required range of diameter and flexibility of the guidewire end product. The modeling and automatic calibration of complex in-line industrial processes is a key aspect of the Industry 4.0 movement for cyber-physical systems.
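KRLS is an online kernel method; as a batch stand-in, a minimal kernel ridge regression sketch shows the kind of nonlinear process model being fitted here (the RBF kernel, hyperparameter values, and function names are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of row vectors."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def fit_kernel_ridge(X, y, lam=1e-3, gamma=1.0):
    """Solve (K + lam*I) alpha = y; alpha defines the fitted function."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    """Predicted output (e.g. guidewire diameter) at new process settings."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

Calibration then amounts to searching the input space of process settings for a point whose predicted diameter and flexibility fall inside the required tolerance.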

Keywords: calibration, data modeling, industrial processes, machine learning

Procedia PDF Downloads 288
749 Design of Target Selection for Pedestrian Autonomous Emergency Braking System

Authors: Tao Song, Hao Cheng, Guangfeng Tian, Chuang Xu

Abstract:

An autonomous emergency braking (AEB) system is an advanced driving assistance system that enables vehicle and pedestrian collision avoidance to improve vehicle safety. Because pedestrian targets are small and highly mobile, pedestrian AEB systems currently face greater technical difficulties and higher functional requirements. In this paper, a method of pedestrian target selection based on a variable-width funnel is proposed. Based on the current and predicted positions of pedestrians, the relative position of vehicle and pedestrian at the time of collision is calculated, and different braking strategies are adopted according to the hazard level of the pedestrian collision. Under CNCAP standard operating conditions, the method considering only the pedestrian's current position is compared with the method considering the predicted position, and the fixed-width funnel is compared with the variable-width funnel. The results show that, with the variable-width funnel, pedestrian target selection is more accurate and the AEB system intervenes at a more reasonable time, because the predicted position of the pedestrian target and the vehicle's lateral motion are taken into account.
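The interplay of TTC, funnel width, and braking strategy can be sketched as follows; all numeric thresholds and the widening rate are hypothetical values for illustration, not the paper's calibrated parameters:

```python
def time_to_collision(distance_m, closing_speed_mps):
    """TTC: time until impact at the current closing speed."""
    if closing_speed_mps <= 0:
        return float("inf")  # vehicle is not closing in on the pedestrian
    return distance_m / closing_speed_mps

def in_variable_width_funnel(lateral_offset_m, ttc_s,
                             base_half_width_m=1.0, widen_rate_mps=0.3):
    """Variable-width funnel: the corridor half-width grows with TTC, so a
    pedestrian who is farther away in time (and may still move laterally)
    is kept as a candidate target."""
    return abs(lateral_offset_m) <= base_half_width_m + widen_rate_mps * ttc_s

def braking_strategy(ttc_s, warn_ttc_s=2.0, full_brake_ttc_s=0.8):
    """Escalate the intervention as the hazard level rises."""
    if ttc_s <= full_brake_ttc_s:
        return "full_brake"
    if ttc_s <= warn_ttc_s:
        return "partial_brake"
    return "monitor"
```

With a fixed-width funnel the same pedestrian at 1.5 m lateral offset would be rejected regardless of TTC; widening with TTC is what keeps mobile, distant targets in consideration.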

Keywords: automatic emergency braking system, pedestrian target selection, TTC, variable width funnel

Procedia PDF Downloads 153
748 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure

Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer

Abstract:

The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed from video data with the aim of uncovering so-called “multimodal gestalts”, patterns of linguistic and embodied conduct that reoccur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using video) have so far depended on time- and resource-intensive manual transcription of each component of the video material. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, which are suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated in one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data.
The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and export to other formats used in the field, integrating different data source formats so that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search across many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (and can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically and quantitative analyses to be implemented, combining these with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, correlate those different levels, and perform queries and analyses.

Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition

Procedia PDF Downloads 105
747 Autism Spectrum Disorder Classification Algorithm Using Multimodal Data Based on Graph Convolutional Network

Authors: Yuntao Liu, Lei Wang, Haoran Xia

Abstract:

Machine learning has shown extensive application in the development of classification models for autism spectrum disorder (ASD) using neuroimaging data. This paper proposes a fused multi-modal classification network based on a graph neural network. First, the brain is segmented into 116 regions of interest using a medical segmentation template (AAL, Anatomical Automatic Labeling). The image features of sMRI and the signal features of fMRI are extracted, which build the node and edge embedding representations of the brain graph. Then, we construct a dynamically updated brain graph neural network and propose a method based on a dynamic brain-graph adjacency matrix update mechanism and a learnable graph to further improve the accuracy of autism diagnosis and recognition. Based on the Autism Brain Imaging Data Exchange I dataset (ABIDE I), we reached a prediction accuracy of 74% between ASD and TD subjects. In addition, to study biomarkers that can help doctors analyze the disease and to support interpretability, we extracted the ROIs with the five largest and five smallest weights. This work provides a meaningful approach to brain disorder identification.
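The core propagation step of such a graph network can be sketched as follows; the correlation threshold, shapes, and single-layer form are illustrative assumptions (the paper's network additionally learns and updates the adjacency matrix dynamically):

```python
import numpy as np

def adjacency_from_correlation(corr, threshold=0.5):
    """Build a brain-graph adjacency matrix by thresholding pairwise
    fMRI signal correlations between regions of interest."""
    A = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(A, 0.0)  # no self-edges
    return A

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 X W),
    where X holds per-ROI node features and W the layer weights."""
    A_tilde = A + np.eye(len(A))                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))  # degree normalization
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_hat @ X @ W, 0.0)
```

In the full model, the 116 AAL regions would form the nodes, sMRI image features the node embeddings, and a readout over the final layer's node representations would feed the ASD/TD classifier.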

Keywords: autism spectrum disorder, brain map, supervised machine learning, graph network, multimodal data, model interpretability

Procedia PDF Downloads 59
746 A Deep-Learning Based Prediction of Pancreatic Adenocarcinoma with Electronic Health Records from the State of Maine

Authors: Xiaodong Li, Peng Gao, Chao-Jung Huang, Shiying Hao, Xuefeng B. Ling, Yongxia Han, Yaqi Zhang, Le Zheng, Chengyin Ye, Modi Liu, Minjie Xia, Changlin Fu, Bo Jin, Karl G. Sylvester, Eric Widen

Abstract:

Predicting the risk of Pancreatic Adenocarcinoma (PA) in advance can benefit the quality of care and potentially reduce population mortality and morbidity. The aim of this study was to develop and prospectively validate a risk prediction model to identify patients at risk of new incident PA as early as 3 months before its onset in a statewide, general population in Maine. The PA prediction model was developed using Deep Neural Networks, a deep learning algorithm, with a 2-year electronic-health-record (EHR) cohort. Prospective results showed that our model identified 54.35% of all inpatient episodes of PA, and 91.20% of all PA that required subsequent chemoradiotherapy, with a lead time of up to 3 months and a true-alert rate of 67.62%. The risk assessment tool thus attained an improved discriminative ability. It can be immediately deployed in the health system to provide automatic early warnings to adults at risk of PA, and it has the potential to identify personalized risk factors to facilitate customized PA interventions.
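The reported figures correspond to the sensitivity and true-alert (precision) rate of a thresholded risk score; a minimal sketch of how such metrics are computed from model outputs (the function name and interface are illustrative, not the study's code):

```python
def alert_metrics(risk_scores, labels, threshold):
    """Sensitivity (share of true cases alerted) and true-alert rate
    (share of alerts that are real cases) for a thresholded risk score.
    labels: 1 for a true PA case, 0 otherwise."""
    tp = sum(1 for s, y in zip(risk_scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(risk_scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(risk_scores, labels) if s < threshold and y == 1)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    true_alert_rate = tp / (tp + fp) if tp + fp else 0.0
    return sensitivity, true_alert_rate
```

Raising the threshold trades sensitivity for a higher true-alert rate, which is the operating-point choice a deployed early-warning system has to make.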

Keywords: cancer prediction, deep learning, electronic health records, pancreatic adenocarcinoma

Procedia PDF Downloads 153
745 Estimating PM2.5 Concentrations Based on Landsat 8 Imagery and Historical Field Data over the Metropolitan Area of Mexico City

Authors: Rodrigo T. Sepulveda-Hirose, Ana B. Carrera-Aguilar, Francisco Andree Ramirez-Casas, Alondra Orozco-Gomez, Miguel Angel Sanchez-Caro, Carlos Herrera-Ventosa

Abstract:

High concentrations of particulate matter in the atmosphere pose a threat to human health, especially over areas with high concentrations of population; however, field air pollution monitoring is expensive and time-consuming. In order to achieve reduced costs and global coverage of the whole urban area, remote sensing can be used. In this study, PM2.5 concentrations over Mexico City's metropolitan area are estimated using atmospheric reflectance from LANDSAT 8 satellite imagery and historical PM2.5 measurements from the Automatic Environmental Monitoring Network of Mexico City (RAMA). Through the processing of the available satellite images, a preliminary model was generated to evaluate the optimal bands for the generation of the final model for Mexico City. Work on the final model continues based on the results of the preliminary model. It was found that infrared bands have helped to model in other cities, but the effectiveness that these bands could provide for the geographic and climatic conditions of Mexico City is still being evaluated.

Keywords: air pollution modeling, Landsat 8, PM2.5, remote sensing

Procedia PDF Downloads 189
744 Data Augmentation for Automatic Graphical User Interface Generation Based on Generative Adversarial Network

Authors: Xulu Yao, Moi Hoon Yap, Yanlong Zhang

Abstract:

As a branch of artificial neural networks, deep learning is widely used in the field of image recognition, but a lack of data leads to imperfect model learning. By analysing the data-scale requirements of deep learning and aiming at its application in GUI generation, it is found that collecting a GUI dataset is a time-consuming and labor-intensive project, which makes it difficult to meet the needs of current deep learning networks. To solve this problem, this paper proposes a semi-supervised deep learning model that relies on an original small-scale dataset to produce a large number of reliable data. By combining a recurrent neural network with a generative adversarial network, the recurrent network can learn the sequence relationships and characteristics of the data, making the generative adversarial network produce reasonable data and thereby expanding the Rico dataset. Relying on this network structure, the characteristics of the collected data can be well analysed, and a large number of reasonable data can be generated according to these characteristics. After data processing, a reliable dataset for model training can be formed, which alleviates the problem of dataset shortage in deep learning.

Keywords: GUI, deep learning, GAN, data augmentation

Procedia PDF Downloads 178
743 Valuation of Cultural Heritage: A Hedonic Pricing Analysis of Housing via GIS-based Data

Authors: Dai-Ling Li, Jung-Fa Cheng, Min-Lang Huang, Yun-Yao Chi

Abstract:

The hedonic pricing model has been widely applied to describe the economic value of environmental amenities in urban housing, but results for cultural heritage variables remain relatively ambiguous. In this paper, integrated variables extended with GIS-based data and an existing typology of communities are used to examine how cultural heritage and environmental amenities and disamenities affect housing prices across urban communities in Tainan, Taiwan. The developed models suggest that, although a sophisticated variable for central services is selected, the centrality of location is not fully controlled in the price models and is thus picked up by correlated peripheral and central amenities such as cultural heritage, open space, or parks. Analysis of these correlations permits us to qualify the results and present a revised set of relatively reliable estimates. Positive effects on housing prices are identified for views, various types of recreational infrastructure, and the vicinity of national cultural sites and significant landscapes. Negative effects are found for several disamenities, including wasteyards, refuse incinerators, petrol stations, and industries. The results suggest that systematic hypothesis testing and reporting of correlations may contribute to consistent explanatory patterns in hedonic pricing estimates for cultural heritage and landscape amenities in urban areas.
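Hedonic pricing models are commonly estimated as an OLS regression on log price, so coefficients read as approximate percentage effects of each amenity; a minimal sketch of that common form (the amenity variables in the example are illustrative, not the paper's actual covariates):

```python
import numpy as np

def hedonic_fit(amenities, prices):
    """OLS of log(price) on amenity indicators/levels; beta[0] is the
    intercept and beta[k] the approximate %-effect of amenity k."""
    X = np.column_stack([np.ones(len(prices)), amenities])
    beta, *_ = np.linalg.lstsq(X, np.log(prices), rcond=None)
    return beta
```

In this log-linear form, a coefficient of 0.2 on a heritage-proximity dummy would mean being near a heritage site raises price by roughly 20%, all else equal; the correlation problem the abstract describes arises when such dummies also proxy for uncontrolled centrality.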

Keywords: hedonic pricing model, cultural heritage, landscape amenities, housing

Procedia PDF Downloads 335
742 A Speeded up Robust Scale-Invariant Feature Transform Currency Recognition Algorithm

Authors: Daliyah S. Aljutaili, Redna A. Almutlaq, Suha A. Alharbi, Dina M. Ibrahim

Abstract:

All currencies around the world look very different from each other; for instance, the size, color, and pattern of the paper differ. With the development of modern banking services, automatic methods for paper currency recognition have become important in many applications, such as vending machines. One phase of a currency recognition architecture is feature detection and description. Many algorithms are used for this phase, but they still have some disadvantages. This paper proposes a feature detection algorithm that merges the advantages of the current SIFT and SURF algorithms, which we call the Speeded up Robust Scale-Invariant Feature Transform (SR-SIFT) algorithm. Our proposed SR-SIFT algorithm overcomes the problems of both the SIFT and SURF algorithms; it aims to speed up the SIFT feature detection algorithm while keeping it robust. Simulation results demonstrate that the proposed SR-SIFT algorithm decreases the average response time, especially with a small or minimum number of best key points, and improves the distribution of best key points across the currency surface. Furthermore, the proposed algorithm yields a more accurate distribution of true best points inside the currency edge than the other two algorithms.

Keywords: currency recognition, feature detection and description, SIFT algorithm, SURF algorithm, speeded up and robust features

Procedia PDF Downloads 229
741 Exploring Workaholism Determinants and Life Balance: A Mixed-Method Study Among Academic Nurse Educators

Authors: Ebtsam Aly Abou Hashish, Sharifah Abdulmuttalib Alsayed, Hend Abdu Alnajjar

Abstract:

Background: Academic nurse educators play a crucial role in the educational environment, but the demands of their profession can lead to workaholism, which can result in an imbalance between work and personal life. Purpose: The study aimed to explore workaholism and life balance among academic nurse educators and to investigate the factors associated with workaholism. Methods: A mixed-methods design based on the ‘concurrent triangulation’ approach was employed. A convenience sample of 76 nurse educators completed the Dutch Work Addiction Scale (DUWAS) and the Life Balance Inventory (LBI), while a purposive sample of 20 nurse educators participated in semi-structured interviews. Inferential statistics and thematic analysis were used to analyze the data. Results: The researchers found a notable prevalence of workaholism among nurse educators, with 59.0% reporting a mean score above 2.5 and 86.8% perceiving an unbalanced life. Regression analysis indicated that workaholism negatively predicted life balance (B = -0.404, p < 0.001). The qualitative findings yielded three themes concerning workaholism: its antecedents, its consequences, and personal and institutional strategies to mitigate workaholism among nursing educators. Conclusion: Educational institutions should develop comprehensive approaches to support and develop their academicians, fostering a positive work environment, work-life balance, employee well-being, and professional development.

Keywords: workaholism, life balance, academic nurse educators, mixed-method

Procedia PDF Downloads 7
740 Compared Psychophysiological Responses under Stress in Patients of Chronic Fatigue Syndrome and Depressive Disorder

Authors: Fu-Chien Hung, Chi‐Wen Liang

Abstract:

Background: People who suffer from chronic fatigue syndrome (CFS) frequently complain of continuous tiredness, weakness, or lack of strength, but without an apparent organic etiology. The prevalence rate of CFS ranges from approximately 3% to 20%, yet more than 80% of cases go undiagnosed or are misdiagnosed as depression. The biopsychosocial model has suggested associations among CFS, depressive syndrome, and stress. This study aimed to investigate the difference between individuals with CFS and individuals with depressive syndrome in psychophysiological responses under stress. Method: There were 23 participants in the CFS group, 14 participants in the depression group, and 23 participants in the healthy control group. All of the participants first completed measures of demographic data, CFS-related symptoms, daily life functioning, and depressive symptoms. The participants were then asked to perform a stressful cognitive task. The participants’ psychophysiological responses, including heart rate (HR), blood volume pulse (BVP), and skin conductance (SC), were measured during the task. These indexes were used to assess the reactivity and recovery rates of the autonomic nervous system. Results: The stress reactivity of the CFS and depression groups was not different from that of the healthy control group. However, the stress recovery rate of the CFS group was worse than that of the healthy control group. Conclusion: The results from this study suggest that CFS is a syndrome that can be independent of depressive syndrome, although depressive syndrome may include fatigue symptoms.

Keywords: chronic fatigue syndrome, depression, stress response, misdiagnosis

Procedia PDF Downloads 455
739 Novel Practices in Research and Innovation Management

Authors: A. Ravinder Nath, D. Jaya Prakash, T. Venkateshwarlu, P. Raja Rao

Abstract:

The introduction of novel practices in research and innovation management at the university is likely to make a real difference in improving the quality of life and boosting global competitiveness for sustainable economic growth. Establishing a specific institutional structure at the university level provides professional management and administrative expertise to the university’s research community by sourcing funding opportunities, extending guidance in grant proposal preparation and submission, and assisting with post-award reporting and regulatory compliance. In addition, it can negotiate fair and equitable research contracts. Further, it administers research governance to support and encourage collaborations across all disciplines of the university with industry, government, community-based organizations, foundations, and associations at the local, regional, national, and international levels. Partnerships in research and innovation are powerful and much-needed tools for a knowledge-based economy, in which universities can offer the much-wanted human resources to promote, foster, and sustain excellence in research. In addition, the institutes provide the amply desired infrastructure and expertise to work with investigators, while industry generates the required financial resources in a coordinated manner. It thus becomes possible to carry out high-end applied research that synergizes the research capabilities and professional skills of students, faculty, scientists, and the industrial workforce.

Keywords: collaborations, competitiveness, contracts, governance

Procedia PDF Downloads 393
738 A Framework for University Social Responsibility and Sustainability: The Case of South Valley University, Egypt

Authors: Alaa Tag-Eldin Mohamed

Abstract:

The environmental, cultural, social, and technological changes have led higher education institutes to question their traditional roles. Many declarations and frameworks highlight the importance of higher education institutes fulfilling their social responsibility. The study aims at developing a framework of university social responsibility and sustainability (USR&S), with a focus on South Valley University (SVU) as a case study of Egyptian universities. The study used meetings with 12 vice deans of community services and environmental affairs on social responsibility and environmental issues. The proposed framework integrates social responsibility with strategic management through the establishment and maintenance of the vision, mission, values, goals, and management systems; elaboration of policies; provision of actions; evaluation of services; and development of social collaboration with stakeholders to meet current and future needs of the community and environment. The framework links different stakeholders internally and externally using communication and reporting tools. The results show that SVU integrates social responsibility and sustainability into its strategic plans. It has policies and actions; however, they are fragmented and lack an appropriate structure and budgeting. The proposed framework could be valuable for researchers and decision makers at Egyptian universities. The study proposed recommendations and highlighted building on the results and conducting future research.

Keywords: corporate social responsibility (CSR), south valley university, sustainable university, university social responsibility and sustainability (USR&S)

Procedia PDF Downloads 344
737 A Novel Breast Cancer Detection Algorithm Using Point Region Growing Segmentation and Pseudo-Zernike Moments

Authors: Aileen F. Wang

Abstract:

Mammography has been one of the most reliable methods for early detection and diagnosis of breast cancer. However, mammography misses about 17% and up to 30% of breast cancers due to the subtle and unstable appearances of breast cancer in its early stages. Recent computer-aided diagnosis (CADx) technology using Zernike moments has improved detection accuracy. However, it has several drawbacks: it uses manual segmentation, Zernike moments are not robust, and it still has a relatively high false negative rate (FNR) of 17.6%. This project will focus on the development of a novel breast cancer detection algorithm to automatically segment the breast mass and further reduce the FNR. The algorithm consists of automatic segmentation of a single breast mass using Point Region Growing Segmentation, reconstruction of the segmented breast mass using Pseudo-Zernike moments, and classification of the breast mass using the root mean square (RMS). A comparative study among the various algorithms on the segmentation and reconstruction of breast masses was performed on randomly selected mammographic images. The results demonstrated that the newly developed algorithm is the best in terms of accuracy and cost effectiveness. More importantly, the new classifier RMS has the lowest FNR of 6%.
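The RMS classification step described above can be sketched as a root-mean-square comparison between the segmented mass and its moment-based reconstruction. The following is a minimal illustration only; the threshold, the decision rule, and the function names are assumptions, since the abstract does not give the actual classifier details:

```python
import math

def rms_error(original, reconstructed):
    # Root mean square difference between a segmented region and its
    # pseudo-Zernike reconstruction (both flattened to 1-D sequences).
    # A low RMS means the moment basis captures the mass shape well.
    diff2 = [(a - b) ** 2 for a, b in zip(original, reconstructed)]
    return math.sqrt(sum(diff2) / len(diff2))

def classify(original, reconstructed, threshold):
    # Toy thresholding step (hypothetical): flag the region when the
    # reconstruction error exceeds a calibrated threshold. The paper's
    # actual decision rule is not given in the abstract.
    if rms_error(original, reconstructed) > threshold:
        return "suspicious"
    return "normal"
```

In practice the inputs would be pixel intensities of the segmented mass and its reconstruction from a truncated pseudo-Zernike expansion.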

Keywords: computer aided diagnosis, mammography, point region growing segmentation, pseudo-zernike moments, root mean square

Procedia PDF Downloads 448
736 Automatic Tagging and Accuracy in Assamese Text Data

Authors: Chayanika Hazarika Bordoloi

Abstract:

This paper is an attempt to work on Assamese, a highly inflectional language and one of the national languages of India, for which very little computational research has been done. Building a language processing tool for a natural language is not straightforward, as the standard and language representation change at various levels. This paper presents the inflectional suffixes of Assamese verbs and shows how statistical tools, along with linguistic features, can improve tagging accuracy. A conditional random fields (CRF) tool was used to automatically tag and train the text data; accuracy improved after linguistic features were fed into the training data. Because Assamese is a highly inflectional language, standardizing its morphology is challenging. Inflectional suffixes are used as a feature of the text data. In order to analyze the inflections of Assamese word forms, a list of suffixes is prepared, comprising all possible suffixes that the various categories can take. Assamese words can be classified into inflected classes (noun, pronoun, adjective, and verb) and un-inflected classes (adverb and particle). The corpus used for this morphological analysis contains a large number of tokens; it is a mixed corpus and has given satisfactory accuracy. The accuracy rate of the tagger gradually improved with the modified training data.
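The suffix-as-feature idea above can be sketched as a feature-extraction function of the kind typically fed to a CRF trainer. This is a sketch only: the suffix strings below are hypothetical placeholders, not the paper's actual Assamese suffix list, and the feature set is illustrative:

```python
# Hypothetical suffix list for illustration; the paper prepares a real
# list of all possible Assamese inflectional suffixes per category.
SUFFIXES = ["ibo", "ile", "isu", "il", "e", "o", "a"]

def token_features(tokens, i):
    # Build a feature dict for token i: the token itself, its
    # neighbours, and the longest matching suffix from the prepared
    # list -- the kind of linguistic feature the CRF is trained on.
    tok = tokens[i]
    feats = {
        "token": tok,
        "prev": tokens[i - 1] if i > 0 else "<BOS>",
        "next": tokens[i + 1] if i < len(tokens) - 1 else "<EOS>",
        "suffix": "",
    }
    for suf in sorted(SUFFIXES, key=len, reverse=True):
        if tok.endswith(suf):
            feats["suffix"] = suf
            break
    return feats
```

Feature dicts of this shape are what CRF toolkits consume per token when training a sequence tagger.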

Keywords: CRF, morphology, tagging, tagset

Procedia PDF Downloads 189
735 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple’s Index, Myer’s Blended Index, and Age-Sex Accuracy Index

Authors: A. Sathiya Susuman, Hamisi F. Hamisi

Abstract:

Background: Many socio-economic and demographic data are attributed by age and sex. However, a variety of irregularities and misstatements are noted with respect to age-related data, and less so for sex data because of the biological differences between the genders. Noting the misstatement/misreporting of age data despite its significant importance in demographic and epidemiological studies, this study aims at assessing the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis were downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz., Whipple’s index, Myers’ blended index, and the age-sex accuracy index. Results: The recorded Whipple’s index for both sexes was 154.43; males had the lowest index of about 152.65, while females had the highest index of about 156.07. For Myers’ blended index, the preferences were at digits ‘0’ and ‘5’, while the avoidances were at digits ‘1’ and ‘3’ for both sexes. Finally, the age-sex index stood at 59.8, where the sex ratio score was 5.82 and the age ratio scores were 20.89 and 21.4 for males and females, respectively. Conclusion: The evaluation of the 2012 PHC data using demographic techniques has shown the data to be inaccurate as a result of systematic heaping and digit preferences/avoidances. Thus, innovative methods in data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.
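Whipple's index, one of the summary indices used above, is a standard demographic measure of preference for ages ending in 0 or 5. As a rough illustration (a sketch over the conventional 23-62 age range, not the authors' implementation), it can be computed as:

```python
def whipples_index(pop_by_age):
    # Whipple's index over ages 23-62: reported ages ending in 0 or 5,
    # divided by one-fifth of the total population in that range,
    # times 100. A value of 100 indicates no heaping; 500 means every
    # reported age ends in 0 or 5.
    ages = range(23, 63)
    total = sum(pop_by_age.get(a, 0) for a in ages)
    heaped = sum(pop_by_age.get(a, 0) for a in ages if a % 5 == 0)
    return 100.0 * heaped / (total / 5.0)
```

A value of 154.43, as reported for both sexes in the census, therefore indicates substantial heaping on digits 0 and 5.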

Keywords: age heaping, digit preference/avoidance, summary indices, Whipple’s index, Myers’ index, age-sex accuracy index

Procedia PDF Downloads 470
734 Multipurpose Agricultural Robot Platform: Conceptual Design of Control System Software for Autonomous Driving and Agricultural Operations Using Programmable Logic Controller

Authors: P. Abhishesh, B. S. Ryuh, Y. S. Oh, H. J. Moon, R. Akanksha

Abstract:

This paper discusses the conceptual design and development of control system software using a programmable logic controller (PLC) for autonomous driving and agricultural operations of a Multipurpose Agricultural Robot Platform (MARP). Based on initial conditions given by field analysis and the desired agricultural operations, the structural design of MARP is developed using a modelling and analysis tool. A PLC, being robust and easy to use, has been used to design the autonomous control system of the robot platform for the desired parameters. The robot is capable of autonomous driving and three automatic agricultural operations, viz. hilling, mulching, and sowing of seeds, in that order. Input received from various sensors in the field is transmitted to the controller via a ZigBee network to make changes in the control program and obtain the desired field output. The research is conducted to assist farmers by reducing the labor hours needed for agricultural activities through automation. This study provides an alternative, at an effective cost, to existing systems of machinery attached behind tractors and rigorous manual operations in the agricultural field.

Keywords: agricultural operations, autonomous driving, MARP, PLC

Procedia PDF Downloads 359
733 Influence of Environmental Temperature on Dairy Herd Performance and Behaviour

Authors: L. Krpalkova, N. O' Mahony, A. Carvalho, S. Campbell, S. Harapanahalli, J. Walsh

Abstract:

The objective of this study was to determine the effects of environmental stressors on the performance of lactating dairy cows and to discuss some future trends. There exists a relationship between meteorological data and milk yield prediction accuracy in pasture-based dairy systems. New precision technologies are available and are being developed to improve the sustainability of the dairy industry. Some of these technologies focus on the welfare of individual animals on dairy farms. These technologies allow the automatic identification of animal behaviour and health events, greatly increasing overall herd health and yield while reducing the demand for animal health inspections and long-term animal healthcare costs. The data set consisted of records from 489 dairy cows at two dairy farms, together with temperature measured at the nearest meteorological weather station in 2018. The effects of temperature on milk production and behaviour of the animals were analyzed. The statistical results indicate different effects of temperature on milk yield and behaviour. The “comfort zone” for the animals is in the range of 10 °C to 20 °C. Dairy cows outside this zone had to decrease or increase their metabolic heat production, which affected their milk production and behaviour.

Keywords: behavior, milk yield, temperature, precision technologies

Procedia PDF Downloads 107
732 Determinants of Corporate Social Responsibility in Indonesia

Authors: Bela Sulistyaguna, Yuli Chomsatu Samrotun, Endang Masitoh Wahyuningsih

Abstract:

The purpose of this research was to analyze the influence of company size, liquidity, profitability, leverage, company age, industry type, board of directors, board of commissioners, audit committee, and public ownership on corporate social responsibility disclosure. The grand theories of this research are agency theory, stakeholder theory, and legitimacy theory. Data were analyzed using the multiple linear regression method with SPSS 22.0 for Mac. The sample consists of companies listed on the Indonesia Stock Exchange (IDX) that disclosed Global Reporting Initiative (GRI) sustainability reports from 2013 to 2018. The final sample of this research was 19 companies, obtained by purposive sampling. The results of the research showed that, simultaneously, company size, liquidity, profitability, leverage, company age, industry type, board of directors, board of commissioners, audit committee, and public ownership have an influence on corporate social responsibility disclosure. Partially, the results showed that liquidity and leverage have an influence on corporate social responsibility disclosure. Meanwhile, company size, profitability, company age, industry type, board of directors, board of commissioners, audit committee, and public ownership have no influence on corporate social responsibility disclosure.

Keywords: corporate social responsibility, CSR disclosure, Indonesia

Procedia PDF Downloads 149
731 Functionality Based Composition of Web Services to Attain Maximum Quality of Service

Authors: M. Mohemmed Sha Mohamed Kunju, Abdalla A. Al-Ameen Abdurahman, T. Manesh Thankappan, A. Mohamed Mustaq Ahmed Hameed

Abstract:

Web service composition is an effective approach to completing web-based tasks with the desired quality. A single web service with limited functionality is inadequate to execute a specific task requiring a series of actions, so it is necessary to combine multiple web services with different functionalities to reach the target. This becomes more and more challenging when these services come from different providers with identical functionalities and varying QoS; while composing the web services, the overall QoS is therefore considered the major factor. Moreover, the expected QoS is not always attained when the task is completed: a single web service in the composed chain may affect the overall performance of the task, so care should be taken over aspects such as the functionality of each service during composition. Dynamic and automatic service composition is one of the main options available, but to achieve the actual functionality of the task, the quality of the individual web services is also important. Normally, the QoS of an individual service can be evaluated using non-functional parameters such as response time, throughput, reliability, and availability. At the same time, the QoS need not be at the same level for all the composed services. This paper therefore proposes a framework that composes services in terms of QoS by setting an appropriate weight for the non-functional parameters of each individual web service involved in the task. Experimental results show that the importance given to the non-functional parameters during composition definitely improves the performance of the web services.
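The weighted-parameter idea above can be sketched as a simple additive scoring scheme. This is a minimal illustration under stated assumptions, not the paper's framework: metric values are assumed pre-normalised to [0, 1], "response_time" is treated as the only cost metric, and the per-step greedy selection is a simplification of full composition search:

```python
def qos_score(service, weights):
    # Weighted QoS score: benefit metrics (throughput, reliability,
    # availability) count directly; cost metrics (response time) are
    # inverted so that a higher score is always better. All metric
    # values are assumed normalised to [0, 1].
    cost_metrics = {"response_time"}
    score = 0.0
    for metric, weight in weights.items():
        value = service[metric]
        score += weight * ((1.0 - value) if metric in cost_metrics else value)
    return score

def best_composition(candidates_per_step, weights):
    # Greedy sketch: at each step of the task chain, pick the candidate
    # with the highest weighted QoS score. A globally optimal composer
    # would search over combinations instead.
    return [max(step, key=lambda s: qos_score(s, weights))
            for step in candidates_per_step]
```

Setting different weights per task lets the composer favour, say, reliability for a payment step and response time for a lookup step, which is the tunability the framework is after.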

Keywords: composition, non-functional parameters, quality of service, web service

Procedia PDF Downloads 325
730 Monthly Labor Forces Surveys Portray Smooth Labor Markets and Bias Fixed Effects Estimation: Evidence from Israel’s Transition from Quarterly to Monthly Surveys

Authors: Haggay Etkes

Abstract:

This study provides evidence of the impact of monthly interviews conducted for the Israeli Labor Force Surveys (LFSs) on estimated flows between labor force (LF) statuses and on coefficients in fixed-effects estimations. The study uses the natural experiment of parallel interviews for the quarterly and monthly LFSs in Israel in 2011 to demonstrate that the labor force participation (LFP) rate of Jewish persons who participated in the monthly LFS increased between interviews, while in the quarterly LFS it decreased. Interestingly, the estimated impact on the LFP rate of self-reporting individuals is 2.6–3.5 percentage points, while the impact on the LFP rate of individuals whose data were reported by another member of their household (a proxy) is lower and statistically insignificant. The relative increase of the LFP rate in the monthly survey is a result of a lower rate of exit from the LF and a somewhat higher rate of entry into the LF relative to these flows in the quarterly survey. These differing flows have a bearing on labor search models, as the monthly survey portrays a labor market with less friction and a “steady state” LFP rate that is 5.9 percentage points higher than in the quarterly survey. The study also demonstrates that monthly interviews affect a specific age group (45–64-year-olds); thus, the coefficient of age as an explanatory variable in fixed-effects regressions on LFP is negative in the monthly survey and positive in the quarterly survey.
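The "steady state" LFP rate implied by constant entry and exit flows follows from a standard two-state balance condition: in equilibrium, exits from the labor force equal entries into it. A minimal sketch (the rates below are illustrative numbers, not the paper's estimates):

```python
def steady_state_lfp(entry_rate, exit_rate):
    # Two-state flow model: with a constant per-period entry rate
    # (out-of-LF -> LF) and exit rate (LF -> out-of-LF), flows balance
    # when p * exit_rate == (1 - p) * entry_rate, giving the
    # steady-state participation rate p.
    return entry_rate / (entry_rate + exit_rate)
```

Under this logic, the monthly survey's lower exit rate and somewhat higher entry rate mechanically imply a higher steady-state LFP rate than the quarterly survey, consistent with the 5.9-percentage-point gap the study reports.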

Keywords: measurement error, surveys, search, LFSs

Procedia PDF Downloads 268