Search results for: the statistical measure
39 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program
Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner
Abstract:
Introductory Statement: The onset of the COVID-19 pandemic created an unprecedented need for the rapid development, dispersal, and application of infection testing. However, despite the impressive mobilization of resources, individuals were severely limited in their access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial in understanding patients’ illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Succinct Description of Basic Methodologies: 18 patients in a COVID-19 Remote Patient Monitoring Program (Precision Recovery within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the COVID-19 pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the healthcare system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo 12. References for the domain “testing” were then extracted and analyzed for themes and statistical patterns. Clear Indication of Major Findings of the Study: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery.
One participant shared that never testing positive shielded her from anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends, and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Concluding Statement: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients’ illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect this had on the perception of illness for both self and others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations: 1Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY; 2Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY. Keywords: accessibility, COVID-19, recovery, testing
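The headline figures above (79 references across 18 transcripts, averaging 4.4 per interview) follow directly from per-interview reference counts of the kind exported from NVivo. A minimal sketch of that summary step, with invented per-interview counts (only the totals reported in the abstract are real):

```python
# Hypothetical per-interview counts of "testing" references for 18 transcripts.
# The individual counts are illustrative; the abstract reports only the totals.
ref_counts = [6, 3, 5, 4, 2, 7, 4, 5, 3, 6, 4, 5, 2, 8, 4, 5, 3, 3]

total_refs = sum(ref_counts)                                        # total references coded
pct_referencing = 100 * sum(1 for c in ref_counts if c > 0) / len(ref_counts)
avg_per_interview = total_refs / len(ref_counts)                    # references per interview

print(total_refs, pct_referencing, round(avg_per_interview, 1))     # 79 100.0 4.4
```

With these made-up counts the sketch reproduces the abstract's figures: 79 total references, 100% of participants referencing testing, and an average of 4.4 references per interview.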
Procedia PDF Downloads 193

38 Developing an Integrated Clinical Risk Management Model
Authors: Mohammad H. Yarmohammadian, Fatemeh Rezaei
Abstract:
Introduction: Improving patient safety is one of the main priorities in healthcare systems, so clinical risk management in organizations has become increasingly significant. Although several tools have been developed for clinical risk management, each has its own limitations. Aims: This study aims to develop a comprehensive tool that compensates for the limitations of each risk assessment and management tool with the advantages of the others. Methods: The procedure comprised two main stages: development of an initial model during meetings with the professors and a literature review, followed by implementation and verification of the final model. Subjects and Methods: This study is a quantitative-qualitative research project. For the qualitative dimension, the focus group method with an inductive approach was used. To evaluate the results of the qualitative study, quantitative assessment of the two parts of the fourth phase and the seven phases of the research was conducted. Purposive and stratified sampling of the various teams responsible for the selected process was conducted in the operating room. The final model was verified in eight phases through application of activity breakdown structure, failure mode and effects analysis (FMEA), healthcare risk priority number (RPN), root cause analysis (RCA), fault tree (FT), and Eindhoven Classification Model (ECM) tools. This model was piloted on patients admitted to a day-clinic ward of a public hospital for surgery from October 2012 to June. Statistical Analysis Used: Qualitative data analysis was done through content analysis, and quantitative analysis through checklists and edited RPN tables. Results: After verification of the final model in eight steps, the patient admission process for surgery was developed by focus discussion group (FDG) members in five main phases. Then, with the adopted FMEA methodology, 85 failure modes, along with their causes, effects, and preventive capabilities, were set out in the tables.
The tables developed to calculate the RPN index contain three criteria for severity, two criteria for probability, and two criteria for preventability. Three failure modes were above the determined significant-risk limit (RPN > 250). After a 3-month period, patient misidentification incidents were the most frequently reported events. Each RPN criterion of the misidentification events was compared, and the various RPN numbers for the three reported misidentification events could be determined against the scores predicted in the previous phase. Root causes identified through the fault tree were categorized with the ECM. The wrong-side surgery event was selected by the focus discussion group to propose improvement actions. The most important cause was a lack of planning for the number and priority of surgical procedures. After prioritization of the suggested interventions, a computerized registration system in the health information system (HIS) was adopted to prepare the action plan in the final phase. Conclusion: The complexity of the healthcare industry requires risk managers to have a multifaceted vision. Therefore, applying only retrospective or only prospective tools for risk management does not work, and each organization must provide conditions for the potential application of these methods. The results of this study showed that the integrated clinical risk management model can be used in hospitals as an efficient tool to improve clinical governance. Keywords: failure mode and effects analysis, risk management, root cause analysis, model
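The RPN screen described above is, in the common FMEA formulation, a product of the criterion scores compared against the significant-risk limit (RPN > 250). A minimal sketch assuming that multiplicative form and illustrative 1-10 scores; the failure-mode names and scores are invented, not taken from the study's edited RPN tables:

```python
# Hedged FMEA sketch: RPN = severity x probability x preventability,
# each scored 1-10 (the study's own tables use multiple sub-criteria).

def rpn(severity: int, probability: int, preventability: int) -> int:
    """Risk priority number for one failure mode."""
    return severity * probability * preventability

# Hypothetical failure modes from a patient-admission process.
failure_modes = {
    "patient misidentification": (9, 6, 7),
    "incomplete consent form": (5, 4, 6),
    "wrong-side surgery marking": (10, 3, 9),
}

THRESHOLD = 250  # significant-risk limit used in the study (RPN > 250)

# Keep only the failure modes that exceed the threshold.
significant = {
    name: rpn(*scores)
    for name, scores in failure_modes.items()
    if rpn(*scores) > THRESHOLD
}
print(significant)  # the two modes above 250 in this made-up example
```

With these invented scores, "patient misidentification" (9 × 6 × 7 = 378) and "wrong-side surgery marking" (270) exceed the limit and would be carried forward to root cause analysis.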
Procedia PDF Downloads 248

37 Deciphering Information Quality: Unraveling the Impact of Information Distortion in the UK Aerospace Supply Chains
Authors: Jing Jin
Abstract:
The incorporation of artificial intelligence (AI) and machine learning (ML) in aircraft manufacturing and aerospace supply chains leads to the generation of a substantial amount of data among various tiers of suppliers and OEMs. Identifying high-quality information challenges decision-makers. The application of AI/ML models necessitates access to 'high-quality' information to yield desired outputs. However, the process of information sharing introduces complexities, including distortion through various communication channels and biases introduced by both human and AI entities. This phenomenon significantly influences the quality of information, impacting decision-makers engaged in configuring supply chain systems. Traditionally, distorted information is categorized as 'low-quality'; however, this study challenges this perception, positing that distorted information that contributes to stakeholder goals can be deemed high-quality within supply chains. The main aim of this study is to identify and evaluate the dimensions of information quality crucial to the UK aerospace supply chain. Guided by a central research question, "What information quality dimensions are considered when defining information quality in the UK aerospace supply chain?", the study delves into the intricate dynamics of information quality in the aerospace industry. Additionally, the research explores the nuanced impact of information distortion on stakeholders’ decision-making processes, addressing the question, "How does the information distortion phenomenon influence stakeholders’ decisions regarding information quality in the UK aerospace supply chain system?" This study employs deductive methodologies rooted in positivism, utilizing a cross-sectional approach and a mono-quantitative method - a questionnaire survey. Data is systematically collected from diverse tiers of supply chain stakeholders, encompassing end-customers, OEMs, Tier 0.5, Tier 1, and Tier 2 suppliers.
Employing robust statistical data analysis methods, including mean values, mode values, standard deviation, one-way analysis of variance (ANOVA), and Pearson’s correlation analysis, the study interprets and extracts meaningful insights from the gathered data. Initial analyses challenge conventional notions, revealing that information distortion positively influences the definition of information quality, disrupting the established perception of distorted information as inherently low-quality. Further exploration through correlation analysis unveils the varied perspectives of different stakeholder tiers on the impact of information distortion on specific information quality dimensions. For instance, Tier 2 suppliers demonstrate strong positive correlations between information distortion and dimensions like access security, accuracy, interpretability, and timeliness. Conversely, Tier 1 suppliers emphasise strong negative influences on the security of accessing information and negligible impact on information timeliness. Tier 0.5 suppliers showcase very strong positive correlations with dimensions like conciseness and completeness, while OEMs exhibit limited interest in considering information distortion within the supply chain. Introducing social network analysis (SNA) provides a structural understanding of the relationships between information distortion and quality dimensions. The moderately high density of ‘information distortion-by-information quality’ underscores the interconnected nature of these factors. In conclusion, this study offers a nuanced exploration of information quality dimensions in the UK aerospace supply chain, highlighting the significance of individual perspectives across different tiers. 
The positive influence of information distortion challenges prevailing assumptions, fostering a more nuanced understanding of information's role in the Industry 4.0 landscape. Keywords: information distortion, information quality, supply chain configuration, UK aerospace industry
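The tier-by-tier findings above rest on Pearson's correlation between distortion ratings and each information-quality dimension. A hedged sketch of that computation from scratch; the two Likert-scale rating lists are invented for illustration and are not the survey's data:

```python
# Pearson's r between two rating series, computed from first principles.
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

distortion = [1, 2, 2, 3, 4, 4, 5]   # hypothetical Tier 2 distortion ratings
timeliness = [2, 2, 3, 3, 4, 5, 5]   # hypothetical timeliness-dimension ratings

r = pearson_r(distortion, timeliness)
print(round(r, 3))  # a strong positive correlation for this made-up sample
```

In practice a statistics package (e.g. `scipy.stats.pearsonr`, which also returns a p-value) would be used; the point here is only what a "strong positive correlation" between a distortion rating and a quality dimension means numerically.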
Procedia PDF Downloads 63

36 Source of Professionalism and Knowledge among Sport Industry Professionals in India with Limited Sport Management Higher Education
Authors: Sandhya Manjunath
Abstract:
The World Association for Sport Management (WASM) was established in 2012, and its mission is "to facilitate sport management research, teaching, and learning excellence and professional practice worldwide". As the field of sport management evolves, it has seen increasing globalization not only of the sport product; many educators have also internationalized courses and curricula. Curricula should reflect globally recognized issues and disseminate specific intercultural knowledge, skills, and practices, but regional disparities still exist. For example, while India has some of the most ardent sports fans and events in the world, sport management education programs and the development of a proper curriculum in India are still in their nascent stages, especially in comparison to the United States and Europe. Using the extant literature on professionalization and institutional theory, this study aims to investigate the source of knowledge and professionalism of sports managers in India, where sport management education programs are limited, and to subsequently develop a conceptual framework that addresses any gaps or disparities across regions. This study will contribute to WASM's (2022) mission statement of research practice worldwide, specifically by addressing the existing disparities between regions. Additionally, this study may emphasize the value of higher education among professionals entering the workforce in the sport industry. Most importantly, this will be a pioneer study highlighting the social issue of limited sport management higher education programs in India and improving professional research practices. Sport management became a field of study in the 1980s, and scholars have studied its professionalization since that time. Dowling, Edwards, & Washington (2013) suggest that professionalization can be categorized into three broad categories: organizational, systemic, and occupational professionalization.
However, scant research has integrated the concept of professionalization with institutional theory. A comprehensive review of the literature reveals that sports industry research is progressing in every country worldwide at its own pace. However, there is very little research evidence about the Indian sports industry and the country's limited higher education sport management programs. A growing need exists for sports scholars to pursue research in developing countries like India to develop theoretical frameworks and academic instruments to evaluate the current standards of qualified professionals in sport management, sport marketing, venue and facilities management, sport governance, and development-related activities. This study may postulate a model highlighting the value of higher education in sports management. Education stakeholders include governments, sports organizations and their representatives, educational institutions, and accrediting bodies. As these stakeholders work collaboratively in developed countries like the United States and Europe and developing countries like India, they simultaneously influence the professionalization (i.e., organizational, systemic, and occupational) of sport management education globally. The results of this quantitative study will investigate the current standards of education in India and the source of knowledge among industry professionals. Sports industry professionals will be randomly selected to complete the COSM survey on PsychData and rate their perceived knowledge and professionalism on a Likert scale. Additionally, they will answer questions involving their competencies, experience, or challenges in contributing to Indian sports management research. Multivariate regression will be used to measure the degree to which the various independent variables impact the current knowledge, contribution to research, and professionalism of India's sports industry professionals. 
This quantitative study will contribute to the limited academic literature available to Indian sports practitioners. Additionally, it will synthesize knowledge from previous work on professionalism and institutional knowledge, providing a springboard for new research that will fill the existing knowledge gaps. While further empirical investigation is warranted, our conceptualization contributes to and highlights India's burgeoning sport management industry. Keywords: sport management, professionalism, source of knowledge, higher education, India
Procedia PDF Downloads 69

35 Quantitative Texture Analysis of Shoulder Sonography for Rotator Cuff Lesion Classification
Authors: Chung-Ming Lo, Chung-Chien Lee
Abstract:
In many countries, the lifetime prevalence of shoulder pain is up to 70%. In America, the healthcare system spends $7 billion per year on health issues related to shoulder pain. With respect to the origin, up to 70% of shoulder pain is attributed to rotator cuff lesions. This study proposed a computer-aided diagnosis (CAD) system to assist radiologists in classifying rotator cuff lesions with less operator dependence. Quantitative features were extracted from shoulder ultrasound images acquired using an ALOKA alpha-6 US scanner (Hitachi-Aloka Medical, Tokyo, Japan) with a linear array probe (scan width: 36 mm) ranging from 5 to 13 MHz. During examination, patients were placed in a standard sitting position and followed the regular routine. After acquisition, the shoulder US images were exported from the scanner and stored as 8-bit images with pixel values ranging from 0 to 255. Based on the sonographic appearance, the boundary of each lesion was delineated by a physician to indicate the specific pattern for analysis. The three lesion categories for classification comprised 20 cases of tendon inflammation, 18 cases of calcific tendonitis, and 18 cases of supraspinatus tear. For each lesion, second-order statistics were quantified in the feature extraction. The second-order statistics were texture features describing the correlations between adjacent pixels in a lesion. Because echogenicity patterns are expressed in grey scale, grey-scale co-occurrence matrices with four angles of adjacent pixels were used. The texture metrics included the mean and standard deviation of energy, entropy, correlation, inverse difference moment, inertia, cluster shade, cluster prominence, and Haralick correlation. Then, the quantitative features were combined in a multinomial logistic regression classifier to generate a prediction model of rotator cuff lesions.
A multinomial logistic regression classifier is widely used in classification with more than two categories, such as the three lesion types in this study. In the classifier, backward elimination was used to select the most relevant feature subset, chosen from the trained classifier with the lowest error rate. Leave-one-out cross-validation was used to evaluate the performance of the classifier: each case was in turn left out and used to test the model trained on the remaining cases. Against the physician's assessment, the performance of the proposed CAD system was reported as accuracy. As a result, the proposed system achieved an accuracy of 86%. A CAD system based on statistical texture features for interpreting echogenicity values in shoulder musculoskeletal ultrasound was thus established to generate a prediction model for rotator cuff lesions. Clinically, it is difficult to distinguish some kinds of rotator cuff lesions, especially partial-thickness tears of the rotator cuff. Based on the available literature, shoulder orthopaedic surgeons and musculoskeletal radiologists report greater diagnostic accuracy than general radiologists or ultrasonographers. Consequently, the proposed CAD system, which was developed according to the assessments of a shoulder orthopaedic surgeon, can provide reliable suggestions to general radiologists or ultrasonographers. More quantitative features related to the specific patterns of different lesion types will be investigated in a future study to improve the prediction. Keywords: shoulder ultrasound, rotator cuff lesions, texture, computer-aided diagnosis
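The second-order statistics above are derived from grey-level co-occurrence matrices (GLCMs). A minimal sketch of that step for one pixel offset, deriving the energy and entropy metrics named in the abstract; the 4x4 patch and 4 grey levels are toy values, not ultrasound data (the real images are 8-bit, 0-255):

```python
# Build a normalized GLCM for one offset and compute two Haralick-style metrics.
import math

def glcm(image, dy, dx, levels):
    """Normalized co-occurrence frequencies of grey-level pairs at offset (dy, dx)."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def energy(p):
    """Sum of squared matrix entries; high for uniform textures."""
    return sum(v * v for row in p for v in row)

def entropy(p):
    """Shannon entropy of the co-occurrence distribution, in bits."""
    return -sum(v * math.log2(v) for row in p for v in row if v > 0)

# Toy 4-level "lesion" patch.
patch = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 2, 2, 2],
         [2, 2, 3, 3]]

p = glcm(patch, 0, 1, levels=4)   # horizontal neighbours (0-degree direction)
print(round(energy(p), 3), round(entropy(p), 3))
```

The study averages such metrics over four neighbour directions; libraries like scikit-image (`graycomatrix`/`graycoprops`) implement the same construction for full-size images.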
Procedia PDF Downloads 284

34 Information Pollution: Exploratory Analysis of Sub-Saharan African Media’s Capabilities to Combat Misinformation and Disinformation
Authors: Muhammed Jamiu Mustapha, Jamiu Folarin, Stephen Obiri Agyei, Rasheed Ademola Adebiyi, Mutiu Iyanda Lasisi
Abstract:
The role of information in societal development and growth cannot be over-emphasized. Directing information flow has remained an age-long strategy for building an egalitarian society. The same flow has also become a tool for throwing society into chaos and anarchy, adopted as a weapon of war and a veritable instrument of psychological warfare with a variety of uses. That is why some scholars posit that information could be deployed as a weapon to wreak “Mass Destruction" or promote “Mass Development". When used as a tool for destruction, its effect on society is like an atomic bomb that, when released, pollutes the air and suffocates people. Technological advancement has further exposed the latent power of information, and many societies seem to be overwhelmed by its negative effects. While information remains one of the bedrocks of democracy, the information ecosystem across the world is currently facing a more difficult battle than ever before due to information pluralism and technological advancement. The more the agents involved try to combat its menace, the more difficult and complex it proves to curb. In a region like Africa, with fragile democracies entangled in the complexities of multiple religions and cultures, inter-tribal tensions, and ongoing unresolved issues, it is important to pay critical attention to information disorder and find appropriate ways to curb or mitigate its effects. The media, being the middleman in the distribution of information, need to build capacities and capabilities to separate the whiff of misinformation and disinformation from the grains of truthful data. From quasi-statistical observation, the efforts aimed at fighting information pollution have not considered the built-in resilience of media organisations against this disorder.
Apparently, the efforts, resources, and technologies adopted for the conception, production, and spread of information pollution are much more sophisticated than the approaches to suppress or even reduce its effects on society. Thus, this study seeks to interrogate the phenomenon of information pollution and the capabilities of select media organisations in Sub-Saharan Africa. In doing this, the following questions are probed: What are the media's actions to curb the menace of information pollution? Which of these actions are working, and how effective are they? And which of the actions are not working, and why? Adopting quantitative and qualitative approaches and anchored on Dynamic Capability Theory, the study aims at digging up insights to further understand the complexities of information pollution, media capabilities, and strategic resources for managing misinformation and disinformation in the region. The quantitative approach involves surveys and the use of questionnaires to get data from journalists on their understanding of misinformation/disinformation and their capabilities to gate-keep. Case analysis of select media and content analysis of their strategic resources to manage misinformation and disinformation are adopted in the study, while the qualitative approach involves in-depth interviews to allow a more robust analysis. The study is critical in the fight against information pollution for a number of reasons. First, it is a novel attempt to document the level of media capabilities to fight the phenomenon of information disorder. Second, the study will enable the region to have a clear understanding of the capabilities of existing media organizations to combat misinformation and disinformation in the countries that make up the region.
Recommendations emanating from the study could be used to initiate, intensify, or review existing approaches to combat the menace of information pollution in the region. Keywords: disinformation, information pollution, misinformation, media capabilities, sub-Saharan Africa
Procedia PDF Downloads 160

33 Feasibility and Acceptability of an Emergency Department Digital Pain Self-Management Intervention: A Randomized Controlled Trial Pilot Study
Authors: Alexandria Carey, Angela Starkweather, Ann Horgas, Hwayoung Cho, Jason Beneciuk
Abstract:
Background/Significance: Over 3.4 million acute axial low back pain (aLBP) cases are treated annually in United States (US) emergency departments (ED). ED patients with aLBP receive varying verbal and written discharge routine care (RC), leading to ineffective patient self-management. Ineffective self-management increases the risk of transition to chronic low back pain (cLBP), a chief cause of worldwide disability, with associated costs >$60 million annually. This research addresses this significant problem by evaluating an ED digital pain self-management intervention (EDPSI) focused on improving self-management through improved knowledge retention, skills, and self-efficacy (confidence) (KSC), thus reducing the aLBP-to-cLBP transition in ED patients discharged with aLBP. The research has significant potential to increase self-efficacy, one of the most potent mechanisms of behavior change, and improve health outcomes. Focusing on accessibility and usability, the intervention may reduce discharge disparities in aLBP self-management, especially for patients with low health literacy. Study Questions: This research will answer the following questions: 1) Will an EDPSI focused on improving KSC progress patient self-management behaviors and health status? 2) Is the EDPSI sustainable in improving pain severity, interference, and pain recurrence? 3) Will an EDPSI reduce the aLBP-to-cLBP transition in patients discharged with aLBP? Aims: The pilot randomized controlled trial (RCT) assesses the effects of a 12-week digital self-management discharge tool in patients with aLBP. We aim to 1) primarily assess the feasibility [recruitment, enrollment, and retention] and [intervention] acceptability and sustainability of EDPSI for participants' pain self-management; 2) determine the effectiveness and sustainability of EDPSI on pain severity/interference among participants; 3) explore patient preferences, health literacy, and changes among participants experiencing the transition to cLBP.
We anticipate that the EDPSI intervention will increase the likelihood of achieving self-management milestones and significantly improve pain-related symptoms in aLBP. Methods: The study uses a two-group pilot RCT to enroll 30 individuals who have been seen in the ED with aLBP. Participants are randomized into RC (n=15) or RC + EDPSI (n=15) and receive follow-up surveys for 12 weeks post-intervention. The innovative EDPSI content focuses on 1) discharge education; 2) self-management treatment options; 3) actor demonstrations of ergonomics, range-of-motion movements, safety, and sleep; 4) complementary alternative medicine (CAM) options including acupuncture, yoga, and Pilates; 5) combination therapies including thermal application, spinal manipulation, and PT treatments. The intervention group receives booster sessions via Zoom in weeks two and eight to assess and reinforce knowledge retention of techniques and to provide return demonstrations reinforcing ergonomics. Outcome Measures: All participants are followed for 12 weeks, assessing pain severity/interference using the Brief Pain Inventory short form (BPI-sf), self-management (measuring KSC) using the short 13-item Patient Activation Measure (PAM), and self-efficacy using the Pain Self-Efficacy Questionnaire (PSEQ) at weeks 1, 6, and 12. Feasibility is measured by recruitment, enrollment, and retention percentages. Acceptability and education satisfaction are measured using the Education-Preference and Satisfaction Questionnaire (EPSQ) post-intervention. Self-management sustainment is measured with the PSEQ, PAM, and a patient satisfaction and healthcare utilization (PSHU) measure covering overall satisfaction, additional healthcare utilization, and pain management related to continued back pain or complications post-injury. Keywords: digital, pain self-management, education, tool
Procedia PDF Downloads 48

32 Photosynthesis Metabolism Affects Yield Potentials in Jatropha curcas L.: A Transcriptomic and Physiological Data Analysis
Authors: Nisha Govender, Siju Senan, Zeti-Azura Hussein, Wickneswari Ratnam
Abstract:
Jatropha curcas, a well-described bioenergy crop, has been widely accepted as a future fuel source, especially in tropical regions. Ideal planting material required for large-scale plantation is still lacking. Breeding programmes for improved J. curcas varieties are rendered difficult by limitations in genetic diversity. Using combined transcriptome and physiological data, we investigated the molecular and physiological differences between high- and low-yielding Jatropha curcas to address plausible heritable variations underpinning these differences with regard to photosynthesis, a key metabolism affecting yield potential. A total of 6 individual Jatropha plants from 4 accessions described as high- and low-yielding planting materials were selected from Experimental Plot A, Universiti Kebangsaan Malaysia (UKM), Bangi. The inflorescences and shoots were collected for the transcriptome study. For the physiological study, individual plants (n=10) from the high- and low-yielding populations were screened for agronomic traits, chlorophyll content, and stomatal patterning. The J. curcas transcriptomes are available under BioProject PRJNA338924 and BioSamples SAMN05827448-65, respectively. Each transcriptome was subjected to functional annotation analysis of sequence datasets using the BLAST2GO suite: BLASTing, mapping, annotation, statistical analysis, and visualization. Large-scale phenotyping of the number of fruits per plant (NFPP) and fruits per inflorescence (FPI) classified the high-yielding Jatropha accessions with average NFPP = 60 and FPI > 10, whereas the low-yielding accessions yielded an average NFPP = 10 and FPI < 5. Next-generation sequencing revealed genes differentially expressed in the high-yielding Jatropha relative to the low-yielding plants. Distinct differences were observed at the transcript level associated with photosynthesis metabolism.
The DEG collection in the low-yielding population indicated comparable CAM photosynthetic metabolism and photorespiration, evident as follows: a phosphoenolpyruvate phosphate translocator chloroplastic-like isoform with a 2.5-fold change (FC) and malate dehydrogenase (2.03 FC). Green leaves have the most pronounced photosynthetic activity in a plant body due to the significant accumulation of chloroplasts. In most plants, the leaf is the dominant photosynthesizing heart of the plant body. A large number of the DEGs in the high-yielding population were found attributable to the chloroplast and chloroplast-associated events: STAY-GREEN chloroplastic, chlorophyllase-1-like (5.08 FC), beta-amylase (3.66 FC), chlorophyllase-chloroplastic-like (3.1 FC), thiamine thiazole chloroplastic-like (2.8 FC), 1-4, alpha glucan branching enzyme chloroplastic/amyloplastic (2.6 FC), photosynthetic NDH subunit (2.1 FC), and protochlorophyllide chloroplastic (2 FC). The results parallel a significant increase in chlorophyll a content in the high-yielding population. In addition to the chloroplast-associated transcript abundance, TOO MANY MOUTHS (TMM) at 2.9 FC, which codes for distant stomatal distribution and patterning, may explain a high concentration of CO2 in the high-yielding population. The results were in agreement with the role of TMM: clustered stomata cause back diffusion in the presence of gaps localized closely to one another. We conclude that the high-yielding Jatropha population corresponds to a collective function of C3 metabolism with a low degree of CAM photosynthetic fixation. From the physiological descriptions, high chlorophyll a content and an even distribution of stomata in the leaf contribute to better photosynthetic efficiency in the high-yielding Jatropha compared to the low-yielding population. Keywords: chlorophyll, gene expression, genetic variation, stomata
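The fold-change (FC) figures quoted above are, in the usual differential-expression sense, the ratio of mean normalized expression between the two conditions, often reported alongside its log2. A minimal sketch with invented read counts; the study's FC values come from its own differential-expression pipeline, not this calculation:

```python
# Fold change of one transcript between two conditions, from mean expression.
import math

def fold_change(expr_a, expr_b):
    """FC of condition A over condition B from mean normalized expression."""
    mean_a = sum(expr_a) / len(expr_a)
    mean_b = sum(expr_b) / len(expr_b)
    return mean_a / mean_b

# Hypothetical normalized read counts for one chlorophyllase-like transcript
# in three high-yielding and three low-yielding replicates.
high_yield = [140.0, 160.0, 156.0]
low_yield = [48.0, 52.0, 44.0]

fc = fold_change(high_yield, low_yield)
print(round(fc, 2), round(math.log2(fc), 2))  # FC and log2(FC)
```

So an entry like "chlorophyllase-1-like (5.08 FC)" reads as roughly five times the mean expression in the high-yielding plants relative to the low-yielding ones.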
Procedia PDF Downloads 238
31 Understanding the Impact of Spatial Light Distribution on Object Identification in Low Vision: A Pilot Psychophysical Study
Authors: Alexandre Faure, Yoko Mizokami, Éric Dinet
Abstract:
In recent years, several studies have demonstrated the potential of light to assist visually impaired people in their indoor mobility. Implementing smart lighting systems for selective visual enhancement, especially designed for low-vision people, is an approach that breaks with existing visual aids. The appearance of an object's surface is significantly influenced by the lighting conditions and the constituent materials of the object, so objects may look different from what is expected. Lighting conditions therefore play an important part in accurate material recognition. The main objective of this work was to investigate the effect of the spatial distribution of light on object identification in the context of low vision. The purpose was to determine whether, and which, specific lighting approaches should be preferred for visually impaired people. A psychophysical experiment was designed to study the ability of individuals to identify the smaller cube of a pair under different lighting diffusion conditions. Participants were divided into two distinct groups: a reference group of observers with normal or corrected-to-normal visual acuity and a test group, in which observers were required to wear visual impairment simulation glasses. All participants were presented with pairs of cubes in a "miniature room" and were instructed to estimate the relative size of the two cubes. The miniature room replicates real-life settings, adorned with decorations and separated from external light sources by black curtains. The correlated color temperature was set to 6000 K, and the horizontal illuminance at the object level to approximately 240 lux. The objects presented for comparison consisted of 11 white cubes and 11 black cubes of different sizes manufactured with a 3D printer. Participants were seated 60 cm away from the objects. Two different levels of light diffuseness were implemented.
After receiving instructions, participants were asked to judge whether the two presented cubes were the same size or whether one was smaller. They provided one of five possible answers: "Left one is smaller," "Left one is smaller but unsure," "Same size," "Right one is smaller," or "Right one is smaller but unsure." The method of constant stimuli was used, presenting stimulus pairs in random order to prevent learning and expectation biases. Each pair consisted of a comparison stimulus and a reference cube. A psychometric function was constructed to link stimulus value with the frequency of correct detection, aiming to determine the 50% correct detection threshold. Collected data were analyzed through graphs illustrating participants' responses to stimuli, with accuracy increasing as the size difference between cubes grew. Statistical analyses, including two-way ANOVA tests, showed that light diffuseness had no significant impact on the difference threshold, whereas object color had a significant influence in low-vision scenarios. The first results and trends derived from this pilot experiment strongly suggest that future investigations could explore extreme diffusion conditions to comprehensively assess the impact of diffusion on object identification. For example, the first findings related to light diffuseness may be attributed to the range of manipulation, emphasizing the need to explore how other lighting-related factors interact with diffuseness.
Keywords: lighting, low vision, visual aid, object identification, psychophysical experiment
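The psychometric-function step described above can be sketched as follows, assuming a cumulative-Gaussian form fitted to proportion-correct data; the size differences and response proportions below are hypothetical, and the fitted `mu` plays the role of the 50% correct detection threshold.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def psychometric(x, mu, sigma):
    # Cumulative Gaussian: probability of correctly detecting the smaller cube
    return 0.5 * (1.0 + erf((x - mu) / (sigma * np.sqrt(2.0))))

# Hypothetical data: size difference between the cubes vs proportion of correct responses
size_diff = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
p_correct = np.array([0.10, 0.25, 0.55, 0.75, 0.90, 0.98])

(mu, sigma), _ = curve_fit(psychometric, size_diff, p_correct, p0=[2.0, 1.0])
# mu estimates the 50% correct detection threshold; sigma reflects the slope
```

The cumulative Gaussian is one common choice; logistic or Weibull forms are fitted the same way with `curve_fit`.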
Procedia PDF Downloads 63
30 Cloud-Based Multiresolution Geodata Cube for Efficient Raster Data Visualization and Analysis
Authors: Lassi Lehto, Jaakko Kahkonen, Juha Oksanen, Tapani Sarjakoski
Abstract:
The use of raster-formatted data sets in geospatial analysis is increasing rapidly. At the same time, geographic data are being introduced into disciplines outside the traditional domain of geoinformatics, like climate change, intelligent transport, and immigration studies. These developments call for better methods to deliver raster geodata in an efficient and easy-to-use manner. Data cube technologies have traditionally been used in the geospatial domain for managing Earth Observation data sets that have strict requirements for effective handling of time series. The same approach and methodologies can also be applied in managing other types of geospatial data sets. A cloud service-based geodata cube, called GeoCubes Finland, has been developed to support online delivery and analysis of the most important geospatial data sets with national coverage. The main target group of the service is the academic research institutes in the country. The most significant aspects of the GeoCubes data repository include the use of multiple resolution levels, a cloud-optimized file structure, and a customized, flexible content access API. Input data sets are pre-processed while being ingested into the repository to bring them into a harmonized form in aspects like georeferencing, sampling resolution, spatial subdivision, and value encoding. All resolution levels are created using an appropriate generalization method, selected depending on the nature of the source data set. Multiple pre-processed resolutions enable new kinds of online analysis approaches to be introduced. Analysis processes based on interactive visual exploration can be carried out effectively, as the resolution level closest to the visual scale can always be used. In the same way, statistical analysis can be carried out on the resolution levels that best reflect the scale of the phenomenon being studied. Access times remain close to constant, independent of the scale applied in the application.
The cloud service-based approach, applied in the GeoCubes Finland repository, enables analysis operations to be performed on the server platform, thus making high-performance computing facilities easily accessible. The developed GeoCubes API supports this kind of approach for online analysis. The use of cloud-optimized file structures in data storage enables the fast extraction of subareas. The access API allows vector-formatted administrative areas and user-defined polygons to be used as definitions of subareas for data retrieval. Administrative areas of the country on four levels are readily available from the GeoCubes platform. In addition to direct delivery of raster data, the service also supports a so-called virtual file format, in which only a small text file is first downloaded. The text file contains links to the raster content on the service platform. The actual raster data is downloaded on demand, for the spatial area and resolution level required at each stage of the application. Through the geodata cube approach, pre-harmonized geospatial data sets are made accessible to new categories of inexperienced users in an easy-to-use manner. At the same time, the multiresolution nature of the GeoCubes repository enables expert users to introduce new kinds of interactive online analysis operations.
Keywords: cloud service, geodata cube, multiresolution, raster geodata
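The idea of always serving the stored resolution level closest to the visual scale can be sketched with a small helper; the pyramid of cell sizes below is hypothetical, not the actual GeoCubes level set.

```python
def pick_resolution(levels_m, target_m):
    """Choose the stored resolution level (cell size in metres) closest to the requested visual scale."""
    return min(levels_m, key=lambda r: abs(r - target_m))

# Hypothetical pyramid of pre-processed cell sizes in metres
LEVELS = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]

# A map view needing roughly 80 m cells is served from the nearest stored level
level = pick_resolution(LEVELS, 80)
```

Because every level is pre-computed at ingest time, the lookup itself is trivial, which is what keeps access times close to constant across scales.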
Procedia PDF Downloads 133
29 Quasi-Photon Monte Carlo on Radiative Heat Transfer: An Importance Sampling and Learning Approach
Authors: Utkarsh A. Mishra, Ankit Bansal
Abstract:
At high temperatures, radiative heat transfer is the dominant mode of heat transfer. It is governed by various phenomena such as photon emission, absorption, and scattering. The solution of the governing integro-differential equation of radiative transfer is a complex process, even more so when the effects of a participating medium and wavelength-dependent properties are taken into consideration. Although a generic formulation of such radiative transport problems can be modeled for a wide variety of problems with non-gray, non-diffusive surfaces, there is always a trade-off between simplicity and accuracy. Recently, solutions of complicated mathematical problems with statistical methods based on randomization of naturally occurring phenomena have gained significant importance. Photon bundles with discrete energy can be replicated with random numbers describing the emission, absorption, and scattering processes. Photon Monte Carlo (PMC) is a simple yet powerful technique for solving radiative transfer problems in complicated geometries with an arbitrary participating medium. The method increases the accuracy of estimation on the one hand and the computational cost on the other. The participating media, generally gases such as CO₂, CO, and H₂O, present complex emission and absorption spectra. Modeling the emission/absorption accurately with random numbers requires weighted sampling, as different sections of the spectrum carry different importance. Importance sampling (IS) was implemented to sample random photons of arbitrary wavelength, and the sampled data provided unbiased training of MC estimators for better results. A better replacement for uniform random numbers is deterministic, quasi-random sequences. Halton, Sobol, and Faure low-discrepancy sequences are used in this study.
They possess better space-filling performance than a uniform random number generator and give rise to low-variance, stable Quasi-Monte Carlo (QMC) estimators with faster convergence. An optimal supervised learning scheme was further considered to reduce the computational cost of the PMC simulation. A one-dimensional plane-parallel slab problem with a participating medium was formulated. The histories of some randomly sampled photon bundles were recorded to train an Artificial Neural Network (ANN) back-propagation model. The flux was calculated using the standard quasi-PMC and was taken as the training target. Results obtained with the proposed model for the one-dimensional problem are compared with the exact analytical solution and the PMC model with the line-by-line (LBL) spectral model. The approximate variance obtained was around 3.14%. Results were analyzed with respect to time and the total flux in both cases. A significant reduction in variance, as well as a faster rate of convergence, was observed for the QMC method over the standard PMC method. However, the results obtained with the ANN method showed greater variance (around 25-28%) compared to the other cases. There is great scope for machine learning models to help in further reducing computational cost once trained successfully. Multiple ways of selecting the input data, as well as various architectures, will be tried so that the problem environment can be fully represented to the ANN model. Better results can be achieved in this unexplored domain.
Keywords: radiative heat transfer, Monte Carlo method, pseudo-random numbers, low-discrepancy sequences, artificial neural networks
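The variance advantage of low-discrepancy sequences over uniform random numbers can be illustrated on a toy integral using SciPy's quasi-Monte Carlo module; this is a minimal sketch, not the radiative-transfer estimator used in the study.

```python
import numpy as np
from scipy.stats import qmc

# Estimate the integral of x^2 over [0, 1] (exact value 1/3)
n = 4096

# Plain Monte Carlo with pseudo-random uniforms
rng = np.random.default_rng(0)
mc_est = float(np.mean(rng.random(n) ** 2))

# Quasi-Monte Carlo with a 1-D Halton (van der Corput) sequence
halton = qmc.Halton(d=1, scramble=False)
x_q = halton.random(n).ravel()
qmc_est = float(np.mean(x_q ** 2))
```

For smooth integrands the QMC error decays roughly like (log n)/n instead of the n^(-1/2) Monte Carlo rate, which is the faster convergence the abstract refers to.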
Procedia PDF Downloads 223
28 Neural Correlates of Diminished Humor Comprehension in Schizophrenia: A Functional Magnetic Resonance Imaging Study
Authors: Przemysław Adamczyk, Mirosław Wyczesany, Aleksandra Domagalik, Artur Daren, Kamil Cepuch, Piotr Błądziński, Tadeusz Marek, Andrzej Cechnicki
Abstract:
The present study aimed at the evaluation of neural correlates of the humor comprehension impairments observed in schizophrenia. To investigate the nature of this deficit in schizophrenia and to localize cortical areas involved in humor processing, we used functional magnetic resonance imaging (fMRI). The study included chronic schizophrenia outpatients (SCH; n=20) and sex-, age- and education-level-matched healthy controls (n=20). The task consisted of 60 stories (setups), of which 20 had funny, 20 nonsensical and 20 neutral (not funny) punchlines. After the punchlines were presented, the participants were asked to indicate whether the story was comprehensible (yes/no) and how funny it was (1-9 Likert-type scale). fMRI was performed on a 3T scanner (Magnetom Skyra, Siemens) using a 32-channel head coil. Three contrasts corresponding to the three stages of humor processing were analyzed in both groups: nonsensical vs neutral stories - incongruity detection; funny vs nonsensical - incongruity resolution; funny vs neutral - elaboration. Additionally, parametric modulation analysis was performed using both subjective ratings separately in order to further differentiate the areas involved in incongruity resolution processing. Statistical analysis of the behavioral data used the Mann-Whitney U test and Bonferroni's correction; fMRI data analysis utilized whole-brain voxel-wise t-tests with a 10-voxel extent threshold, either with Family-Wise Error (FWE) correction at alpha = 0.05 or uncorrected at alpha = 0.001. Between-group comparisons revealed that the SCH subjects had attenuated activation in: the right superior temporal gyrus in case of irresolvable incongruity processing of nonsensical puns (nonsensical > neutral); the left medial frontal gyrus in case of incongruity resolution processing of funny puns (funny > nonsensical); and the interhemispheric ACC in case of elaboration of funny puns (funny > neutral).
Additionally, the SCH group showed weaker activation during funniness ratings in the left ventro-medial prefrontal cortex, the medial frontal gyrus, the angular and the supramarginal gyri, and the right temporal pole. In comprehension ratings, the SCH group showed suppressed activity in the left superior and medial frontal gyri. Interestingly, these differences were accompanied by longer response times for both types of rating in the SCH group, a lower level of comprehension for funny punchlines, and higher funniness ratings for absurd punchlines. The presented results indicate that, in comparison to healthy controls, schizophrenia is characterized by difficulties in humor processing, revealed by longer reaction times, impaired understanding of jokes, and finding nonsensical punchlines funnier. This is accompanied by attenuated brain activations, especially in the left fronto-parietal and the right temporal cortices. Humor processing seems to be impaired at all three stages of the comprehension process, from incongruity detection, through its resolution, to elaboration. The neural correlates revealed diminished neural activity in the schizophrenia group as compared with the control group. The study was supported by the National Science Centre, Poland (grant no. 2014/13/B/HS6/03091).
Keywords: communication skills, functional magnetic resonance imaging, humor, schizophrenia
Procedia PDF Downloads 212
27 Relevance of Dosing Time for Everolimus Toxicity on Thyroid Gland and Hormones in Mice
Authors: Dilek Ozturk, Narin Ozturk, Zeliha Pala Kara, Engin Kaptan, Serap Sancar Bas, Nurten Ozsoy, Alper Okyar
Abstract:
Most physiological processes in mammals oscillate in a rhythmic manner, including metabolism and energy homeostasis, locomotor activity, hormone secretion, and immune and endocrine system functions. Endocrine body rhythms are tightly regulated by the circadian timing system. The hypothalamic-pituitary-thyroid (HPT) axis is under circadian control at multiple levels, from the hypothalamus to the thyroid gland. Since the circadian timing system controls a variety of biological functions in mammals, circadian rhythms of biological functions may modify drug tolerability/toxicity depending on the dosing time. The selective mTOR (mammalian target of rapamycin) inhibitor everolimus is an immunosuppressant and anticancer agent that is active against many cancers. It was also found to be active in medullary thyroid cancer. The aim of this study was to investigate the dosing time-dependent toxicity of everolimus on the thyroid gland and hormones in mice. Healthy C57BL/6J mice were synchronized with a 12h:12h light-dark cycle (LD12:12, with Zeitgeber Time 0 – ZT0 – corresponding to light onset). Everolimus was administered to male (5 mg/kg/day) and female mice (15 mg/kg/day) orally at ZT1 (rest period) and ZT13 (activity period) for 4 weeks; body weight loss, clinical signs and possible changes in serum thyroid hormone levels (TSH and free T4) were examined. Histological alterations in the thyroid gland were evaluated according to the following criteria: follicular size, colloid density and viscidity, height of the follicular epithelium and the presence of necrotic cells. The statistical significance of differences was analyzed with ANOVA. Study findings included everolimus-related diarrhea, decreased activity, decreased body weight gains, alterations in serum TSH levels, and histopathological changes in the thyroid gland. Decreases in mean body weight gains were more evident in mice treated at ZT1 than at ZT13 (p < 0.001 for both sexes).
Control tissue sections of thyroid glands exhibited well-organized histoarchitecture compared to the everolimus-treated groups. Everolimus caused histopathological alterations in the thyroid glands of male (5 mg/kg, slightly) and female mice (15 mg/kg; p < 0.01 for both ZTs compared to their controls) irrespective of dosing time. TSH levels were slightly decreased upon everolimus treatment at ZT13 in both males and females. Conversely, increases in TSH levels were observed when everolimus was administered at ZT1 in both males (5 mg/kg; p < 0.05) and females (15 mg/kg; slightly). No statistically significant alterations in serum free T4 levels were observed. TSH and free T4 are clinically important thyroid hormones, since a number of disease states have been linked to alterations in these hormones. Serum free T4 levels within the normal range in the presence of abnormal serum TSH levels in everolimus-treated mice may suggest subclinical thyroid disease, which may have repercussions on the cardiovascular system as well as on other organs and systems. Our study revealed histological damage to the thyroid gland induced by subacute everolimus administration; this effect was independent of dosing time. However, based on the body weight changes and clinical signs upon everolimus treatment, tolerability for the drug was best following dosing at ZT13 in both males and females. The effects of everolimus on thyroid function deserve further study regarding their clinical importance and chronotoxicity.
Keywords: circadian rhythm, chronotoxicity, everolimus, thyroid gland, thyroid hormones
Procedia PDF Downloads 348
26 Relevance of Dosing Time for Everolimus Toxicity in Respect to the Circadian P-Glycoprotein Expression in Mdr1a::Luc Mice
Authors: Narin Ozturk, Xiao-Mei Li, Sylvie Giachetti, Francis Levi, Alper Okyar
Abstract:
P-glycoprotein (P-gp, MDR1, ABCB1) is a transmembrane protein acting as an ATP-dependent efflux pump; it functions as a biological barrier by extruding drugs and xenobiotics out of cells in healthy tissues, especially in the intestines, liver and brain, as well as in tumor cells. The circadian timing system controls a variety of biological functions in mammals, including xenobiotic metabolism and detoxification and proliferation and cell cycle events, and may affect the pharmacokinetics, toxicity and efficacy of drugs. The selective mTOR (mammalian target of rapamycin) inhibitor everolimus is an immunosuppressant and anticancer drug that is active against many cancers, and its pharmacokinetics depend on P-gp. The aim of this study was to investigate the dosing time-dependent toxicity of everolimus with respect to intestinal P-gp expression rhythms in mdr1a::Luc mice using the Real Time-Biolumicorder (RT-BIO) System. Mdr1a::Luc male mice were synchronized with 12 h of light and 12 h of dark (LD12:12, with Zeitgeber Time 0 – ZT0 – corresponding to light onset). After 1 week of baseline recordings, everolimus (5 mg/kg/day x 14 days) was administered orally at ZT1 (resting period) and ZT13 (activity period) to mdr1a::Luc mice singly housed in an innovative monitoring device, Real Time-Biolumicorder units, which allow real-time, long-term monitoring of gene expression in freely moving mice. D-luciferin (1.5 mg/mL) was dissolved in the drinking water. The mouse intestinal mdr1a::Luc oscillation profile reflecting P-gp gene expression and the locomotor activity pattern were recorded every minute with a photomultiplier tube and an infrared sensor, respectively. General behavior and clinical signs were monitored, and body weight was measured every day as an index of toxicity. Drug-induced body weight change was expressed relative to body weight on the initial treatment day. The statistical significance of differences between groups was validated with ANOVA. Circadian rhythms were validated with cosinor analysis.
Everolimus toxicity changed as a function of drug timing and was least following dosing at ZT13, near the onset of the activity span in male mice. Mean body weight loss was nearly twice as large in mice treated with 5 mg/kg everolimus at ZT1 as compared to ZT13 (8.9% vs. 5.4%; ANOVA, p < 0.001). Based on the body weight loss and clinical signs upon everolimus treatment, tolerability for the drug was best following dosing at ZT13. Both rest-activity and mdr1a::Luc expression displayed stable 24-h periodic rhythms before everolimus treatment and in vehicle-treated controls. The real-time bioluminescence pattern of mdr1a revealed a circadian rhythm with a 24-h period and an acrophase at ZT16 (cosinor, p < 0.001). Mdr1a expression remained rhythmic in everolimus-treated mice, whereas down-regulation of P-gp expression was observed in 2 of 4 mice. The study identified the circadian pattern of intestinal P-gp expression with unprecedented precision. Circadian dosing time, in relation to P-gp expression rhythms, may play a crucial role in the tolerability/toxicity of everolimus. The circadian changes in mdr1a gene expression deserve further study regarding their relevance for the in vitro and in vivo chronotolerance of mdr1a-transported anticancer drugs. Chronotherapy with P-gp-effluxed anticancer drugs could then be applied according to their rhythmic patterns in host and tumor to jointly maximize treatment efficacy and minimize toxicity.
Keywords: circadian rhythm, chronotoxicity, everolimus, mdr1a::Luc mice, p-glycoprotein
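The cosinor analysis used to validate these rhythms can be sketched as a linear least-squares fit of cosine and sine terms with a fixed 24-h period; the bioluminescence trace below is synthetic, constructed to peak at ZT16 as reported for mdr1a::Luc, not recorded data.

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Single-component cosinor: y ~ mesor + amplitude * cos(2*pi*t/period - acrophase)."""
    w = 2.0 * np.pi / period
    # Linearize by fitting beta1*cos(wt) + beta2*sin(wt) around the mesor
    X = np.column_stack([np.ones_like(t_hours), np.cos(w * t_hours), np.sin(w * t_hours)])
    mesor, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b1, b2)
    acrophase_h = (np.arctan2(b2, b1) / w) % period  # peak time in ZT hours
    return mesor, amplitude, acrophase_h

# Synthetic 3-day bioluminescence record sampled every 30 min, peaking at ZT16
t = np.arange(0.0, 72.0, 0.5)
y = 100.0 + 30.0 * np.cos(2 * np.pi * (t - 16.0) / 24.0)
mesor, amp, phase = cosinor_fit(t, y)
```

Standard cosinor analysis additionally reports confidence limits and a zero-amplitude test for the rhythm's significance; the sketch shows only the point estimates.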
Procedia PDF Downloads 341
25 Predicting Acceptance and Adoption of Renewable Energy Community Solutions: The Prosumer Psychology
Authors: Francois Brambati, Daniele Ruscio, Federica Biassoni, Rebecca Hueting, Alessandra Tedeschi
Abstract:
This research, in the frame of the social acceptance of renewable energies and community-based production and consumption models, aims at (1) supporting a data-driven approach to dealing with climate change and (2) identifying and quantifying the psycho-sociological dimensions and factors that could support the transition from a technology-driven approach to a consumer-driven approach through the emerging "prosumer" business models. In addition to the existing social acceptance dimensions, this research tries to identify a purely individual psychological fourth dimension to understand the processes and factors underlying individual acceptance and adoption of renewable energy business models, realizing a Prosumer Acceptance Index. Questionnaire data collection was performed through an online survey platform, combining standardized and ad-hoc questions adapted for the research purposes. To identify the main factors (individual/social) influencing the relation with renewable energy technology (RET) adoption, a factor analysis was conducted to identify the latent variables that are related to each other, revealing 5 latent psychological factors. Factor 1. Concern about environmental issues: awareness of global environmental issues, with strong beliefs and pro-environmental attitudes raising concern about environmental issues. Factor 2. Interest in energy sharing: attentiveness to solutions for the local community's collective consumption, to reduce individual environmental impact, sustainably improve the local community, and sell extra energy to the general electricity grid. Factor 3. Concern about climate change: awareness of the consequences of environmental issues for climate change, especially on a global scale, developing pro-environmental attitudes towards the course of global climate change and sensitivity about behaviours aimed at mitigating human impact. Factor 4. Social influence: social support seeking from peers.
When adopting RET, advice from significant others is sought, internalizing the commonly perceived social norms of the national/geographical region. Factor 5. Impact on bill cost: inclination to adopt a RET when perceived economic incentives affecting the decision-making process could result in less expensive or unvaried bills. Linear regression was conducted to identify and quantify the factors that could best predict the behavioural intention to become a prosumer. An overall scale measuring "acceptance of a renewable energy solution" was used as the dependent variable, allowing us to quantify the five factors that contribute to its measurement: awareness of environmental issues and climate change, environmental attitudes, social influence, and environmental risk perception. Three variables significantly measure and predict the scores of the "Acceptance in becoming a prosumer" ad-hoc scale. Variable 1. Attitude: agreement with specific environmental issues and global climate change concerns and evaluations towards a behavioural intention. Variable 2. Economic incentive: perceived behavioural control and its related environmental risk perception, in terms of perceived short-term benefits and long-term costs, both part of the decision-making process as expected outcomes of the behaviour itself. Variable 3. Age: despite fewer economic possibilities, younger adults seem to be more sensitive to environmental dimensions and issues than older adults. This research can help policymakers and relevant stakeholders to better understand which psycho-sociological factors intervene in these processes and what specifically to target, and how, when proposing change towards sustainable energy production and consumption.
Keywords: behavioural intention, environmental risk perception, prosumer, renewable energy technology, social acceptance
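The regression step can be sketched as an ordinary least-squares fit of the acceptance score on the three predictors; the predictor scores, coefficients and noise level below are synthetic stand-ins, not the survey data.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical standardized scores for the three predictors
attitude = rng.normal(size=n)
incentive = rng.normal(size=n)
age = rng.normal(size=n)

# Synthetic acceptance score: attitude and incentive raise it, age lowers it
acceptance = 0.5 * attitude + 0.3 * incentive - 0.2 * age + rng.normal(scale=0.1, size=n)

# OLS fit: intercept, beta_attitude, beta_incentive, beta_age
X = np.column_stack([np.ones(n), attitude, incentive, age])
coefs = np.linalg.lstsq(X, acceptance, rcond=None)[0]
```

In the study proper, significance tests on each coefficient would determine which predictors are retained; the sketch only recovers the coefficients.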
Procedia PDF Downloads 129
24 Phytochemicals and Photosynthesis of Grape Berry Exocarp and Seed (Vitis vinifera, cv. Alvarinho): Effects of Foliar Kaolin and Irrigation
Authors: Andreia Garrido, Artur Conde, Ana Cunha, Ric De Vos
Abstract:
Climate change predictions point to increases in abiotic stress for crop plants in Portugal, such as pronounced temperature variation and decreased precipitation, which will have a negative impact on grapevine physiology and, consequently, on grape berry and wine quality. Short-term mitigation strategies have therefore been implemented to alleviate the impacts caused by adverse climatic periods. These strategies include foliar application of kaolin, an inert mineral with radiation-reflecting properties that decreases the stress caused by excessive heat/radiation absorbed by the leaves, as well as smart irrigation strategies to avoid water stress. However, little is known about the influence of these mitigation measures on the photosynthetic activity of grape berries or on the photosynthesis-related metabolic profiles of their various tissues. Moreover, the role of fruit photosynthesis in berry quality is poorly understood. The main objective of our work was to assess the effects of kaolin and irrigation treatments on the photosynthetic activity of grape berry tissues (exocarp and seeds) and on their global metabolic profile, also investigating their possible relationship. We therefore collected berries of field-grown plants of the white grape variety Alvarinho from two distinct microclimates, i.e., from clusters exposed to high light (HL, 150 µmol photons m⁻² s⁻¹) and low light (LL, 50 µmol photons m⁻² s⁻¹), from both kaolin- and non-kaolin-treated (control) plants at three fruit developmental stages (green, véraison and mature). Plant irrigation was applied after harvesting the green berries, which also enabled comparison of véraison and mature berries from irrigated and non-irrigated growth conditions. Photosynthesis was assessed by pulse-amplitude-modulated chlorophyll fluorescence imaging analysis, and the metabolite profile of both tissues was assessed by complementary metabolomics approaches.
Foliar kaolin application resulted in, for instance, increased photosynthetic activity of the exocarp of LL-grown berries at the green developmental stage as compared to the non-kaolin control, with a concomitant increase in the levels of several lipid-soluble isoprenoids (chlorophylls, carotenoids, and tocopherols). The exocarp of mature berries grown in the HL microclimate on kaolin-sprayed, non-irrigated plants had a higher total sugar content than all other treatments, suggesting that foliar application of this mineral results in an increased accumulation of photoassimilates in mature berries. Unbiased liquid chromatography-mass spectrometry-based profiling of semi-polar compounds, followed by ASCA (ANOVA simultaneous component analysis) and ANOVA statistical analysis, indicated that kaolin had no or inconsistent effects on the flavonoid and phenylpropanoid composition in both seed and exocarp at any developmental stage; in contrast, both microclimate and irrigation influenced the levels of several of these compounds, depending on berry ripening stage. Overall, our study provides more insight into the effects of mitigation strategies on berry tissue photosynthesis and phytochemistry under contrasting conditions of cluster light microclimate. We hope that this may contribute to developing sustainable management in vineyards and to maintaining grape berries and wines of high quality even under increasing abiotic stress challenges.
Keywords: climate change, grape berry tissues, metabolomics, mitigation strategies
Procedia PDF Downloads 122
23 Inhibitory Effects of Crocin from Crocus sativus L. on Cell Proliferation of a Medulloblastoma Human Cell Line
Authors: Kyriaki Hatziagapiou, Eleni Kakouri, Konstantinos Bethanis, Alexandra Nikola, Eleni Koniari, Charalabos Kanakis, Elias Christoforides, George Lambrou, Petros Tarantilis
Abstract:
Medulloblastoma is a highly invasive tumour, as it tends to disseminate throughout the central nervous system early in its course. Despite the high 5-year survival rate, a significant number of patients demonstrate serious long- or short-term sequelae (e.g., myelosuppression, endocrine dysfunction, cardiotoxicity, neurological deficits and cognitive impairment) and higher mortality rates, unrelated to the initial malignancy itself but rather to the aggressive treatment. A strong rationale exists for the use of Crocus sativus L. (saffron) and its bioactive constituents (crocin, crocetin, safranal) as pharmaceutical agents, as they exert significant health-promoting properties. Unlike other carotenoids, crocins are highly water-soluble compounds, with relatively low toxicity, as they are not stored in adipose and liver tissues. Crocins have attracted wide attention as promising anti-cancer agents due to their antioxidant, anti-inflammatory, and immunomodulatory effects, their interference with transduction pathways implicated in tumorigenesis, angiogenesis, and metastasis (disruption of mitotic spindle assembly, inhibition of DNA topoisomerases, cell-cycle arrest, apoptosis or cell differentiation), and the sensitization of cancer cells to radiotherapy and chemotherapy. The current research aimed to study the potential cytotoxic effect of crocins on the TE671 medulloblastoma cell line, which may be useful in the optimization of existing and the development of new therapeutic strategies. Crocins were extracted from stigmas of saffron in an ultrasonic bath, using petroleum ether, diethyl ether and methanol 70% v/v as solvents, and the final extract was lyophilized. Identification of crocins was performed by high-performance liquid chromatography (HPLC), comparing the UV-vis spectra and the retention times (tR) of the peaks with literature data. For the biological assays, crocin was dissolved in nuclease- and protease-free water.
TE671 cells were incubated with a range of concentrations of crocins (16, 8, 4, 2, 1, 0.5 and 0.25 mg/ml) for 24, 48, 72 and 96 hours. Cell viability after incubation with crocins was analysed with the Alamar Blue viability assay. The active ingredient of Alamar Blue, resazurin, is a blue, nontoxic, cell-permeable compound that is virtually nonfluorescent. Upon entering cells, resazurin is reduced to a pink, fluorescent molecule, resorufin. Viable cells continuously convert resazurin to resorufin, generating a quantitative measure of viability. The colour of resorufin was quantified by measuring the absorbance of the solution at 600 nm with a spectrophotometer. HPLC analysis indicated that the most abundant crocins in our extract were trans-crocin-4 and trans-crocin-3. Crocins exerted significant cytotoxicity in a dose- and time-dependent manner (p < 0.005 for cells exposed to any concentration at 48, 72 and 96 hours versus unexposed cells); as their concentration and time of exposure increased, the reduction of resazurin to resorufin decreased, indicating a reduction in cell viability. IC50 values were calculated as ~3.738, 1.725, 0.878 and 0.7566 mg/ml at 24, 48, 72 and 96 hours, respectively. The results of our study could afford the basis of research regarding the use of natural carotenoids as anticancer agents and the shift to targeted therapy with higher efficacy and limited toxicity. Acknowledgements: The research was funded by Fellowships of Excellence for Postgraduate Studies IKY-Siemens Programme. Keywords: crocetin, crocin, medulloblastoma, saffron
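The IC50 values reported above are derived from the dose-response viability data. As an illustration of how such a value can be estimated (a minimal sketch using hypothetical viability numbers, not the authors' actual data or fitting procedure), log-linear interpolation between the two concentrations bracketing 50% viability gives a quick estimate:

```python
import numpy as np

def estimate_ic50(concentrations, viability_pct):
    """Estimate IC50 by log-linear interpolation between the two
    concentrations that bracket 50% viability.
    concentrations: increasing dose levels (mg/ml);
    viability_pct: measured % viability at each dose (decreasing)."""
    c = np.asarray(concentrations, dtype=float)
    v = np.asarray(viability_pct, dtype=float)
    for i in range(len(v) - 1):
        if v[i] >= 50.0 >= v[i + 1]:
            # Interpolate on the log-concentration axis, the usual dose scale
            lo, hi = np.log10(c[i]), np.log10(c[i + 1])
            frac = (v[i] - 50.0) / (v[i] - v[i + 1])
            return 10 ** (lo + frac * (hi - lo))
    raise ValueError("50% viability is not bracketed by the data")

# Hypothetical 24 h dose-response (mg/ml vs. % viability)
doses = [0.25, 0.5, 1, 2, 4, 8, 16]
viab = [95, 88, 79, 63, 48, 30, 12]
ic50 = estimate_ic50(doses, viab)  # falls between 2 and 4 mg/ml
```

In practice a four-parameter logistic (Hill) fit over all doses is more robust than bracketing interpolation, but the idea of locating the 50% crossing on the log-dose axis is the same.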
Procedia PDF Downloads 215
22 Project Management Practices and Operational Challenges in Conflict Areas: Case Study Kewot Woreda North Shewa Zone, Amhara Region, Ethiopia
Authors: Rahel Birhane Eshetu
Abstract:
This research investigates the complex landscape of project management practices and operational challenges in conflict-affected areas, with a specific focus on Kewot Woreda in the North Shewa Zone of the Amhara region in Ethiopia. The study aims to identify essential project management methodologies, the significant operational hurdles faced, and the adaptive strategies employed by project managers in these challenging environments. Utilizing a mixed-methods approach, the research combines qualitative and quantitative data collection. Initially, a comprehensive literature review was conducted to establish a theoretical framework. This was followed by the administration of questionnaires to gather empirical data, which was then analyzed using statistical software. This sequential approach ensures a robust understanding of the context and challenges faced by project managers. The findings reveal that project managers in conflict zones encounter a range of escalating challenges. Initially, they must contend with immediate security threats and the presence of displaced populations, which significantly disrupt project initiation and execution. As projects progress, additional challenges arise, including limited access to essential resources and environmental disruptions such as natural disasters. These factors exacerbate the operational difficulties that project managers must navigate. In response to these challenges, the study highlights the necessity for project managers to implement formal project plans while simultaneously adopting adaptive strategies that evolve over time. Key adaptive strategies identified include flexible risk management frameworks, change management practices, and enhanced stakeholder engagement approaches. These strategies are crucial for maintaining project momentum and ensuring that objectives are met despite the unpredictable nature of conflict environments. 
The research emphasizes that structured scope management, clear documentation, and thorough requirements analysis are vital components for effectively navigating the complexities inherent in conflict-affected regions. However, the ongoing threats and logistical barriers necessitate continuous adjustment of project management methodologies. This adaptability is not only essential for the immediate success of projects but also for fostering long-term resilience within the community. In conclusion, the study offers actionable recommendations aimed at improving project management practices in conflict zones. These include the adoption of adaptive frameworks specifically tailored to the unique conditions of conflict environments and targeted training for project managers. Such training should focus on equipping managers with the skills to better address the dynamic challenges presented by conflict situations. The insights gained from this research contribute significantly to the broader field of project management, providing a practical guide for practitioners operating in high-risk areas. By emphasizing sustainable and resilient project outcomes, this study underscores the importance of adaptive management strategies in ensuring the success of projects in conflict-affected regions. The findings serve not only to enhance the understanding of project management practices in Kewot Woreda but also to inform future research and practice in similar contexts, ultimately aiming to promote stability and development in areas beset by conflict. Keywords: project management practices, operational challenges, conflict zones, adaptive strategies
Procedia PDF Downloads 12
21 Confirming the Factors of Professional Readiness in Athletic Training
Authors: Philip A. Szlosek, M. Susan Guyer, Mary G. Barnum, Elizabeth M. Mullin
Abstract:
In the United States, athletic training is a healthcare profession that encompasses the prevention, examination, diagnosis, treatment, and rehabilitation of injuries and medical conditions. Athletic trainers work under the direction of or in collaboration with a physician and are recognized by the American Medical Association as allied healthcare professionals. Internationally, this profession is often known as athletic therapy. As healthcare professionals, athletic trainers must be prepared for autonomous practice immediately after graduation. However, new athletic trainers have been shown to have clinical areas of strength and weakness. To better assess professional readiness and improve the preparedness of new athletic trainers, the factors of athletic training professional readiness must be defined. Limited research exists defining the holistic aspects of professional readiness needed for athletic trainers. Confirming the factors of professional readiness in athletic training could enhance the professional preparation of athletic trainers and result in more highly prepared new professionals. The objective of this study was to further explore and confirm the factors of professional readiness in athletic training. The authors used a qualitative design based in grounded theory. Participants included athletic trainers with greater than 24 months of experience from a variety of work settings in each district of the National Athletic Trainers' Association. Participants took the demographic questionnaire electronically using Qualtrics Survey Software (Provo, UT). After completing the demographic questionnaire, 20 participants were selected to complete one-on-one interviews using GoToMeeting audiovisual web conferencing software. IBM Statistical Package for the Social Sciences (SPSS, v. 21.0) was used to calculate descriptive statistics for participant demographics.
The first author transcribed all interviews verbatim and utilized a grounded theory approach during qualitative data analysis. Data were analyzed using constant comparative analysis and open and axial coding. Trustworthiness was established using reflexivity, member checks, and peer reviews. Analysis revealed four overarching themes: management, interpersonal relations, clinical decision-making, and confidence. Management was categorized as athletic training services not involving direct patient care and was divided into three subthemes: administration skills, advocacy, and time management. Interpersonal relations was categorized as the need and ability of the athletic trainer to properly interact with others and was divided into three subthemes: personality traits, communication, and collaborative practice. Clinical decision-making was categorized as the skills and attributes required by the athletic trainer when making clinical decisions related to patient care and was divided into three subthemes: clinical skills, continuing education, and reflective practice. The final theme was confidence. Participants discussed the importance of confidence regarding relationship building, clinical and administrative duties, and clinical decision-making. Overall, participants explained the value of a well-rounded athletic trainer and emphasized that athletic trainers need communication and organizational skills and the ability to collaborate, and must value self-reflection and continuing education in addition to having clinical expertise.
Future research should finalize a comprehensive model of professional readiness for athletic training, develop a holistic assessment instrument for athletic training professional readiness, and explore the preparedness of new athletic trainers. Keywords: autonomous practice, newly certified athletic trainer, preparedness for professional practice, transition to practice skills
Procedia PDF Downloads 149
20 Biosynthesis of Silver Nanoparticles Using Zataria multiflora Extract, and Study of Their Antibacterial Effects on Negative Bacillus Bacteria Causing Urinary Tract Infection
Authors: F. Madani, M. Doudi, L. Rahimzadeh Torabi
Abstract:
The irregular consumption of current antibiotics contributes to an escalation in antibiotic resistance among urinary pathogens on a global scale. The objective of this research was to investigate the biological synthesis of silver nanoparticles using Zataria multiflora extract and to evaluate the efficacy of the synthesized nanoparticles in inhibiting the growth of multi-drug-resistant gram-negative bacilli, which commonly cause urinary tract infections. The botanical specimen utilized in the current investigation was Z. multiflora, and its extract was produced employing the Soxhlet extraction technique. The study examined the green synthesis conditions of silver nanoparticles by considering three key parameters: the quantity of extract used, the concentration of silver nitrate salt, and the temperature. The particle dimensions were ascertained using the Zetasizer technique. TEM, XRD, and FTIR methods were used to characterize the synthesized silver nanoparticles. To evaluate the antibacterial effects of the biologically synthesized nanoparticles, different concentrations of silver nanoparticles were studied on 140 multiple-drug-resistant (MDR) strains of Escherichia coli, Klebsiella pneumoniae, Enterobacter aerogenes, Proteus vulgaris, Citrobacter freundii, Acinetobacter baumannii and Pseudomonas aeruginosa (20 samples per genus), all of which were MDR and cause urinary tract infections. Bacteria were identified using PCR, and their sensitivity to the nanoparticles was assessed with laboratory methods (agar well diffusion and microdilution). The data were subjected to analysis using the statistical software SPSS, specifically employing nonparametric Kruskal-Wallis and Mann-Whitney tests. This study yielded noteworthy findings regarding the effects of varying concentrations of silver nitrate, different quantities of Z.
multiflora extract, and levels of temperature on the nanoparticles. Specifically, it was observed that an increase in the concentration of silver nitrate, extract amount, and temperature resulted in a reduction in the size of the nanoparticles synthesized. However, the impact of the aforementioned factors on the particle dispersion index was found to be statistically non-significant. According to the transmission electron microscopy (TEM) findings, the particles exhibited predominantly spherical morphology, with diameters spanning from 25 to 50 nanometers, and XRD confirmed the presence of silver nanocrystals in the examined sample. The FTIR method illustrated that the spectra of Z. multiflora and the synthesized nanoparticles had clear peaks in the ranges of 1500-2000 and 3500-4000. According to the agar well diffusion and microdilution methods, the biologically synthesized nanoparticles at 1000 mg/ml produced the highest mean inhibition zone diameter in E. coli (23 mm) and the lowest in A. baumannii (15 mm). The MIC was 125 mg/ml for all of the bacteria except A. baumannii, for which it was 250 mg/ml. Comparing the growth inhibitory effects, both the chemically synthesized and the biologically synthesized nanoparticles exhibited notable growth inhibition; the chemically synthesized nanoparticles demonstrated the highest level of growth inhibition, at a concentration of 62.5 mg/ml. The present study demonstrated an inhibitory effect on the growth of the multidrug-resistant (MDR) bacteria that cause urinary tract infection. Keywords: multiple drug resistance, negative bacillus bacteria, urine infection, Zataria multiflora
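The abstract reports SPSS with nonparametric Kruskal-Wallis and Mann-Whitney tests; the same comparison of inhibition-zone diameters can be sketched in Python with scipy (the zone values below are hypothetical, chosen only to illustrate the workflow, not the study's measurements):

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical inhibition-zone diameters (mm) at one nanoparticle
# concentration, five replicate plates per genus
zones = {
    "E. coli":       [22, 23, 24, 23, 22],
    "K. pneumoniae": [19, 20, 18, 21, 20],
    "A. baumannii":  [15, 14, 16, 15, 15],
}

# Kruskal-Wallis: do the genera differ overall?
h_stat, p_overall = kruskal(*zones.values())

# Mann-Whitney U as a pairwise follow-up between two genera
u_stat, p_pair = mannwhitneyu(
    zones["E. coli"], zones["A. baumannii"], alternative="two-sided"
)
```

Because zone diameters are not guaranteed to be normally distributed across genera, rank-based tests like these avoid the normality assumption of ANOVA.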
Procedia PDF Downloads 102
19 The Use of the TRIGRS Model and Geophysics Methodologies to Identify Landslides Susceptible Areas: Case Study of Campos do Jordao-SP, Brazil
Authors: Tehrrie Konig, Cassiano Bortolozo, Daniel Metodiev, Rodolfo Mendes, Marcio Andrade, Marcio Moraes
Abstract:
Gravitational mass movements are recurrent events in Brazil, usually triggered by intense rainfall. When these events occur in urban areas, they end up becoming disasters due to the economic damage, social impact, and loss of human life. To identify landslide-susceptible areas, it is important to know the geotechnical parameters of the soil, such as cohesion, internal friction angle, unit weight, hydraulic conductivity, and hydraulic diffusivity. These parameters are measured by collecting soil samples for laboratory analysis and by using geophysical methodologies, such as the Vertical Electrical Survey (VES). Geophysical surveys analyze the soil properties with minimal impact on its initial structure. Statistical analyses and physically based mathematical models are used to model and calculate the Factor of Safety for steep slope areas. In general, such mathematical models combine slope stability models with hydrological models. One example is the mathematical model TRIGRS (Transient Rainfall Infiltration and Grid-based Regional Slope-Stability Model), which calculates the variation of the Factor of Safety over a given study area. The model relies on changes in pore pressure and soil moisture during a rainfall event. TRIGRS was written in the Fortran programming language and associates the hydrological model, which is based on the Richards equation, with a stability model based on the limit equilibrium principle. Therefore, the aim of this work is to model the slope stability of Campos do Jordão with TRIGRS, using geotechnical and geophysical methodologies to acquire the soil properties. The study area is located in the southeast of Sao Paulo State, in the Mantiqueira Mountains, and has a historical record of landslides. During the fieldwork, soil samples were collected and the VES method was applied. These procedures provided the soil properties, which were used as input data in the TRIGRS model.
The hydrological data (infiltration rate and initial water table height) and the rainfall duration and intensity were acquired from the eight rain gauges installed by Cemaden in the study area. A very high spatial resolution digital terrain model was used to identify the slope declivity. The analyzed period is from March 6th to March 8th of 2017. As results, the TRIGRS model calculates the variation of the Factor of Safety within a 72-hour period in which two heavy rainfall events struck the area and six landslides were registered. After each rainfall, the Factor of Safety declined, as expected. The landslides happened in areas identified by the model with low values of the Factor of Safety, proving its efficiency in the identification of landslide-susceptible areas. This study presents a critical threshold for landslides, in which an accumulated rainfall higher than 80mm/m² in 72 hours might trigger landslides in urban and natural slopes. The geotechnical and geophysical methods are shown to be very useful to identify the soil properties and provide the geological characteristics of the area. Therefore, combining geotechnical and geophysical methods for soil characterization with the modeling of landslide-susceptible areas with TRIGRS is useful for urban planning. Furthermore, early warning systems can be developed by combining the TRIGRS model and weather forecasts, to prevent disasters on urban slopes. Keywords: landslides, susceptibility, TRIGRS, vertical electrical survey
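TRIGRS couples the Richards-equation infiltration model to an infinite-slope limit-equilibrium calculation; the stability side reduces to a factor-of-safety expression in which a rising pressure head erodes the cohesive term. A minimal sketch of that relation follows, with hypothetical soil parameters rather than the Campos do Jordão values:

```python
import math

def factor_of_safety(c, phi_deg, gamma_s, depth, beta_deg, psi, gamma_w=9.81):
    """Infinite-slope factor of safety with transient pore pressure,
    the stability relation underlying TRIGRS:
      FS = tan(phi)/tan(beta)
           + (c - psi*gamma_w*tan(phi)) / (gamma_s*Z*sin(beta)*cos(beta))
    c: effective cohesion (kPa); phi_deg: friction angle (deg);
    gamma_s: soil unit weight (kN/m^3); depth: slip depth Z (m);
    beta_deg: slope angle (deg); psi: pressure head at depth Z (m)."""
    phi = math.radians(phi_deg)
    beta = math.radians(beta_deg)
    frictional = math.tan(phi) / math.tan(beta)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * depth * math.sin(beta) * math.cos(beta))
    return frictional + cohesive

# Hypothetical slope: FS drops as infiltration raises the pressure head
fs_dry = factor_of_safety(c=8.0, phi_deg=30, gamma_s=18.0,
                          depth=2.0, beta_deg=35, psi=0.0)
fs_wet = factor_of_safety(c=8.0, phi_deg=30, gamma_s=18.0,
                          depth=2.0, beta_deg=35, psi=1.5)
```

With these illustrative parameters the slope is stable when dry (FS > 1) and unstable after the pressure head rises to 1.5 m (FS < 1), mirroring the post-rainfall decline in Factor of Safety described above.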
Procedia PDF Downloads 171
18 Structural Characteristics of HPDSP Concrete on Beam Column Joints
Authors: Hari Krishan Sharma, Sanjay Kumar Sharma, Sushil Kumar Swar
Abstract:
Inadequate transverse reinforcement is considered the main reason for the beam-column joint shear failures observed during recent earthquakes. The DSP matrix consists of cement and a high content of micro-silica with a low water-to-cement ratio, while the aggregates are graded quartz sand. The use of reinforcing fibres leads not only to an increase in tensile/bending strength and specific fracture energy, but also to a reduction of brittleness and, consequently, to non-explosive ruptures. Besides, fibre-reinforced materials are more homogeneous and less sensitive to small defects and flaws. Recent works on the freeze-thaw durability (also in the presence of de-icing salts) of fibre-reinforced DSP confirm its excellent behaviour over the expected long-term service life. DSP materials, including fibre-reinforced DSP and CRC (Compact Reinforced Composites), are obtained by using high quantities of superplasticizers and high volumes of micro-silica. Steel fibres with high tensile yield strength, smaller diameter, and short length, in different fibre volume percentages and aspect ratios, are utilized to improve performance by reducing the brittleness of the matrix material. In the case of High Performance Densified Small Particle Concrete (HPDSPC), the concrete is dense at the micro-structure level, so the tensile strain would be much higher than that of conventional SFRC, SIFCON and SIMCON. Beam-column sub-assemblages used as moment-resisting frames were constructed using HPDSPC in the joint region, with varying quantities of steel fibres, fibre aspect ratios and fibre orientations in the critical section. These sub-assemblages with HPDSPC in the joint region were tested under cyclic/earthquake loading.
Besides load measurements, frame displacements, diagonal joint strain and rebar strain adjacent to the joint were also measured to investigate the stress-strain behaviour, load-deformation characteristics, joint shear strength, failure mechanism, ductility-associated parameters, and stiffness and energy-dissipation parameters of the beam-column sub-assemblages. Finally, a design procedure is proposed for the optimum design of HPDSPC beam-column joint sub-assemblages corresponding to moment, shear and axial forces. The fact that implementing a material brittleness measure in the design of RC structures can improve structural reliability by providing uniform safety margins over a wide range of structural sizes and material compositions is well recognized in structural design and research. This led to the development of high performance concrete for the optimized combination of various structural properties. The structural applications of HPDSPC, because of its extremely high strength, will reduce dead load significantly as compared to normal-weight concrete, thereby offering substantial cost savings through improved seismic response, longer spans, thinner sections, less reinforcing steel and lower foundation costs. These cost-effective parameters will make this material more versatile for use in various structural applications such as beam-column joints in industries, airports, parking areas, docks and harbours, as well as containers for hazardous material, safety boxes, and moulds and tools for polymer composites and metals. Keywords: high performance densified small particle concrete (HPDSPC), steel fibre reinforced concrete (SFRC), slurry infiltrated concrete (SIFCON), slurry infiltrated mat concrete (SIMCON)
Procedia PDF Downloads 301
17 Assessing Diagnostic and Evaluation Tools for Use in Urban Immunisation Programming: A Critical Narrative Review and Proposed Framework
Authors: Tim Crocker-Buque, Sandra Mounier-Jack, Natasha Howard
Abstract:
Background: Due to both the increasing scale and speed of urbanisation, urban areas in low- and middle-income countries (LMICs) host increasingly large populations of under-immunised children, with the additional associated risks of rapid disease transmission in high-density living environments. Multiple interdependent factors are associated with these coverage disparities in urban areas, and most evidence comes from relatively few countries, predominantly India, Kenya, and Nigeria, with some from Pakistan, Iran, and Brazil. This study aimed to identify, describe, and assess the main tools used to measure or improve coverage of immunisation services in poor urban areas. Methods: Authors used a qualitative review design, including academic and non-academic literature, to identify tools used to improve coverage of public health interventions in urban areas. Authors selected and extracted sources that provided good examples of specific tools, or categories of tools, used in a context relevant to urban immunisation. Diagnostic tools (e.g., for data collection, analysis, and insight generation), programme tools (e.g., for investigating or improving ongoing programmes), and interventions (e.g., multi-component or stand-alone with evidence) were selected for inclusion to provide a range of types and availability of relevant tools. These were then prioritised using a decision-analysis framework, and a tool selection guide for programme managers was developed. Results: Authors reviewed tools used in urban immunisation contexts and tools designed for (i) non-immunisation and/or non-health interventions in urban areas, and (ii) immunisation in rural contexts that had relevance for urban areas (e.g., Reaching Every District/Child/Zone). Many approaches combined several tools and methods, which authors categorised as diagnostic, programme, and intervention.
The most common diagnostic tools were cross-sectional surveys, key informant interviews, focus group discussions, secondary analysis of routine data, and geographical mapping of outcomes, resources, and services. Programme tools involved multiple stages of data collection, analysis, insight generation, and intervention planning, and included guidance documents from WHO (World Health Organisation), UNICEF (United Nations Children's Fund), USAID (United States Agency for International Development), and governments, as well as articles reporting on diagnostics, interventions, and/or evaluations to improve urban immunisation. Interventions involved service improvement, education, reminder/recall, incentives, outreach, or mass media, or were multi-component. The main gaps in existing tools were assessment of macro/policy-level factors, exploration of effective immunisation communication channels, and measurement of in/out-migration. The proposed framework uses a problem-tree approach to suggest tools addressing five common challenges (i.e., identifying populations, understanding communities, issues with service access and use, improving services, and improving coverage) based on context and available data. Conclusion: This study identified many tools relevant to evaluating urban LMIC immunisation programmes, with significant crossover between tools. This was encouraging in terms of supporting the identification of common areas, but problematic as data volumes, instructions, and activities could overwhelm managers, and tools are not always applied in suitable contexts. Further research is needed on how best to combine tools and methods to suit local contexts. The authors' initial framework can be tested and developed further. Keywords: health equity, immunisation, low and middle-income countries, poverty, urban health
Procedia PDF Downloads 139
16 Identification of a Panel of Epigenetic Biomarkers for Early Detection of Hepatocellular Carcinoma in Blood of Individuals with Liver Cirrhosis
Authors: Katarzyna Lubecka, Kirsty Flower, Megan Beetch, Lucinda Kurzava, Hannah Buvala, Samer Gawrieh, Suthat Liangpunsakul, Tracy Gonzalez, George McCabe, Naga Chalasani, James M. Flanagan, Barbara Stefanska
Abstract:
Hepatocellular carcinoma (HCC), the most prevalent type of primary liver cancer, is the second leading cause of cancer death worldwide. Late onset of clinical symptoms in HCC results in late diagnosis and poor disease outcome. Approximately 85% of individuals with HCC have underlying liver cirrhosis. However, not all cirrhotic patients develop cancer. Reliable early detection biomarkers that can distinguish cirrhotic patients who will develop cancer from those who will not are urgently needed and could increase the cure rate from 5% to 80%. We used the Illumina 450K microarray to test whether blood DNA, an easily accessible source of DNA, bears site-specific changes in DNA methylation in response to HCC before diagnosis with conventional tools (pre-diagnostic). The top 11 differentially methylated sites were selected for validation by pyrosequencing. The diagnostic potential of the 11 pyrosequenced probes was tested in blood samples from a prospective cohort of cirrhotic patients. We identified 971 differentially methylated CpG sites in pre-diagnostic HCC cases as compared with healthy controls (P < 0.05, paired Wilcoxon test, ICC ≥ 0.5). Nearly 76% of differentially methylated CpG sites showed lower levels of methylation in cases vs. controls (P = 2.973E-11, Wilcoxon test). Classification of the CpG sites according to their location relative to CpG islands and transcription start sites revealed that those hypomethylated loci are located in regulatory regions important for gene transcription, such as CpG island shores, promoters, and 5'UTRs, at a higher frequency than hypermethylated sites. Among the 735 CpG sites hypomethylated in cases vs. controls, 482 sites were assigned to gene coding regions, whereas the 236 hypermethylated sites corresponded to 160 genes.
Bioinformatics analysis using the GO, KEGG, and DAVID knowledge bases indicates that differentially methylated CpG sites are located in genes associated with functions that are essential for gene transcription, cell adhesion, cell migration, and regulation of signal transduction pathways. Taking into account the magnitude of the difference, statistical significance, location, and consistency across the majority of matched case-control pairs, we selected 11 CpG loci corresponding to 10 genes for further validation by pyrosequencing. We established that methylation of CpG sites within 5 out of those 10 genes distinguishes cirrhotic patients who subsequently developed HCC from those who stayed cancer-free (cirrhotic controls), demonstrating potential as biomarkers of early detection in populations at risk. The best predictive value was detected for CpGs located within BARD1 (AUC=0.70, asymptotic significance ˂0.01). Using an additive logistic regression model, we further showed that 9 CpG loci within those 5 genes, which were covered in the pyrosequenced probes, constitute a panel with high diagnostic accuracy (AUC=0.887; 95% CI: 0.80-0.98). The panel was able to distinguish pre-diagnostic cases from cirrhotic controls free of cancer with 88% sensitivity at 70% specificity. Using blood as a minimally invasive material and pyrosequencing as a straightforward quantitative method, the established biomarker panel has high potential to be developed into a routine clinical test after validation in larger cohorts. This study was supported by the Showalter Trust, the American Cancer Society (IRG#14-190-56), and the Purdue Center for Cancer Research (P30 CA023168), granted to BS. Keywords: biomarker, DNA methylation, early detection, hepatocellular carcinoma
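The panel's diagnostic accuracy above is summarized by the AUC. As an illustration of how an AUC is computed for such a panel (using the rank-sum identity AUC = P(case score > control score); the scores below are hypothetical, not the study's methylation data):

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: the probability that a randomly chosen case scores
    higher than a randomly chosen control, with ties counted half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    cases = scores[labels == 1]
    controls = scores[labels == 0]
    wins = 0.0
    for x in cases:
        wins += np.sum(x > controls) + 0.5 * np.sum(x == controls)
    return wins / (len(cases) * len(controls))

# Hypothetical panel scores (e.g. a summed methylation signal across
# the panel's CpG loci); 1 = pre-diagnostic HCC case, 0 = cancer-free
# cirrhotic control
y = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.55, 0.3, 0.2, 0.35, 0.1]
auc = auc_score(y, scores)
```

An AUC of 0.5 corresponds to chance discrimination and 1.0 to perfect separation of cases from controls; the sensitivity/specificity pair reported above is one operating point on the same ROC curve.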
Procedia PDF Downloads 303
15 Solid State Fermentation: A Technological Alternative for Enriching Bioavailability of Underutilized Crops
Authors: Vipin Bhandari, Anupama Singh, Kopal Gupta
Abstract:
Solid state fermentation (SSF), an eminent bioconversion technique for converting many biological substrates into value-added products, has a proven role in the biotransformation of crops by nutritionally enriching them. Hence, an effort was made toward the nutritional enhancement of a composite flour based on the underutilized crops barnyard millet, amaranthus and horsegram, using SSF as the principal bioconversion technique. The grains were given pre-treatments before fermentation. Response surface methodology was used to design the experiments. The variables selected for the fermentation experiments were substrate particle size, substrate blend ratio, fermentation time, fermentation temperature and moisture content, each at three levels. Seventeen designed experiments were conducted in random order to find the effect of these variables on microbial count, reducing sugar, pH, total sugar, phytic acid and water absorption index. The data from all experiments were analyzed using Design Expert 8.0.6; the response functions were developed using multiple regression analysis, and second-order models were fitted for each response. Results revealed that the pre-treatments proved quite effective in diminishing the level of antinutrients and thus appreciably enhancing the nutritional value of the grains; for instance, there was about a 23% reduction in phytic acid levels after decortication of barnyard millet. The carbohydrate content of the decorticated barnyard millet increased from an initial value of 65.2% to 81.5%.
Similarly, popping and puffing of horsegram and amaranthus, respectively, greatly reduced the trypsin inhibitor activity. Puffing of amaranthus also reduced the tannin content appreciably. Bacillus subtilis was used as the inoculating species, since it is known to produce phytases in solid state fermentation systems. These phytases remarkably reduce the phytic acid content, which acts as a major antinutritional factor in food grains. Results of the solid state fermentation experiments revealed that phytic acid levels were reduced appreciably when fermentation was allowed to continue for 72 hours at a temperature of 35°C. Particle size and substrate blend ratio also affected the responses positively. All the parameters, viz. substrate particle size, substrate blend ratio, fermentation time, fermentation temperature and moisture content, affected the responses, namely microbial count, reducing sugar, pH, total sugar, phytic acid and water absorption index, but the effect of fermentation time was found to be the most significant on all the responses. Statistical analysis yielded the optimum conditions (particle size 355 µm, substrate blend ratio 50:20:30 of barnyard millet, amaranthus and horsegram, respectively, fermentation time 68 hrs, fermentation temperature 35°C and moisture content 47%) for maximum reduction in phytic acid. The model F-value was found to be highly significant at the 1% level of significance for all the responses. Hence, a second-order model could be fitted to predict all the dependent parameters. Keywords: composite flour, solid state fermentation, underutilized crops, cereals, fermentation technology, food processing
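The second-order response-surface models fitted here have the general quadratic form y = b0 + Σbi·xi + Σbii·xi² + Σbij·xi·xj. As an illustration of how such a model is fitted by ordinary least squares (a two-factor sketch with synthetic coded data, whereas the study used five factors and Design Expert):

```python
import numpy as np

def fit_second_order(X, y):
    """Least-squares fit of a two-factor second-order (quadratic)
    response-surface model:
      y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    Returns the coefficient vector [b0, b1, b2, b11, b22, b12]."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2,
                         x1 ** 2, x2 ** 2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Hypothetical coded levels (-1, 0, +1) for, say, fermentation time and
# temperature, with phytic-acid reduction as the synthetic response
X = np.array([[-1, -1], [-1, 0], [-1, 1],
              [0, -1],  [0, 0],  [0, 1],
              [1, -1],  [1, 0],  [1, 1]], dtype=float)
true_surface = lambda x1, x2: (10 + 3 * x1 + 2 * x2
                               - 1.5 * x1 ** 2 - x2 ** 2 + 0.5 * x1 * x2)
y = true_surface(X[:, 0], X[:, 1])
b = fit_second_order(X, y)  # recovers the generating coefficients
```

The fitted coefficients can then be differentiated (or searched numerically) to locate the stationary point, which is how optimum conditions such as those reported above are obtained from a response surface.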
Procedia PDF Downloads 326
14 Employee Engagement
Authors: Jai Bakliya, Palak Dhamecha
Abstract:
Today, customer satisfaction is given the utmost priority in every industry, and this applies even more in the hospitality industry, where employees come into direct contact with customers while providing services. Employee engagement is a concept adopted by Human Resource departments that impacts customer satisfaction. To satisfy customers, it is necessary that the employees of the organisation are satisfied and engaged enough in their work to meet the company’s expectations and contribute to achieving the company’s goals and objectives; after all, employees are the human capital of the organisation. Employee engagement has become a top business priority for every organisation. In this fast-moving economy, business leaders know that having a capable, high-performing human resource is important for growth and survival. They recognize that a highly engaged workforce can increase innovation, productivity, and performance, while reducing costs related to retention and hiring in highly competitive talent markets. But while most executives see a clear need to improve employee engagement, many have yet to develop tangible ways to measure and tackle this goal. Employee engagement is an approach applied to establish an emotional connection between an employee and the organisation, which ensures the employee’s commitment towards their work and affects the productivity and overall performance of the organisation. The study was conducted in the hospitality industry, with a popular branded hotel chosen as the sample unit. Both qualitative and quantitative data were collected from respondents. It was found that the employee engagement level of the organisation (hotel) is quite low. 
This means that employees are not emotionally connected with the organisation, which may in turn affect their performance. It is important to note that in the hospitality industry, individual employee performance, specifically in terms of emotional engagement, is critical; therefore, a low engagement level may contribute to low organisational performance. This study attempted to identify the employee engagement level; further objectives were to explore the factors impeding employee engagement and the ways engagement can be facilitated. In the hospitality industry, where people tend to work for as long as 16 to 18 hours, concepts like employee engagement are essential, because employees tire of their routine jobs, and where job rotation cannot be done, employee engagement acts as a solution. The study was conducted at the Trident Hotel, Udaipur, on a sample of 30 in-house employees from 6 different departments: Accounts and General, Front Office, Food & Beverage Service, Housekeeping, Food & Beverage Production, and Engineering. The research instrument was a questionnaire, and the data collection source was primary. Trident Udaipur is one of the busiest hotels in Udaipur, with a guest occupancy rate of nearly 80%. Due to the high occupancy rate, the staff of the hotel remain very busy and occupied in their work all the time, working for their remuneration only. As a result, they have no encouragement in their work, nor are they interested in going the extra mile for the organisation. The study results show that working environment factors, including recognition and appreciation, employee opinions, counselling, feedback from superiors, treatment by managers, and respect from the organisation, are capable of increasing the employee engagement level in the hotel. 
The above results encouraged us to explore the factors contributing to low employee engagement. It was found that recognition and appreciation, feedback from supervisors, employee opinions, counselling, and treatment by managers contributed negatively to the employee engagement level; a probable reason is that many employees gave negative feedback on these factors, suggesting that the structure of the organisation itself is responsible for the low employee engagement. The scope of this study is limited to the Trident Hotel in Udaipur, and its limitation is that the findings are based only on the responses of respondents at Trident, Udaipur; the recommendations are therefore applicable to Trident, Udaipur, and not to all similar organisations across the country. The data collected were further analysed, interpreted, and concluded upon, and on the basis of the findings, suggestions were provided to the hotel for improvement. Keywords: human resource, employee engagement, research, study
Procedia PDF Downloads 306
13 Revolutionizing Oil Palm Replanting: Geospatial Terrace Design for High-precision Ground Implementation Compared to Conventional Methods
Authors: Nursuhaili Najwa Masrol, Nur Hafizah Mohammed, Nur Nadhirah Rusyda Rosnan, Vijaya Subramaniam, Sim Choon Cheak
Abstract:
Replanting in oil palm cultivation is vital to enable the introduction of new planting materials and provides an opportunity to improve the road, drainage, terrace design, and planting density. Oil palm replanting is fundamentally necessary every 25 years. The adoption of a digital replanting blueprint is imperative, as it can assist the Malaysian oil palm industry in addressing challenges such as labour shortages and limited expertise related to replanting tasks. Effective replanting planning should commence at least 6 months prior to the actual replanting process; this study therefore helps to plan and design the replanting blueprint with high-precision translation on the ground. With the advancement of geospatial technology, it is now feasible to engage in thoroughly researched planning, which can help maximize the potential yield. A blueprint designed before replanting enhances management’s ability to optimize the planting program, address manpower issues, and even increase productivity. In terrace planting blueprints, geographic tools have been utilized to design the roads, drainages, terraces, and planting points based on the ARM standards. These designs are mapped with location information and undergo statistical analysis. The geospatial approach is essential in precision agriculture and in ensuring an accurate translation of the design to the ground by implementing high-accuracy technologies. In this study, geospatial and remote sensing technologies played a vital role: LiDAR data was employed to derive the Digital Elevation Model (DEM), enabling the precise selection of terraces, while ortho imagery was used for validation purposes. Throughout the design process, Geographical Information System (GIS) tools were extensively utilized. 
To assess the design’s reliability on the ground compared with the current conventional method, high-precision GPS instruments such as the EOS Arrow Gold and HIPER VR GNSS were used, both offering accuracy levels between 0.3 cm and 0.5 cm. A nearest distance analysis was generated to compare the design with actual planting on the ground. The analysis could not be applied to the roads, due to discrepancies between the actual roads and the blueprint design, which resulted in minimal variance. In contrast, the terraces closely adhered to the GPS markings, with the maximum variance distance being less than 0.5 metres compared to the actual terraces constructed. Considering the required slope for terrace planting, which must be greater than 6 degrees, the study found that approximately 65% of the terracing was constructed at a 12-degree slope, while over 50% of the terracing was constructed at slopes exceeding the minimum. Blueprint replanting is a promising strategy for optimizing land utilization in agriculture; it harnesses technology and meticulous planning to yield advantages including increased efficiency, enhanced sustainability, and cost reduction. Practical implementation of this technique can lead to tangible and significant improvements in the agricultural sector. To boost efficiency further, future initiatives will require more sophisticated techniques and the incorporation of precision GPS devices for upcoming blueprint replanting projects, with strategic progression aiming to guarantee the precision of both the blueprint design stage and its subsequent implementation in the field. Looking ahead, automating digital blueprints is necessary to reduce time, workforce, and costs in commercial production. Keywords: replanting, geospatial, precision agriculture, blueprint
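The nearest distance analysis above can be sketched as follows: for each as-built GPS point, find the closest designed point and report the separation. This Python fragment uses hypothetical projected coordinates in metres, not the study's survey data.

```python
import math

def nearest_distances(designed, as_built):
    """For each as-built point, the distance to its nearest designed point."""
    return [min(math.dist(p, q) for q in designed) for p in as_built]

# Hypothetical planned planting points and GPS-surveyed as-built positions (m).
designed = [(0.0, 0.0), (9.0, 0.0), (18.0, 0.0)]
as_built = [(0.3, 0.1), (9.4, -0.2), (18.1, 0.3)]

devs = nearest_distances(designed, as_built)
max_dev = max(devs)   # compared against a tolerance such as the ~0.5 m reported
```

For large point sets, a spatial index (e.g., a k-d tree) would replace the brute-force inner loop, but the comparison logic is the same.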
Procedia PDF Downloads 80
12 Hybrid GNN Based Machine Learning Forecasting Model For Industrial IoT Applications
Authors: Atish Bagchi, Siva Chandrasekaran
Abstract:
Background: According to World Bank national accounts data, the estimated global manufacturing value-added output in 2020 was 13.74 trillion USD. These manufacturing processes are monitored, modelled, and controlled by advanced, real-time, computer-based systems, e.g., Industrial IoT, PLC, SCADA, etc. These systems measure and manipulate a set of physical variables, e.g., temperature, pressure, etc. Despite the use of IoT, SCADA, etc., in manufacturing, studies suggest that unplanned downtime leads to economic losses of approximately 864 billion USD each year. Therefore, real-time, accurate detection, classification and prediction of machine behaviour are needed to minimise financial losses. Although vast literature exists on time-series data processing using machine learning, the challenges faced by the industries that lead to unplanned downtimes are: the current algorithms do not efficiently handle the high-volume streaming data from industrial IoT sensors and were tested on static and simulated datasets; while the existing algorithms can detect significant 'point' outliers, most do not handle contextual outliers (e.g., values within the normal range but occurring at an unexpected time of day) or subtle changes in machine behaviour; and machines are revamped periodically as part of planned maintenance programmes, which changes the assumptions on which the original AI models were created and trained. Aim: This research study aims to deliver a Graph Neural Network (GNN) based hybrid forecasting model that interfaces with the real-time machine control system and can detect and predict machine behaviour and behavioural changes (anomalies) in real time. This research will help manufacturing industries and utilities, e.g., water, electricity, etc., reduce unplanned downtimes and the consequential financial losses. 
Method: The data stored within a process control system, e.g., Industrial IoT, Data Historian, is generally sampled during data acquisition from the sensor (source) and again when persisting in the Data Historian, to optimise storage and query performance. The sampling may inadvertently discard values that contain subtle aspects of behavioural changes in machines. This research proposes a hybrid forecasting and classification model which combines the expressive and extrapolation capability of a GNN, enhanced with estimates of entropy and spectral changes in the sampled data and additional temporal contexts, to reconstruct the likely temporal trajectory of machine behavioural changes. The proposed real-time model belongs to the deep learning category of machine learning and interfaces with the sensors directly or through a 'Process Data Historian', SCADA, etc., to perform forecasting and classification tasks. Results: The model was interfaced with a Data Historian holding time-series data from 4 flow sensors within a water treatment plant for 45 days. The recorded sampling interval for a sensor varied from 10 sec to 30 min. Approximately 65% of the available data was used for training the model, 20% for validation, and the rest for testing. The model identified the anomalies within the water treatment plant and predicted the plant's performance. These results were compared with the data reported by the plant's SCADA-Historian system and the official data reported by the plant authorities. The model's accuracy was much higher (by 20%) than that reported by the SCADA-Historian system and matched the validated results declared by the plant auditors. Conclusions: The research demonstrates that a hybrid GNN-based approach enhanced with entropy calculation and spectral information can effectively detect and predict a machine's behavioural changes. 
The model can interface with a plant's process control system in real time to perform forecasting and classification tasks, aiding asset management engineers in operating their machines more efficiently and reducing unplanned downtimes. A series of trials is planned for this model in other manufacturing industries. Keywords: GNN, entropy, anomaly detection, industrial time-series, AI, IoT, Industry 4.0, machine learning
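The entropy and spectral estimates that augment the GNN can be illustrated with a short sketch: a windowed Shannon entropy of the sensor's value distribution and a dominant-frequency summary from the FFT. This is an assumed, minimal rendering of those two auxiliary signals on a synthetic stream, not the paper's implementation.

```python
import numpy as np

def window_entropy(x, bins=16):
    """Shannon entropy (nats) of the value distribution in one window."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return float(-(p * np.log(p)).sum())

def dominant_frequency(x, fs):
    """Frequency (Hz) of the largest non-DC FFT component of the window."""
    spec = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(freqs[1:][np.argmax(spec[1:])])

fs = 1.0                                    # 1 sample/sec (hypothetical rate)
t = np.arange(256) / fs
steady = np.sin(2 * np.pi * 0.05 * t)       # regular machine cycle at 0.05 Hz
noisy = steady + np.random.default_rng(1).normal(0, 0.8, t.size)  # degraded behaviour

# Entropy is zero for a constant signal and grows as values spread out;
# the dominant frequency tracks the machine's cycle in each window.
features = (window_entropy(noisy), dominant_frequency(steady, fs))
```

In a streaming setting these per-window features would be appended to each node's input vector before the GNN forecasting step, giving the model context that plain sampled values may have lost.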
Procedia PDF Downloads 149
11 Circular Tool and Dynamic Approach to Grow the Entrepreneurship of Macroeconomic Metabolism
Authors: Maria Areias, Diogo Simões, Ana Figueiredo, Anishur Rahman, Filipa Figueiredo, João Nunes
Abstract:
It is expected that close to 7 billion people will live in urban areas by 2050. In order to improve the sustainability of territories and their transition towards a circular economy, it is necessary to understand their metabolism and to promote and guide the entrepreneurial response. The study of a macroeconomic metabolism involves the quantification of the inputs, outputs and storage of energy, water, materials and wastes for an urban region; this quantification and analysis represent an opportunity for the promotion of green entrepreneurship. There are several methods to assess the environmental impacts of an urban territory, such as human and environmental risk assessment (HERA), life cycle assessment (LCA), ecological footprint assessment (EF), material flow analysis (MFA), physical input-output tables (PIOT), ecological network analysis (ENA), and multicriteria decision analysis (MCDA), among others. However, no consensus exists about which of those assessment methods is best for analyzing the sustainability of these complex systems. Taking into account the weaknesses and needs identified, the CiiM - Circular Innovation Inter-Municipality project aims to define a uniform and globally accepted methodology through the integration of various methodologies and dynamic approaches, to increase the efficiency of macroeconomic metabolisms and promote entrepreneurship in a circular economy. The pilot territory considered in the CiiM project has a total area of 969,428 ha and a total of 897,256 inhabitants (about 41% of the population of the Center Region). The main economic activities in the pilot territory, which contribute to a gross domestic product of 14.4 billion euros, are: social support activities for the elderly; construction of buildings; road transport of goods; retailing in supermarkets and hypermarkets; mass production of other garments; inpatient health facilities; and the manufacture of other components and accessories for motor vehicles. 
The region's business network consists mostly of micro and small companies (similar to the Central Region of Portugal as a whole), with a total of 53,708 companies identified in the CIM Region of Coimbra (39 large companies), 28,146 in the CIM Viseu Dão Lafões (22 large companies) and 24,953 in the CIM Beiras and Serra da Estrela (13 large companies). The database was constructed from data available at the National Institute of Statistics (INE), the General Directorate of Energy and Geology (DGEG), Eurostat, Pordata, the Strategy and Planning Office (GEP), the Portuguese Environment Agency (APA), the Commission for Coordination and Regional Development (CCDR) and the Inter-municipal Communities (CIM), as well as dedicated databases. In addition to the collection of statistical data, it was necessary to identify and characterize the different stakeholder groups in the pilot territory that are relevant to the different metabolism components under analysis. The CiiM project also adds the potential of a Geographic Information System (GIS), so that it is possible to obtain geospatial results for the territorial metabolisms (rural and urban) of the pilot region. This platform will be a powerful tool for visualizing the flows of products and services that occur within the region and will support the stakeholders, improving their circular performance and identifying new business ideas and symbiotic partnerships. Keywords: circular economy tools, life cycle assessment, macroeconomic metabolism, multicriteria decision analysis, decision support tools, circular entrepreneurship, industrial and regional symbiosis
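The material flow analysis (MFA) mentioned among the assessment methods rests on a simple mass-balance identity: inputs equal outputs plus net additions to stock. The fragment below is a minimal sketch of that accounting check with invented figures, not CiiM data.

```python
def mfa_balance(inputs, outputs, stock_change):
    """Balancing residual (t/yr) of a flow account; ~0 means it closes."""
    return sum(inputs.values()) - sum(outputs.values()) - stock_change

# Hypothetical territorial material account, tonnes per year.
materials = {"imports": 120.0, "domestic_extraction": 80.0}
uses = {"exports": 60.0, "emissions_and_waste": 90.0}
net_additions_to_stock = 50.0   # e.g., materials accumulating in buildings

residual = mfa_balance(materials, uses, net_additions_to_stock)
```

In practice one such balance is kept per material category and per municipality, and a non-zero residual flags missing or inconsistent statistical data before the flows are mapped in the GIS platform.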
Procedia PDF Downloads 98
10 Case Report: Ocular Helminth - In Unusual Site (Lens)
Authors: Chandra Shekhar Majumder, Md. Shamsul Haque, Khondaker Anower Hossain, Md. Rafiqul Islam
Abstract:
Introduction: Ocular helminths are parasites that infect the eye or its adnexa. They can be either motile worms or sessile worms that form cysts. These parasites require two hosts for their life cycle: a definitive host (usually a human) and an intermediate host (usually an insect). While there have been reports of ocular helminths infecting various structures of the eye, including the anterior chamber and the subconjunctival space, there is no previous record of such a case involving the lens. Research Aim: The aim of this case report is to present a rare case of ocular helminth infection in the lens and to contribute to the understanding of this unusual site of infection. Methodology: This study is a case report presenting the details and findings of an 80-year-old retired policeman who presented with severe pain, redness, and vision loss in the left eye. The patient had a history of diabetes mellitus and hypertension. Examination revealed the presence of a thread-like helminth in the lens. The patient underwent treatment and follow-up, and the helminth specimen was sent to the department of Parasitology for identification. Case Report: An 80-year-old retired policeman attended the OPD, Faridpur Medical College Hospital, with complaints of severe pain, redness and gross dimness of vision of the left eye for 5 days. He had a history of diabetes mellitus and hypertension for 3 years. On examination of the left eye, visual acuity was PL only, with moderate ciliary congestion, KP 2+, cells 2+ and posterior synechia from the 5 to 7 o'clock position. The lens was opaque, and a thread-like helminth was found under the anterior part of the lens; the worm was moving and changing its position during the examination. On examination of the right eye, visual acuity was 6/36 unaided, 6/18 with pinhole; there was lenticular opacity, and slit-lamp and fundus examinations were within normal limits. The patient was admitted to Faridpur Medical College Hospital, and his diabetes mellitus was controlled with insulin. 
ICCE with PI was done on the same day of admission under depomedrol coverage, and the helminth was recovered from the lens. It was thread-like, about 5 to 6 mm in length and 1 mm in width, and pinkish in colour. At follow-up after 7 days, visual acuity was HM, with mild ciliary congestion and a few KPs and cells present; the media was hazy due to vitreous opacity. The worm was sent to the department of Parasitology, NIPSOM, Dhaka, for identification. Findings: The findings of this case report highlight the presence of a helminth in the lens, which has not been previously reported. The helminth was successfully removed from the lens, but the patient experienced complications such as anterior uveitis and vitreous opacity. The exact mechanism by which the helminth enters the lens remains unclear. Theoretical Importance: This case report contributes to the existing literature on ocular helminth infections by reporting a unique case involving the lens. It highlights the need for further research to understand the pathogenesis and mechanism of entry of helminths into the lens. Data Collection and Analysis Procedures: The data for this case report were collected through clinical examination and the medical records of the patient. The findings are presented in a descriptive manner; no statistical analysis was conducted. Question Addressed: This case report addresses the question of whether ocular helminth infections can occur in the lens, which has not been previously reported. Conclusion: To the best of our knowledge, this is the first reported case of ocular helminth infection in the lens. The presence of the helminth in the lens raises interesting questions regarding its pathogenesis and entry mechanism, and further study and research are needed to explore these aspects. Ophthalmologists and parasitologists should be aware of the possibility of ocular helminth infections in unusual sites like the lens. Keywords: ocular, helminth, unusual site, lens
Procedia PDF Downloads 63