Search results for: user generated content
1934 War Heritage: Different Perceptions of the Dominant Discourse among Visitors to the “Adem Jashari” Memorial Complex in Prekaz
Authors: Zana Llonçari Osmani, Nita Llonçari
Abstract:
In Kosovo, public rhetoric and popular sentiment position the War of 1998-99 (the war) as central to the formation of contemporary Kosovo's national identity. This period was marked by the forced massive displacement of Kosovo Albanians, the destruction of entire settlements, the loss of family members, and the profound emotional trauma experienced by civilians, particularly those who actively participated in the war as members of the Kosovo Liberation Army (KLA). Amidst these profound experiences, the Prekaz Massacre (The Massacre) is widely regarded as the defining event that preceded the final struggles of 1999 and the long-awaited attainment of independence. This study aims to explore how different visitors perceive the dominant discourse at The Memorial, a site dedicated to commemorating the Prekaz Massacre, and to identify the factors that influence their perceptions. The research employs a comprehensive mixed-method approach, combining online surveys, critical discourse analysis of visitor impressions, and content analysis of media representations. The findings of the study highlight the significant role played by original material remains in shaping visitor perceptions of The Memorial in comparison to the curated symbols and figurative representations interspersed throughout the landscape. While the design elements and physical layout of the memorial undeniably hold significance in conveying the memoryscape, there are notable shortcomings in enhancing the overall visitor experience. Visitors are still primarily influenced by the tangible remnants of the war, suggesting that there is room for improvement in how design elements can more effectively contribute to the memorial's narrative and the collective memory of the Prekaz Massacre.
Keywords: critical discourse analysis, memorialisation, national discourse, public rhetoric, war tourism
Procedia PDF Downloads 91
1933 Monitoring of Indoor Air Quality in Museums
Authors: Olympia Nisiforou
Abstract:
The cultural heritage of each country represents a unique and irreplaceable witness of the past. Nevertheless, on many occasions, such heritage is extremely vulnerable to natural disasters and reckless behaviors. Even if such exhibits are now located in museums, they still receive insufficient protection due to improper environmental conditions. These external changes can negatively affect the condition of the exhibits and contribute to inefficient maintenance over time. Hence, it is imperative to develop an innovative, low-cost system to monitor indoor air quality systematically, since conventional methods are quite expensive and time-consuming. The present study gives an insight into the indoor air quality of the National Byzantine Museum of Cyprus. In particular, systematic measurements of particulate matter, bio-aerosols, the concentration of targeted chemical pollutants (including volatile organic compounds (VOCs)), temperature, relative humidity, and lighting conditions, as well as microbial counts, have been performed using conventional techniques. Measurements showed that most of the monitored physicochemical parameters did not vary significantly within the various sampling locations. Seasonal fluctuations of ammonia were observed, showing higher concentrations in the summer and lower in winter. It was found that the outdoor environment does not significantly affect indoor air quality in terms of VOCs and nitrogen oxides (NOx). A cutting-edge portable Gas Chromatography-Mass Spectrometry (GC-MS) system (TORION T-9) was used to identify and measure the concentrations of specific volatile and semi-volatile organic compounds. A large number of different VOCs and SVOCs were found, such as benzene, toluene, xylene, ethanol, hexadecane, and acetic acid, as well as some more complex compounds such as 3-ethyl-2,4-dimethyl-isopropyl alcohol, 4,4'-biphenylene-bis-(3-aminobenzoate) and trifluoro-2,2-dimethylpropyl ester. Apart from the permanent indoor/outdoor sources (i.e., wooden frames, painted exhibits, carpets, ventilation system and outdoor air) of the above organic compounds, the concentrations of some of them within the areas of the museum were found to increase when large groups of visitors were simultaneously present at a specific place within the museum. A high presence of particulate matter (PM), fungi and bacteria was found in the museum’s areas where carpets were present, but low colony counts were found in rooms where artworks are exhibited. The measurements mentioned above were used to validate an innovative low-cost air-quality monitoring system that has been developed within the present work. The developed system is able to monitor the average concentrations (on a bidaily basis) of several pollutants and presents several innovative features, including prompt alerting in case of increased average concentrations of monitored pollutants, i.e., exceeding the limit values defined by the user.
Keywords: exhibitions, indoor air quality, VOCs, pollution
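A minimal sketch of the alerting logic described above: the window-averaged concentration of each monitored pollutant is compared against a user-defined limit value. The function name, data layout, and the example readings and limits below are illustrative assumptions, not details of the system developed in the study.

```python
# Sketch of limit-exceedance alerting for an indoor air-quality monitor.
from statistics import mean

def check_limits(readings, limits):
    """readings: {pollutant: [values over the averaging window]}
    limits: {pollutant: user-defined limit value}
    Returns (pollutant, average) pairs whose average exceeds the limit."""
    alerts = []
    for pollutant, values in readings.items():
        avg = mean(values)
        if pollutant in limits and avg > limits[pollutant]:
            alerts.append((pollutant, avg))
    return alerts

# Example usage with made-up numbers (units depend on the pollutant).
readings = {"VOC": [410, 455, 430], "NOx": [0.04, 0.05, 0.06]}
limits = {"VOC": 400, "NOx": 0.10}
print(check_limits(readings, limits))  # -> [('VOC', 431.66...)]
```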
Procedia PDF Downloads 126
1932 The Influence of Market Attractiveness and Core Competence on Value Creation Strategy and Competitive Advantage and Its Implication on Business Performance
Authors: Firsan Nova
Abstract:
The average Indonesian watches 5.5 hours of TV a day. With a population of 242 million people and a Free-to-Air (FTA) TV penetration rate of 56%, that equates to 745 million hours of television watched each day. With such potential, it is no wonder that many companies are now attempting to get into the Pay TV market. Research firm Media Partner Asia has forecast in its study that the number of Indonesian pay-television subscribers will climb from 2.4 million in 2012 to 8.7 million by 2020, with penetration scaling up from 7 percent to 21 percent. Key drivers of market growth, the study says, include macro trends built around higher disposable income and a rising middle class, with leading players continuing to invest significantly in sales, distribution and content. New entrants, in the meantime, will boost overall prospects. This study aims to examine and analyze the effect of Market Attractiveness and Core Competence on Value Creation and Competitive Advantage and their impact on Business Performance in the pay TV industry in Indonesia. The study uses a strategic management science approach with the census method, in which all members of the population serve as the sample. A verification method is used to examine the relationship between variables. The unit of analysis in this research is all Indonesian Pay TV business units, totaling 19 business units. The units of observation are the directors and managers of each business unit. Hypothesis testing is performed using Partial Least Squares (PLS) statistics. The conclusion of the study shows that market attractiveness affects business performance through value creation and competitive advantage. The appropriate value creation comes from the company's ability to optimize its core competence and exploit market attractiveness. Value creation affects competitive advantage. The competitive advantage can be determined based on the company's ability to create value for customers, and the competitive advantage has an impact on business performance.
Keywords: market attractiveness, core competence, value creation, competitive advantage, business performance
Procedia PDF Downloads 352
1931 Fibromyalgia and Personality: A Review of the Different Personality Types Identified
Authors: Lize Tibiriçá, Ronnie Lee, Samantha Behbahani
Abstract:
Fibromyalgia (FM) is a musculoskeletal disorder affecting men and women of different ages and cultures. The cause of this disorder is unknown; however, studies suggest an etiology that involves biological and psychosocial factors. A few studies have shown that a personality type such as neuroticism is associated with chronic pain conditions. Past research has explored whether patients with FM present with a specific personality trait. However, studies have used different methods (e.g., Minnesota Multiphasic Personality Inventory (MMPI), Sociotropy and Autonomy Scale (SAS) and Dysfunctional Attitude Scale (DAS), Tridimensional Personality Questionnaire or Temperament and Character Inventory (TCI), Karolinska scale of personality, Big Five Inventory or NEO Personality Inventory) to explore the connection between FM and a personality type. They have identified personality types that present similar characteristics but vary in name (e.g., high harm avoidance and low novelty seeking, psychasthenia/muscular tension/somatic anxiety, neuroticism). Although the Zuckerman-Kuhlman Personality Questionnaire and the Big Five Inventory differ in terms of content and structure, both of them identify neuroticism as the personality type of FM patients, and the former also identifies these patients as having a low-sociability personality trait. Previous research also shows a trend of sociotropic personality style among FM patients who also suffer from Major Depressive Disorder. Participants in these studies were, for the most part, adult females, and researchers have recognized this as a limitation, questioning whether their findings can be generalized to men and younger patients with FM. Furthermore, most studies reviewed were conducted in Europe (e.g., Spain) and had a cross-sectional design. Future research should replicate past studies in different countries and consider conducting a longitudinal study. Although it is suspected that the course of FM is modulated by FM patients’ personality, it is not known whether individuals with similar personalities will develop FM. This review sought to explain the differences and similarities between the personality types identified. Limitations in the studies reviewed were addressed, and considerations for future research and treatment were discussed.
Keywords: chronic pain, fibromyalgia, neuroticism, personality type
Procedia PDF Downloads 328
1930 Effect of Hormones Priming on Enzyme Activity and Lipid Peroxidation in Wheat Seed under Accelerated Aging
Authors: Amin Abbasi, Fariborz Shekari, Seyed Bahman Mousavi
Abstract:
Seed aging during storage is a complex biochemical and physiological process that can lead to reduced seed germination. This phenomenon is associated with an increase in total antioxidant activity during aging. To study the effects of hormones on seed aging, aged wheat seeds (control, 90 and 80% viability) were treated with GA3, salicylic acid, and paclobutrazol, and the antioxidant system was investigated as a molecular biomarker for seed vigor. The results showed that seed priming treatment significantly affected germination percentage, normal seedling percentage, H2O2, MDA, and CAT, APX, and GPX activities. The maximum germination percentage was achieved with GA3 priming in the control treatment. Germination percentage and normal seedling percentage increased in the other GA3 priming treatments compared with the other hormones. Aging also increased MDA and H2O2 content. MDA is considered a sensitive marker commonly used for assessing membrane lipid peroxidation, and H2O2 results in toxicity to the cellular membrane system and damage to plant cells. The amounts of H2O2 and MDA declined in the GA3 treatment. CAT, GPX and APX activities were reduced by increasing the aging time and at different levels of priming. The highest APX activity was observed in the salicylic acid control treatment, and the highest GPX and CAT activities were obtained in the GA3 control treatment. The lowest MDA and H2O2 were also observed in the GA3 control treatment. Hormone priming increased antioxidant enzyme activity and decreased the amounts of reactive oxygen species and malondialdehyde (MDA) under the aging treatment. GA3 priming treatments also had a significant effect on germination percentage and the number of normal seedlings. In general, seed aging increases ROS and lipid peroxidation, and the antioxidant enzyme activity of aged seeds increased after hormone priming.
Keywords: hormone priming, wheat, seed aging, antioxidant, lipid peroxidation
Procedia PDF Downloads 500
1929 Cardiothoracic Ratio in Postmortem Computed Tomography: A Tool for the Diagnosis of Cardiomegaly
Authors: Alex Eldo Simon, Abhishek Yadav
Abstract:
This study aimed to evaluate the utility of postmortem computed tomography (CT) and heart weight measurements in the assessment of cardiomegaly in cases of sudden death due to cardiac origin by comparing the results of these two diagnostic methods. The study retrospectively analyzed postmortem computed tomography (PMCT) data from 54 cases of sudden natural death and compared the findings with those of the autopsy. The study involved measuring the cardiothoracic ratio (CTR) from coronal computed tomography (CT) images and determining the actual cardiac weight by weighing the heart during the autopsy. The inclusion criteria for the study were cases of sudden death suspected to be caused by cardiac pathology, while exclusion criteria included death due to unnatural causes such as trauma or poisoning, diagnosed natural causes of death related to organs other than the heart, and cases of decomposition. Sensitivity, specificity, and diagnostic accuracy were calculated, and to evaluate the accuracy of using the cardiothoracic ratio (CTR) to detect an enlarged heart, the study generated receiver operating characteristic (ROC) curves. The cardiothoracic ratio (CTR) is a radiological tool used to assess cardiomegaly by measuring the maximum cardiac diameter in relation to the maximum transverse diameter of the chest wall. The clinically used criterion for CTR has been modified from 0.50 to 0.57 for use in postmortem settings, where abnormalities can be detected by comparing CTR values to this threshold. A CTR value of 0.57 or higher is suggestive of hypertrophy but not conclusive. Similarly, heart weight is measured during the traditional autopsy, and a cardiac weight greater than 450 grams is defined as hypertrophy. Of the 54 cases evaluated, 22 (40.7%) had a cardiothoracic ratio (CTR) ranging from >0.50 to 0.57, and 12 cases (22.2%) had a CTR greater than 0.57, which was defined as hypertrophy. The mean CTR was calculated as 0.52 ± 0.06. Among the 54 cases evaluated, the weight of the heart was measured, and the mean was calculated as 369.4 ± 99.9 grams. Out of the 54 cases evaluated, 12 were found to have hypertrophy as defined by PMCT, while only 9 cases were identified with hypertrophy in traditional autopsy. The sensitivity of the hypertrophy test was found to be 55.56% (95% CI: 26.66-81.12), the specificity was 84.44% (95% CI: 71.22-92.25), and the diagnostic accuracy was 79.63% (95% CI: 67.1-88.23). The limitation of the study was a low sample size of only 54 cases, which may limit the generalizability of the findings. The comparison of the cardiothoracic ratio with heart weight in this study suggests that PMCT may serve as a screening tool for medico-legal autopsies when performed by forensic pathologists. However, it should be noted that the low sensitivity of the test (55.56%) may limit its diagnostic accuracy, and therefore, further studies with larger sample sizes and more diverse populations are needed to validate these findings.
Keywords: PMCT, virtopsy, CTR, cardiothoracic ratio
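The diagnostic figures above follow directly from a 2x2 confusion matrix. A short sketch of that arithmetic is given below; the individual cell counts (true/false positives and negatives) are back-calculated from the reported totals and percentages rather than stated in the abstract, so they should be read as an illustrative reconstruction.

```python
# Confusion matrix consistent with the reported figures:
# 9 autopsy-confirmed hypertrophy cases, 12 PMCT-positive cases, n = 54.
TP, FN = 5, 4    # autopsy-positive cases split by PMCT result (inferred)
FP, TN = 7, 38   # autopsy-negative cases split by PMCT result (inferred)

sensitivity = TP / (TP + FN)                   # ~0.5556 -> 55.56 %
specificity = TN / (TN + FP)                   # ~0.8444 -> 84.44 %
accuracy = (TP + TN) / (TP + TN + FP + FN)     # ~0.7963 -> 79.63 %

print(f"Sensitivity {sensitivity:.2%}, specificity {specificity:.2%}, accuracy {accuracy:.2%}")
```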
Procedia PDF Downloads 84
1928 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity for cladding products is a key performance parameter, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but can give little information about critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem as it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, to the FEA solver in a series of iterations. In light of our recent work with Tata Steel U.K. using a two-way coupling methodology to determine the fire performance, it has been shown that a program called FDS-2-Abaqus can make predictions of a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR)-based sandwich panel used for cladding. Previous works demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K.'s PIR sandwich panels. The new developments aim to reduce the computational cost and error margin compared to experimental data. One avenue explored is a multi-scale approach in the form of Reduced Order Modeling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the new implementations' impact factors and discussing the reasonableness of scaling up further to a whole warehouse.
Keywords: fire testing, numerical coupling, sandwich panels, thermo fluids
Procedia PDF Downloads 86
1927 Effectiveness of Cold Calling on Students’ Behavior and Participation during Class Discussions: Punishment or Opportunity to Shine
Authors: Maimuna Akram, Khadija Zia, Sohaib Naseer
Abstract:
Pedagogical objectives and the nature of the course content may lead instructors to take varied approaches to selecting a student for the cold call, specifically in a studio setup where students work on different projects independently and show work in progress from time to time at scheduled critiques. Cold-calling often proves to be an effective tool in eliciting a response without enforcing judgment onto the recipients. While there is a mixed range of behavior exhibited by students who are cold-called, and a classification of responses from anxiety-provoking to inspiring may be elicited, there is a need for a greater understanding of how to utilize these exchanges to bring about fruitful and engaging outcomes of studio discussions. This study aims to unravel the dimensions of utilizing the cold-call approach in a didactic exchange within studio pedagogy. A questionnaire survey was conducted in an undergraduate class at an Arts and Design School. The impact of cold calling on students’ participation was determined through various parameters, including course choice, participation frequency, students’ comfort level, and teaching methodology. After the surveys were analyzed, specific classroom teachers were interviewed to provide a qualitative perspective from the faculty. It was concluded that cold-calling increases students’ participation frequency and also increases preparation for class. Around 67% of students responded that teaching methods play an important role in learning activities and students’ participation during class discussions. 84% of participants agreed that cold calling is an effective way of learning. According to the research, cold-calling can be done frequently without making students uncomfortable. As a result, the findings of this study support the use of this instructional method to encourage more students to participate in class discussions.
Keywords: active learning, class discussion, class participation, cold calling, pedagogical methods, student engagement
Procedia PDF Downloads 42
1926 The Mechanical and Comfort Properties of Cotton/Micro-Tencel Lawn Fabrics
Authors: Abdul Basit, Shahid Latif, Shah Mehmood
Abstract:
Lawn fabric was originally made of linen but is at present chiefly made of cotton. Lawn fabric is worn in summer. Cotton lawn is a lightweight pure cloth which is heavier than voile. It is so fine that it is somewhat transparent. It is soft and superb to wear; thus, it is perfect for summer clothes or for regular wear in hotter climates. Tencel (Lyocell) fiber is considered the fiber of the future, as Tencel fibers are absorbent, soft, extremely strong when wet or dry, and resistant to wrinkles. The fibers are more absorbent than cotton, softer than silk and cooler than linen. High water absorption and water vapor absorption give greater heat capacity and a heat-balancing effect for thermo-regulation. This thermo-regulation is analogous to the action of phase-change materials. The thermal wear properties result in a cool and dry touch that gives a cooling effect in sportswear, and warmth properties (when used as an insulation layer). These cooling and warming effects are adaptive to the environment, giving comfort in a broad range of climatic conditions. In this work, single yarns of Ne 80s were made. Yarns were made by conventional ring spinning. Different yarns of 100% cotton, 100% micro-Tencel and cotton:micro-Tencel blends (67:33, 50:50, 33:67) were made. The mechanical and comfort properties of the woven fabrics were compared. The mechanical properties include the tensile and tear strength, bending length, pilling and abrasion resistance, whereas comfort properties include the air permeability, moisture management and thermal resistance. It is found that as the content of the micro-Tencel is increased, the mechanical and comfort properties of the woven fabric are also improved.
Keywords: combed cotton, comfort properties, mechanical properties, micro-Tencel
Procedia PDF Downloads 323
1925 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words that were often incorrectly transcribed were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph for all interviews. Every proper noun was put into a data structure corresponding to that respective interview. From there, a Bidirectional and Auto-Regressive Transformers (BART) summary model was then applied to each paragraph that included any of the proper nouns selected from the interview. At this stage the information to review had been reduced from about 60 hours’ worth of data to 20. The data was further processed through light, manual observation – any summaries which proved to fit the criteria of the proposed deliverable were selected, as well as their locations within the document. This narrowed the data down to about 5 hours’ worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual finding crucial to working with qualitative data of this magnitude. In the use of artificial intelligence there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising the purity of it. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: BART model, keyword extractor, natural language processing, qualitative coding
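A minimal sketch of the pipeline outlined above (filter short paragraphs, collect proper nouns, then summarise only the paragraphs that mention them). The specific packages and models shown here – spaCy for part-of-speech tagging and a Hugging Face BART checkpoint for summarisation – are illustrative assumptions, not necessarily the tools the authors used.

```python
# Sketch: paragraph filtering + proper-noun keywording + BART summarisation.
import spacy
from transformers import pipeline

nlp = spacy.load("en_core_web_sm")
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def review_transcript(transcript_paragraphs):
    # 1. Drop banter: keep only paragraphs of at least 300 characters.
    paragraphs = [p for p in transcript_paragraphs if len(p) >= 300]
    # 2. Collect every proper noun appearing in this interview.
    proper_nouns = {tok.text for p in paragraphs for tok in nlp(p) if tok.pos_ == "PROPN"}
    # 3. Summarise only paragraphs that mention a proper noun, keeping the
    #    paragraph index so coders can locate the passage in the raw transcript.
    summaries = []
    for idx, p in enumerate(paragraphs):
        if any(name in p for name in proper_nouns):
            out = summarizer(p, max_length=60, min_length=15, do_sample=False)
            summaries.append((idx, out[0]["summary_text"]))
    return proper_nouns, summaries
```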
Procedia PDF Downloads 34
1924 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based generalized end-to-end open domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's questions. Analysis of the question, searching the relevant document, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking process using a model trained on the MS MARCO dataset of 500K queries to extract the most relevant text passage and shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to any questions that have time-varying answers. For illustration, if the query is "Where will be the next Olympics?", the gold answer for the above query as given in the GNQ dataset is “Tokyo”. Since the dataset was collected in the year 2016, and the next Olympics after 2016 were held in Tokyo in 2020, that answer was absolutely correct. But if the same question is asked in 2022, then the answer is “Paris, 2024”. Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset has been used, and the system achieved an accuracy of 78% for a test dataset comprising 100 QA pairs. This test data was automatically extracted using an analysis-based approach from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the possibility of developing into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be guided towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
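The abstract does not spell out the exact form of the time-aware metric, so the following is only a rough sketch of one way such a check could work: the gold reference is stored as a list of time-stamped answers rather than a single string, and the answer valid at evaluation time is matched against the system's top-n predictions. The data structure and scoring rule are assumptions for illustration and may differ from the paper's actual metric.

```python
# Sketch of a time-aware exact-match check for time-dependent QA pairs.
from datetime import date

def time_aware_match(predictions, timed_gold, today=None):
    """predictions: top-n predicted answer strings.
    timed_gold: list of (valid_from, valid_to, answer) tuples."""
    today = today or date.today()
    current = [ans for start, end, ans in timed_gold if start <= today <= end]
    return any(p.strip().lower() == g.strip().lower()
               for p in predictions for g in current)

# Example from the abstract: the gold answer changes over time.
gold = [(date(2016, 1, 1), date(2021, 8, 8), "Tokyo"),
        (date(2021, 8, 9), date(2024, 8, 11), "Paris")]
print(time_aware_match(["Paris", "Los Angeles"], gold, today=date(2022, 6, 1)))  # True
```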
Procedia PDF Downloads 104
1923 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation
Authors: Li-hsing Shih, Wei-Jen Hsu
Abstract:
Recently, the sale of electric vehicles (EVs) has increased dramatically due to maturing technology development and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs due to their long-term commitment to net zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local government. This study tries to forecast the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure on purchasing vehicles, while five product attributes of both EVs and internal combustion engine vehicles (ICEVs) are selected. A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate the market share, assuming each respondent will purchase the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, a respondent's two total utilities for the two vehicles are calculated, thereby determining his/her choice. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers’ choice. This study adopts the assumption of a learning curve to predict the future price of EVs. Based on the learning curve method and past price data of EVs, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents by using their part-worth utility functions. For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local government make more appropriate decisions and future action plans.
Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation
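A compact sketch of the mechanics of phase (3): sample future EV prices from a distribution, and for each draw count the respondents whose part-worth utilities make the EV their highest-utility choice. The utility parameters and the lognormal price distribution below are invented placeholders purely to show the simulation loop, not estimates from the study.

```python
# Sketch of the Monte Carlo step over an uncertain future EV price.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_respondents = 1000, 300

# Hypothetical respondent-level parameters: net utility of the EV over the
# ICEV (all non-price attributes folded in) and a price sensitivity.
base_util_ev = rng.normal(0.5, 1.0, n_respondents)
price_sensitivity = rng.uniform(0.5, 2.0, n_respondents)

# Future EV price premium over the ICEV (arbitrary units), e.g. derived from
# a learning-curve regression; a lognormal placeholder is used here.
price_premium = rng.lognormal(mean=0.0, sigma=0.4, size=n_sims)

shares = []
for premium in price_premium:
    total_util = base_util_ev - price_sensitivity * premium
    shares.append(np.mean(total_util > 0))  # respondent picks the higher-utility vehicle

print(f"EV share: mean {np.mean(shares):.2%}, "
      f"5th-95th pct {np.percentile(shares, 5):.2%}-{np.percentile(shares, 95):.2%}")
```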
Procedia PDF Downloads 76
1922 Determination of Authorship of the Works Created by the Artificial Intelligence
Authors: Vladimir Sharapaev
Abstract:
This paper seeks to address the question of the authorship of copyrighted works created solely by artificial intelligence or with the use thereof, and proposes possible interpretational or legislative solutions to the problems arising from the plurality of persons potentially involved in the ultimate creation of the work and the division of tasks among such persons. Being based on the commonly accepted assumption that a copyrighted work can only be created by a natural person, the paper does not deal with the issues regarding the creativity of artificial intelligence per se (or the lack thereof), and instead focuses on the distribution of the intellectual property rights potentially belonging to the creators of the artificial intelligence and/or the creators of the content used for the formation of the copyrighted work. Moreover, the technical development and rapid improvement of AI-based programmes, which tend to reach ever greater independence from human beings, give rise to the question of whether the initial creators of the artificial intelligence can be entitled to the intellectual property rights to the works created by such AI at all. As the juridical practice of some European courts and legal doctrine tend to incline to the latter opinion, indicating that the works created by AI may not enjoy copyright protection at all, the question of authorship appears to be causing great concern among investors in the development of the relevant technology. Although technology companies have further instruments for protecting their investments at their disposal, the risk that the works in question are not copyrighted, caused by inconsistent case law and a certain research gap, constitutes a highly important issue. In order to assess the possible interpretations, the author adopted a doctrinal and analytical approach to the research, systematically analysing the European and Czech copyright laws and case law in some EU jurisdictions. This study aims to contribute to greater legal certainty regarding the issues of the authorship of AI-created works and to define possible clues for further research.
Keywords: artificial intelligence, copyright, authorship, copyrighted work, intellectual property
Procedia PDF Downloads 126
1921 Challenges of Implementing Participatory Irrigation Management for Food Security in Semi Arid Areas of Tanzania
Authors: Pilly Joseph Kagosi
Abstract:
The study aims at assessing challenges observed during the implementation of the participatory irrigation management (PIM) approach for food security in semi-arid areas of Tanzania. Data were collected through a questionnaire, PRA tools, key informant discussions, focus group discussions (FGDs), participant observation, and literature review. Data collected from the questionnaire were analysed using SPSS, while PRA data were analysed with the help of local communities during the PRA exercise. Data from other methods were analysed using content analysis. The study revealed that the PIM approach has contributed to improved food security at the household level due to the involvement of communities in water management activities and decision-making, which enhanced the availability of water for irrigation and increased crop production. However, there were challenges observed during the implementation of the approach, including minimal participation of beneficiaries in decision-making during the planning and design stages, meaning inadequate devolution of power among scheme owners; inadequate transparency on income and expenditure in Water Utilization Associations (WUAs); water conflicts among WUA members; conflicts between farmers and livestock keepers; conflicts between WUA leaders and village governments regarding training opportunities and status; WUA rules and regulations not being legally recognized by the national court; and few farmers being involved in planting trees around water sources. However, it was realized that some of the mentioned challenges were rectified by the farmers themselves, facilitated by government officials. The study recommends that the identified challenges be rectified for farmers to realize the importance of the PIM approach, as has been realized in other Asian countries.
Keywords: challenges, participatory approach, irrigation management, food security, semi arid areas
Procedia PDF Downloads 332
1920 Designing of Multi-Epitope Peptide Vaccines for Fasciolosis (Fasciola gigantica) using Immune Epitope and Analysis Resource (IEDB) Server
Authors: Supanan Chansap, Werachon Cheukamud, Pornanan Kueakhai, Narin Changklungmoa
Abstract:
Fasciola species (Fasciola spp.) cause fasciolosis in ruminants such as cattle, sheep, and buffalo. Fasciola gigantica (F. gigantica) commonly infects tropical regions, and Fasciola hepatica (F. hepatica) temperate regions. Liver fluke infection affects livestock economically, for example, through reduced milk and meat production, weight loss, and sterile animals. Currently, triclabendazole is used to treat liver flukes. However, liver flukes have also been found to be resistant to the drug in several countries. Therefore, vaccination is an attractive alternative to prevent liver fluke infection. Peptide vaccines are new vaccine technologies that mimic epitope antigens that trigger an immune response. An interesting antigen used in vaccine production is cathepsin L, a family of proteins that play an important role in the life of the parasite in the host. This study aims to identify immunogenic regions of the protein and construct a multi-epitope vaccine using immunoinformatic tools. B-cell and helper T lymphocyte (HTL) epitopes of Fasciola gigantica cathepsin L1 (FgCatL1), cathepsin L1G (FgCatL1G), and cathepsin L1H (FgCatL1H) were predicted using the Immune Epitope Database and Analysis Resource (IEDB) servers. Both B-cell and HTL epitopes were aligned with cathepsin L of the host and of Fasciola hepatica (F. hepatica). Epitope groups were selected from non-conserved regions and sequences overlapping with F. hepatica. All selected epitopes were joined with the GPGPG and KK linkers: the GPGPG linker was used between B-cell epitopes, and the KK linker between HTL epitopes and between B-cell and HTL epitopes. The antigenic score of the multi-epitope peptide vaccine was 0.7824. The multi-epitope peptide vaccine was non-allergenic, non-toxic, and showed good solubility. The tertiary structure and refined model of the multi-epitope peptide vaccine were predicted by the I-TASSER and GalaxyRefine servers, respectively. Ramachandran plot analysis showed that the refined structural model was of good quality. Discontinuous and linear B-cell epitopes were predicted by the ElliPro server; the multi-epitope peptide vaccine model had two discontinuous and seven linear B-cell epitopes, respectively. Furthermore, the multi-epitope peptide vaccine was docked with Toll-like receptor 2 (TLR-2), with a lowest binding energy of -901.3 kJ/mol. In summary, the multi-epitope peptide vaccine showed antigenicity and would probably elicit an immune response. Therefore, the multi-epitope peptide vaccine could be used to prevent F. gigantica infections in the future.
Keywords: Fasciola gigantica, immunoinformatic tools, multi-epitope, vaccine
Procedia PDF Downloads 83
1919 Synthesis and Properties of Oxidized Corn Starch Based Wood Adhesive
Authors: Salise Oktay, Nilgun Kizilcan, Basak Bengu
Abstract:
At present, formaldehyde-based adhesives such as urea-formaldehyde (UF), melamine-formaldehyde (MF), melamine-urea-formaldehyde (MUF), etc. are mostly used in the wood-based panel industry because of their high reactivity, chemical versatility, and economic competitiveness. However, formaldehyde-based wood adhesives are produced from non-renewable resources, and formaldehyde is classified as a probable human carcinogen (Group B1) by the U.S. Environmental Protection Agency (EPA). Therefore, there has been a growing interest in the development of environment-friendly, economically competitive, bio-based wood adhesives to meet wood-based panel industry requirements. In this study, an oxidized starch-urea wood adhesive was synthesized as a formaldehyde-free adhesive. In this scope, firstly, acid hydrolysis of corn starch was conducted, and then the acid-thinned corn starch was oxidized using hydrogen peroxide and CuSO₄ as the oxidizer and catalyst, respectively. Secondly, the polycondensation reaction between oxidized starch and urea was conducted. Finally, nano-TiO₂ was added to the reaction system to strengthen the adhesive network. Solid content, viscosity, and gel time analyses of the prepared adhesive were performed to evaluate the adhesive processability. FTIR, DSC, TGA, and SEM characterization techniques were used to investigate the chemical structure, thermal properties, and morphology of the adhesive. Rheological analysis of the adhesive was also performed. In order to evaluate the quality of oxidized corn starch-urea adhesives, particleboards were produced at laboratory scale, and the mechanical and physical properties of the boards, such as internal bond, modulus of rupture, modulus of elasticity, and formaldehyde emission, were investigated. The obtained results revealed that oxidized starch-urea adhesives were synthesized successfully and can be good potential candidates for use in the wood-based panel industry with some further development.
Keywords: nano-TiO₂, corn starch, formaldehyde emission, wood adhesives
Procedia PDF Downloads 158
1918 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries
Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman
Abstract:
There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers. The task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists, but for researchers looking to find comparisons across multiple datasets or specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English language and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although there are some unique challenges posed by working with Earth data. One is the sheer size of the databases – it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from metadata available for datasets in NASA’s Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases, but leaves the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of Elasticsearch applied to the metadata. This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and interpreting topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. But perhaps more importantly, it establishes the foundation for a platform that can enable common English to access knowledge that previously required considerable effort and experience. By making this public data accessible to the general public, this work has the potential to transform environmental understanding, engagement, and action.
Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems
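A minimal sketch of the linking step: embed dataset metadata topics and an everyday-English query, then keep the closest topics as candidate edges between the query and the metadata side of the graph. The embedding model, the example topics, and the similarity threshold are illustrative assumptions rather than the paper's actual NLP methods.

```python
# Sketch: linking plain-English queries to Earth-metadata topics by similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

metadata_topics = ["sea surface temperature", "aerosol optical depth",
                   "soil moisture", "chlorophyll-a concentration"]
query = "how warm is the ocean near the coast"

topic_emb = model.encode(metadata_topics, convert_to_tensor=True)
query_emb = model.encode(query, convert_to_tensor=True)
scores = util.cos_sim(query_emb, topic_emb)[0]

# Keep topics above a similarity threshold as candidate graph edges.
links = [(metadata_topics[i], float(s)) for i, s in enumerate(scores) if s > 0.3]
print(sorted(links, key=lambda x: -x[1]))
```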
Procedia PDF Downloads 153
1917 Interpersonal Competence Related to the Practice Learning of Occupational Therapy Students in Hong Kong
Authors: Lik Hang Gary Wong
Abstract:
Background: Practice learning is crucial for preparing healthcare professionals to meet the real challenges they face upon graduation. Students are required to demonstrate their competence in managing interpersonal challenges, such as teamwork with other professionals and communicating well with the service users, during the placement. Such competence precedes clinical practice, and it may eventually affect students' actual performance in a clinical context. Unfortunately, there have been limited studies investigating how such competence affects students' performance in practice learning. Objectives: The aim of this study is to investigate how self-rated interpersonal competence affects students' actual performance during clinical placement. Methods: 40 occupational therapy students from Hong Kong were recruited in this study. Prior to the clinical placement (level two or above), they completed an online survey that included the Interpersonal Communication Competence Scale (ICCS) measuring self-perceived competence in interpersonal communication. Near the end of their placement, the clinical educator rated students’ performance with the Student Practice Evaluation Form - Revised edition (SPEF-R). The SPEF-R measures the eight core competency domains required for an entry-level occupational therapist. This study adopted a cross-sectional observational design. Pearson correlation and multiple regression analyses were conducted to examine the relationship between students' interpersonal communication competence and their actual performance in clinical placement. Results: The ICCS total scores were significantly correlated with all the SPEF-R domains, with correlation coefficients r ranging from 0.39 to 0.51. The strongest association was found with the co-worker communication domain (r = 0.51, p < 0.01), followed by the information gathering domain (r = 0.50, p < 0.01). Taking the ICCS total score as the independent variable and the ratings in the various SPEF-R domains as the dependent variables in the multiple regression analyses, the interpersonal competence measure was identified as a significant predictor of co-worker communication (R² = 0.33, β = 0.014, SE = 0.006, p = 0.026), information gathering (R² = 0.27, β = 0.018, SE = 0.007, p = 0.011), and service provision (R² = 0.17, β = 0.017, SE = 0.007, p = 0.020). Moreover, some specific communication skills appeared to be especially important to clinical practice. For example, immediacy, which means whether the students were readily approachable on all social occasions, correlated with all the SPEF-R domains, with r-values ranging from 0.33 to 0.45. Other sub-skills, such as empathy, interaction management, and supportiveness, were also found to be significantly correlated with most of the SPEF-R domains. Meanwhile, the ICCS scores correlated differently with the co-worker communication domain (r = 0.51, p < 0.01) and the communication with the service user domain (r = 0.39, p < 0.05). This suggested that different communication skill sets would be required for different interpersonal contexts within the workplace. Conclusion: Students' self-perceived interpersonal communication competence could predict their actual performance during clinical placement. Moreover, some specific communication skills were more important to co-worker communication but not to the daily interaction with the service users. There are implications for how to better prepare students to meet the future challenges they will face upon graduation.
Keywords: interpersonal competence, clinical education, healthcare professional education, occupational therapy, occupational therapy students
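A hedged sketch of the analysis described above: Pearson correlations between the ICCS total score and each SPEF-R domain, followed by a simple regression per domain. The toy data and variable names are placeholders invented for illustration, not the study's data.

```python
# Sketch: correlation and per-domain regression of SPEF-R ratings on ICCS totals.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(1)
iccs_total = rng.normal(100, 10, 40)  # 40 students (toy values)
spef_r = {"co-worker communication": 0.5 * iccs_total + rng.normal(0, 8, 40),
          "information gathering": 0.4 * iccs_total + rng.normal(0, 8, 40)}

for domain, scores in spef_r.items():
    r, p = stats.pearsonr(iccs_total, scores)
    model = sm.OLS(scores, sm.add_constant(iccs_total)).fit()
    print(f"{domain}: r = {r:.2f} (p = {p:.3f}), "
          f"beta = {model.params[1]:.3f}, R² = {model.rsquared:.2f}")
```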
Procedia PDF Downloads 77
1916 The Design of an Afghan Refugee Camp in Kerman City through Ecotech Architecture
Authors: Kourosh Ghaffari, Baghaei Azhang
Abstract:
This study aims to address two main questions: whether a camp designed for refugees will affect their quality of life, and how to effectively incorporate ecotech architecture into the architectural design of a refugee camp. The current study planned to ensure that the final design reflects the principles of ecotech architecture found in most refugee camps. The design process has taken into account various factors, including flexibility, diversity in the camp space according to the ecotech approach, expandability in the building, spatial hierarchy in the design of camp spaces, and the assignment of territories and space sanctuaries to refugees. It should be noted that this study is not a research-oriented type of study and is limited to collecting information and formulating hypotheses and questions related to the plan. The researchers attempted to provide a general summary of similar domestic and foreign examples and examine them under similar conditions using ecotech architecture. The research method utilized in this study was qualitative. Afterwards, climate studies of the target area were carried out, attending to the criteria and points extracted from the theoretical framework, reaching the desired conclusion, and examining similar examples. Additionally, placement on the site, compliance with relevant standards and regulations, attention to the content and physical program, and the idea and its evolution in all the details of the plan were presented. The data collection procedure included observation and library studies, and the design method was to determine and recognize the subject and examine similar samples. In conclusion, the principles of the theoretical foundations, the design protocols in ecotech architecture, and the scope of the study are addressed. Furthermore, the site analysis, the design process and the final plan are presented.
Keywords: ecotech architecture, livable city, shelter, refugee camp
Procedia PDF Downloads 83
1915 Modelling and Optimization of a Combined Sorption Enhanced Biomass Gasification with Hydrothermal Carbonization, Hot Gas Cleaning and Dielectric Barrier Discharge Plasma Reactor to Produce Pure H₂ and Methanol Synthesis
Authors: Vera Marcantonio, Marcello De Falco, Mauro Capocelli, Álvaro Amado-Fierro, Teresa A. Centeno, Enrico Bocci
Abstract:
Concerns about energy security, energy prices, and climate change led scientific research towards sustainable alternatives to fossil fuels, namely renewable energy sources coupled with hydrogen as an energy vector and with carbon capture and conversion technologies. Among the technologies investigated in the last decades, biomass gasification acquired great interest owing to the possibility of obtaining low-cost, CO₂-negative hydrogen production from a large variety of widely available organic wastes. Upstream and downstream treatments were then studied in order to maximize the hydrogen yield, reduce the content of organic and inorganic contaminants below the levels admissible for the coupled technologies, and capture and convert carbon dioxide. However, studies which analyse a whole process made up of all those technologies are still missing. In order to fill this gap, the present paper investigated the combination of hydrothermal carbonization (HTC), sorption-enhanced gasification (SEG), hot gas cleaning (HGC), and CO₂ conversion by a dielectric barrier discharge (DBD) plasma reactor for H₂ production from biomass waste by means of Aspen Plus software. The proposed model aimed to identify and optimise the performance of the plant by varying operating parameters (such as temperature, CaO/biomass ratio, separation efficiency, etc.). The carbon footprint of the global plant is 2.3 kg CO₂/kg H₂, lower than the latest limit value imposed by the European Commission to consider hydrogen as “clean”, which was set at 3 kg CO₂/kg H₂. The hydrogen yield referred to the whole plant is 250 g H₂/kg biomass.
Keywords: biomass gasification, hydrogen, aspen plus, sorption enhanced gasification
Procedia PDF Downloads 86
1914 Stability Indicating RP-HPLC Method Development, Validation and Kinetic Study for Amiloride Hydrochloride and Furosemide in Pharmaceutical Dosage Form
Authors: Jignasha Derasari, Patel Krishna M, Modi Jignasa G.
Abstract:
Chemical stability of pharmaceutical molecules is a matter of great concern as it affects the safety and efficacy of the drug product. Stability testing data provide the basis to understand how the quality of a drug substance and drug product changes with time under the influence of various environmental factors. Besides this, it also helps in selecting the proper formulation and package as well as providing proper storage conditions and shelf life, which is essential for regulatory documentation. The ICH guideline states that stress testing is intended to identify the likely degradation products, which further helps in determining the intrinsic stability of the molecule, establishing degradation pathways, and validating the stability-indicating procedures. A simple, accurate and precise stability-indicating RP-HPLC method was developed and validated for the simultaneous estimation of Amiloride Hydrochloride and Furosemide in tablet dosage form. Separation was achieved on a Phenomenex Luna ODS C18 column (250 mm × 4.6 mm i.d., 5 µm particle size) using a mobile phase consisting of orthophosphoric acid:acetonitrile (50:50 % v/v) at a flow rate of 1.0 ml/min (pH 3.5 adjusted with 0.1% TEA in water) in isocratic pump mode, with an injection volume of 20 µl and a detection wavelength of 283 nm. Retention times for Amiloride Hydrochloride and Furosemide were 1.810 min and 4.269 min, respectively. Linearity of the proposed method was obtained in the ranges of 40-60 µg/ml and 320-480 µg/ml, and the correlation coefficients were 0.999 and 0.998 for Amiloride Hydrochloride and Furosemide, respectively. A forced degradation study was carried out on the combined dosage form under various stress conditions, such as hydrolysis (acid and base hydrolysis), oxidative and thermal conditions, as per ICH guideline Q2 (R1). The RP-HPLC method has shown adequate separation of Amiloride Hydrochloride and Furosemide from their degradation products. The proposed method was validated as per ICH guidelines for specificity, linearity, accuracy, precision and robustness for the estimation of Amiloride Hydrochloride and Furosemide in commercially available tablet dosage form, and the results were found to be satisfactory and significant. The developed and validated stability-indicating RP-HPLC method can be used successfully for marketed formulations. Forced degradation studies help in generating degradants in a much shorter span of time, mostly a few weeks, and can be used to develop the stability-indicating method, which can later be applied to the analysis of samples generated from accelerated and long-term stability studies. Further, a kinetic study was also performed for the different forced degradation parameters of the same combination, which helps in determining the order of reaction.
Keywords: amiloride hydrochloride, furosemide, kinetic study, stability indicating RP-HPLC method validation
Procedia PDF Downloads 469
1913 Landslide Hazard Zonation Using Satellite Remote Sensing and GIS Technology
Authors: Ankit Tyagi, Reet Kamal Tiwari, Naveen James
Abstract:
Landslides are the major geo-environmental problem of the Himalaya because of its high ridges, steep slopes, deep valleys, and complex system of streams. They are mainly triggered by rainfall and earthquakes, causing severe damage to life and property. In Uttarakhand, the Tehri reservoir rim area, which is situated in the lesser Himalaya of the Garhwal hills, was selected for landslide hazard zonation (LHZ). The study utilized different types of data, including geological maps, topographic maps from the Survey of India, Landsat 8, and Cartosat DEM data. This paper presents the use of a weighted overlay method in LHZ using fourteen causative factors. The various data layers generated and co-registered were slope, aspect, relative relief, soil cover, intensity of rainfall, seismic ground shaking, seismic amplification at surface level, lithology, land use/land cover (LULC), normalized difference vegetation index (NDVI), topographic wetness index (TWI), stream power index (SPI), drainage buffer and reservoir buffer. Seismic analysis is performed using peak horizontal acceleration (PHA) intensity and amplification factors in the evaluation of the landslide hazard index (LHI). Several digital image processing techniques such as topographic correction, NDVI, and supervised classification were widely used in the process of terrain factor extraction. Lithological features, LULC, drainage pattern, lineaments, and structural features are extracted using digital image processing techniques. Colour, tones, topography, and stream drainage pattern from the imageries are used to analyse geological features. Slope, aspect, and relative relief maps are created using Cartosat DEM data. DEM data are also used for the detailed drainage analysis, which includes TWI, SPI, drainage buffer, and reservoir buffer. In the weighted overlay method, the comparative importance of the several causative factors is obtained from experience. In this method, after multiplying the influence factor by the corresponding rating of a particular class, the result is reclassified, and the LHZ map is prepared. Further, based on the land-use map developed from remote sensing images, a landslide vulnerability study for the study area is carried out and presented in this paper.
Keywords: weighted overlay method, GIS, landslide hazard zonation, remote sensing
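An illustrative sketch of the weighted overlay step: each co-registered factor raster is reclassified to ratings, multiplied by its influence weight, summed into a landslide hazard index (LHI), and binned into hazard zones. The weights, ratings, and tiny grids below are invented for demonstration only and are not values from the study.

```python
# Sketch: weighted overlay of reclassified factor rasters into an LHZ map.
import numpy as np

# Two toy 3x3 factor rasters already reclassified to ratings (1 = low, 5 = high).
slope_rating = np.array([[1, 2, 3], [2, 4, 5], [1, 3, 4]])
rain_rating = np.array([[2, 2, 3], [3, 4, 4], [1, 2, 5]])

weights = {"slope": 0.6, "rainfall": 0.4}  # influence factors (sum to 1)

# Landslide hazard index = sum of weight * rating over all factor layers.
lhi = weights["slope"] * slope_rating + weights["rainfall"] * rain_rating

# Reclassify the index into three hazard zones: 0 = low, 1 = moderate, 2 = high.
zones = np.digitize(lhi, bins=[2.5, 4.0])
print(lhi, zones, sep="\n")
```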
Procedia PDF Downloads 138
1912 Designing and Formulating Action Plan for Development of Corporate Citizenship in Producing Units in Iran
Authors: Freyedon Ahmadi
Abstract:
Corporate citizenship is one of the most discussed topics in developed countries: a corporation is treated much like an ordinary citizen, with civil rights deserving the same respect as those of actual citizens, and in return citizens expect the corporation to show them reciprocal respect. The purpose of the current study is to identify the current state of corporate citizenship, along with the factors affecting it, in industrial producing units, in order to formulate an action plan for corporate citizenship development. In this study, corporate citizenship is examined in four dimensions: legal, economic, ethical, and voluntary. Moreover, the impact of effective factors on corporate citizenship is explored based on a threefold model of behavioral, structural, and contextual factors. Fifty corporations from the food and petrochemical industries, along with 200 individuals selected from their boards of directors in Tehran province using a stratified random sampling method, were chosen as the statistical sample. In terms of purpose and data collection method, the present study is descriptive-correlational; a questionnaire was used for the collection of primary data. Instrument validity was established through expert opinion and structural equation modeling, and reliability was confirmed using Cronbach's alpha. The results indicate that close to 70 percent of the surveyed corporations are not in a good condition with respect to corporate citizenship, and that structural, behavioral, and contextual factors all have a considerable impact on the emergence of corporate citizenship behavior in the producing units. Among the behavioral factors, social responsibility; among the structural factors, an organic structure, a human-centered orientation, medium size, and high organizational capacity; and among the contextual factors, customers' positive viewpoints toward the corporation had the greatest impact on the surveyed producing units. Keywords: corporate citizenship, structural factors, behavioral factors, contextual factors, producing units
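Because questionnaire reliability was assessed with Cronbach's alpha, the following minimal sketch shows how that coefficient is computed from a respondents-by-items score matrix; the item scores are hypothetical stand-ins, not the study's survey data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of questionnaire scores."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical Likert-scale answers from 6 respondents to 4 questionnaire items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 4, 5],
    [3, 3, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
```

Values above roughly 0.7 are conventionally taken to indicate acceptable internal consistency of the instrument.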
Procedia PDF Downloads 234
1911 Evaluate Effects of Different Curing Methods on Compressive Strength, Modulus of Elasticity and Durability of Concrete
Authors: Dhara Shah, Chandrakant Shah
Abstract:
The construction industry uses plenty of water in the name of curing. Looking at the present scenario, the day is not far off when the industry will have to switch over to an alternative self-curing system, not only to save water for the sustainable development of the environment but also to allow indoor and outdoor construction activities even in water-scarce areas. At the same time, curing is essential for the development of proper strength and durability. IS 456-2000 recommends a curing period of 7 days for ordinary Portland cement concrete, and 10 to 14 days for concrete prepared using mineral admixtures or blended cements. But, being the last act in the concreting operations, curing is often neglected or not fully done. Consequently, the quality of the hardened concrete suffers, more so if the freshly laid concrete is exposed to environmental conditions of low humidity, high wind velocity and high ambient temperature. To avoid the adverse effects of neglected or insufficient curing, which is considered a universal phenomenon, concrete technologists and research scientists have come up with curing compounds. Concrete is said to be self-cured if it is able to retain its water content to carry out the chemical reactions needed for the development of its strength. Curing compounds are liquids that are either incorporated into concrete or sprayed directly onto concrete surfaces, where they dry to form a relatively impermeable membrane that retards the loss of moisture from the concrete. They are an efficient and cost-effective means of curing concrete and may be applied to freshly placed concrete or to concrete that has been partially cured by some other means. However, they may affect the bond between concrete and subsequent surface treatments, so special care in the choice of a suitable compound needs to be exercised in such circumstances. Curing compounds are generally formulated from wax emulsions, chlorinated rubbers, synthetic and natural resins, and PVA emulsions. Their effectiveness varies quite widely, depending on the material and the strength of the emulsion. Keywords: curing methods, self-curing compound, compressive strength, modulus of elasticity, durability
Procedia PDF Downloads 332
1910 The Usefulness and Usability of a Linkedin Group for the Maintenance of a Community of Practice among Hand Surgeons Worldwide
Authors: Vaikunthan Rajaratnam
Abstract:
Maintaining continuous professional development among clinicians has been a challenge. Hand surgery is a unique speciality that brings together orthopaedic, plastic and trauma surgeons. The requirement for a team-based approach to care, with the inclusion of other experts such as occupational therapists, physiotherapists, orthotists and prosthetists, provides the impetus for the creation of communities of practice. This study analysed a community of practice in hand surgery that was created through a social networking website for professionals. The first objective was to discover the usefulness of this community of practice, created using the group function of LinkedIn. The second objective was to determine the usability of this platform for continuing professional development among members of the community. The methodology was a mixed-methods one, which included a quantitative analysis of the usefulness of the social network website as a community of practice, using the analytics provided by the LinkedIn platform. Further qualitative analysis was performed on the various postings generated by the community of practice within the social network website. This was augmented by a respondent-driven survey conducted online to assess the usefulness of the platform for continuous professional development. A total of 31 respondents were involved in this study. The study has shown that it is possible to create an engaging and interactive community of practice among hand surgeons using the group function of this professional social networking website, LinkedIn. Over three years the group has grown significantly, with members from multiple regions, and has produced engaging and interactive conversations online. From the results of the respondents' survey, it can be concluded that there was satisfaction with the functionality (69% satisfaction) and that it was an excellent platform for discussions and collaboration in the community of practice. Case-based discussions were the most useful function of the community of practice. The platform's usability was graded as excellent using a validated usability tool. This study has shown that the group function of the social networking site LinkedIn can be used effectively as a community of practice, provides convenience to professionals, and has made an impact on their practice and on better care for patients. It has also shown that the platform is easy to use and has a high level of usability for the average healthcare professional. The platform provided improved connectivity among professionals involved in hand surgery care, which allowed the community to grow; with proper support and the contribution of relevant material by members, it offered a safe environment for the exchange of knowledge and the sharing of experience that is the foundation of a community of practice. Keywords: community of practice, online community, hand surgery, lifelong learning, LinkedIn, social media, continuing professional development
Procedia PDF Downloads 318
1909 Definition, Barriers to and Facilitators of Moral Distress as Perceived by Neonatal Intensive Care Physicians
Authors: M. Deligianni, P. Voultsos, E. Tsamadou
Abstract:
Background/Introduction: Moral distress is a common occurrence for health professionals working in neonatal critical care. Despite a growing number of critically ill neonatal and pediatric patients, only a few articles related to moral distress as experienced by neonatal physicians have been published in recent years. Objectives/Aims: The aim of this study was to define moral distress and to identify its barriers and facilitators based on the perceptions and experiences of neonatal physicians working in neonatal intensive care units (NICUs). This pilot study is part of a larger nationwide project. Methods: A multicenter qualitative descriptive study using focus group methodology was conducted. In-depth interviews lasting 45 to 60 minutes were audio-recorded. Once the data were transcribed, conventional content analysis was used to develop the definition and categories and to identify the barriers to and facilitators of moral distress. Results: Participants defined moral distress broadly in the context of neonatal critical care, and a wide variation of definitions was displayed. The physicians' responses to moral distress included different feelings and other situations. The overarching categories that emerged from the data were patient-related, family-related, and physician-related factors. Moreover, organizational factors may constitute major facilitators of moral distress among neonatal physicians in NICUs. Note, however, that moral distress may be regarded as an essential component of caring for neonates in critical care. The present study provides further insight into the moral distress experienced by physicians working in Greek NICUs. Discussion/Conclusions: Understanding how neonatal physicians define moral distress and what contributes to its development is foundational to developing targeted strategies for mitigating the prevalence of moral distress in the context of NICUs. Keywords: critical care, moral distress, neonatal physician, neonatal intensive care unit, NICU
Procedia PDF Downloads 155
1908 Full Disclosure Policy: Transparency in Fiscal Administration
Authors: Joyly Jill Apud
Abstract:
Corruption is an all-encompassing issue worldwide. Many attempts have been made to address such cases, especially by governments, through increasing transparency. The Philippine government increased its transparency mechanisms by opening its financial transactions to the public through the Full Disclosure Policy, which mandates all local governments to post all financial transactions on their websites (Philippine Public Transparency Reporting Project, 2011). For transparency to be fully realized, the challenge lies in creating a mechanism through which constituents are encouraged to engage as social auditors. In line with this challenge, the study focused on Davao City, Philippines, measuring the respondents' awareness, access and utilization of the Full Disclosure Policy (FDP). In particular, this study determined the significant difference in the awareness, access and utilization of respondents when grouped according to sector, and the significant relationship between respondents' awareness and their access and utilization of FDP reports. The study used a descriptive-correlational design, with the mean, ANOVA and Pearson r as statistical treatments. The 120 respondents came from the different sectors of Davao City: the academe, youth, LGUs, NGOs, business, and church groups. The respondents' awareness was measured in three main categories: existence of the policy, content of the policy, and manner of publication. Access and utilization of the FDP reports were divided into three categories: budget reports, procurement reports and special purpose fund reports. Results showed that the respondents are moderately aware of the policy. Although they are aware of the disclosure, they are unaware of the Full Disclosure Policy and the Full Disclosure Policy Portal. Moreover, the respondents seldom access and utilize the FDP reports. Further results revealed a significant difference in awareness and in access and utilization of the FDP when respondents are grouped according to sector, and a significant relationship between awareness and access and utilization of the FDP: the higher the awareness of the FDP, the higher the level of access to and utilization of the FDP reports. Keywords: corruption, e-governance, budget transparency, participation
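To make the statistical treatment concrete, here is a minimal sketch of a one-way ANOVA across sectors and a Pearson correlation between awareness and utilization scores. The sector labels and all numbers are invented for illustration and are not the survey data.

```python
from scipy import stats

# Hypothetical awareness scores (1-5 scale) for three of the surveyed sectors.
academe  = [3.2, 3.5, 2.9, 3.8, 3.1]
youth    = [2.4, 2.7, 2.2, 2.9, 2.5]
business = [3.9, 4.1, 3.6, 4.0, 3.7]

# One-way ANOVA: does mean awareness differ across sectors?
f_stat, p_anova = stats.f_oneway(academe, youth, business)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Pearson r: is awareness related to access/utilization of FDP reports?
awareness   = [2.1, 2.8, 3.0, 3.4, 3.9, 4.2]
utilization = [1.8, 2.5, 2.6, 3.1, 3.5, 4.0]
r, p_corr = stats.pearsonr(awareness, utilization)
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")
```

A small p-value in the ANOVA would support the reported difference across sectors, and a positive, significant r would support the reported awareness-utilization relationship.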
Procedia PDF Downloads 399
1907 A Segmentation Method for Grayscale Images Based on the Firefly Algorithm and the Gaussian Mixture Model
Authors: Donatella Giuliani
Abstract:
In this research, we propose an unsupervised grayscale image segmentation method based on a combination of the Firefly Algorithm and the Gaussian Mixture Model. Firstly, the Firefly Algorithm is applied in a histogram-based search for cluster means. The Firefly Algorithm is a stochastic global optimization technique inspired by the flashing behaviour of fireflies; in this context it is used to determine the number of clusters and the related cluster means in a histogram-based segmentation approach. These means are then used in the initialization step of the parameter estimation of a Gaussian Mixture Model. The parametric probability density function of a Gaussian Mixture Model is represented as a weighted sum of Gaussian component densities, whose parameters are estimated with the iterative Expectation-Maximization technique. The coefficients of the linear superposition of Gaussians can be thought of as prior probabilities of each component. Applying Bayes' rule, the posterior probabilities of the grayscale intensities are evaluated, and their maxima are used to assign each pixel to a cluster according to its gray-level value. The proposed approach appears fairly solid and reliable even when applied to complex grayscale images. Validation was performed using several standard measures, namely the Root Mean Square Error (RMSE), the Structural Content (SC), the Normalized Correlation Coefficient (NK) and the Davies-Bouldin (DB) index. The achieved results strongly confirm the robustness of this grayscale segmentation method based on a metaheuristic algorithm. Another noteworthy advantage of this methodology is the use of the maxima of the responsibilities for pixel assignment, which implies a consistent reduction of the computational costs. Keywords: clustering images, firefly algorithm, Gaussian mixture model, metaheuristic algorithm, image segmentation
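The sketch below illustrates the second stage of the method: EM fitting of a Gaussian mixture to grayscale intensities followed by maximum-posterior pixel assignment, using scikit-learn's GaussianMixture. The synthetic image, the number of components, and the initial means stand in for the histogram-based Firefly search, which is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic grayscale image: three intensity populations plus noise (illustrative only).
rng = np.random.default_rng(0)
image = np.concatenate([
    rng.normal(60, 8, 1000),    # dark region
    rng.normal(130, 10, 1000),  # mid-gray region
    rng.normal(200, 6, 1000),   # bright region
]).clip(0, 255).reshape(60, 50)

# Cluster means that, in the paper, would come from the Firefly histogram search;
# here they are simply assumed starting values.
init_means = np.array([[60.0], [130.0], [200.0]])

pixels = image.reshape(-1, 1)
gmm = GaussianMixture(n_components=3, means_init=init_means, random_state=0)
gmm.fit(pixels)                       # EM estimates mixing weights (priors), means, variances

# Posterior probabilities via Bayes' rule; each pixel goes to the cluster with max posterior.
posteriors = gmm.predict_proba(pixels)
labels = posteriors.argmax(axis=1).reshape(image.shape)

print("Estimated cluster means:", gmm.means_.ravel().round(1))
print("Mixing weights (priors):", gmm.weights_.round(3))
print("Segmented label image shape:", labels.shape)
```

Seeding the EM step with good initial means is what makes the preceding global search worthwhile: it reduces the risk of EM converging to a poor local optimum of the likelihood.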
Procedia PDF Downloads 218
1906 Biogas Production from Lake Bottom Biomass from Forest Management Areas
Authors: Dessie Tegegne Tibebu, Kirsi Mononen, Ari Pappinen
Abstract:
In areas with forest management, agricultural, and industrial activity, sediments and biomass accumulate in lakes through the drainage system, which may cause biodiversity loss and health problems. One possible solution is the utilization of lake bottom biomass and sediments for biogas production. The main objective of this study was to investigate the potential of lake bottom materials for the production of biogas by anaerobic digestion and to study the effect of pretreatment of the feed materials on biogas yield. To assess the biogas production potential, lake bottom materials were collected from two sites, Likokanta and Kutunjärvi lake. The lake bottom materials were mixed with straw-horse manure to produce biogas in a laboratory-scale reactor. The results indicated that the highest biogas yields were observed when the feed was composed of 50% lake bottom materials and 50% straw-horse manure, while with more than 50% lake bottom materials in the feed, biogas production decreased. The CH4 contents from Likokanta lake materials with straw-horse manure and from Kutunjärvi lake materials with straw-horse manure were similar when the feed consisted of 50% lake bottom materials and 50% straw-horse manure; however, with more than 50% lake bottom materials in the feed, the CH4 concentration started to decrease, impairing the gas process. The pretreatment applied to the Kutunjärvi lake materials showed a slight negative effect on biogas production and the lowest CH4 concentrations throughout the experiment. The average CH4 production from pretreated Kutunjärvi lake materials with straw-horse manure (208.9 ml g-1 VS) and untreated Kutunjärvi lake materials with straw-horse manure (182.2 ml g-1 VS) was markedly higher than that from Likokanta lake materials with straw-horse manure (157.8 ml g-1 VS). According to the experimental results, the use of 100% lake bottom materials is likely to impair biogas production. In the future, further analyses to improve the biogas yields and an assessment of costs and benefits are needed before lake bottom materials are utilized for the production of biogas. Keywords: anaerobic digestion, biogas, lake bottom materials, sediments, pretreatment
Procedia PDF Downloads 340
1905 Iranian Processed Cheese under Effect of Emulsifier Salts and Cooking Time in Process
Authors: M. Dezyani, R. Ezzati bbelvirdi, M. Shakerian, H. Mirzaei
Abstract:
Sodium hexametaphosphate (SHMP) is commonly used as an emulsifying salt (ES) in process cheese, although rarely as the sole ES. It appears that no published studies exist on the effect of SHMP concentration on the properties of process cheese when pH is kept constant; pH is well known to affect process cheese functionality. The detailed interactions between the added phosphate, casein (CN), and indigenous Ca phosphate are poorly understood. We studied the effect of the concentration of SHMP (0.25-2.75%) and holding time (0-20 min) on the textural and rheological properties of pasteurized process Cheddar cheese using a central composite rotatable design. All cheeses were adjusted to pH 5.6. The meltability of the process cheese (as indicated by the decrease in the loss tangent parameter from small-amplitude oscillatory rheology, the degree of flow, and the melt area from the Schreiber test) decreased with an increase in the concentration of SHMP. Holding time also led to a slight reduction in meltability. The hardness of the process cheese increased as the concentration of SHMP increased. Acid-base titration curves indicated that the buffering peak at pH 4.8, which is attributable to residual colloidal Ca phosphate, was shifted to lower pH values with increasing concentration of SHMP. The insoluble Ca and the total and insoluble P contents increased as the concentration of SHMP increased. The proportion of insoluble P as a percentage of total (indigenous and added) P decreased with an increase in ES concentration because some of the (added) SHMP formed soluble salts. The results of this study suggest that SHMP chelated the residual colloidal Ca phosphate and dispersed CN; the newly formed Ca-phosphate complex remained trapped within the process cheese matrix, probably by cross-linking CN. Increasing the concentration of SHMP helped to improve fat emulsification and CN dispersion during cooking, both of which probably helped to reinforce the structure of the process cheese. Keywords: Iranian processed cheese, emulsifying salt, rheology, texture
Procedia PDF Downloads 434