Search results for: information transparency
627 Impact of Financial Factors on Total Factor Productivity: Evidence from Indian Manufacturing Sector
Authors: Lopamudra D. Satpathy, Bani Chatterjee, Jitendra Mahakud
Abstract:
Rapid economic growth in terms of output and investment necessitates substantial growth in the Total Factor Productivity (TFP) of firms, which is an indicator of an economy’s technological change. The strong empirical relationship between financial sector development and economic growth indicates that firms’ financing decisions affect their levels of output via their investment decisions, establishing a linkage between financial factors and the productivity growth of firms. To achieve smooth and continuous economic growth over time, it is imperative to understand the financial channel, which serves as one of the vital channels. The theoretical argument behind this linkage is that when internal financial capital is not sufficient for investment, firms rely upon external sources of finance. But due to market frictions and information asymmetry, it is costlier for firms to raise external capital from the market, which in turn affects their investment sentiment and productivity. This financial position puts heavy pressure on their productive activities. Keeping in view this theoretical background, the present study analyzes the role of both external and internal financial factors (leverage, cash flow and liquidity) in the determination of the total factor productivity of firms in the manufacturing industry and its sub-industries, maintaining a set of firm-specific variables as control variables (size, age and disembodied technological intensity). An estimate of the total factor productivity of the Indian manufacturing industry and its sub-industries is computed using a semi-parametric approach, i.e., the Levinsohn-Petrin method. The study establishes the relationship between financial factors and the productivity growth of 652 firms using a dynamic panel GMM method covering the period between 1997-98 and 2012-13.
From the econometric analyses, it is found that internal cash flow has a positive and significant impact on the productivity of the overall manufacturing sector. The other financial factors, leverage and liquidity, also play a significant role in the determination of the total factor productivity of the Indian manufacturing sector. The significant role of internal cash flow in determining firm-level productivity suggests that access to external finance is not easily available to Indian companies. Further, the negative impact of leverage on productivity could be due to the less developed bond market in India. These findings imply that policy makers should undertake reforms to develop the external bond market, through which financially constrained companies would be able to raise financial capital in a cost-effective manner and channel their investments into highly productive activities, which would help accelerate economic growth.
Keywords: dynamic panel, financial factors, manufacturing sector, total factor productivity
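As an illustrative aside, the residual-based logic behind TFP estimation can be sketched with simulated data. This is a deliberately simplified stand-in: plain OLS on a Cobb-Douglas production function in logs, whereas the Levinsohn-Petrin estimator cited above additionally uses intermediate inputs as a proxy to correct the simultaneity bias that OLS ignores. All data and coefficients below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated firm-level (log) data: output y, labour l, capital k
n = 500
l = rng.normal(4.0, 1.0, n)          # log labour
k = rng.normal(6.0, 1.5, n)          # log capital
tfp = rng.normal(0.0, 0.3, n)        # true log-TFP (unobserved in practice)
y = 1.0 + 0.6 * l + 0.3 * k + tfp    # Cobb-Douglas in logs

# OLS estimate of the production function: y = b0 + b_l*l + b_k*k + e
X = np.column_stack([np.ones(n), l, k])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The residual is the log-TFP estimate for each firm
tfp_hat = y - X @ beta
print(beta)                              # roughly [1.0, 0.6, 0.3]
print(np.corrcoef(tfp, tfp_hat)[0, 1])   # close to 1 in this clean simulation
```

In real panel data the input choices are correlated with unobserved productivity, which is exactly why the study uses Levinsohn-Petrin rather than OLS for this step.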
Procedia PDF Downloads 332
626 Three-Dimensional Model of Leisure Activities: Activity, Relationship, and Expertise
Authors: Taekyun Hur, Yoonyoung Kim, Junkyu Lim
Abstract:
Previous works on leisure activities categorized activities arbitrarily and subjectively while focusing on a single dimension (e.g. active-passive, individual-group). To overcome these problems, this study proposed a matrix model of Korean leisure activities that considers the multidimensional features of leisure activities, comprising 3 main factors and 6 sub-factors: (a) Activity (physical, mental), (b) Relationship (quantity, quality), (c) Expertise (entry barrier, possibility of improving). We developed items for measuring the degree of each dimension for every leisure activity. Using the developed Leisure Activities Dimensions (LAD) questionnaire, we investigated the presented dimensions of a total of 78 leisure activities that have recently been enjoyed by most Koreans (e.g. watching movies, taking a walk, watching media). The study sample consisted of 1,348 people (726 men, 658 women) ranging in age from teenagers to elderly people in their seventies. The study gathered 60 responses for each leisure activity, a total of 4,860 data points, which were used for statistical analysis. First, this study compared the fit of the 3-factor model (Activity, Relationship, Expertise) with that of the 6-factor model (physical activity, mental activity, relational quantity, relational quality, entry barrier, possibility of improving) using confirmatory factor analysis. Based on several goodness-of-fit indicators, the 6-factor model was a better fit for the data. This result indicates that enough dimensions of leisure activities (6 dimensions in our study) should be taken into account to specifically apprehend each leisure attribute. In addition, the 78 leisure activities were cluster-analyzed with the scores calculated based on the 6-factor model, which resulted in 8 leisure activity groups. Cluster 1 (e.g. group sports, group musical activity) and Cluster 5 (e.g. individual sports) had generally higher scores on all dimensions than the others, but Cluster 5 had lower relational quantity than Cluster 1. In contrast, Cluster 3 (e.g. SNS, shopping) and Cluster 6 (e.g. playing a lottery, taking a nap) had low scores on the whole, though Cluster 3 showed medium levels of relational quantity and quality. Cluster 2 (e.g. machine operating, handwork/invention) required high expertise and mental activity but low physical activity. Cluster 4 indicated high mental activity and relational quantity despite low expertise. Cluster 7 (e.g. touring, joining festivals) required moderate degrees of physical activity and relation but low expertise. Lastly, Cluster 8 (e.g. meditation, information searching) showed high mental activity. Even though the clusters in our study had a few similarities with the preexisting taxonomy of leisure activities, there was a clear distinction between them. Unlike the preexisting taxonomy, which had been created subjectively, we sorted the 78 leisure activities based on objective scores on 6 dimensions. We also identified that some leisure activities that used to belong to the same leisure group were included in different clusters (e.g. field ball sports, net sports) because of their different features. In other words, the results can provide a different perspective for leisure activities research and be helpful for figuring out what various characteristics leisure participants have.
Keywords: leisure, dimensional model, activity, relationship, expertise
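The clustering step described above can be sketched with a toy example. The abstract does not state which clustering algorithm was used, so plain k-means on synthetic six-dimension activity scores is assumed here purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the study's data: 60 activities scored on 6 dimensions
# (physical, mental, relational quantity/quality, entry barrier, improvability)
centres = np.array([[0.8] * 6,
                    [0.2] * 6,
                    [0.8, 0.8, 0.2, 0.2, 0.8, 0.8]])
scores = np.vstack([c + rng.normal(0, 0.05, (20, 6)) for c in centres])

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point seeding."""
    cent = [X[0]]
    for _ in range(k - 1):  # pick seeds spread far apart
        d = np.min(np.linalg.norm(X[:, None] - np.array(cent)[None], axis=2), axis=1)
        cent.append(X[d.argmax()])
    cent = np.array(cent)
    for _ in range(iters):  # assign to nearest centroid, then recompute centroids
        lab = np.linalg.norm(X[:, None] - cent[None], axis=2).argmin(axis=1)
        cent = np.array([X[lab == j].mean(axis=0) for j in range(k)])
    return lab, cent

labels, cents = kmeans(scores, 3)
print(np.bincount(labels))   # three balanced clusters of 20 activities
```

With 78 real activities scored on the six LAD dimensions, the same routine (with the number of clusters chosen by fit criteria) would produce the kind of grouping reported above.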
Procedia PDF Downloads 311
625 Lessons Learnt from Tutors’ Perspectives on Online Tutorial’s Policies in Open and Distance Education Institution
Authors: Durri Andriani, Irsan Tahar, Lilian Sarah Hiariey
Abstract:
Every institution has to develop, implement, and control its policies to ensure its effectiveness. In doing so, all related stakeholders have to be involved to maximize the benefit of the policies and minimize the potential constraints and resistance. Open and distance education (ODE) institutions are no different. As education institutions, ODE institutions have to focus their attention on fulfilling the academic needs of their students through open and distance measures. One of these is a quality learning support system. Significant stakeholders in the learning support system are tutors, since they are the ones who directly communicate with students. Tutors are commonly seen as objects whose main responsibility is limited to implementing policies decided by management in ODE institutions. Nonetheless, tutors’ perceptions of tutorials are believed to influence their performance in facilitating learning support. It is therefore important to analyze tutors’ perceptions of various aspects of learning support. This paper presents an analysis of tutors’ perceptions of tutorial policies in an ODE institution using the Policy Analysis Framework (PAF) modified by King, Nugent, Russell, and Lacy. The focus of this paper is on online tutors, those who provide tutorials via the Internet. Online tutors were chosen to stress the increasingly important use of the Internet in ODE systems. The research was conducted at Universitas Terbuka (UT), Indonesia. UT was purposely selected because of the large number of courses it offers (1,234) and its large coverage area (6,000 inhabited islands). This places UT in a unique position where the learning support system has, to some extent, to be standardized while at the same time catering to the needs of different courses in different places for students with different backgrounds. All 598 listed online tutors were sent the research questionnaires. Around 20% of the email addresses could not be reached.
Tutors were asked to fill out open-ended questionnaires on their perceptions of the definition of an online tutorial, the roles of tutors and students in online tutorials, the requirements for online tutors, learning materials, and student evaluation in online tutorials. The data analyzed were gathered from the 40 online tutors who sent back filled-out questionnaires, and were analyzed qualitatively using content analysis. The results showed that using PAF as an entry point, with learning support services as the policy area and the delivery of learning materials as the issue at UT, provided new insights into the aspects that need to be considered in formulating policies on online tutorials and learning support services. Involving tutors as a source of information proved to be productive. In general, tutors had a clear understanding of the definition of an online tutorial, the roles of tutors and students, and the requirements for tutors. Tutors just need to be more involved in policy formulation, since they can provide data on students and the problems faced in online tutorials. However, tutors need an adjustment in student evaluation, which, according to tutors, focuses too much on administrative and subjective aspects.
Keywords: distance education, on-line tutorial, tutorial policy, tutors’ perspectives
Procedia PDF Downloads 253
624 The Properties of Risk-based Approaches to Asset Allocation Using Combined Metrics of Portfolio Volatility and Kurtosis: Theoretical and Empirical Analysis
Authors: Maria Debora Braga, Luigi Riso, Maria Grazia Zoia
Abstract:
Risk-based approaches to asset allocation are portfolio construction methods that do not rely on the input of expected returns for the asset classes in the investment universe and use only risk information. They include the Minimum Variance Strategy (MV strategy), the traditional (volatility-based) Risk Parity Strategy (SRP strategy), the Most Diversified Portfolio Strategy (MDP strategy) and, for many, the Equally Weighted Strategy (EW strategy). All the mentioned approaches are based on portfolio volatility as the reference risk measure, but in 2023 the Kurtosis-based Risk Parity strategy (KRP strategy) and the Minimum Kurtosis strategy (MK strategy) were introduced. Understandably, they used the fourth root of the portfolio fourth moment as a proxy for portfolio kurtosis in order to work with a homogeneous function of degree one. This paper contributes mainly theoretically and methodologically to the framework of risk-based asset allocation approaches with two steps forward. First, a new and more flexible objective function considering a linear combination (with positive coefficients that sum to one) of portfolio volatility and portfolio kurtosis is used to serve, alternatively, a risk minimization goal or a homogeneous risk distribution goal. Hence, the new basic idea consists in extending the typical goals of risk-based approaches to a combined risk measure. To explain the rationale behind operating with such a risk measure, it is worth remembering that volatility and kurtosis are both expressions of uncertainty, read as dispersion of returns around the mean; both preserve adherence to a symmetric framework and consideration of the entire returns distribution; but they differ in that the former captures the “normal” or “ordinary” dispersion of returns, while the latter is able to capture extreme dispersion.
Therefore, a combined risk metric that uses two individual metrics focused on the same phenomenon but differently sensitive to its intensity allows the asset manager, by varying the “relevance coefficient” associated with the individual metrics in the objective function, to express a wide set of plausible investment goals for the portfolio construction process while serving investors differently concerned with tail risk and traditional risk. Since this is the first study to implement risk-based approaches using a combined risk measure, it is of fundamental importance to investigate the portfolio effects triggered by this innovation. The paper also offers a second contribution. Until the recent advent of the MK strategy and the KRP strategy, efforts to highlight interesting properties of risk-based approaches were inevitably directed towards the traditional MV strategy and SRP strategy. Previous literature established an increasing order in terms of portfolio volatility, starting from the MV strategy, through the SRP strategy, and arriving at the EW strategy, and provided mathematical proof of the “equalization effect” concerning marginal risks for the MV strategy and concerning risk contributions for the SRP strategy. Regarding the validity of similar conclusions for the MK strategy and the KRP strategy, a theoretical demonstration is still pending. This paper fills this gap.
Keywords: risk parity, portfolio kurtosis, risk diversification, asset allocation
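A minimal numeric sketch of the combined objective described above, assuming simulated returns and a brute-force grid search over long-only weights; a production implementation would use a constrained optimizer and the full cokurtosis tensor rather than raw sample moments:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated daily returns for 3 asset classes; asset 2 gets heavy-tailed shocks
T = 2000
R = rng.normal(0, [0.01, 0.012, 0.008], (T, 3))
R[:, 2] += rng.standard_t(3, T) * 0.004

def combined_risk(w, R, lam):
    """lam * volatility + (1 - lam) * fourth root of the portfolio 4th moment."""
    rp = R @ w
    vol = rp.std()
    kurt_proxy = np.mean((rp - rp.mean()) ** 4) ** 0.25
    return lam * vol + (1 - lam) * kurt_proxy

# Brute-force search over long-only weights summing to one
best = None
for w1 in np.linspace(0, 1, 51):
    for w2 in np.linspace(0, 1 - w1, 51):
        w = np.array([w1, w2, 1 - w1 - w2])
        risk = combined_risk(w, R, lam=0.5)
        if best is None or risk < best[0]:
            best = (risk, w)
print(best[1])  # minimum-combined-risk weights for lam = 0.5
```

Both component risk measures are homogeneous of degree one in the weights, which is what makes the linear combination itself a degree-one risk measure, as the paper requires.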
Procedia PDF Downloads 65
623 Foreseen the Future: Human Factors Integration in European Horizon Projects
Authors: José Manuel Palma, Paula Pereira, Margarida Tomás
Abstract:
The development of new technologies such as artificial intelligence, smart sensing, robotics, cobotics and intelligent machinery must integrate human factors to address the need to optimize systems and processes, thereby contributing to the creation of a safe and accident-free work environment. Human Factors Integration (HFI) consistently poses a challenge for organizations when applied to daily operations. The AGILEHAND and FORTIS projects are grounded in the development of cutting-edge technology for industry 4.0 and 5.0. AGILEHAND aims to create advanced technologies to autonomously sort, handle, and package soft and deformable products, whereas FORTIS focuses on developing a comprehensive Human-Robot Interaction (HRI) solution. The two projects employ different approaches to explore HFI. AGILEHAND is mainly empirical, involving a comparison between current and future working conditions, coupled with an understanding of best practices and the enhancement of safety aspects, primarily through management. FORTIS applies HFI throughout the project, developing a human-centric approach that includes understanding human behavior, perceiving activities, and facilitating contextual human-robot information exchange. Its intervention is holistic, merging technology with the physical and social contexts, based on a total safety culture model. In AGILEHAND, we will identify emergent safety risks and challenges, their causes, and how to overcome them by resorting to interviews, questionnaires, literature review and case studies. Findings and results will be presented in the “Strategies for Workers’ Skills Development, Health and Safety, Communication and Engagement” handbook.
The FORTIS project will implement continuous monitoring and guidance of activities, with a critical focus on the early detection and elimination (or mitigation) of risks associated with the new technology, as well as guidance for complying with European Union safety and privacy regulations, ensuring HFI and thereby contributing to an optimized, safe work environment. To achieve this, we will embed safety by design, apply questionnaires, perform site visits, provide risk assessments, and closely track progress while suggesting and recommending best practices. The outcomes of these measures will be compiled in the project deliverable titled “Human Safety and Privacy Measures”. These projects received funding from the European Union’s Horizon 2020/Horizon Europe research and innovation programme under grant agreements No 101092043 (AGILEHAND) and No 101135707 (FORTIS).
Keywords: human factors integration, automation, digitalization, human robot interaction, industry 4.0 and 5.0
Procedia PDF Downloads 65
622 Morphology, Qualitative, and Quantitative Elemental Analysis of Pheasant Eggshells in Thailand
Authors: Kalaya Sribuddhachart, Mayuree Pumipaiboon, Mayuva Youngsabanant-Areekijseree
Abstract:
The ultrastructure of the eggshells of 20 pheasant species in Thailand (Siamese Fireback, Lophura diardi; Silver Pheasant, Lophura nycthemera; Kalij Pheasant, Lophura leucomelanos crawfurdii; Kalij Pheasant, Lophura leucomelanos lineata; Red Junglefowl, Gallus gallus spadiceus; Crested Fireback, Lophura ignita rufa; Green Peafowl, Pavo muticus; Indian Peafowl, Pavo cristatus; Grey Peacock Pheasant, Polyplectron bicalcaratum bicalcaratum; Lesser Bornean Fireback, Lophura ignita ignita; Green Junglefowl, Gallus varius; Hume's Pheasant, Syrmaticus humiae humiae; Himalayan Monal, Lophophorus impejanus; Golden Pheasant, Chrysolophus pictus; Ring-Neck Pheasant, Phasianus sp.; Reeves’s Pheasant, Syrmaticus reevesi; Polish Chicken, Gallus sp.; Brahma Chicken, Gallus sp.; Yellow Golden Pheasant, Chrysolophus pictus luteus; and Lady Amherst’s Pheasant, Chrysolophus amherstiae) was studied using the secondary electron imaging (SEI) and energy dispersive X-ray analysis (EDX) detectors of a scanning electron microscope. Generally, all pheasant eggshells showed 3 layers: cuticle, palisade, and mammillary. The total thickness ranged from 190.28±5.94 to 838.96±16.31 µm. The palisade layer is the thickest layer, followed by the mammillary and cuticle layers. The palisade layer of all pheasant eggshells contained numerous vesicle holes firmly forming a network throughout the layer. The vesicle holes of the different pheasant eggshells had porosities ranging from 0.23±0.05 to 0.44±0.11 µm, while the mammillary layer was the most compact layer, with variable shapes (broad-base V and U shapes) connecting to the shell membrane.
Elemental analysis of the eggshells of the 20 species showed 9 apparent elements, including carbon (C), oxygen (O), calcium (Ca), phosphorus (P), sulfur (S), magnesium (Mg), silicon (Si), aluminum (Al), and copper (Cu), at percentages of 28.90-8.33%, 60.64-27.61%, 55.30-14.49%, 1.97-0.03%, 0.08-0.03%, 0.50-0.16%, 0.30-0.04%, 0.06-0.02%, and 2.67-1.73%, respectively. It was found that Ca, C, and O showed the highest elemental compositions, which are essential for pheasant embryonic development, mainly present as the composite structure of calcium carbonate (CaCO3) at more than 97%. Meanwhile, Mg, S, Si, Al, and P were inorganic constituents of the eggshells directly related to an increase in shell hardness. Finally, the heavy metal copper (Cu) was observed in 4 eggshell species: Golden Pheasant (2.67±0.16%), Indian Peafowl (2.61±0.13%), Green Peafowl (1.97±0.74%), and Silver Pheasant (1.73±0.11%). A non-significant difference was found in the percentages of the 9 elements across all pheasant eggshells. This study provides useful biological and taxonomic information for pheasant studies and conservation in Thailand.
Keywords: pheasant eggshells, secondary electron imaging (SEI) and energy dispersive X-ray analysis (EDX), morphology, Thailand
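Since the abstract reports the shells as more than 97% calcium carbonate, the theoretical mass fractions of Ca, C, and O in pure CaCO3 give a useful reference point for reading the EDX weight percentages. The snippet below uses standard atomic masses and is not taken from the paper:

```python
# Theoretical mass fractions of Ca, C and O in pure calcium carbonate
ATOMIC_MASS = {"Ca": 40.078, "C": 12.011, "O": 15.999}

def caco3_mass_fractions():
    # One Ca, one C, three O atoms per CaCO3 formula unit
    total = ATOMIC_MASS["Ca"] + ATOMIC_MASS["C"] + 3 * ATOMIC_MASS["O"]
    return {
        "Ca": ATOMIC_MASS["Ca"] / total,
        "C": ATOMIC_MASS["C"] / total,
        "O": 3 * ATOMIC_MASS["O"] / total,
    }

fracs = caco3_mass_fractions()
print({k: round(v, 3) for k, v in fracs.items()})  # Ca ≈ 0.400, C ≈ 0.120, O ≈ 0.480
```

EDX weight percentages deviating strongly from these ratios would indicate material other than CaCO3, such as the organic matrix or trace inorganic constituents.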
Procedia PDF Downloads 235
621 The Impact of Anxiety on the Access to Phonological Representations in Beginning Readers and Writers
Authors: Regis Pochon, Nicolas Stefaniak, Veronique Baltazart, Pamela Gobin
Abstract:
Anxiety is known to have an impact on working memory. In reasoning or memory tasks, individuals with anxiety tend to show longer response times and poorer performance. Furthermore, there is a memory bias for negative information in anxiety. Given the crucial role of working memory in lexical learning, anxious students may encounter more difficulties in learning to read and spell. Anxiety could even affect an earlier stage of learning, namely the activation of phonological representations, which is decisive for learning to read and write. The aim of this study is to compare the access to phonological representations of beginning readers and writers according to their level of anxiety, using an auditory lexical decision task. Eighty students aged 6 to 9 years completed the French version of the Revised Children's Manifest Anxiety Scale and were then divided into four anxiety groups according to their total score (Low, Median-Low, Median-High and High). Two sets of eighty-one stimuli (words and non-words) were presented auditorily to these students by means of a laptop computer. Stimulus words were selected according to their emotional valence (positive, negative, neutral). Students had to decide as quickly and accurately as possible whether the presented stimulus was a real word or not (lexical decision). Response times and accuracy were recorded automatically on each trial. We anticipated (a) longer response times for the Median-High and High anxiety groups in comparison with the two other groups, (b) faster response times for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups, (c) lower response accuracy for the Median-High and High anxiety groups in comparison with the two other groups, and (d) better response accuracy for negative-valence words in comparison with positive- and neutral-valence words only for the Median-High and High anxiety groups.
Concerning response times, our results showed no difference between the four groups. Furthermore, within each group, the average response times were very close regardless of emotional valence. However, group differences appeared when considering error rates: the Median-High and High anxiety groups made significantly more errors in lexical decision than the Median-Low and Low groups. Better response accuracy, however, was not found for negative-valence words in comparison with positive- and neutral-valence words in the Median-High and High anxiety groups. Thus, these results showed lower response accuracy for the above-median anxiety groups than for the below-median groups, but without specificity for negative-valence words. This study suggests that anxiety can negatively impact lexical processing in young students. Although lexical processing speed seems preserved, the accuracy of this processing may be altered in students with a moderate or high level of anxiety. This finding has important implications for the prevention of reading and spelling difficulties. Indeed, if anxiety affects the access to phonological representations during these learning processes, anxious students could be disturbed when they have to match phonological representations with new orthographic representations, because of less efficient lexical representations. This study should be continued in order to specify the impact of anxiety on basic school learning.
Keywords: anxiety, emotional valence, childhood, lexical access
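The error-rate comparison between the below-median and above-median anxiety groups could be checked with a simple two-proportion z-test. The counts below are hypothetical, invented only to illustrate the calculation; the study's actual error counts are not reported in the abstract:

```python
import numpy as np

# Hypothetical pooled lexical-decision error counts: 40 children per half,
# 81 trials each (invented numbers, not the study's data)
errors_low, trials_low = 140, 40 * 81    # Low + Median-Low groups
errors_high, trials_high = 230, 40 * 81  # Median-High + High groups

p1, p2 = errors_low / trials_low, errors_high / trials_high
p_pool = (errors_low + errors_high) / (trials_low + trials_high)

# Standard error of the difference in proportions under the pooled estimate
se = np.sqrt(p_pool * (1 - p_pool) * (1 / trials_low + 1 / trials_high))
z = (p2 - p1) / se
print(round(p1, 3), round(p2, 3), round(z, 2))  # |z| > 1.96 -> significant at 5%
```

In practice a mixed-effects model over trials and participants would be preferable, since errors within a child are not independent; the z-test is only the simplest first pass.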
Procedia PDF Downloads 288
620 A Geographical Information System Supported Method for Determining Urban Transformation Areas in the Scope of Disaster Risks in Kocaeli
Authors: Tayfun Salihoğlu
Abstract:
Following Law No. 6306 on the Transformation of Disaster Risk Areas, urban transformation in Turkey found its legal basis. In best practices all over the world, urban transformation has been shaped as part of comprehensive social programs through the discourses of renewing the economically, socially and physically degraded parts of the city, producing spaces resistant to earthquakes and other possible disasters, and creating a livable environment. In Turkish practice, a contradictory process is observed. This study aims to develop a method for better understanding the urban space in terms of disaster risks, in order to constitute a basis for decisions in the Kocaeli Urban Transformation Master Plan, which is being prepared by Kocaeli Metropolitan Municipality. The spatial unit used in the study is a 50x50 meter grid. In order to reflect the multidimensionality of urban transformation, three basic components with available spatial data in Kocaeli were identified: 'Problems in Built-up Areas', 'Disaster Risks arising from Geological Conditions of the Ground and Problems of Buildings', and 'Inadequacy of Urban Services'. Each component was weighted and scored for each grid cell. In order to delimit urban transformation zones, Optimized Outlier Analysis (Local Moran's I) in ArcGIS 10.6.1 was conducted to test the type of distribution (clustered or scattered) and its significance, using the weighted total score of each grid cell as the input feature. This analysis found that the weighted total scores did not cluster significantly at all grid cells in the urban space. The grid cells at which the input feature clustered significantly were exported as a new database for further mapping.
The Total Score Map reflects the significant clusters in terms of the weighted total scores of 'Problems in Built-up Areas', 'Disaster Risks arising from Geological Conditions of the Ground and Problems of Buildings' and 'Inadequacy of Urban Services'. The grid cells with the highest scores are the most likely candidates for urban transformation in this citywide study. To categorize urban space in terms of urban transformation, Grouping Analysis in ArcGIS 10.6.1 was conducted on the component scores of the significantly clustered grid cells. Based on pseudo statistics and box plots, the 6 groups with the highest F-statistics were extracted. As a result of mapping the groups, the 6 groups can be interpreted in a more meaningful manner in relation to the urban space. The method presented in this study can be extended as more spatial data become available. By integrating other data obtained during the planning process, this method can contribute to the research and decision-making processes of urban transformation master plans on a more consistent basis.
Keywords: urban transformation, GIS, disaster risk assessment, Kocaeli
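The Local Moran's I computation behind the Optimized Outlier Analysis step can be sketched on a toy grid. This simplified version uses rook contiguity, standardized scores, and an arbitrary cutoff instead of the permutation-based significance testing and false-discovery-rate correction that the ArcGIS tool applies:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 20x20 grid of weighted total scores with one planted high-score cluster
scores = rng.normal(50, 5, (20, 20))
scores[5:9, 5:9] += 30          # the "transformation candidate" zone

z = (scores - scores.mean()) / scores.std()

# Local Moran's I with rook contiguity: I_i = z_i * mean of neighbour z values
local_i = np.zeros_like(z)
for i in range(20):
    for j in range(20):
        nbrs = [z[a, b] for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if 0 <= a < 20 and 0 <= b < 20]
        local_i[i, j] = z[i, j] * np.mean(nbrs)

# A high z with a positive, large I marks a high-high cluster candidate
# (the cutoff of 1 is illustrative, not a significance test)
hotspot = (local_i > 1) & (z > 1)
print(hotspot[6:8, 6:8].all())   # the interior of the planted cluster is flagged
```

On the real 50x50 m grid the weights matrix would come from actual cell adjacency, and cells would be classified as high-high, low-low, or outliers only after permutation-based p-values, as in the tool the study used.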
Procedia PDF Downloads 120
619 Verification of Geophysical Investigation during Subsea Tunnelling in Qatar
Authors: Gary Peach, Furqan Hameed
Abstract:
The Musaimeer outfall tunnel is one of the longest storm water tunnels in the world, with a total length of 10.15 km. The tunnel will accommodate surface and rain water received from the drainage networks of 270 km of urban areas in southern Doha, with a pumping capacity of 19.7 m³/sec. The tunnel is excavated by a Tunnel Boring Machine (TBM) through the Rus Formation, Midra Shales, and Simsima Limestone. Water inflows at high pressure, complex mixed ground, and weaker ground strata prone to karstification, with vertical and lateral fractures connected to the sea bed, were also encountered during mining. In addition to the pre-tender geotechnical investigations, the Contractor carried out a supplementary offshore geophysical investigation in order to fine-tune the existing results of the geophysical and geotechnical investigations. Electrical resistivity tomography (ERT) and seismic reflection surveys were carried out offshore, and interpretations of rock mass conditions were made to provide an overall picture of underground conditions along the tunnel alignment. This allowed the critical tunnelling areas and cutter head interventions to be planned accordingly. Karstification was monitored with a non-intrusive probing system installed on the TBM. The Bore-tunnelling Electrical Ahead Monitoring (BEAM) system was installed at the cutter head and was able to predict the rock mass up to 3 tunnel diameters ahead of the cutter head. The BEAM system was provided with an online facility for real-time monitoring of rock mass conditions, which were then correlated with the rock mass conditions predicted during the interpretation phase of the offshore geophysical surveys. Further correlation was carried out using samples of the rock mass taken from tunnel face inspections and excavated material produced by the TBM. The BEAM data were continuously monitored to check the variations in resistivity and percentage frequency effect (PFE) of the ground.
This system provided information about rock mass conditions, potential karst risk, and the potential for water inflow. The BEAM system was found to be more than 50% accurate in picking up the difficult ground conditions and faults predicted in the geotechnical interpretative report before the start of tunnelling operations. Upon completion of the project, it was concluded that the combined use of different geophysical investigation results allows the execution stage to be carried out with more confidence and less geotechnical risk. The approach used for the prediction of rock mass conditions in the Geotechnical Interpretative Report (GIR) and in the seismic reflection and electrical resistivity tomography (ERT) surveys was concluded to be reliable, as the same rock mass conditions were encountered during tunnelling operations.
Keywords: tunnel boring machine (TBM), subsea, karstification, seismic reflection survey
Procedia PDF Downloads 245
618 Pyramid of Deradicalization: Causes and Possible Solutions
Authors: Ashir Ahmed
Abstract:
Generally, radicalization happens when a person's thinking and behaviour become significantly different from the way most members of their society and community view social issues and participate politically. Radicalization often leads to violent extremism, which refers to the beliefs and actions of people who support or use violence to achieve ideological, religious or political goals. Studies on radicalization negate the common myths that someone must be in a group to be radicalized or that anyone who experiences radical thoughts is a violent extremist. Moreover, it is erroneous to suggest that radicalization is always linked to religion. Generally, the common motives of radicalization have ideological, issue-based, ethno-nationalist or separatist underpinnings. Moreover, a number of factors further augment the chances of someone becoming radicalized and possibly choosing the path of violent extremism and terrorism. Since numerous (and sometimes quite different) factors contribute to radicalization and violent extremism, it is highly unlikely that a single solution could produce effective outcomes in dealing with radicalization, violent extremism and terrorism. The pathway to deradicalization, like the pathway to radicalization, is different for everyone. Considering the need for customized deradicalization strategies, this study proposes a multi-tier framework, called the 'pyramid of deradicalization', that first helps identify the stage at which an individual could be on the radicalization pathway and then proposes a strategy customized to that stage. The first tier addresses the broader community and proposes a 'universal approach', aiming to offer community-based design and delivery of educational programs to raise awareness and provide general information on possible factors leading to radicalization and their remedies.
The second tier focuses on the members of the community who are more vulnerable and are disengaged from the rest of the community. This tier proposes a ‘targeted approach’, reaching the vulnerable members of the community through early intervention, such as providing anonymous help lines where people feel confident and comfortable in seeking help without fearing the disclosure of their identity. The third tier focuses on people for whom there is clear evidence of moving toward extremism or becoming radicalized. People falling in this tier are best supported through an ‘interventionist approach’, which advocates community engagement and community policing, introduces deradicalization programmes to the targeted individuals, and looks after their physical and mental health issues. The fourth and last tier suggests strategies to deal with people who are actively breaking the law. The ‘enforcement approach’ involves strong law enforcement, fairness and accuracy in reporting radicalization events, unbiased treatment by law regardless of gender, race, nationality or religion, and strengthened family connections. It is anticipated that operationalizing the proposed framework (the ‘pyramid of deradicalization’) would help in categorising people according to their tendency to become radicalized and then offer an appropriate strategy to make them valuable and peaceful members of the community.
Keywords: deradicalization, framework, terrorism, violent extremism
Procedia PDF Downloads 269
617 Primary-Color Emitting Photon Energy Storage Nanophosphors for Developing High Contrast Latent Fingerprints
Authors: G. Swati, D. Haranath
Abstract:
Commercially available long-afterglow/persistent phosphors are proprietary materials, and hence the exact composition and phase responsible for their luminescent characteristics, such as initial intensity and afterglow luminescence time, are not known. Further, to generate various emission colors, commercially available persistent phosphors are physically blended with fluorescent organic dyes such as rhodamine, kiton and methylene blue. Blending phosphors with organic dyes results in complete color coverage of the visible spectrum; however, with time, such phosphors undergo thermal and photo-bleaching. This results in the loss of their true emission color. Hence, the current work is dedicated to studies on inorganic-based, thermally and chemically stable, primary-color emitting nanophosphors, namely SrAl2O4:Eu2+, Dy3+, (CaZn)TiO3:Pr3+, and Sr2MgSi2O7:Eu2+, Dy3+. The SrAl2O4:Eu2+, Dy3+ phosphor exhibits strong excitation in the UV and visible region (280-470 nm) with a broad emission peak centered at 514 nm, the characteristic emission of the parity-allowed 4f65d1→4f7 transitions of Eu2+ (8S7/2→2D5/2). Sunlight-excitable Sr2MgSi2O7:Eu2+, Dy3+ nanophosphors emit blue color (464 nm) with Commission Internationale de l’Éclairage (CIE) coordinates of (0.15, 0.13), a color purity of 74%, and an afterglow time of > 5 hours for dark-adapted human eyes. The (CaZn)TiO3:Pr3+ phosphor system possesses high color purity (98%) and emits intense, stable and narrow red emission at 612 nm due to intra-4f transitions (1D2 → 3H4), with an afterglow time of 0.5 hour. The unusual persistent luminescence of these nanophosphors supersedes background effects without losing sensitive information. These nanophosphors offer several advantages: visible-light excitation, negligible substrate interference, high-contrast bifurcation of the ridge pattern, and a non-toxic nature, while revealing the ridge details of the fingerprints.
Both level 1 and level 2 features of a fingerprint can be studied, which are useful for classification, indexing, comparison and personal identification. A facile methodology to extract high-contrast fingerprints on non-porous and porous substrates using a chemically inert, visible-light excitable, and nanosized phosphorescent label in the dark has been presented. The chemistry of the non-covalent physisorption interaction between the long-afterglow phosphor powder and sweat residue in fingerprints has been discussed in detail. Real-time fingerprint development on porous and non-porous substrates has also been performed. To conclude, apart from conventional dark-vision applications, the as-prepared primary-color emitting afterglow phosphors are potential candidates for developing high-contrast latent fingerprints.
Keywords: fingerprints, luminescence, persistent phosphors, rare earth
Procedia PDF Downloads 221
616 Partisan Agenda Setting in Digital Media World
Authors: Hai L. Tran
Abstract:
Previous research on agenda-setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized characteristics of online communication and digital news create more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The current undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong. One tends to move away from Zip codes that are incongruent and toward those that are more aligned with one’s ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend it into a coherent narrative that fits with a common agenda shared by others who think as they do and communicate with them about issues of public concern. In other words, individuals, through their media use, identify themselves with a group or community that they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and to be influenced by such selective exposure in deciding which issues are more relevant.
Consequently, the individualized focus of media choices impacts how audiences perceive political news coverage and what they know about political issues. The research project utilizes recent data from The American Trends Panel survey conducted by Pew Research Center to explore the nuanced nature of agenda setting at the individual level and amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research attempts to explore the media-public relationship from a bottom-up approach, considering the ability of active audience members to select among media in a larger process that entails agenda setting. It helps encourage agenda-setting scholars to further examine effects at the individual, rather than aggregate, level. In addition to theoretical contributions, the study’s findings are useful for media professionals in building and maintaining relationships with the audience, considering changes in market share due to the spread of digital and social media.
Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization
Procedia PDF Downloads 59
615 Suicide Wrongful Death: Standard of Care Problems Involving the Inaccurate Discernment of Lethal Risk When Focusing on the Elicitation of Suicide Ideation
Authors: Bill D. Geis
Abstract:
Suicide wrongful death forensic cases are the fastest-rising tort in mental health law. It is estimated that suicide-related cases have accounted for 15% of U.S. malpractice claims since 2006. Most suicide-related personal injury claims fall into the legal category of “wrongful death.” Though mental health experts may be called on to address a range of forensic questions in wrongful death cases, the central consultation that most experts provide is about the negligence element—specifically, the issue of whether the clinician met the clinical standard of care in assessing, treating, and managing the deceased person’s mental health care. Standards of care, varying from U.S. state to state, are broad and address what a reasonable clinician might do in a similar circumstance. This leaves it to forensic experts, in each case, to put forth a reasoned estimate of what the standard of care should have been in the specific case under litigation. Because the general state guidelines for the standard of care are broad, forensic experts are readily retained to provide scientific and clinical opinions about whether or not a clinician met the standard of care in their suicide assessment, treatment, and management of the case. In the past and in much of current practice, the assessment of suicide has centered on the elicitation of verbalized suicide ideation. Research in recent years, however, has indicated that the majority of persons who end their lives do not say they are suicidal at their last medical or psychiatric contact. Near-term risk assessment—one that goes beyond verbalized suicide ideation—is needed. Our previous research employed structural equation modeling to predict lethal suicide risk: eight negative thought patterns (feeling like a burden on others, hopelessness, self-hatred, etc.) mediated by nine transdiagnostic clinical factors (mental torment, insomnia, substance abuse, PTSD intrusions, etc.)
were combined to predict acute lethal suicide risk. This structural equation model, the Lethal Suicide Risk Pattern (LSRP), Acute model, had excellent goodness of fit [χ2(df) = 94.25(47)***, CFI = .98, RMSEA = .05, 90% CI = .03-.06, p(RMSEA = .05) = .63, AIC = 340.25, ***p < .001]. A further SEM analysis was completed for this paper, adding a measure of acute suicide ideation to the previous model. Acceptable prediction-model fit was no longer achieved [χ2(df) = 3.571, CFI = .953, RMSEA = .075, 90% CI = .065-.085, AIC = 529.550]. This finding suggests that, in this additional study, immediate verbalized suicide ideation information was unhelpful in the assessment of lethal risk. The LSRP and other dynamic, near-term risk models (such as the Acute Suicide Affective Disorder model and the Suicide Crisis Syndrome model), which go beyond elicited suicide ideation, need to be incorporated into current clinical suicide assessment training. Without this training, the standard of care for suicide assessment is out of sync with current research, an emerging dilemma for the forensic evaluation of suicide wrongful death cases.
Keywords: forensic evaluation, standard of care, suicide, suicide assessment, wrongful death
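As a rough consistency check on fit statistics of the kind reported above, RMSEA can be recomputed from the model chi-square. A minimal sketch follows; the sample size is an assumed figure chosen only to illustrate the formula, since the abstract does not report N.

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation from a model chi-square,
    degrees of freedom, and sample size n."""
    return math.sqrt(max(0.0, chi2 - df) / (df * (n - 1)))

# Reported acute-model fit: chi2 = 94.25 on df = 47; n = 403 is hypothetical.
print(round(rmsea(94.25, 47, 403), 3))
```

With these assumed inputs the formula reproduces an RMSEA near the reported .05; a chi-square at or below its degrees of freedom yields an RMSEA of zero.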
Procedia PDF Downloads 68
614 The Social Ecology of Serratia entomophila: Pathogen of Costelytra giveni
Authors: C. Watson, T. Glare, M. O'Callaghan, M. Hurst
Abstract:
The endemic New Zealand grass grub (Costelytra giveni, Coleoptera: Scarabaeidae) is an economically significant grassland pest in New Zealand. Due to its impact on production within the agricultural sector, one of New Zealand's primary industries, several methods are being used to either control or prevent the establishment of new grass grub populations in pasture. One such method involves the use of a biopesticide based on the bacterium Serratia entomophila. This species is one of the causative agents of amber disease, a chronic disease of the larvae which results in death via septicaemia after approximately 2 to 3 months. The ability of S. entomophila to cause amber disease is dependent upon the presence of the amber disease associated plasmid (pADAP), which encodes the key virulence determinants required for the establishment and maintenance of the disease. Following the collapse of grass grub populations within the soil, resulting from either natural population build-up or application of the bacteria, non-pathogenic plasmid-free Serratia strains begin to predominate within the soil. Whilst the interactions between S. entomophila and grass grub larvae are well studied, less is known about the interactions between plasmid-bearing and plasmid-free strains, particularly the potential impact of these interactions upon the efficacy of an applied biopesticide. Using a range of constructed strains with antibiotic tags, in vitro (broth culture) and in vivo (soil and larvae) experiments were conducted using inoculants comprised of differing ratios of isogenic pathogenic and non-pathogenic Serratia strains, enabling the relative growth of pADAP+ and pADAP- strains under competition conditions to be assessed.
In nutrient-rich broth culture, the non-pathogenic pADAP- strain outgrew the pathogenic pADAP+ strain by day 3 when inoculated in equal quantities, and by day 5 when applied as the minority inoculant; however, there was an overall gradual decline in the number of viable bacteria for both strains over a 7-day period. Similar results were obtained in additional experiments using the same strains and continuous broth cultures re-inoculated at 24-hour intervals, although in these cultures the viable cell count did not diminish over the 7-day period. When the same ratios were assessed in soil microcosms with limited available nutrients, the strains remained relatively stable over a 2-month period. Additionally, in vivo grass grub co-infection assays using the same ratios of tagged Serratia strains revealed similar results to those observed in the soil, but there was also evidence of horizontal transfer of pADAP from the pathogenic to the non-pathogenic strain within the larval gut after a period of 4 days. Whilst the influence of competition is more apparent in broth cultures than within the soil or larvae, further testing is required to determine whether this competition between pathogenic and non-pathogenic Serratia strains has any influence on efficacy and disease progression, and how this may impact the ability of S. entomophila to cause amber disease within grass grub larvae when applied as a biopesticide.
Keywords: biological control, entomopathogen, microbial ecology, New Zealand
Procedia PDF Downloads 156
613 Fuzzy Availability Analysis of a Battery Production System
Authors: Merve Uzuner Sahin, Kumru D. Atalay, Berna Dengiz
Abstract:
In today’s competitive market, there are many alternative products that can be used in a similar manner and for a similar purpose. Therefore, the utility of the product is an important issue for the preferability of the brand. This utility can be measured in terms of functionality, durability, and reliability, all of which are affected by the system’s capabilities. Reliability is an important system design criterion for manufacturers seeking high availability. Availability is the probability that a system (or a component) is operating properly and performing its function at a specific point in time or over a specific period of time. System availability provides valuable input for estimating the production rate the company needs to realize its production plan. When considering only the corrective maintenance downtime of the system, mean time between failures (MTBF) and mean time to repair (MTTR) are used to obtain system availability. The MTBF and MTTR values are also important measures for reliability engineers and practitioners working on a system, who use them to improve system performance by adopting suitable maintenance strategies. Conventional availability analysis requires that the failure and repair time probability distributions of each component in the system be known. However, companies generally do not have statistics or quality control departments to store such a large amount of data. Real events or situations are then described deterministically instead of using stochastic data for the complete description of real systems. Fuzzy set theory is an alternative approach used to analyze the uncertainty and vagueness in real systems. The aim of this study is to present a novel approach to compute system availability by representing MTBF and MTTR as fuzzy numbers. Based on experience with the system, three different spreads of MTBF and MTTR, namely 15%, 20% and 25%, were chosen to obtain the lower and upper limits of the fuzzy numbers.
To the best of our knowledge, the proposed method is the first application that uses fuzzy MTBF and fuzzy MTTR for fuzzy system availability estimation. The method is easy for practitioners in industry to apply to any repairable production system, and it enables reliability engineers, managers and practitioners to analyze system performance in a more consistent and logical manner based on fuzzy availability. This paper presents a real case study of a repairable multi-stage production line in a lead-acid battery production factory in Turkey, focusing on the wet-charging battery process, which has a higher production level than the other battery types. In this system, components can exist in only two states, working or failed, and it is assumed that when a component fails, it becomes as good as new after repair. Compared with classical methods, using fuzzy set theory to obtain intervals for these measures is very useful for system managers and practitioners analyzing system qualifications under their working conditions, since much more detailed information about system characteristics is obtained.
Keywords: availability analysis, battery production system, fuzzy sets, triangular fuzzy numbers (TFNs)
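The interval construction described above can be sketched with simple triangular-fuzzy-number arithmetic. The MTBF and MTTR figures and the 20% spread below are illustrative assumptions, not values from the case study.

```python
def tfn(mode, spread):
    """Triangular fuzzy number (lower, mode, upper) from a +/- spread fraction,
    e.g. spread = 0.20 for a 20% spread around the observed mode."""
    return (mode * (1 - spread), mode, mode * (1 + spread))

def fuzzy_availability(mtbf, mttr):
    """A = MTBF / (MTBF + MTTR) evaluated on TFNs.
    The pessimistic bound pairs the lowest MTBF with the highest MTTR,
    and the optimistic bound pairs the highest MTBF with the lowest MTTR."""
    lower = mtbf[0] / (mtbf[0] + mttr[2])
    mode = mtbf[1] / (mtbf[1] + mttr[1])
    upper = mtbf[2] / (mtbf[2] + mttr[0])
    return (lower, mode, upper)

# Hypothetical stage data: MTBF = 200 h, MTTR = 8 h, 20% spread.
availability = fuzzy_availability(tfn(200.0, 0.20), tfn(8.0, 0.20))
print(tuple(round(a, 4) for a in availability))
```

The resulting triple gives managers an availability interval rather than a single point estimate, which is the practical benefit the abstract describes.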
Procedia PDF Downloads 224
612 Effect of Low to Moderate Altitude on Football Performance: An Analysis of Thirteen Seasons in the South African Premier Soccer League
Authors: Khatija Bahdur, Duane Dell’Oca
Abstract:
There is limited information on how altitude impacts performance in team sport. Most altitude research in football has been conducted at high elevation (> 2500 m), leaving a gap in understanding of whether low to moderate altitude affects performance. The South African Premier Soccer League (PSL) fixtures entail matches played at altitudes from sea level to 1700 m above mean sea level. Although coaches highlight the effect of altitude on performance outcomes in matches, further research is needed to establish whether altitude does impact match results. Greater insight into if and how altitude impacts performance in the PSL will assist coaches in deciding whether and how to incorporate altitude in their planning. The purpose of this study is to fill this gap through a retrospective analysis of PSL matches. This quantitative study is based on a descriptive analysis of 181 PSL matches involving one team based at sea level, taking place over a period of thirteen seasons. The following data were obtained: the altitude at which the match was played, the match result, the timing of goals, and the timing of substitutions. Altitude was classified in two ways: inland (> 500 m) versus coastal (< 500 m), and further subdivided into narrower categories (< 500 m, 500-1000 m, 1000-1300 m, 1300-1500 m, > 1500 m). The analysis included a two-sample t-test to determine differences in total goals scored and timing of goals between inland and coastal matches, and the chi-square test to identify the significance of altitude for match results. The level of significance was set at the alpha level of 0.05. Match results were significantly affected by altitude and level of altitude, with inland teams most likely to win when playing at inland venues (p=0.000). The proportion of draws was slightly higher at the coast. At altitudes between 500-1000 m, 1300-1500 m, and 1500-1700 m, a greater percentage of matches were won by coastal teams as opposed to being drawn.
The timing of goals varied based on the team’s base altitude and the match elevation. The most significant differences were between 36-40 minutes (p=0.023), 41-45 minutes (p=0.000) and 50-65 minutes (p=0.000). When breaking down inland teams’ matches into the different altitude categories, greater differences were highlighted. Inland teams scored more goals per minute between 10-20 minutes (p=0.009), 41-45 minutes (p=0.003) and 50-65 minutes (p=0.015). The total number of goals scored per match also differed significantly across altitudes, for both inland teams (p=0.000) and coastal teams (p=0.006). Coastal teams made significantly more substitutions when playing at altitude (p=0.034), although there were no significant differences when comparing the different altitude categories. The timing of all three substitutions, however, did vary significantly at the different altitudes. There were no significant differences in the timing or number of substitutions for inland teams. Match results and timing of goals are influenced by altitude, with differences between levels of altitude also playing a role. The trends indicate that inland teams win more matches when playing at altitude against coastal teams, and they score more goals just prior to half-time and in the first quarter of the second half.
Keywords: coastal teams, inland teams, timing of goals, results, substitutions
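A chi-square test of match results against venue type, of the kind used above, can be sketched with a small hand-rolled computation. The win/draw/loss counts below are invented for illustration and are not the study's data.

```python
# Hypothetical win/draw/loss counts for one team at coastal vs. inland venues.
observed = {
    "coastal": [30, 20, 15],  # wins, draws, losses
    "inland":  [18, 22, 35],
}

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table
    given as a dict of row label -> list of cell counts."""
    rows = list(table.values())
    row_totals = [sum(r) for r in rows]
    col_totals = [sum(c) for c in zip(*rows)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(rows):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

stat = chi_square(observed)
df = (len(observed) - 1) * (len(observed["coastal"]) - 1)  # (r-1)(c-1) = 2
# Compare against the chi-square critical value at alpha = 0.05 for df = 2 (5.991).
print(round(stat, 2), df, stat > 5.991)
```

In practice one would use a library routine such as `scipy.stats.chi2_contingency` for the p-value; the manual version is shown only to make the test's arithmetic explicit.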
Procedia PDF Downloads 131
611 Teaching Timber: The Role of the Architectural Student and Studio Course within an Interdisciplinary Research Project
Authors: Catherine Sunter, Marius Nygaard, Lars Hamran, Børre Skodvin, Ute Groba
Abstract:
Globally, the construction and operation of buildings contribute up to 30% of annual greenhouse gas emissions. In addition, the building sector is responsible for approximately a third of global waste. In this context, the utilization of renewable resources in buildings, especially materials that store carbon, will play a significant role in the growing city. These are two reasons for introducing wood as a building material of growing relevance. A third is the potential economic value in countries with a forest industry that is not currently used to capacity. In 2013, a four-year interdisciplinary research project titled “Wood Be Better” was created, with the principal goal of producing and publicising knowledge that would facilitate increased use of wood in buildings in urban areas. The research team consisted of architects, engineers, wood technologists and mycologists, from both research institutions and industrial organisations. Five structured work packages were included in the initial research proposal. Work package 2 was titled “Design-based research” and proposed using architecture master courses as laboratories for systematic architectural exploration. The aim was twofold: to provide students with an interdisciplinary team of experts from consultancies and producers, as well as teachers and researchers, who could offer the latest information on wood technologies; and at the same time to have the studio course test the effects of the use of wood on the functional, technical and tectonic quality of different architectural projects on an urban scale, providing results that could be fed back into the research material. The aim of this article is to examine the successes and failures of this pedagogical approach in an architecture school, as well as the opportunities for greater integration between academic research projects, industry experts and studio courses in the future.
This will be done through a set of qualitative interviews with researchers, teaching staff and students of the studio courses held each semester since spring 2013. These will investigate the value of the various experts involved in the course; the different themes of each course; the response to the urban scale, architectural form and construction detail; the effect of working with the goals of a research project; and the value of the studio projects to the research. In addition, six sample projects will be presented as case studies. These will show how the projects related to the research and could be collected and further analysed, innovative solutions that were developed during the course, different architectural expressions that were enabled by timber, and how projects were used as an interdisciplinary testing ground for integrated architectural and engineering solutions between the participating institutions. The conclusion will reflect on the original intentions of the studio courses, the opportunities and challenges faced by students, researchers and teachers, the educational implications, and the transparent and inclusive discourse between the architectural researcher, the architecture student and the interdisciplinary experts.
Keywords: architecture, interdisciplinary, research, studio, students, wood
Procedia PDF Downloads 311
610 Motivational Profiles of the Entrepreneurial Career in Spanish Businessmen
Authors: Magdalena Suárez-Ortega, M. Fe. Sánchez-García
Abstract:
This paper focuses on the analysis of the motivations that lead people to start and consolidate a business. It is addressed within the framework of planned behavior theory, which recognizes the importance of the social environment and cultural values, both in the decision to start a business and in business consolidation. Similarly, it is also based on theories of career development, which emphasize the importance of career management competencies and their connections to other vital aspects of people's lives, including their roles within their families and other personal activities. This connects directly with the impact of entrepreneurship on the career and the professional-personal project of each individual. This study is part of the project titled Career Design and Talent Management (Ministry of Economy and Competitiveness of Spain, State Plan 2013-2016 Excellence, Ref. EDU2013-45704-P). The aim of the study is to identify and describe entrepreneurial competencies and motivational profiles in a sample of 248 Spanish entrepreneurs, considering both the consolidated profile and the profile in transition (n = 248). In order to obtain the information, the Questionnaire of Motivation and Conditioners of the Entrepreneurial Career (MCEC) was applied. It consists of 67 items and includes four scales (E1, conflicts in conciliation; E2, satisfaction with the career path; E3, motivations to undertake; E4, guidance needs). Cluster analysis (a mixed method combining k-means clustering with a hierarchical method) was carried out, characterizing the group profiles according to the categorical variables (chi-square, p = 0.05) and the quantitative variables (ANOVA). The results have allowed us to characterize three motivational profiles according to motivation, the degree of conciliation between personal and professional life, the degree of conflict in conciliation, levels of career satisfaction, and orientation needs (in the entrepreneurial project and life-career).
The first profile is formed by extrinsically motivated entrepreneurs who are professionally satisfied and experience no conflict between vital roles. The second profile acts with intrinsic motivation, also associated with family models, and although it shows satisfaction with the professional career, it experiences high conflict between family and professional life. The third is composed of entrepreneurs with high extrinsic motivation and professional dissatisfaction who, at the same time, feel conflict in their professional life as an effect of personal roles. Ultimately, the analysis has allowed us to link the kinds of entrepreneurs to different levels of motivation, satisfaction, needs, and the articulation of professional and personal life, showing characterizations associated with the use of time for leisure and the care of the family. No associations related to gender, age, activity sector, environment (rural, urban, virtual), or the use of time for domestic tasks were identified. The model obtained and its implications for the design of training and orientation actions for entrepreneurs are also discussed.
Keywords: motivation, entrepreneurial career, guidance needs, life-work balance, job satisfaction, assessment
Procedia PDF Downloads 301
609 Identifying the Effects of the Rural Demographic Changes in the Northern Netherlands: A Holistic Approach to Create Healthier Environment
Authors: A. R. Shokoohi, E. A. M. Bulder, C. Th. van Alphen, D. F. den Hertog, E. J. Hin
Abstract:
The northern region of the Netherlands has beautiful landscapes, a nice diversity of green and blue areas, and dispersed settlements. However, some recent population changes can become threats to health and wellbeing in these areas. The rural areas in the three northern provinces, Groningen, Friesland, and Drenthe, see youngsters leave the region, for which reason they are aging faster than other regions in the Netherlands. As a result, some villages have faced major population decline, leading to the loss of facilities and amenities and a decrease in accessibility and social cohesion. Those who still live in these villages are relatively old, low educated, and on low incomes. To develop a deeper understanding of the health status of the people living in these areas, and to help them improve their living environment, the GO!-method is being applied in this study. This method was developed by the National Institute for Public Health and the Environment (RIVM) of the Netherlands and is inspired by Machteld Huber's broad definition of health: the ability to adapt and to self-manage in the face of the physical, emotional and social challenges of life, while paying extra attention to vulnerable groups. A healthy living environment is defined as an environment that residents find pleasant and that encourages and supports healthy behavior. The GO!-method integrates six domains that constitute a healthy living environment: health and lifestyle, facilities and development, safety and hygiene, social cohesion and active citizens, green areas, and air and noise pollution. First of all, this method identifies opportunities for a healthier living environment using existing information and the perceptions of residents and other local stakeholders, in order to strengthen social participation and quality of life in these rural areas.
Second, this approach connects the identified opportunities with available, effective, evidence-based interventions in order to develop an action plan from the residents' and local authorities' perspective, which will help them make their municipalities healthier and more resilient. To the best of our knowledge, this method is being used for the first time in rural areas, in close collaboration with the residents and local authorities of the three provinces, to create a sustainable process and stimulate social participation. Our paper will present the outcomes of the first phase of this project, conducted in collaboration with the municipality of Westerkwartier, located in the northwest of the province of Groningen. It will describe the current situation and identify local assets, opportunities, and policies relating to a healthier environment, as well as the needs and challenges involved in achieving these goals. The preliminary results show that rural demographic changes in the northern Netherlands have negative impacts on service provision and social cohesion, and that there is a need to understand this complicated situation and improve the quality of life in those areas.
Keywords: population decline, rural areas, healthy environment, Netherlands
Procedia PDF Downloads 96
608 Effect of Phenolic Acids on Human Saliva: Evaluation by Diffusion and Precipitation Assays on Cellulose Membranes
Authors: E. Obreque-Slier, F. Orellana-Rodríguez, R. López-Solís
Abstract:
Phenolic compounds are secondary metabolites present in some foods, such as wine. Polyphenols comprise two main groups: flavonoids (anthocyanins, flavanols, and flavonols) and non-flavonoids (stilbenes and phenolic acids). Phenolic acids are low-molecular-weight non-flavonoid compounds that are usually grouped into benzoic acids (gallic, vanillic and protocatechuic acids) and cinnamic acids (ferulic, p-coumaric and caffeic acids). Likewise, tannic acid is an important polyphenol constituted mainly of gallic acid. Phenolic compounds are responsible for important properties of foods and drinks, such as color, aroma, bitterness, and astringency. Astringency is a drying, roughing, and sometimes puckering sensation that is experienced on the various oral surfaces during or immediately after tasting foods. Astringency perception has been associated with interactions between flavanols present in some foods and salivary proteins. Despite the quantitative relevance of phenolic acids in foods and beverages, there is no information about their effect on salivary proteins and, consequently, on the sensation of astringency. The objective of this study was to assess the interaction of several phenolic acids (gallic, vanillic, protocatechuic, ferulic, p-coumaric and caffeic acids) with saliva. Tannic acid was used as a control. Solutions of each phenolic acid (5 mg/mL) were mixed with human saliva (1:1 v/v). After incubation for 5 min at room temperature, 15-μL aliquots of the mixtures were dotted on a cellulose membrane and allowed to diffuse. The dry membrane was fixed in 50 g/L trichloroacetic acid, rinsed in 800 mL/L ethanol, stained for protein with Coomassie blue for 20 min, destained with several rinses of 73 g/L acetic acid, and dried under a heat lamp. Both the diffusion area and the stain intensity of the protein spots served as semi-qualitative estimates of the protein-tannin interaction (diffusion test).
The rest of the whole saliva-phenol solution mixtures from the diffusion assay were centrifuged, and 15-μL aliquots of each supernatant were dotted on a cellulose membrane, allowed to diffuse and processed for protein staining, as indicated above. In this latter assay, reduced protein staining was taken as indicative of protein precipitation (precipitation test). The diffusion of the salivary protein was restricted by the presence of each phenolic acid (anti-diffusive effect), while tannic acid did not alter the diffusion of the salivary protein. By contrast, phenolic acids did not provoke precipitation of the salivary protein, while tannic acid produced precipitation of salivary proteins. In addition, binary mixtures (mixtures of two components) of various phenolic acids with gallic acid restricted the diffusion of saliva, an effect similar to that observed with the corresponding individual phenolic acids. By contrast, binary mixtures of phenolic acids with tannic acid, as well as tannic acid alone, did not affect the diffusion of the saliva but provoked an evident precipitation. In summary, phenolic acids showed a relevant interaction with the salivary proteins, suggesting that these wine compounds can also contribute to the sensation of astringency. Keywords: astringency, polyphenols, tannins, tannin-protein interaction
Procedia PDF Downloads 246
607 Ganga Rejuvenation through Forestation and Conservation Measures in Riverscape
Authors: Ombir Singh
Abstract:
In spite of the religious and cultural pre-dominance of the river Ganga in the Indian ethos, fragmentation and degradation of the river have continued down the ages. Recognizing the national concern over the environmental degradation of the river and its basin, the Ministry of Water Resources, River Development & Ganga Rejuvenation (MoWR, RD&GR), Government of India, has initiated a number of pilot schemes for the rejuvenation of the river Ganga under the ‘Namami Gange’ Programme. Considering the diversity, complexity, and intricacies of forest ecosystems, the pivotal multiple functions they perform, and their inter-connectedness with highly dynamic river ecosystems, the ministry has planned forestry interventions all along the river Ganga from its origin at Gaumukh, Uttarakhand, to its mouth at Ganga Sagar, West Bengal. To that end, the Forest Research Institute (FRI), in collaboration with the National Mission for Clean Ganga (NMCG), has prepared a Detailed Project Report (DPR) on Forestry Interventions for Ganga. The Institute adopted an extensive consultative process at the national and state levels, involving various stakeholders relevant in the context of the river Ganga, and employed a science-based methodology, including the use of remote sensing and GIS technologies for geo-spatial analysis, modeling and prioritization of sites for the proposed forestation and conservation interventions. Four sets of field data formats were designed to obtain field-based information for the forestry interventions, mainly plantations and conservation measures along the river course. In response, five stakeholder State Forest Departments submitted more than 8,000 data sheets to the Institute. To analyze the voluminous field data received from the five participating states, the Institute also developed software to collate and analyze the data and generate reports on the proposed sites in the Ganga basin. 
FRI has developed potential plantation and treatment models for the proposed forestry and other conservation measures in the three major types of landscape components visualized in the Ganga riverscape: (i) natural, (ii) agricultural, and (iii) urban landscapes. The suggested plantation models broadly vary between the Uttarakhand Himalayas and the Ganga Plains in the five participating states. Besides extensive plantations in the three types of landscapes within the riverscape, various conservation measures such as soil and water conservation, riparian wildlife management, wetland management, bioremediation and bio-filtration, and supporting activities such as policy and law intervention, concurrent research, monitoring and evaluation, and mass awareness campaigns have been envisioned in the DPR. The DPR also incorporates the details of the implementation mechanism and the budget provisioned for the different components of the project, besides the state-wise allocation of budget to the five implementing agencies, national partner organizations and the Nodal Ministry. Keywords: conservation, Ganga, river, water, forestry interventions
Procedia PDF Downloads 149
606 Reducing the Computational Cost of a Two-way Coupling CFD-FEA Model via a Multi-scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity is a key performance parameter for cladding products, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure the customer's safety but give little information about the critical behaviours that can help develop new materials. Numerical modelling is a tool that can help investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem, as it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using both Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. In our recent work with Tata Steel U.K., using a two-way coupling methodology to determine fire performance, it has been shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous works demonstrated the limitations of the current version of the program, the main limitation being the computational cost of modelling three Trisomet panels, totalling an area of 9 m². The computational cost increases substantially with the intention to scale up to an LPS 1181-1 test, which includes a total panel surface area of 200 m². The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data. 
One avenue explored is a multi-scale approach in the form of Reduced Order Modelling (ROM). The approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse. Keywords: fire testing, numerical coupling, sandwich panels, thermofluids
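The iterative two-way exchange described above can be sketched in a few lines. This is a minimal, purely illustrative loop with toy stand-in relations for the fluid and structural solvers; the coefficients and functions below are assumptions for illustration, not the FDS or Abaqus interfaces:

```python
# Toy sketch of a two-way CFD-FEA coupling loop (illustrative only; the
# solver functions are stand-ins, not the FDS/Abaqus APIs).

def fluid_step(gap_mm):
    """Stand-in CFD step: wall temperature rises as the panel joint gap opens."""
    return 300.0 + 25.0 * gap_mm  # degrees C, assumed relation

def structural_step(wall_temp_c):
    """Stand-in FEA step: thermal expansion opens the joint gap."""
    return 0.004 * (wall_temp_c - 20.0)  # mm, assumed relation

def couple(max_iters=50, tol=1e-6):
    """Iterate fluid -> structure until the exchanged gap stops changing."""
    gap = 0.0
    for i in range(max_iters):
        temp = fluid_step(gap)           # export the thermal field to the FEA side
        new_gap = structural_step(temp)  # import the deformation back to the CFD side
        if abs(new_gap - gap) < tol:     # converged within this coupling step
            return temp, new_gap, i + 1
        gap = new_gap
    return temp, gap, max_iters

temp, gap, iters = couple()
```

The loop stops when the exchanged quantity no longer changes between iterations, which is the same per-step stopping criterion a coupled CFD-FEA driver applies.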
Procedia PDF Downloads 79
605 Sentiment Analysis on University Students’ Evaluation of Teaching and Their Emotional Engagement
Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís
Abstract:
Teaching practices have been widely studied in relation to students' outcomes, positioning themselves as one of their strongest catalysts and influencing students' emotional experiences. In the higher education context, teachers become even more crucial, as many students ground their decisions on which courses to enroll in on the opinions and ratings of teachers from other students. Unfortunately, sometimes universities do not provide the personal, social, and academic stimulation students demand to be actively engaged. To evaluate their teachers, universities often rely on students' evaluations of teaching (SET) collected via Likert-scale surveys. Despite its usefulness, such a method has been questioned in terms of validity and reliability. Alternatively, researchers can rely on qualitative answers to open-ended questions. However, the unstructured nature of the answers and the large amount of information obtained require an overwhelming amount of work. The present work presents an alternative approach to analysing such data: sentiment analysis (SA). To the best of our knowledge, no previous research has included results from SA in an explanatory model to test how students' sentiments affect their emotional engagement in class. The sample of the present study included a total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) from the Educational Sciences faculty of a public university in Spain. Data collection took place during the academic year 2021-2022. Students accessed an online questionnaire using a QR code. They were asked to answer the following open-ended question: "If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?". Sentiment analysis was performed using Microsoft's pre-trained model. The reliability of the measure was estimated by comparing the tool's coding with that of one of the researchers, who coded all answers independently. 
Cohen's kappa and the average pairwise percent agreement were estimated with ReCal2. Cohen's kappa was .68, and the agreement reached was 90.8%, both considered satisfactory. To test the hypothesized relations among SA and students' emotional engagement, a structural equation model (SEM) was estimated. Results demonstrated a good fit to the data: RMSEA = .04, SRMR = .03, TLI = .99, CFI = .99. Specifically, the results showed that students' sentiment regarding their teachers' teaching positively predicted their emotional engagement (β = .16, 95% CI [.02, .30]). In other words, when students' opinion of their instructors' teaching practices is positive, students are more likely to engage emotionally in the subject. Altogether, the results show a promising future for sentiment analysis techniques in the field of education. They suggest the usefulness of this tool when evaluating relations among teaching practices and student outcomes. Keywords: sentiment analysis, students' evaluation of teaching, structural equation modelling, emotional engagement
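For readers unfamiliar with the inter-coder statistic reported above, a minimal pure-Python sketch of Cohen's kappa follows. It is illustrative only, with invented example labels; it is not the ReCal2 tool or Microsoft's model used in the study:

```python
# Illustrative pure-Python Cohen's kappa for two coders' labels
# (e.g., positive/negative sentiment). Example data is invented.

def cohens_kappa(coder_a, coder_b):
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    labels = set(coder_a) | set(coder_b)
    # Observed agreement: proportion of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's label frequencies.
    p_e = sum((coder_a.count(l) / n) * (coder_b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Example: the coders agree on 3 of 4 items.
kappa = cohens_kappa(["pos", "pos", "neg", "neg"],
                     ["pos", "neg", "neg", "neg"])  # -> 0.5
```

Kappa corrects the raw percent agreement (here 75%) for the agreement expected by chance, which is why the two statistics reported above differ.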
Procedia PDF Downloads 85
604 Transducers for Measuring Displacements of Rotating Blades in Turbomachines
Authors: Pavel Prochazka
Abstract:
The study deals with transducers for measuring the vibration displacements of rotating blade tips in turbomachines. In order to prevent major accidents with extensive economic consequences, there is an urgent need for every low-pressure steam turbine stage to be equipped with a modern non-contact measuring system providing information on blade loading, damage and residual lifetime under operation. The requirement of measuring the vibration and static characteristics of steam turbine blades therefore calls for the development and operational verification of new types of sensors as well as measuring principles and methods. The task is really demanding: to measure displacements of blade tips with a resolution of the order of 10 μm at speeds up to 750 m/s, humidity of 100% and temperatures up to 200 °C. While gas turbines primarily use capacitive and optical transducers, these transducers cannot be used in steam turbines. The reason is moisture vapor, droplets of condensing water and dirt, which disable the function of the sensors. Therefore, the most feasible approach was to focus on research into electromagnetic sensors featuring promising characteristics for the given blade materials in a steam environment. The following types of sensors have been developed and both experimentally and theoretically studied in the Institute of Thermodynamics, Academy of Sciences of the Czech Republic: eddy-current, Hall effect, inductive and magnetoresistive. Eddy-current transducers demand a small distance of 1 to 2 mm and change their properties in the harsh environment of steam turbines. Hall effect sensors have relatively low sensitivity and high values of offset, drift, and especially noise. Induction sensors do not require any supply current and have a simple construction. The magnitude of their output voltage depends on the velocity of the measured body and concurrently on the varying magnetic induction, so they cannot be used statically. 
Magnetoresistive sensors are formed by magnetoresistors arranged into a Wheatstone bridge. Supplying the sensor from a current source provides better linearity. The MR sensors can be used permanently at temperatures up to 200 °C at lower values of the supply current of about 1 mA. The frequency range of 0 to 300 kHz is an order of magnitude higher than that of the Hall effect and induction sensors. The frequency band starts at zero frequency, which is very important because the sensors can be calibrated statically. The MR sensors feature high sensitivity and low noise, and the symmetry of the bridge arrangement leads to a high common mode rejection ratio and the suppression of disturbances, which is important, especially in industrial applications. Magnetoresistive transducers thus provide a range of excellent properties indicating their suitability for displacement measurements of rotating blades in turbomachines. Keywords: turbines, blade vibration, blade tip timing, non-contact sensors, magnetoresistive sensors
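As a rough illustration of the bridge arrangement described above, the following sketch computes the output of a voltage-fed full bridge with all four magnetoresistive arms active. The supply voltage and resistance values are assumed for illustration only, and the abstract itself notes that a current-source supply gives better linearity than this voltage-fed simplification:

```python
# First-order sketch of a magnetoresistive Wheatstone bridge output
# (assumed component values; all four arms active, alternating +/- dR).

def bridge_output(v_supply, r_nominal, delta_r):
    """Exact voltage-divider computation for a full active bridge."""
    r_up = r_nominal + delta_r    # arms whose resistance increases with field
    r_down = r_nominal - delta_r  # arms whose resistance decreases
    v_left = v_supply * r_down / (r_up + r_down)   # mid-node of first half-bridge
    v_right = v_supply * r_up / (r_up + r_down)    # mid-node of second half-bridge
    return v_right - v_left

# 1 kOhm arms, 0.1% resistance change, 5 V supply.
v_out = bridge_output(5.0, 1000.0, 1.0)  # -> 0.005 V, i.e. v_supply * dR / R
```

The symmetric arrangement means common-mode changes in all four arms (e.g., from temperature) cancel at the output, which is the origin of the high common mode rejection mentioned above.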
Procedia PDF Downloads 129
603 Geospatial Modeling Framework for Enhancing Urban Roadway Intersection Safety
Authors: Neeti Nayak, Khalid Duri
Abstract:
Despite the many advances made in transportation planning, the number of injuries and fatalities in the United States which involve motorized vehicles near intersections remains largely unchanged year over year. Data from the National Highway Traffic Safety Administration for 2018 indicate that accidents involving motorized vehicles at traffic intersections accounted for 8,245 deaths and 914,811 injuries. Furthermore, collisions involving pedal cyclists killed 861 people (38% at intersections) and injured 46,295 (68% at intersections), while accidents involving pedestrians claimed 6,247 lives (25% at intersections) and injured 71,887 (56% at intersections), the highest tallies registered in nearly 20 years. Some of the causes attributed to the rising number of accidents relate to increasing populations and the associated changes in land and traffic usage patterns, insufficient visibility conditions, and inadequate applications of traffic controls. Intersections that were initially designed with a particular land use pattern in mind may be rendered obsolete by subsequent developments. Many accidents involving pedestrians occur at locations which should have been designed with safe crosswalks. Conventional solutions for evaluating intersection safety often require costly deployment of engineering surveys and analysis, which limits the capacity of resource-constrained administrations to satisfy their community's needs for safe roadways adequately, effectively relegating mitigation efforts for high-risk areas to post-incident responses. This paper demonstrates how geospatial technology can identify high-risk locations and evaluate the viability of specific intersection management techniques. GIS is used to simulate relevant real-world conditions: the presence of traffic controls, zoning records, locations of interest for human activity, design speed of roadways, topographic details and immovable structures. 
The proposed methodology provides a low-cost mechanism for empowering urban planners to reduce the risks of accidents using 2-dimensional data representing multi-modal street networks, parcels, crosswalks and demographic information alongside 3-dimensional models of buildings, elevation, slope and aspect surfaces to evaluate visibility and lighting conditions and estimate probabilities for jaywalking and risks posed by blind or uncontrolled intersections. The proposed tools were developed using sample areas of Southern California, but the model will scale to other cities which conform to similar transportation standards given the availability of relevant GIS data. Keywords: crosswalks, cyclist safety, geotechnology, GIS, intersection safety, pedestrian safety, roadway safety, transportation planning, urban design
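A core primitive of the visibility evaluations described above is a sight-line obstruction test. The following minimal 2D sketch, with invented coordinates, checks whether the line of sight between two points crosses a building edge; analyses of the kind the paper describes operate on full 3D surfaces and footprints rather than single segments:

```python
# Minimal 2D line-of-sight check of the kind a GIS visibility analysis
# performs: does the sight line from a driver to a pedestrian cross a
# building edge? Coordinates are invented for illustration.

def ccw(a, b, c):
    """True if points a, b, c make a counter-clockwise turn."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    """Proper intersection test for segments p1-p2 and p3-p4."""
    return (ccw(p1, p3, p4) != ccw(p2, p3, p4)
            and ccw(p1, p2, p3) != ccw(p1, p2, p4))

driver, pedestrian = (0.0, 0.0), (10.0, 10.0)
building_edge = ((0.0, 10.0), (10.0, 0.0))  # corner building blocking the view
blocked = segments_intersect(driver, pedestrian, *building_edge)  # -> True
```

Running this test between each roadway observer point and each crosswalk point against every building edge flags the blind intersections the framework aims to identify.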
Procedia PDF Downloads 109
602 The Use of Emerging Technologies in Higher Education Institutions: A Case of Nelson Mandela University, South Africa
Authors: Ayanda P. Deliwe, Storm B. Watson
Abstract:
The COVID-19 pandemic has disrupted the established practices of higher education institutions (HEIs). Most higher education institutions worldwide had to shift from traditional face-to-face to online learning. The online environment and new online tools are disrupting the way in which higher education is presented. Furthermore, the structures of higher education institutions have been impacted by rapid advancements in information and communication technologies. Emerging technologies should not be viewed in a negative light because, as opposed to the traditional curriculum that worked to create productive and efficient researchers, emerging technologies encourage creativity and innovation. Therefore, using technology together with traditional means will enhance teaching and learning. Emerging technologies in higher education not only change the experiences of students, lecturers, and the content, but are also influencing the attraction and retention of students. Higher education institutions are under immense pressure because not only are they competing locally and nationally, but emerging technologies also expand the competition internationally. Emerging technologies have eliminated border barriers, allowing students to study in the country of their choice regardless of where they are in the world. Higher education institutions cannot remain indifferent as technology finds its way into the lecture room day by day. Academics need to utilise the technology at their disposal if they want to get through to their students. Academics are now competing for students' attention with social media platforms such as WhatsApp, Snapchat, Instagram, Facebook, TikTok, and others. This poses a significant challenge to higher education institutions. It is, therefore, critical to pay attention to emerging technologies in order to see how they can be incorporated into the classroom to improve educational quality while remaining relevant to the world of work. 
This study aims to understand how emerging technologies have been utilised at Nelson Mandela University to present teaching and learning activities since April 2020. The primary objective of this study is to analyse how academics are incorporating emerging technologies into their teaching and learning activities. This objective was pursued by conducting a literature review to clarify and conceptualise the emerging technologies being utilised by higher education institutions and to review and analyse their use, and it will be investigated further through an empirical analysis of the use of emerging technologies at Nelson Mandela University. Findings from the literature review revealed that emerging technology is impacting several key areas in higher education institutions, such as the attraction and retention of students, the enhancement of teaching and learning, increased global competition, the elimination of border barriers, and the highlighting of the digital divide. The literature review further identified that learning management systems, open educational resources, learning analytics, and artificial intelligence are the most prevalent emerging technologies being used in higher education institutions. The identified emerging technologies will be further analysed through an empirical analysis to identify how they are being utilised at Nelson Mandela University. Keywords: artificial intelligence, emerging technologies, learning analytics, learner management systems, open educational resources
Procedia PDF Downloads 69
601 Control of Belts for Classification of Geometric Figures by Artificial Vision
Authors: Juan Sebastian Huertas Piedrahita, Jaime Arturo Lopez Duque, Eduardo Luis Perez Londoño, Julián S. Rodríguez
Abstract:
The process of generating computer vision is called artificial vision. Artificial vision is a branch of artificial intelligence that allows the obtaining, processing, and analysis of any type of information, especially information obtained through digital images. Artificial vision is currently used in manufacturing for quality control and production, as these processes can be realized through algorithms for counting, positioning, and recognition of objects, measured by a single camera (or more). On the other hand, companies use assembly lines formed by conveyor systems with actuators on them to move pieces from one location to another during production. These devices must be programmed beforehand for good performance and must have a programmed logic routine. Nowadays, the main targets of every industry are production, quality, and the fast execution of the different stages and processes in the chain of production of any product or service being offered. The principal aim of this project is to program a computer to recognize geometric figures (circle, square, and triangle) through a camera, each one with a different color, and to link it with a group of conveyor systems that sort the mentioned figures into cubicles, which also differ from one another by color. This project is based on artificial vision, so the methodology needed to develop it must be strict; it is detailed below: 1. Methodology: 1.1 The software used in this project is Qt Creator, which is linked with the OpenCV libraries. Together, these tools are used to build the program that identifies colors and forms directly from the camera. 1.2 Image acquisition: To start using the OpenCV libraries, it is necessary to acquire images, which can be captured by a computer's web camera or a different specialized camera. 
1.3 The recognition of RGB colors is realized in code by traversing the matrices of the captured images and comparing pixels, identifying the primary colors red, green, and blue. 1.4 To detect forms, it is necessary to segment the images, so the first step is converting the image from RGB to grayscale in order to work with the dark tones of the image; then the image is binarized, which means rendering the figure in the image in white on a black background. Finally, we find the contours of the figure in the image and count the edges to identify which figure it is. 1.5 After the color and figure have been identified, the program links with the conveyor systems, which, through the actuators, will sort the figures into their respective cubicles. Conclusions: The OpenCV library is a useful tool for projects in which an interface between a computer and the environment is required, since the camera captures external characteristics from which any process can be realized. With the program developed for this project, any type of assembly line can be optimized, because images from the environment can be obtained and the process becomes more accurate. Keywords: artificial intelligence, artificial vision, binarized, grayscale, images, RGB
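Steps 1.3 and 1.4 above can be illustrated with a small pure-Python sketch. It stands in for the OpenCV calls the project uses, and the threshold and pixel values are assumed for illustration:

```python
# Pure-Python sketch of pipeline steps 1.3-1.4: RGB to grayscale,
# binarization, and shape labelling by edge count (illustrative stand-in
# for the OpenCV calls; values and thresholds are assumed).

def to_gray(r, g, b):
    """Standard luminance conversion used when moving from RGB to grayscale."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def binarize(gray_value, threshold=128):
    """White figure (255) on a black background (0)."""
    return 255 if gray_value > threshold else 0

def classify_by_edges(n_edges):
    """Map the number of contour edges to the figure being sorted."""
    return {3: "triangle", 4: "square"}.get(n_edges, "circle")

gray = to_gray(255, 0, 0)     # a pure-red pixel -> luminance about 76
pixel = binarize(gray)        # -> 0, below the assumed threshold
shape = classify_by_edges(4)  # -> "square"
```

In the actual pipeline the edge count would come from a contour approximation of the binarized image, with any count above four treated as a circle.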
Procedia PDF Downloads 378
600 Assessing the Structure of Non-Verbal Semantic Knowledge: The Evaluation and First Results of the Hungarian Semantic Association Test
Authors: Alinka Molnár-Tóth, Tímea Tánczos, Regina Barna, Katalin Jakab, Péter Klivényi
Abstract:
Supported by neuroscientific findings, the so-called hub-and-spoke model of the human semantic system is based on two subcomponents of semantic cognition, namely semantic control processes and semantic representation. Our semantic knowledge is multimodal in nature, as the knowledge system stored in relation to a concept is extensive and broad, while different aspects of the concept may be relevant depending on the purpose. The motivation of our research is to develop a new diagnostic measurement procedure based on the preservation of semantic representation, which is appropriate to the specificities of the Hungarian language and which can be used to compare the non-verbal semantic knowledge of healthy and aphasic persons. The development of the test will broaden the Hungarian clinical diagnostic toolkit, which will allow for more specific therapy planning. The sample of healthy persons (n=480) was determined from the latest census data to ensure the representativeness of the sample. Based on the concept of the Pyramids and Palm Trees Test, and according to the characteristics of the Hungarian language, we have elaborated a test based on different types of semantic information, in which the subjects are presented with three pictures: they have to choose the one that best fits the target word above from the two lower options, based on the semantic relation defined. We have measured 5 types of semantic knowledge representations: associative relations, taxonomy, motional representations, and concrete as well as abstract verbs. As the first step in our data analysis, we examined whether our results were normally distributed, and since they were not (p < 0.05), we used nonparametric statistics for the rest of the analysis. Using descriptive statistics, we could determine the frequency of the correct and incorrect responses, and with this knowledge, we could later adjust and remove the items of questionable reliability. 
Reliability was tested using Cronbach's α, and it can be safely said that all the results were in an acceptable range of reliability (α = 0.6-0.8). We then tested for potential gender differences using the Mann-Whitney U test; however, we found no difference between the two genders (p > 0.05). Likewise, we did not find that age had any effect on the results using one-way ANOVA (p > 0.05); however, the level of education did influence the results (p < 0.05). The relationships between the subtests were observed with the nonparametric Spearman's rho correlation matrix, showing statistically significant correlations between the subtests (p < 0.05) and signifying a linear relationship between the measured semantic functions. A margin of error of 5% was used in all cases. The research will contribute to the expansion of the clinical diagnostic toolkit and will be relevant for the individualised design of treatment procedures. The use of a non-verbal test procedure will allow an early assessment of the most severe language conditions, which is a priority in differential diagnosis. The measurement of reaction time is expected to advance prodrome research, as the tests can be easily conducted in the subclinical phase. Keywords: communication disorders, diagnostic toolkit, neurorehabilitation, semantic knowledge
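As an illustration of the reliability statistic reported above, a minimal pure-Python Cronbach's α follows. It is a sketch with invented scores, not the authors' analysis code:

```python
# Illustrative pure-Python Cronbach's alpha for a set of subtest scores
# (invented example data; population variance used throughout).

def variance(xs):
    """Population variance of a list of scores."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: list of per-subtest score lists, one inner list per subtest."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each person's total score
    item_var_sum = sum(variance(i) for i in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Two perfectly consistent subtests give the maximum alpha of 1.0;
# real subtests land somewhere below, e.g. in the 0.6-0.8 band reported above.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]])
```

α rises when the subtests vary together (shared variance dominates) and falls toward zero when they vary independently.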
Procedia PDF Downloads 103
599 Rotterdam in Transition: A Design Case for a Low-Carbon Transport Node in Lombardijen
Authors: Halina Veloso e Zarate, Manuela Triggianese
Abstract:
The urban challenges posed by rapid population growth, climate adaptation, and sustainable living have compelled Dutch cities to reimagine their built environment and transportation systems. As a pivotal contributor to CO₂ emissions, the transportation sector in the Netherlands demands innovative solutions for transitioning to low-carbon mobility. This study investigates the potential of transit oriented development (TOD) as a strategy for achieving carbon reduction and sustainable urban transformation. Focusing on the Lombardijen station area in Rotterdam, which is targeted for significant densification, this paper presents a design-oriented exploration of a low-carbon transport node. By employing a research-by-design methodology, this study delves into multifaceted factors and scales, aiming to propose future scenarios for Lombardijen. Drawing from a synthesis of existing literature, applied research, and practical insights, a robust design framework emerges. To inform this framework, governmental data concerning the built environment and material embodied carbon are harnessed. However, the restricted access to crucial datasets, such as property ownership information from the cadastre and embodied carbon data from De Nationale Milieudatabase, underscores the need for improved data accessibility, especially during the concept design phase. The findings of this research contribute fundamental insights not only to the Lombardijen case but also to TOD studies across Rotterdam's 13 nodes and similar global contexts. Spatial data related to property ownership facilitated the identification of potential densification sites, underscoring its importance for informed urban design decisions. Additionally, the paper highlights the disparity between the essential role of embodied carbon data in environmental assessments for building permits and its limited accessibility due to proprietary barriers. 
Although this study lays the groundwork for sustainable urbanization through TOD-based design, it acknowledges an area of future research worthy of exploration: the socio-economic dimension. Given the complex socio-economic challenges inherent in the Lombardijen area, extending beyond spatial constraints, a comprehensive approach demands integration of mobility infrastructure expansion, land-use diversification, programmatic enhancements, and climate adaptation. While the paper adopts a TOD lens, it refrains from an in-depth examination of issues concerning equity and inclusivity, opening doors for subsequent research to address these aspects crucial for holistic urban development. Keywords: Rotterdam zuid, transport oriented development, carbon emissions, low-carbon design, cross-scale design, data-supported design
Procedia PDF Downloads 84
598 Cut-Off of CMV Cobas® Taqman® (CAP/CTM Roche®) for Introduction of Ganciclovir Pre-Emptive Therapy in Allogeneic Hematopoietic Stem Cell Transplant Recipients
Authors: B. B. S. Pereira, M. O. Souza, L. P. Zanetti, L. C. S. Oliveira, J. R. P. Moreno, M. P. Souza, V. R. Colturato, C. M. Machado
Abstract:
Background: The introduction of prophylactic or pre-emptive therapies has effectively decreased CMV mortality rates after hematopoietic stem cell transplantation (HSCT). CMV antigenemia (pp65) and quantitative PCR are methods currently approved for CMV surveillance in pre-emptive strategies. Commercial assays are preferred, as cut-off levels defined by in-house assays may vary among different protocols and in general show low reproducibility. Moreover, comparison of published data among different centers is only possible if international standards of quantification are included in the assays. Recently, the World Health Organization (WHO) established the first international standard for CMV detection. The real-time PCR COBAS AmpliPrep/COBAS TaqMan (CAP/CTM; Roche®) assay was developed using the WHO standard for CMV quantification. However, the cut-off for the introduction of antivirals has not yet been determined. Methods: We conducted a retrospective study to determine: 1) the sensitivity and specificity of the new CMV CAP/CTM test in comparison with pp65 antigenemia for detecting episodes of CMV infection/reactivation, and 2) the cut-off viral load for the introduction of ganciclovir (GCV). pp65 antigenemia was performed, and the corresponding plasma samples were stored at -20°C for subsequent CMV detection by CAP/CTM. Tests were compared by kappa index. The appearance of positive antigenemia was considered the state variable for determining the cut-off CMV viral load by ROC curve. Statistical analysis was performed using SPSS software version 19 (SPSS, Chicago, IL, USA). Results: Thirty-eight patients were included and followed from August 2014 through May 2015. The antigenemia test detected 53 episodes of CMV infection in 34 patients (89.5%), while CAP/CTM detected 37 episodes in 33 patients (86.8%). AG and PCR results were compared in 431 samples, and the kappa index was 30.9%. 
The median time to first AG detection was 42 (28-140) days, while CAP/CTM detected CMV a median of 7 days earlier (34 days, ranging from 7 to 110 days). The optimum cut-off value of CMV DNA for detecting positive antigenemia was 34.25 IU/mL, with 88.2% sensitivity, 100% specificity and an AUC of 0.91. This cut-off value is below the limit of detection and quantification of the equipment, which is 56 IU/mL. According to the CMV recurrence definition, 16 episodes of CMV recurrence were detected by antigenemia (47.1%) and 4 (12.1%) by CAP/CTM. The duration of viremia as detected by antigenemia was shorter (60.5% of the episodes lasted ≤ 7 days) in comparison to CAP/CTM (57.9% of the episodes lasted 15 days or more). These data suggest that using antigenemia to define the duration of GCV therapy might prompt early interruption of the antiviral, which may favor CMV reactivation. The CAP/CTM PCR could possibly provide safer information concerning the duration of GCV therapy. As prolonged treatment may increase the risk of toxicity, this hypothesis should be confirmed in prospective trials. Conclusions: Even though the CAP/CTM assay by Roche showed good qualitative correlation with the antigenemia technique, the fully automated CAP/CTM did not demonstrate increased sensitivity. The cut-off value below the limit of detection and quantification may result in delayed introduction of pre-emptive therapy. Keywords: antigenemia, CMV COBAS/TAQMAN, cytomegalovirus, antiviral cut-off
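The ROC-based cut-off determination described above can be sketched as a search for the viral-load threshold maximising Youden's J (sensitivity + specificity - 1). The viral loads and labels below are invented toy data, not the study's dataset:

```python
# Sketch of an ROC cut-off search: pick the viral-load threshold that
# maximises Youden's J = sensitivity + specificity - 1 (toy data in IU/mL).

def best_cutoff(scores, labels):
    """labels: 1 = positive antigenemia, 0 = negative."""
    positives = [s for s, l in zip(scores, labels) if l == 1]
    negatives = [s for s, l in zip(scores, labels) if l == 0]
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(s >= t for s in positives) / len(positives)  # true positive rate
        spec = sum(s < t for s in negatives) / len(negatives)   # true negative rate
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

viral_loads = [10.0, 20.0, 30.0, 40.0, 55.0, 70.0]
antigenemia = [0, 0, 0, 1, 1, 1]  # invented toy labels
cutoff, youden = best_cutoff(viral_loads, antigenemia)  # -> 40.0, J = 1.0
```

In the toy data the classes separate perfectly, so J reaches 1; with real data, as in the study, the chosen threshold trades sensitivity against specificity.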
Procedia PDF Downloads 191