Search results for: research anxiety
1385 The Commodification of Internet Culture: Online Memes and Differing Perceptions of Their Commercial Uses
Authors: V. Esteves
Abstract:
As products of participatory culture, internet memes represent a global form of interaction with online culture. These digital objects draw upon a rich historical engagement with remix practices that dates back decades: from the copy and paste practices of Dadaism and punk to the re-appropriation techniques of the Situationist International; memes echo a long established form of cultural creativity that pivots on the art of the remix. Online culture has eagerly embraced the changes that the Web 2.0 afforded in terms of making use of remixing as an accessible form of societal expression, bridging these remix practices of the past into a more widely available and accessible platform. Memes embody the idea of 'intercreativity', allowing global creative collaboration to take place through networked digital media; they reflect the core values of participation and interaction that are present throughout much internet discourse whilst also existing in a historical remix continuum. Memes hold the power of cultural symbolism manipulated by global audiences through which societies make meaning, as these remixed digital objects have an elasticity and low literacy level that allows for a democratic form of cultural engagement and meaning-making by and for users around the world. However, because memes are so elastic, their ability to be re-appropriated by other powers for reasons beyond their original intention has become evident. Recently, corporations have made use of internet memes for advertising purposes, engaging in the circulation and re-appropriation of internet memes in commercial spaces – which has, in turn, complicated this relation between online users and memes' democratic possibilities further. By engaging in a widespread online ethnography supplemented by in-depth interviews with meme makers, this research was able to not only track different online meme use through commercial contexts, but it also allowed the possibility to engage in qualitative discussions with meme makers and users regarding their perception and experience of these varying commercial uses of memes. These can be broadly put within two categories: internet memes that are turned into physical merchandise and the use of memes in advertising to sell other (non-meme related) products. Whilst there has been considerable acceptance of the former type of commercial meme use, the use of memes in adverts in order to sell unrelated products has been met with resistance. The changes in reception regarding commercial meme use is dependent on ideas of cultural ownership and perceptions of authorship, ultimately uncovering underlying socio-cultural ideologies that come to the fore within these overlapping contexts. Additionally, this adoption of memes by corporate powers echoes the recuperation process that the Situationist International endured, creating a further link with older remix cultures and their lifecycles.Keywords: commodification, internet culture, memes, recuperation, remix
Procedia PDF Downloads 144
1384 Pump-as-Turbine: Testing and Characterization as an Energy Recovery Device, for Use within the Water Distribution Network
Authors: T. Lydon, A. McNabola, P. Coughlan
Abstract:
Energy consumption in the water distribution network (WDN) is a well-established problem, equating to the industry contributing heavily to carbon emissions, with 0.9 kg CO2 emitted per m3 of water supplied. It is indicated that 85% of the energy wasted in the WDN can be recovered by installing turbines. Existing potential in networks is present at small-capacity sites (5-10 kW), numerous and dispersed across networks. However, traditional turbine technology cannot be scaled down to this size in an economically viable fashion, thus alternative approaches are needed. This research aims to enable energy recovery within the WDN by exploring the potential of pumps-as-turbines (PATs) to realise that potential. PATs are estimated to be ten times cheaper than traditional micro-hydro turbines, presenting potential to contribute to an economically viable solution. However, a number of technical constraints currently prohibit their widespread use, including the inability of a PAT to control pressure, difficulty in the selection of PATs due to a lack of performance data, and a lack of understanding of how PATs can cater for fluctuations as extreme as +/- 50% of the average daily flow, characteristic of the WDN. A PAT prototype is undergoing testing in order to identify the capabilities of the technology. Results of preliminary testing, which involved testing the efficiency and power potential of the PAT for varying flow and pressure conditions in order to develop characteristic and efficiency curves for the PAT and a baseline understanding of the technology's capabilities, are presented here:
• The limitations of existing selection methods, which convert the BEP from pump operation to the BEP in turbine operation, were highlighted by the failure of such methods to reflect the conditions of maximum efficiency of the PAT. A generalised selection method for the WDN may need to be informed by an understanding of the impact of flow variations and pressure control on system power potential, capital cost, maintenance costs, and payback period.
• A clear relationship between flow and the efficiency rate of the PAT has been established. The rate of efficiency reduction for flows +/- 50% of the BEP is significant and more extreme for deviations in flow above the BEP than below, but not dissimilar to the reaction of the efficiency of other turbines.
• A PAT alone is not sufficient to regulate pressure, yet the relationship of pressure across the PAT is foundational in exploring ways in which PAT energy recovery systems can maintain the required pressure level within the WDN. Efficiencies of PAT energy recovery systems operating under conditions of pressure regulation, which have been conceptualised in the current literature, need to be established.
Initial results guide the focus of forthcoming testing and exploration of PAT technology towards how PATs can form part of an efficient energy recovery system. Keywords: energy recovery, pump-as-turbine, water distribution network
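To make the efficiency calculation behind such characteristic curves concrete, here is a minimal Python sketch (not the authors' code) that converts flow, head, and measured electrical output into efficiency points; the test values below are hypothetical.

```python
# A minimal sketch of how PAT efficiency points might be computed from bench-test data;
# the sample flows, heads, and electrical outputs are hypothetical.
import numpy as np

RHO = 1000.0  # water density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def pat_efficiency(flow_m3s, head_m, electrical_power_w):
    """Efficiency = electrical power out / hydraulic power in (rho * g * Q * H)."""
    hydraulic_power_w = RHO * G * flow_m3s * head_m
    return electrical_power_w / hydraulic_power_w

# Hypothetical test points around an assumed best-efficiency-point (BEP) flow of 10 l/s.
flows = np.array([0.005, 0.0075, 0.010, 0.0125, 0.015])   # m^3/s
heads = np.array([12.0, 11.0, 10.0, 9.5, 9.0])            # m
p_elec = np.array([230.0, 480.0, 640.0, 700.0, 720.0])    # W

for q, e in zip(flows, pat_efficiency(flows, heads, p_elec)):
    print(f"Q = {q * 1000:.1f} l/s  ->  efficiency = {e:.2%}")
```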
Procedia PDF Downloads 260
1383 Institutional and Economic Determinants of Foreign Direct Investment: Comparative Analysis of Three Clusters of Countries
Authors: Ismatilla Mardanov
Abstract:
There are three types of countries: the first is willing to attract foreign direct investment (FDI) in enormous amounts and will do whatever it takes to make this happen. Therefore, FDI pours into such countries. In the second cluster of countries, even if the country is suffering tremendously from a shortage of investments, the governments are hesitant to attract investments because they are in the hands of local oligarchs/cartels. Therefore, FDI inflows are moderate to low in such countries. The third type is countries whose companies prefer investing in the most efficient locations globally and are hesitant to invest in the homeland. Sorting countries into such clusters, the present study examines the essential institutions and economic factors that make these countries different. Past literature has discussed various determinants of FDI in all kinds of countries. However, it did not classify countries based on government motivation, institutional setup, and economic factors. A specific approach to each target country is vital for corporate foreign direct investment risk analysis and decisions. The research questions are: 1. What specific institutional and economic factors paint the pictures of the three clusters? 2. What specific institutional and economic factors are determinants of FDI? 3. Which of the determinants are endogenous and exogenous variables? 4. How can institutions and economic and political variables impact corporate investment decisions? Hypothesis 1: In the first type, country institutions and economic factors will be favorable for FDI. Hypothesis 2: In the second type, even if country economic factors favor FDI, institutions will not. Hypothesis 3: In the third type, even if country institutions favor FDI, economic factors will not favor domestic investments. Therefore, FDI outflows occur in large amounts. Methods: Data come from open sources of the World Bank, the Fraser Institute, the Heritage Foundation, and other reliable sources. The dependent variable is FDI inflows. The independent variables are institutions (economic and political freedom indices) and economic factors (natural, material, and labor resources, government consumption, infrastructure, minimum wage, education, unemployment, tax rates, consumer price index, inflation, and others), the endogeneity or exogeneity of which is tested in the instrumental variable estimation. Political rights and civil liberties are used as instrumental variables. Results indicate that in the first type, both country institutions and economic factors, specifically labor and logistics/infrastructure/energy intensity, are favorable for potential investors. In the second category of countries, the risk of loss of assets is very high due to governments hijacked by local oligarchs/cartels/special interest groups. In the third category of countries, the local economic factors are unfavorable for domestic investment even if the institutions are well accepted. Cluster analysis and instrumental variable estimation were used to reveal cause-effect patterns in each of the clusters. Keywords: foreign direct investment, economy, institutions, instrumental variable estimation
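As an illustration of the instrumental variable approach described above, the following is a minimal two-stage least squares sketch in Python; the data-generating process, variable roles, and coefficients are hypothetical, not the study's dataset or results.

```python
# A minimal two-stage least squares (2SLS) sketch of the kind of instrumental variable
# estimation described above; all data below are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Instruments: political rights and civil liberties indices (hypothetical).
Z = rng.normal(size=(n, 2))
# Endogenous regressor: an institutions index correlated with the instruments.
institutions = Z @ np.array([0.8, 0.5]) + rng.normal(scale=0.5, size=n)
# Exogenous controls: e.g., infrastructure and labor-cost measures (hypothetical).
X_exog = rng.normal(size=(n, 2))
# Outcome: FDI inflows (hypothetical data-generating process).
fdi = 2.0 * institutions + X_exog @ np.array([1.0, -0.5]) + rng.normal(size=n)

def add_const(a):
    return np.column_stack([np.ones(len(a)), a])

# Stage 1: regress the endogenous variable on instruments + exogenous controls.
W1 = add_const(np.column_stack([Z, X_exog]))
beta1, *_ = np.linalg.lstsq(W1, institutions, rcond=None)
institutions_hat = W1 @ beta1

# Stage 2: regress FDI on the fitted endogenous variable + exogenous controls.
W2 = add_const(np.column_stack([institutions_hat, X_exog]))
beta2, *_ = np.linalg.lstsq(W2, fdi, rcond=None)
print("2SLS coefficient on institutions:", round(beta2[1], 3))
```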
Procedia PDF Downloads 159
1382 Exploring Neural Responses to Urban Spaces in Older People Using Mobile EEG
Authors: Chris Neale, Jenny Roe, Peter Aspinall, Sara Tilley, Steve Cinderby, Panos Mavros, Richard Coyne, Neil Thin, Catharine Ward Thompson
Abstract:
This research directly assesses older people’s neural activation in response to walking through a changing urban environment, as measured by electroencephalography (EEG). As the global urban population is predicted to grow, there is a need to understand the role that the urban environment may play on the health of its older inhabitants. There is a large body of evidence suggesting green space has a beneficial restorative effect, but this effect remains largely understudied in both older people and by using a neuroimaging assessment. For this study, participants aged 65 years and over were required to walk between a busy urban built environment and a green urban environment, in a counterbalanced design, wearing an Emotiv EEG headset to record real-time neural responses to place. Here we report on the outputs for these responses derived from both the proprietary Affectiv Suite software, which creates emotional parameters with a real time value assigned to them, as well as the raw EEG output focusing on alpha and beta changes, associated with changes in relaxation and attention respectively. Each walk lasted around fifteen minutes and was undertaken at the natural walking pace of the participant. The two walking environments were compared using a form of high dimensional correlated component regression (CCR) on difference data between the urban busy and urban green spaces. For the Emotiv parameters, results showed that levels of ‘engagement’ increased in the urban green space (with a subsequent decrease in the urban busy built space) whereas levels of ‘excitement’ increased in the urban busy environment (with a subsequent decrease in the urban green space). In the raw data, low beta (13 – 19 Hz) increased in the urban busy space with a subsequent decrease shown in the green space, similar to the pattern shown with the ‘excitement’ result. Alpha activity (9 – 13 Hz) shows a correlation with low beta, but not with dependent change in the regression model. This suggests that alpha is acting as a suppressor variable. These results suggest that there are neural signatures associated with the experience of urban spaces which may reflect the age of the cohort or the spatiality of the settings themselves. These are shown both in the outputs of the proprietary software as well as the raw EEG output. Built busy urban spaces appear to induce neural activity associated with vigilance and low level stress, while this effect is ameliorated in the urban green space, potentially suggesting a beneficial effect on attentional capacity in urban green space in this participant group. The interaction between low beta and alpha requires further investigation, in particular the role of alpha in this relationship.Keywords: ageing, EEG, green space, urban space
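As an illustration of how alpha (9-13 Hz) and low-beta (13-19 Hz) activity of the kind reported above can be quantified, here is a minimal Python sketch using Welch's method; the 128 Hz sampling rate and the synthetic signal are assumptions for illustration, not the study's data or processing pipeline.

```python
# A minimal sketch of extracting alpha and low-beta band power from a single EEG channel;
# the sampling rate and the synthetic signal are assumed for illustration.
import numpy as np
from scipy.signal import welch

FS = 128  # Hz, assumed sampling rate

def band_power(signal, fs, f_lo, f_hi):
    """Integrate the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.trapz(psd[mask], freqs[mask])

rng = np.random.default_rng(1)
t = np.arange(0, 60, 1 / FS)                                       # one minute of data
eeg = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)   # noise + 10 Hz alpha

alpha = band_power(eeg, FS, 9, 13)      # bands as defined in the abstract
low_beta = band_power(eeg, FS, 13, 19)
print(f"alpha power: {alpha:.3f}, low-beta power: {low_beta:.3f}")
```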
Procedia PDF Downloads 224
1381 Study on the Rapid Start-up and Functional Microorganisms of the Coupled Process of Short-range Nitrification and Anammox in Landfill Leachate Treatment
Authors: Lina Wu
Abstract:
The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and poses a threat to water quality. Nitrogen pollution control has become a global concern. Currently, the problem of water pollution in China is still not optimistic. As a typical high-ammonia-nitrogen organic wastewater, landfill leachate is more difficult to treat than domestic sewage because of its complex water quality, high toxicity, and high concentration. Many studies have shown that the autotrophic anammox bacteria in nature can combine nitrite and ammonia nitrogen without a carbon source through functional genes to achieve total nitrogen removal, which is very suitable for the removal of nitrogen from leachate. In addition, the process also saves a great deal of aeration energy consumption compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. The process composed of short-range nitrification and denitrification coupled with anammox ensures the removal of total nitrogen and improves the removal efficiency, meeting the needs of society for an ecologically friendly and cost-effective nutrient removal treatment technology. A continuous-flow process for treating late leachate [an up-flow anaerobic sludge blanket reactor (UASB), anoxic/oxic (A/O)–anaerobic ammonia oxidation reactor (ANAOR or anammox reactor)] has been developed to achieve autotrophic deep nitrogen removal. In this process, the optimal process parameters, such as hydraulic retention time and nitrification flow rate, have been obtained and applied to achieve rapid start-up, stable operation of the process system, and high removal efficiency. Besides, finding the characteristics of the microbial community during the start-up of the anammox process system and analyzing its microbial ecological mechanism provide a basis for the enrichment of the anammox microbial community under high environmental stress. One study developed partial nitrification-anammox (PN/A) using an internal circulation (IC) system and a biological aerated filter (BAF) biofilm reactor (IBBR), where the amount of water treated is closer to that of landfill leachate. However, new high-throughput sequencing technology still needs to be utilized to analyze the changes in microbial diversity of this system, the related functional genera and functional genes under optimal conditions, providing a theoretical and further practical basis for the engineering application of the novel anammox system in biogas slurry treatment and resource utilization. Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification
Procedia PDF Downloads 52
1380 The Use of Political Savviness in Dealing with Workplace Ostracism: A Social Information Processing Perspective
Authors: Amy Y. Wang, Eko L. Yi
Abstract:
Can vicarious experiences of workplace ostracism affect employees' willingness to voice? Given the increasingly interdependent nature of the modern workplace, in which employees rely on social interactions to fulfill organizational goals, workplace ostracism – the extent to which an individual perceives that he or she is ignored or excluded by others in the workplace – has garnered significant interest from scholars and practitioners alike. Extending beyond conventional studies that largely focus on the perspectives and outcomes of ostracized targets, we address the indirect effects of workplace ostracism on third-party employees embedded in the same social context. Using a social information processing approach, we propose that the ostracism of coworkers acts as political information that influences third-party employees in their decisions to engage in risky and discretionary behaviors such as employee voice. To make sense of and to navigate through experiences of workplace ostracism, we posit that both political understanding and political skill allow third-party employees to minimize the risks and uncertainty of voicing. This conceptual model was tested in a study involving 154 supervisor-subordinate dyads at a publicly listed bio-technology firm located in Mainland China. Each supervisor and their direct subordinates composed a work team; each team had a minimum of two subordinates and a maximum of four subordinates. Human resources used the master list to distribute the ID-coded questionnaires to the matching names. All studied constructs were measured using existing scales proven effective in previous literature. Hypotheses were tested using confirmatory factor analysis and hierarchical multiple regression. All three hypotheses were supported, showing that employees were less likely to engage in voice behaviors when their coworkers reported having experienced ostracism in the workplace. Results also showed a significant three-way interaction, with political understanding and political skill jointly moderating the relationship between coworkers' ostracism and employee voice, indicating that political savviness is a valuable resource in mitigating ostracism's negative and indirect effects. Our results illustrated that an employee's coworkers being ostracized indeed adversely impacted his or her own voice behavior. However, not all individuals reacted passively to the social context; rather, we found that the voice behaviors of politically savvy individuals – those possessing both political understanding and political skill – were less impacted by ostracism in their work environment. At the same time, we found that having only political understanding or only political skill was significantly less effective in mitigating ostracism's negative effects, suggesting a necessary duality of political knowledge and political skill in combatting ostracism. Organizational implications, recommendations, and future research ideas are also discussed. Keywords: employee voice, organizational politics, social information processing, workplace ostracism
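To illustrate the kind of hierarchical moderated regression described above, the following Python sketch fits a model with a three-way interaction on simulated data; the variable names, effect sizes, and data are hypothetical, not the study's measures or results.

```python
# A minimal sketch of a hierarchical moderated regression with a three-way interaction
# (coworker ostracism x political understanding x political skill on voice); all data
# and coefficients below are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 154
df = pd.DataFrame({
    "ostracism": rng.normal(size=n),     # coworkers' reported ostracism
    "pol_underst": rng.normal(size=n),   # political understanding
    "pol_skill": rng.normal(size=n),     # political skill
})
# Hypothetical outcome: voice drops with ostracism, less so for politically savvy staff.
df["voice"] = (-0.4 * df["ostracism"]
               + 0.2 * df["ostracism"] * df["pol_underst"] * df["pol_skill"]
               + rng.normal(scale=0.8, size=n))

# Step 1: main effects only; Step 2: add all interaction terms (hierarchical entry).
step1 = smf.ols("voice ~ ostracism + pol_underst + pol_skill", data=df).fit()
step2 = smf.ols("voice ~ ostracism * pol_underst * pol_skill", data=df).fit()
print("R2 step 1:", round(step1.rsquared, 3), "-> R2 step 2:", round(step2.rsquared, 3))
print(step2.params.filter(like=":"))  # interaction coefficients
```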
Procedia PDF Downloads 140
1379 Association between Occupational Characteristics and Well-Being: An Exploratory Study of Married Working Women in New Delhi, India
Authors: Kanchan Negi
Abstract:
Background: Modern and urban occupational culture has driven demands for people to work long hours and weekends and to take work home at times. Research on the health effects of these exhaustive temporal work patterns is scant or contradictory. This study examines the relationship between work patterns and wellbeing in a sample of women living in the metropolitan hub of Delhi. Method: This study is based on data collected from 360 currently married women between the ages of 29 and 49 years, working in the urban capital hub of India, i.e., Delhi. The women interviewed were professionals from the education, health, banking and information technology (IT) sectors. Bivariate analysis was done to study the characteristics of the sample. Logistic regression analysis was used to estimate physical and psychological wellbeing across occupational characteristics. Results: Most of the working women were below age 35 years; around 30% of women worked in the education sector, 23% in health, 21% in banking and 26% in the IT sector. Over 55% of women were employed in the private sector and only 36% were permanent employees. Nearly 30% of women worked for more than the standard 8 hours a day. The findings from logistic regression showed that, compared to women working in the education sector, those who worked in the banking and IT sectors were more likely to have physical and psychological health issues (OR 2.07-4.37, CI 1.17-4.37); women who bear a dual burden of responsibilities had higher odds of physical and psychological health issues than women who did not (OR 1.19-1.85, CI 0.96-2.92). Women who worked for more than 8 hours a day (OR 1.15, CI 1.01-1.30) and those who worked for more than five days a week (OR 1.25, CI 1.05-1.35) were more likely to have physical health issues than women who worked for 6-8 hours a day and five days a week, respectively. Also, not having flexible work timings and compensatory holidays increased the odds of having physical and psychological health issues among working women (OR 1.17-1.29, CI 1.01-1.47). Women who worked in the private sector, those employed temporarily and those who worked in non-conducive environments were more likely to have psychological health issues as compared to women in the public sector, permanent employees and those who worked in a conducive environment, respectively (OR 1.33-1.67, CI 1.09-2.91). Women who did not have a poor work-life balance had lower odds of psychological health issues than women with a poor work-life balance (OR 0.46, CI 0.25-0.84). Conclusion: Poor wellbeing was significantly linked to strenuous and rigid work patterns, suggesting that modern and urban work culture may contribute to the poor wellbeing of working women. Noticing the recent decline in female workforce participation in Delhi, schemes like flexi-timings, compensatory holidays, work-from-home and daycare facilities for young ones must be welcomed; these policies already exist in some private sector firms, and public sector companies should also adopt such changes to ease the dual burden of homemaker and career maker. This could encourage women in urban areas to readily take up jobs with less juggling between home and work. Keywords: occupational characteristics, urban India, well-being, working women
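As an illustration of how such odds ratios and confidence intervals are obtained, here is a minimal logistic regression sketch in Python; the simulated predictors, outcome, and effect sizes are hypothetical, not the survey data.

```python
# A minimal sketch of estimating odds ratios with 95% confidence intervals by logistic
# regression, as in the analysis described above; all data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 360
df = pd.DataFrame({
    "long_hours": rng.integers(0, 2, n),     # works > 8 hours/day
    "six_day_week": rng.integers(0, 2, n),   # works > 5 days/week
    "private_sector": rng.integers(0, 2, n),
})
# Hypothetical outcome generated so the predictors modestly raise the odds of health issues.
logit_p = -0.5 + 0.3 * df["long_hours"] + 0.25 * df["six_day_week"] + 0.2 * df["private_sector"]
df["health_issue"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit("health_issue ~ long_hours + six_day_week + private_sector", data=df).fit(disp=0)
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))
```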
Procedia PDF Downloads 205
1378 Between Leader-Member Exchange and Toxic Leadership: A Theoretical Review
Authors: Aldila Dyas Nurfitri
Abstract:
Nowadays, leadership has become one of the main issues in forming organizations, groups, and even countries. The concept of a social contract between leaders and subordinates has become one of the explanations for the leadership process. The interests of the two parties are not always the same, but they must work together to achieve both goals. From this concept comes the Leader-Member Exchange Theory, well known as LMX Theory, which assumes that leadership is a process of social interaction between leaders and their subordinates. High-quality LMX relationships are characterized by a high carrying capacity, informal supervision, confidence, and enabled power negotiation, whereas low-quality LMX relationships are described by low support, extensive formal supervision, little or no participation of subordinates in decision-making, and less confidence in and attention from the leader. The application of a formal supervision system in low-LMX relationships is in line with the strict controls of the toxic leadership model. Toxic leaders feel they must be able to control all aspects of the organization at all times. Leaders with this leadership model do not give autonomy to their staff. This behavior causes stagnation and creates a resistant organizational culture in an organization. In Indonesia, the pattern of toxic leadership later evolved into a dysfunctional system that is growing rapidly. One consequence is the emergence of corrupt behavior. According to Kellerman, corruption is defined as a pattern in which the leader and some subordinates lie, cheat, or steal to a degree that goes beyond the norm, putting self-interest above the common good. Corruption data in Indonesia, based on the results of ICW research in 2012, showed that the local government sector ranked first with 177 cases, followed by state or local enterprises with 41 cases. LMX is defined as the quality of the relationship between superiors and subordinates, which has implications for the effectiveness and progress of the organization. The assumption of this theory is that leadership is a process of social interaction between leaders and their followers characterized by a number of dimensions, such as affection, loyalty, contribution, and professional respect. Meanwhile, toxic leadership is dysfunctional leadership in an organization led by someone who is unable to adjust, lacks integrity, and is malevolent and full of discontent, marked by a number of characteristics, such as self-centeredness, exploiting others, controlling behavior, disrespecting others, suppressing the innovation and creativity of employees, and inadequate emotional intelligence. Leaders high in self-centeredness, exploitation of others, controlling behavior, and disrespect for others tend to form low-quality LMX relationships directly with subordinates, compared with leaders low in these characteristics. Meanwhile, the suppression of employees' innovation and creativity and inadequate emotional intelligence tend not to have a direct effect on low LMX quality. Keywords: leader-member exchange, toxic leadership, leadership
Procedia PDF Downloads 487
1377 Congruency of English Teachers’ Assessments Vis-à-Vis 21st Century Skills Assessment Standards
Authors: Mary Jane Suarez
Abstract:
A massive educational overhaul has taken place at the onset of the 21st century addressing the mismatches of employability skills with that of scholastic skills taught in schools. For a community to thrive in an ever-developing economy, the teaching of the necessary skills for job competencies should be realized by every educational institution. However, in harnessing 21st-century skills amongst learners, teachers, who often lack familiarity and thorough insights into the emerging 21st-century skills, are chained with the restraint of the need to comprehend the physiognomies of 21st-century skills learning and the requisite to implement the tenets of 21st-century skills teaching. With the endeavor to espouse 21st-century skills learning and teaching, a United States-based national coalition called Partnership 21st Century Skills (P21) has identified the four most important skills in 21st-century learning: critical thinking, communication, collaboration, and creativity and innovation with an established framework for 21st-century skills standards. Assessment of skills is the lifeblood of every teaching and learning encounter. It is correspondingly crucial to look at the 21st century standards and the assessment guides recognized by P21 to ensure that learners are 21st century ready. This mixed-method study sought to discover and describe what classroom assessments were used by English teachers in a public secondary school in the Philippines with course offerings on science, technology, engineering, and mathematics (STEM). The research evaluated the assessment tools implemented by English teachers and how these assessment tools were congruent to the 21st assessment standards of P21. A convergent parallel design was used to analyze assessment tools and practices in four phases. In the data-gathering phase, survey questionnaires, document reviews, interviews, and classroom observations were used to gather quantitative and qualitative data simultaneously, and how assessment tools and practices were consistent with the P21 framework with the four Cs as its foci. In the analysis phase, the data were treated using mean, frequency, and percentage. In the merging and interpretation phases, a side-by-side comparison was used to identify convergent and divergent aspects of the results. In conclusion, the results yielded assessments tools and practices that were inconsistent, if not at all, used by teachers. Findings showed that there were inconsistencies in implementing authentic assessments, there was a scarcity of using a rubric to critically assess 21st skills in both language and literature subjects, there were incongruencies in using portfolio and self-reflective assessments, there was an exclusion of intercultural aspects in assessing the four Cs and the lack of integrating collaboration in formative and summative assessments. As a recommendation, a harmonized assessment scheme of P21 skills was fashioned for teachers to plan, implement, and monitor classroom assessments of 21st-century skills, ensuring the alignment of such assessments to P21 standards for the furtherance of the institution’s thrust to effectively integrate 21st-century skills assessment standards to its curricula.Keywords: 21st-century skills, 21st-century skills assessments, assessment standards, congruency, four Cs
Procedia PDF Downloads 193
1376 Working Capital Management Practices in Small Businesses in Victoria
Authors: Ranjith Ihalanayake, Lalith Seelanatha, John Breen
Abstract:
In this study, we explored the current working capital management practices as applied in small businesses in Victoria, filling an existing theoretical and empirical gap in the literature in general and in Australia in particular. Amidst the current globally competitive and dynamic environment, avoiding short-term insolvency is very critical for the long-run survival of small businesses. A firm's short-term solvency depends on the availability of sufficient working capital for feeding day-to-day operational activities. Therefore, given the reliance of small businesses on short-term funding, it has been recognized that the efficient management of working capital is crucial in respect of the prosperity and survival of such firms. Against this background, this research was an attempt to understand the current working capital management strategies and practices used by small-scale businesses. To this end, we conducted an internet survey among 220 small businesses operating in Victoria, Australia. The survey results suggest that the majority of respondents are owner-managers (73%) and male (68%). Most respondents who participated in this survey have a degree (46%). About half of the respondents are more than 50 years old. Most respondents (64%) have more than ten years of business management experience. Similarly, the majority of them (63%) had experience in the area of their current business. The types of business of the respondents are: private limited company (41%), sole proprietorship (37%), and partnership (15%). In addition, the majority of the firms are service companies (63%), followed by retail companies (25%) and manufacturing (17%). The size of the companies in this survey varies: 32% of them have annual sales of $100,000 or under, while 22% of them have revenue of more than $1,000,000 every year. With regard to total assets, the majority of respondents (43%) have total assets of $100,000 or less, while 20% of respondents have total assets of more than $1,000,000. With regard to working capital management practices (WCMPs), results indicate that almost 70% of respondents mentioned that they are responsible for managing their business working capital. The survey shows that the majority of respondents (65.5%) use their business experience to identify the level of investment in working capital, compared to 22% of respondents who seek advice from professionals. The other 10% of respondents, however, follow industry practice to identify the level of working capital. The survey also shows that more than half of the respondents maintain a good liquidity position for their business by keeping accounts payable less than accounts receivable. This study finds that the majority of small businesses in the western area of Victoria have a WCM policy, but only about 8% of them have a formal policy. The majority of the businesses (52.7%) have an informal policy, while 39.5% have no policy. Of those who have a policy, 44% described their working capital management policy as a compromise policy, while 35% described their policy as a conservative policy. Only 6% of respondents apply an aggressive policy. Overall, the results indicate that small businesses pay little attention to the management of the working capital of their business despite its significance in the successful operation of the business. This approach may be adopted during favourable economic times. However, during relatively turbulent economic conditions, such an approach could lead to greater financial difficulties, i.e., short-term financial insolvency. Keywords: small business, working capital management, Australia, sufficient, financial insolvency
Procedia PDF Downloads 354
1375 Variation among East Wollega Coffee (Coffea arabica L.) Landraces for Quality Attributes
Authors: Getachew Weldemichael, Sentayehu Alamerew, Leta Tulu, Gezahegn Berecha
Abstract:
Coffee quality improvement programs are becoming the focus of coffee research, as the world coffee consumption pattern has shifted to high-quality coffee. However, there is limited information on the genetic variation of C. arabica for quality improvement in potential specialty coffee growing areas of Ethiopia. Therefore, this experiment was conducted with the objectives of determining the magnitude of variation among 105 coffee accessions collected from east Wollega coffee growing areas and assessing correlations between the different coffee quality attributes. It was conducted in RCRD with three replications. Data on green bean physical characters (shape and make, bean color and odor) and organoleptic cup quality traits (aromatic intensity, aromatic quality, acidity, astringency, bitterness, body, flavor, and overall standard of the liquor) were recorded. Analysis of variance, clustering, genetic divergence, principal component and correlation analyses were performed using SAS software. The results revealed that there were highly significant differences (P<0.01) among the accessions for all quality attributes except for odor and bitterness. Among the tested accessions, EW104/09, EW101/09, EW58/09, EW77/09, EW35/09, EW71/09, EW68/09, EW96/09, EW83/09 and EW72/09 had the highest total coffee quality values (the sum of bean physical and cup quality attributes). These genotypes could serve as a source of genes for green bean physical characters and cup quality improvement in Arabica coffee. Furthermore, cluster analysis grouped the coffee accessions into five clusters with significant inter-cluster distances, implying that there is moderate diversity among the accessions and that crossing accessions from these divergent inter-clusters would result in heterosis and recombinants in segregating generations. The principal component analysis revealed that the first three principal components with eigenvalues greater than unity accounted for 83.1% of the total variability arising from the variation of the nine quality attributes considered for PC analysis, indicating that all quality attributes contribute equally to the grouping of the accessions into different clusters. Organoleptic cup quality attributes showed positive and significant correlations at both the genotypic and phenotypic levels, demonstrating the possibility of simultaneous improvement of the traits. Path coefficient analysis revealed that acidity, flavor, and body had a high positive direct effect on overall cup quality, implying that these traits can be used as indirect criteria to improve overall coffee quality. Therefore, it was concluded that there is considerable variation among the accessions, which need to be properly conserved for future improvement of coffee quality. However, the variability observed for quality attributes must be further verified using biochemical and molecular analysis. Keywords: accessions, Coffea arabica, cluster analysis, correlation, principal component
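To illustrate the multivariate steps described above (standardization, principal component analysis, and clustering of accessions), here is a minimal Python sketch; the 105 x 9 score matrix is randomly generated for illustration and is not the experimental data.

```python
# A minimal sketch of clustering accessions on quality attributes and checking how much
# variance the leading principal components explain; the data matrix is purely illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# 105 accessions x 9 quality attributes (acidity, body, flavor, ...), hypothetical scores.
scores = rng.normal(loc=6.0, scale=1.0, size=(105, 9))

X = StandardScaler().fit_transform(scores)
pca = PCA().fit(X)
print("variance explained by first 3 PCs:", pca.explained_variance_ratio_[:3].sum().round(3))

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
print("accessions per cluster:", np.bincount(clusters))
```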
Procedia PDF Downloads 166
1374 Validating the Micro-Dynamic Rule in Opinion Dynamics Models
Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data from experimental questions without testing whether differences existed between them. Indeed, it is possible that different topics could show different dynamics. For example, people may be more prone to accepting someone else's opinion regarding less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') and the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed to the participant someone else's opinion on the same topic and, after a distraction task, we repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree=1 and disagree=-1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar between all topics. This suggested that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants that change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of directly jumping to the opposite continuous opinion. As expected, in the data, we also observed the effect of social influence. This means that exposing participants to 'agree' or 'disagree' influenced them toward respectively higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people that changed from 'agree' to 'disagree,' even if they were exposed to 'agree.' This phenomenon is surprising, as, in the standard literature, the strength of the noise is usually smaller than the strength of social influence. Finally, we also built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule. This also allows us to build models which are directly grounded in experimental results. Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule
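To make the continuous-opinion encoding and a possible update rule concrete, here is a minimal Python sketch of an agent-based simulation; the influence strength and noise level are assumptions for illustration, not the fitted model reported above.

```python
# A minimal sketch of a continuous-opinion simulation using the encoding described above
# (sign of agree/disagree times certainty in [1, 10]); the update parameters are assumed.
import numpy as np

rng = np.random.default_rng(0)
N, STEPS = 200, 5000

# Opinion in {-1 (disagree), +1 (agree)} times certainty in [1, 10] -> range [-10, 10].
opinion = (rng.choice([-1, 1], size=N) * rng.integers(1, 11, size=N)).astype(float)

MU = 0.1      # strength of social influence (assumed)
NOISE = 1.5   # random variation, set stronger than the influence, as observed in the data

for _ in range(STEPS):
    i, j = rng.integers(0, N, size=2)  # random pair; i is influenced by j
    # i only sees j's sign ('agree'/'disagree'), mirroring the natural-language design.
    opinion[i] += MU * 10 * np.sign(opinion[j]) + rng.normal(scale=NOISE)
    opinion[i] = np.clip(opinion[i], -10, 10)

print("share agreeing:", (opinion > 0).mean())
print("mean |opinion| (polarization proxy):", np.abs(opinion).mean().round(2))
```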
Procedia PDF Downloads 162
1373 Student Veterans’ Transition to Nursing Education: Barriers and Facilitators
Authors: Bruce Hunter
Abstract:
Background: The transition for student veterans from military service to higher education can be a challenging endeavor, especially for those pursuing an education in nursing. While the experiences and perspectives of each student veteran is unique, their successful integration into an academic environment can be influenced by a complex array of barriers and facilitators. This mixed-methods study aims to explore the themes and concepts that can be found in the transition experiences of student veterans in nursing education, with a focus on identifying the barriers they face and the facilitators that support their success. Methods: This study utilizes an explanatory mixed-methods approach. The research participants include student veterans enrolled in nursing programs across three academic institutions in the Southeastern United States. Quantitative Phase: A Likert scale instrument is distributed to a sample of student veterans in nursing programs. The survey assesses demographic information, academic experiences, social experiences, and perceptions of institutional support. Quantitative data is analyzed using descriptive statistics to assess demographics and to identify barriers and facilitators to the transition. Qualitative Phase: Two open-ended questions were posed to student veterans to explore their lived experiences, barriers, and facilitators during the transition to nursing education and to further explain the quantitative findings. Thematic analysis with line-by-line coding is employed to identify recurring themes and narratives that may shed light on the barriers and facilitators encountered. Results: This study found that the successful academic integration of student veterans lies in recognizing the diversity of values and attitudes among student veterans, understanding the potential challenges they face, and engaging in initiative-taking steps to create an inclusive and supportive academic environment that accommodates the unique experiences of this demographic. Addressing these academic and social integration concerns can contribute to a more understanding environment for student veterans in the BSN program. Conclusion: Providing support during this transitional period is crucial not only for retaining veterans, but also for bolstering their success in achieving the status of registered nurses. Acquiring an understanding of military culture emerges as an essential initial step for nursing faculty in student veteran retention and for successful completion of their programs. Participants found that their transition experience lacked meaningful social interactions, which could foster a positive learning environment, enhance their emotional well-being, and could contribute significantly to their overall success and satisfaction in their nursing education journey. Recognizing and promoting academic and social integration is important in helping veterans experience a smooth transition into and through the unfamiliar academic environment of nursing education.Keywords: nursing, education, student veterans, barriers, facilitators
Procedia PDF Downloads 49
1372 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research is related to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on the control of the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation which allow determining these parameters. It is shown that these parameters can be deduced from data on the spatial distributions of diffusant concentration and average crystalline grain size in glass-ceramics samples subjected to ion-exchange treatment. Measurements at least at two temperatures and two processing times at each temperature are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubical samples of the glass-ceramics (6x6x6 mm3) underwent the ion exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion exchange processing resulted in the vitrification of the glass-ceramics in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples, and their large facets were polished. These slabs were used to find the profiles of diffusant concentration and average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of the glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all the above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters, which provided a reliable coincidence of the simulation and experimental data, were found. This ensured the adequate modeling of the process of glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of crystalline grains. Keywords: diffusion, glass-ceramics, ion exchange, vitrification
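The published model is not reproduced here, but the following heavily simplified Python sketch illustrates the general idea of a coupled ion-diffusion and grain-dissolution simulation governed by dimensionless parameters β and γ; the specific equations, rate law, boundary conditions, and parameter values are assumptions for illustration only.

```python
# A heavily simplified, illustrative sketch of a coupled 1D ion-diffusion / grain-dissolution
# model parameterized by beta (solubility threshold) and gamma (time-scale ratio).
# These equations are assumptions for illustration, not the published theoretical model.
import numpy as np

NX, NT = 100, 20000
dx, dt = 1.0 / NX, 1e-5
D = 1.0                   # dimensionless diffusivity
beta, gamma = 0.3, 10.0   # assumed parameter values

c = np.zeros(NX)          # exchanged-ion (diffusant) concentration profile
c[0] = 1.0                # fixed surface concentration (salt-melt boundary)
r = np.ones(NX)           # normalized average grain size

for _ in range(NT):
    lap = np.zeros(NX)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # explicit finite-difference Laplacian
    c[1:-1] += dt * D * lap[1:-1]
    c[0], c[-1] = 1.0, c[-2]                             # boundary conditions
    # Grains dissolve where the diffusant exceeds the solubility threshold beta,
    # at a rate set by gamma.
    r = np.clip(r - dt * gamma * np.clip(c - beta, 0.0, None), 0.0, None)

print("normalized grain size: surface =", round(float(r[0]), 2), ", bulk =", round(float(r[-1]), 2))
```

Fitting β and γ would then amount to repeating such a forward simulation for candidate values and comparing the simulated concentration and grain-size profiles with the measured ones.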
Procedia PDF Downloads 269
1371 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies to handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of smart value chains for the purpose of improvements in productivity, handling the increasing time and cost pressure and the need of individualized production. Therefore, companies need to ensure quick and flexible decisions to create self-optimizing processes and, consequently, to make their production more efficient. Lean production, as the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant flexible production and constantly changing market and environmental conditions. To lift performance limits, which are inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies. There is no practicable framework, which instructs the transformation of current value chains into digital pervasive value chains. Current research shows that a connection between lean production and digitalization exists. This link is based on factors such as people, technology and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors people, technology and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps: The first step of ‘target definition’ describes the target situation and defines the depth of the analysis with regards to the inspection area and the level of detail. The second step of ‘analysis of the value chain’ verifies the lean-ability of processes and lies in a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the ‘digital evaluation process’ ensures the usefulness of digital adaptions regarding their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production achieves a raise of their inbuilt performance limits.Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain
Procedia PDF Downloads 313
1370 The Relationship between Osteoporosis-Related Knowledge and Physical Activity among Women Age over 50 Years
Authors: P. Tardi, B. Szilagyi, A. Makai, P. Acs, M. Hock, M. Jaromi
Abstract:
Osteoporosis is becoming a major public health problem, particularly in postmenopausal women, as the incidence of this disease is getting higher. Nowadays, one of the most common chronic musculoskeletal diseases is osteoporosis. Osteoporosis-related knowledge is an important contributor to prevent or to treat osteoporosis. The most important strategies to prevent or treat the disease are increasing the level of physical activity at all ages, cessation of smoking, reduction of alcohol consumption, adequate dietary calcium, and vitamin D intake. The aim of the study was to measure the osteoporosis-related knowledge and physical activity among women age over 50 years. For the measurements, we used the osteoporosis questionnaire (OPQ) to examine the disease-specific knowledge and the global physical activity questionnaire (GPAQ) to measure the quantity and quality of the physical activity. The OPQ is a self-administered 20-item questionnaire with five categories: general information, risk factors, investigations, consequences, and treatment. There are four choices per question (one of them is the 'I do not know'). The filler gets +1 for a good answer, -1 point for a bad answer, and 0 for 'I do not know' answer. We contacted with 326 women (63.08 ± 9.36 year) to fill out the questionnaires. Descriptive analysis was carried out, and we calculated Spearman's correlation coefficient to examine the relationship between the variables. Data were entered into Microsoft Excel, and all statistical analyses were performed using SPSS (Version 24). The participants of the study (n=326) reached 8.76 ± 6.94 points on OPQ. Significant (p < 0.001) differences were found in the results of OPQ according to the highest level of education. It was observed that the score of the participants with osteoporosis (10.07 ± 6.82 points) was significantly (p=0.003) higher than participants without osteoporosis (9.38 ± 6.66 points) and the score of those women (6.49 ± 6.97 points) who did not know that osteoporosis exists in their case. The GPAQ results showed the sample physical activity in the dimensions of vigorous work (479.86 ± 684.02 min/week); moderate work (678.16 ± 804.5 min/week); travel (262.83 ± 380.27 min/week); vigorous recreation (77.71 ± 123.46 min/week); moderate recreation (115.15 ± 154.82 min/week) and total weekly physical activity (1645.99 ± 1432.88 min/week). Significant correlations were found between the osteoporosis-related knowledge and the physical activity in travel (R=0.21; p < 0.001), vigorous recreation (R=0.35; p < 0.001), moderate recreation (R=0.35; p < 0.001), total vigorous minutes/week (R=0.15; p=0.001) and total moderate minutes/week (R=0.13; p=0.04) dimensions. According to the results that were achieved, the highest level of education significantly determines osteoporosis-related knowledge. Physical activity is an important contributor to prevent or to treat osteoporosis, and it showed a significant correlation with osteoporosis-related knowledge. Based on the results, the development of osteoporosis-related knowledge may help to improve the level of physical activity, especially recreation. Acknowledgment: Supported by the ÚNKP-20-1 New National Excellence Program of The Ministry for Innovation and Technology from the Source of the National Research, Development and Innovation Fund.Keywords: osteoporosis, osteoporosis-related knowledge, physical activity, prevention
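To illustrate the OPQ scoring rule quoted above (+1 for a correct answer, -1 for an incorrect answer, 0 for "I do not know") and a Spearman correlation with weekly activity minutes, here is a minimal Python sketch; the answer matrix, answer key, and activity data are hypothetical.

```python
# A minimal sketch of OPQ scoring and a Spearman correlation with activity minutes;
# all respondent data below are simulated for illustration.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_respondents, n_items = 326, 20
# 0 = "I do not know", 1..3 = answer options; option 1 is taken as correct for every item.
answers = rng.integers(0, 4, size=(n_respondents, n_items))
key = np.ones(n_items, dtype=int)

opq_score = np.where(answers == 0, 0, np.where(answers == key, 1, -1)).sum(axis=1)

# Hypothetical weekly recreation minutes, loosely tied to knowledge for illustration.
recreation_minutes = np.clip(100 + 5 * opq_score + rng.normal(scale=60, size=n_respondents), 0, None)

rho, p = spearmanr(opq_score, recreation_minutes)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```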
Procedia PDF Downloads 113
1369 Deforestation, Vulnerability and Adaptation Strategies of Rural Farmers: The Case of Central Rift Valley Region of Ethiopia
Authors: Dembel Bonta Gebeyehu
Abstract:
In the study area, the impacts of deforestation for environmental degradation and livelihood of farmers manifest in different faces. They are more vulnerable as they depend on rain-fed agriculture and immediate natural forests. On the other hand, after planting seedling, waste disposal and management system of the plastic cover is poorly practiced and administered in the country in general and in the study area in particular. If this situation continues, the plastic waste would also accentuate land degradation. Besides, there is the absence of empirical studies conducted comprehensively on the research under study the case. The results of the study could suffice to inform any intervention schemes or to contribute to the existing knowledge on these issues. The study employed a qualitative approach based on intensive fieldwork data collected via various tools namely open-ended interviews, focus group discussion, key-informant interview and non-participant observation. The collected data was duly transcribed and latter categorized into different labels based on pre-determined themes to make further analysis. The major causes of deforestation were the expansion of agricultural land, poor administration, population growth, and the absence of conservation methods. The farmers are vulnerable to soil erosion and soil infertility culminating in low agricultural production; loss of grazing land and decline of livestock production; climate change; and deterioration of social capital. Their adaptation and coping strategies include natural conservation measures, diversification of income sources, safety-net program, and migration. Due to participatory natural resource conservation measures, soil erosion has been decreased and protected, indigenous woodlands started to regenerate. These brought farmers’ attitudinal change. The existing forestation program has many flaws. Especially, after planting seedlings, there is no mechanism for the plastic waste disposal and management. It was also found out organizational challenges among the mandated offices In the studied area, deforestation is aggravated by a number of factors, which made the farmers vulnerable. The current forestation programs are not well-planned, implemented, and coordinated. Sustainable and efficient seedling plastic cover collection and reuse methods should be devised. This is possible through creating awareness, organizing micro and small enterprises to reuse, and generate income from the collected plastic etc.Keywords: land-cover and land-dynamics, vulnerability, adaptation strategy, mitigation strategies, sustainable plastic waste management
Procedia PDF Downloads 388
1368 Narrating Atatürk Cultural Center as a Place of Memory and a Space of Politics
Authors: Birge Yildirim Okta
Abstract:
This paper aims to narrate the story of Atatürk Cultural Center in Taksim Square, which was demolished in 2018 and discuss its architectonic as a social place of memory and its existence and demolishment as the space of politics. The paper uses narrative discourse analysis to research Atatürk Cultural Center (AKM) as a place of memory and space of politics from the establishment of the Turkish Republic (1923) until today. After the establishment of the Turkish Republic, one of the most important implementations in Taksim Square, reflecting the internationalist style, was the construction of the Opera Building in Prost Plan. The first design of the opera building belonged to Aguste Perret, which could not be implemented due to economic hardship during World War II. Later the project was designed by architects Feridun Kip and Rüknettin Güney in 1946 but could not be completed due to the 1960 military coup. Later the project was shifted to another architect Hayati Tabanlıoglu, with a change in its function as a cultural center. Eventually, the construction of the building was completed in 1969 in a completely different design. AKM became a symbol of republican modernism not only with its modern architectural style but also with it is function as the first opera building of the Republic, reflecting the western, modern cultural heritage by professional groups, artists, and the intelligentsia. In 2005, Istanbul’s council for the protection of cultural heritage decided to list AKM as a grade 1 cultural heritage, ending a period of controversy which saw calls for the demolition of the center as it was claimed, it ended its useful lifespan. In 2008 the building was announced to be closed for repairs and restoration. Over the following years, the building was demolished piece by piece silently while the Taksim mosque has been built just in front of Atatürk Cultural Center. Belonging to the early republican period AKM was a representation of the cultural production of modern society for the emergence and westward looking, secular public space in Turkey. Its erasure from the Taksim scene under the rule of the conservative government, Justice, and Development Party, and the construction of the Taksim mosque in front of AKM’s parcel is also representational. The question of governing the city through space has always been an important aspect for governments, those holding political power since cities are the chaotic environments that are seen as a threat for the governments, carrying the tensions of the proletariat or the contradictory groups. The story of AKM as a dispositive or a regulatory apparatus demonstrates how space itself is becoming a political medium, to transform the socio-political condition. The paper narrates the existence and demolishment of the Atatürk Cultural Center by discussing the constructed and demolished building as a place of memory and space of politics.Keywords: space of politics, place of memory, Atatürk Cultural Center, Taksim square, collective memory
Procedia PDF Downloads 1401367 A Village Transformed as Census Town a Case Study of Village Nilpur, Tehsil Rajpura, District Patiala (Punjab, India)
Authors: Preetinder Kaur Randhawa
Abstract:
Rural areas can be differentiated from urban areas in terms of their economic activities: rural areas are primarily engaged in the agricultural sector and provide natural resources, whereas urban areas are primarily engaged in the infrastructure sector and provide manufacturing and services. The Census of India defines a Census Town as an area that satisfies three criteria: a population exceeding 5,000, at least 75 percent of the male working population engaged in the non-agricultural sector, and a minimum population density of 400 persons per square kilometer. The growth of urban areas can be attributed to improved transport facilities, a massive decline in agricultural workers, especially male workers, and the shift of workers to non-agricultural activities. This study examines the pattern and process by which rural areas are transformed into urban areas or census towns, and it analyzes the various factors responsible for land transformation as well as the socio-economic transformation of the village population. Nilpur (CT), which belongs to Rajpura Tehsil in Patiala district, Punjab, has been selected for the present study. The methodology adopted includes qualitative and quantitative research designs and methods based on secondary data. Secondary data were collected from the unpublished records of the revenue record office of Rajpura Tehsil and the Primary Census Abstract of Patiala district, Census of India 2011. The results show that the rate of transformation of Nilpur into a census town has been one of the highest among the villages of Rajpura Tehsil. The census town has evolved through the evolutionary process of human settlement, growing in size, population, and physical development; this requires a complete economic transformation and the attainment of a high level of technological development. Urban design and the construction of buildings and infrastructure can be carried out better and faster and can be used to aid human habitation and enhance the quality of life. The study found that in the selected area, i.e., Nilpur (CT), the literacy rate increased from 67.6 percent in 2001 to 72.1 percent in 2011, and the non-agricultural workforce increased from 81.1 percent in 2001 to 95.2 percent in 2011. It is clear that the increase in literacy rate has had a positive impact on the shift towards non-agricultural work. The study concludes that rural-urban linkages are important tools for understanding the complexities of people's livelihoods and strategies, which involve mobility, migration, and the diversification of income sources and occupations.Keywords: Census Town, India, Nilpur, Punjab
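For illustration, the three-part Census of India rule described above can be written as a simple check; the function name and the input fields below are hypothetical, and the example numbers are not Nilpur's actual figures.

```python
def is_census_town(population: int, male_workers: int,
                   male_nonagri_workers: int, area_sq_km: float) -> bool:
    """Hypothetical sketch of the criteria quoted above: population above 5,000,
    at least 75% of male workers in the non-agricultural sector, and a density
    of at least 400 persons per square kilometer."""
    density = population / area_sq_km
    nonagri_share = male_nonagri_workers / male_workers
    return population > 5000 and nonagri_share >= 0.75 and density >= 400

# Illustrative numbers only.
print(is_census_town(population=6200, male_workers=1800,
                     male_nonagri_workers=1710, area_sq_km=3.5))  # True
```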
Procedia PDF Downloads 2511366 Semiotics of the New Commercial Music Paradigm
Authors: Mladen Milicevic
Abstract:
This presentation addresses how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it deals with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age where most popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform), and, depending on marketing viability and its potential to generate additional revenue, most of the 'older' music is still being digitized. Once the music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, because they also exist as a digital text file to which any kind of NCapture-like analysis may be applied. So, by employing statistical examination of different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from its audio features. Thus, it has been established that a successful pop song must include: 100 bpm or more; an 8-second intro; use of the pronoun 'you' within 20 seconds of the start of the song; a bridge (middle 8) hit between 2 minutes and 2 minutes 30 seconds; an average of 7 repetitions of the title; and the creation of an expectation that is fulfilled in the title. For a country song: 100 bpm or less for a male artist; a 14-second intro; use of the pronoun 'you' within the first 20 seconds of the intro; a bridge (middle 8) between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and the creation of an expectation that is fulfilled within 60 seconds. This approach to commercial popular music minimizes human influence when it comes to which 'artist' a record label is going to sign and market. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of the record labels made personal aesthetic judgments based on their extensive experience in the music industry. Now, computer music-analysis programs are replacing them in an attempt to minimize the investment risk of panicking record labels, in an environment where nobody can predict the future of the recording industry. The impact on consumers' taste, filtered through the narrow bottleneck of the above-mentioned music selection by the record labels, has created some very peculiar effects not only on the taste of popular music consumers but also on the creative chops of music artists. The meaning of this semiological shift is the main focus of this research and paper presentation.Keywords: music, semiology, commercial, taste
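As a purely illustrative aid, the pop-song thresholds listed above can be restated as a checklist; the dictionary keys and function below are hypothetical and simply encode the figures quoted in the abstract, not Polyphonic HMI's actual algorithm.

```python
def pop_hit_checklist(song: dict) -> dict:
    """Boolean checks restating the pop-song thresholds quoted above.
    The input keys are a hypothetical schema, not an industry standard."""
    return {
        "tempo >= 100 bpm":             song["bpm"] >= 100,
        "intro <= 8 s":                 song["intro_seconds"] <= 8,
        "'you' within first 20 s":      song["first_you_seconds"] <= 20,
        "bridge between 2:00 and 2:30": 120 <= song["bridge_seconds"] <= 150,
        "about 7 title repetitions":    song["title_repetitions"] >= 7,
    }

# Illustrative example only.
example = {"bpm": 104, "intro_seconds": 7, "first_you_seconds": 12,
           "bridge_seconds": 138, "title_repetitions": 8}
for rule, passed in pop_hit_checklist(example).items():
    print(f"{rule}: {'yes' if passed else 'no'}")
```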
Procedia PDF Downloads 3931365 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection), in the Diagnosis of Helicobacter pylori Infections
Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor
Abstract:
Aim: The aim of this study was to compare the diagnostic values of two different molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. In addition, the study aimed to determine the resistance ratios of H. pylori strains isolated from gastric biopsy cultures against clarithromycin and quinolones, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who were admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all biopsy samples was investigated by five different diagnostic methods: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (H) (Giemsa, Hematoxylin and Eosin staining), rapid urease test (RUT) (CLOtest, Kimberly-Clark, USA), and two different molecular tests, GenoType® HelicoDR (Hain, Germany), based on a DNA strip assay, and Seeplex® H. pylori-ClaR-ACE Detection (Seegene, South Korea), based on multiplex PCR. Antimicrobial resistance of the H. pylori isolates against clarithromycin and levofloxacin was determined by the GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. The sensitivity and specificity rates of the two molecular methods used in the study were calculated against the two gold standards mentioned above. Results: A total of 266 patients between 16 and 83 years old, of whom 144 (54.1%) were female and 122 (45.9%) were male, were included in the study. 144 patients were culture positive, and 157 were positive by both H and RUT. 179 patients were positive with both GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection. The sensitivity and specificity rates of the studied methods were as follows: C, 80.9% and 84.4%; H + RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori-ClaR-ACE Detection, 100% and 71.3%. A strong correlation was found between C and H+RUT, between C and GenoType® HelicoDR, and between C and Seeplex® H. pylori-ClaR-ACE Detection (r: 0.644, p: 0.000; r: 0.757, p: 0.000; and r: 0.757, p: 0.000, respectively). Of the 144 isolated H. pylori strains, 24 (16.6%) were resistant to clarithromycin and 18 (12.5%) to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and in 6 cases with Seeplex® H. pylori-ClaR-ACE Detection. Conclusion: In our study, it was concluded that GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection were the most sensitive of all the diagnostic methods investigated (C, H, and RUT).Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex ® H. pylori -ClaR- ACE Detection, antimicrobial resistance
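For readers less familiar with the diagnostic-accuracy terms, the sketch below reconstructs a confusion matrix for GenoType® HelicoDR against culture that is consistent with the figures reported above (144 culture-positive, 122 culture-negative, 179 test-positive); the individual cell counts are inferred from those totals, not taken from the study's tables.

```python
# Confusion matrix inferred to be consistent with the reported figures,
# using culture as the gold standard; the cell counts are an assumption.
TP = 144          # culture-positive and GenoType-positive (sensitivity reported as 100%)
FN = 0            # culture-positive but GenoType-negative
FP = 179 - TP     # 35: GenoType-positive but culture-negative
TN = 122 - FP     # 87: negative by both methods

sensitivity = TP / (TP + FN)   # 144/144 = 1.000
specificity = TN / (TN + FP)   # 87/122 = 0.713

print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```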
Procedia PDF Downloads 1681364 Combined Treatment of Estrogen-Receptor Positive Breast Microtumors with 4-Hydroxytamoxifen and Novel Non-Steroidal Diethyl Stilbestrol-Like Analog Produces Enhanced Preclinical Treatment Response and Decreased Drug Resistance
Authors: Sarah Crawford, Gerry Lesley
Abstract:
This research is a pre-clinical assessment of the anti-cancer effects of novel non-steroidal diethyl stilbestrol-like estrogen analogs in estrogen-receptor positive/progesterone-receptor positive human breast cancer microtumors of the MCF-7 cell line. A tamoxifen analog formulation (Tam A1) was used as a single agent or in combination with therapeutic concentrations of 4-hydroxytamoxifen, currently used as a long-term treatment for the prevention of breast cancer recurrence in women with estrogen-receptor positive/progesterone-receptor positive malignancies. At concentrations ranging from 30-50 microM, Tam A1 induced microtumor disaggregation and cell death, and the incremental cytotoxic effects correlated with increasing concentrations of Tam A1. Live tumor microscopy showed that microtumors displayed diffuse borders and that substrate-attached cells were rounded up and poorly adherent. A complete cytotoxic effect was observed using 40-50 microM Tam A1, with time-course kinetics similar to 4-hydroxytamoxifen. Combined treatment with Tam A1 (30-50 microM) and 4-hydroxytamoxifen (10-15 microM) induced a highly cytotoxic, synergistic combined treatment response that was more rapid and complete than using 4-hydroxytamoxifen as a single-agent therapeutic. Microtumors completely dispersed or formed necrotic foci, indicating a highly cytotoxic combined treatment response. Moreover, breast cancer microtumors treated with both 4-hydroxytamoxifen and Tam A1 displayed lower levels of long-term post-treatment regrowth, a critical parameter of primary drug resistance, than observed for 4-hydroxytamoxifen used as a single-agent therapeutic. Tumor regrowth at 6 weeks post-treatment with single-agent 4-hydroxytamoxifen, single-agent Tam A1, or the combined treatment was assessed for the development of drug resistance; breast cancer cells treated with both 4-hydroxytamoxifen and Tam A1 displayed significantly lower levels of post-treatment regrowth, indicative of decreased drug resistance, than observed for either single treatment modality. The preclinical data suggest that combined treatment involving the use of tamoxifen analogs may be a novel clinical approach for long-term maintenance therapy in patients with estrogen-receptor positive/progesterone-receptor positive breast cancer receiving hormonal therapy to prevent disease recurrence. Detailed data on time-course, IC50, and tumor regrowth assays post-treatment, as well as a proposed mechanism of action to account for the observed synergistic drug effects, will be presented.Keywords: 4-hydroxytamoxifen, tamoxifen analog, drug-resistance, microtumors
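As context for the IC50 assays mentioned above, the sketch below fits a standard four-parameter (Hill-type) dose-response curve to made-up viability data; the data points, initial guesses, and resulting IC50 are purely hypothetical and are not results from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, hill_slope):
    """Four-parameter logistic (Hill) dose-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill_slope)

# Hypothetical viability data (% of untreated control) across drug concentrations (microM).
conc = np.array([5, 10, 20, 30, 40, 50], dtype=float)
viability = np.array([95, 88, 64, 38, 18, 8], dtype=float)

params, _ = curve_fit(hill, conc, viability, p0=[5, 100, 25, 2])
bottom, top, ic50, slope = params
print(f"estimated IC50 ~ {ic50:.1f} microM")  # illustrative value only
```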
Procedia PDF Downloads 681363 Perception of Nurses and Caregivers on Fall Preventive Management for Hospitalized Children Based on Ecological Model
Authors: Mirim Kim, Won-Oak Oh
Abstract:
Purpose: The purpose of this study was to identify hospitalized children's fall risk factors, fall prevention status, and fall prevention strategies as recognized by nurses and by caregivers of hospitalized children, and to present an ecological model for fall preventive management in hospitalized children. Method: The participants of this study were 14 nurses working in medical institutions with more than one year of child care experience and 14 adult caregivers of children under 6 years of age receiving inpatient treatment at a medical institution. One-to-one interviews were conducted to identify their perceptions of fall preventive management. Transcribed data were analyzed using the latent content analysis method. Results: Fall risk factors in hospitalized children were 'unpredictable behavior', 'instability', 'lack of awareness about danger', 'lack of awareness about falls', 'lack of child control ability', 'lack of awareness about the importance of fall prevention', 'lack of sensitivity to children', 'untidy environment around children', 'lack of personalized facilities for children', 'unsafe facility', 'lack of partnership between healthcare provider and caregiver', 'lack of human resources', 'inadequate fall prevention policy', 'lack of promotion about fall prevention', and 'a performance-oriented culture'. The fall preventive management status of hospitalized children comprised 'absence of fall prevention capability', 'efforts not to fall', 'blocking fall risk situation', 'limit the scope of children's activity when there is no caregiver', 'encourage caregivers' fall prevention activities', 'creating a safe environment surrounding hospitalized children', 'special management for fall high risk children', 'mutual cooperation between healthcare providers and caregivers', 'implementation of fall prevention policy', and 'providing guide signs about fall risk'. Fall preventive management strategies for hospitalized children were 'restrain dangerous behavior', 'inspiring awareness about fall', 'providing fall preventive education considering the child's eye level', 'efforts to become an active subject of fall prevention activities', 'providing customed fall prevention education', 'open communication between healthcare providers and caregivers', 'infrastructure and personnel management to create safe hospital environment', 'expansion fall prevention campaign', 'development and application of a valid fall assessment instrument', and 'conversion of awareness about safety'. Conclusion: In this study, the ecological model of fall preventive management for hospitalized children reflects various factors that directly or indirectly affect the fall prevention of hospitalized children. Therefore, these results can be considered useful baseline data for developing systematic fall prevention programs and hospital policies to prevent fall accidents in hospitalized children. Funding: This study was funded by the National Research Foundation of South Korea (grant number NRF-2016R1A2B1015455).Keywords: fall down, safety culture, hospitalized children, risk factors
Procedia PDF Downloads 1641362 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking
Authors: Trevor Toy, Josef Langerman
Abstract:
Around a quarter of the world’s data is generated by the financial sector, with an estimated 708.5 billion global non-cash transactions reached between 2018 and 2019. And with Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately, and consensually share the data required to enable it. The integration and sharing of anonymised transactional data are still operated in silos and centralised among the large corporate entities in the ecosystem that have the resources to do so; smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data for analytical purposes and to support the rapid global adoption of Open Banking. The following research provides a solution framework that aims to deliver a secure decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transactional-related data from merchants, enriching the data product available to build a comprehensive view of a data subject’s spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component of the platform is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features that pertain to user interactions on the platform. One of the platform’s key features is enabling the participation and management of personal data by the individuals from whom the data is being generated. The framework was developed into a proof-of-concept on the Ethereum blockchain, where an individual can securely manage access to their own personal data and to that individual’s identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour in correlation with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.Keywords: big data markets, open banking, blockchain, personal data management
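To make the consent-and-access idea concrete, here is a minimal, purely conceptual Python sketch of the market-layer logic described above (a provider lists a data product, the data subject grants consent, and only then can a consumer buy access); it is not the study's Ethereum contract, and all class and method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DataListing:
    """A transactional data product listed by a provider (hypothetical model)."""
    listing_id: str
    provider: str
    data_subject: str
    price: float
    consent_given: bool = False
    buyers: set = field(default_factory=set)

class Marketplace:
    def __init__(self):
        self.listings = {}

    def list_data(self, listing: DataListing):
        self.listings[listing.listing_id] = listing

    def grant_consent(self, listing_id: str, subject: str):
        lst = self.listings[listing_id]
        if lst.data_subject != subject:
            raise PermissionError("only the data subject can grant consent")
        lst.consent_given = True

    def purchase_access(self, listing_id: str, consumer: str) -> bool:
        lst = self.listings[listing_id]
        if not lst.consent_given:
            return False          # no consent, no access
        lst.buyers.add(consumer)  # payment settlement omitted in this sketch
        return True

# Illustrative flow: a bank lists data, the subject consents, a fintech buys access.
m = Marketplace()
m.list_data(DataListing("tx-001", provider="BankA", data_subject="alice", price=10.0))
print(m.purchase_access("tx-001", "fintechX"))   # False: consent not yet granted
m.grant_consent("tx-001", "alice")
print(m.purchase_access("tx-001", "fintechX"))   # True
```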
Procedia PDF Downloads 731361 Enhancing Mental Health Services Through Strategic Planning: The East Tennessee State University Counseling Center’s 2024-2028 Plan
Authors: R. M. Kilonzo, S. Bedingfield, K. Smith, K. Hudgins Smith, K. Couper, R. Ratley, Z. Taylor, A. Engelman, M. Renne
Abstract:
Introduction: The mental health needs of university students continue to evolve, necessitating a strategic approach to service delivery. The East Tennessee State University (ETSU) Counseling Center developed its inaugural Strategic Plan (2024-2028) to enhance student mental health services. The plan focuses on improving access, quality of care, and service visibility, aligning with the university’s mission to support academic success and student well-being. Aim: This strategic plan aims to establish a comprehensive framework for delivering high-quality, evidence-based mental health services to ETSU students, addressing current challenges and anticipating future needs. Methods: The development of the strategic plan was a collaborative effort involving the Counseling Center’s leadership and staff, with technical support from a Doctor of Public Health (community and behavioral health) intern. Multiple workshops, online/offline reviews, and stakeholder consultations were held to ensure a robust and inclusive process. A SWOT analysis and stakeholder mapping were conducted to identify strengths, weaknesses, opportunities, and challenges. Key performance indicators (KPIs) were set to measure service utilization, satisfaction, and outcomes. Results: The plan resulted in four strategic priorities: service application, visibility/accessibility, safety and satisfaction, and training programs. Key objectives include expanding counseling services, improving service access through outreach, reducing stigma, and increasing peer support programs. The plan also focuses on continuous quality improvement through data-driven assessments and research initiatives. Immediate outcomes include expanded group therapy, enhanced staff training, and increased mental health literacy across campus. Conclusion and Recommendation: The strategic plan provides a roadmap for addressing the mental health needs of ETSU students, with a clear focus on accessibility, inclusivity, and evidence-based practices. Implementing the plan will strengthen the Counseling Center’s capacity to meet the diverse needs of the student population. To ensure sustainability, it is recommended that the center continuously assess student needs, foster partnerships with university and external stakeholders, and advocate for increased funding to expand services and staff capacity.Keywords: strategic plan, university counseling center, mental health, students
Procedia PDF Downloads 191360 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves
Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada
Abstract:
Sustainable development is the stated goal of most countries at present, yet fossil fuels remain the mainstay of development in most of the world. Regrettably, the fossil fuel consumption rate is very high, and the world will soon face the problem of conventional fuel depletion. In addition, there are many environmental pollution problems resulting from the emission of harmful gases and vapors during fuel burning. Thus, clean, renewable energy has become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources, such as wind, solar, and wave energy. Energy can be obtained from the motion of sea waves almost all the time, whereas power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy, and its use guarantees safe environmental conditions. Cheap electricity can be generated from wave energy using different systems, such as the oscillating bodies system, the pendulum gate system, the ocean wave dragon system, and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. The paper provides a broad overview of the different design alternatives for sea wave energy converter systems, and the considered design alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a suitable hollow chamber at the shore, which is completely closed except at its base, where an open area gathers the incoming sea waves. The waves' motion pushes the air up and down through a suitable Wells turbine for generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting appropriate settings of some effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Furthermore, the paper compares the theoretical and experimental results of the built experimental prototype.Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine
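For readers unfamiliar with AHP, the sketch below computes a priority vector and consistency ratio from a pairwise comparison matrix; the matrix values and the four criteria implied by it are hypothetical placeholders, not the judgments used in the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four criteria
# (e.g., cost, efficiency, maintainability, environmental impact).
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1  ],
], dtype=float)

# Priority vector = normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.90   # Saaty's random index for n = 4
CR = CI / RI

print("priorities:", np.round(w, 3))
print("consistency ratio:", round(CR, 3))  # below 0.10 is usually considered acceptable
```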
Procedia PDF Downloads 1631359 The Epidemiology of Dengue in Taiwan during 2014-15: A Descriptive Analysis of the Severe Outbreaks of Central Surveillance System Data
Authors: Chu-Tzu Chen, Angela S. Huang, Yu-Min Chou, Chin-Hui Yang
Abstract:
Dengue is a major public health concern throughout tropical and sub-tropical regions. Taiwan is located in the western Pacific Ocean and spans the tropical and subtropical zones. The island remains humid throughout the year and receives abundant rainfall, and the temperature in summer is very hot in southern Taiwan. These conditions are ideal for the growth of dengue vectors and increase the risk of dengue outbreaks. During the first half of the 20th century, there were three island-wide dengue outbreaks (1915, 1931, and 1942). After almost forty years of dormancy, a DEN-2 outbreak occurred in Liuchiu Township, Pingtung County, in 1981. Thereafter, further dengue outbreaks of different scales occurred in southern Taiwan. However, there were more than ten thousand dengue cases in each of 2014 and 2015. This did not only affect human health but also caused widespread social disruption and economic losses. This study aims to describe the epidemiology of dengue in Taiwan, especially the severe outbreak in 2015, and to identify effective interventions for dengue control, including dengue vaccine development for the elderly. Methods: The study used the Notifiable Diseases Surveillance System database of the Taiwan Centers for Disease Control as the data source. All cases were reported using the uniform case definition and confirmed by NS1 rapid diagnosis or laboratory diagnosis. Results: In 2014, Taiwan experienced a serious DEN-1 outbreak with 15,492 locally-acquired cases, including 136 cases of dengue hemorrhagic fever (DHF), which caused 21 deaths. An even more serious DEN-2 outbreak occurred in 2015, with 43,419 locally-acquired cases. The epidemic occurred mainly in Tainan City (22,760 cases) and Kaohsiung City (19,723 cases) in southern Taiwan, and the cases were mainly adults. There were 228 deaths due to dengue infection, and the case fatality rate was 5.25 ‰. The average age of the deceased was 73.66 years (range 29-96), 86.84% of them were older than 60 years, and most of them had comorbidities. Reviewing the clinical manifestations of the 228 fatal cases, 38.16% (N=87) were reported with warning signs, while 51.75% (N=118) were reported without warning signs. Among the 87 fatal cases reported as dengue with warning signs, 89.53% were diagnosed with severe dengue and 84% needed intensive care. Conclusion: The year 2015 was characterized by large dengue outbreaks worldwide. The risk of serious dengue outbreaks may increase significantly in the future, and the elderly are the vulnerable group in Taiwan. A dengue vaccine was licensed at the end of 2015 for use in people 9-45 years of age living in endemic settings. In addition to carrying out research to identify new interventions for dengue control, developing a dengue vaccine for the elderly is very important to prevent severe dengue and deaths.Keywords: case fatality rate, dengue, dengue vaccine, the elderly
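The reported case fatality rate can be reproduced directly from the figures above; the short calculation below is only a consistency check on the published numbers.

```python
# Case fatality rate (per mille) from the reported 2015 figures.
deaths = 228
locally_acquired_cases = 43419

cfr_per_mille = deaths / locally_acquired_cases * 1000
print(f"CFR = {cfr_per_mille:.2f} per 1,000 cases")  # ~5.25, matching the reported 5.25 per mille
```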
Procedia PDF Downloads 2811358 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building
Authors: G. Wimmers
Abstract:
The University of Northern British Columbia needed a new laboratory building for its Master of Engineering in Integrated Wood Design program and its new Civil Engineering program. Since the University is committed to reducing its environmental footprint, and because the Master of Engineering program is actively involved in research on energy-efficient buildings, the decision was made to require the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in northern British Columbia, a city at the northern edge of climate zone 6 with average winter lows between -8 and -10.5 °C. The footprint of the building is 30 m x 30 m with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floor plan on two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building's gross volume is 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. Experience has shown that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since in reality air flows through all leakages of the building in both directions simultaneously. A specific detail or situation, such as overlapping but unsealed membranes, might be airtight in one direction due to the valve effect but open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume-to-envelope-area ratio. The building had to be very airtight, and the details for the window and door installations, all transitions from walls to roof and floor, the connections of the prefabricated wall panels, and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood-processing machinery. The testing was carried out in accordance with EN 13829 (method A), as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all walls, floors, and suspended ceiling volumes. This paper explores the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and discusses the crucial steps throughout the project phases and the most challenging details.Keywords: air changes, airtightness, envelope design, industrial building, passive house
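For reference, the air-change rate at 50 Pa is simply the measured airflow at 50 Pa divided by the net internal volume; the sketch below back-calculates the leakage airflow implied by the reported result (the flow value is derived from the published figures, not a measured value from the test report).

```python
# n50 = V50 / V_net, where V50 is the airflow at 50 Pa (m^3/h) and V_net the net volume (m^3).
n50_result = 0.07        # reported airtightness, ach@50Pa
net_volume_m3 = 7383     # net volume used for the Passive House calculation

implied_airflow_m3_per_h = n50_result * net_volume_m3
print(f"implied leakage airflow at 50 Pa: {implied_airflow_m3_per_h:.0f} m3/h")  # ~517 m3/h
```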
Procedia PDF Downloads 1481357 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields, including education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed to analyze these experiences in accessing, using, and producing quantitative data. The study utilized semi-structured interviews to capture differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department, allowing for a broader systems-level perspective. The study followed Núñez’s multilevel model of intersectionality. The key to Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of influences at the individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that background and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table. For example, those with a background in child development were aware of how their formal education failed to train them in the skills necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within the organization, their organization’s position with respect to others, and their degree of access to quantitative data. This, in turn, affected their sense of empowerment and agency in dealing with data, such as in shaping what data is collected and made available. These differences were reflected in the interviewees’ perceptions of and expectations for the ECE workforce. For example, one of the interviewees pointed out that many ECE professionals happen to use data out of the necessity of the moment; this lack of intentionality is both a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped in understanding the different elements that affect ECE professionals’ experiences with quantitative data. In particular, it was clear that these professionals are not being provided with the necessary support and that the field is not being intentional in creating data literacy skills for them, despite what is asked of them and of their work.Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 2531356 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks
Authors: Afnan Al-Romi, Iman Al-Momani
Abstract:
Applying Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, such systems carry risks, such as system vulnerabilities and security threats. The probability of these risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, possibly unattended environment, in addition to resource constraints in terms of processing, storage, and power, places such networks under stringent limitations, such as lifetime (i.e., period of operation) and security. The importance of WSN applications, which can be found in many military and civilian domains, has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, security solution systems have been developed by researchers in the form of software-based network Intrusion Detection Systems (IDSs). However, it has been shown that these IDSs are neither secure enough nor accurate enough to detect all malicious attack behaviours. The problem, then, is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption in WSNs caused by the IDS itself. In other words, not all requirements are implemented and then traced; moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks in current IDSs are due to researchers and developers not following structured software development processes when developing IDSs, which results in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper studies the importance of Requirement Engineering when developing IDSs. It also studies a set of existing IDSs and illustrates the absence of Requirement Engineering and its effects. Conclusions are then drawn regarding applying requirement engineering to systems so that they deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy, and reliability.Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN
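As a minimal illustration of the traceability gap described above, the sketch below cross-checks a list of IDS requirements against the design elements and tests that claim to cover them; the requirement IDs and coverage links are invented for the example and are not drawn from any surveyed IDS.

```python
# Hypothetical requirements for a WSN intrusion detection system and the
# artifacts that claim to cover them; any requirement with no linked design
# element or no linked test is flagged as a traceability gap.
requirements = {
    "R1": "Detect selective-forwarding behaviour",
    "R2": "Keep per-node energy overhead below a set budget",
    "R3": "Raise an alert within a bounded delay",
    "R4": "Resist false-data injection by compromised nodes",
}
design_links = {"R1": ["anomaly-module"], "R2": ["duty-cycle-scheduler"], "R3": ["alert-manager"]}
test_links   = {"R1": ["test_forwarding_drop"], "R3": ["test_alert_latency"]}

for rid, text in requirements.items():
    missing = []
    if not design_links.get(rid):
        missing.append("design")
    if not test_links.get(rid):
        missing.append("test")
    status = "covered" if not missing else "GAP: no " + " or ".join(missing)
    print(f"{rid} ({text}): {status}")
```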
Procedia PDF Downloads 323