Search results for: transport research
1335 Study on the Rapid Start-up and Functional Microorganisms of the Coupled Process of Short-range Nitrification and Anammox in Landfill Leachate Treatment
Authors: Lina Wu
Abstract:
The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and poses a threat to water quality, and nitrogen pollution control has become a global concern. The water pollution situation in China is still not optimistic. As a typical high-ammonia-nitrogen organic wastewater, landfill leachate is more difficult to treat than domestic sewage because of its complex composition, high toxicity, and high pollutant concentrations. Many studies have shown that autotrophic anammox bacteria in nature can combine nitrite and ammonia nitrogen through functional genes, without an external carbon source, to achieve total nitrogen removal, which makes the process well suited to removing nitrogen from leachate. In addition, the process saves considerable aeration energy compared with traditional nitrogen removal processes. Therefore, anammox plays an important role in nitrogen conversion and energy saving. A process composed of short-range (partial) nitrification and denitrification coupled with anammox ensures total nitrogen removal and improves removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal technology. A continuous-flow process for treating late (mature) leachate [an up-flow anaerobic sludge blanket reactor (UASB) followed by an anoxic/oxic (A/O)–anaerobic ammonia oxidation reactor (ANAOR, or anammox reactor)] has been developed to achieve autotrophic deep nitrogen removal. Optimal process parameters such as hydraulic retention time and nitrification flow rate have been obtained and applied to achieve rapid start-up, stable operation, and high removal efficiency. Besides, characterizing the microbial community during start-up of the anammox process system and analyzing its microbial ecological mechanism provide a basis for enriching the anammox community under high environmental stress. One study developed partial nitrification-anammox (PN/A) using an internal circulation (IC) system and a biological aerated filter (BAF) biofilm reactor (IBBR), in which the treated water is closer in character to landfill leachate. However, high-throughput sequencing still needs to be applied to analyze the changes in microbial diversity, related functional genera, and functional genes of this system under optimal conditions, providing a theoretical and practical basis for the engineering application of novel anammox systems in biogas slurry treatment and resource utilization.
Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification
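As a rough illustration of why partial nitrification is paired with anammox, the sketch below estimates the ammonium-to-nitrite split a nitritation stage should target. This is a hedged example: the 1:1.32 NH4+:NO2- molar ratio is the commonly cited anammox stoichiometry from the general literature, not a value reported in this abstract, and all names and the influent concentration are hypothetical.

```python
# Hedged sketch: target ammonium/nitrite split for a partial-nitritation
# stage feeding an anammox reactor. The 1:1.32 NH4+:NO2- molar ratio is an
# assumption from the general anammox literature, not from this study.

ANAMMOX_NO2_PER_NH4 = 1.32  # mol NO2- consumed per mol NH4+ (assumed)

def nitritation_target(influent_nh4_mg_l: float) -> dict:
    """Fraction of influent NH4+-N to oxidize to NO2--N for anammox feed."""
    frac_to_no2 = ANAMMOX_NO2_PER_NH4 / (1.0 + ANAMMOX_NO2_PER_NH4)  # ~0.57
    return {
        "oxidize_to_NO2_mg_per_L": influent_nh4_mg_l * frac_to_no2,
        "leave_as_NH4_mg_per_L": influent_nh4_mg_l * (1.0 - frac_to_no2),
        "fraction_oxidized": frac_to_no2,
    }

if __name__ == "__main__":
    # e.g. a late leachate with 1000 mg/L NH4+-N (illustrative value only)
    print(nitritation_target(1000.0))
```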
Procedia PDF Downloads 49
1334 The Use of Political Savviness in Dealing with Workplace Ostracism: A Social Information Processing Perspective
Authors: Amy Y. Wang, Eko L. Yi
Abstract:
Can vicarious experiences of workplace ostracism affect employees' willingness to voice? Given the increasingly interdependent nature of the modern workplace, in which employees rely on social interactions to fulfill organizational goals, workplace ostracism – the extent to which an individual perceives that he or she is ignored or excluded by others in the workplace – has garnered significant interest from scholars and practitioners alike. Extending beyond conventional studies that largely focus on the perspectives and outcomes of ostracized targets, we address the indirect effects of workplace ostracism on third-party employees embedded in the same social context. Using a social information processing approach, we propose that the ostracism of coworkers acts as political information that influences third-party employees in their decisions to engage in risky and discretionary behaviors such as employee voice. To make sense of and navigate experiences of workplace ostracism, we posit that political understanding and political skill together allow third-party employees to minimize the risks and uncertainty of voicing. This conceptual model was tested in a study involving 154 supervisor-subordinate dyads at a publicly listed biotechnology firm located in Mainland China. Each supervisor and their direct subordinates formed a work team; each team had a minimum of two and a maximum of four subordinates. Human resources used the master list to distribute ID-coded questionnaires to the matching names. All studied constructs were measured using existing scales proven effective in previous literature. Hypotheses were tested using confirmatory factor analysis and hierarchical multiple regression. All three hypotheses were supported, showing that employees were less likely to engage in voice behaviors when their coworkers reported having experienced ostracism in the workplace. Results also showed a significant three-way interaction of coworkers' ostracism, political understanding, and political skill on employee voice, indicating that political savviness is a valuable resource in mitigating ostracism's negative and indirect effects. Our results illustrated that having coworkers who were ostracized indeed adversely impacted an employee's own voice behavior. However, not all individuals reacted passively to the social context; rather, we found that the voice behaviors of politically savvy individuals – those possessing both political understanding and political skill – were less impacted by ostracism in their work environment. At the same time, we found that having only political understanding or only political skill was significantly less effective in mitigating ostracism's negative effects, suggesting a necessary duality of political knowledge and political skill in combatting ostracism. Organizational implications, recommendations, and future research ideas are also discussed.
Keywords: employee voice, organizational politics, social information processing, workplace ostracism
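For readers unfamiliar with how a three-way (moderated) interaction of this kind is typically tested, here is a minimal sketch. The column names (voice, ostracism, understanding, skill) are hypothetical; the study's actual variables, scales, and data are not reproduced in the abstract.

```python
# Hedged sketch of a three-way interaction test (coworkers' ostracism x
# political understanding x political skill predicting voice), using
# hypothetical column names on a tidy per-employee data frame.
import pandas as pd
import statsmodels.formula.api as smf

def fit_three_way(df: pd.DataFrame):
    # '*' expands to all main effects plus every interaction term, so the
    # model includes the ostracism:understanding:skill three-way term.
    model = smf.ols("voice ~ ostracism * understanding * skill", data=df)
    return model.fit()

# usage: result = fit_three_way(df); print(result.summary())
```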
Procedia PDF Downloads 139
1333 Association between Occupational Characteristics and Well-Being: An Exploratory Study of Married Working Women in New Delhi, India
Authors: Kanchan Negi
Abstract:
Background: Modern, urban occupational culture has driven demands for people to work long hours and weekends and, at times, to take work home. Research on the health effects of these exhaustive temporal work patterns is scant or contradictory. This study examines the relationship between work patterns and wellbeing in a sample of women living in the metropolitan hub of Delhi. Method: This study is based on data collected from 360 currently married women aged 29 to 49 years working in the urban capital hub of India, i.e., Delhi. The women interviewed were professionals from the education, health, banking, and information technology (IT) sectors. Bivariate analysis was done to study the characteristics of the sample. Logistic regression analysis was used to estimate physical and psychological wellbeing across occupational characteristics. Results: Most of the working women were below age 35 years; around 30% of the women worked in the education sector, 23% in health, 21% in banking, and 26% in the IT sector. Over 55% of the women were employed in the private sector, and only 36% were permanent employees. Nearly 30% worked for more than the standard 8 hours a day. The findings from logistic regression showed that, compared to women working in the education sector, those who worked in the banking and IT sectors were more likely to have physical and psychological health issues (OR 2.07-4.37, CI 1.17-4.37); women who bore a dual burden of responsibilities had higher odds of physical and psychological health issues than women who did not (OR 1.19-1.85, CI 0.96-2.92). Women who worked for more than 8 hours a day (OR 1.15, CI 1.01-1.30) and those who worked for more than five days a week (OR 1.25, CI 1.05-1.35) were more likely to have physical health issues than women who worked 6-8 hours a day and five days a week, respectively. Also, not having flexible work timings and compensatory holidays increased the odds of physical and psychological health issues among working women (OR 1.17-1.29, CI 1.01-1.47). Women who worked in the private sector, those employed temporarily, and those who worked in non-conducive environments were more likely to have psychological health issues than women in the public sector, permanent employees, and those who worked in a conducive environment, respectively (OR 1.33-1.67, CI 1.09-2.91). Women who did not have a poor work-life balance had reduced odds of psychological health issues compared with women with a poor work-life balance (OR 0.46, CI 0.25-0.84). Conclusion: Poor wellbeing was significantly linked to strenuous and rigid work patterns, suggesting that modern, urban work culture may contribute to the poor wellbeing of working women. Given the recent decline in female workforce participation in Delhi, schemes like flexi-timings, compensatory holidays, work-from-home, and daycare facilities for young children must be welcomed; these policies already exist in some private sector firms, and public sector companies should also adopt such changes to ease the dual burden of homemaker and career maker. This could encourage women in urban areas to readily take up jobs with less of a juggle between home and work.
Keywords: occupational characteristics, urban India, well-being, working women
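For reference, odds ratios and confidence intervals like those quoted above come from exponentiating the coefficients of a fitted logistic regression. The sketch below illustrates this with hypothetical column names (health_issue, long_hours, dual_burden); it is not the study's actual model or data.

```python
# Hedged sketch: odds ratios with 95% CIs from a logistic regression,
# using hypothetical column names; illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    fit = smf.logit("health_issue ~ long_hours + dual_burden", data=df).fit()
    conf = fit.conf_int()                 # columns 0 and 1 = CI bounds
    return pd.DataFrame({
        "OR": np.exp(fit.params),         # exponentiated coefficients
        "CI_low": np.exp(conf[0]),
        "CI_high": np.exp(conf[1]),
    })
```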
Procedia PDF Downloads 205
1332 Between Leader-Member Exchange and Toxic Leadership: A Theoretical Review
Authors: Aldila Dyas Nurfitri
Abstract:
Nowadays, leadership has become one of the main issues in organizations, groups, and even countries. The concept of a social contract between leaders and subordinates is one explanation of the leadership process. The interests of the two parties are not always the same, but they must work together to achieve both sets of goals. From this concept comes the Leader-Member Exchange Theory – well known as LMX theory – which assumes that leadership is a process of social interaction between leaders and their subordinates. High-quality LMX relationships are characterized by high carrying capacity, informal supervision, confidence, and enabled power negotiation, whereas low-quality LMX relationships are described by low support, extensive formal supervision, little or no participation of subordinates in decision-making, and less confidence in, as well as less attention from, the leader. The application of a formal supervision system in low-LMX relationships is in line with the strict controls of the toxic leadership model. Toxic leaders must feel in control of all aspects of the organization at all times. Leaders with this leadership model do not give autonomy to their staff. This behavior causes stagnation and creates a resistant organizational culture within an organization. In Indonesia, the pattern of toxic leadership later evolved into a dysfunctional system that is growing rapidly. One consequence is the emergence of corrupt behavior. According to Kellerman, corruption is defined as a pattern in which leaders and some subordinates lie, cheat, or steal to a degree that goes beyond the norm, putting self-interest above the common good. Corruption data for Indonesia, based on the results of ICW research in 2012, showed that the local government sector ranked first with 177 cases, followed by state or local enterprises with 41 cases. LMX is defined as the quality of the relationship between superiors and subordinates, which has implications for the effectiveness and progress of the organization. This theory assumes that leadership, as a process of social interaction between leaders and their followers, is characterized by a number of dimensions, such as affection, loyalty, contribution, and professional respect. Meanwhile, toxic leadership is dysfunctional leadership in an organization led by someone who is unable to adjust, lacks integrity, is malevolent, evil, and full of discontent, and is marked by a number of characteristics, such as self-centeredness, exploiting others, controlling behavior, disrespecting others, suppressing employees' innovation and creativity, and inadequate emotional intelligence. Leaders high in self-centeredness, exploitation of others, controlling behavior, and disrespect toward others tend to form low-quality LMX relationships with their subordinates, compared with leaders low in those traits, whereas suppressing employees' innovation and creativity and inadequate emotional intelligence tend not to have a direct effect on low LMX quality.
Keywords: leader-member exchange, toxic leadership, leadership
Procedia PDF Downloads 487
1331 Congruency of English Teachers’ Assessments Vis-à-Vis 21st Century Skills Assessment Standards
Authors: Mary Jane Suarez
Abstract:
A massive educational overhaul has taken place at the onset of the 21st century, addressing the mismatches between employability skills and the scholastic skills taught in schools. For a community to thrive in an ever-developing economy, the teaching of the skills necessary for job competencies should be realized by every educational institution. However, in harnessing 21st-century skills among learners, teachers – who often lack familiarity with and thorough insight into the emerging 21st-century skills – are constrained by the need to comprehend the characteristics of 21st-century skills learning and the requirement to implement the tenets of 21st-century skills teaching. To espouse 21st-century skills learning and teaching, a United States-based national coalition, the Partnership for 21st Century Skills (P21), has identified the four most important skills in 21st-century learning: critical thinking, communication, collaboration, and creativity and innovation, with an established framework for 21st-century skills standards. Assessment of skills is the lifeblood of every teaching and learning encounter. It is correspondingly crucial to look at the 21st-century standards and assessment guides recognized by P21 to ensure that learners are 21st-century ready. This mixed-methods study sought to discover and describe the classroom assessments used by English teachers in a public secondary school in the Philippines with course offerings in science, technology, engineering, and mathematics (STEM). The research evaluated the assessment tools implemented by English teachers and how these tools were congruent with the 21st-century assessment standards of P21. A convergent parallel design was used to analyze assessment tools and practices in four phases. In the data-gathering phase, survey questionnaires, document reviews, interviews, and classroom observations were used to gather quantitative and qualitative data simultaneously on how assessment tools and practices were consistent with the P21 framework, with the four Cs as its foci. In the analysis phase, the data were treated using mean, frequency, and percentage. In the merging and interpretation phases, a side-by-side comparison was used to identify convergent and divergent aspects of the results. In conclusion, the results revealed assessment tools and practices that were inconsistently, if at all, used by teachers. Findings showed inconsistencies in implementing authentic assessments, a scarcity of rubrics for critically assessing 21st-century skills in both language and literature subjects, incongruencies in using portfolio and self-reflective assessments, the exclusion of intercultural aspects in assessing the four Cs, and a lack of integration of collaboration into formative and summative assessments. As a recommendation, a harmonized assessment scheme for P21 skills was fashioned for teachers to plan, implement, and monitor classroom assessments of 21st-century skills, ensuring the alignment of such assessments with P21 standards in furtherance of the institution's thrust to effectively integrate 21st-century skills assessment standards into its curricula.
Keywords: 21st-century skills, 21st-century skills assessments, assessment standards, congruency, four Cs
Procedia PDF Downloads 192
1330 Working Capital Management Practices in Small Businesses in Victoria
Authors: Ranjith Ihalanayake, Lalith Seelanatha, John Breen
Abstract:
In this study, we explored the working capital management practices currently applied in small businesses in Victoria, filling an existing theoretical and empirical gap in the literature in general and in Australia in particular. Amid the current globally competitive and dynamic environment, avoiding short-term insolvency is critical for the long-run survival of small businesses. A firm's short-term solvency depends on the availability of sufficient working capital to feed day-to-day operational activities. Therefore, given small businesses' reliance on short-term funding, it has been recognized that the efficient management of working capital is crucial to the prosperity and survival of such firms. Against this background, this research was an attempt to understand the working capital management strategies and practices currently used by small-scale businesses. To this end, we conducted an internet survey of 220 small businesses operating in Victoria, Australia. The survey results suggest that the majority of respondents are owner-managers (73%) and male (68%). Most respondents hold a degree (46%). About half of the respondents are more than 50 years old. Most (64%) have more than ten years of business management experience, and similarly, the majority (63%) had experience in the area of their current business. The respondents' business types are: private limited company (41%), sole proprietorship (37%), and partnership (15%). In addition, the majority of the firms are service companies (63%), followed by retail companies (25%) and manufacturing (17%). Company size varies: 32% have annual sales of $100,000 or under, while 22% have revenue of more than $1,000,000 per year. Regarding total assets, the majority of respondents (43%) have total assets of $100,000 or less, while 20% have total assets of more than $1,000,000. Regarding working capital management practices (WCMPs), the results indicate that almost 70% of respondents said they are responsible for managing their business's working capital. The survey shows that the majority of respondents (65.5%) use their business experience to identify the level of investment in working capital, compared to 22% who seek advice from professionals; the other 10% follow industry practice. The survey also shows that more than half of the respondents maintain a good liquidity position for their business by keeping accounts payable lower than accounts receivable. This study finds that the majority of small businesses in the western area of Victoria have a WCM policy, but only about 8% have a formal policy. The majority (52.7%) have an informal policy, while 39.5% have no policy. Of those with a policy, 44% described their working capital management policy as a compromise policy, 35% as a conservative policy, and only 6% apply an aggressive policy. Overall, the results indicate that small businesses pay little attention to the management of working capital despite its significance to the successful operation of the business. Such an approach may be viable during favourable economic times; however, during relatively turbulent economic conditions, it could lead to greater financial difficulties, i.e., short-term financial insolvency.
Keywords: small business, working capital management, Australia, sufficient, financial insolvency
Procedia PDF Downloads 352
1329 Variation among East Wollega Coffee (Coffea arabica L.) Landraces for Quality Attributes
Authors: Getachew Weldemichael, Sentayehu Alamerew, Leta Tulu, Gezahegn Berecha
Abstract:
Coffee quality improvement programs are becoming the focus of coffee research as world coffee consumption shifts toward high-quality coffee. However, there is limited information on the genetic variation of C. arabica for quality improvement in potential specialty coffee growing areas of Ethiopia. Therefore, this experiment was conducted with the objectives of determining the magnitude of variation among 105 coffee accessions collected from east Wollega coffee growing areas and assessing correlations between the different coffee quality attributes. It was conducted in an RCRD with three replications. Data on green bean physical characters (shape and make, bean color, and odor) and organoleptic cup quality traits (aromatic intensity, aromatic quality, acidity, astringency, bitterness, body, flavor, and overall standard of the liquor) were recorded. Analysis of variance, clustering, genetic divergence, principal component, and correlation analyses were performed using SAS software. The results revealed highly significant differences (P<0.01) among the accessions for all quality attributes except odor and bitterness. Among the tested accessions, EW104/09, EW101/09, EW58/09, EW77/09, EW35/09, EW71/09, EW68/09, EW96/09, EW83/09, and EW72/09 had the highest total coffee quality values (the sum of bean physical and cup quality attributes). These genotypes could serve as a source of genes for improving green bean physical characters and cup quality in Arabica coffee. Furthermore, cluster analysis grouped the coffee accessions into five clusters with significant inter-cluster distances, implying moderate diversity among the accessions; crossing accessions from these divergent clusters would produce heterosis and recombinants in segregating generations. The principal component analysis revealed that the first three principal components, with eigenvalues greater than unity, accounted for 83.1% of the total variability in the nine quality attributes considered, indicating that all quality attributes contribute comparably to the grouping of the accessions into different clusters. Organoleptic cup quality attributes showed positive and significant correlations at both the genotypic and phenotypic levels, demonstrating the possibility of simultaneous improvement of the traits. Path coefficient analysis revealed that acidity, flavor, and body had a high positive direct effect on overall cup quality, implying that these traits can be used as indirect criteria to improve overall coffee quality. Therefore, it was concluded that there is considerable variation among the accessions, which needs to be properly conserved for future improvement of coffee quality. However, the variability observed for quality attributes must be further verified using biochemical and molecular analyses.
Keywords: accessions, Coffea arabica, cluster analysis, correlation, principal component
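For readers who want to see the multivariate workflow in concrete form, here is a minimal sketch of PCA with the eigenvalue-greater-than-unity rule and hierarchical clustering over an accessions-by-attributes matrix. The study used SAS; this Python version is illustrative only, and the data layout is an assumption.

```python
# Hedged sketch of the PCA + cluster workflow on accession quality scores.
# Assumed layout: rows = accessions, columns = 9 quality attributes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

def analyze(scores: np.ndarray, n_clusters: int = 5):
    z = StandardScaler().fit_transform(scores)   # standardize attributes
    pca = PCA().fit(z)
    keep = pca.explained_variance_ > 1.0         # components with eigenvalue > 1
    explained = pca.explained_variance_ratio_[keep].sum()
    tree = linkage(z, method="ward")             # hierarchical clustering
    groups = fcluster(tree, t=n_clusters, criterion="maxclust")
    return explained, groups
```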
Procedia PDF Downloads 163
1328 Validating the Micro-Dynamic Rule in Opinion Dynamics Models
Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies have started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data across experimental questions without testing whether differences exist between them. Indeed, it is possible that different topics show different dynamics; for example, people may be more prone to accepting someone else's opinion on less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') and the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed the participant someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar across all topics. This suggests that the same micro-dynamic rule can be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of jumping directly to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data: exposure to 'agree' or 'disagree' pushed participants toward, respectively, higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people changing from 'agree' to 'disagree' despite being exposed to 'agree.' This phenomenon is surprising, as in the standard literature the strength of the noise is usually smaller than the strength of social influence. Finally, we built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule. It also allows us to build models that are directly grounded in experimental results.
Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule
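The encoding described above is simple enough to state in code. The sketch below reproduces the (stance, certainty) → [-10, 10] mapping from the abstract and pairs it with a generic influence-plus-noise update step; the update weight and noise scale are illustrative assumptions, since the paper's fitted rule is not given here.

```python
# Hedged sketch: encode (stance, certainty) pairs as continuous opinions in
# [-10, 10] and apply a generic social-influence update with noise. The
# weight mu and noise scale are illustrative assumptions, not the study's
# fitted parameters.
import random

def encode(stance: str, certainty: int) -> int:
    """agree/disagree (natural language) x certainty 1-10 -> [-10, 10]."""
    return (1 if stance == "agree" else -1) * certainty

def update(own: float, shown: float, mu: float = 0.1, noise: float = 2.0) -> float:
    """Drift toward the displayed opinion, plus random variation, which the
    data suggest can be stronger than the social-influence effect itself."""
    new = own + mu * (shown - own) + random.gauss(0.0, noise)
    return max(-10.0, min(10.0, new))   # clamp to the opinion scale

# usage: update(encode("agree", 7), encode("disagree", 9))
```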
Procedia PDF Downloads 159
1327 Student Veterans’ Transition to Nursing Education: Barriers and Facilitators
Authors: Bruce Hunter
Abstract:
Background: The transition from military service to higher education can be a challenging endeavor for student veterans, especially those pursuing an education in nursing. While the experiences and perspectives of each student veteran are unique, their successful integration into an academic environment can be influenced by a complex array of barriers and facilitators. This mixed-methods study aims to explore the themes and concepts found in the transition experiences of student veterans in nursing education, with a focus on identifying the barriers they face and the facilitators that support their success. Methods: This study utilized an explanatory mixed-methods approach. The research participants were student veterans enrolled in nursing programs across three academic institutions in the Southeastern United States. Quantitative Phase: A Likert-scale instrument was distributed to a sample of student veterans in nursing programs. The survey assessed demographic information, academic experiences, social experiences, and perceptions of institutional support. Quantitative data were analyzed using descriptive statistics to characterize demographics and to identify barriers and facilitators of the transition. Qualitative Phase: Two open-ended questions were posed to student veterans to explore their lived experiences, barriers, and facilitators during the transition to nursing education and to further explain the quantitative findings. Thematic analysis with line-by-line coding was employed to identify recurring themes and narratives that might shed light on the barriers and facilitators encountered. Results: This study found that the key to the successful academic integration of student veterans lies in recognizing the diversity of values and attitudes among them, understanding the potential challenges they face, and taking proactive steps to create an inclusive and supportive academic environment that accommodates the unique experiences of this demographic. Addressing these academic and social integration concerns can contribute to a more understanding environment for student veterans in the BSN program. Conclusion: Providing support during this transitional period is crucial not only for retaining veterans but also for bolstering their success in becoming registered nurses. Acquiring an understanding of military culture emerges as an essential first step for nursing faculty in retaining student veterans and supporting the successful completion of their programs. Participants found that their transition experience lacked meaningful social interactions, which could foster a positive learning environment, enhance their emotional well-being, and contribute significantly to their overall success and satisfaction in their nursing education journey. Recognizing and promoting academic and social integration is important in helping veterans experience a smooth transition into and through the unfamiliar academic environment of nursing education.
Keywords: nursing, education, student veterans, barriers, facilitators
Procedia PDF Downloads 48
1326 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research relates to the development of a recently proposed technique for forming composite materials, such as optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on controlling the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the early 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics so as to obtain given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to developing an experimental technique and simulation that allow these parameters to be determined. It is shown that they can be deduced from data on the spatial distributions of diffusant concentrations and the average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at no fewer than two temperatures, with two processing times at each temperature, are necessary. The composite material used was a silica-based glass-ceramic with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramic (6×6×6 mm³) underwent ion exchange in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion-exchange processing resulted in vitrification of the glass-ceramic in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples, and their large facets were polished. These slabs were used to find the profiles of diffusant concentrations and the average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average grain size were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation was carried out for different values of the β and γ parameters under all the above-mentioned ion-exchange conditions. As a result, the temperature dependences of the parameters that provided reliable agreement between the simulation and experimental data were found. This ensured adequate modeling of glass-ceramics decrystallization over the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning the glass-ceramics structure, namely, the concentration and average size of the crystalline grains.
Keywords: diffusion, glass-ceramics, ion exchange, vitrification
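To illustrate the parameter-extraction loop described above (simulate for trial β and γ, compare against measured profiles, repeat until they agree), here is a schematic sketch. The governing equations of the 2016 theory are not reproduced in the abstract, so the coupled diffusion-dissolution form below is purely an illustrative assumption, as are all names and scales.

```python
# Hedged sketch of the fitting loop: simulate a 1D ion-exchange diffusion
# front coupled to grain dissolution, then least-squares match a measured
# grain-size profile. The model form here is a generic illustration, NOT
# the equations of the underlying theory.
import numpy as np
from scipy.optimize import minimize

def simulate(beta, gamma, n=200, steps=5000, dt=1e-4):
    c = np.zeros(n); c[0] = 1.0            # exchanged-ion concentration
    r = np.ones(n)                          # normalized mean grain size
    for _ in range(steps):
        lap = np.roll(c, -1) - 2 * c + np.roll(c, 1)
        c[1:-1] += dt * lap[1:-1]           # dimensionless diffusion step
        c[0] = 1.0                          # fixed surface concentration
        # dissolution driven by concentration above a solubility ~ beta,
        # on a timescale set by gamma (both roles are assumptions)
        r = np.clip(r - dt * gamma * np.clip(c - beta, 0.0, None), 0.0, 1.0)
    return r

def fit(measured_profile):
    loss = lambda p: np.mean((simulate(*p) - measured_profile) ** 2)
    return minimize(loss, x0=[0.3, 1.0], method="Nelder-Mead")
```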
Procedia PDF Downloads 269
1325 Digital Transformation of Lean Production: Systematic Approach for the Determination of Digitally Pervasive Value Chains
Authors: Peter Burggräf, Matthias Dannapfel, Hanno Voet, Patrick-Benjamin Bök, Jérôme Uelpenich, Julian Hoppe
Abstract:
The increasing digitalization of value chains can help companies handle rising complexity in their processes and thereby reduce the steadily increasing planning and control effort in order to raise performance limits. Due to technological advances, companies face the challenge of creating smart value chains to improve productivity, handle increasing time and cost pressure, and meet the need for individualized production. Therefore, companies must ensure quick and flexible decisions to create self-optimizing processes and, consequently, make their production more efficient. Lean production, the most commonly used paradigm for complexity reduction, reaches its limits when it comes to variant-flexible production and constantly changing market and environmental conditions. To lift the performance limits inbuilt in current value chains, new methods and tools must be applied. Digitalization provides the potential to derive these new methods and tools. However, companies lack the experience to harmonize different digital technologies, and there is no practicable framework that guides the transformation of current value chains into digitally pervasive value chains. Current research shows that a connection between lean production and digitalization exists, based on factors such as people, technology, and organization. In this paper, the introduced method for the determination of digitally pervasive value chains takes the factors of people, technology, and organization into account and extends existing approaches by a new dimension. It is the first systematic approach for the digital transformation of lean production and consists of four steps. The first step, 'target definition', describes the target situation and defines the depth of the analysis with regard to the inspection area and the level of detail. The second step, 'analysis of the value chain', verifies the lean-ability of processes and places a special focus on the integration capacity of digital technologies in order to raise the limits of lean production. Furthermore, the 'digital evaluation process' ensures the usefulness of digital adaptations with regard to their practicability and their integrability into the existing production system. Finally, the method defines actions to be performed based on the evaluation process and in accordance with the target situation. As a result, the validation and optimization of the proposed method in a German company from the electronics industry shows that the digital transformation of current value chains based on lean production raises their inbuilt performance limits.
Keywords: digitalization, digital transformation, Industrie 4.0, lean production, value chain
Procedia PDF Downloads 311
1324 The Relationship between Osteoporosis-Related Knowledge and Physical Activity among Women Age over 50 Years
Authors: P. Tardi, B. Szilagyi, A. Makai, P. Acs, M. Hock, M. Jaromi
Abstract:
Osteoporosis is becoming a major public health problem, particularly in postmenopausal women, as the incidence of the disease rises; it is now one of the most common chronic musculoskeletal diseases. Osteoporosis-related knowledge is an important contributor to preventing and treating osteoporosis. The most important strategies to prevent or treat the disease are increasing the level of physical activity at all ages, cessation of smoking, reduction of alcohol consumption, and adequate dietary calcium and vitamin D intake. The aim of the study was to measure osteoporosis-related knowledge and physical activity among women aged over 50 years. For the measurements, we used the osteoporosis questionnaire (OPQ) to examine disease-specific knowledge and the global physical activity questionnaire (GPAQ) to measure the quantity and quality of physical activity. The OPQ is a self-administered 20-item questionnaire with five categories: general information, risk factors, investigations, consequences, and treatment. There are four choices per question (one of them being 'I do not know'). Respondents receive +1 for a correct answer, -1 for an incorrect answer, and 0 for an 'I do not know' answer. We recruited 326 women (63.08 ± 9.36 years) to fill out the questionnaires. Descriptive analysis was carried out, and Spearman's correlation coefficient was calculated to examine the relationships between the variables. Data were entered into Microsoft Excel, and all statistical analyses were performed using SPSS (version 24). The participants of the study (n=326) scored 8.76 ± 6.94 points on the OPQ. Significant (p < 0.001) differences were found in the OPQ results according to the highest level of education. The score of participants with osteoporosis (10.07 ± 6.82 points) was significantly (p=0.003) higher than that of participants without osteoporosis (9.38 ± 6.66 points) and that of women who did not know whether they had osteoporosis (6.49 ± 6.97 points). The GPAQ results showed the sample's physical activity across the dimensions of vigorous work (479.86 ± 684.02 min/week), moderate work (678.16 ± 804.5 min/week), travel (262.83 ± 380.27 min/week), vigorous recreation (77.71 ± 123.46 min/week), moderate recreation (115.15 ± 154.82 min/week), and total weekly physical activity (1645.99 ± 1432.88 min/week). Significant correlations were found between osteoporosis-related knowledge and physical activity in the travel (R=0.21; p < 0.001), vigorous recreation (R=0.35; p < 0.001), moderate recreation (R=0.35; p < 0.001), total vigorous minutes/week (R=0.15; p=0.001), and total moderate minutes/week (R=0.13; p=0.04) dimensions. According to these results, the highest level of education significantly determines osteoporosis-related knowledge. Physical activity is an important contributor to preventing and treating osteoporosis, and it showed a significant correlation with osteoporosis-related knowledge. Based on the results, developing osteoporosis-related knowledge may help to improve the level of physical activity, especially recreation. Acknowledgment: Supported by the ÚNKP-20-1 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund.
Keywords: osteoporosis, osteoporosis-related knowledge, physical activity, prevention
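The OPQ scoring rule quoted above (+1 / -1 / 0) and the Spearman correlation between knowledge and activity minutes can be expressed compactly. The sketch below is illustrative; the answer encoding and variable names are hypothetical, not the instrument's actual coding.

```python
# Hedged sketch: score a 20-item OPQ response sheet (+1 correct, -1 incorrect,
# 0 for "I do not know") and correlate scores with weekly activity minutes.
from scipy.stats import spearmanr

def opq_score(answers, key, dont_know="dk"):
    """answers/key: sequences of 20 choices; 'dk' marks 'I do not know'."""
    score = 0
    for a, k in zip(answers, key):
        if a == dont_know:
            continue                 # contributes 0 points
        score += 1 if a == k else -1
    return score                     # possible range: -20 .. +20

def knowledge_vs_activity(scores, minutes_per_week):
    rho, p = spearmanr(scores, minutes_per_week)
    return rho, p
```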
Procedia PDF Downloads 111
1323 Deforestation, Vulnerability and Adaptation Strategies of Rural Farmers: The Case of Central Rift Valley Region of Ethiopia
Authors: Dembel Bonta Gebeyehu
Abstract:
In the study area, the impacts of deforestation on environmental degradation and farmers' livelihoods manifest in different ways. Farmers are all the more vulnerable because they depend on rain-fed agriculture and the surrounding natural forests. Moreover, after seedlings are planted, the disposal and management of their plastic covers is poorly practiced and administered in the country in general and in the study area in particular. If this situation continues, the plastic waste will further accentuate land degradation. In addition, comprehensive empirical studies on these issues are absent. The results of this study could therefore inform intervention schemes and contribute to the existing knowledge on these issues. The study employed a qualitative approach based on intensive fieldwork data collected via various tools, namely open-ended interviews, focus group discussions, key-informant interviews, and non-participant observation. The collected data were duly transcribed and later categorized under different labels based on pre-determined themes for further analysis. The major causes of deforestation were the expansion of agricultural land, poor administration, population growth, and the absence of conservation methods. The farmers are vulnerable to soil erosion and soil infertility culminating in low agricultural production; loss of grazing land and decline of livestock production; climate change; and deterioration of social capital. Their adaptation and coping strategies include natural conservation measures, diversification of income sources, the safety-net program, and migration. Due to participatory natural resource conservation measures, soil erosion has decreased, soils are protected, and indigenous woodlands have started to regenerate. These changes have brought about attitudinal change among farmers. The existing forestation program has many flaws; in particular, after seedlings are planted, there is no mechanism for disposing of and managing the plastic waste. Organizational challenges among the mandated offices were also found. In the studied area, deforestation is aggravated by a number of factors, which have made the farmers vulnerable. The current forestation programs are not well planned, implemented, or coordinated. Sustainable and efficient methods for collecting and reusing seedling plastic covers should be devised. This is possible through creating awareness and organizing micro and small enterprises to reuse, and generate income from, the collected plastic.
Keywords: land-cover and land-dynamics, vulnerability, adaptation strategy, mitigation strategies, sustainable plastic waste management
Procedia PDF Downloads 387
1322 Narrating Atatürk Cultural Center as a Place of Memory and a Space of Politics
Authors: Birge Yildirim Okta
Abstract:
This paper aims to narrate the story of the Atatürk Cultural Center in Taksim Square, which was demolished in 2018, and to discuss its architectonics as a social place of memory and its existence and demolishment as a space of politics. The paper uses narrative discourse analysis to study the Atatürk Cultural Center (AKM) as a place of memory and a space of politics from the establishment of the Turkish Republic (1923) until today. After the establishment of the Turkish Republic, one of the most important implementations in Taksim Square, reflecting the internationalist style, was the construction of the Opera Building in the Prost Plan. The first design of the opera building belonged to Auguste Perret, but it could not be implemented due to economic hardship during World War II. The project was then designed by the architects Feridun Kip and Rüknettin Güney in 1946 but could not be completed due to the 1960 military coup. Later, the project was shifted to another architect, Hayati Tabanlıoglu, with a change in its function to a cultural center. Eventually, the construction of the building was completed in 1969 with a completely different design. AKM became a symbol of republican modernism not only through its modern architectural style but also through its function as the first opera building of the Republic, reflecting western, modern cultural heritage through professional groups, artists, and the intelligentsia. In 2005, Istanbul's council for the protection of cultural heritage decided to list AKM as grade 1 cultural heritage, ending a period of controversy that had seen calls for the demolition of the center, which was claimed to have reached the end of its useful lifespan. In 2008, the building was closed for repairs and restoration. Over the following years, the building was silently demolished piece by piece, while the Taksim mosque was built just in front of the Atatürk Cultural Center. Belonging to the early republican period, AKM was a representation of the cultural production of modern society and of the emergence of a westward-looking, secular public space in Turkey. Its erasure from the Taksim scene under the rule of the conservative Justice and Development Party government, and the construction of the Taksim mosque in front of AKM's parcel, are equally representational. The question of governing the city through space has always been important for governments and those holding political power, since cities are chaotic environments seen as a threat to governments, carrying the tensions of the proletariat and of contradictory groups. The story of AKM as a dispositive, or regulatory apparatus, demonstrates how space itself becomes a political medium used to transform the socio-political condition. The paper narrates the existence and demolishment of the Atatürk Cultural Center by discussing the constructed and demolished building as a place of memory and a space of politics.
Keywords: space of politics, place of memory, Atatürk Cultural Center, Taksim square, collective memory
Procedia PDF Downloads 140
1321 Semiotics of the New Commercial Music Paradigm
Authors: Mladen Milicevic
Abstract:
This presentation addresses how the statistical analysis of digitized popular music influences music creation and emotionally manipulates consumers. Furthermore, it deals with the semiological aspect of the uniformization of musical taste in order to predict the potential revenues generated by popular music sales. In the USA, we live in an age when most popular music (i.e., music that generates substantial revenue) has been digitized. It is safe to say that almost everything produced in the last 10 years is already digitized (available on iTunes, Spotify, YouTube, or some other platform). Depending on marketing viability and its potential to generate additional revenue, most of the 'older' music is still being digitized. Once music is turned into a digital audio file, it can be computer-analyzed in all kinds of respects, and the same goes for the lyrics, which also exist as digital text files to which any kind of NCapture-like analysis may be applied. So, by statistically examining different popular music metrics such as tempo, form, pronouns, introduction length, song length, archetypes, subject matter, and repetition of the title, the commercial result may be predicted. Polyphonic HMI (Human Media Interface) introduced the concept of the hit song science computer program in 2003. The company asserted that machine learning could create a music profile to predict hit songs from a song's audio features. Thus, it has been claimed that a successful pop song must: have 100 bpm or more; have an 8-second intro; use the pronoun 'you' within 20 seconds of the start of the song; hit the middle-8 bridge between 2 minutes and 2 minutes 30 seconds; average 7 repetitions of the title; and create an expectation and fulfill that expectation in the title. For a country song: 100 bpm or less for a male artist; a 14-second intro; the pronoun 'you' within the first 20 seconds of the intro; a middle-8 bridge between 2 minutes and 2 minutes 30 seconds; 7 repetitions of the title; and an expectation created and fulfilled within 60 seconds. This approach to commercial popular music minimizes human influence on which 'artist' a record label signs and markets. Twenty years ago, music experts in the A&R (Artists and Repertoire) departments of record labels made personal aesthetic judgments based on extensive experience in the music industry. Now, computer music-analysis programs are replacing them in an attempt to minimize investment risk for panicking record labels, in an environment where nobody can predict the future of the recording industry. The narrow bottleneck of this label-driven music selection has had some very peculiar effects, not only on the taste of popular music consumers but also on the creative chops of music artists. The meaning of this semiological shift is the main focus of this research and presentation.
Keywords: music, semiology, commercial, taste
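The pop-song heuristics quoted above are concrete enough to express as a simple rule checker. The thresholds below come straight from the abstract; the field names and data structure are hypothetical, not part of any real hit-prediction API.

```python
# Hedged sketch: check a song's metadata against the pop-hit heuristics
# quoted in the abstract (field names are illustrative only).
from dataclasses import dataclass

@dataclass
class Song:
    bpm: float
    intro_sec: float
    first_you_sec: float      # first occurrence of the pronoun "you"
    bridge_start_sec: float   # start of the middle-8 bridge
    title_repeats: int

def pop_hit_checklist(s: Song) -> dict:
    return {
        "tempo >= 100 bpm": s.bpm >= 100,
        "intro within 8 s": s.intro_sec <= 8,
        "'you' within 20 s": s.first_you_sec <= 20,
        "bridge at 2:00-2:30": 120 <= s.bridge_start_sec <= 150,
        "~7 title repetitions": s.title_repeats >= 7,
    }

# usage: pop_hit_checklist(Song(104, 7.5, 12.0, 131.0, 8))
```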
Procedia PDF Downloads 392
1320 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection), in the Diagnosis of Helicobacter pylori Infections
Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor
Abstract:
Aim: The aim of this study was to compare the diagnostic value of two molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. In addition, the study aimed to determine the resistance rates of H. pylori strains isolated from gastric biopsy cultures against clarithromycin and quinolones, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who were admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all biopsy samples was investigated by five different diagnostic methods together: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (H) (Giemsa, hematoxylin and eosin staining), rapid urease test (RUT) (CLOtest, Cimberly-Clark, USA), and two different molecular tests: GenoType® HelicoDR (Hain, Germany), based on a DNA strip assay, and Seeplex® H. pylori-ClaR ACE Detection (Seegene, South Korea), based on multiplex PCR. Antimicrobial resistance of H. pylori isolates against clarithromycin and levofloxacin was determined by the GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. The sensitivity and specificity of the two molecular methods were calculated against these two gold standards. Results: A total of 266 patients aged 16-83 years, of whom 144 (54.1%) were female and 122 (45.9%) were male, were included in the study. 144 patients were culture positive, and 157 were positive by histology and RUT together. 179 patients were positive with both GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection. The sensitivity and specificity of the five methods were as follows: C, 80.9% and 84.4%; H + RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori-ClaR ACE Detection, 100% and 71.3%. A strong correlation was found between C and H+RUT, between C and GenoType® HelicoDR, and between C and Seeplex® H. pylori-ClaR ACE Detection (r: 0.644, p: 0.000; r: 0.757, p: 0.000; r: 0.757, p: 0.000, respectively). Of the 144 isolated H. pylori strains, 24 (16.6%) were resistant to clarithromycin and 18 (12.5%) to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and 6 cases with Seeplex® H. pylori-ClaR ACE Detection. Conclusion: It was concluded that GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection were the most sensitive of all the investigated diagnostic methods (C, H, and RUT).
Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection, antimicrobial resistance
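For reference, sensitivity and specificity against a gold standard are computed from the 2×2 contingency of paired test results. The helper below is a generic illustration (it does not reproduce the study's raw counts).

```python
# Hedged sketch: sensitivity and specificity of an index test against a
# gold standard, from paired boolean results (illustrative helper only).
def sens_spec(index_positive, gold_positive):
    pairs = list(zip(index_positive, gold_positive))
    tp = sum(1 for i, g in pairs if i and g)          # true positives
    fn = sum(1 for i, g in pairs if not i and g)      # false negatives
    tn = sum(1 for i, g in pairs if not i and not g)  # true negatives
    fp = sum(1 for i, g in pairs if i and not g)      # false positives
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```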
Procedia PDF Downloads 167
1319 Combined Treatment of Estrogen-Receptor Positive Breast Microtumors with 4-Hydroxytamoxifen and Novel Non-Steroidal Diethyl Stilbestrol-Like Analog Produces Enhanced Preclinical Treatment Response and Decreased Drug Resistance
Authors: Sarah Crawford, Gerry Lesley
Abstract:
This research is a pre-clinical assessment of the anti-cancer effects of novel non-steroidal diethylstilbestrol-like estrogen analogs in estrogen-receptor-positive/progesterone-receptor-positive human breast cancer microtumors of the MCF-7 cell line. A tamoxifen analog formulation (Tam A1) was used as a single agent or in combination with therapeutic concentrations of 4-hydroxytamoxifen, currently used as a long-term treatment for the prevention of breast cancer recurrence in women with estrogen-receptor-positive/progesterone-receptor-positive malignancies. At concentrations ranging from 30-50 microM, Tam A1 induced microtumor disaggregation and cell death, and incremental cytotoxic effects correlated with increasing concentrations of Tam A1. Live tumor microscopy showed that microtumors displayed diffuse borders, and substrate-attached cells were rounded up and poorly adherent. A complete cytotoxic effect was observed using 40-50 microM Tam A1, with time-course kinetics similar to 4-hydroxytamoxifen. Combined treatment with Tam A1 (30-50 microM) and 4-hydroxytamoxifen (10-15 microM) induced a highly cytotoxic, synergistic response that was more rapid and complete than 4-hydroxytamoxifen used as a single-agent therapeutic. Microtumors completely dispersed or formed necrotic foci, indicating a highly cytotoxic combined treatment response. Moreover, breast cancer microtumors treated with both 4-hydroxytamoxifen and Tam A1 displayed lower levels of long-term post-treatment regrowth, a critical parameter of primary drug resistance, than observed for 4-hydroxytamoxifen used as a single-agent therapeutic. Tumor regrowth at 6 weeks post-treatment with single-agent 4-hydroxytamoxifen, single-agent Tam A1, or the combined treatment was assessed for the development of drug resistance. Breast cancer cells treated with both 4-hydroxytamoxifen and Tam A1 displayed significantly lower levels of post-treatment regrowth, indicative of decreased drug resistance, than observed for either single treatment modality. The preclinical data suggest that combined treatment with tamoxifen analogs may be a novel clinical approach for long-term maintenance therapy in patients with estrogen-receptor-positive/progesterone-receptor-positive breast cancer receiving hormonal therapy to prevent disease recurrence. Detailed data on time-course, IC50, and post-treatment tumor regrowth assays, as well as a proposed mechanism of action to account for the observed synergistic drug effects, will be presented.
Keywords: 4-hydroxytamoxifen, tamoxifen analog, drug-resistance, microtumors
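For readers unfamiliar with IC50 assays like those mentioned above, the value is typically extracted by fitting a four-parameter logistic (Hill) curve to dose-response viability data. The sketch below is a generic illustration; the commented data points are placeholders, not the study's measurements.

```python
# Hedged sketch: fit a four-parameter logistic (Hill) curve to viability
# data and report the IC50. All values here are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

def fit_ic50(conc_um, viability):
    p0 = [0.0, 1.0, np.median(conc_um), 1.0]          # rough initial guess
    popt, _ = curve_fit(hill, conc_um, viability, p0=p0, maxfev=10000)
    return popt[2]                                     # IC50, same units as conc

# usage with placeholder data:
# conc = np.array([5, 10, 20, 30, 40, 50])
# v = np.array([0.95, 0.90, 0.70, 0.45, 0.20, 0.05])
# print(fit_ic50(conc, v))
```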
Procedia PDF Downloads 67
1318 Perception of Nurses and Caregivers on Fall Preventive Management for Hospitalized Children Based on Ecological Model
Authors: Mirim Kim, Won-Oak Oh
Abstract:
Purpose: The purpose of this study was to identify hospitalized children's fall risk factors, fall prevention status, and fall prevention strategies as recognized by nurses and by caregivers of hospitalized children, and to present an ecological model for fall preventive management in hospitalized children. Method: The participants were 14 nurses working in medical institutions with more than one year of child care experience and 14 adult caregivers of children under 6 years of age receiving inpatient treatment at a medical institution. One-to-one interviews were conducted to identify their perceptions of fall preventive management. Transcribed data were analyzed using the latent content analysis method. Results: Fall risk factors in hospitalized children were 'unpredictable behavior', 'instability', 'lack of awareness about danger', 'lack of awareness about falls', 'lack of child control ability', 'lack of awareness about the importance of fall prevention', 'lack of sensitivity to children', 'untidy environment around children', 'lack of personalized facilities for children', 'unsafe facility', 'lack of partnership between healthcare provider and caregiver', 'lack of human resources', 'inadequate fall prevention policy', 'lack of promotion about fall prevention', and 'a performance-oriented culture'. The fall preventive management status of hospitalized children comprised 'absence of fall prevention capability', 'efforts not to fall', 'blocking fall risk situations', 'limiting the scope of children's activity when there is no caregiver', 'encouraging caregivers' fall prevention activities', 'creating a safe environment surrounding hospitalized children', 'special management for children at high risk of falls', 'mutual cooperation between healthcare providers and caregivers', 'implementation of fall prevention policy', and 'providing guide signs about fall risk'. Fall preventive management strategies for hospitalized children were 'restraining dangerous behavior', 'inspiring awareness about falls', 'providing fall preventive education considering the child's eye level', 'efforts to become an active subject of fall prevention activities', 'providing customized fall prevention education', 'open communication between healthcare providers and caregivers', 'infrastructure and personnel management to create a safe hospital environment', 'expansion of fall prevention campaigns', 'development and application of a valid fall assessment instrument', and 'conversion of awareness about safety'. Conclusion: In this study, the ecological model of fall preventive management for hospitalized children reflects various factors that directly or indirectly affect the fall prevention of hospitalized children. Therefore, these results can be considered useful baseline data for developing systematic fall prevention programs and hospital policies to prevent fall accidents in hospitalized children. Funding: This study was funded by the National Research Foundation of South Korea (grant number NRF-2016R1A2B1015455).
Keywords: fall down, safety culture, hospitalized children, risk factors
Procedia PDF Downloads 163
1317 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking
Authors: Trevor Toy, Josef Langerman
Abstract:
Around a quarter of the world's data is generated by the financial services industry, with an estimated 708.5 billion global non-cash transactions recorded since 2018. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to openly, legitimately and consensually share the data required to enable it. Integration and sharing of anonymised transactional data are still operated in silos and centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data and businesses looking to consume data are largely excluded from the process. There is therefore a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. This research provides a solution framework that aims to offer a secure decentralised marketplace for (1) data providers to list their transactional data, (2) data consumers to find and access that data, and (3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also provides an integrated system for downstream transactional-related data from merchants, enriching the data product available to build a comprehensive view of a data subject's spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform's key features is enabling individuals to participate in and manage the personal data being generated about them. The framework was demonstrated through a proof-of-concept on the Ethereum blockchain, in which an individual can securely manage access to their own personal data and to that individual's identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour correlated with key demographic information. This platform solution can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking. Keywords: big data markets, open banking, blockchain, personal data management
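To make the three marketplace roles concrete, here is a minimal, purely illustrative Python sketch of the provider/subject/consumer interactions described above. The class names, fields and consent rule are assumptions for illustration; on-chain contract logic is mocked as plain objects rather than real Ethereum code.

```python
# Illustrative data-market sketch (not the authors' contract interface):
# providers list data, subjects grant consent, consumers purchase access.
from dataclasses import dataclass, field

@dataclass
class Listing:
    listing_id: str
    provider: str                 # institution listing anonymised card data
    price: float
    consenting_subjects: set = field(default_factory=set)

class DataMarket:
    def __init__(self):
        self.listings: dict[str, Listing] = {}

    def list_data(self, listing: Listing):
        self.listings[listing.listing_id] = listing

    def grant_consent(self, listing_id: str, subject_id: str):
        # data subjects opt in to sell the data relating to themselves
        self.listings[listing_id].consenting_subjects.add(subject_id)

    def purchase(self, listing_id: str, consumer: str) -> set:
        # consumer receives only records whose subjects consented
        listing = self.listings[listing_id]
        print(f"{consumer} paid {listing.price} to {listing.provider}")
        return listing.consenting_subjects

market = DataMarket()
market.list_data(Listing("tx-2020-q1", "BankA", 500.0))
market.grant_consent("tx-2020-q1", "subject-042")
print(market.purchase("tx-2020-q1", "fintech-x"))
```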
Procedia PDF Downloads 73
1316 Enhancing Mental Health Services Through Strategic Planning: The East Tennessee State University Counseling Center’s 2024-2028 Plan
Authors: R. M. Kilonzo, S. Bedingfield, K. Smith, K. Hudgins Smith, K. Couper, R. Ratley, Z. Taylor, A. Engelman, M. Renne
Abstract:
Introduction: The mental health needs of university students continue to evolve, necessitating a strategic approach to service delivery. The East Tennessee State University (ETSU) Counseling Center developed its inaugural Strategic Plan (2024-2028) to enhance student mental health services. The plan focuses on improving access, quality of care, and service visibility, aligning with the university's mission to support academic success and student well-being. Aim: This strategic plan aims to establish a comprehensive framework for delivering high-quality, evidence-based mental health services to ETSU students, addressing current challenges and anticipating future needs. Methods: The development of the strategic plan was a collaborative effort involving the Counseling Center's leadership and staff, with technical support from a Doctor of Public Health (community and behavioral health) intern. Multiple workshops, online and offline reviews, and stakeholder consultations were held to ensure a robust and inclusive process. A SWOT analysis and stakeholder mapping were conducted to identify strengths, weaknesses, opportunities, and challenges. Key performance indicators (KPIs) were set to measure service utilization, satisfaction, and outcomes. Results: The plan resulted in four strategic priorities: service application, visibility/accessibility, safety and satisfaction, and training programs. Key objectives include expanding counseling services, improving service access through outreach, reducing stigma, and increasing peer support programs. The plan also focuses on continuous quality improvement through data-driven assessments and research initiatives. Immediate outcomes include expanded group therapy, enhanced staff training, and increased mental health literacy across campus. Conclusion and Recommendation: The strategic plan provides a roadmap for addressing the mental health needs of ETSU students, with a clear focus on accessibility, inclusivity, and evidence-based practices. Implementing the plan will strengthen the Counseling Center's capacity to meet the diverse needs of the student population. To ensure sustainability, it is recommended that the center continuously assess student needs, foster partnerships with university and external stakeholders, and advocate for increased funding to expand services and staff capacity. Keywords: strategic plan, university counseling center, mental health, students
Procedia PDF Downloads 16
1315 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves
Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada
Abstract:
Sustainable development is the nominal goal of most countries at present. In general, fossil fuels are the mainstay of development in most countries. Regrettably, fossil fuel consumption rates are very high, and the world will soon face the depletion of conventional fuels. In addition, there are many environmental pollution problems resulting from the emission of harmful gases and vapors during fuel burning. Thus, clean, renewable energy has become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources, such as wind, solar and wave energy. Energy can be obtained from the motion of sea waves almost all the time, whereas power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy, and its use guarantees safe environmental conditions. Cheap electricity can be generated from wave energy using different systems, such as the oscillating bodies system, the pendulum gate system, the ocean wave dragon system and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. The paper provides a widespread overview of the different design alternatives for sea wave energy converter systems; the considered alternatives have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a suitable hollow chamber at the shore which is completely closed except at its base, which has an open area for gathering moving sea waves. The motion of sea waves pushes the air up and down through a suitable Wells turbine for generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting appropriate settings of effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Furthermore, the paper compares the theoretical and experimental results of the built experimental prototype. Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine
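For readers unfamiliar with AHP, the following sketch shows the standard priority-vector and consistency-ratio computation over an illustrative pairwise comparison matrix of the four alternatives named above. The judgment values are assumed for demonstration and are not the paper's elicited data.

```python
# Minimal AHP sketch: principal eigenvector gives the priority weights,
# and the consistency ratio (CR) checks the judgments' coherence.
import numpy as np

# Saaty-scale pairwise comparisons of 4 alternatives (illustrative values):
# oscillating bodies, pendulum gate, wave dragon, OWC
A = np.array([
    [1,   3,   5,   2  ],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1  ],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue index
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                        # priority vector

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]          # Saaty's random index
print("priorities:", weights.round(3), "CR:", round(ci / ri, 3))
```

A CR below 0.10 is conventionally taken to mean the pairwise judgments are acceptably consistent.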
Procedia PDF Downloads 161
1314 The Epidemiology of Dengue in Taiwan during 2014-15: A Descriptive Analysis of the Severe Outbreaks of Central Surveillance System Data
Authors: Chu-Tzu Chen, Angela S. Huang, Yu-Min Chou, Chin-Hui Yang
Abstract:
Dengue is a major public health concern throughout tropical and sub-tropical regions. Taiwan lies in the Pacific Ocean, straddling the tropical and subtropical zones. The island remains humid throughout the year and receives abundant rainfall, and summers in southern Taiwan are very hot. These conditions are ideal for the growth of dengue vectors and increase the risk of dengue outbreaks. During the first half of the 20th century, there were three island-wide dengue outbreaks (1915, 1931, and 1942). After almost forty years of dormancy, a DEN-2 outbreak occurred in Liuchiu Township, Pingtung County in 1981. Thereafter, dengue outbreaks of different scales occurred in southern Taiwan. However, there were more than ten thousand dengue cases in each of 2014 and 2015, which not only affected human health but also caused widespread social disruption and economic losses. This study describes the epidemiology of dengue in Taiwan, especially the severe outbreak in 2015, and seeks effective interventions for dengue control, including dengue vaccine development for the elderly. Methods: The study used the Notifiable Diseases Surveillance System database of the Taiwan Centers for Disease Control as its data source. All cases were reported using the uniform case definition and confirmed by NS1 rapid diagnosis or laboratory diagnosis. Results: In 2014, Taiwan experienced a serious DEN-1 outbreak with 15,492 locally-acquired cases, including 136 cases of dengue hemorrhagic fever (DHF), which caused 21 deaths. An even more serious DEN-2 outbreak occurred with 43,419 locally-acquired cases in 2015. The epidemic occurred mainly in Tainan City (22,760 cases) and Kaohsiung City (19,723 cases) in southern Taiwan. Cases were mainly adults. There were 228 deaths due to dengue infection, giving a case fatality rate of 5.25 ‰. The average age of the deceased was 73.66 years (range 29-96), 86.84% were older than 60 years, and most had comorbidities. Reviewing the clinical manifestations of the 228 fatal cases, 38.16% (N=87) were reported with warning signs, while 51.75% (N=118) were reported without warning signs. Among the 87 fatal cases reported as dengue with warning signs, 89.53% were diagnosed with severe dengue and 84% required intensive care. Conclusion: The year 2015 was characterized by large dengue outbreaks worldwide. The risk of serious dengue outbreaks may increase significantly in the future, and the elderly are the vulnerable group in Taiwan. However, a dengue vaccine was licensed at the end of 2015 for use in people 9-45 years of age living in endemic settings. In addition to research into new interventions for dengue control, developing a dengue vaccine for the elderly is very important to prevent severe dengue and deaths. Keywords: case fatality rate, dengue, dengue vaccine, the elderly
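As a quick check, the quoted case fatality rate can be reproduced directly from the counts reported above (228 deaths among 43,419 locally-acquired cases):

```python
# Case fatality rate from the 2015 DEN-2 outbreak figures in the abstract.
deaths = 228
cases = 43_419
cfr_per_mille = deaths / cases * 1000
print(f"CFR = {cfr_per_mille:.2f} per mille")   # ~5.25 ‰, as reported
```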
Procedia PDF Downloads 280
1313 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building
Authors: G. Wimmers
Abstract:
The University of Northern British Columbia needed a new laboratory building for its Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint, and because the Master of Engineering Program is actively involved in research on energy-efficient buildings, the decision was made to require the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city at the northern edge of climate zone 6 with average winter lows between -8 °C and -10.5 °C. The footprint of the building is 30 m × 30 m, with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan being two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building's gross volume is 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, we have seen that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since the airflow through all leakages of the building will, in reality, happen simultaneously in both directions. A specific detail or situation, such as overlapping but unsealed membranes, might be airtight in one direction due to the valve effect but open up when tested in the opposite direction. In this project, the advantage was the overall very compact envelope and the good volume-to-envelope-area ratio. The building had to be very airtight, and the details for the window and door installations, all transitions from walls to roof and floor, the connections of the prefabricated wall panels and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood processing machinery. The testing was carried out in accordance with EN 13829 (method A), as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all walls, floors and suspended ceiling volumes. This paper will explore the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and will discuss the crucial steps throughout the project phases and the most challenging details. Keywords: air changes, airtightness, envelope design, industrial building, passive house
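A back-of-envelope check, using only the figures reported above, shows what this result implies in absolute terms, since n50 is the leakage airflow at 50 Pa divided by the net volume:

```python
# What 0.07 ach@50Pa means for this building (n50 = V50_flow / V_net).
v_net = 7383.0          # m^3, net volume per the Passive House convention
n50 = 0.07              # air changes per hour at 50 Pa (measured result)
v50_flow = n50 * v_net  # implied leakage airflow at 50 Pa
print(f"Leakage airflow at 50 Pa ≈ {v50_flow:.0f} m³/h")   # ~517 m³/h
```

For a ~9700 m³ gross volume building, roughly 517 m³/h of leakage at 50 Pa is an exceptionally small figure; the Passive House limit of 0.6 ach@50Pa would allow almost nine times as much.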
Procedia PDF Downloads 147
1312 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields, including education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study analyzed these experiences in accessing, using, and producing quantitative data. The study utilized semi-structured interviews to capture differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department, included to allow for a broader systems-level perspective. The study followed Núñez's multilevel model of intersectionality. The key to Núñez's model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of influences at the individual, organizational, and system levels that might intersect and affect ECE professionals' experiences with quantitative data. At the individual level, an important element identified was the participants' educational background, as a relationship could be observed between it and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table. For example, those with a background in child development were aware of how their formal education failed to train them in the skills necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants' position within their organization, their organization's position with respect to others, and their degree of access to quantitative data. This, in turn, affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and made available. These differences were reflected in the interviewees' perceptions of and expectations for the ECE workforce. For example, one interviewee pointed out that many ECE professionals happen to use data out of the necessity of the moment; this lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students' training in working with data. In conclusion, Núñez's model helped in understanding the different elements that affect ECE professionals' experiences with quantitative data. In particular, it became clear that these professionals are not being provided with the necessary support and that the field is not being intentional in creating data literacy skills for them, despite what is asked of them and their work. Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 252
1311 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks
Authors: Afnan Al-Romi, Iman Al-Momani
Abstract:
Applying Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of those risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, potentially unattended environment, in addition to resource constraints in terms of processing, storage and power, places such networks under stringent limitations regarding lifetime (i.e., period of operation) and security. The importance of WSN applications, found in many military and civilian domains, has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, researchers have developed security solutions in the form of software-based network Intrusion Detection Systems (IDSs). However, these IDSs have proven neither secure enough nor sufficiently accurate to detect all malicious behaviours of attacks. The problem is thus the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results such as delays in the detection process, low detection accuracy, or, even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption in WSNs caused by the IDS itself. In other words, not all requirements are implemented and then traced; moreover, not all requirements are identified or satisfied, as some requirements have been compromised. These drawbacks stem from researchers and developers not following structured software development processes when developing IDSs, resulting in inadequate requirement management, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads into industrial applications. Therefore, this paper studies the importance of Requirement Engineering when developing IDSs. It also examines a set of existing IDSs and illustrates the absence of Requirement Engineering and its effects. Conclusions are then drawn regarding the application of requirement engineering so that systems deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy and reliability. Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN
Procedia PDF Downloads 322
1310 Biotechnological Methods for the Grouting of the Tunneling Space
Authors: V. Ivanov, J. Chu, V. Stabnikov
Abstract:
Different biotechnological methods for the production of construction materials and for the performance of construction processes in situ are being developed within the new scientific discipline of Construction Biotechnology. The aim of this research was to develop and test new biotechnologies and biotechnological grouts for minimizing the hydraulic conductivity of fractured rocks and porous soil. This is essential for minimizing the flow of groundwater into construction sites and the tunneling space before and after excavation, and inside levees, as well as for stopping water seepage from aquaculture ponds, agricultural channels, radioactive waste or toxic chemical storage sites, landfills, and polluted soils. Conventional fine or ultrafine cement grouts and chemical grouts have such restrictions as high cost, high viscosity and, in some cases, toxicity, whereas biogrouts, which are based on microbial or enzymatic activities and inexpensive inorganic reagents, could be more suitable in many cases because of their lower cost and low or zero toxicity. Due to these advantages, the development of biotechnologies for biogrouting is growing exponentially. However, the currently most popular biogrout, which is based on the activity of urease-producing bacteria initiating the crystallization of calcium carbonate from a calcium salt, has such disadvantages as the production of toxic ammonium/ammonia and the development of high pH. Therefore, the aim of our studies was the development and testing of new environmentally friendly, low-cost biogrouts suitable for large-scale geotechnical, construction, and environmental applications. New microbial biotechnologies were studied and tested in sand columns, fissured rock samples, a 1 m³ tank with sand, and a pack of stone sheets serving as models of porous soil and fractured rocks. Several biotechnological methods showed positive results: 1) biogrouting using sequential desaturation of sand by injection of denitrifying bacteria and medium, followed by biocementation using urease-producing bacteria, urea and calcium salt, decreased the hydraulic conductivity of sand to 2×10⁻⁷ m/s after 17 days of treatment and consumed almost three times less reagent than conventional calcium- and urea-based biogrouting; 2) biogrouting using slime-producing bacteria decreased the hydraulic conductivity of sand to 1×10⁻⁶ m/s after 15 days of treatment; 3) biogrouting of rocks with fissure widths of 65×10⁻⁶ m, using calcium bicarbonate solution produced from CaCO₃ and CO₂ under 30 bar pressure, decreased the hydraulic conductivity of the fissured rocks to 2×10⁻⁷ m/s after 5 days of treatment. These bioclogging technologies could have many advantages over conventional construction materials and processes and can be used in geotechnical engineering, agriculture and aquaculture, and for environmental protection. Keywords: biocementation, bioclogging, biogrouting, fractured rocks, porous soil, tunneling space
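To put the reported conductivities in perspective, a simple Darcy's-law estimate (Q = K·i·A) is sketched below; only K comes from the treatment results above, while the hydraulic gradient and seepage area are assumed illustrative values.

```python
# Illustrative Darcy's-law seepage estimate for biogrouted sand.
K = 2e-7          # m/s, hydraulic conductivity after treatment (from results)
i = 1.0           # hydraulic gradient (assumed)
A = 10.0          # m^2 seepage cross-section (assumed)
Q = K * i * A     # m^3/s
print(f"Seepage ≈ {Q * 3600 * 24:.3f} m³/day through 10 m²")  # ~0.173 m³/day
```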
Procedia PDF Downloads 207
1309 Structural Balance and Creative Tensions in New Product Development Teams
Authors: Shankaran Sitarama
Abstract:
New Product Development (NPD) involves team members coming together and working in teams to produce innovative solutions to problems, resulting in new products. A core attribute of a successful NPD team is therefore its creativity and innovation. Teams need to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the user's needs. They also need to be very efficient in their teamwork as they work through the various stages of developing these ideas, resulting in a proof-of-concept (POC) implementation or a prototype of the product. Teams thus need two distinctive traits: ideational creativity, and effective and efficient teamworking. Each of these traits causes multiple types of tension in teams, and these tensions are reflected in the team dynamics. Ideational conflicts arising out of debates and deliberations increase the collective knowledge and affect team creativity positively. However, the same trait of challenging each other's viewpoints might lead team members to be disruptive, resulting in interpersonal tensions, which in turn lead to less-than-efficient teamwork. Teams that foster and effectively manage these creative tensions are successful, while teams that cannot manage them show poor performance. This paper explores these tensions as they manifest in the team communication social network and proposes a Creative Tension Balance (CTB) index, along the lines of the degree of balance in social networks, that has the potential to identify successful (and unsuccessful) NPD teams. Team communication reflects the dynamics among team members and is the data set for analysis. The emails between members of the NPD teams are processed through a semantic analysis algorithm (latent semantic analysis, LSA) to analyze the content of communication, followed by a semantic similarity analysis, to arrive at a social network graph that depicts the communication amongst team members based on content. This social network is subjected to traditional social network analysis methods to arrive at established metrics as well as structural balance metrics. Traditional structural balance is extended to include team interaction pattern metrics to arrive at a creative tension balance metric that effectively captures the creative tensions and tension balance in teams. This CTB metric captures the signatures of successful and unsuccessful (dissonant) NPD teams. The dataset for this research study includes 23 NPD teams spread over multiple semesters; the CTB metric is computed for each team and used to classify the teams as low-, medium- or high-performing. The results are correlated with the team reflections (for team dynamics and interaction patterns), the team self-evaluation feedback surveys (for teamwork metrics), and team performance through a comprehensive team grade (for high- and low-performing team signatures). Keywords: team dynamics, social network analysis, new product development teamwork, structural balance, NPD teams
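A condensed sketch of the pipeline described above follows: LSA over team emails, a signed communication graph built from semantic similarity, and a triangle-based balance fraction in the spirit of structural balance. The toy emails, the similarity sign rule and the balance fraction are assumptions for illustration, not the paper's exact CTB definition.

```python
# Illustrative LSA + structural-balance pipeline over toy team emails.
import itertools
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

emails = {                      # sender -> concatenated message text (toy data)
    "ana": "prototype sensor housing looks good, ship the CAD for review",
    "ben": "the sensor housing tolerance is wrong, redo the CAD",
    "cal": "user interviews suggest we drop the housing and go softcase",
}
people = list(emails)

# LSA: TF-IDF followed by truncated SVD into a low-dimensional semantic space
X = TfidfVectorizer().fit_transform(emails.values())
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
S = cosine_similarity(Z)

# Signed graph: semantically aligned pairs get +1 edges, divergent pairs -1
G = nx.Graph()
for i, j in itertools.combinations(range(len(people)), 2):
    G.add_edge(people[i], people[j], sign=1 if S[i, j] > 0 else -1)

# Fraction of balanced triangles (product of edge signs is positive)
balanced = total = 0
for tri in (c for c in nx.enumerate_all_cliques(G) if len(c) == 3):
    prod = 1
    for u, v in itertools.combinations(tri, 2):
        prod *= G[u][v]["sign"]
    total += 1
    balanced += prod > 0
print(f"balance index: {balanced}/{total} triangles balanced")
```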
Procedia PDF Downloads 79
1308 Food Composition Tables Used as an Instrument to Estimate the Nutrient Ingest in Ecuador
Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria
Abstract:
There are several tools for assessing the nutritional status of a population. A main instrument commonly used to build those tools is the food composition table (FCT). Despite the importance of FCTs, there are many error sources and variability factors that can arise in building those tables and can lead to an under- or overestimation of a population's nutrient intake. This work identified the different food composition tables used as instruments to estimate nutrient intake in Ecuador. Data for choosing FCTs were collected through key informants (self-completed questionnaires), supplemented with institutional web research. A questionnaire covering general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information contained in the table, etc.) was applied to the identified FCTs. Those variables were defined based on an extensive literature review, and a descriptive content analysis was performed. Ten printed tables and three databases were reported, all of which were treated indistinctly as food composition tables. We managed to get information from 69% of the references; several informants referred to printed documents that were not accessible, and internet searches were unsuccessful. Of the nine final tables, eight are from Latin America, and five of these were constructed by the indirect method (compilation of already published data), with a database from the United States Department of Agriculture (USDA) as the main source of information. One FCT was constructed using the direct method (bromatological analysis) and has its origin in Ecuador. All of the tables made a clear distinction between each food and its method of cooking, 88% of the FCTs expressed nutrient values per 100 g of edible portion, 77% gave precise additional information about the use of the table, and 55% presented all the macro- and micronutrients in a detailed way. The most complete FCTs were INCAP (Central America) and Composition of Foods (Mexico). The most frequently cited table was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most tables in this study. However, this method has the disadvantage of generating less reliable food composition tables, because foods vary in composition; therefore, a database cannot accurately predict the composition of any isolated sample of a food product. In conclusion, weighing the pros and cons, and despite its being elaborated by the indirect method, it is considered appropriate to work with the FCT of INCAP (Central America), given its proximity to our country and a food item list very similar to ours. It is also imperative to keep as a reference the composition table for Ecuadorian foods, which, although not updated, was constructed using the direct method with Ecuadorian foods. Hence, both tables will be used to elaborate a questionnaire for assessing the food consumption of the Ecuadorian population. In case of disparate values, we will take only the INCAP values, because this is an updated table. Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, intake of nutrients of Ecuadorians, Latin America food composition tables
Procedia PDF Downloads 430
1307 Environmental Threats and Great Barrier Reef: A Vulnerability Assessment of World’s Best Tropical Marine Ecosystems
Authors: Ravi Kant Anand, Nikkey Keshri
Abstract:
The Great Barrier Reef of Australia is known for its beautiful landscapes and seascapes of great ecological importance. The site was inscribed as a World Heritage site in 1981 and is popular internationally for tourism, recreational activities and fishing. But major environmental hazards such as climate change, pollution, overfishing and shipping are degrading this marine ecosystem. Climate change affects the Great Barrier Reef directly through rising sea levels, ocean acidification, increasing temperatures, uneven precipitation, changes in El Niño patterns, and more frequent cyclones and storms. Pollution is the second biggest factor destroying the coral reef ecosystem, encompassing the excessive use of pesticides and chemicals, eutrophication, pollution from mining, sediment runoff, loss of coastal wetlands and oil spills. Coral bleaching is the biggest problem caused by these environmental stressors. Acidification of ocean water reduces the formation of the calcium carbonate skeleton. The floral ecosystem of the ocean (including seagrasses and mangroves) is the key source of food for fishes and other faunal organisms, but powerful waves, extreme temperatures, destructive storms and river run-off threaten it. If one natural system is under threat, the whole marine food web is affected, from algae to whales. Poisoning of marine water by different polluting agents has been affecting coral production and fish breeding, weakening marine health and increasing the death of fishes and corals. As a World Heritage site, the tourism sector is directly affected, causing rising unemployment; the fishing sector is also affected. Fluctuations in ocean water temperature affect coral production, because corals need an undisturbed location, proper sunlight and temperatures of up to 21 degrees centigrade, but storms, El Niño, and rising temperature and sea level continually reduce coral production. If the environmental problems of the Great Barrier Reef are not restrained, then this best-known ecological beauty, with its coral reefs, pelagic environments, algal meadows, coasts and estuaries, mangrove forests and seagrasses, fish species and coral gardens, and one of the best tourist spots, will be lost in the coming years. This research focuses on the different environmental threats, their socio-economic impacts and different conservation measures. Keywords: climate change, overfishing, acidification, eutrophication
Procedia PDF Downloads 373
1306 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology
Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao
Abstract:
With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of sensors and actuators are important research topics, given their effects on the level of vibration detection and reduction and the amount of energy required by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring a fitness function based on eigenvalues and eigenvectors for numerous combinations of sensor/actuator (s/a) pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of s/a pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of s/a pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton's principle. The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six s/a pairs to suppress the first six modes of vibration. The resulting optimal sensor locations show good agreement with published optimal locations, but with greatly reduced computational effort and higher effectiveness. Furthermore, collocated s/a pairs placed at these locations give very effective active vibration reduction using an optimal linear quadratic control scheme. Keywords: optimisation, plate, sensor effectiveness, vibration control
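The selection rule described above can be sketched in a few lines: score each candidate location per mode by its voltage relative to that mode's maximum, average the percentages across modes, and keep the top-k sites. The voltage matrix here is random stand-in data, not output from the paper's finite element model, and the counts of modes and candidate sites are assumptions for illustration.

```python
# Illustrative sensor-placement scoring by average percentage effectiveness.
import numpy as np

rng = np.random.default_rng(0)
n_modes, n_sites, k = 6, 40, 6
V = np.abs(rng.normal(size=(n_modes, n_sites)))   # |sensor voltage| per mode/site

pct = 100 * V / V.max(axis=1, keepdims=True)      # % effectiveness per mode
avg_effectiveness = pct.mean(axis=0)              # average over the six modes
best = np.argsort(avg_effectiveness)[::-1][:k]    # top-k candidate locations
print("selected sensor/actuator sites:", sorted(best.tolist()))
```

Because the structure is modelled once with sensors at all candidate locations, this ranking needs only a single analysis pass, which is the source of the computational saving over combinatorial search.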
Procedia PDF Downloads 230