Search results for: focus group research
999 The Trumping of Science: Exploratory Study into Discrepancy between Politician and Scientist Sources in American Covid-19 News Coverage
Authors: Wafa Unus
Abstract:
Science journalism has been vanishing from America’s national newspapers for decades. Reportage on scientific topics is limited to only a handful of newspapers, and of those, few employ dedicated science journalists to cover stories that require this specialized expertise. News organizations' lack of readiness to convey complex scientific concepts to a mass populace becomes particularly problematic when events like the Covid-19 pandemic occur. The lack of coverage of Covid-19 prior to its onset in the United States suggests something more troubling: that the deprioritization of reporting on hard science as an educational tool, in favor of political frames of coverage, places dangerous blinders on the American public. This research looks at the disparity between the voices of health and science experts in news articles and the voices of political figures, in order to better understand the approach of American newspapers in conveying expert opinion on Covid-19. A content analysis of 300 articles on Covid-19 published by major newspapers in the United States between January 1, 2020 and April 30, 2020 illuminates this investigation. The Boston Globe, the New York Times, and the Los Angeles Times are included in the content analysis. Initial findings reveal a significant disparity between the number of articles that mention Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases, and the number that reference political figures. Covid-related articles in the New York Times that focused on health topics (as opposed to economic or social issues) contained the voices of 54 different politicians, who were mentioned a total of 608 times; only five members of the scientific community were mentioned, a total of 24 times (out of 674 articles). In the Boston Globe, 36 different politicians were mentioned a total of 147 times, and only two members of the scientific community, one being Anthony Fauci, were mentioned a total of nine times (out of 423 articles). In the Los Angeles Times, 52 different politicians were mentioned a total of 600 times, and only six members of the scientific community were included, mentioned a total of 82 times, with Fauci mentioned 48 times (out of 851 articles). Results provide a better understanding of the frames in which American journalists in Covid hotspots conveyed expert analysis on Covid-19 during one of the most pressing news events of the century. Ultimately, the objective of this study is to use the exploratory data to evaluate the nature, extent and impact of Covid-19 reporting in the context of trustworthiness and scientific expertise. Secondarily, the data illuminate the degree to which Covid-19 reporting focused on politics over science.
Keywords: science reporting, science journalism, covid, misinformation, news
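The mention-counting step of such a content analysis can be sketched in a few lines of Python. This is an illustrative sketch only: the name lists, the corpus loading and the exact matching rules are assumptions, not the study's actual coding scheme.

```python
# Minimal sketch of the mention-counting step of a content analysis.
# Name lists and corpus are illustrative placeholders, not the study's.
import re
from collections import Counter

POLITICIANS = ["Donald Trump", "Andrew Cuomo", "Gavin Newsom"]  # hypothetical list
SCIENTISTS = ["Anthony Fauci", "Deborah Birx"]                  # hypothetical list

def count_mentions(articles, names):
    """Return total mentions per name and the number of articles mentioning it."""
    total_mentions = Counter()
    articles_with_mention = Counter()
    for text in articles:
        for name in names:
            hits = len(re.findall(re.escape(name), text))
            total_mentions[name] += hits
            if hits:
                articles_with_mention[name] += 1
    return total_mentions, articles_with_mention

articles = ["...full text of each Covid-19 article..."]  # placeholder corpus
pol_counts, _ = count_mentions(articles, POLITICIANS)
sci_counts, _ = count_mentions(articles, SCIENTISTS)
print(sum(pol_counts.values()), sum(sci_counts.values()))
```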
Procedia PDF Downloads 218
998 Review on Recent Dynamics and Constraints of Affordable Housing Provision in Nigeria: A Case of Growing Economic Precarity
Authors: Ikenna Stephen Ezennia, Sebnem Onal Hoscara
Abstract:
Successive governments in Nigeria are faced with the pressing problem of how to house an ever-expanding urban population, usually low-income earners. The question of housing and affordability presents a complex challenge for these governments, as the commodification of housing links it inextricably to markets and capital flows, placing it at the center of the government’s agenda. However, the provision of decent and affordable housing for average Nigerians has remained an illusion, despite copious schemes, policies and programs initiated and carried out by various successive governments. This phenomenon has also been observed in many countries of Africa, largely as a result of economic unpredictability, lack of housing finance and insecurity, among other factors peculiar to a struggling economy. This study reviews recent dynamics and factors challenging the provision and development of affordable housing for the low-income urban populace of Nigeria. Thus, the aim of the study is to present a comprehensive approach for understanding recent trends in the provision of affordable housing for Nigerians. The approach is based on a new paradigm of research, transdisciplinarity: a form of inquiry that crosses the boundaries of different disciplines. The review therefore takes a retrospective gaze at the various housing development programs, schemes and policies of successive governments of Nigeria within the last few decades and examines recent efforts geared towards eradicating the problems of housing delivery. Sources of data included relevant English-language articles and the results of literature searches of Elsevier Science Direct, ISI Web of Knowledge, ProQuest Central, Scopus, and Google Scholar. The findings reveal that factors such as rapid urbanization, inadequate planning and land use control, lack of adequate and favorable finance, high prices of land, high prices of building materials, youth/tout harassment of developers, poor urban infrastructure, multiple taxation, and risk share are the major hindrances to adequate housing delivery. The results show that the majority of Nigeria’s affordable housing schemes, programs and policies are in most cases poorly implemented and abandoned without proper coordination. Consequently, the study concludes that the affordable housing delivery strategies in Nigeria are an epitome of lip-service politics by successive governments, and that the current trend of leaving housing provision to the vagaries of market forces cannot be expected to support affordable housing, especially for the low-income urban populace.
Keywords: affordable housing, housing delivery, national housing policy, urban poor
Procedia PDF Downloads 222
997 Risk Factors for Determining Anti-HBcore to Hepatitis B Virus Among Blood Donors
Authors: Tatyana Savchuk, Yelena Grinvald, Mohamed Ali, Ramune Sepetiene, Dinara Sadvakassova, Saniya Saussakova, Kuralay Zhangazieva, Dulat Imashpayev
Abstract:
Introduction. The problem of viral hepatitis B (HBV) occupies a vital place in the global health system. The existing risk of HBV transmission through blood transfusion is associated with transfusion of blood taken from infected individuals during the “serological window” period or from patients with latent HBV infection, the marker of which is anti-HBcore. In the absence of information about other markers of hepatitis B, the presence of anti-HBcore suggests that a person may be actively infected or has had hepatitis B in the past and has immunity. Aim. To study the risk factors influencing positive anti-HBcore indicators among the donor population. Materials and Methods. The study was conducted in 2021 in the Scientific and Production Center of Transfusiology of the Ministry of Healthcare in Kazakhstan. The samples taken from blood donors were tested for anti-HBcore by CLIA on the Architect i2000SR (ABBOTT). A special questionnaire was developed for the blood donors’ socio-demographic characteristics. Statistical analysis was conducted with the R software (version 4.1.1, USA, 2021). Results. 5709 people aged 18 to 66 years were included in the study; the proportions of men and women were 68.17% and 31.83%, respectively. The average age of the participants was 35.7 years. A weighted multivariable mixed-effects logistic regression analysis showed that age (p<0.001), ethnicity (p<0.05), and marital status (p<0.05) were statistically associated with anti-HBcore positivity. In particular, in an analysis adjusting for gender, nationality, education, marital status, family history of hepatitis, blood transfusion, injections, and surgical interventions, a one-year increase in age (adjOR=1.06, 95%CI:1.05-1.07) was associated with a 6% increase in the odds of an anti-HBcore positive result. Those of Russian ethnicity (adjOR=0.65, 95%CI:0.46-0.93) and representatives of other nationality groups (adjOR=0.56, 95%CI:0.37-0.85) had lower odds of having anti-HBcore when compared to Kazakhs, controlling for other covariates. Among singles, the odds of a positive anti-HBcore were lower by 29% (adjOR=0.71, 95%CI:0.57-0.89) compared to married participants, adjusting for other variables. Conclusions. Kazakhstan is one of the countries with medium endemicity of HBV prevalence (2%-7%). The results of the study demonstrate the possibility of forming a profile of risk factors (age, nationality, marital status). Taking these data into account, it is recommended to pay closer attention to donor questionnaires by adding targeted questions and to improve preventive measures against HBV. Funding. This research was supported by a grant from Abbott Laboratories.
Keywords: anti-HBcore, blood donor, donation, hepatitis B virus, occult hepatitis
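The adjusted-odds-ratio analysis described above can be sketched as follows. This is a plain logistic regression for illustration, not the study's weighted mixed-effects model; the file name and column names are assumptions.

```python
# Sketch of an adjusted-odds-ratio analysis; one row per donor assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("donors.csv")  # hypothetical file: anti_hbcore (0/1), age, ...

model = smf.logit(
    "anti_hbcore ~ age + C(gender) + C(ethnicity) + C(education) + "
    "C(marital_status) + C(family_history) + C(blood_transfusion)",
    data=df,
).fit()

# Adjusted odds ratios with 95% confidence intervals; e.g. the adjOR for a
# one-year increase in age (the abstract reports adjOR = 1.06).
params = model.params
conf = model.conf_int()
or_table = pd.DataFrame({
    "adjOR": np.exp(params),
    "CI_low": np.exp(conf[0]),
    "CI_high": np.exp(conf[1]),
})
print(or_table)
```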
Procedia PDF Downloads 109
996 The Commodification of Internet Culture: Online Memes and Differing Perceptions of Their Commercial Uses
Authors: V. Esteves
Abstract:
As products of participatory culture, internet memes represent a global form of interaction with online culture. These digital objects draw upon a rich historical engagement with remix practices that dates back decades: from the copy-and-paste practices of Dadaism and punk to the re-appropriation techniques of the Situationist International, memes echo a long-established form of cultural creativity that pivots on the art of the remix. Online culture has eagerly embraced the changes that Web 2.0 afforded in terms of making use of remixing as an accessible form of societal expression, bridging these remix practices of the past into a more widely available and accessible platform. Memes embody the idea of 'intercreativity', allowing global creative collaboration to take place through networked digital media; they reflect the core values of participation and interaction that are present throughout much internet discourse whilst also existing in a historical remix continuum. Memes hold the power of cultural symbolism manipulated by global audiences through which societies make meaning, as these remixed digital objects have an elasticity and a low literacy threshold that allow for a democratic form of cultural engagement and meaning-making by and for users around the world. However, because memes are so elastic, their ability to be re-appropriated by other powers for purposes beyond their original intention has become evident. Recently, corporations have made use of internet memes for advertising purposes, engaging in the circulation and re-appropriation of internet memes in commercial spaces, which has, in turn, further complicated the relation between online users and memes' democratic possibilities. By engaging in a widespread online ethnography supplemented by in-depth interviews with meme makers, this research was able not only to track different meme uses in commercial contexts but also to engage in qualitative discussions with meme makers and users regarding their perception and experience of these varying commercial uses of memes. These can be broadly put within two categories: internet memes that are turned into physical merchandise, and the use of memes in advertising to sell other (non-meme-related) products. Whilst there has been considerable acceptance of the former type of commercial meme use, the use of memes in adverts to sell unrelated products has been met with resistance. The change in reception regarding commercial meme use is dependent on ideas of cultural ownership and perceptions of authorship, ultimately uncovering underlying socio-cultural ideologies that come to the fore within these overlapping contexts. Additionally, this adoption of memes by corporate powers echoes the recuperation process that the Situationist International endured, creating a further link with older remix cultures and their lifecycles.
Keywords: commodification, internet culture, memes, recuperation, remix
Procedia PDF Downloads 149
995 Institutional and Economic Determinants of Foreign Direct Investment: Comparative Analysis of Three Clusters of Countries
Authors: Ismatilla Mardanov
Abstract:
There are three types of countries, the first of which is willing to attract foreign direct investment (FDI) in enormous amounts and to do whatever it takes to make this happen. Therefore, FDI pours into such countries. In the second cluster of countries, even if the country suffers tremendously from a shortage of investment, the governments are hesitant to attract investment because they are in the hands of local oligarchs/cartels. Therefore, FDI inflows are moderate to low in such countries. The third type is countries whose companies prefer investing in the most efficient locations globally and are hesitant to invest in the homeland. Sorting countries into such clusters, the present study examines the essential institutional and economic factors that make these countries different. Past literature has discussed various determinants of FDI in all kinds of countries. However, it did not classify countries based on government motivation, institutional setup, and economic factors. A specific approach to each target country is vital for corporate foreign direct investment risk analysis and decisions. The research questions are: 1. What specific institutional and economic factors paint the pictures of the three clusters? 2. What specific institutional and economic factors are determinants of FDI? 3. Which of the determinants are endogenous and which are exogenous variables? 4. How can institutions and economic and political variables impact corporate investment decisions? Hypothesis 1: In the first type, country institutions and economic factors will be favorable for FDI. Hypothesis 2: In the second type, even if country economic factors favor FDI, institutions will not. Hypothesis 3: In the third type, even if country institutions favor FDI, economic factors will not favor domestic investments; therefore, FDI outflows occur in large amounts. Methods: Data come from open sources of the World Bank, the Fraser Institute, the Heritage Foundation, and other reliable sources. The dependent variable is FDI inflows. The independent variables are institutions (economic and political freedom indices) and economic factors (natural, material, and labor resources, government consumption, infrastructure, minimum wage, education, unemployment, tax rates, consumer price index, inflation, and others), the endogeneity or exogeneity of which is tested in the instrumental variable estimation. Political rights and civil liberties are used as instrumental variables. Results indicate that in the first type, both country institutions and economic factors, specifically labor and logistics/infrastructure/energy intensity, are favorable for potential investors. In the second category of countries, the risk of loss of assets is very high due to governments hijacked by local oligarchs/cartels/special interest groups. In the third category of countries, local economic factors are unfavorable for domestic investment even if the institutions are well acceptable. Cluster analysis and instrumental variable estimation were used to reveal cause-effect patterns in each of the clusters.
Keywords: foreign direct investment, economy, institutions, instrumental variable estimation
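The instrumental-variable step, with political rights and civil liberties instrumenting the institutions index, might look like the following sketch; the dataset and column names are assumptions, not the study's actual variables.

```python
# Sketch of a 2SLS instrumental-variable estimation for FDI inflows.
import pandas as pd
from linearmodels.iv import IV2SLS

df = pd.read_csv("fdi_panel.csv")  # hypothetical country-level dataset

# FDI inflows regressed on economic controls, with the institutions index
# treated as endogenous and instrumented by political rights and civil
# liberties, as described in the abstract.
model = IV2SLS.from_formula(
    "fdi_inflows ~ 1 + labor_cost + infrastructure + inflation"
    " + [institutions ~ political_rights + civil_liberties]",
    data=df,
)
results = model.fit(cov_type="robust")
print(results.summary)
```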
Procedia PDF Downloads 161
994 Study on the Rapid Start-up and Functional Microorganisms of the Coupled Process of Short-range Nitrification and Anammox in Landfill Leachate Treatment
Authors: Lina Wu
Abstract:
The excessive discharge of nitrogen in sewage greatly intensifies the eutrophication of water bodies and poses a threat to water quality, and nitrogen pollution control has become a global concern. Currently, the problem of water pollution in China is still not optimistic. As a typical high-ammonia-nitrogen organic wastewater, landfill leachate is more difficult to treat than domestic sewage because of its complex water quality, high toxicity, and high concentration. Many studies have shown that autotrophic anammox bacteria in nature can combine nitrite and ammonia nitrogen without a carbon source, through functional genes, to achieve total nitrogen removal, which is very suitable for the removal of nitrogen from leachate. In addition, the process saves considerable aeration energy compared with the traditional nitrogen removal process. Therefore, anammox plays an important role in nitrogen conversion and energy saving. A process composed of short-range nitrification coupled with anammox ensures the removal of total nitrogen and improves the removal efficiency, meeting society's need for an ecologically friendly and cost-effective nutrient removal treatment technology. A continuous-flow process for treating late landfill leachate [an up-flow anaerobic sludge blanket reactor (UASB), anoxic/oxic (A/O)–anaerobic ammonia oxidation reactor (ANAOR or anammox reactor)] has been developed to achieve autotrophic deep nitrogen removal. In this process, optimal process parameters such as hydraulic retention time and nitrification flow rate have been obtained and applied to the rapid start-up and stable operation of the process system with high removal efficiency. Besides, identifying the characteristics of the microbial community during the start-up of the anammox process system and analyzing its microbial ecological mechanism provide a basis for the enrichment of the anammox microbial community under high environmental stress. One study developed partial nitrification-anammox (PN/A) using an internal circulation (IC) system and a biological aerated filter (BAF) biofilm reactor (IBBR), where the amount of water treated is closer to that of landfill leachate. However, new high-throughput sequencing technology is still required to analyze the changes in microbial diversity of this system, the related functional genera and functional genes under optimal conditions, providing a theoretical and practical basis for the engineering application of the novel anammox system in biogas slurry treatment and resource utilization.
Keywords: nutrient removal and recovery, leachate, anammox, partial nitrification
Procedia PDF Downloads 52
993 Association between Occupational Characteristics and Well-Being: An Exploratory Study of Married Working Women in New Delhi, India
Authors: Kanchan Negi
Abstract:
Background: Modern and urban occupational culture has driven demands for people to work long hours and weekends and at times take work home. Research on the health effects of these exhaustive temporal work patterns is scant or contradictory. This study examines the relationship between work patterns and wellbeing in a sample of women living in the metropolitan hub of Delhi. Method: This study is based on data collected from 360 currently married women between ages 29 and 49 years, working in the urban capital hub of India, i.e., Delhi. The women interviewed were professionals from the education, health, banking and information technology (IT) sectors. Bivariate analysis was done to study the characteristics of the sample. Logistic regression analysis was used to estimate physical and psychological wellbeing across occupational characteristics. Results: Most of the working women were below age 35 years; around 30% of women worked in the education sector, 23% in health, 21% in banking and 26% in the IT sector. Over 55% of women were employed in the private sector, and only 36% were permanent employees. Nearly 30% of women worked for more than the standard 8 hours a day. The findings from logistic regression showed that, compared to women working in the education sector, those who worked in the banking and IT sectors were more likely to have physical and psychological health issues (OR 2.07-4.37, CI 1.17-4.37); women who bear the dual burden of responsibilities had higher odds of physical and psychological health issues than women who did not (OR 1.19-1.85, CI 0.96-2.92). Women who worked for more than 8 hours a day (OR 1.15, CI 1.01-1.30) and those who worked for more than five days a week (OR 1.25, CI 1.05-1.35) were more likely to have physical health issues than women who worked 6-8 hours a day and five days a week, respectively. Also, not having flexible work timings and compensatory holidays increased the odds of physical and psychological health issues among working women (OR 1.17-1.29, CI 1.01-1.47). Women who worked in the private sector, those employed temporarily and those who worked in non-conducive environments were more likely to have psychological health issues than women in the public sector, permanent employees and those who worked in a conducive environment, respectively (OR 1.33-1.67, CI 1.09-2.91). Women who did not have a poor work-life balance had reduced odds of psychological health issues compared with women with a poor work-life balance (OR 0.46, CI 0.25-0.84). Conclusion: Poor wellbeing was significantly linked to strenuous and rigid work patterns, suggesting that modern and urban work culture may contribute to the poor wellbeing of working women. Given the recent decline in female workforce participation in Delhi, schemes like flexi-timings, compensatory holidays, work-from-home and daycare facilities for young children must be welcomed; these policies already exist in some private sector firms, and public sector companies should also adopt such changes to ease the dual burden as homemaker and career maker. This could encourage women in urban areas to readily take up jobs with less of a juggle to manage home and work.
Keywords: occupational characteristics, urban India, well-being, working women
Procedia PDF Downloads 206
992 Between Leader-Member Exchange and Toxic Leadership: A Theoretical Review
Authors: Aldila Dyas Nurfitri
Abstract:
Nowadays, leadership has become one of the main issues in forming organizations, groups, even countries. The concept of a social contract between leaders and subordinates has become one of the explanations for the leadership process. The interests of the two parties are not always the same, but they must work together to achieve both goals. From this concept comes the Leader-Member Exchange Theory, well known as LMX theory, which assumes that leadership is a process of social interaction between leaders and their subordinates. High-quality LMX relationships are characterized by high support, informal supervision, confidence, and enabled power negotiation, whereas low-quality LMX relationships are described by low support, extensive formal supervision, little or no participation of subordinates in decision-making, and less confidence as well as less attention from the leader. The application of a formal supervision system in low-LMX relationships is in line with the strict controls of the toxic leadership model. Toxic leaders must be able to control all aspects of the organization at all times. Leaders with this leadership model do not give autonomy to the staff. This behavior causes stagnation and creates a resistant organizational culture in an organization. In Indonesia, the pattern of toxic leadership later evolved into a dysfunctional system that is growing rapidly. One consequence is the emergence of corrupt behavior. According to Kellerman, corruption is defined as a pattern in which leaders and some subordinates lie, cheat or steal to a degree that goes beyond the norm; they put self-interest above the common good. Corruption data for Indonesia based on the results of ICW research in 2012 showed that the local government sector ranked first with 177 cases, followed by state or local enterprises with 41 cases. LMX is defined as the quality of the relationship between superiors and subordinates, with implications for the effectiveness and progress of the organization. The assumption of this theory is that leadership is a process of social interaction between leaders and followers, characterized by a number of dimensions, such as affection, loyalty, contribution, and professional respect. Meanwhile, toxic leadership is dysfunctional leadership in an organization led by someone with traits of being unable to adjust, lacking integrity, malevolent, evil, and full of discontent, marked by a number of characteristics, such as self-centeredness, exploiting others, controlling behavior, disrespecting others, suppressing the innovation and creativity of employees, and inadequate emotional intelligence. Leaders with characteristics such as high self-centeredness, exploiting others, controlling behavior, and disrespecting others tend to have lower-quality LMX relationships with subordinates than leaders low on self-centeredness, exploitation, controlling behavior, and disrespect. Meanwhile, the suppression of employees' innovation and creativity and inadequate emotional intelligence tend not to have a direct effect on low LMX quality.
Keywords: leader-member exchange, toxic leadership, leadership
Procedia PDF Downloads 489
991 Congruency of English Teachers’ Assessments Vis-à-Vis 21st Century Skills Assessment Standards
Authors: Mary Jane Suarez
Abstract:
A massive educational overhaul has taken place at the onset of the 21st century, addressing the mismatches between employability skills and the scholastic skills taught in schools. For a community to thrive in an ever-developing economy, the teaching of the skills necessary for job competencies should be realized by every educational institution. However, in harnessing 21st-century skills amongst learners, teachers, who often lack familiarity and thorough insight into the emerging 21st-century skills, are chained by the need to comprehend the characteristics of 21st-century skills learning and the requisite to implement the tenets of 21st-century skills teaching. In the endeavor to espouse 21st-century skills learning and teaching, a United States-based national coalition called the Partnership for 21st Century Skills (P21) has identified the four most important skills in 21st-century learning: critical thinking, communication, collaboration, and creativity and innovation, with an established framework for 21st-century skills standards. Assessment of skills is the lifeblood of every teaching and learning encounter. It is correspondingly crucial to look at the 21st-century standards and the assessment guides recognized by P21 to ensure that learners are 21st-century ready. This mixed-method study sought to discover and describe what classroom assessments were used by English teachers in a public secondary school in the Philippines with course offerings in science, technology, engineering, and mathematics (STEM). The research evaluated the assessment tools implemented by English teachers and how these tools were congruent with the 21st-century assessment standards of P21. A convergent parallel design was used to analyze assessment tools and practices in four phases. In the data-gathering phase, survey questionnaires, document reviews, interviews, and classroom observations were used to gather quantitative and qualitative data simultaneously on how assessment tools and practices were consistent with the P21 framework, with the four Cs as its foci. In the analysis phase, the data were treated using mean, frequency, and percentage. In the merging and interpretation phases, a side-by-side comparison was used to identify convergent and divergent aspects of the results. In conclusion, the results revealed assessment tools and practices that were inconsistently used, if used at all, by teachers. Findings showed inconsistencies in implementing authentic assessments, a scarcity of rubrics to critically assess 21st-century skills in both language and literature subjects, incongruencies in using portfolio and self-reflective assessments, the exclusion of intercultural aspects in assessing the four Cs, and a lack of integration of collaboration in formative and summative assessments. As a recommendation, a harmonized assessment scheme for P21 skills was fashioned for teachers to plan, implement, and monitor classroom assessments of 21st-century skills, ensuring the alignment of such assessments with P21 standards in furtherance of the institution’s thrust to effectively integrate 21st-century skills assessment standards into its curricula.
Keywords: 21st-century skills, 21st-century skills assessments, assessment standards, congruency, four Cs
Procedia PDF Downloads 194
990 Working Capital Management Practices in Small Businesses in Victoria
Authors: Ranjith Ihalanayake, Lalith Seelanatha, John Breen
Abstract:
In this study, we explored the current working capital management practices applied in small businesses in Victoria, filling an existing theoretical and empirical gap in the literature in general and in Australia in particular. Amidst the current global competitive and dynamic environment, avoiding short-term insolvency is critical for the long-run survival of small businesses. A firm’s short-term solvency depends on the availability of sufficient working capital to feed day-to-day operational activities. Therefore, given small businesses' reliance on short-term funding, it has been recognized that the efficient management of working capital is crucial to the prosperity and survival of such firms. Against this background, this research was an attempt to understand the current working capital management strategies and practices used by small-scale businesses. To this end, we conducted an internet survey among 220 small businesses operating in Victoria, Australia. The survey results suggest that the majority of respondents are owner-managers (73%) and male (68%). The largest share of respondents hold a degree (46%). About half of the respondents are more than 50 years old. Most respondents (64%) have more than ten years of business management experience. Similarly, the majority (63%) had experience in the area of their current business. The business types of the respondents are: private limited company (41%), sole proprietorship (37%), and partnership (15%). In addition, the majority of the firms are service companies (63%), followed by retail companies (25%) and manufacturing (17%). The sizes of the companies in this survey vary: 32% of them have annual sales of $100,000 or under, while 22% have revenue of more than $1,000,000 every year. In regard to total assets, the majority of respondents (43%) have total assets of $100,000 or less, while 20% have total assets of more than $1,000,000. In regard to working capital management practices, the results indicate that almost 70% of respondents mentioned that they are responsible for managing their business's working capital. The survey shows that the majority of respondents (65.5%) use their business experience to identify the level of investment in working capital, compared to 22% who seek advice from professionals. The other 10% of respondents follow industry practice to identify the level of working capital. The survey also shows that more than half of the respondents maintain a good liquidity position for their business by keeping accounts payable below accounts receivable. This study finds that the majority of small business companies in the western area of Victoria have a working capital management policy, but only about 8% of them have a formal policy. The majority of the businesses (52.7%) have an informal policy, while 39.5% have no policy. Of those who have a policy, 44% described their working capital management policy as a compromise policy, while 35% described their policy as conservative. Only 6% of respondents apply an aggressive policy. Overall, the results indicate that small businesses pay little attention to the management of their working capital despite its significance to the successful operation of the business. This approach may be adopted during favourable economic times; however, during relatively turbulent economic conditions, such an approach could lead to greater financial difficulties, i.e., short-term financial insolvency.
Keywords: small business, working capital management, Australia, sufficient, financial insolvency
Procedia PDF Downloads 354
989 Validating the Micro-Dynamic Rule in Opinion Dynamics Models
Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle
Abstract:
Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies have started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data from experimental questions without testing whether differences existed between them. Indeed, it is possible that different topics show different dynamics. For example, people may be more prone to accepting someone else's opinion regarding less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') and the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed the participant someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree=1 and disagree=-1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar across all topics. This suggested that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of directly jumping to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data: exposing someone to 'agree' or 'disagree' influenced participants towards, respectively, higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of the social influence. We even observed cases of people who changed from 'agree' to 'disagree' even though they were exposed to 'agree.' This phenomenon is surprising, as in the standard literature the strength of the noise is usually smaller than the strength of social influence. Finally, we also built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even when starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule. It also allows us to build models which are directly grounded in experimental results.
Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule
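A toy simulation conveys the flavor of such a micro-dynamic rule: continuous opinions in [-10, 10], a binary 'agree'/'disagree' signal from the interaction partner, a small social-influence shift, and a noise term stronger than the influence, as reported above. The parameter values are illustrative, not the fitted ones from the study.

```python
# Toy re-implementation of a micro-dynamic rule of the kind described above.
import numpy as np

rng = np.random.default_rng(0)

N, STEPS = 200, 5000
INFLUENCE, NOISE = 0.5, 1.5   # noise deliberately stronger than influence

opinions = rng.uniform(-10, 10, size=N)  # continuous opinion in [-10, 10]

for _ in range(STEPS):
    i, j = rng.choice(N, size=2, replace=False)
    # j only expresses a binary 'agree'/'disagree' signal, not their certainty
    signal = 10 if opinions[j] >= 0 else -10
    shift = INFLUENCE * np.sign(signal - opinions[i])
    opinions[i] = np.clip(opinions[i] + shift + rng.normal(0, NOISE), -10, 10)

# After iteration the population can end up polarized at the extremes,
# even from an initially unpolarized distribution.
print(np.histogram(opinions, bins=[-10, -5, 0, 5, 10])[0])
```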
Procedia PDF Downloads 163
988 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research is related to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on control of the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation which allow determining these parameters. It is shown that these parameters can be deduced from data on the space distributions of diffusant concentrations and the average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at least at two temperatures and two processing times at each temperature are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent the ion-exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion-exchange processing resulted in glass-ceramics vitrification in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find profiles of diffusant concentrations and the average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all the above-mentioned ion-exchange conditions. As a result, the temperature dependences of the parameters which provided a reliable coincidence of the simulation and experimental data were found. This ensured adequate modeling of the process of glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of crystalline grains.
Keywords: diffusion, glass-ceramics, ion exchange, vitrification
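The parameter-determination step can be sketched as a simple grid search: scan (β, γ) pairs and keep the pair whose simulated profile best matches the measured one. The simulate_profile function below is a stand-in placeholder, not the authors' actual decrystallization model, and the "measured" profile is synthetic.

```python
# Sketch of fitting the two dimensionless model parameters (beta, gamma)
# to a measured grain-size profile by least-squares grid search.
import numpy as np

def simulate_profile(beta, gamma, depth):
    """Placeholder for the ion-exchange decrystallization model.
    Illustrative closed form only: grain size recovers with depth at a
    rate set by gamma, to a level set by beta."""
    return 1.0 - beta * np.exp(-gamma * depth)

depth = np.linspace(0, 1.0, 50)                  # normalized depth
measured = 1.0 - 0.8 * np.exp(-3.0 * depth)      # stand-in measured profile

best = min(
    ((b, g) for b in np.linspace(0.1, 1.0, 40)
            for g in np.linspace(0.5, 10.0, 40)),
    key=lambda p: np.sum((simulate_profile(*p, depth) - measured) ** 2),
)
print("best-fit (beta, gamma):", best)
```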
Procedia PDF Downloads 270
987 The Relationship between Osteoporosis-Related Knowledge and Physical Activity among Women Age over 50 Years
Authors: P. Tardi, B. Szilagyi, A. Makai, P. Acs, M. Hock, M. Jaromi
Abstract:
Osteoporosis is becoming a major public health problem, particularly in postmenopausal women, as the incidence of this disease is rising. Nowadays, osteoporosis is one of the most common chronic musculoskeletal diseases. Osteoporosis-related knowledge is an important contributor to preventing or treating osteoporosis. The most important strategies to prevent or treat the disease are increasing the level of physical activity at all ages, cessation of smoking, reduction of alcohol consumption, and adequate dietary calcium and vitamin D intake. The aim of the study was to measure osteoporosis-related knowledge and physical activity among women aged over 50 years. For the measurements, we used the osteoporosis questionnaire (OPQ) to examine disease-specific knowledge and the global physical activity questionnaire (GPAQ) to measure the quantity and quality of physical activity. The OPQ is a self-administered 20-item questionnaire with five categories: general information, risk factors, investigations, consequences, and treatment. There are four choices per question (one of them being 'I do not know'). The respondent receives +1 point for a correct answer, -1 point for a wrong answer, and 0 for an 'I do not know' answer. We contacted 326 women (63.08 ± 9.36 years) to fill out the questionnaires. Descriptive analysis was carried out, and we calculated Spearman's correlation coefficient to examine the relationships between the variables. Data were entered into Microsoft Excel, and all statistical analyses were performed using SPSS (Version 24). The participants of the study (n=326) reached 8.76 ± 6.94 points on the OPQ. Significant (p < 0.001) differences were found in the results of the OPQ according to the highest level of education. It was observed that the score of the participants with osteoporosis (10.07 ± 6.82 points) was significantly (p=0.003) higher than that of participants without osteoporosis (9.38 ± 6.66 points) and the score of those women (6.49 ± 6.97 points) who did not know whether osteoporosis existed in their case. The GPAQ results showed the sample's physical activity in the dimensions of vigorous work (479.86 ± 684.02 min/week), moderate work (678.16 ± 804.5 min/week), travel (262.83 ± 380.27 min/week), vigorous recreation (77.71 ± 123.46 min/week), moderate recreation (115.15 ± 154.82 min/week) and total weekly physical activity (1645.99 ± 1432.88 min/week). Significant correlations were found between osteoporosis-related knowledge and physical activity in the travel (R=0.21; p < 0.001), vigorous recreation (R=0.35; p < 0.001), moderate recreation (R=0.35; p < 0.001), total vigorous minutes/week (R=0.15; p=0.001) and total moderate minutes/week (R=0.13; p=0.04) dimensions. According to the results achieved, the highest level of education significantly determines osteoporosis-related knowledge. Physical activity is an important contributor to preventing or treating osteoporosis, and it showed a significant correlation with osteoporosis-related knowledge. Based on the results, the development of osteoporosis-related knowledge may help to improve the level of physical activity, especially recreation. Acknowledgment: Supported by the ÚNKP-20-1 New National Excellence Program of the Ministry for Innovation and Technology from the Source of the National Research, Development and Innovation Fund.
Keywords: osteoporosis, osteoporosis-related knowledge, physical activity, prevention
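The OPQ scoring rule (+1 / -1 / 0) and the Spearman correlation with weekly activity minutes can be sketched as follows; all data values below are illustrative, not the study's.

```python
# Sketch of the OPQ scoring rule and a Spearman correlation check.
from scipy.stats import spearmanr

def opq_score(answers, key):
    """answers: list of 'a'-'d' choices, or None for 'I do not know'."""
    score = 0
    for given, correct in zip(answers, key):
        if given is None:
            continue            # 0 points for 'I do not know'
        score += 1 if given == correct else -1
    return score

key = ["a", "c", "b", "d", "a"]                 # hypothetical answer key
answers = ["a", "c", None, "b", "a"]            # one respondent's answers
print(opq_score(answers, key))                  # -> 2

# Correlation between knowledge scores and recreation minutes/week
scores = [3, 8, 12, 5, 15, 9]                   # illustrative sample
recreation_min = [20, 60, 150, 30, 200, 90]
rho, p = spearmanr(scores, recreation_min)
print(f"rho={rho:.2f}, p={p:.3f}")
```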
Procedia PDF Downloads 114
986 Narrating Atatürk Cultural Center as a Place of Memory and a Space of Politics
Authors: Birge Yildirim Okta
Abstract:
This paper aims to narrate the story of the Atatürk Cultural Center in Taksim Square, which was demolished in 2018, and to discuss its architectonics as a social place of memory and its existence and demolishment as a space of politics. The paper uses narrative discourse analysis to study the Atatürk Cultural Center (AKM) as a place of memory and a space of politics from the establishment of the Turkish Republic (1923) until today. After the establishment of the Turkish Republic, one of the most important implementations in Taksim Square, reflecting the internationalist style, was the construction of the opera building in the Prost Plan. The first design of the opera building belonged to Auguste Perret; it could not be implemented due to economic hardship during World War II. The project was later designed by architects Feridun Kip and Rüknettin Güney in 1946 but could not be completed due to the 1960 military coup. Later the project was shifted to another architect, Hayati Tabanlıoğlu, with a change in its function to a cultural center. Eventually, the construction of the building was completed in 1969 in a completely different design. AKM became a symbol of republican modernism not only through its modern architectural style but also through its function as the first opera building of the Republic, reflecting western, modern cultural heritage for professional groups, artists, and the intelligentsia. In 2005, Istanbul's council for the protection of cultural heritage decided to list AKM as grade 1 cultural heritage, ending a period of controversy which saw calls for the demolition of the center, as it was claimed to have ended its useful lifespan. In 2008 the building was announced to be closed for repairs and restoration. Over the following years, the building was silently demolished piece by piece, while the Taksim mosque was built just in front of the Atatürk Cultural Center. Belonging to the early republican period, AKM was a representation of the cultural production of a modern society for an emergent, westward-looking, secular public space in Turkey. Its erasure from the Taksim scene under the rule of the conservative government, the Justice and Development Party, and the construction of the Taksim mosque in front of AKM's parcel are also representational. The question of governing the city through space has always been important for governments and those holding political power, since cities are chaotic environments that are seen as a threat to governments, carrying the tensions of the proletariat or of contradictory groups. The story of AKM as a dispositive or regulatory apparatus demonstrates how space itself becomes a political medium to transform the socio-political condition. The paper narrates the existence and demolishment of the Atatürk Cultural Center by discussing the constructed and demolished building as a place of memory and a space of politics.
Keywords: space of politics, place of memory, Atatürk Cultural Center, Taksim square, collective memory
Procedia PDF Downloads 143
985 A Village Transformed as Census Town: A Case Study of Village Nilpur, Tehsil Rajpura, District Patiala (Punjab, India)
Authors: Preetinder Kaur Randhawa
Abstract:
Rural areas can be differentiated from urban areas in terms of their economic activities: rural areas are primarily involved in the agricultural sector and provide natural resources, whereas urban areas are primarily involved in the infrastructure sector and provide manufacturing and services. The Census of India defines a Census Town as an area which satisfies the following three criteria: a population exceeding 5000, at least 75 percent of the male working population engaged in the non-agricultural sector, and a minimum population density of 400 persons per square kilometer. Urbanization can be attributed to the improvement of transport facilities, the massive decline in agricultural workers, especially male workers, and the shift of workers to non-agricultural activities. This study examines the pattern and process of rural areas transforming into urban areas/census towns. The study analyzes the various factors which are responsible for land transformation as well as the socio-economic transformation of the village population. Nilpur (CT), which belongs to Rajpura Tehsil in Patiala district, Punjab, has been selected for the present study. The methodology adopted includes qualitative and quantitative research designs and methods based on secondary data. Secondary data have been collected from the unpublished revenue record office of Rajpura Tehsil and the Primary Census Abstract of Patiala district, Census of India 2011. The results show that the rate of transformation of a village into a census town in Rajpura Tehsil has been one of the highest among the villages. The census town has evolved through the evolutionary process of human settlement, which grows in size, population and physical development. There must be a complete economic transformation and the attainment of a high level of technological development. Urban design and the construction of buildings and infrastructure can be carried out better and faster and can be used to aid human habitation while enhancing the quality of life. The study concludes that in the selected area, i.e., Nilpur (CT), the literacy rate increased to 72.1 percent in 2011 from 67.6 percent in 2001. Similarly, the non-agricultural workforce increased to 95.2 percent in 2011 from 81.1 percent in 2001. It is clear that the increased literacy rate has had a positive impact on the shift of workers into non-agricultural activities. The study concludes that rural-urban linkages are important tools for understanding the complexities of people's livelihoods and their strategies, which involve mobility, migration and the diversification of income sources and occupations.
Keywords: Census Town, India, Nilpur, Punjab
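The three Census of India criteria quoted above translate directly into a small check function; the example figures are illustrative, not Nilpur's actual census values.

```python
# The Census Town definition as a simple classification rule:
# all three criteria must hold simultaneously.
def is_census_town(population, male_workers, male_nonag_workers, area_sq_km):
    density = population / area_sq_km            # persons per sq. km
    nonag_share = male_nonag_workers / male_workers
    return (population > 5000
            and nonag_share >= 0.75              # >= 75% male non-agricultural
            and density >= 400)                  # >= 400 persons/sq. km

# Hypothetical figures for illustration only
print(is_census_town(population=6200, male_workers=1800,
                     male_nonag_workers=1714, area_sq_km=3.1))  # -> True
```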
Procedia PDF Downloads 252
984 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection), in the Diagnosis of Helicobacter pylori Infections
Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor
Abstract:
Aim: The aim of this study was to compare the diagnostic value of two different molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. In addition, the study aimed to determine the resistance ratios of H. pylori strains, isolated from gastric biopsy material cultures, against clarithromycin and quinolones, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who were admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all the biopsy samples was investigated by five different diagnostic methods together: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (H) (Giemsa, Hematoxylin and Eosin staining), rapid urease test (RUT) (CLOtest, Kimberly-Clark, USA), and two different molecular tests: GenoType® HelicoDR (Hain, Germany), based on a DNA strip assay, and Seeplex® H. pylori-ClaR-ACE Detection (Seegene, South Korea), based on multiplex PCR. Antimicrobial resistance of H. pylori isolates against clarithromycin and levofloxacin was determined by GenoType® HelicoDR, Seeplex® H. pylori-ClaR-ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. The sensitivity and specificity rates of the two molecular methods used in the study were calculated against the two gold standards previously mentioned. Results: A total of 266 patients between 16 and 83 years old, of whom 144 (54.1%) were female and 122 (45.9%) were male, were included in the study. 144 patients were found to be culture positive, and 157 were positive by both H and RUT together. 179 patients were found positive by both GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection. The sensitivity and specificity rates of the five methods studied were found as follows: C, 80.9% and 84.4%; H + RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori-ClaR-ACE Detection, 100% and 71.3%. A strong correlation was found between C and H+RUT, C and GenoType® HelicoDR, and C and Seeplex® H. pylori-ClaR-ACE Detection (r:0.644 and p:0.000; r:0.757 and p:0.000; r:0.757 and p:0.000, respectively). Of all the 144 isolated H. pylori strains, 24 (16.6%) were detected as resistant to clarithromycin, and 18 (12.5%) to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and 6 cases with Seeplex® H. pylori-ClaR-ACE Detection. Conclusion: In our study, it was concluded that GenoType® HelicoDR and Seeplex® H. pylori-ClaR-ACE Detection were the most sensitive of all the diagnostic methods investigated (C, H, and RUT).
Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex® H. pylori -ClaR- ACE Detection, antimicrobial resistance
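The sensitivity/specificity calculation against a chosen gold standard (culture alone, or histology plus RUT together) can be sketched as follows; the 0/1 arrays are illustrative, not the study data.

```python
# Sketch of sensitivity/specificity computation against a gold standard.
import numpy as np

def sens_spec(test, gold):
    test, gold = np.asarray(test), np.asarray(gold)
    tp = np.sum((test == 1) & (gold == 1))   # true positives
    tn = np.sum((test == 0) & (gold == 0))   # true negatives
    fp = np.sum((test == 1) & (gold == 0))   # false positives
    fn = np.sum((test == 0) & (gold == 1))   # false negatives
    return tp / (tp + fn), tn / (tn + fp)

gold = [1, 1, 0, 0, 1, 0, 1, 0]       # gold-standard positivity per patient
genotype = [1, 1, 1, 0, 1, 0, 1, 0]   # molecular test result per patient
sensitivity, specificity = sens_spec(genotype, gold)
print(f"sensitivity={sensitivity:.2%}, specificity={specificity:.2%}")
```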
Procedia PDF Downloads 169
983 Combined Treatment of Estrogen-Receptor Positive Breast Microtumors with 4-Hydroxytamoxifen and Novel Non-Steroidal Diethyl Stilbestrol-Like Analog Produces Enhanced Preclinical Treatment Response and Decreased Drug Resistance
Authors: Sarah Crawford, Gerry Lesley
Abstract:
This research is a pre-clinical assessment of the anti-cancer effects of a novel non-steroidal diethylstilbestrol-like estrogen analog in estrogen-receptor-positive/progesterone-receptor-positive human breast cancer microtumors of the MCF-7 cell line. The tamoxifen analog formulation (Tam A1) was used as a single agent or in combination with therapeutic concentrations of 4-hydroxytamoxifen, currently used as a long-term treatment for the prevention of breast cancer recurrence in women with estrogen-receptor-positive/progesterone-receptor-positive malignancies. At concentrations ranging from 30-50 microM, Tam A1 induced microtumor disaggregation and cell death. Incremental cytotoxic effects correlated with increasing concentrations of Tam A1. Live tumor microscopy showed that microtumors displayed diffuse borders and that substrate-attached cells were rounded up and poorly adherent. A complete cytotoxic effect was observed using 40-50 microM Tam A1, with time-course kinetics similar to 4-hydroxytamoxifen. Combined treatment with Tam A1 (30-50 microM) and 4-hydroxytamoxifen (10-15 microM) induced a highly cytotoxic, synergistic combined treatment response that was more rapid and complete than using 4-hydroxytamoxifen as a single-agent therapeutic. Microtumors completely dispersed or formed necrotic foci, indicating a highly cytotoxic combined treatment response. Moreover, breast cancer microtumors treated with both 4-hydroxytamoxifen and Tam A1 displayed lower levels of long-term post-treatment regrowth, a critical parameter of primary drug resistance, than observed for 4-hydroxytamoxifen used as a single-agent therapeutic. Tumor regrowth at 6 weeks post-treatment with either single-agent 4-hydroxytamoxifen, Tam A1, or the combined treatment was assessed for the development of drug resistance. Breast cancer cells treated with both 4-hydroxytamoxifen and Tam A1 displayed significantly lower levels of post-treatment regrowth, indicative of decreased drug resistance, than observed for either single-treatment modality. The preclinical data suggest that combined treatment involving the use of tamoxifen analogs may be a novel clinical approach for long-term maintenance therapy in patients with estrogen-receptor-positive/progesterone-receptor-positive breast cancer receiving hormonal therapy to prevent disease recurrence. Detailed data on time-course, IC50 and tumor regrowth assays post-treatment, as well as a proposed mechanism of action to account for the observed synergistic drug effects, will be presented.
Keywords: 4-hydroxytamoxifen, tamoxifen analog, drug-resistance, microtumors
Procedia PDF Downloads 69
982 Perception of Nurses and Caregivers on Fall Preventive Management for Hospitalized Children Based on Ecological Model
Authors: Mirim Kim, Won-Oak Oh
Abstract:
Purpose: The purpose of this study was to identify hospitalized children's fall risk factors, the status of fall prevention, and fall prevention strategies as recognized by nurses and caregivers of hospitalized children, and to present an ecological model for fall preventive management in hospitalized children. Method: The participants of this study were 14 nurses working in medical institutions with more than one year of child care experience and 14 adult caregivers of children under 6 years of age receiving inpatient treatment at a medical institution. One-to-one interviews were conducted to identify their perceptions of fall preventive management. Transcribed data were analyzed using the latent content analysis method. Results: Fall risk factors in hospitalized children were 'unpredictable behavior', 'instability', 'lack of awareness about danger', 'lack of awareness about falls', 'lack of child control ability', 'lack of awareness about the importance of fall prevention', 'lack of sensitivity to children', 'untidy environment around children', 'lack of personalized facilities for children', 'unsafe facility', 'lack of partnership between healthcare provider and caregiver', 'lack of human resources', 'inadequate fall prevention policy', 'lack of promotion about fall prevention', 'a performance-oriented culture'. The fall preventive management status of hospitalized children comprised 'absence of fall prevention capability', 'efforts not to fall', 'blocking fall risk situations', 'limiting the scope of children's activity when there is no caregiver', 'encouraging caregivers' fall prevention activities', 'creating a safe environment surrounding hospitalized children', 'special management for children at high risk of falls', 'mutual cooperation between healthcare providers and caregivers', 'implementation of fall prevention policy', 'providing guide signs about fall risk'. Fall preventive management strategies for hospitalized children were 'restraining dangerous behavior', 'inspiring awareness about falls', 'providing fall preventive education considering the child's eye level', 'efforts to become an active subject of fall prevention activities', 'providing customized fall prevention education', 'open communication between healthcare providers and caregivers', 'infrastructure and personnel management to create a safe hospital environment', 'expansion of fall prevention campaigns', 'development and application of a valid fall assessment instrument', 'conversion of awareness about safety'. Conclusion: In this study, the ecological model of fall preventive management for hospitalized children reflects various factors that directly or indirectly affect the fall prevention of hospitalized children. Therefore, these results can be considered useful baseline data for developing systematic fall prevention programs and hospital policies to prevent fall accidents in hospitalized children. Funding: This study was funded by the National Research Foundation of South Korea (grant number NRF-2016R1A2B1015455).
Keywords: fall down, safety culture, hospitalized children, risk factors
Procedia PDF Downloads 167
981 A Design Framework for an Open Market Platform of Enriched Card-Based Transactional Data for Big Data Analytics and Open Banking
Authors: Trevor Toy, Josef Langerman
Abstract:
Around a quarter of the world's data is generated by the financial industry, with an estimated 708.5 billion global non-cash transactions. With Open Banking still a rapidly developing concept within the financial industry, there is an opportunity to create a secure mechanism for connecting its stakeholders to share, openly, legitimately and consensually, the data required to enable it. Integration and data sharing of anonymised transactional data still operate in silos, centralised among the large corporate entities in the ecosystem that have the resources to do so. Smaller fintechs generating data, and businesses looking to consume data, are largely excluded from the process. There is therefore a growing demand for accessible transactional data, both for analytical purposes and to support the rapid global adoption of Open Banking. This research provides a solution framework that aims to offer a secure, decentralised marketplace for 1) data providers to list their transactional data, 2) data consumers to find and access that data, and 3) data subjects (the individuals making the transactions that generate the data) to manage and sell the data that relates to themselves. The platform also integrates downstream transaction-related data from merchants, enriching the available data product to build a comprehensive view of a data subject's spending habits. A robust and sustainable data market can be developed by providing a more accessible mechanism for data producers to monetise their data investments and by encouraging data subjects to share their data through the same financial incentives. At the centre of the platform is the market mechanism that connects the data providers and their data subjects to the data consumers. This core component is developed as a decentralised blockchain contract with a market layer that manages the transaction, user, pricing, payment, tagging, contract, control, and lineage features pertaining to user interactions on the platform. One of the platform's key features is enabling individuals to participate in, and manage, the personal data being generated about them. The framework was demonstrated in a proof-of-concept on the Ethereum blockchain in which an individual can securely manage access to their own personal data and to that individual's identifiable relationship to the card-based transaction data provided by financial institutions. This gives data consumers access to a complete view of transactional spending behaviour correlated with key demographic information. The platform can ultimately support the growth, prosperity, and development of economies, businesses, communities, and individuals by providing accessible and relevant transactional data for big data analytics and open banking.
Keywords: big data markets, open banking, blockchain, personal data management
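The consent-gated market flow described above can be sketched in a few lines. The sketch below is illustrative Python rather than the authors' Ethereum contract; all class, method, and parameter names, including the 50/50 revenue split, are assumptions made for the example.

```python
# Minimal sketch of the marketplace's core flow: providers list data,
# data subjects grant or revoke consent, and consumers may buy access
# only while consent is active. Names are illustrative, not taken from
# the paper's Ethereum proof-of-concept.
from dataclasses import dataclass, field

@dataclass
class Listing:
    listing_id: str
    provider: str          # financial institution supplying the data
    subject: str           # individual whose transactions generated the data
    price: float
    consent_granted: bool = False
    buyers: set = field(default_factory=set)

class DataMarketplace:
    def __init__(self, subject_royalty=0.5):
        self.listings = {}
        self.subject_royalty = subject_royalty  # assumed share of each sale paid to the subject
        self.balances = {}

    def list_data(self, listing_id, provider, subject, price):
        self.listings[listing_id] = Listing(listing_id, provider, subject, price)

    def set_consent(self, listing_id, subject, granted):
        listing = self.listings[listing_id]
        if listing.subject != subject:
            raise PermissionError("only the data subject may change consent")
        listing.consent_granted = granted

    def purchase(self, listing_id, consumer):
        listing = self.listings[listing_id]
        if not listing.consent_granted:
            raise PermissionError("data subject has not granted consent")
        listing.buyers.add(consumer)
        # Split revenue between provider and subject: the financial incentive
        # the framework uses to encourage subjects to share their data.
        royalty = listing.price * self.subject_royalty
        self.balances[listing.subject] = self.balances.get(listing.subject, 0) + royalty
        self.balances[listing.provider] = self.balances.get(listing.provider, 0) + listing.price - royalty

market = DataMarketplace()
market.list_data("tx-2020-01", provider="BankA", subject="alice", price=10.0)
market.set_consent("tx-2020-01", "alice", granted=True)
market.purchase("tx-2020-01", consumer="fintechB")
print(market.balances)  # {'alice': 5.0, 'BankA': 5.0}
```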
Procedia PDF Downloads 74
980 Multi-Criteria Selection and Improvement of Effective Design for Generating Power from Sea Waves
Authors: Khaled M. Khader, Mamdouh I. Elimy, Omayma A. Nada
Abstract:
Sustainable development is the nominal goal of most countries at present, yet fossil fuels remain the mainstay of development in most of them. Regrettably, fossil fuel consumption rates are very high, and the world will soon face the depletion of conventional fuels. In addition, burning fuel causes many environmental pollution problems through the emission of harmful gases and vapors. Clean, renewable energy has therefore become the main concern of most countries for filling the gap between available energy resources and their growing needs. There are many renewable energy sources, such as wind, solar and wave energy. Energy can be obtained from the motion of sea waves almost all the time, whereas power generation from solar or wind energy is highly restricted to sunny periods or the availability of suitable wind speeds. Moreover, energy produced from sea wave motion is one of the cheapest types of clean energy, and its use guarantees safe environmental conditions. Cheap electricity can be generated from wave energy using different systems such as the oscillating-bodies system, the pendulum gate system, the ocean Wave Dragon system, and the oscillating water column device. In this paper, a multi-criteria model has been developed using the Analytic Hierarchy Process (AHP) to support the decision of selecting the most effective system for generating power from sea waves. The paper provides a widespread overview of the different design alternatives for sea wave energy converter systems, which have been evaluated using the developed AHP model. The multi-criteria assessment reveals that the off-shore Oscillating Water Column (OWC) system is the most appropriate system for generating power from sea waves. The OWC system consists of a suitable hollow chamber at the shore which is completely closed except for an open area at its base for gathering the moving sea waves. Wave motion pushes the air up and down through a suitable Wells turbine for generating power. Improving the power generation capability of the OWC system is one of the main objectives of this research. After investigating the effect of some design modifications, it has been concluded that selecting appropriate settings of some effective design parameters, such as the number of layers of Wells turbine fans and the intermediate distance between the fans, can result in significant improvements. Moreover, a simple dynamic analysis of the Wells turbine is introduced. Furthermore, the paper compares the theoretical and experimental results of the built experimental prototype.
Keywords: renewable energy, oscillating water column, multi-criteria selection, Wells turbine
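The AHP weight calculation behind such a ranking can be sketched as follows. The 4x4 pairwise comparison matrix below is an illustrative placeholder, since the paper's expert judgments are not reproduced here; the alternatives stand in for the four converter types named above.

```python
import numpy as np

# Sketch of the AHP priority calculation used to rank wave-energy converter
# designs. Alternatives (rows/columns): oscillating bodies, pendulum gate,
# Wave Dragon, OWC. The judgments are illustrative only.
A = np.array([
    [1,   3,   2,   1/2],
    [1/3, 1,   1/2, 1/4],
    [1/2, 2,   1,   1/3],
    [2,   4,   3,   1  ],
], dtype=float)

# Principal-eigenvector method: weights are the normalized eigenvector
# belonging to the largest eigenvalue of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CR = CI / RI with CI = (lambda_max - n) / (n - 1);
# RI = 0.90 is Saaty's random index for n = 4. CR < 0.1 is acceptable.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.90

print("weights:", np.round(weights, 3), "consistency ratio:", round(CR, 3))
```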
Procedia PDF Downloads 164
979 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building
Authors: G. Wimmers
Abstract:
The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint, and because the Master of Engineering Program is actively involved in research on energy-efficient buildings, the decision was made to require the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city at the northern edge of climate zone 6 with average lows between -8 and -10.5 °C in the winter months. The footprint of the building is 30 m x 30 m with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan on two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building's gross volume 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, this requirement has proven challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since in reality air flows through all leakages of the building simultaneously in both directions. A specific detail or situation, such as overlapping but unsealed membranes, might be airtight in one direction due to the valve effect, but open up when tested in the opposite direction. In this project, the advantages were the very compact envelope and the good volume-to-envelope-area ratio. The building had to be very airtight, and the details for window and door installation, all transitions from walls to roof and floor, the connections of the prefabricated wall panels, and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the components specific to this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood processing machinery. The testing was carried out in accordance with EN 13829 (method A), as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all walls, floors and suspended-ceiling volumes. This paper explores the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far following the test protocol of the International Passive House Standard, and discusses the crucial steps throughout the project phases and the most challenging details.
Keywords: air changes, airtightness, envelope design, industrial building, passive house
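The reported airtightness translates into an absolute leakage flow via the standard air-change relation, using the net volume stated above:

```latex
n_{50} = \frac{\dot V_{50}}{V_{net}}
\quad\Rightarrow\quad
\dot V_{50} = 0.07\,\mathrm{h^{-1}} \times 7383\,\mathrm{m^{3}} \approx 517\,\mathrm{m^{3}/h}
```

That is less than 12% of the roughly 4430 m³/h that the Passive House limit of 0.6 ach@50Pa would permit for the same volume.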
Procedia PDF Downloads 148
978 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks
Authors: Afnan Al-Romi, Iman Al-Momani
Abstract:
Applying Software Engineering (SE) processes is of vital importance and a key feature in critical, complex, large-scale systems, such as safety systems, security service systems, and network systems. Inevitably, such systems carry risks, including system vulnerabilities and security threats. The probability of these risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in open, potentially unattended environments, in addition to resource constraints in terms of processing, storage and power, places such networks under stringent limitations regarding lifetime (i.e., period of operation) and security. The importance of WSN applications in many military and civilian domains has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, researchers have developed security solution systems in the form of software-based network Intrusion Detection Systems (IDSs). However, as previous studies illustrate, these IDSs are neither secure enough nor accurate enough to detect all malicious behaviours of attacks. The problem is the lack of coverage of all malicious behaviours in the proposed IDSs, leading to unpleasant results such as delays in the detection process, low detection accuracy, or, even worse, detection failure. Another problem is the energy such IDSs consume in WSNs. In other words, not all requirements are implemented and then traced; neither are all requirements identified or satisfied, as some requirements end up compromised. These drawbacks in current IDSs stem from researchers and developers not following structured software development processes when developing IDSs, which results in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads in industrial applications. Therefore, this paper studies the importance of Requirement Engineering when developing IDSs, examines a set of existing IDSs, and illustrates the absence of Requirement Engineering and its effects. Conclusions are then drawn regarding applying requirement engineering to systems so that they deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy and reliability.
Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN
Procedia PDF Downloads 324
977 Biotechnological Methods for the Grouting of the Tunneling Space
Authors: V. Ivanov, J. Chu, V. Stabnikov
Abstract:
Different biotechnological methods for the production of construction materials and for the performance of construction processes in situ are being developed within the new scientific discipline of Construction Biotechnology. The aim of this research was to develop and test new biotechnologies and biotechnological grouts for minimizing the hydraulic conductivity of fractured rocks and porous soil. This is essential for minimizing the flow of groundwater into construction sites, into the tunneling space before and after excavation, and inside levees, as well as for stopping water seepage from aquaculture ponds, agricultural channels, radioactive waste or toxic chemical storage sites, landfills, and polluted soils. Conventional fine or ultrafine cement grouts and chemical grouts have such restrictions as high cost, high viscosity, and sometimes toxicity, whereas biogrouts, which are based on microbial or enzymatic activities and some inexpensive inorganic reagents, can be more suitable in many cases because of their lower cost and low or zero toxicity. Because of these advantages, the development of biotechnologies for biogrouting is growing exponentially. However, the currently most popular biogrout, based on the activity of urease-producing bacteria initiating the crystallization of calcium carbonate from a calcium salt, has the disadvantages of producing toxic ammonium/ammonia and developing a high pH. Therefore, the aim of our studies was the development and testing of new biogrouts that are environmentally friendly and of low enough cost for large-scale geotechnical, construction, and environmental applications. New microbial biotechnologies were studied and tested in sand columns, fissured rock samples, a 1 m³ tank with sand, and a pack of stone sheets serving as models of porous soil and fractured rocks. Several biotechnological methods showed positive results: 1) biogrouting using sequential desaturation of sand by injection of denitrifying bacteria and medium, followed by biocementation using urease-producing bacteria, urea and calcium salt, decreased the hydraulic conductivity of sand to 2×10⁻⁷ m/s after 17 days of treatment and consumed almost three times less reagent than conventional calcium- and urea-based biogrouting; 2) biogrouting using slime-producing bacteria decreased the hydraulic conductivity of sand to 1×10⁻⁶ m/s after 15 days of treatment; 3) biogrouting of rocks with fissures 65×10⁻⁶ m wide, using a calcium bicarbonate solution produced from CaCO₃ and CO₂ under 30 bar pressure, decreased the hydraulic conductivity of the fissured rocks to 2×10⁻⁷ m/s after 5 days of treatment. These bioclogging technologies could have many advantages over conventional construction materials and processes and can be used in geotechnical engineering, agriculture and aquaculture, and environmental protection.
Keywords: biocementation, bioclogging, biogrouting, fractured rocks, porous soil, tunneling space
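The hydraulic conductivities quoted are of the kind obtained from a constant-head flow test. As background (the abstract does not specify the test protocol used), Darcy's law gives

```latex
k = \frac{Q\,L}{A\,\Delta h}
```

where Q is the measured flow rate through a specimen of length L and cross-sectional area A under a head difference Δh; a decrease in k from 10⁻⁴-10⁻⁵ m/s (typical clean sand) to the reported 2×10⁻⁷ m/s thus corresponds to a reduction in flow of two to three orders of magnitude at the same head.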
Procedia PDF Downloads 208
976 Effect of Multi-Walled Carbon Nanotubes on Fuel Cell Membrane Performance
Authors: Rabindranath Jana, Biswajit Maity, Keka Rana
Abstract:
The most promising clean energy source is the fuel cell, since it does not generate toxic gases or other hazardous compounds. The direct methanol fuel cell (DMFC) is especially user-friendly, as it is easy to miniaturize and suited as an energy source for automobiles as well as domestic applications and portable devices. And unlike the hydrogen used for some fuel cells, methanol is a liquid that is easy to store and transport in conventional tanks. The most important part of a fuel cell is its membrane. Until now, the overall efficiency of a methanol fuel cell has been reported to be about 20-25%. The lower efficiency of the cell may be due to critical factors such as slow reaction kinetics at the anode and methanol crossover. The oxidation of methanol is composed of a series of successive reactions creating formaldehyde and formic acid as intermediates, which contribute to slow reaction rates and decreased cell voltage. Currently, the investigation of new anode catalysts to improve oxidation reaction rates is an active area of research as it applies to the methanol fuel cell. Surprisingly, there are very limited reports on nanostructured membranes, which are rather simple to manufacture with different tuneable compositions and are expected to allow only proton permeation but not methanol, owing to molecular sizing effects and affinity to the membrane surface. We have developed a nanostructured fuel cell membrane from polydimethylsiloxane rubber (PDMS), ethylene methyl co-acrylate (EMA) and multi-walled carbon nanotubes (MWNTs), and studied the effect of incorporating different proportions of f-MWNTs in the polymer membrane. The introduction of f-MWNTs into the polymer matrix modified the polymer structure, and therefore the properties of the device. The proton conductivity, measured by an AC impedance technique using an open-frame, two-electrode cell, and the methanol permeability of the membranes were found to depend on the f-MWNT loading. The proton conductivity of the membranes increases with increasing f-MWNT concentration due to the increased content of conductive material. Methanol permeabilities measured at 60 °C were likewise found to depend on f-MWNT loading: the methanol permeability decreased from 1.5×10⁻⁶ cm²/s for the pure film to 0.8×10⁻⁷ cm²/s for a membrane containing 0.5 wt% f-MWNTs, because with an increasing proportion of f-MWNTs the matrix becomes more compact. DSC melting curves show that the polymer matrix with f-MWNTs is thermally stable, FT-IR studies show good interaction between EMA and f-MWNTs, and XRD analysis shows good crystalline behavior of the prepared membranes. Significant cost savings can be achieved when using the blended films, which contain less expensive polymers.
Keywords: fuel cell membrane, polydimethyl siloxane rubber, carbon nanotubes, proton conductivity, methanol permeability
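As background, the conversion commonly used to obtain proton conductivity from an AC impedance measurement in a two-electrode cell is

```latex
\sigma = \frac{L}{R_b\,A}
```

where R_b is the bulk membrane resistance read from the impedance spectrum (typically the high-frequency intercept on the real axis), L is the distance between the electrodes, and A is the cross-sectional area of the membrane carrying the current.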
Procedia PDF Downloads 413
975 Food Composition Tables Used as an Instrument to Estimate the Nutrient Ingest in Ecuador
Authors: Ortiz M. Rocío, Rocha G. Karina, Domenech A. Gloria
Abstract:
There are several tools to assess the nutritional status of a population, and a main instrument commonly used to build those tools is the food composition table (FCT). Despite the importance of FCTs, there are many error sources and variability factors that can arise in building these tables and can lead to under- or over-estimation of a population's nutrient intake. This work identified the different food composition tables used as instruments to estimate nutrient intake in Ecuador. Data for choosing FCTs were collected through key informants (self-completed questionnaires), supplemented by institutional web research. A questionnaire covering general variables (origin, year of edition, etc.) and methodological variables (method of elaboration, information in the table, etc.), defined on the basis of an extensive literature review, was applied to the identified FCTs. A descriptive content analysis was performed. Ten printed tables and three databases were reported, all indistinctly treated as food composition tables. We managed to obtain information from 69% of the references; several informants referred to printed documents that were not accessible, and internet searches were unsuccessful. Of the 9 final tables, n=8 are from Latin America, and n=5 of these were constructed by the indirect method (compilation of already published data), with a database of the United States Department of Agriculture (USDA) as the main source of information. One FCT was constructed using the direct method (bromatological analysis) and originates in Ecuador. 100% of the tables made a clear distinction between a food and its method of cooking, 88% of the FCTs expressed nutrient values per 100 g of edible portion, 77% gave precise additional information about the use of the table, and 55% presented all the macro- and micronutrients in a detailed way. The most complete FCTs were those of INCAP (Central America) and the Composition of Foods (Mexico); the most frequently cited was the Ecuadorian food composition table of 1965 (70%). The indirect method was used for most tables in this study. However, this method has the disadvantage of generating less reliable food composition tables, because foods vary in composition, so a database cannot accurately predict the composition of any isolated sample of a food product. In conclusion, weighing the pros and cons, and despite its being elaborated by the indirect method, it is considered appropriate to work with the FCT of INCAP Central America, given its proximity to Ecuador and a food item list that is very similar to ours. It is also imperative to keep as a reference the Ecuadorian food composition table which, although not updated, was constructed using the direct method with Ecuadorian foods. Hence, both tables will be used to elaborate a questionnaire for assessing the food consumption of the Ecuadorian population; in case of disparate values, only the INCAP values will be taken, because that table is updated.
Keywords: Ecuadorian food composition tables, FCT elaborated by direct method, ingest of nutrients of Ecuadorians, Latin America food composition tables
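The way a per-100 g table feeds an intake estimate can be sketched directly. The food entries and nutrient values below are illustrative placeholders, not data from the INCAP or Ecuadorian tables.

```python
# Sketch of how an FCT expressing nutrients per 100 g of edible portion
# (as 88% of the reviewed tables do) is used to estimate intake from a
# food-consumption questionnaire.
FCT = {
    # food: nutrients per 100 g edible portion (placeholder values)
    "rice, white, boiled": {"energy_kcal": 130, "protein_g": 2.7},
    "plantain, fried":     {"energy_kcal": 252, "protein_g": 1.5},
}

def intake(food, grams_consumed):
    """Scale per-100 g table values to the amount actually eaten."""
    per100 = FCT[food]
    return {nutrient: value * grams_consumed / 100 for nutrient, value in per100.items()}

# One reported meal: 180 g of rice and 90 g of fried plantain.
meal = [("rice, white, boiled", 180), ("plantain, fried", 90)]
total = {}
for food, grams in meal:
    for nutrient, value in intake(food, grams).items():
        total[nutrient] = total.get(nutrient, 0) + value
print(total)  # {'energy_kcal': 460.8, 'protein_g': 6.21}
```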
Procedia PDF Downloads 432
974 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology
Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao
Abstract:
With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of sensors and actuators are important research topics, given their effects on the level of vibration detection and reduction and on the amount of energy a controller must provide. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring a fitness function based on eigenvalues and eigenvectors for numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for small- and large-scale structures when a number of s/a pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton's principle. The current work takes the simplified approach of modelling a structure with sensors at all locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. The optimal sensor locations obtained agree well with published optimal locations, but with very much reduced computational effort and higher effectiveness. Furthermore, collocated sensor/actuator pairs placed in these locations give very effective active vibration reduction using an optimal linear quadratic control scheme.
Keywords: optimisation, plate, sensor effectiveness, vibration control
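The effectiveness index described above is straightforward to compute once modal sensor voltages are available. In the sketch below a random matrix stands in for the finite element results, and the array shapes (60 candidate locations, 6 modes) are illustrative assumptions.

```python
import numpy as np

# Sketch of the sensor-effectiveness ranking described in the abstract:
# model the structure with candidate sensors everywhere, excite the modes
# of interest, then score each location by its output voltage divided by
# the per-mode maximum, averaged over modes.
rng = np.random.default_rng(1)
n_locations, n_modes = 60, 6
V = np.abs(rng.normal(size=(n_locations, n_modes)))  # |sensor voltage| per mode

effectiveness = V / V.max(axis=0)      # fraction of the per-mode maximum
score = effectiveness.mean(axis=1)     # average effectiveness across all modes

best = np.argsort(score)[::-1][:6]     # six best sensor/actuator locations
print("chosen locations:", best)
print("scores:", np.round(score[best], 2))
```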
Procedia PDF Downloads 234
973 Innovative Technologies Functional Methods of Dental Research
Authors: Sergey N. Ermoliev, Margarita A. Belousova, Aida D. Goncharenko
Abstract:
Introduction. It is necessary to create a complex of innovative, highly informative and safe functional diagnostic methods to improve the quality of patient treatment through the early detection of stomatologic diseases. Application of a diagnostic complex of highly informative functional methods (electromyography, reodentography, laser Doppler flowmetry, reoperiodontography, vital computer capillaroscopy, optical tissue oximetry, laser fluorescence diagnosis) allows a multifactorial analysis of dental status and the prescription of complex etiopathogenetic treatment. The purpose of the present study was to investigate the etiology and pathogenesis of functional disorders identified in pathology of hard tissue, dental pulp, periodontium, oral mucosa and chewing function, and to create new approaches to the diagnosis of dental diseases. Material and methods. 172 patients were examined. The density of the hard tissues of the teeth and jaw bone was studied by intraoral ultrasonic densitometry (USD). The electromyographic activity of the masticatory muscles was assessed by electromyography (EMG). The functional state of the dental pulp vessels was assessed by reodentography (RDG) and laser Doppler flowmetry (LDF). Reoperiodontography (RPG) was used to study regional blood flow in the periodontal tissues. The periodontal microcirculatory vasculature was studied by vital computer capillaroscopy (VCC) and laser Doppler flowmetry (LDF). The metabolic level of the mucous membrane was determined by optical tissue oximetry (OTO) and laser fluorescence diagnosis (LFD). Results and discussion. The results revealed changes in the mineral density of the hard tissues of the teeth and jaw bone, in the bioelectric activity of the masticatory muscles, and in regional blood flow and microcirculation in the dental pulp and periodontal tissues. The LDF and OTO methods estimated fluctuations in saturation level and oxygen transport in the microvasculature of periodontal tissues. LFD identified changes in the concentration of compounds (nicotinamide, flavins, lipofuscin, porphyrins) involved in metabolic processes. Conclusion. Our preliminary results confirmed the feasibility and safety of the intraoral ultrasonic densitometry technique for assessing the density of periodontal bone tissue, and application of the diagnostic complex of the above-mentioned highly informative functional methods allows a multifactorial analysis of dental status and the prescription of complex etiopathogenetic treatment.
Keywords: electromyography (EMG), reodentography (RDG), laser Doppler flowmetry (LDF), reoperiodontography method (RPG), vital computer capillaroscopy (VCC), optical tissue oximetry (OTO), laser fluorescence diagnosis (LFD)
Procedia PDF Downloads 280
972 Study the Effect of Liquefaction on Buried Pipelines during Earthquakes
Authors: Mohsen Hababalahi, Morteza Bastami
Abstract:
Buried pipeline damage correlations are a critical part of the loss estimation procedures applied to lifelines for future earthquakes. The vulnerability of buried pipelines to earthquakes and liquefaction has been observed in previous earthquakes, and there are many comprehensive reports about such events. One of the main causes of damage to buried pipelines during earthquakes is liquefaction, the necessary conditions for which are loose sandy soil, saturation of the soil layer, and sufficient earthquake intensity. Because pipeline structures differ greatly from other structures (being long and having light mass), comparing the results of previous earthquakes for pipelines with those for other structures shows that the liquefaction hazard for buried pipelines is not high unless the effective parameters, such as earthquake intensity and loose soil conditions, are severe. Recent liquefaction research on buried pipelines includes experimental and theoretical studies as well as damage investigations during actual earthquakes. The damage investigations have revealed that the pipeline damage ratio (number/km) takes much larger values in liquefied ground than in shaken ground without liquefaction, according to damage statistics from past severe earthquakes, and that damage at joints and to pipelines connected to manholes was remarkable. The purpose of this research is the numerical study of buried pipelines under the effect of liquefaction, through a case study of the 2013 Dashti (Iran) earthquake. The water supply and electrical distribution systems of this township were interrupted during the earthquake, and the water transmission pipelines were damaged severely due to the occurrence of liquefaction. The model consists of a polyethylene pipeline, 100 meters long and 0.8 meters in diameter, covered by light sandy soil at a burial depth of 2.5 meters. Since the finite element method has been used relatively successfully to solve geotechnical problems, it was adopted for the numerical analysis. Evaluating this case requires geotechnical information, a classification of earthquake levels, determination of the parameters governing the probability of liquefaction, and three-dimensional finite element modeling of the interaction between soil and pipeline. The results of this study indicate that the effect of liquefaction is a function of pipe diameter, soil type, and peak ground acceleration, with a clear increase in the percentage of damage with increasing liquefaction severity. The results also indicate that although in this form of analysis the damage is always associated with a certain pipe material, the nominally defined 'failures' include failures of particular components (joints, connections, fire hydrant details, crossovers, laterals) rather than material failures. Finally, some retrofit suggestions are given in order to decrease the liquefaction risk to buried pipelines.
Keywords: liquefaction, buried pipelines, lifelines, earthquake, finite element method
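As background on the damage correlations mentioned in the opening sentence, one widely cited empirical form (ALA 2001, given here for context rather than as the model used in this paper) estimates the repair rate of buried pipe from peak ground velocity:

```latex
RR = K_1 \times 0.00187 \times PGV
```

where RR is the number of repairs per 1000 ft of pipe, PGV is the peak ground velocity in in./s, and K1 is a coefficient reflecting pipe material, joint type, diameter, and soil conditions; separate, steeper correlations in permanent ground deformation are applied where liquefaction-induced ground failure is expected.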
Procedia PDF Downloads 513
971 Assessing Children’s Probabilistic and Creative Thinking in a Non-formal Learning Context
Authors: Ana Breda, Catarina Cruz
Abstract:
Daily, we face unpredictable events, often attributed to chance, as there is no justification for their occurrence. Chance, understood as a source of uncertainty, is present in several aspects of human life, such as weather forecasts, dice rolling, and lotteries. Surprisingly, humans and some animals can quickly adjust their behavior to handle doubly stochastic processes (random events with two layers of randomness, like unpredictable weather affecting dice rolling) efficiently. This adjustment ability suggests that the human brain has built-in mechanisms for perceiving, understanding, and responding to simple probabilities, and it explains why current trends in mathematics education include probability concepts in official curricula from the third year of primary education onwards. In the first years of schooling, children learn to use a certain type of (specific) vocabulary, such as never, always, rarely, perhaps, likely, and unlikely, to help them perceive and understand the probability of events; these keywords are of crucial importance for their perception and understanding of probabilities. The development of probabilistic concepts comes from facts and cause-effect sequences resulting from the subject's actions, as well as from the notion of chance and intuitive estimates based on everyday experiences. As part of a junior summer school program at a Portuguese university, a non-formal learning experiment was carried out with 18 children in the 5th and 6th grades. The experiment was designed to be implemented in the dynamic of a serious ice-breaking game, to assess the children's levels of probabilistic, critical, and creative thinking in understanding impossible, certain, equally probable, likely, and unlikely events, and to gain insight into how the non-formal learning context influenced their achievements. The criteria used to evaluate probabilistic thinking included the creative ability to conceive events classified in the specified categories, the ability to properly justify the categorization, the ability to critically assess the events classified by other children, and the ability to make predictions based on a given probability. The data analysis employs a qualitative, descriptive, and interpretative approach based on students' written productions, audio recordings, and researchers' field notes. This methodology allowed us to conclude that such an approach is an appropriate and helpful formative assessment tool. The promising results of this initial exploratory study call for a future research study with children of these grade levels, from different regions, attending public or private schools, to validate and expand our findings.
Keywords: critical and creative thinking, non-formal mathematics learning, probabilistic thinking, serious game
Procedia PDF Downloads 28
970 Use of Locomotor Activity of Rainbow Trout Juveniles in Identifying Sublethal Concentrations of Landfill Leachate
Authors: Tomas Makaras, Gintaras Svecevičius
Abstract:
Landfill waste is a common problem, as it has an economic and environmental impact even after a landfill is closed. Landfill waste contains a high density of various persistent compounds, such as heavy metals and organic and inorganic materials. As persistent compounds are slowly degradable or even non-degradable in the environment, they often produce sublethal or even lethal effects on aquatic organisms. The aims of the present study were to estimate the sublethal effects of Kairiai landfill (WGS: 55°55‘46.74“, 23°23‘28.4“) leachate on the locomotor activity of rainbow trout Oncorhynchus mykiss juveniles, using an original system package developed in our laboratory for automated monitoring, recording and analysis of the activity of aquatic organisms, and to determine patterns of fish behavioral response to sublethal leachate levels. Four leachate concentrations were chosen: 0.125, 0.25, 0.5 and 1.0 mL/L (0.0025, 0.005, 0.01 and 0.02 of the 96-hour LC50, respectively). Locomotor activity was measured after 5, 10 and 30 minutes of exposure during 1-minute test periods for each fish (7 fish per treatment). The threshold effect concentration amounted to 0.18 mL/L (0.0036 of the 96-hour LC50). This concentration was found to be 2.8-fold lower than the concentration generally assumed to be 'safe' for fish. At higher concentrations, the landfill leachate solution elicited a behavioral response of the test fish to sublethal levels of pollutants: the ability of the rainbow trout to detect and avoid contaminants appeared after 5 minutes of exposure, and the intensity of locomotor activity reached a peak within 10 minutes, evidently decreasing after 30 minutes. This could be explained by the physiological and biochemical adaptation of fish to altered environmental conditions. It was established that the locomotor activity of juvenile trout depends on leachate concentration and exposure duration. Modeling of these parameters showed that the activity of juveniles increased at higher leachate concentrations but slightly decreased with increasing exposure duration. The experimental results confirm that the behavior of rainbow trout juveniles is a sensitive and rapid biomarker that, combined with the system for monitoring, recording and analyzing fish behavior, can be used to determine sublethal concentrations of pollutants in ambient water. Further research should focus on software improvements to include more parameters of aquatic organism behavior and to investigate the most rapid and appropriate behavioral responses in different species. In practice, this study could form the basis for the development of biological early-warning systems (BEWS).
Keywords: fish behavior biomarker, landfill leachate, locomotor activity, rainbow trout juveniles, sublethal effects
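The concentration/duration model referred to above can be sketched as an ordinary least-squares fit. The activity readings below are hypothetical placeholders, not the study's measurements; only the concentrations and exposure times come from the abstract.

```python
import numpy as np

# Sketch of fitting locomotor activity as a linear function of leachate
# concentration and exposure time. Activity values are hypothetical
# stand-ins for the automated system's recordings.
conc = np.repeat([0.125, 0.25, 0.5, 1.0], 3)   # mL/L, from the abstract
time = np.tile([5, 10, 30], 4)                  # minutes of exposure
activity = np.array([12, 15, 11, 16, 19, 14, 22, 26, 20, 30, 34, 27], float)

X = np.column_stack([np.ones_like(conc), conc, time])
beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
print("intercept %.2f, conc slope %.2f, time slope %.2f" % tuple(beta))
# A positive concentration slope and a negative time slope reproduce the
# reported pattern: activity rises with concentration and falls slightly
# with exposure duration.
```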
Procedia PDF Downloads 273