Search results for: computer based environment
1111 Development of a Culturally Safe Wellbeing Intervention Tool for and with the Inuit in Quebec
Authors: Liliana Gomez Cardona, Echo Parent-Racine, Joy Outerbridge, Arlene Laliberté, Outi Linnaranta
Abstract:
Suicide rates among Inuit in Nunavik are six to eleven times higher than the Canadian average. Colonization, religious missions, residential schools, as well as economic and political marginalization, are factors that have challenged the well-being and mental health of these populations. In psychiatry, screening for mental illness is often done using questionnaires in which patients are asked to report how often they experience certain symptoms. However, the Indigenous view of mental wellbeing may not fit well with this approach. Moreover, biomedical treatments do not always meet the needs of Indigenous peoples because they do not account for the culture and traditional healing methods that persist in many communities. The objectives were to assess whether the questionnaires commonly used in psychiatry to measure symptoms are appropriate and culturally safe for the Inuit in Quebec, and to identify the most appropriate tool to assess and promote wellbeing, following the process necessary to improve its cultural sensitivity and safety for the Inuit population. This qualitative, collaborative, participatory action research project respects First Nations and Inuit protocols and the principles of ownership, control, access, and possession (OCAP). Data collection was based on five focus groups with stakeholders working with these populations and members of Indigenous communities. Thematic analysis of the collected data was carried out through an advisory group that led a revision of the content, use, and cultural and conceptual relevance of the instruments. The questionnaires measuring psychiatric symptoms face significant limitations in the local Indigenous context. We present the factors that make these tools not relevant among Inuit.
Although the scale called the Growth and Empowerment Measure (GEM) was originally developed among Indigenous Australians, the Inuit in Quebec found that this tool captures critical aspects of their mental health and wellbeing more respectfully and accurately than questionnaires focused on measuring symptoms. We document the process of cultural adaptation of this tool, which was supported by community members to create a culturally safe tool that supports resilience and empowerment. The cultural adaptation of the GEM provides valuable information about the factors affecting wellbeing and contributes to mental health promotion. This process improves mental health services by giving health care providers useful information about the Inuit population and their clients. We believe that integrating this tool in interventions can help create a bridge to improve communication between the Indigenous cultural perspective of the patient and the biomedical view of health care providers. Further work is needed to confirm the clinical utility of this tool in psychological and psychiatric intervention along with social and community services.
Keywords: cultural adaptation, cultural safety, empowerment, Inuit, mental health, Nunavik, resiliency
Procedia PDF Downloads 118
1110 Informative, Inclusive and Transparent Planning Methods for Sustainable Heritage Management
Authors: Mathilde Kirkegaard
Abstract:
The paper will focus on heritage management that integrates the local community, and argue for an obligation to integrate this social aspect in heritage management. By broadening the understanding of heritage, sustainable heritage management takes its departure in more than the continual conservation of the physicality of heritage. The social aspect, or the local community, is overlooked in many governmental heritage management situations and is not managed through community-based urban planning methods, e.g. citizen inclusion, a transparent process, and informative and inviting initiatives. Historical sites are often described with embracing terms such as “ours” and “us”: “our history” and “a history that is part of us”. Heritage is not static; it is a link between the life that has been lived in the historical frames and the life that defines it today. This view of heritage is rooted in the effort to ensure that heritage sites, besides securing national historical interest, have a value for the people affected by them: those living in them or visiting them. Antigua Guatemala is a UNESCO-defined heritage site, and this site is being ‘threatened’ by tourism, habitation and recreation. In other words, ‘the use’ of the site is considered a threat to the preservation of the heritage. Contradictorily, the same types of use (tourism and habitation) can also be considered a development opportunity, and perhaps even a sustainable management solution. ‘The use’ of heritage is interlinked with the perspective that heritage sites ought to have a value for people today; in other words, heritage sites should be comprised of a contemporary substance. Heritage is entwined in its context of physical structures and the social layer. A synergy between the use of heritage and knowledge about the heritage can generate a sustainable preservation solution.
The paper will exemplify this symbiosis with different examples of heritage management centred around local community inclusion. The inclusive method is not new in architectural planning; it refers to a balance between top-down and bottom-up decision making, and it can be pursued through designs of an inclusive nature. Catalyst architecture is a planning method that strives to move the process of design solutions into the public space. Through process-orientated designs, or catalyst designs, the community can gain insight into the process or be invited to participate in it. A balance between bottom-up and top-down in the development process of a heritage site can, in relation to management measures, be understood to generate a socially sustainable solution. The ownership and engagement that can be created among the local community, along with the use that can ultimately yield an economic benefit, can support delegation of maintenance and preservation. Informative, inclusive and transparent planning methods can generate heritage management that is long-term due to the collective understanding and effort. This method handles sustainable management on two levels: the current preservation necessities and the long-term management, while ensuring a value for people today.
Keywords: community, intangible, inclusion, planning
Procedia PDF Downloads 118
1109 Analysis of the Barriers and Aids That Lecturers Offer to Students with Disabilities
Authors: Anabel Moriña
Abstract:
In recent years, advances have been made in disability policy at Spanish universities, especially in terms of creating more inclusive learning environments. Nevertheless, while efforts to foster inclusion at the tertiary level, and the growing number of students with disabilities at university, are clear signs of progress, serious barriers to full participation in learning still exist. The research shows that university responses to diversity tend to be reactive, not proactive; as a result, higher education (HE) environments can be especially disabling. It has been demonstrated that the performance of students with disabilities is closely linked to the goodwill of university faculty and staff. Lecturers are key players when it comes to helping or hindering students throughout the teaching/learning process. This paper presents an analysis of how lecturers respond to students with disabilities, the initial question being: do lecturers aid or hinder students? The general aim is to analyse, by listening to the students themselves, the barriers and supports from lecturers identified as affecting academic performance and the overall perception of the higher education (HE) experience. A biographical-narrative methodology was employed, and the results were analysed by field of knowledge. The research was conducted in two phases: discussion groups along with individual oral/written interviews were set up with 44 students with disabilities, and mini life histories were completed for 16 students who participated in the first stage. The study group consisted of students with disabilities enrolled during three academic years. The results indicate that participating students identified many more barriers than bridges when speaking about the role lecturers play in their learning experience.
Findings are grouped into several categories: faculty attitudes when “dealing with” students with disabilities, teaching methodologies, curricular adaptations, and faculty training in working with students. Faculty do not always display appropriate attitudes towards students with disabilities; study participants speak of lecturers turning their backs on their problems or behaving in an awkward manner. In many cases, it seems lecturers feel that curricular adaptations of any kind are a form of favouritism. Positive attitudes, however, often depend almost entirely on the goodwill of faculty and, although well received by students, are hard to come by. As the participants themselves suggest, this study confirms that good teaching practices benefit not only students with disabilities but the student body as a whole. In this sense, inclusive curricula provide new opportunities for all students. A general finding was the lack of training among lecturers to adequately attend to disabled students, and the need to address this shortage. This can become a primary barrier and is more often due to deficient faculty training than to inappropriate attitudes on the part of lecturers. In conclusion, based on this research, we find that more barriers than bridges exist. That said, students do report receiving a good deal of support from their lecturers, although almost exclusively in a spirit of goodwill; when lecturers do help, however, it tends to have a very positive impact on students' academic performance.
Keywords: barriers, disability, higher education, lecturers
Procedia PDF Downloads 255
1108 Domestic Violence Against Women (With Special Reference to India): A Human Rights Issue
Authors: N. B. Chandrakala
Abstract:
Domestic violence is one of the most under-reported crimes. The problem with domestic violence is that it is not even considered abuse in many parts of the world, especially certain parts of Asia, Africa and the Middle East, where it is viewed as “doing the needful”. Domestic violence can take the form of emotional harassment, physical injury or psychological abuse perpetrated by one family member against another. It is a worldwide phenomenon mainly targeting women, and the acts of violence have a terrible negative impact on women. It is also an infringement of women’s rights and can safely be termed a human rights abuse. In cases of domestic violence, male adults often misuse their authority and power to control another person using physical or psychological means. Violence and other forms of abuse, such as sexual assault, molestation and battering, are common in these cases. Domestic violence is a human rights issue and a serious deterrent to development. Domestic violence can also take subtle forms, such as making the person feel worthless or denying the victim any personal space or freedom. The problematic aspect is that cases of domestic violence are very rarely reported. The majority of victims are women, but children are also made to suffer silently: they are abused and neglected, and their innocent minds are adversely affected by incidents of domestic violence. According to a report by the World Health Organization (WHO), sexual trafficking, female feticide, dowry death, public humiliation and physical torture are some of the most common forms of domestic violence against Indian women. Such acts belie our growth and our claim to be an economic superpower. It is ironic that we claim to be one of the most rapidly advancing countries in the world and yet have done hardly anything of note against social hazards like domestic violence. Laws are not stringent when it comes to reporting acts of domestic violence.
Even if a report is filed, it turns out to be a long-drawn-out process, and not every victim has the resources to fight till the end. It is also a social taboo to make one's family matters public. The big challenge now is to enforce the law in a true sense. The steps actually needed are tough laws against domestic violence, speedy execution, and a change in the mindset of society; only then can we expect some improvement in such inhuman cases. An effective response to violence must be multi-sectoral: addressing the immediate practical needs of women experiencing abuse; providing long-term follow-up and assistance; and focusing on changing those cultural norms, attitudes and legal provisions that promote the acceptance of, and even encourage, violence against women and undermine women's enjoyment of their full human rights and freedoms. Hence, responses to the problem must be based on an integrated approach. The effectiveness of measures and initiatives will depend on the coherence and coordination associated with their design and implementation.
Keywords: domestic violence, human rights, sexual assaults, World Health Organization
Procedia PDF Downloads 542
1107 Tax Administration Constraints: The Case of Small and Medium Size Enterprises in Addis Ababa, Ethiopia
Authors: Zeleke Ayalew Alemu
Abstract:
This study aims to investigate tax administration constraints in Addis Ababa, with a focus on small and medium-sized enterprises, by identifying issues and constraints in tax administration and assessment. The study identifies problems associated with taxpayers and tax-collecting authorities in the city. The research used qualitative and quantitative research designs and employed questionnaires, focus group discussions and key informant interviews for primary data collection, and also used secondary data from different sources. The study identified many constraints that taxpayers are facing. Among others, tax administration offices’ inefficiency, reluctance to respond to taxpayers’ questions, limited tax assessment and administration knowledge and skills, and corruption and unethical practices are the major ones. Besides, the tax laws and regulations are complex and not enforced equally and fully on all taxpayers, causing a prevalence of business entities not paying taxes. This apparently results in an uneven playing field. Consequently, the tax system at present is neither fair nor transparent and increases compliance costs. In case of dispute, the appeal process is excessively long and the tax authority’s decision is irreversible. The Value Added Tax (VAT) administration and compliance system is not well designed, and VAT has created economic distortion between VAT-registered and non-registered taxpayers. Cash register machine administration and the reporting system are big headaches for taxpayers. With regard to taxpayers, there is a lack of awareness of tax laws and documentation. Based on the above and other findings, the study forwarded recommendations, such as ensuring fairness and transparency in tax collection and administration, enhancing the efficiency of tax authorities by use of modern technologies and upgrading human resources, conducting extensive awareness creation programs, and enforcing tax laws in a fair and equitable manner.
The objective of this study is to assess the problems, weaknesses and limitations of small and medium-sized enterprise taxpayers, tax authority administrations, and laws as sources of inefficiency and dissatisfaction, in order to forward recommendations that bring about efficient, fair and transparent tax administration. The entire study was conducted in a participatory and process-oriented manner by involving all partners and stakeholders at all levels. Accordingly, the researcher used participatory assessment methods in generating both secondary and primary data, as well as both qualitative and quantitative data, in the field. The research team held FGDs with 21 people from Addis Ababa City Administration tax offices and selected medium and small taxpayers. The study team also conducted 10 KIIs with informants selected from the various segments of stakeholders. The lead researcher, along with research assistants, conducted the KIIs using a predesigned semi-structured questionnaire.
Keywords: taxation, tax system, tax administration, small and medium enterprises
Procedia PDF Downloads 73
1106 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness
Abstract:
Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson’s ratio behavior and tunable properties. Conventional auxetic lattice structures, for which the deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports the finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structural design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was determined according to Maxwell’s stability criterion. The finite element (FE) models of the 2D lattice structures, established with stainless steel material, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was performed to investigate the effect of the structural parameters on Poisson’s ratio and the mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson’s ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a 3D geometry configuration using the orthogonal splicing method. The numerical results for the 2D and 3D structures under quasi-static compressive loading were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption.
As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson’s ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions in this study suggest that the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a simultaneous requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb
Procedia PDF Downloads 87
1105 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint
Authors: Juliane Spaak
Abstract:
A life-cycle assessment was performed, looking at seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building’s life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and resilience using FEMA P58 and PACT software. With the increasing interest in Zero Carbon targets, significant changes in the building and construction sector are required. Emissions for buildings arise from both embodied carbon and operations. Based on the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of our cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building’s embodied carbon. New Zealand’s building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity to move away from constructing code-minimum structures that are designed for life safety but are frequently ‘disposable’ after a moderate earthquake event, especially in relation to a structure’s ability to minimize damage. This means weaker buildings sit as ‘carbon liabilities’, with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the buildings sector, as breathing new life into a building’s structure is vastly more sustainable than the highest quality ‘green’ new builds, which are inherently more carbon-intensive. 
The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society’s desire for a lower-carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming an asset’s revenue generation. Across the global real estate market, tenants are increasingly demanding that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and ‘sustainable design’ has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset’s expected lifespan, be that earthquakes, storms, bushfires, fires, and so on, with financial mitigation (e.g., insurance) forming part, but not all, of the picture.
Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient
Procedia PDF Downloads 73
1104 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries
Authors: Eva Masson, Andrea Kübler
Abstract:
Anxiety disorders (AD) are the most prevalent psychological disorders. However, far from all affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria, which rely on symptoms (according to the DSM-5 definition) with no objective biomarker. Approach-avoidance conflict tasks are one common approach to simulating such disorders in a lab setting, with most paradigms focusing on the relationships between behavior and neurophysiology. Approach-avoidance conflict tasks typically place participants in a situation where they have to make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Furthermore, behavioral validation of such paradigms adds credibility to the tasks: with overt conflict behavior, it is safer to assume that the task actually induced a conflict. Some of these tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS. However, there is currently no consensus on the direction of the frontal activation. The authors present here a modified version of the T-Maze paradigm, a motivational conflict desktop task, in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, the HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational conflict desktop task consisted of three blocks of repeated trials. Each block was designed to elicit a slightly different behavioral pattern, to increase the chances of eliciting conflict. These behavioral patterns were, however, similar enough to allow comparison of the number of trials categorized as ‘overt conflict’ between the blocks. Results: Overt conflict behavior was exhibited in all blocks, but on average for under 10% of the trials in each block.
However, changing the order of the paradigms successfully introduced a ‘reset’ of the conflict process, thereby providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict compared to the other trials. More specifically, we expect elevated alpha frequency power in the left frontal electrodes at around 200 ms post-cueing, compared to the right (relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the frontal asymmetry discussion and its relationship with the BIS. Furthermore, with the present task focusing on a very particular type of motivational approach-avoidance conflict, it would open the door to further variations of the paradigm to introduce different kinds of conflicts involved in AD. Even though its application as a potential biomarker sounds difficult, because of the individual reliability of both the task and peak frequency in the alpha range, we hope to open the discussion on task robustness for future neuromodulation and neurofeedback applications.
Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG
Procedia PDF Downloads 38
1103 Rethinking Modernization Strategy of Muslim Society: The Need for Value-Based Approach
Authors: Louay Safi
Abstract:
The notion of secular society that evolved over the last two centuries was initially intended to free the public sphere from religious imposition, before it assumed the form of a comprehensive ideology whose aim is to prevent any overt religious expression in public space. The negative view of religious expression, and the desire by political elites to purge the public space of all forms of religious expression, were first experienced in the Middle East in the last decades of the twentieth century in relation to Islam, before manifesting in twenty-first-century Europe. Arab regimes were encouraged by European democracies to marginalize all forms of public religious expression as part of the secularization process deemed necessary for modernization and progress. The prohibition of Islamic symbols and the outlawing of the headscarf were first undertaken in Middle Eastern republics, such as Turkey in the 1930s and Syria in the 1970s, before being implemented recently in France. Secularization has been perceived by European powers as the central aspect of social and political liberalization and was given priority over democratization and human rights, so much so that European elites were willing to entrust the task of nurturing liberal democracy to Arab autocrats and dictators. Not only did the strategy of empowering autocratic regimes to effect liberal democratic culture fail, but it contributed to the rise of Islamist extremism and produced failed states in Syria and Iraq that undermine both national and global peace and stability. The paper adopts the distinction made by John Rawls between political and comprehensive liberalism to argue that modernization via secularization in Muslim societies is counterproductive and has subverted early successful efforts at democratization and reform in the Middle East.
Using case studies that illustrate the role of the secularization strategy in Syria, Iran, and Egypt in undermining democratic and reformist movements in those countries, the paper calls for adopting a different approach, rooted in liberal and democratic values rather than cultural practices and lifestyle. The paper shows that Islamic values, as articulated by reform movements, support a democratic and pluralist political order, and emphasizes the need to legitimize and support social forces that advocate democracy and human rights. Such an alternative strategy allows for internal competition among social groups for popular support, and therefore enhances the chances that those with inclusive and forward-looking political principles and policies would create a democratic and pluralist political order more conducive to meaningful national and global cooperation and respectful of human dignity.
Keywords: democracy, Islamic values, political liberalism, secularization
Procedia PDF Downloads 168
1102 Destruction of Colon Cells by Nanocontainers of Ferromagnetic
Authors: Lukasz Szymanski, Zbigniew Kolacinski, Grzegorz Raniszewski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza, Karolina Przybylowska-Sygut, Ireneusz Majsterek, Zbigniew Kaminski, Justyna Fraczyk, Malgorzata Walczak, Beata Kolasinska, Adam Bednarek, Joanna Konka
Abstract:
The aim of this work is to investigate the influence of electromagnetic fields in the radio-frequency range on nanoparticles designed for cancer therapy. The article presents the development and demonstration of a method and a model device for the hyperthermic selective destruction of cancer cells. The method is based on the synthesis and functionalization of carbon nanotubes serving as nanocontainers for ferromagnetic material. The methodology for producing carbon ferromagnetic nanocontainers (FNCs) includes: the synthesis of carbon nanotubes, chemical and physical characterization, increasing the content of ferromagnetic material, and biochemical functionalization involving the attachment of key addressing molecules. The ferromagnetic nanocontainers were synthesised in a CVD and microwave plasma system. Biochemical functionalization of the ferromagnetic nanocontainers is necessary to increase selective binding to receptors present on the surface of tumour cells. A multi-step modification procedure was ultimately used to attach folic acid to the surface of the ferromagnetic nanocontainers. Pristine ferromagnetic carbon nanotubes are not suitable for application in medicine and biotechnology; appropriate functionalization of ferromagnetic carbon nanotubes yields materials useful in medicine. The final product carries folic acid on the surface of the FNCs. Folic acid is a ligand of folate receptor α, which is overexpressed on the surface of epithelial tumour cells. It is expected that the folic acid will be recognized and selectively bound by receptors present on the surface of tumour cells. In our research, FNCs were covalently functionalized in a multi-step procedure. The ferromagnetic carbon nanotubes were oxidized using different oxidative agents; for this purpose, strong acids such as HNO3, or a mixture of HNO3 and H2SO4, were used.
Reactive carbonyl and carboxyl groups were formed at the open ends and at defects on the sidewalls of the FNCs. These groups allow further modification of the FNCs, such as amidation and the introduction of appropriate linkers that separate the solid surface of the FNCs from the ligand (folic acid). In our studies, an amino acid and a peptide were applied as ligands. The last step of the chemical modification was a condensation reaction with folic acid. In all reactions, derivatives of 1,3,5-triazine were used as coupling reagents. The first trials with the device and its hyperthermia RF generator have been carried out. The frequency of the RF generator was in the ranges from 10 to 14 MHz and from 265 to 621 kHz. The functionalized nanoparticles made it possible to reach the denaturation temperature of tumour cells at the given frequencies.
Keywords: cancer colon cells, carbon nanotubes, hyperthermia, ligands
Procedia PDF Downloads 313
1101 Belonging without Believing: Life Narratives of Six Social Generations of Members of the Apostolic Society
Authors: Frederique A. Demeijer
Abstract:
This article addresses the religious beliefs of members of the Apostolic Society – a Dutch religious community wherein the oldest living members were raised with very different beliefs than those upheld today. Currently, the Apostolic Society is the largest liberal religious community in the Netherlands, consisting of roughly 15,000 members. It is characterized by its close-knit community life and the importance of its apostle: the spiritual leader who writes a weekly letter around which the Sunday morning service is centered. The society sees itself as ‘religious-humanistic’, inspired by its Judeo-Christian roots without being dogmatic. Only a century earlier, the beliefs of the religious community revolved more strongly around the Bible, with the apostle as a link to Christ. The community also believed in the return of the Lord, resonating with the millenarian roots of the community in 1830. Thus, the oldest living members have experienced fundamental changes in beliefs and rituals, yet remained members. This article reveals how members experience(d) their religious beliefs and feelings of belonging to the community, how these may or may not have changed over time, and what role the Apostolic Society played in their lives. The article presents a qualitative research approach based on two main pillars. First, life narrative interviews were conducted, to work inductively and allow different interview topics to emerge. Second, it uses generational theory, in three ways: 1) to select respondents; 2) to guide the interview methodology – by being sensitive to differences in socio-historical context and events experienced during the formative years of interviewees of different social generations; and 3) to analyze and contextualize the qualitative interview data. The data were gathered from 27 respondents, belonging to six social generations. All interviews were recorded, transcribed, coded, and analyzed using the Atlas.ti software program.
First, the older generations talk about growing up with the Apostolic Society being absolutely central in their daily and spiritual lives. They spent most of their time with fellow members and dedicated their free time to Apostolic activities. The central beliefs of the Apostolic Society were clear and strongly upheld, and they experienced a strong sense of belonging. Although they now see the set of central beliefs as more individually interpretable and are relieved not to have to spend all that time on Apostolic activities anymore, they still regularly attend services and speak longingly of the past with its strong belief and belonging. Second, the younger generations speak of growing up with a non-dogmatic, religious-humanist set of beliefs, but still with a very strong sense of belonging to the religious community. They now attend services irregularly and talk about belonging, but not as strongly as the older generations do. Third, across the generations, members spend more time outside of the Apostolic Society than within it. The way they speak about their religious beliefs is fluid and differs as much within generations as between them: for example, there is no central view on what God is. It seems the experience of members of the Apostolic Society across different generations can now be characterized as belonging without believing.
Keywords: generational theory, individual religious experiences, life narrative history interviews, qualitative research design
Procedia PDF Downloads 111
1100 A Survey of Digital Health Companies: Opportunities and Business Model Challenges
Authors: Iris Xiaohong Quan
Abstract:
The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms such as digital health, e-health, mHealth, and telehealth have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications, and solutions that are connected to the Internet to address the health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to remotely deliver clinical health services to patients. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing other than the cultural transformation healthcare has been going through in the 21st century because of digital health technologies that provide data to both patients and medical professionals. As digital health is burgeoning but research in the area is still inadequate, our paper aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers, and investors. We searched a few keywords in the Crunchbase database based on companies’ self-descriptions: digital health, e-health, and telehealth. A search of “digital health” returned 941 unique results, “e-health” returned 167 companies, while “telehealth” returned 427.
We also searched the CBInsights database for similar information. After merging the results, removing duplicates, and cleaning up the database, we came up with a list of 1,464 digital health companies. A qualitative method will be used to complement the quantitative analysis. We will do an in-depth case analysis of three successful unicorn digital health companies to understand how business models evolve and discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the decade since 2010. 75% of the digital health companies have fewer than 50 employees, and almost 50% have fewer than 10 employees. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called “3P” (patient, physicians, and payer), digital health companies extend this to “5P” by adding patents, which is the result of technology requirements (such as the development of artificial intelligence models), and platform, which is an effective value creation approach to bring the stakeholders together. Our case analysis will detail the 5P framework and contribute to the extant knowledge on business models in the healthcare industry.
Keywords: digital health, business models, entrepreneurship opportunities, healthcare
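The merge-and-deduplicate step described above can be sketched in a few lines. This is an illustrative Python sketch only, not the authors' actual pipeline: the company names, the normalization rule, and the `merge_company_lists` helper are assumptions for demonstration.

```python
# Hypothetical sketch of merging keyword-search result sets from two
# databases and dropping duplicates. Names and counts are illustrative.

def merge_company_lists(*result_sets):
    """Merge several lists of company names and drop duplicates,
    normalizing names so trivial variants collapse together."""
    seen = {}
    for results in result_sets:
        for name in results:
            key = name.strip().lower()  # crude normalization
            if key not in seen:
                seen[key] = name.strip()
    return sorted(seen.values())

digital_health = ["Teladoc Health", "Livongo", "Hims & Hers"]
e_health = ["livongo", "Babylon Health"]       # duplicate, different case
telehealth = ["Teladoc Health ", "Amwell"]     # duplicate, stray space

companies = merge_company_lists(digital_health, e_health, telehealth)
print(len(companies))  # 5 unique companies after deduplication
```

In practice a real cleanup would also reconcile legal suffixes and spelling variants, but the core set-building logic is the same.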
Procedia PDF Downloads 183
1099 Beyond Objectification: Moderation Analysis of Trauma and Overexcitability Dynamics in Women
Authors: Ritika Chaturvedi
Abstract:
Introduction: Sexual objectification, characterized by the reduction of an individual to a mere object of sexual desire, remains a pervasive societal issue with profound repercussions for individual well-being. Such experiences, often rooted in systemic and cultural norms, have long-lasting implications for mental and emotional health. This study aims to explore the intricate relationship between experiences of sexual objectification and insidious trauma, further investigating the potential moderating effects of overexcitabilities as proposed by Dabrowski's theory of positive disintegration. Methodology: The research involved a comprehensive cohort of 204 women, spanning ages from 18 to 65 years. Participants were tasked with completing self-administered questionnaires designed to capture their experiences with sexual objectification. Additionally, the questionnaire assessed symptoms indicative of insidious trauma and explored overexcitabilities across five distinct domains: emotional, intellectual, psychomotor, sensual, and imaginational. Employing advanced statistical techniques, including multiple regression and moderation analysis, the study sought to decipher the intricate interplay among these variables. Findings: The study's results revealed a compelling positive correlation between experiences of sexual objectification and the onset of symptoms indicative of insidious trauma. This correlation underscores the profound and detrimental effects of sexual objectification on an individual's psychological well-being. Interestingly, the moderation analyses introduced a nuanced understanding, highlighting the differential roles of various overexcitabilities. Specifically, emotional, intellectual, and sensual overexcitabilities were found to exacerbate trauma symptomatology. In contrast, psychomotor overexcitability emerged as a protective factor, demonstrating a mitigating influence on the relationship between sexual objectification and trauma.
Implications: The study's findings hold significant implications for a diverse array of stakeholders, encompassing mental health practitioners, educators, policymakers, and advocacy groups. The identified moderating effects of overexcitabilities emphasize the need for tailored interventions that consider individual differences in coping and resilience mechanisms. By recognizing the pivotal role of overexcitabilities in modulating the traumatic consequences of sexual objectification, this research advocates for the development of more nuanced and targeted support frameworks. Moreover, the study underscores the importance of continued research endeavors to unravel the intricate mechanisms and dynamics underpinning these relationships. Such endeavors are crucial for fostering the evolution of informed, evidence-based interventions and strategies aimed at mitigating the adverse effects of sexual objectification and promoting holistic well-being.
Keywords: sexual objectification, insidious trauma, emotional overexcitability, intellectual overexcitability, sensual overexcitability, psychomotor overexcitability, imaginational overexcitability
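A moderation analysis of the kind described in the methodology reduces to testing an interaction term in a multiple regression. The sketch below is illustrative only: the data are simulated, and the variable names and effect sizes are assumptions, not the study's results.

```python
import numpy as np

# Illustrative moderation analysis: regress trauma symptoms on
# objectification, an overexcitability score, and their interaction.
# All data here are simulated; a positive interaction coefficient would
# indicate that the overexcitability amplifies the objectification effect.

rng = np.random.default_rng(0)
n = 204  # matches the cohort size reported above, data are synthetic
objectification = rng.normal(0, 1, n)
overexcitability = rng.normal(0, 1, n)
trauma = (0.5 * objectification
          + 0.2 * overexcitability
          + 0.3 * objectification * overexcitability  # moderation effect
          + rng.normal(0, 0.5, n))

# Design matrix: intercept, two main effects, and the interaction term
X = np.column_stack([np.ones(n),
                     objectification,
                     overexcitability,
                     objectification * overexcitability])
coef, *_ = np.linalg.lstsq(X, trauma, rcond=None)
print(coef)  # coef[3] estimates the moderation (interaction) effect
```

A protective moderator, like the psychomotor domain reported above, would show up as a negative interaction coefficient instead.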
Procedia PDF Downloads 46
1098 Triple Case Phantom Tumor of Lungs
Authors: Angelis P. Barlampas
Abstract:
Introduction: The term phantom lung mass describes an ovoid collection of fluid within an interlobular fissure, which initially creates the impression of a mass. The problem of correct differential diagnosis is considerable, especially in plain radiography. A case is presented with three nodular pulmonary foci whose shape, location, and density, together with the presence of chronic loculated pleural effusions, suggest the presence of multiple phantom tumors of the lung. Purpose: The aim of this paper is to draw the attention of non-experienced and non-specialized physicians to the existence of benign findings that mimic pathological conditions and vice versa. Careful study of a radiological examination, and comparison with previous exams or further workup, protects against hasty wrong conclusions. Methods: A hospitalized patient underwent a non-contrast CT scan of the chest as part of the general assessment of her condition. Results: Computed tomography revealed pleural effusions, some of them loculated, an increased cardiothoracic index, and the presence of three nodular foci, one in the left lung and two in the right, with a maximum density of up to 18 Hounsfield units and a mean diameter of approximately five centimeters. Two of them are located in the characteristic anatomical position of the major interlobular fissure. The third is located in the posterior basal part of the right lower lobe; it presents the same characteristics as the previous ones and is likely a loculated fluid collection within an accessory interlobular fissure or a cyst, in the context of the patient's more general pleural entrapments and loculations.
The differential diagnosis of the nodular foci based on their imaging characteristics includes the following: a) rare metastatic foci with low density (liposarcoma, mucinous tumors of the digestive or genital system, necrotic metastatic foci, metastatic renal cancer, etc.), b) necrotic foci of multiple primary lung tumors (squamous cell cancer, etc.), c) hamartomas of the lung, d) fibrous tumors of the interlobular fissures, e) lipoid pneumonia, f) fluid collections within the interlobular fissures, g) lipoma of the lung, h) myelolipomas of the lung. Conclusions: A collection of fluid within an interlobular fissure of the lung can give the false impression of a lung mass, particularly on plain chest radiography. With computed tomography, the ability to measure the density of a lesion, combined with the high anatomical detail provided on the location and characteristics of the lesion, can lead relatively easily to the correct diagnosis. In cases of doubt or image artifacts, comparison with previous or subsequent examinations can resolve any discrepancies, while in rare cases, intravenous contrast may be necessary.
Keywords: phantom mass, chest CT, pleural effusion, cancer
Procedia PDF Downloads 55
1097 C-Spine Imaging in a Non-trauma Centre: Compliance with NEXUS Criteria Audit
Authors: Andrew White, Abigail Lowe, Kory Watkins, Hamed Akhlaghi, Nicole Winter
Abstract:
The timing and appropriateness of diagnostic imaging are critical to the evaluation and management of traumatic injuries. Within the subclass of trauma patients, the prevalence of c-spine injury is less than 4%. However, the incidence of delayed diagnosis within this cohort has been documented as up to 20%, with inadequate radiological examination the most commonly cited issue. To identify patients in whom c-spine injury cannot be fully excluded by clinical examination alone, and who should therefore undergo diagnostic imaging, a set of criteria is used to provide clinical guidance. The NEXUS (National Emergency X-Radiography Utilisation Study) criteria are a validated clinical decision-making tool used to facilitate selective c-spine radiography. The criteria allow clinicians to determine whether cervical spine imaging can be safely avoided in appropriate patients. The NEXUS criteria are widely used within the Emergency Department setting, given their ease of use and relatively straightforward application, and are used in the Victorian State Trauma System’s guidelines. This audit used retrospective data collection to examine the concordance of c-spine imaging in trauma patients with the NEXUS criteria and to assess compliance with state guidance on diagnostic imaging in trauma. Of the 183 patients who presented with trauma to the head, neck, or face (244 were excluded due to incorrect triage), 98 did not undergo imaging of the c-spine. Of those 98, 44% fulfilled at least one of the NEXUS criteria, meaning the c-spine could not be clinically cleared under the current guidelines. The most frequently met criterion was intoxication, comprising 42% (18 of 43), with midline spinal tenderness (or absent documentation of it) the second most common at 23% (10 of 43). Intoxication being the most frequently met criterion is significant but not unexpected, given the cohort of patients seen at St Vincent’s and within many emergency departments in general.
Given that these patients will always meet the NEXUS criteria, an element of clinical judgment is likely needed, or concurrent use of the Canadian C-Spine Rule to exclude the need for imaging. Midline tenderness as a met criterion was often in the context of poor or absent documentation, emphasizing the importance of clear and accurate assessments. A distracting injury was identified in 7 of the 43 patients; however, only one of these exhibited a thoracic injury (a T11 compression fracture), with the remainder comprising injuries to the extremities. Some studies suggest that c-spine imaging may not be required in the evaluable blunt trauma patient despite distracting injuries in body regions that do not involve the upper chest. This emphasises the need for standardised definitions of distracting injury, at least at a departmental/regional level. The data highlight the currently poor application of the NEXUS guidelines, with likely common themes throughout emergency departments, underlining the need for further education on implementation and potential refinement/clarification of the criteria. Of note, there appeared to be no significant difference between levels of clinician experience with respect to inappropriately clearing the c-spine against the guidelines.
Keywords: imaging, guidelines, emergency medicine, audit
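The NEXUS rule itself is simple enough to express as a short function: a patient can be clinically cleared only if all five criteria are absent. The sketch below is a hypothetical illustration of how an audit might encode the rule; the field names are assumptions, and this is not a clinical tool.

```python
# Minimal sketch of the NEXUS decision rule used in the audit above:
# imaging can be avoided only if ALL five criteria are absent.
# Dictionary keys are illustrative assumptions.

NEXUS_CRITERIA = (
    "midline_cervical_tenderness",
    "focal_neurologic_deficit",
    "altered_alertness",
    "intoxication",
    "distracting_injury",
)

def needs_cspine_imaging(patient):
    """Return the list of NEXUS criteria the patient meets; imaging is
    indicated if the list is non-empty."""
    return [c for c in NEXUS_CRITERIA if patient.get(c, False)]

# An intoxicated patient cannot be clinically cleared
met = needs_cspine_imaging({"intoxication": True})
print(bool(met))  # True -> imaging indicated
```

Encoding the rule this way also makes the audit reproducible: each record either satisfies the clearance condition or lists exactly which criteria blocked it.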
Procedia PDF Downloads 72
1096 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software
Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi
Abstract:
Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy that overlooks the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate changes, for precipitation and temperature, over the last three climatological standard normals (1961-1990; 1971-2000; 1981-2010) through GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subjected to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To select the best geostatistical technique for interpolation, the results of cross-validation were used. The co-kriging method with altitude as the independent variable produced the best cross-validation results for all time periods among the methods analysed, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and 'average standard error' and 'root mean square error' with similar values. The maps resulting from the analysis were compared by subtraction between rasters, producing three maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1 °C between 1961-1990 and 1971-2000 and of 0.6 °C between 1961-1990 and 1981-2010. Annual precipitation instead shows the opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences across the area have been highlighted with area graphs and summarized in several tables as descriptive analysis.
In fact, for temperature between 1961-1990 and 1971-2000, the most areally represented difference is 0.08 °C (77.04 km² out of a total of about 2,800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010, instead, show a most areally represented difference of 0.83 °C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. It can therefore be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother, but with stronger warming, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both difference periods (the first 1961/1990-1971/2000 and the second 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62), and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
Keywords: climate change, GIS, interpolation, co-kriging
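The raster-subtraction step described above can be sketched with plain array arithmetic: difference two interpolated grids cell by cell and summarize the change. The grids below are synthetic stand-ins for the co-kriging outputs, not the study's data.

```python
import numpy as np

# Sketch of the raster-subtraction step: difference two interpolated
# temperature grids (climate normals) and summarize the change per cell.
# Grid values are simulated, with a built-in +0.6 degC shift mirroring
# the annual change reported between 1961-1990 and 1981-2010.

rng = np.random.default_rng(1)
normal_1961_1990 = 12.0 + rng.normal(0, 0.3, (50, 50))   # deg C grid
normal_1981_2010 = normal_1961_1990 + 0.6 + rng.normal(0, 0.1, (50, 50))

diff = normal_1981_2010 - normal_1961_1990   # cell-by-cell change raster
mean_change = float(diff.mean())
print(round(mean_change, 2))  # close to the simulated +0.6 degC shift
```

The area-frequency summaries in the abstract correspond to histogramming `diff` and multiplying each bin count by the cell area.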
Procedia PDF Downloads 127
1095 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities
Authors: Shaurya Chauhan, Sagar Gupta
Abstract:
Prominent urbanizing centres across the globe like Delhi, Dhaka, or Manila have shown that development often faces a challenge in bridging the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of an ever-diversifying population. When this exclusion is intertwined with rapid urbanization and diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as the automatic responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach with mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region’s cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature.
This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales of design: master planning, urban design, architecture, tectonics, and modularity, in a chronological manner. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deeply rooted in the research process. Beyond the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for continued appraisal and refinement of the framework and the urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.
Keywords: open source, public participation, urbanization, urban development
Procedia PDF Downloads 149
1094 The Cultural Shift in Pre-owned Fashion as Sustainable Consumerism in Vietnam
Authors: Lam Hong Lan
Abstract:
The textile industry is said to be the second-largest polluter, responsible for 92 million tonnes of waste annually. There is an urgent need to practice the circular economy to increase use and reuse around the world. By its nature, the pre-owned fashion business is part of the circular economy, as it helps to eliminate waste and keep products in circulation. Second-hand clothes and accessories used to be associated with a ‘cheap image’ that carried ‘old energy’ in Vietnam. This perception has shifted, especially amongst the younger generation. The Vietnamese consumer is spending more on products and services that increase self-esteem. The same consumer is moving away from a collectivist social identity towards a ‘me, not we’ outlook as they look for ways to express their individual identity. Pre-owned fashion is one of their solutions, as it offers value for money, can create a unique personal style for the wearer, and links with sustainability. The design of this study is based on second-hand shopping motivation theory: a semi-structured online survey with 100 consumers from one pre-owned clothing community and one pre-owned e-commerce site in Vietnam. The findings show that in contrast with older Vietnamese consumers (55+ years old), who in a previous study generally associated pre-owned fashion with a ‘low-cost’, ‘cheap image’ that carried ‘old energy’, young customers (20-30 years old) actively promoted their pre-owned fashion items to the public via outlets' social platforms and their own social media. This cultural shift stems from the impact of global and local discourse around sustainable fashion and the growth of digital platforms in the pre-owned fashion business over the last five years, which has generally supported wider interest in pre-owned fashion in Vietnam. It can be summarised in three areas: (1) Global and local celebrity influencers. A number of celebrities have been photographed wearing vintage items in music videos, photoshoots, or at red carpet events.
(2) E-commerce and intermediaries. International e-commerce sites – e.g., Vinted, TheRealReal – and/or local apps – e.g., Re.Loved – can influence attitudes and behaviors towards pre-owned consumption. (3) Eco-awareness. The increased online coverage of climate change and environmental pollution has encouraged customers to adopt a more eco-friendly approach to their wardrobes. While sustainable biomaterials and designs are still finding their way into the market, sustainable consumerism via pre-owned fashion seems to be an immediate solution for lengthening the clothing lifecycle. This study has found that young consumers are primarily seeking value for money and/or a unique personal style from pre-owned/vintage fashion, while using these purchases to promote their own 'eco-awareness' via their social media networks. This is a good indication for fashion designers to keep in mind in their design process, and for fashion enterprises in choosing business models that do not overproduce fashion items.
Keywords: cultural shift, pre-owned fashion, sustainable consumption, sustainable fashion
Procedia PDF Downloads 83
1093 The Data Quality Model for the IoT based Real-time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of the IoT network; they generate an enormous volume of real-time, high-speed data to help organizations and companies take intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained, and to provide energy-efficient communication, they go to sleep and wake up periodically and aperiodically depending on traffic loads, to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of the device from the sink node becomes greater than required, the connection is lost. After such disconnections, other devices join the network to replace the failed or departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces poor-quality data. Due to this dynamic nature of IoT devices, we do not know the actual reason for abnormal data. If data are of poor quality, decisions are likely to be unsound. It is highly important to process data and estimate data quality before bringing it to use in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods to perform analysis on stored data in the data processing layer, without addressing the challenges and issues arising from the dynamic nature of IoT devices and how it impacts data quality.
This research presents a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and proposes a data quality model that can deal with this challenge and produce good-quality data. The model targets sensors monitoring water quality. DBSCAN clustering and weather sensors are used to build the data quality model for these sensors. An extensive study has been carried out on the relationship between the data of weather sensors and that of sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented, describing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: outlier detection and removal, completeness, patterns of missing values, accuracy (checked with the help of cluster position), and consistency. Finally, a statistical analysis is performed on the clusters formed as the result of DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
Keywords: clustering, data quality, DBSCAN, Internet of Things (IoT)
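The outlier-detection and consistency dimensions of such a model can be illustrated with a minimal, pure-Python DBSCAN-style noise check plus a coefficient-of-variation helper. This is a sketch under assumed parameters (`eps`, `min_pts`) and simulated readings, not the authors' implementation.

```python
import math

# Pure-Python sketch of the DBSCAN-based outlier step: points that belong
# to no dense neighborhood are labeled noise, mirroring the model's
# outlier detection; consistency is then summarized via the CoV.
# Parameters and the sample readings are illustrative assumptions.

def dbscan_noise(points, eps, min_pts):
    """Return indices of noise points: neither core points nor within
    eps of a core point."""
    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]
    core = {i for i in range(len(points)) if len(neighbors(i)) >= min_pts}
    noise = [i for i in range(len(points))
             if i not in core and not any(j in core for j in neighbors(i))]
    return noise

def coefficient_of_variation(values):
    """Population CoV: standard deviation divided by the mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean

# Simulated (temperature, dissolved-oxygen) readings: a tight cluster
# plus one clearly anomalous reading
readings = [(20.1, 7.0), (20.2, 7.1), (20.0, 6.9), (20.1, 7.2), (35.0, 2.0)]
print(dbscan_noise(readings, eps=0.5, min_pts=3))  # [4] -> the outlier
```

A production model would of course use an optimized DBSCAN (e.g., with spatial indexing) rather than this quadratic scan, but the noise-labeling logic is the same.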
Procedia PDF Downloads 139
1092 Age-Related Health Problems and Needs of Elderly People Living in Rural Areas in Poland
Authors: Anna Mirczak
Abstract:
Introduction: In connection with the aging of the population and the increase in the number of people with chronic illnesses, the priority objective for public health has become not only lengthening life but also improving quality of life in older persons, as well as maintaining their relative independence and active participation in social life. The most important determinant of a person’s quality of life is health. According to the literature, older people with chronic illness who live in rural settings are at greater risk of poor outcomes than their urban counterparts. Furthermore, research characterizes the rural elderly as having a higher incidence of sickness, dysfunction, disability, restricted mobility, and acute and chronic conditions than urban citizens. This is dictated by the overlap of specific socio-economic factors typical of rural areas, which include social and geographic exclusion, limited access to health care centers, and low socioeconomic status. Aim of the study: The objective of this study was to assess the health status and needs of older people living in selected rural areas in Poland and to evaluate the impact of working on the farm on their health status. Material and methods: The study was performed in person, using interviews based on structured questionnaires, during the period from March 2011 to October 2012. The group of respondents consisted of 203 people aged 65 years and over living in selected rural areas in Poland. The analysis of the collected research material was performed using the statistical package SPSS 19 for Windows. The level of significance for the tested hypotheses was set at 0.05. Results: The mean age of participants was 75.5 years (SD = 5.7), ranging from 65 to 94 years. Most of the interviewees had children (89.2%) and grandchildren (83.7%) and lived mainly with family members (75.9%), mostly in two-person (46.8%) and three-person (20.8%) households.
The majority of respondents (71.9%) were physically working on the farm. At the time of the interview, each of the respondents reported that they had been diagnosed with at least one chronic disease by their GP. The most common were: hypertension (67.5%), osteoarthritis (44.8%), atherosclerosis (43.3%), cataract (40.4%), arrhythmia (28.6%), diabetes mellitus (19.7%), and stomach or duodenal ulcer disease (17.2%). The number of diseases occurring in the sample was dependent on gender and age. Significant associations were observed between working on the farm and the frequency of occurrence of cardiovascular diseases, gastrointestinal tract dysfunction, and sensory disorders. Conclusions: The most common causes of disability among older citizens were chronic diseases, malnutrition, and complaints about access to health services (especially to a cardiologist and an ophthalmologist). Health care access and health status are a particular concern in rural areas, where the population is older, has lower education and income levels, and is more likely to be living in medically underserved areas than is the case in urban areas.
Keywords: ageing, health status, older people, rural
1091 Anaerobic Soil Disinfestation: Feasible Alternative to Soil Chemical Fumigants
Authors: P. Serrano-Pérez, M. C. Rodríguez-Molina, C. Palo, E. Palo, A. Lacasa
Abstract:
Phytophthora nicotianae is the principal causal agent of root and crown rot disease of red pepper plants in Extremadura (Western Spain). There is a need to develop a biologically-based method of soil disinfestation that facilitates profitable and sustainable production without the use of chemical fumigants. Anaerobic Soil Disinfestation (ASD), also known as biodisinfestation, has been shown to control a wide range of soil-borne pathogens and nematodes in numerous crop production systems. This method involves wetting the soil, incorporating an easily decomposable carbon-rich organic amendment, and covering with plastic film for several weeks. ASD with rapeseed cake (var. Tocatta, a glucosinolate-free variety) used as the C-source was assayed in spring 2014, before the pepper crop establishment. The field experiment was conducted at the Agricultural Research Centre Finca La Orden (Southwestern Spain), and the treatments were: rapeseed cake with plastic cover (RCP); rapeseed cake without plastic cover (RC); non-amended control with plastic cover (CP); and non-amended control without plastic cover (C). The experimental design was a randomized complete block with four replicates and a plot size of 5 x 5 m. On 26 March, rapeseed cake (1 kg·m-2) was incorporated into the soil with a rotovator. Biological probes with the inoculum were buried at 15 and 30 cm depth (biological probes were previously prepared with 100 g of disinfected soil inoculated with chlamydospores (chlam) of the P. nicotianae P13 isolate [100 chlam·g-1 of soil] and wrapped in agryl cloth). Sprinkler irrigation was run until field capacity, and the corresponding plots were covered with transparent plastic (PE 0.05 mm). On 6 May, the plastics were removed, the biological probes were dug out, and a bioassay was established. One pepper seedling at the 2- to 4-true-leaves stage was transplanted into the soil from each biological probe. Plants were grown in a climatic chamber, and disease symptoms were recorded every week for 2 months.
Fragments of roots and crowns of symptomatic plants were analyzed on NARPH medium, and soil from the rhizospheres was analyzed using carnation petals as baits. Results for “survival” were expressed as the percentage of soil samples in which P. nicotianae was detected, and results for “infectivity” were expressed as the percentage of diseased plants. No effect of burial depth was detected. Infectivity of P. nicotianae chlamydospores was successfully reduced in the RCP treatment (4.2% infectivity) compared with the controls (41.7% infectivity). The pattern of survival was similar to the infectivity observed in the bioassay: 21% survival in RCP; 79% in CP; 83% in C; and 87% in RC. Although ASD may be an effective alternative to chemical fumigants for pest management, more research is necessary to show its impact on the microbial community and chemistry of the soil.
Keywords: biodisinfestation, BSD, soil fumigant alternatives, organic amendments
1090 Isolation, Selection and Identification of Bacteria for Bioaugmentation of Paper Mills White Water
Authors: Nada Verdel, Tomaz Rijavec, Albin Pintar, Ales Lapanje
Abstract:
Objectives: White water circuits of woodfree paper mills contain suspended, dissolved, and colloidal particles, such as cellulose, starch, paper sizings, and dyes. By closing the white water circuits, these particles start to accumulate and affect production. Due to the high amount of organic matter, which scavenges radicals and adsorbs onto catalyst surfaces, treatment of white water by photocatalysis is inappropriate. The most suitable approach should be bioaugmentation-assisted bioremediation. Accordingly, the objectives were: - to isolate bacteria capable of degrading organic compounds used in the papermaking process - to select the most active bacteria for bioaugmentation. Status: The state of the art in bioaugmentation of pulp and paper mill effluents is mostly based on the biodegradation of lignin, whereas in the white water circuits of woodfree paper mills only papermaking compounds are present. As far as one can tell from the literature, a study of the degradation activities of bacteria for all possible compounds of the papermaking process is a novelty. Methodology: The main parameters of the selected white water were systematically analyzed over a period of two months. Bacteria were isolated on selective media with a particular carbon source. The organic substances used as carbon sources enter white water circuits either as base paper or as recycled broke. The screening of bacterial activities for starch, cellulose, latex, polyvinyl alcohol, alkyl ketene dimers, and resin acids was followed by the addition of Lugol's solution. Degraders of polycyclic aromatic dyes were selected by cometabolism tests; cometabolism is the simultaneous biodegradation of two compounds, in which the degradation of the second compound depends on the presence of the first. The obtained strains were identified by 16S rRNA sequencing. Findings: 335 autochthonous strains were isolated on plates with a selected carbon source. The isolated strains were selected according to their degradation of the particular carbon source.
The ultimate degraders of cationic starch, cellulose, and sizings are Pseudomonas sp. NV-CE12-CF and Aeromonas sp. NV-RES19-BTP. The most active strains capable of degrading azo dyes are Aeromonas sp. NV-RES19-BTP and Sphingomonas sp. NV-B14-CF. Klebsiella sp. NV-Y14A-BTP degrades the polycyclic aromatic dye Direct Blue 15 as well as a yellow dye; Agromyces sp. NV-RED15A-BF and Cellulosimicrobium sp. NV-A4-BF are specialists for whitener; and Aeromonas sp. NV-RES19-BTP is a general degrader of all compounds. Bacteria adapted to the white water were isolated and selected according to their degradation activities for particular organic substances. Most of the isolated bacteria are specialists, which lowers competition in the microbial community. Degraders of readily biodegradable compounds do not degrade recalcitrant polycyclic aromatic dyes, and vice versa. General degraders are rare.
Keywords: bioaugmentation, biodegradation of azo dyes, cometabolism, smart wastewater treatment technologies
1089 Removal of Problematic Organic Compounds from Water and Wastewater Using the Arvia™ Process
Authors: Akmez Nabeerasool, Michaelis Massaros, Nigel Brown, David Sanderson, David Parocki, Charlotte Thompson, Mike Lodge, Mikael Khan
Abstract:
The provision of clean and safe drinking water is of paramount importance and a basic human need. Water scarcity, coupled with the tightening of regulations and the inability of current treatment technologies to deal with emerging contaminants and pharmaceuticals and personal care products, means that alternative treatment technologies that are viable and cost-effective are required in order to meet demand and regulations for clean water supplies. Logistically, the application of water treatment in rural areas presents unique challenges due to the decentralisation of abstraction points arising from low population density and the resultant lack of infrastructure, as well as the need to treat water at the site of use. This makes it costly to centralise treatment facilities and hence provide potable water directly to the consumer. Furthermore, across the UK there are segments of the population that rely on a private water supply, which means that the owners or users of these supplies, which can serve from one household to hundreds, are responsible for their maintenance. Since the treatment of these private water supplies falls on the private owners, it is imperative that a chemical-free technological solution that can operate unattended and produces no waste is employed. Arvia’s patented advanced oxidation technology combines the advantages of adsorption and electrochemical regeneration within a single unit: the Organics Destruction Cell (ODC). The ODC uniquely uses a combination of adsorption and electrochemical regeneration to destroy organics. Key to this innovative process is an alternative approach to adsorption. The conventional approach is to use high-capacity adsorbents (e.g. activated carbons with high porosities and surface areas) that are excellent adsorbents but require complex and costly regeneration.
Arvia’s technology uses a patent-protected adsorbent, Nyex™, a non-porous, highly conductive, graphite-based material that enables it to act both as the adsorbent and as a 3D electrode. Adsorbed organics are oxidised, and the surface of the Nyex™ is regenerated in situ for further adsorption without interruption or replacement. Treated water flows from the bottom of the cell, where it can either be re-used or safely discharged. Arvia™ Technology Ltd. has trialled the application of its tertiary water treatment technology in treating reservoir water abstracted near Glasgow, Scotland, with promising results. Several other pilot plants have also been successfully deployed at various locations in the UK, showing the suitability and effectiveness of the technology in removing recalcitrant organics (including pharmaceuticals, steroids and hormones), COD and colour.
Keywords: Arvia™ process, adsorption, water treatment, electrochemical oxidation
1088 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made towards improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are the most appropriate to use in their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function which links the expected response and the predictor, response option formats, and dimensionality. As a result, inferior methods (i.e., Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models; that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover binary IRT models from the most basic, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model.
Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: 1. simulated data of N=500,000 subjects who responded to four dichotomous items and 2. a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. Real-world data were based on responses collected on items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses will be available upon request to allow for replication of results.
Keywords: instrument development, item response theory, latent trait theory, psychometrics
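The 1-PL to 4-PL family described in this abstract can be viewed as one response function in which each successive model frees one more item parameter. The following Python snippet is an illustrative sketch of that function only (it is not the SAS IRT procedure, and the function name is ours):

```python
import math

def item_response_probability(theta, a=1.0, b=0.0, c=0.0, d=1.0):
    """Probability of a correct/endorsed response under the 4-PL model.

    theta: subject's latent trait level
    a: discrimination, b: difficulty,
    c: lower asymptote ("guessing"), d: upper asymptote.
    With a=1, c=0, d=1 this reduces to the 1-PL model; freeing a gives
    the 2-PL, freeing c the 3-PL, and freeing d the 4-PL.
    """
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the logistic term equals 0.5, so the probability lies
# halfway between the lower and upper asymptotes.
print(item_response_probability(0.0))                       # 1-PL: 0.5
print(item_response_probability(0.0, a=1.7, c=0.2, d=0.9))  # 4-PL: 0.55
```

Comparing the fitted values of a, b, c, and d across items is exactly the kind of interpretation the demonstration above walks through.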
1087 Content Monetization as a Mark of Media Economy Quality
Authors: Bela Lebedeva
Abstract:
The characteristics of the Web as a channel of information dissemination (accessibility and openness, interactivity and multimedia news) grow wider and reach the audience quickly, positively affecting the perception of content, but they blur the understanding of journalistic work. As a result, audiences and advertisers continue migrating to the Internet. Moreover, online targeting allows monetizing not only the audience (as is customary for traditional media) but also the content and traffic, more accurately. While users identify themselves with the qualitative characteristics of the new market, its actors are formed. A conflict of interests lies at the base of the economy of their relations; the problem of a traffic tax is one example. Meanwhile, content monetization also actualizes the fiscal interest of the state. The balance of supply and demand is often violated due to political risks, particularly under state capitalism, populism, and authoritarian methods of governing such social institutions as the media. A unique example of access to journalistic material limited by content monetization is the television channel Dozhd' (Rain) in the Russian web space. Its liberal-minded audience has a better opportunity for discussion. However, the channel could have been much more successful under conditions of unlimited free speech. To avoid state pressure and censorship, its management decided to preserve at least its online performance by monetizing all of its content for the core audience. The study methodology was primarily based on the analysis of journalistic content and on the qualitative and quantitative analysis of the audience. By reconstructing the main events and relationships of market actors over the last six years, the researcher has reached some conclusions. First, under content monetization, the capitalization of content quality will always strive toward the quality characteristics of the user, thereby identifying him.
Vice versa, the user's demand generates high-quality journalism. The second conclusion follows from the first. The growth of technology, information noise, new political challenges, economic volatility, and the change of cultural paradigm: all these factors form the content-paying model for an individual user. This model defines the user as a beneficiary of specific knowledge and indicates a constant balance of supply and demand, other conditions being equal. As a result, a new economic quality of information is created. This feature is an indicator of the market as a self-regulated system. Monetized information of this quality is less popular than that of the Public Broadcasting Service, but its audience is able to make decisions. These very users sustain the niche sectors, which have more potential for technological development, including ways of monetizing content. The third point of the study allows it to be developed in the discourse of media space liberalization. This cultural phenomenon may open opportunities for the development of the architecture of social and economic relations, both locally and regionally.
Keywords: content monetization, state capitalism, media liberalization, media economy, information quality
1086 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different responses from the public sector to decrease traffic congestion in urban regions, the most effective public intervention is the use of congestion charges. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with the help of different methods has a long history in transport sciences, but until recently there were not sufficient data for evaluating road traffic flow patterns on the scale of the entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular downtown zone for charging, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown, which is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google’s Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely fixed coordinate pairs.
From the difference between free-flow and congested travel times, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas which lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time and place-based congestion charge system that forces car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
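The free-flow versus congested comparison at the heart of this method can be sketched in a few lines of Python. This is an illustrative sketch only: the field names follow the public Distance Matrix API response format (`duration` and `duration_in_traffic`, both in seconds), but the sample values below are invented, and a real request would additionally need an API key and a `departure_time` parameter to obtain traffic-aware durations.

```python
def congestion_index(element):
    """Ratio of congested to free-flow travel time for one
    origin-destination pair, parsed from one Distance Matrix
    API response element."""
    free_flow = element["duration"]["value"]              # seconds, no traffic
    congested = element["duration_in_traffic"]["value"]   # seconds, at departure_time
    return congested / free_flow

# Illustrative response fragment (structure follows the API; values made up):
sample_element = {
    "duration": {"value": 600},
    "duration_in_traffic": {"value": 900},
}
print(congestion_index(sample_element))  # 1.5 -> trip takes 50% longer than free flow
```

Computing this index for many coordinate pairs over the day is what makes the congestion hot-spot maps described above possible.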
1085 Integrating High-Performance Transport Modes into Transport Networks: A Multidimensional Impact Analysis
Authors: Sarah Pfoser, Lisa-Maria Putz, Thomas Berger
Abstract:
In the EU, the transport sector accounts for roughly one fourth of total greenhouse gas emissions, making it one of the main contributors. Climate protection targets aim to reduce the negative effects of greenhouse gas emissions (e.g., climate change, global warming) worldwide. Achieving a modal shift to foster environmentally friendly modes of transport such as rail and inland waterways is an important strategy for fulfilling the climate protection targets. The present paper goes beyond these conventional transport modes and reflects upon currently emerging high-performance transport modes that have the potential to complement future transport systems in an efficient way. The paper defines which properties describe high-performance transport modes, which types of technology are included, and what their potential is to contribute to a sustainable future transport network. The first step of this paper is to compile state-of-the-art information about high-performance transport modes to find out which technologies are currently emerging. A multidimensional impact analysis will be conducted afterwards to evaluate which of the technologies is most promising. This analysis will be performed from a spatial, social, economic, and environmental perspective. Frequently used instruments such as cost-benefit analysis and SWOT analysis will be applied for the multidimensional assessment. The estimations for the analysis will be derived from desktop research and discussions in an interdisciplinary team of researchers. For the purpose of this work, high-performance transport modes are characterized as transport modes with very fast and very high throughput connections that could act as an efficient extension to the existing transport network. The recently proposed hyperloop system represents a potential high-performance transport mode which might be an innovative supplement for current transport networks.
The idea of the hyperloop is that persons and freight are shipped in a tube at more than airline speed. Another innovative technology consists of drones for freight transport. Amazon is already testing drones for its parcel shipments, aiming for delivery times of 30 minutes. Drones can, therefore, be considered high-performance transport modes as well. The Trans-European Transport Network (TEN-T) program addresses the expansion of transport grids in Europe and also includes high-speed rail connections to better connect important European cities. These services should increase the competitiveness of rail and are intended to replace aviation, which is known to be a polluting transport mode. In this sense, the integration of high-performance transport modes as described above facilitates the objectives of the TEN-T program. The results of the multidimensional impact analysis will reveal potential future effects of the integration of high-performance modes into transport networks. Building on that, a recommendation can be given on the following (research) steps necessary to ensure the most efficient implementation and integration processes.
Keywords: drones, future transport networks, high performance transport modes, hyperloops, impact analysis
1084 Development of 3D Printed Natural Fiber Reinforced Composite Scaffolds for Maxillofacial Reconstruction
Authors: Sri Sai Ramya Bojedla, Falguni Pati
Abstract:
Nature provides the best solutions to humans. One such incredible gift to regenerative medicine is silk. The literature shows a long appreciation for silk owing to its remarkable physical and biological assets. Its bioactive nature, unique mechanical strength, and processing flexibility invite further exploration of its clinical application for the welfare of mankind. In this study, Antheraea mylitta and Bombyx mori silk fibroin microfibers are developed for the first time by two economical and straightforward steps, degumming and hydrolysis, and a bioactive composite is manufactured by mixing silk fibroin microfibers at various concentrations with polycaprolactone (PCL), a biocompatible, aliphatic, semi-crystalline synthetic polymer. Reconstructive surgery in any part of the body except the maxillofacial region deals mainly with restoring function; in facial reconstruction, however, addressing both aesthetics and function is of utmost importance, as they play a critical role in the psychological and social well-being of the patient. The main concern in developing adequate bone graft substitutes or scaffolds is the noteworthy variation in each patient's bone anatomy. Additionally, the anatomical shape and size will vary based on the type of defect. The advent of additive manufacturing (AM), or 3D printing, techniques in bone tissue engineering has helped overcome many of the restraints of conventional fabrication techniques. The acquired patient CT data are converted into a stereolithography (STL) file, which is then used by the 3D printer to create a 3D scaffold structure in an interconnected, layer-by-layer fashion. This study aims to address the limitations of currently available materials and fabrication technologies and to develop a customized biomaterial implant via 3D printing technology to reconstruct the complex form, function, and aesthetics of the facial anatomy.
These composite scaffolds underwent structural and mechanical characterization. Atomic force microscopy (AFM) and field emission scanning electron microscopy (FESEM) images showed uniform dispersion of the silk fibroin microfibers in the PCL matrix. With the addition of silk, the compressive strength of the hybrid scaffolds improved. The scaffolds with Antheraea mylitta silk revealed a higher compressive modulus than those with Bombyx mori silk. The above results for the PCL-silk scaffolds strongly recommend their utilization in bone regenerative applications. Successful completion of this research will provide a great weapon in the maxillofacial reconstructive armamentarium.
Keywords: compressive modulus, 3d printing, maxillofacial reconstruction, natural fiber reinforced composites, silk fibroin microfibers
1083 Understanding How to Increase Restorativeness of Interiors: A Qualitative Exploratory Study on Attention Restoration Theory in Relation to Interior Design
Authors: Hande Burcu Deniz
Abstract:
People in the U.S. spend a considerable portion of their time indoors. This makes it crucial to provide environments that support people's well-being. Restorative environments aim to help people recover the cognitive resources that were spent due to intensive use of directed attention. Spending time in nature and taking a nap are two of the best ways to restore these resources; however, they are not possible most of the time. The problem is that many studies have revealed how nature and spending time in natural contexts can boost restoration, but fewer studies have been conducted to understand how cognitive resources can be restored in interior settings. This study aims to explore the answer to this question: which qualities of interiors increase the restorativeness of an interior setting, and how do they mediate it? To do this, a phenomenological qualitative study was conducted. The study was interested in the definition of attention restoration and the experience of the phenomenon. As the themes emerged, they were analyzed against the Attention Restoration Theory components (being away, extent, fascination, compatibility) to examine how interior design elements mediate the restorativeness of an interior. The data were gathered from semi-structured interviews with international residents of Minnesota. The interviewees represent young professionals who work in Minnesota and often experience mental fatigue. Also, they have fewer emotional connections with places in Minnesota, which enabled the data to be based on the physical qualities of a space rather than on emotional connections. In the interviews, participants were asked where they prefer to be when they experience mental fatigue. Next, they were asked to describe the physical qualities of the places they prefer to be, with reasons. Four themes were derived from the analysis of the interviews and are listed in order of frequency.
The first and most common theme was “connection to outside”. The analysis showed that people need to be either physically or visually connected to the outside to recover from mental fatigue. A direct connection to nature was reported as preferable, whereas urban settings were a secondary preference, along with interiors. The second theme that emerged from the analysis was “the presence of artwork,” which was experienced differently by the interviewees. The third theme was “amenities”. The interviews pointed out that people prefer to have amenities that support the desired activity during recovery from mental fatigue. The last theme was “aesthetics”. Interviewees stated that they prefer places that are pleasing to their eyes. Additionally, they could not get rid of the feeling of being worn out in places that are not well designed. When the themes were matched with the four ART components (being away, extent, fascination, compatibility), some of the interior qualities showed overlap, since they were experienced differently by the interviewees. In conclusion, this study showed that interior settings have restorative potential and that their experience is multidimensional.
Keywords: attention restoration, fatigue, interior design, qualitative study, restorative environments
1082 Climate Change and Landslide Risk Assessment in Thailand
Authors: Shotiros Protong
Abstract:
The incidents of sudden landslides in Thailand during the past decade have occurred more frequently and more severely. It is necessary to focus on the principal parameters used for analysis, such as land cover/land use, rainfall values, soil characteristics, and the digital elevation model (DEM). The combination of intense rainfall and severe monsoons is increasing due to global climate change. Landslide occurrences increase rapidly during intense rainfall, especially in the rainy season in Thailand, which usually starts around mid-May and ends in the middle of October. Rain-triggered landslide hazard analysis is the focus of this research. The combination of geotechnical and hydrological data is used to determine permeability, conductivity, bedding orientation, overburden, and the presence of loose blocks. The regional landslide hazard mapping is developed using the Stability Index Mapping (SINMAP) model supported by ArcGIS software version 10.1. Geological and land use data are used to define the probability of landslide occurrences in terms of geotechnical data. The geological data can indicate the shear strength and the angle of friction values for soils above given rock types, which leads to the general applicability of the approach for landslide hazard analysis. To address the research objectives, the following methods are described in this study: setup and calibration of the SINMAP model, sensitivity analysis of the SINMAP model, geotechnical laboratory testing, landslide assessment at the present calibration, and landslide assessment under future climate simulation scenarios A2 and B2. In terms of hydrological data, average rainfall data in millimetres per twenty-four hours are used to assess the rain-triggered landslide hazard in slope stability mapping. The 1954-2012 period is used as the baseline of rainfall data for the present calibration. For the climate change in Thailand, future climate scenarios are simulated at spatial and temporal scales.
To predict the precipitation impact for the future climate, the Statistical Downscaling Model (SDSM) version 4.2 is used to simulate scenarios of future change between latitudes 16o 26’ and 18o 37’ north and between longitudes 98o 52’ and 103o 05’ east. The research allows the mapping of risk parameters for landslide dynamics and indicates the spatial and temporal trends of landslide occurrences. Thus, regional landslide hazard mapping under present-day climatic conditions from 1954 to 2012 and simulations of climate change based on GCM scenarios A2 and B2 from 2013 to 2099 are related to the threshold rainfall values for the selected study area in Uttaradit province in the northern part of Thailand. Finally, the landslide hazard maps will be compared by area (km2) for both the present and the future under climate simulation scenarios A2 and B2 in Uttaradit province.
Keywords: landslide hazard, GIS, slope stability index (SINMAP), landslides, Thailand
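As background to the slope stability mapping described above: the SINMAP stability index is built on the dimensionless infinite-slope factor of safety. A commonly cited form of that model (a sketch of the standard formulation, not reproduced from this study) is:

```latex
FS = \frac{C + \cos\theta\,\left(1 - w\,r\right)\tan\phi}{\sin\theta},
\qquad
w = \min\!\left(\frac{R\,a}{T\,\sin\theta},\; 1\right)
```

where C is the dimensionless combined cohesion, θ the slope angle, φ the soil friction angle, r the ratio of water to soil density, w the relative wetness, R the steady-state recharge, T the soil transmissivity, and a the specific catchment area. Rainfall thresholds enter through the wetness term w, which is how the baseline and downscaled precipitation scenarios translate into present and future hazard maps.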