Search results for: inquiry-based instruction
1387 Analysis of the Barriers and Aids That Lecturers Offer to Students with Disabilities
Authors: Anabel Moriña
Abstract:
In recent years, advances have been made in disability policy at Spanish universities, especially in terms of creating more inclusive learning environments. Nevertheless, while efforts to foster inclusion at the tertiary level, and the growing number of students with disabilities at university, are clear signs of progress, serious barriers to full participation in learning still exist. The research shows that university responses to diversity tend to be reactive, not proactive; as a result, higher education (HE) environments can be especially disabling. It has been demonstrated that the performance of students with disabilities is closely linked to the goodwill of university faculty and staff. Lecturers are key players when it comes to helping or hindering students throughout the teaching/learning process. This paper presents an analysis of how lecturers respond to students with disabilities, the initial question being: do lecturers aid or hinder students? The general aim is to analyse, by listening to the students themselves, the lecturer-related barriers and supports identified as affecting academic performance and the overall perception of the HE experience. A biographical-narrative methodology was employed, and the results were analysed by field of knowledge. The research was conducted in two phases: discussion groups along with individual oral/written interviews were set up with 44 students with disabilities, and mini life histories were completed for 16 students who participated in the first stage. The study group consisted of students with disabilities enrolled during three academic years. Participating students identified many more barriers than bridges when speaking about the role lecturers play in their learning experience.
Findings are grouped into several categories: faculty attitudes when “dealing with” students with disabilities, teaching methodologies, curricular adaptations, and faculty training in working with students. Faculty do not always display appropriate attitudes towards students with disabilities. Study participants speak of lecturers turning their backs on their problems or behaving in an awkward manner. In many cases, it seems lecturers feel that curricular adaptations of any kind are a form of favouritism. Positive attitudes, however, often depend almost entirely on the goodwill of faculty and, although well received by students, are hard to come by. As the participants themselves suggest, this study confirms that good teaching practices benefit not only students with disabilities but the student body as a whole. In this sense, inclusive curricula provide new opportunities for all students. A recurring finding was lecturers' lack of training in adequately attending to students with disabilities, and the need to address this shortage. This can become a primary barrier and is more often due to deficient faculty training than to inappropriate attitudes on the part of lecturers. In conclusion, this research shows that more barriers than bridges exist. That said, students do report receiving a good deal of support from their lecturers, although almost exclusively in a spirit of goodwill; when lecturers do help, it tends to have a very positive impact on students' academic performance.
Keywords: barriers, disability, higher education, lecturers
Procedia PDF Downloads 255
1386 Domestic Violence Against Women (With Special Reference to India): A Human Rights Issue
Authors: N. B. Chandrakala
Abstract:
Domestic violence is one of the most under-reported crimes. A problem with domestic violence is that it is not even considered abuse in many parts of the world, especially certain parts of Asia, Africa, and the Middle East, where it is viewed as “doing the needful”. Domestic violence can take the form of emotional harassment, physical injury, or psychological abuse perpetrated by one family member against another. It is a worldwide phenomenon mainly targeting women, and the acts of violence have a terrible negative impact on them. It is also an infringement of women’s rights and can safely be termed a human rights abuse. In cases of domestic violence, a male adult often misuses his authority and power to control another person by physical or psychological means. Sexual assault, molestation, and battering are common in these cases. Domestic violence is a human rights issue and a serious deterrent to development. It can also take subtle forms, such as making the person feel worthless or denying the victim any personal space or freedom. The problematic aspect is that cases of domestic violence are very rarely reported. The majority of victims are women, but children are also made to suffer silently; they are abused and neglected, and their innocent minds are adversely affected by the incidents of domestic violence. According to a report by the World Health Organization (WHO), sexual trafficking, female feticide, dowry death, public humiliation, and physical torture are some of the most common forms of domestic violence against Indian women. Such acts belie our growth and our claim to be an economic superpower. It is ironic that we claim to be one of the most rapidly advancing countries in the world and yet have done hardly anything of note against social hazards like domestic violence. Laws are not stringent when it comes to reporting acts of domestic violence.
Even if a report is filed, it turns out to be a long-drawn process, and not every victim has the resources to fight to the end. It is also a social taboo to make family matters public. The big challenge now is to enforce the law in a true sense. The steps actually needed are tough laws against domestic violence, speedy execution, and a change in the mindset of society; only then can we expect some improvement in such inhuman cases. An effective response to violence must be multi-sectoral: addressing the immediate practical needs of women experiencing abuse; providing long-term follow-up and assistance; and focusing on changing the cultural norms, attitudes, and legal provisions that promote the acceptance of, and even encourage, violence against women and undermine women's enjoyment of their full human rights and freedoms. Hence, responses to the problem must be based on an integrated approach, and the effectiveness of measures and initiatives will depend on the coherence and coordination of their design and implementation.
Keywords: domestic violence, human rights, sexual assaults, World Health Organization
Procedia PDF Downloads 542
1385 Tax Administration Constraints: The Case of Small and Medium Size Enterprises in Addis Ababa, Ethiopia
Authors: Zeleke Ayalew Alemu
Abstract:
This study investigates tax administration constraints in Addis Ababa, with a focus on small and medium-sized enterprises, by identifying issues and constraints in tax administration and assessment. The study identifies problems associated with taxpayers and tax-collecting authorities in the city. The research used qualitative and quantitative designs and employed questionnaires, focus group discussions, and key informant interviews for primary data collection, as well as secondary data from different sources. The study identified many constraints facing taxpayers. Among others, the tax administration offices’ inefficiency, reluctance to respond to taxpayers’ questions, limited tax assessment and administration knowledge and skills, and corruption and unethical practices are the major ones. Besides, the tax laws and regulations are complex and not enforced equally and fully on all taxpayers, so that many business entities do not pay taxes, which results in an uneven playing field. Consequently, the tax system at present is neither fair nor transparent and increases compliance costs. In case of dispute, the appeal process is excessively long, and the tax authority’s decision is irreversible. The Value Added Tax (VAT) administration and compliance system is not well designed, and VAT has created economic distortion between VAT-registered and non-registered taxpayers. Cash register machine administration and the reporting system are big headaches for taxpayers. On the taxpayers' side, there is a lack of awareness of tax laws and documentation. Based on these and other findings, the study puts forward recommendations such as ensuring fairness and transparency in tax collection and administration, enhancing the efficiency of tax authorities through modern technologies and upgraded human resources, conducting extensive awareness-creation programs, and enforcing tax laws in a fair and equitable manner.
The objective of this study is to assess the problems, weaknesses, and limitations of small and medium-sized enterprise taxpayers, tax authority administrations, and laws as sources of inefficiency and dissatisfaction, in order to put forward recommendations that bring about efficient, fair, and transparent tax administration. The entire study was conducted in a participatory and process-oriented manner, involving all partners and stakeholders at all levels. Accordingly, the researcher used participatory assessment methods to generate both secondary and primary data, both qualitative and quantitative, in the field. The research team held focus group discussions (FGDs) with 21 people from Addis Ababa City Administration tax offices and selected medium and small taxpayers. The study team also conducted 10 key informant interviews (KIIs) with informants selected from the various segments of stakeholders; the lead researcher, along with research assistants, handled the KIIs using a predesigned semi-structured questionnaire.
Keywords: taxation, tax system, tax administration, small and medium enterprises
Procedia PDF Downloads 72
1384 A Finite Element Analysis of Hexagonal Double-Arrowhead Auxetic Structure with Enhanced Energy Absorption Characteristics and Stiffness
Abstract:
Auxetic materials, an emerging class of artificially designed metamaterials, have attracted growing attention due to their promising negative Poisson’s ratio behavior and tunable properties. Conventional auxetic lattice structures, whose deformation process is governed by a bending-dominated mechanism, have faced the limitation of poor mechanical performance for many potential engineering applications. Recently, both load-bearing and energy absorption capabilities have become crucial considerations in auxetic structure design. This study reports a finite element analysis of a class of hexagonal double-arrowhead auxetic structures with enhanced stiffness and energy absorption performance. The structural design was developed by extending the traditional double-arrowhead honeycomb to a hexagonal frame; the stretching-dominated deformation mechanism was determined according to Maxwell’s stability criterion. Finite element (FE) models of the 2D lattice structures, assigned stainless steel material properties, were analyzed in ABAQUS/Standard to predict the in-plane structural deformation mechanism, failure process, and compressive elastic properties. Based on the computational simulation, a parametric analysis was conducted to investigate the effect of the structural parameters on Poisson’s ratio and mechanical properties. Geometrical optimization was then implemented to achieve the optimal Poisson’s ratio for maximum specific energy absorption. In addition, the optimized 2D lattice structure was converted into a 3D geometric configuration using an orthogonal splicing method. The numerical results for the 2D and 3D structures under compressive quasi-static loading were compared separately with the traditional double-arrowhead re-entrant honeycomb in terms of specific Young's moduli, Poisson's ratios, and specific energy absorption.
As a result, the energy absorption capability and stiffness are significantly reinforced over a wide range of Poisson’s ratios compared to the traditional double-arrowhead re-entrant honeycomb. The auxetic behavior, energy absorption capability, and yield strength of the proposed structure are adjustable through different combinations of joint angle, strut thickness, and the length-width ratio of the representative unit cell. The numerical predictions in this study suggest the proposed hexagonal double-arrowhead structure could be a suitable candidate for energy absorption applications with a concurrent requirement for load-bearing capacity. For future research, experimental analysis is required to validate the numerical simulation.
Keywords: auxetic, energy absorption capacity, finite element analysis, negative Poisson's ratio, re-entrant hexagonal honeycomb
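The specific energy absorption compared in these results is conventionally obtained by integrating the simulated force-displacement curve and normalizing by specimen mass; a minimal sketch of that post-processing step, using hypothetical values rather than the study's data:

```python
import numpy as np

# Hypothetical force-displacement history from a quasi-static
# compression simulation (illustrative values only, not the study's data).
displacement = np.linspace(0.0, 0.01, 101)              # m
force = 1000.0 * (1.0 - np.exp(-300.0 * displacement))  # N, plateau-like response

mass = 0.05  # kg, assumed specimen mass

# Absorbed energy = area under the F-d curve (trapezoidal rule).
absorbed_energy = np.sum(
    0.5 * (force[1:] + force[:-1]) * np.diff(displacement)
)  # J

sea = absorbed_energy / mass  # J/kg, specific energy absorption
print(f"absorbed energy = {absorbed_energy:.3f} J, SEA = {sea:.1f} J/kg")
```

Comparing this quantity at equal mass between the hexagonal design and the traditional re-entrant honeycomb is what the specific energy absorption comparison above amounts to.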
Procedia PDF Downloads 87
1383 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint
Authors: Juliane Spaak
Abstract:
A life-cycle assessment was performed, looking at seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building’s life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and resilience using FEMA P58 and PACT software. With the increasing interest in Zero Carbon targets, significant changes in the building and construction sector are required. Emissions for buildings arise from both embodied carbon and operations. Based on the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of our cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building’s embodied carbon. New Zealand’s building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity to move away from constructing code-minimum structures that are designed for life safety but are frequently ‘disposable’ after a moderate earthquake event, especially in relation to a structure’s ability to minimize damage. This means weaker buildings sit as ‘carbon liabilities’, with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the buildings sector, as breathing new life into a building’s structure is vastly more sustainable than the highest quality ‘green’ new builds, which are inherently more carbon-intensive. 
The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society’s desire for a lower-carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming the asset’s revenue generation. Across the global real estate market, tenants are increasingly demanding that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and ‘sustainable design’ has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset’s expected lifespan, be that earthquakes, storms, bushfires, or other damage, with financial mitigation (e.g., insurance) part, but not all, of the picture.
Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient
Procedia PDF Downloads 73
1382 Approach-Avoidance Conflict in the T-Maze: Behavioral Validation for Frontal EEG Activity Asymmetries
Authors: Eva Masson, Andrea Kübler
Abstract:
Anxiety disorders (AD) are the most prevalent psychological disorders. However, far from all affected individuals are diagnosed and receive treatment. This gap is probably due to the diagnostic criteria, which rely on symptoms (according to the DSM-5 definition) with no objective biomarker. Approach-avoidance conflict tasks are one common way to simulate such disorders in a lab setting, with most paradigms focusing on the relationships between behavior and neurophysiology. These tasks typically place participants in a situation where they have to make a decision that leads to both positive and negative outcomes, thereby sending conflicting signals that trigger the Behavioral Inhibition System (BIS). Furthermore, behavioral validation of such paradigms adds credibility to the tasks: with overt conflict behavior, it is safer to assume that the task actually induced a conflict. Some of these tasks have linked asymmetrical frontal brain activity to induced conflicts and the BIS; however, there is currently no consensus on the direction of the frontal activation. The authors present here a modified version of the T-Maze paradigm, a motivational-conflict desktop task, in which behavior is recorded simultaneously with high-density EEG (HD-EEG). Methods: In this within-subject design, HD-EEG and behavior of 35 healthy participants were recorded. EEG data were collected with a 128-channel sponge-based system. The motivational-conflict desktop task consisted of three blocks of repeated trials. Each block was designed to record a slightly different behavioral pattern, to increase the chances of eliciting conflict; the patterns were nevertheless similar enough to allow comparison of the number of trials categorized as ‘overt conflict’ between the blocks. Results: Overt conflict behavior was exhibited in all blocks, but always for under 10% of the trials, on average, in each block.
However, changing the order of the paradigms successfully introduced a ‘reset’ of the conflict process, thereby providing more trials for analysis. As for the EEG correlates, the authors expect a different pattern for trials categorized as conflict compared to the other trials. More specifically, we expect elevated alpha-frequency power in the left frontal electrodes at around 200 ms post-cueing, compared to the right (i.e., relatively higher right frontal activity), followed by an inversion around 600 ms later. Conclusion: With this comprehensive approach to a psychological mechanism, new evidence would be brought to the frontal asymmetry discussion and its relationship with the BIS. Furthermore, with the present task focusing on a very particular type of motivational approach-avoidance conflict, it would open the door to further variations of the paradigm introducing the different kinds of conflict involved in AD. Even though its application as a potential biomarker appears difficult, given the individual reliability of both the task and the peak frequency in the alpha range, we hope to open the discussion on task robustness for future neuromodulation and neurofeedback applications.
Keywords: anxiety, approach-avoidance conflict, behavioral inhibition system, EEG
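The frontal alpha asymmetry discussed above is typically quantified as the difference in log alpha-band power between homologous right and left frontal electrodes; a minimal sketch with synthetic signals follows (channel names, sampling rate, and band limits are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs = 500  # Hz, assumed sampling rate

# Synthetic 10-second signals for left (F3) and right (F4) frontal
# channels: white noise plus a 10 Hz alpha component, stronger on the left.
t = np.arange(0, 10, 1 / fs)
f3 = rng.normal(size=t.size) + 2.0 * np.sin(2 * np.pi * 10 * t)
f4 = rng.normal(size=t.size) + 1.0 * np.sin(2 * np.pi * 10 * t)

def alpha_power(x, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band (Welch estimate)."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Conventional index: ln(right) - ln(left). Because alpha power is
# inversely related to cortical activation, a negative value here
# (more left alpha) indicates relatively higher RIGHT frontal activity.
asymmetry = np.log(alpha_power(f4, fs)) - np.log(alpha_power(f3, fs))
print(f"frontal alpha asymmetry = {asymmetry:.3f}")
```

In the actual paradigm this index would be computed per trial in short post-cue windows rather than over a continuous recording.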
Procedia PDF Downloads 38
1381 Rethinking Modernization Strategy of Muslim Society: The Need for Value-Based Approach
Authors: Louay Safi
Abstract:
The notion of secular society that evolved over the last two centuries was initially intended to free the public sphere from religious imposition, before it assumed the form of a comprehensive ideology whose aim is to prevent any overt religious expression in public space. The negative view of religious expression, and the desire of political elites to purge the public space of all forms of religious expression, were first experienced in the Middle East in the last decades of the twentieth century in relation to Islam, before manifesting themselves in twenty-first-century Europe. Arab regimes were encouraged by European democracies to marginalize all forms of public religious expression as part of the secularization process deemed necessary for modernization and progress. The prohibition of Islamic symbols and the outlawing of the headscarf were first undertaken in Middle Eastern republics, such as Turkey in the 1930s and Syria in the 1970s, before being implemented recently in France. Secularization has been perceived by European powers as the central aspect of social and political liberalization, and was given priority over democratization and human rights, so much so that European elites were willing to entrust the task of nurturing liberal democracy to Arab autocrats and dictators. Not only did the strategy of empowering autocratic regimes to effect a liberal democratic culture fail, but it contributed to the rise of Islamist extremism and produced failed states in Syria and Iraq that undermine both national and global peace and stability. The paper adopts the distinction made by John Rawls between political and comprehensive liberalism to argue that modernization via secularization in Muslim societies is counterproductive and has subverted early successful efforts at democratization and reform in the Middle East.
Using case studies that illustrate the role of the secularization strategy in Syria, Iran, and Egypt in undermining democratic and reformist movements in those countries, the paper calls for adopting a different approach, rooted in liberal and democratic values rather than in cultural practices and lifestyle. The paper shows that Islamic values, as articulated by reform movements, support a democratic and pluralist political order, and it emphasizes the need to legitimize and support social forces that advocate democracy and human rights. Such an alternative strategy allows for internal competition among social groups for popular support, and therefore enhances the chances that those with inclusive and forward-looking political principles and policies would create a democratic and pluralist political order more conducive to meaningful national and global cooperation, and respectful of human dignity.
Keywords: democracy, Islamic values, political liberalism, secularization
Procedia PDF Downloads 168
1380 Destruction of Colon Cells by Nanocontainers of Ferromagnetic
Authors: Lukasz Szymanski, Zbigniew Kolacinski, Grzegorz Raniszewski, Slawomir Wiak, Lukasz Pietrzak, Dariusz Koza, Karolina Przybylowska-Sygut, Ireneusz Majsterek, Zbigniew Kaminski, Justyna Fraczyk, Malgorzata Walczak, Beata Kolasinska, Adam Bednarek, Joanna Konka
Abstract:
The aim of this work is to investigate the influence of an electromagnetic field in the radio-frequency range on nanoparticles designed for cancer therapy. The article presents the development and demonstration of a method, and a model device, for the hyperthermic selective destruction of cancer cells. The method is based on the synthesis and functionalization of carbon nanotubes serving as nanocontainers for ferromagnetic material. The methodology for producing carbon ferromagnetic nanocontainers (FNCs) includes: the synthesis of carbon nanotubes; chemical and physical characterization; increasing the content of ferromagnetic material; and biochemical functionalization involving the attachment of the key addresses. The ferromagnetic nanocontainers were synthesised in a CVD and microwave plasma system. Biochemical functionalization of the ferromagnetic nanocontainers is necessary to increase selective binding to receptors present on the surface of tumour cells. Pristine ferromagnetic carbon nanotubes are not suitable for application in medicine and biotechnology; appropriate functionalization yields materials useful in medicine. A multi-step modification procedure was therefore used to attach folic acid to the surface of the FNCs, so that the final product carries folic acid on its surface. Folic acid is a ligand of the folate receptor α, which is overexpressed on the surface of epithelial tumour cells; it is expected that the folic acid will be recognized and selectively bound by receptors present on the surface of tumour cells. In our research, the FNCs were covalently functionalized in a multi-step procedure. The ferromagnetic carbon nanotubes were first oxidised using different oxidative agents; for this purpose, strong acids such as HNO3, or a mixture of HNO3 and H2SO4, were used.
Reactive carbonyl and carboxyl groups were formed at the open ends and at defects on the sidewalls of the FNCs. These groups allow further modification of the FNCs, such as amidation and the introduction of appropriate linkers separating the solid surface of the FNCs from the ligand (folic acid). In our studies, an amino acid and a peptide were applied as ligands. The last step of chemical modification was a condensation reaction with folic acid. In all reactions, derivatives of 1,3,5-triazine were used as coupling reagents. The first trials with the hyperthermia RF generator device have been performed. The frequency of the RF generator was in the ranges from 10 to 14 MHz and from 265 to 621 kHz. The functionalized nanoparticles obtained made it possible to reach the denaturation temperature of tumour cells at the given frequencies.
Keywords: cancer colon cells, carbon nanotubes, hyperthermia, ligands
Procedia PDF Downloads 313
1379 Belonging without Believing: Life Narratives of Six Social Generations of Members of the Apostolic Society
Authors: Frederique A. Demeijer
Abstract:
This article addresses the religious beliefs of members of the Apostolic Society, a Dutch religious community whose oldest living members were raised with very different beliefs than those upheld today. Currently, the Apostolic Society is the largest liberal religious community in the Netherlands, consisting of roughly 15,000 members. It is characterized by its close-knit community life and the importance of its apostle: the spiritual leader who writes a weekly letter around which the Sunday morning service is centered. The society sees itself as ‘religious-humanistic’, inspired by its Judeo-Christian roots without being dogmatic. Only a century earlier, the beliefs of the religious community revolved more strongly around the Bible, with the apostle as a link to Christ. The community also believed in the return of the Lord, resonating with the community's millenarian roots in 1830. Thus, the oldest living members have experienced fundamental changes in beliefs and rituals, yet remained members. This article reveals how members experience(d) their religious beliefs and feelings of belonging to the community, how these may or may not have changed over time, and what role the Apostolic Society played in their lives. The article presents a qualitative research approach based on two main pillars. First, life narrative interviews were conducted, to work inductively and allow different interview topics to emerge. Second, it uses generational theory in three ways: 1) to select respondents; 2) to guide the interview methodology, by being sensitive to differences in socio-historical context and events experienced during the formative years of interviewees of different social generations; and 3) to analyze and contextualize the qualitative interview data. The data were gathered from 27 respondents belonging to six social generations. All interviews were recorded, transcribed, coded, and analyzed using the Atlas.ti software program.
First, the older generations talk about growing up with the Apostolic Society being absolutely central in their daily and spiritual lives. They spent most of their time with fellow members and dedicated their free time to Apostolic activities. The central beliefs of the Apostolic Society were clear and strongly upheld, and they experienced strong belonging. Although they now see the set of central beliefs as more individually interpretable and are relieved not to have to spend all that time on Apostolic activities anymore, they still regularly attend services and speak longingly of the past with its strong belief and belonging. Second, the younger generations speak of growing up with a non-dogmatic, religious-humanist set of beliefs, but still with a very strong sense of belonging to the religious community. They now attend services irregularly and talk about belonging, but not as strongly as the older generations do. Third, across the generations, members spend more time outside the Apostolic Society than within it. The way they speak about their religious beliefs is fluid and differs as much within generations as between them: for example, there is no central view on what God is. It seems the experience of members of the Apostolic Society across different generations can now be characterized as belonging without believing.
Keywords: generational theory, individual religious experiences, life narrative history interviews, qualitative research design
Procedia PDF Downloads 111
1378 A Survey of Digital Health Companies: Opportunities and Business Model Challenges
Authors: Iris Xiaohong Quan
Abstract:
The global digital health market reached 175 billion U.S. dollars in 2019 and is expected to grow at about 25% CAGR to over 650 billion USD by 2025. Different terms, such as digital health, e-health, mHealth, and telehealth, have been used in the field, which can sometimes cause confusion. The term digital health was originally introduced to refer specifically to the use of interactive media, tools, platforms, applications, and solutions that are connected to the Internet to address the health concerns of providers as well as consumers. While mHealth emphasizes the use of mobile phones in healthcare, telehealth means using technology to remotely deliver clinical health services to patients. According to the FDA, “the broad scope of digital health includes categories such as mobile health (mHealth), health information technology (IT), wearable devices, telehealth and telemedicine, and personalized medicine.” Some researchers believe that digital health is nothing other than the cultural transformation healthcare has been going through in the 21st century because of digital health technologies that provide data to both patients and medical professionals. While digital health is burgeoning, research in the area is still inadequate; our paper therefore aims to clear up the definitional confusion and provide an overall picture of digital health companies. We further investigate how business models are designed and differentiated in the emerging digital health sector. Both quantitative and qualitative methods are adopted in the research. For the quantitative analysis, our research data came from two databases, Crunchbase and CBInsights, which are well-recognized information sources for researchers, entrepreneurs, managers, and investors. We searched a few keywords in the Crunchbase database based on companies’ self-descriptions: digital health, e-health, and telehealth. A search for “digital health” returned 941 unique results, “e-health” returned 167 companies, and “telehealth” 427.
We also searched the CBInsights database for similar information. After merging the lists, removing duplicates, and cleaning up the data, we arrived at a list of 1,464 digital health companies. A qualitative method is used to complement the quantitative analysis: an in-depth case analysis of three successful unicorn digital health companies, to understand how business models evolve and to discuss the challenges faced in this sector. Our research returned some interesting findings. For instance, we found that 86% of the digital health startups were founded in the decade since 2010; 75% of the digital health companies have fewer than 50 employees, and almost 50% fewer than 10. This shows that digital health companies are relatively young and small in scale. On the business model analysis, while traditional healthcare businesses emphasize the so-called “3P” (patient, physicians, and payer), digital health companies extend to “5P” by adding patents, which result from technology requirements (such as the development of artificial intelligence models), and platform, which is an effective value-creation approach to bring the stakeholders together. Our case analysis details this 5P framework and contributes to the extant knowledge on business models in the healthcare industry.
Keywords: digital health, business models, entrepreneurship opportunities, healthcare
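The merge-and-deduplicate step described above can be sketched with pandas; the company names, columns, and counts below are purely illustrative, not the actual Crunchbase or CBInsights exports:

```python
import pandas as pd

# Hypothetical keyword-search results from the two databases
# (real exports would carry many more columns: funding, founding year, ...).
crunchbase = pd.DataFrame({
    "name": ["Livongo", "Hims", "Ro", "Teladoc"],
    "keyword": ["digital health", "telehealth", "telehealth", "telehealth"],
})
cbinsights = pd.DataFrame({
    "name": ["Ro", "Teladoc", "Cerebral"],
    "keyword": ["digital health", "telehealth", "telehealth"],
})

# Merge the sources, then drop duplicate companies by normalized name.
combined = pd.concat([crunchbase, cbinsights], ignore_index=True)
combined["name_key"] = combined["name"].str.strip().str.lower()
deduped = combined.drop_duplicates(subset="name_key").drop(columns="name_key")

print(len(deduped))  # number of unique companies across both databases
```

In practice, matching on raw names alone would miss duplicates with spelling variants; fuzzy matching or matching on company website domains would tighten the deduplication.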
Procedia PDF Downloads 183
1377 Beyond Objectification: Moderation Analysis of Trauma and Overexcitability Dynamics in Women
Authors: Ritika Chaturvedi
Abstract:
Introduction: Sexual objectification, characterized by the reduction of an individual to a mere object of sexual desire, remains a pervasive societal issue with profound repercussions for individual well-being. Such experiences, often rooted in systemic and cultural norms, have long-lasting implications for mental and emotional health. This study aims to explore the intricate relationship between experiences of sexual objectification and insidious trauma, further investigating the potential moderating effects of overexcitabilities as proposed by Dabrowski's theory of positive disintegration. Methodology: The research involved a comprehensive cohort of 204 women, spanning ages from 18 to 65 years. Participants completed self-administered questionnaires designed to capture their experiences of sexual objectification. Additionally, the questionnaire assessed symptoms indicative of insidious trauma and explored overexcitabilities across five distinct domains: emotional, intellectual, psychomotor, sensual, and imaginational. Employing statistical techniques including multiple regression and moderation analysis, the study sought to decipher the interplay among these variables. Findings: The study's results revealed a compelling positive correlation between experiences of sexual objectification and the onset of symptoms indicative of insidious trauma. This correlation underscores the profound and detrimental effects of sexual objectification on an individual's psychological well-being. Interestingly, the moderation analyses introduced a nuanced understanding, highlighting the differential roles of the various overexcitabilities. Specifically, emotional, intellectual, and sensual overexcitabilities were found to exacerbate trauma symptomatology. In contrast, psychomotor overexcitability emerged as a protective factor, demonstrating a mitigating influence on the relationship between sexual objectification and trauma.
Implications: The study’s findings hold significant implications for a diverse array of stakeholders, encompassing mental health practitioners, educators, policymakers, and advocacy groups. The identified moderating effects of overexcitabilities emphasize the need for tailored interventions that consider individual differences in coping and resilience mechanisms. By recognizing the pivotal role of overexcitabilities in modulating the traumatic consequences of sexual objectification, this research advocates for the development of more nuanced and targeted support frameworks. Moreover, the study underscores the importance of continued research endeavors to unravel the intricate mechanisms and dynamics underpinning these relationships. Such endeavors are crucial for fostering the evolution of informed, evidence-based interventions and strategies aimed at mitigating the adverse effects of sexual objectification and promoting holistic well-being.
Keywords: sexual objectification, insidious trauma, emotional overexcitability, intellectual overexcitability, sensual overexcitability, psychomotor overexcitability, imaginational overexcitability
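The moderation analysis named in the methodology can be sketched as a regression with an interaction term between the predictor and the moderator. The simulated scores, effect sizes, and variable names below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 204  # matching the reported sample size

# Simulated standardized questionnaire scores
objectification = rng.normal(0, 1, n)
overexcitability = rng.normal(0, 1, n)
# Simulate an exacerbating moderator: the slope of objectification on trauma
# grows with overexcitability (positive interaction term, true value 0.4).
trauma = (0.5 * objectification
          + 0.2 * overexcitability
          + 0.4 * objectification * overexcitability
          + rng.normal(0, 0.5, n))

# Centre predictors before forming the interaction (standard practice)
x1 = objectification - objectification.mean()
x2 = overexcitability - overexcitability.mean()
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, trauma, rcond=None)
print(round(float(beta[3]), 2))  # estimated interaction coefficient (true: 0.4)
```

In this framing, a positive interaction coefficient corresponds to an exacerbating moderator (as reported for emotional, intellectual, and sensual overexcitability), while a negative one corresponds to a protective moderator (as reported for psychomotor overexcitability).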
Procedia PDF Downloads 46
1376 A Conceptual Framework of Integrated Evaluation Methodology for Aquaculture Lakes
Authors: Robby Y. Tallar, Nikodemus L., Yuri S., Jian P. Suen
Abstract:
Research on ecological water resources management addresses many open questions and today seems to be one branch of science that can strongly contribute to the study of complexity (physical, biological, ecological, socio-economic, environmental, and other aspects). Much of the existing literature on these studies is technical and targeted at specific users. This study combines all of these aspects in an evaluation methodology for aquaculture lakes; its paradigm refers to hierarchical theory and to the effects of the specific spatial arrangement of an object within a space or local area. The process of developing the conceptual framework therefore follows an integrated, applicable concept derived from grounded theory. A design of an integrated evaluation methodology for aquaculture lakes is presented. The method is based on the identification of a series of attributes which can be used to describe the status of aquaculture lakes using indicators from the aquaculture water quality index (AWQI), the aesthetic aquaculture lake index (AALI), and the rapid appraisal for fisheries index (RAPFISH). The preliminary preparation was accomplished as follows: first, the study area was characterized at different spatial scales. Second, an inventory of core data resources was compiled, such as the city master plan, water quality reports from the environmental agency, and related government regulations. Third, a ground-checking survey was completed to validate the on-site condition of the study area. Finally, to design the integrated evaluation methodology, we integrated these indicators and developed a rating-score system called the Integrated Aquaculture Lake Index (IALI). The IALI reflects a compromise among all aspects and responds to the need for concise information about the current status of aquaculture lakes through a comprehensive approach.
The IALI was elaborated as a decision-aid tool for stakeholders to evaluate the impact and contribution of anthropogenic activities on the aquaculture lake’s environment. The conclusion is that, while there is no denying that aquaculture lakes are under great threat from the pressure of increasing human activities, no evaluation methodology for aquaculture lakes can succeed by insisting that pristine conditions be maintained. The IALI developed in this work can be used as an effective, low-cost evaluation methodology of aquaculture lakes for developing countries, because it emphasizes simplicity and understandability: it must communicate to decision makers and experts alike. Moreover, stakeholders need help in perceiving their lakes so that sites can be accepted and valued by local people. For lake development, the accessibility and planning designation of the site are of decisive importance: local people want to know whether the lake’s condition is safe and whether it can be used.
Keywords: aesthetic value, AHP, aquaculture lakes, integrated lakes, RAPFISH
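As a rough illustration of how a composite index such as the proposed IALI might combine its three sub-indices, the sketch below uses an assumed weighted sum on a 0-100 scale and assumed rating bands; the paper derives its own rating-score system, so the weights and thresholds here are hypothetical:

```python
def iali(awqi: float, aali: float, rapfish: float,
         weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination of three sub-indices, each assumed on a 0-100 scale."""
    scores = (awqi, aali, rapfish)
    if not all(0 <= s <= 100 for s in scores):
        raise ValueError("sub-indices must lie on a 0-100 scale")
    return round(sum(w * s for w, s in zip(weights, scores)), 2)

def rating(score: float) -> str:
    """Map a composite score to a qualitative status label (assumed bands)."""
    if score >= 75:
        return "good"
    if score >= 50:
        return "fair"
    return "poor"

score = iali(awqi=70, aali=60, rapfish=50)
print(score, rating(score))  # 61.0 fair
```

The single label is what makes such an index "communicate to decision makers": the technical sub-indices stay available for experts, while local stakeholders see one interpretable status.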
Procedia PDF Downloads 237
1375 Triple Case Phantom Tumor of Lungs
Authors: Angelis P. Barlampas
Abstract:
Introduction: The term phantom lung mass describes an ovoid collection of fluid within an interlobular fissure, which initially creates the impression of a mass. The problem of correct differential diagnosis is considerable, especially in plain radiography. A case is presented with three nodular pulmonary foci, whose shape, location, and density, together with the presence of chronic loculated pleural effusions, suggest the presence of multiple phantom tumors of the lung. Purpose: The aim of this paper is to draw the attention of non-experienced and non-specialized physicians to the existence of benign findings that mimic pathological conditions, and vice versa. Careful study of a radiological examination, and comparison with previous exams or further investigation, protects against hasty wrong conclusions. Methods: A hospitalized patient underwent a non-contrast CT scan of the chest as part of her general workup. Results: Computed tomography revealed pleural effusions, some of them loculated, an increased cardiothoracic index, and the presence of three nodular foci, one in the left lung and two in the right, with a maximum density of up to 18 Hounsfield units and a mean diameter of approximately five centimeters. Two of them are located in the characteristic anatomical position of the major interlobular fissure. The third is located in the posterior basal part of the right lower lobe and presents the same characteristics as the previous ones; it is likely a loculated fluid collection within an accessory interlobular fissure or a cyst, in the context of the patient's more general pleural entrapments and loculations.
The differential diagnosis of such nodular foci, based on their imaging characteristics, includes the following: a) rare metastatic foci with low density (liposarcoma, mucinous tumors of the digestive or genital system, necrotic metastatic foci, metastatic renal cancer, etc.), b) necrotic multiple primary lung tumor locations (squamous cell carcinoma, etc.), c) hamartomas of the lung, d) fibrous tumors of the interlobular fissures, e) lipoid pneumonia, f) fluid collections within the interlobular fissures, g) lipoma of the lung, h) myelolipomas of the lung. Conclusions: A collection of fluid within an interlobular fissure of the lung can give the false impression of a lung mass, particularly on plain chest radiography. With computed tomography, the ability to measure the density of a lesion, combined with the high anatomical detail it provides on the location and characteristics of the lesion, can lead relatively easily to the correct diagnosis. In cases of doubt or image artifacts, comparison with previous or subsequent examinations can resolve any disagreement, while in rare cases intravenous contrast may be necessary.
Keywords: phantom mass, chest CT, pleural effusion, cancer
Procedia PDF Downloads 55
1374 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans
Authors: Jian Wu
Abstract:
For several years now, some French companies have integrated CSR (corporate social responsibility) criteria into their variable remuneration plans to ‘restore a good working atmosphere’ and ‘preserve the natural environment’. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was published jointly by ORSE (the French acronym for the Observatory on CSR) and PricewaterhouseCoopers. Facing this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach in our study. First, we examine the debate between the ‘orthodox’ point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain ‘invisible hand’ which helps to resolve all problems. When companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility that firms should have is profit-seeking while respecting the minimum legal requirements. However, the CSR school considers that, as long as the economic system is not perfect, there is no ‘invisible hand’ that can arrange everything in good order. This means that we cannot count on any ‘divine force’ to make corporations responsible towards society. Something more needs to be done in addition to firms’ economic and legal obligations. We then rely on financial theories and empirical evidence to examine the soundness of CSR’s foundations. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are tacit ‘social contracts’ between a company and society itself.
A firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or a bad reputation. Legitimacy theory tells us that corporations have to ‘legitimize’ their actions towards society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between the CSR performance and the financial performance of a firm. We note that, owing to difficulties in defining these performances, this relationship remains ambiguous despite the numerous research works carried out in the field. Finally, we ask whether the integration of CSR criteria in variable remuneration plans – practiced so far mainly in big companies – should be extended to others. After investigation, we note that two groups of firms have the greatest need. The first involves industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies. The second involves companies which are under pressure to deliver returns in the face of international competition.
Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory
Procedia PDF Downloads 186
1373 Comparison between Two Software Packages GSTARS4 and HEC-6 about Prediction of the Sedimentation Amount in Dam Reservoirs and to Estimate Its Efficient Life Time in the South of Iran
Authors: Fatemeh Faramarzi, Hosein Mahjoob
Abstract:
Building dams on rivers for the utilization of water resources disturbs the hydrodynamic equilibrium and results in all or part of the sediments carried by the water being deposited in the dam reservoir. This phenomenon also has significant impacts on the water and sediment flow regime and, in the long term, can cause morphological changes in the environment surrounding the river, reducing the useful life of the reservoir and threatening sustainable development through inefficient management of water resources. In the past, empirical methods were used to predict the sedimentation amount in dam reservoirs and to estimate their efficient lifetime, but recently mathematical and computational models have been widely used in reservoir sedimentation studies as a suitable tool. These models usually solve the equations using the finite element method. This study compares the results of two software packages, GSTARS4 and HEC-6, in predicting the sedimentation amount in the Dez dam, southern Iran. Each model provides a one-dimensional, steady-state simulation of sediment deposition and erosion by solving the equations of momentum, flow and sediment continuity, and sediment transport. GSTARS4 (Generalized Sediment Transport Model for Alluvial River Simulation) is based on a one-dimensional mathematical model that simulates bed changes in both longitudinal and transverse directions by using flow tubes in a quasi-two-dimensional scheme; it was used to calibrate a period of 47 years and forecast the next 47 years of sedimentation in the Dez Dam. This dam is among the highest dams in the world (203 m high), irrigates more than 125,000 hectares of downstream land, and plays a major role in flood control in the region. The input data, including geometry, hydraulic, and sedimentary data, span 1955 to 2003 on a daily basis. To predict future river discharge, in this research the time series data were assumed to repeat after 47 years.
The results obtained in the delta region were very satisfactory: the output from GSTARS4 was almost identical to the hydrographic profile surveyed in 2003. In the Dez reservoir, however, owing to its length (65 km) and large volume, vertical currents are dominant, causing the calculations by the above-mentioned method to be inaccurate. To solve this problem, we used the empirical reduction method to calculate the sedimentation in the downstream area, which yielded very good results. Thus, we demonstrated that by combining these two methods, a very suitable model of sedimentation in the Dez dam for the study period can be obtained. The present study also demonstrated that the outputs of the two software packages are consistent.
Keywords: Dez Dam, prediction, sedimentation, water resources, computational models, finite element method, GSTARS4, HEC-6
Procedia PDF Downloads 313
1372 Investigation into the Socio-ecological Impact of Migration of Fulani Herders in Anambra State of Nigeria Through a Climate Justice Lens
Authors: Anselm Ego Onyimonyi, Maduako Johnpaul O.
Abstract:
The study was designed to investigate the socio-ecological impact of the migration of Fulani herders in Anambra State, Nigeria, through a climate justice lens. Nigeria is one of the world’s most populous countries, with a population of over 200 million people, half of whom are considered to be in abject poverty. There is no doubt that livestock production makes a sustainable contribution to food security and poverty reduction in the Nigerian economy, but not without some environmental implications, like any other economic activity. Nigeria is recognized as being vulnerable to climate change. Climate change and global warming, if left unchecked, will cause adverse effects on livelihoods in Nigeria, such as livestock production, crop production, fisheries, forestry, and post-harvest activities: rainfall regimes and patterns will be altered; floods which devastate farmlands will occur; increases in temperature and humidity, which increase pests and disease, will occur; and other natural disasters such as desertification, drought, floods, and ocean and storm surges, which not only damage Nigerians’ livelihoods but also cause harm to life and property, will occur. These and other climatic issues, as they affect Fulani herders, are what this study investigated. In carrying out this research, a survey research design was adopted and a simple sampling technique was used. One local government area (LGA) was selected purposively from each of the four agricultural zones in the state based on its predominance of Fulani herders. For appropriate sampling, 25 respondents from each of the four agricultural zones in the state were randomly selected, making up the 100 respondents sampled. Primary data were generated using a set of structured 5-point Likert-scale questionnaires. The data generated were analyzed using SPSS, and the results are presented using descriptive statistics.
From the data analyzed, the study identified unpredicted rainfall (mean = 3.56), forest fire (mean = 4.63), drying water sources (mean = 3.99), dwindling grazing (mean = 4.43), desertification (mean = 4.44), and fertile land scarcity (mean = 3.42) as major factors predisposing Fulani herders to migrate southward, while rejecting a natural inclination to migrate (mean = 2.38) and migration to cause trouble as factors. On the reasons why Fulani herders are trying to establish permanent camps in Anambra State, moderate temperature (mean = 3.60), avoiding overgrazing (mean = 4.42), search for fodder (mean = 4.81) and water (mean = 4.70), need for a market (mean = 4.28), favorable environment (mean = 3.99), and access to fertile land (mean = 3.96) were identified. It was concluded that changing climatic variables necessitated the migration of herders from northern Nigeria to areas in the south where those variables are most favorable to the herders and their animals.
Keywords: socio-ecological, migration, fulani, climate, justice, lens
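The 5-point Likert analysis implied above, where a factor is accepted when its mean response clears the scale midpoint, can be sketched as follows. The responses and the 2.5 cutoff are illustrative assumptions; the paper reports only SPSS-derived means:

```python
from statistics import mean

def is_major_factor(responses, cutoff=2.5):
    """Return (rounded item mean, accepted?) for one Likert item (1-5 scale)."""
    m = mean(responses)
    return round(m, 2), m >= cutoff

# e.g. 10 illustrative respondents scoring "Forest fire" on a 1-5 scale
forest_fire = [5, 5, 4, 5, 4, 5, 5, 4, 5, 4]
print(is_major_factor(forest_fire))  # (4.6, True) -> accepted as a major factor
```

A rejected item such as "natural inclination to migrate" (reported mean = 2.38) would simply return `False` in the second position under the same cutoff.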
Procedia PDF Downloads 43
1371 C-Spine Imaging in a Non-trauma Centre: Compliance with NEXUS Criteria Audit
Authors: Andrew White, Abigail Lowe, Kory Watkins, Hamed Akhlaghi, Nicole Winter
Abstract:
The timing and appropriateness of diagnostic imaging are critical to the evaluation and management of traumatic injuries. Within the subclass of trauma patients, the prevalence of c-spine injury is less than 4%. However, the incidence of delayed diagnosis within this cohort has been documented as up to 20%, with inadequate radiological examination the most commonly cited issue. To assess those patients in whom c-spine injury cannot be fully excluded on clinical examination alone, and who should therefore undergo diagnostic imaging, a set of criteria is used to provide clinical guidance. The NEXUS (National Emergency X-Radiography Utilisation Study) criteria are a validated clinical decision-making tool used to facilitate selective c-spine radiography. The criteria allow clinicians to determine whether cervical spine imaging can be safely avoided in appropriate patients. The NEXUS criteria are widely used within the Emergency Department setting, given their ease of use and relatively straightforward application, and are incorporated in the Victorian State Trauma System’s guidelines. This audit used retrospective data collection to examine the concordance of c-spine imaging in trauma patients with the NEXUS criteria and to assess compliance with state guidance on diagnostic imaging in trauma. Of the 183 patients who presented with trauma to the head, neck, or face (244 were excluded due to incorrect triage), 98 did not undergo imaging of the c-spine. Of those 98, 44% fulfilled at least one of the NEXUS criteria, meaning the c-spine could not be clinically cleared as per the current guidelines. The criterion most often met was intoxication, comprising 42% (18 of 43), with midline spinal tenderness (or absent documentation of this) the second most common at 23% (10 of 43). Intoxication being the most frequently met criterion is significant but not unexpected, given the cohort of patients seen at St Vincent’s and within many emergency departments generally.
Given that these patients will always meet NEXUS criteria, an element of clinical judgment is likely needed, or concurrent use of the Canadian C-Spine Rule, to exclude the need for imaging. Midline tenderness was often recorded as a met criterion in the context of poor or absent documentation, emphasizing the importance of clear and accurate assessments. A distracting injury was identified in 7 of the 43 patients; however, only one of these patients exhibited a thoracic injury (a T11 compression fracture), with the remainder comprising injuries to the extremities. Some studies suggest that c-spine imaging may not be required in the evaluable blunt trauma patient despite distracting injuries in body regions that do not involve the upper chest. This emphasises the need for standardised definitions of distracting injury, at least at a departmental/regional level. The data highlight the currently poor application of the NEXUS guidelines, with likely common themes throughout emergency departments, underlining the need for further education regarding implementation and potential refinement/clarification of the criteria. Of note, there appeared to be no significant differences between levels of experience with respect to inappropriately clearing the c-spine clinically against the guidelines.
Keywords: imaging, guidelines, emergency medicine, audit
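Applying the five NEXUS low-risk criteria, as audited above, reduces to checking that none is present: imaging can be safely avoided only when all five are absent. A minimal sketch, with field names that are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class NexusAssessment:
    """One patient's status on the five NEXUS criteria (True = criterion met)."""
    midline_tenderness: bool
    intoxication: bool
    altered_alertness: bool
    focal_neuro_deficit: bool
    distracting_injury: bool

    def met_criteria(self):
        """Names of the criteria this patient meets."""
        return [name for name, flag in vars(self).items() if flag]

    def can_clear_clinically(self) -> bool:
        """C-spine may be cleared without imaging only if no criterion is met."""
        return not self.met_criteria()

# e.g. the audit's most common scenario: an intoxicated but otherwise low-risk patient
patient = NexusAssessment(
    midline_tenderness=False, intoxication=True,
    altered_alertness=False, focal_neuro_deficit=False,
    distracting_injury=False)
print(patient.can_clear_clinically())  # False: intoxication alone mandates imaging
print(patient.met_criteria())          # ['intoxication']
```

The all-or-nothing structure is exactly why intoxicated patients "will always meet NEXUS criteria", motivating the audit's suggestion of adjunct clinical judgment or the Canadian C-Spine Rule.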
Procedia PDF Downloads 72
1370 Let’s Work It Out: Effects of a Cooperative Learning Approach on EFL Students’ Motivation and Reading Comprehension
Authors: Shiao-Wei Chu
Abstract:
In order to enhance the ability of their graduates to compete in an increasingly globalized economy, the majority of universities in Taiwan require students to pass Freshman English in order to earn a bachelor's degree. However, many college students show low motivation in English class for several important reasons, including exam-oriented lessons, unengaging classroom activities, a lack of opportunities to use English in authentic contexts, and low levels of confidence in using English. Students’ lack of motivation in English classes is evidenced when students doze off, work on assignments from other classes, or use their phones to chat with others, play video games or watch online shows. Cooperative learning aims to address these problems by encouraging language learners to use the target language to share individual experiences, cooperatively complete tasks, and to build a supportive classroom learning community whereby students take responsibility for one another’s learning. This study includes approximately 50 student participants in a low-proficiency Freshman English class. Each week, participants will work together in groups of between 3 and 4 students to complete various in-class interactive tasks. The instructor will employ a reward system that incentivizes students to be responsible for their own as well as their group mates’ learning. The rewards will be based on points that team members earn through formal assessment scores as well as assessment of their participation in weekly in-class discussions. The instructor will record each team’s week-by-week improvement. Once a team meets or exceeds its own earlier performance, the team’s members will each receive a reward from the instructor. This cooperative learning approach aims to stimulate EFL freshmen’s learning motivation by creating a supportive, low-pressure learning environment that is meant to build learners’ self-confidence. 
Students will practice all four language skills; however, the present study focuses primarily on the learners’ reading comprehension. Data sources include in-class discussion notes, instructor field notes, one-on-one interviews, students’ midterm and final written reflections, and reading scores. Triangulation is used to determine themes and concerns, and an instructor-colleague analyzes the qualitative data to build interrater reliability. Findings are presented through the researcher’s detailed description. The instructor-researcher has developed this approach in the classroom over several terms, and its apparent success at motivating students inspires this research. The aims of this study are twofold: first, to examine the possible benefits of this cooperative approach in terms of students’ learning outcomes; and second, to help other educators to adapt a more cooperative approach to their classrooms.
Keywords: freshman English, cooperative language learning, EFL learners, learning motivation, zone of proximal development
Procedia PDF Downloads 145
1369 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software
Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi
Abstract:
Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy which overlooks the Adriatic Sea: the province of Macerata. The aim is to analyze spatial climate changes, for precipitation and temperatures, over the last 3 climatological standard normals (1961-1990; 1971-2000; 1981-2010) through GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subject to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To select the best geostatistical technique for interpolation, the results of cross validation were used. The co-kriging method with altitude as the independent variable produced the best cross validation results for all time periods among the methods analysed, with a 'root mean square error standardized' close to 1, a 'mean standardized error' close to 0, and similar values of 'average standard error' and 'root mean square error'. The maps resulting from the analysis were compared by subtraction between rasters, producing 3 maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1 °C between 1961-1990 and 1971-2000 and of 0.6 °C between 1961-1990 and 1981-2010. Annual precipitation, instead, shows the opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences between the areas have been highlighted with area graphs and summarized in several tables as descriptive analysis.
For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08 °C (77.04 km² out of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 instead show a most areally represented frequency of 0.83 °C (36.9 km²), with a kurtosis of -0.45 and a skewness of 0.92. It can therefore be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother, but more intense in its growth, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both variation periods (1961/1990-1971/2000 and 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62), and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
Keywords: climate change, GIS, interpolation, co-kriging
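The raster-subtraction step used above to produce the change maps is a cell-by-cell difference of two interpolated grids. The 3x3 grids below are toy values, not the study's interpolated surfaces:

```python
def raster_diff(new, old):
    """Cell-by-cell difference of two equally sized grids (new - old)."""
    return [[round(n - o, 2) for n, o in zip(row_new, row_old)]
            for row_new, row_old in zip(new, old)]

# Illustrative mean annual temperature grids (°C) for two climate normals
t_1961_1990 = [[12.0, 11.5, 10.8],
               [12.3, 11.9, 11.1],
               [12.6, 12.1, 11.4]]
t_1981_2010 = [[12.6, 12.1, 11.5],
               [12.9, 12.5, 11.6],
               [13.2, 12.7, 12.0]]

change = raster_diff(t_1981_2010, t_1961_1990)
print(change[0])  # [0.6, 0.6, 0.7] -- on the order of the ~0.6 °C rise reported
```

In a GIS environment the same operation is a raster-calculator subtraction of the two co-kriged surfaces; the histogram of the resulting change raster is what yields the kurtosis, skewness, and most-represented-frequency figures discussed above.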
Procedia PDF Downloads 127
1368 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities
Authors: Shaurya Chauhan, Sagar Gupta
Abstract:
Prominent urbanizing centres across the globe, such as Delhi, Dhaka, or Manila, have shown that development often faces a challenge in bridging the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of its ever-diversifying population. When this exclusion is intertwined with rapid urbanization and a diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automated responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application through its prototypical nature and an inclusive approach through mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as the OSPDF) that can be used for on-ground field applications. To ground the framework in specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggested measures also integrate the region's cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature.
This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales in design: master planning, urban design, architecture, tectonics, and modularity, in a chronological manner. At each of these scales, the possible approaches and avenues for open-sourcing are identified, validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deeply rooted in the research process. In addition to the five-part OSPDF, a two-part subsidiary process is suggested after each cycle of application, for continued appraisal and refinement of the framework and the urban fabric over time. The research is an exploration of the possibilities for the architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.
Keywords: open source, public participation, urbanization, urban development
Procedia PDF Downloads 149
1367 The Cultural Shift in Pre-owned Fashion as Sustainable Consumerism in Vietnam
Authors: Lam Hong Lan
Abstract:
The textile industry is said to be the second-largest polluter, responsible for 92 million tonnes of waste annually. There is an urgent need to adopt circular-economy practices that increase use and reuse around the world. By its nature, the pre-owned fashion business is considered part of the circular economy, as it helps to eliminate waste and keep products in circulation. Second-hand clothes and accessories used to be associated with a 'cheap image' that carried 'old energy' in Vietnam. This perception has shifted, especially amongst the younger generation. Vietnamese consumers are spending more on products and services that increase self-esteem. The same consumers are moving away from a collectivist social identity towards a 'me, not we' outlook as they look for ways to express their individual identity. Pre-owned fashion is one of their solutions: it offers value for money, can create a unique personal style for the wearer, and is linked with sustainability. The design of this study is based on second-hand shopping motivation theory. A semi-structured online survey was conducted with 100 consumers from one pre-owned clothing community and one pre-owned e-commerce site in Vietnam. The findings show that, in contrast to older Vietnamese consumers (55+ yo) who, in a previous study, generally associated pre-owned fashion with a 'low-cost', 'cheap image' that carried 'old energy', young customers (20-30 yo) actively promoted their pre-owned fashion items to the public via outlets' social platforms and their own social media. This cultural shift comes from the impact of global and local discourse around sustainable fashion and the growth of digital platforms in the pre-owned fashion business over the last five years, which has generally supported wider interest in pre-owned fashion in Vietnam. It can be summarised in three areas: (1) Global and local celebrity influencers. A number of celebrities have been photographed wearing vintage items in music videos, photoshoots, or at red-carpet events.
(2) E-commerce and intermediaries. International e-commerce sites – e.g., Vinted, TheRealReal – and/or local apps – e.g., Re.Loved – can influence attitudes and behaviors towards pre-owned consumption. (3) Eco-awareness. The increased online coverage of climate change and environmental pollution has encouraged customers to adopt a more eco-friendly approach to their wardrobes. While sustainable biomaterials and designs are still finding their way into the mainstream, sustainable consumerism via pre-owned fashion seems to be an immediate solution for lengthening the clothes' lifecycle. This study has found that young consumers are primarily seeking value for money and/or a unique personal style from pre-owned/vintage fashion, while using these purchases to promote their own 'eco-awareness' via their social media networks. This is a good indication for fashion designers to keep in mind in their design process, and for fashion enterprises when choosing a business model that does not overproduce fashion items.
Keywords: cultural shift, pre-owned fashion, sustainable consumption, sustainable fashion
1366 The Data Quality Model for IoT-Based Real-Time Water Quality Monitoring Sensors
Authors: Rabbia Idrees, Ananda Maiti, Saurabh Garg, Muhammad Bilal Amin
Abstract:
IoT devices are the basic building blocks of an IoT network and generate an enormous volume of real-time, high-speed data that helps organizations and companies make intelligent decisions. Integrating this enormous data from multiple sources and transferring it to the appropriate client is fundamental to IoT development. Handling this huge number of devices along with the huge volume of data is very challenging. IoT devices are battery-powered and resource-constrained; to provide energy-efficient communication, they go to sleep and wake up periodically or aperiodically, depending on traffic loads, to reduce energy consumption. Sometimes these devices get disconnected due to battery depletion. If a node is not available in the network, the IoT network provides incomplete, missing, and inaccurate data. Moreover, many IoT applications, like vehicle tracking and patient tracking, require the IoT devices to be mobile. Due to this mobility, if the distance of a device from the sink node becomes greater than allowed, the connection is lost. After such a disconnection, other devices join the network to replace the broken-down and departed devices. This makes IoT devices dynamic in nature, which brings uncertainty and unreliability into the IoT network and hence produces data of bad quality. Due to this dynamic nature of IoT devices, we do not know the actual reason for abnormal data. If data are of poor quality, decisions are likely to be unsound. It is highly important to process data and estimate data quality before using it in IoT applications. In the past, many researchers tried to estimate data quality and provided several machine learning (ML), stochastic, and statistical methods to analyze stored data in the data-processing layer, without focusing on the challenges and issues arising from the dynamic nature of IoT devices and how it impacts data quality.
This research provides a comprehensive review of the impact of the dynamic nature of IoT devices on data quality and presents a data quality model that can deal with this challenge and produce data of good quality. The model is built for sensors monitoring water quality, using DBSCAN clustering and weather sensors. An extensive study was conducted on the relationship between the data of weather sensors and that of sensors monitoring the water quality of lakes and beaches. A detailed theoretical analysis is presented, establishing the correlation between the independent data streams of the two sets of sensors. With the help of this analysis and DBSCAN, a data quality model is prepared. The model encompasses five dimensions of data quality: it detects and removes outliers, assesses completeness, identifies patterns of missing values, and checks the accuracy of the data with the help of cluster positions. Finally, a statistical analysis is performed on the clusters produced by DBSCAN, and consistency is evaluated through the coefficient of variation (CoV).
Keywords: clustering, data quality, DBSCAN, and Internet of things (IoT)
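The outlier-flagging and CoV consistency steps described in this abstract can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the sensor readings, `eps`, and `min_samples` values are assumed for demonstration, and DBSCAN is hand-rolled for 1-D data to keep the example self-contained.

```python
import math

def dbscan_1d(values, eps=0.5, min_samples=3):
    """Minimal DBSCAN for 1-D sensor readings.
    Returns one label per reading: a cluster id (0, 1, ...) or -1 for noise/outlier."""
    labels = [None] * len(values)
    cluster_id = -1

    def neighbors(i):
        return [j for j, v in enumerate(values) if abs(v - values[i]) <= eps]

    for i in range(len(values)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_samples:
            labels[i] = -1          # provisionally noise
            continue
        cluster_id += 1
        labels[i] = cluster_id
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:     # noise point reachable from a core point: border
                labels[j] = cluster_id
            if labels[j] is not None:
                continue
            labels[j] = cluster_id
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_samples:
                seeds.extend(j_nbrs)  # core point: expand the cluster
    return labels

def cov(values):
    """Coefficient of variation: population std divided by mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean

# Illustrative turbidity-like readings with one sensor spike.
readings = [7.0, 7.1, 6.9, 7.2, 7.0, 42.0, 7.1]
labels = dbscan_1d(readings, eps=0.5, min_samples=3)
clean = [v for v, l in zip(readings, labels) if l != -1]
print(labels)                 # → [0, 0, 0, 0, 0, -1, 0]: the spike is flagged
print(round(cov(clean), 4))   # → 0.0136: low CoV indicates consistent readings
```

In the paper's pipeline the cluster positions would additionally be checked against correlated weather-sensor streams; here the CoV of the surviving cluster stands in for that consistency check.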
1365 Age-Related Health Problems and Needs of Elderly People Living in Rural Areas in Poland
Authors: Anna Mirczak
Abstract:
Introduction: In connection with the aging of the population and the increase in the number of people with chronic illnesses, the priority objective for public health has become not only lengthening life but also improving quality of life in older persons, as well as maintaining their relative independence and active participation in social life. The most important determinant of a person's quality of life is health. According to the literature, older people with chronic illness who live in rural settings are at greater risk of poor outcomes than their urban counterparts. Furthermore, research characterizes the rural elderly as having a higher incidence of sickness, dysfunction, disability, restricted mobility, and acute and chronic conditions than urban citizens. This is dictated by the overlap of certain socio-economic factors typical of rural areas, which include social and geographic exclusion, limited access to health care centers, and low socioeconomic status. Aim of the study: The objective of this study was to assess the health status and needs of older people living in selected rural areas in Poland and to evaluate the impact of farm work on their health status. Material and methods: The study was performed in person, using interviews based on structured questionnaires, during the period from March 2011 to October 2012. The group of respondents consisted of 203 people aged 65 years and over living in selected rural areas in Poland. The collected research material was analyzed using the statistical package SPSS 19 for Windows. The level of significance for testing the hypotheses was set at 0.05. Results: The mean age of participants was 75.5 years (SD = 5.7), ranging from 65 to 94 years. Most of the interviewees had children (89.2%) and grandchildren (83.7%) and lived mainly with family members (75.9%), mostly in two-person (46.8%) and three-person (20.8%) households.
The majority of respondents (71.9%) did physical work on the farm. At the time of the interview, each respondent reported having been diagnosed with at least one chronic disease by their GP. The most common were: hypertension (67.5%), osteoarthritis (44.8%), atherosclerosis (43.3%), cataract (40.4%), arrhythmia (28.6%), diabetes mellitus (19.7%), and stomach or duodenal ulcer disease (17.2%). The number of diseases occurring in the sample depended on gender and age. Significant associations were observed between working on the farm and the frequency of cardiovascular diseases, gastrointestinal tract dysfunction, and sensory disorders. Conclusions: The most common causes of disability among older rural citizens were chronic diseases, malnutrition, and limited access to health services (especially to a cardiologist and an ophthalmologist). Health care access and health status are a particular concern in rural areas, where the population is older, has lower education and income levels, and is more likely to live in medically underserved areas than is the case in urban areas.
Keywords: ageing, health status, older people, rural
1364 Anaerobic Soil Disinfestation: Feasible Alternative to Soil Chemical Fumigants
Authors: P. Serrano-Pérez, M. C. Rodríguez-Molina, C. Palo, E. Palo, A. Lacasa
Abstract:
Phytophthora nicotianae is the principal causal agent of root and crown rot disease of red pepper plants in Extremadura (Western Spain). There is a need to develop a biologically based method of soil disinfestation that facilitates profitable and sustainable production without the use of chemical fumigants. Anaerobic Soil Disinfestation (ASD), also known as biodisinfestation, has been shown to control a wide range of soil-borne pathogens and nematodes in numerous crop production systems. This method involves wetting the soil, incorporating an easily decomposable carbon-rich organic amendment, and covering with plastic film for several weeks. ASD with rapeseed cake (var. Tocatta, a glucosinolate-free variety) used as the C-source was assayed in spring 2014, before pepper crop establishment. The field experiment was conducted at the Agricultural Research Centre Finca La Orden (Southwestern Spain), with the following treatments: rapeseed cake with plastic cover (RCP); rapeseed cake without plastic cover (RC); non-amended control with plastic cover (CP); and non-amended control without plastic cover (C). The experimental design was a randomized complete block design with four replicates and a plot size of 5 x 5 m. On 26 March, rapeseed cake (1 kg·m-2) was incorporated into the soil with a rotovator. Biological probes with the inoculum were buried at 15 and 30 cm depth (the probes were previously prepared with 100 g of disinfested soil inoculated with chlamydospores (chlam) of the P. nicotianae P13 isolate [100 chlam·g-1 of soil] and wrapped in agryl cloth). Sprinkler irrigation was run until field capacity, and the corresponding plots were covered with transparent plastic (PE 0.05 mm). On 6 May the plastics were removed, the biological probes were dug out, and a bioassay was established. One pepper seedling at the 2 to 4 true-leaf stage was transplanted into the soil from each biological probe. Plants were grown in a climatic chamber, and disease symptoms were recorded every week for 2 months.
Fragments of roots and crowns of symptomatic plants were analyzed on NARPH medium, and soil from rhizospheres was analyzed using carnation petals as baits. Results for "survival" were expressed as the percentage of soil samples in which P. nicotianae was detected, and results for "infectivity" were expressed as the percentage of diseased plants. No effect of burial depth was detected. The infectivity of P. nicotianae chlamydospores was successfully reduced in the RCP treatment (4.2% infectivity) compared with the controls (41.7% infectivity). The pattern of survival was similar to the infectivity observed in the bioassay: 21% survival in RCP; 79% in CP; 83% in C; and 87% in RC. Although ASD may be an effective alternative to chemical fumigants for pest management, more research is necessary to establish its impact on the microbial community and chemistry of the soil.
Keywords: biodisinfestation, BSD, soil fumigant alternatives, organic amendments
1363 Isolation, Selection and Identification of Bacteria for Bioaugmentation of Paper Mills White Water
Authors: Nada Verdel, Tomaz Rijavec, Albin Pintar, Ales Lapanje
Abstract:
Objectives: White water circuits of wood-free paper mills contain suspended, dissolved, and colloidal particles, such as cellulose, starch, paper sizings, and dyes. When the white water circuits are closed, these particles start to accumulate and affect production. Due to the high amount of organic matter, which scavenges radicals and adsorbs onto catalyst surfaces, treatment of white water by photocatalysis is inappropriate. The most suitable approach should be bioaugmentation-assisted bioremediation. Accordingly, the objectives were: - to isolate bacteria capable of degrading the organic compounds used in the papermaking process; - to select the most active bacteria for bioaugmentation. Status: The state of the art in bioaugmentation of pulp and paper mill effluents is mostly based on the biodegradation of lignin, whereas in the white water circuits of wood-free paper mills only papermaking compounds are present. As far as can be told from the literature, a study of the degradation activities of bacteria against all possible compounds of the papermaking process is a novelty. Methodology: The main parameters of the selected white water were systematically analyzed over a period of two months. Bacteria were isolated on selective media with a particular carbon source. The organic substances used as carbon sources enter white water circuits either as base paper or as recycled broke. The screening of bacterial activities against starch, cellulose, latex, polyvinyl alcohol, alkyl ketene dimers, and resin acids was followed by the addition of Lugol's solution. Degraders of polycyclic aromatic dyes were selected by cometabolism tests; cometabolism is the simultaneous biodegradation of two compounds, in which the degradation of the second compound depends on the presence of the first. The obtained strains were identified by 16S rRNA sequencing. Findings: 335 autochthonous strains were isolated on plates with the selected carbon sources. The isolated strains were selected according to their degradation of the particular carbon source.
The best degraders of cationic starch, cellulose, and sizings are Pseudomonas sp. NV-CE12-CF and Aeromonas sp. NV-RES19-BTP. The most active strains capable of degrading azo dyes are Aeromonas sp. NV-RES19-BTP and Sphingomonas sp. NV-B14-CF. Klebsiella sp. NV-Y14A-BTP degrades the polycyclic aromatic dye direct blue 15 as well as a yellow dye; Agromyces sp. NV-RED15A-BF and Cellulosimicrobium sp. NV-A4-BF are specialists for the whitener; and Aeromonas sp. NV-RES19-BTP is a general degrader of all compounds. Bacteria adapted to the white water were isolated and selected according to their degradation activities for particular organic substances. Most of the isolated bacteria are specialized, which lowers competition in the microbial community. Degraders of readily biodegradable compounds do not degrade recalcitrant polycyclic aromatic dyes, and vice versa. General degraders are rare.
Keywords: bioaugmentation, biodegradation of azo dyes, cometabolism, smart wastewater treatment technologies
1362 Identifying the Barriers to Institutionalizing a One Health Concept in Responding to Zoonotic Diseases in South Asia
Authors: Rojan Dahal
Abstract:
One Health refers to a collaborative effort between multiple disciplines - locally, nationally, and globally - to attain optimal health. Although there were unprecedented intersectoral alliances between the animal and human health sectors during the avian influenza outbreak, there are different views and perceptions concerning institutionalizing One Health in South Asia. There is likely a structural barrier between the relevant professionals working in different entities or ministries when it comes to collaborating on One Health actions regarding zoonotic diseases. Politicians and the public will likely need to invest large amounts of money, demonstrate political will, and understand how One Health works in order to overcome these barriers. One Health may be hard to invest in for South Asian countries, where its benefits are based primarily on models and projections and where numerous development and health issues need urgent attention. Another potential barrier to enabling the One Health concept in responding to zoonotic diseases is the failure to represent One Health in zoonotic disease control and prevention measures in national health policy, which is a critical component of institutionalizing the One Health concept. One Health cannot be institutionalized without acknowledging the linkages between the animal, human, and environmental sectors in dealing with zoonotic diseases. Efforts have been made in the past to prepare a preparedness plan for One Health implementation, but little has been done to establish a policy environment that institutionalizes One Health. It is often assumed that health policy refers specifically to medical care issues and health care services. When drafting, reviewing, and redrafting policy, it is important to engage a wide range of stakeholders.
One Health institutionalization may also be hindered by the interplay between One Health professionals and bureaucratic inertia in setting disease priorities, due to competing interests over limited budgets. There is a possibility that policymakers do not recognize the importance of veterinary professionals in preventing human diseases originating in animals. Compared to veterinary medicine, the human health sector has produced most of the investment and research output related to zoonotic diseases. The public health profession may consider itself superior to the veterinary profession. Zoonotic diseases might not be recognized as threats to human health, impeding integrated policies. The effort of One Health institutionalization has remained confined to donor agencies and multi-sectoral organizations. Strong political will and state capacity are needed to overcome the existing institutional, financial, and professional barriers to effective implementation. The structural challenges, policy challenges, and attitudes of professionals working in the multiple disciplines related to One Health need to be assessed. Limited research has been conducted to identify the reasons behind the barriers to institutionalizing the One Health concept in South Asia. Institutionalizing One Health in responding to zoonotic diseases breaks down silos and integrates the animal, human, and environmental sectors.
Keywords: one health, institutionalization, South Asia
1361 Removal of Problematic Organic Compounds from Water and Wastewater Using the Arvia™ Process
Authors: Akmez Nabeerasool, Michaelis Massaros, Nigel Brown, David Sanderson, David Parocki, Charlotte Thompson, Mike Lodge, Mikael Khan
Abstract:
The provision of clean and safe drinking water is of paramount importance and is a basic human need. Water scarcity, coupled with the tightening of regulations and the inability of current treatment technologies to deal with emerging contaminants and pharmaceuticals and personal care products, means that viable and cost-effective alternative treatment technologies are required in order to meet demand and regulations for clean water supplies. Logistically, the application of water treatment in rural areas presents unique challenges due to the decentralization of abstraction points arising from low population density, the resultant lack of infrastructure, and the need to treat water at the site of use. This makes it costly to centralize treatment facilities and hence provide potable water directly to the consumer. Furthermore, across the UK there are segments of the population that rely on a private water supply, which means that the owners or users of these supplies, which can serve from one household to hundreds, are responsible for their maintenance. The treatment of these private water supplies falls on the private owners, and it is imperative that a chemical-free technological solution be employed that can operate unattended and does not produce any waste. Arvia's patented advanced oxidation technology combines the advantages of adsorption and electrochemical regeneration within a single unit: the Organics Destruction Cell (ODC). The ODC uniquely uses a combination of adsorption and electrochemical regeneration to destroy organics. Key to this innovative process is an alternative approach to adsorption. The conventional approach is to use high-capacity adsorbents (e.g., activated carbons with high porosities and surface areas) that are excellent adsorbents but require complex and costly regeneration.
Arvia's technology uses a patent-protected adsorbent, Nyex™, a non-porous, highly conductive, graphite-based material that enables it to act both as the adsorbent and as a 3D electrode. Adsorbed organics are oxidised, and the surface of the Nyex™ is regenerated in situ for further adsorption without interruption or replacement. Treated water flows from the bottom of the cell, where it can either be re-used or safely discharged. Arvia™ Technology Ltd. has trialled the application of its tertiary water treatment technology in treating reservoir water abstracted near Glasgow, Scotland, with promising results. Several other pilot plants have also been successfully deployed at various locations in the UK, showing the suitability and effectiveness of the technology in removing recalcitrant organics (including pharmaceuticals, steroids and hormones), COD, and colour.
Keywords: Arvia™ process, adsorption, water treatment, electrochemical oxidation
1360 A Demonstration of How to Employ and Interpret Binary IRT Models Using the New IRT Procedure in SAS 9.4
Authors: Ryan A. Black, Stacey A. McCaffrey
Abstract:
Over the past few decades, great strides have been made towards improving the science of measuring psychological constructs. Item Response Theory (IRT) has been the foundation upon which statistical models have been derived to increase both precision and accuracy in psychological measurement. These models are now widely used to develop and refine tests intended to measure an individual's level of academic achievement, aptitude, and intelligence. Recently, the field of clinical psychology has adopted IRT models to measure psychopathological phenomena such as depression, anxiety, and addiction. Because advances in IRT measurement models are being made so rapidly across various fields, it has become quite challenging for psychologists and other behavioral scientists to keep abreast of the most recent developments, much less learn how to employ them and decide which models are most appropriate for their line of work. In the same vein, IRT measurement models vary greatly in complexity in several interrelated ways, including but not limited to the number of item-specific parameters estimated in a given model, the function linking the expected response to the predictor, response option formats, and dimensionality. As a result, inferior methods (i.e., Classical Test Theory methods) continue to be employed in efforts to measure psychological constructs, despite evidence showing that IRT methods yield more precise and accurate measurement. To increase the use of IRT methods, this study endeavors to provide a comprehensive overview of binary IRT models, that is, measurement models employed on test data consisting of binary response options (e.g., correct/incorrect, true/false, agree/disagree). Specifically, this study will cover models ranging from the most basic binary IRT model, the 1-parameter logistic (1-PL) model dating back over 50 years, up to the most recent and complex 4-parameter logistic (4-PL) model.
Binary IRT models will be defined mathematically, and the interpretation of each parameter will be provided. Next, all four binary IRT models will be employed on two sets of data: (1) simulated data from N = 500,000 subjects who responded to four dichotomous items, and (2) a pilot analysis of real-world data collected from a sample of approximately 770 subjects who responded to four self-report dichotomous items pertaining to emotional consequences of alcohol use. The real-world data were based on responses to items administered to subjects as part of a scale-development study (NIDA Grant No. R44 DA023322). The IRT analyses conducted on both the simulated data and the real-world pilot data will provide a clear demonstration of how to construct, evaluate, and compare binary IRT measurement models. All analyses will be performed using the new IRT procedure in SAS 9.4. SAS code to generate the simulated data and analyses will be available upon request to allow replication of results.
Keywords: instrument development, item response theory, latent trait theory, psychometrics
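The nesting of the four binary models the abstract describes can be sketched with the standard 4-PL item response function. The abstract's own analyses use SAS's IRT procedure; the Python below is only an illustrative sketch of the response function itself, and the parameter values are assumed for demonstration.

```python
import math

def irt_prob(theta, a=1.0, b=0.0, c=0.0, d=1.0):
    """Probability of a keyed (correct) response under the 4-PL model:
        P(theta) = c + (d - c) / (1 + exp(-a * (theta - b)))
    theta: latent trait; a: discrimination; b: difficulty;
    c: lower asymptote (guessing); d: upper asymptote (slipping).
    Fixing a, c=0, d=1 gives the 1-PL; freeing a gives the 2-PL;
    freeing c gives the 3-PL; freeing d gives the full 4-PL."""
    return c + (d - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the logistic term equals 0.5, so each asymptote's
# effect on the midpoint probability is easy to read off:
p1 = irt_prob(0.0, b=0.0)                       # 1-PL: 0.5
p2 = irt_prob(0.0, a=2.0, b=0.0)                # 2-PL: still 0.5 at theta = b
p3 = irt_prob(0.0, a=2.0, b=0.0, c=0.2)         # 3-PL: 0.2 + 0.8 * 0.5 = 0.6
p4 = irt_prob(0.0, a=2.0, b=0.0, c=0.2, d=0.9)  # 4-PL: 0.2 + 0.7 * 0.5 = 0.55
print(p1, p2, p3, p4)
```

The same nesting is what makes model comparison (e.g., via likelihood-ratio tests between the 1-PL and 2-PL fits) meaningful in the demonstration the abstract proposes.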
1359 Content Monetization as a Mark of Media Economy Quality
Authors: Bela Lebedeva
Abstract:
The characteristics of the Web as a channel of information dissemination - accessibility and openness, interactivity, and multimedia news - reach audiences ever more widely and quickly, positively affecting the perception of content but blurring the understanding of journalistic work. As a result, audiences and advertisers continue to migrate to the Internet. Moreover, online targeting makes it possible to monetize not only the audience (as is customary for traditional media) but also the content and traffic, and to do so more accurately. While users identify themselves with the qualitative characteristics of the new market, its actors are formed. A conflict of interests lies at the base of the economy of their relations, the problem of a traffic tax being one example. Meanwhile, content monetization also actualizes the fiscal interest of the state. The balance of supply and demand is often violated due to political risks, particularly under state capitalism, populism, and authoritarian methods of governing social institutions such as the media. A unique example of access to journalistic material limited by content monetization is the television channel Dozhd' (Rain) in the Russian web space. Its liberal-minded audience has a better possibility for discussion. However, the channel could have been much more successful under conditions of unlimited free speech. To avoid state pressure and censorship, its management decided to save at least its online performance, monetizing all of its content for the core audience. The study's methodology was primarily based on the analysis of journalistic content and on qualitative and quantitative analysis of the audience. Reconstructing the main events and relationships of actors on the market over the last six years, the researcher reached several conclusions. First, under the condition of content monetization, the capitalization of content quality will always strive toward the quality characteristics of the user, thereby identifying him.
Vice versa, the user's demand generates high-quality journalism. The second conclusion follows from the first. The growth of technology, information noise, new political challenges, economic volatility, and the change of cultural paradigm - all these factors form the content-paying model for the individual user. This model defines the user as a beneficiary of specific knowledge and indicates a constant balance of supply and demand, other conditions being equal. As a result, a new economic quality of information is created. This feature is an indicator of the market as a self-regulating system. Monetized quality information is less popular than that of public broadcasting services, but its audience is able to make decisions. These very users sustain the niche sectors that have more potential for technological development, including ways of monetizing content. The third point of the study allows it to be developed in the discourse of media space liberalization. This cultural phenomenon may open opportunities for the development of the architecture of social and economic relations both locally and regionally.
Keywords: content monetization, state capitalism, media liberalization, media economy, information quality
1358 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different responses from the public sector for decreasing traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in transport science, but until recently there were not sufficient data for evaluating road traffic flow patterns on the scale of the entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular downtown zone for paying, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area contains three bordering districts of Budapest linked by one main road. The first district (5th) is the original downtown, which is affected by the city's congestion charge plans. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data for the research were collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely fixed coordinate pairs.
From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas that lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time, place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
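The free-flow versus congested comparison the study relies on can be sketched as follows. The request shape in the comment follows Google's Distance Matrix API (`origins`, `destinations`, `departure_time`, and the `duration` / `duration_in_traffic` response fields), but the response dictionary below is a hand-made illustrative sample, not the paper's data, and a real call would require an API key.

```python
# A real request would look like:
#   https://maps.googleapis.com/maps/api/distancematrix/json?
#     origins=47.51,19.06&destinations=47.56,19.08&departure_time=now&key=YOUR_KEY
# Here we parse a sample response of the same shape instead of calling the network.

SAMPLE_RESPONSE = {
    "rows": [{
        "elements": [{
            "status": "OK",
            "duration": {"value": 540},             # baseline travel time, seconds
            "duration_in_traffic": {"value": 810},  # congested travel time, seconds
        }]
    }]
}

def congestion_ratio(element):
    """Congested travel time divided by baseline travel time.
    A ratio well above 1.0 marks a candidate congestion hot spot."""
    base = element["duration"]["value"]
    congested = element.get("duration_in_traffic", element["duration"])["value"]
    return congested / base

element = SAMPLE_RESPONSE["rows"][0]["elements"][0]
ratio = congestion_ratio(element)
print(round(ratio, 2))  # → 1.5, i.e. 50% extra travel time in traffic
```

Polling such coordinate pairs throughout the day, as the study describes, yields a per-road time series of this ratio from which peak times and hot spots can be mapped.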