Search results for: undocumented critical theory

118 Optimizing Solids Control and Cuttings Dewatering for Water-Powered Percussive Drilling in Mineral Exploration

Authors: S. J. Addinell, A. F. Grabsch, P. D. Fawell, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled tubing based greenfields mineral exploration drilling system utilising down-hole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. This system has shown superior rates of penetration in water-rich, hard rock formations at depths exceeding 500 metres. With fluid flow rates of up to 120 litres per minute at 200 bar operating pressure to energise the bottom hole tooling, excessive quantities of high quality drilling fluid (water) would be required for a prolonged drilling campaign. As a result, drilling fluid recovery and recycling has been identified as a necessary option to minimise costs and logistical effort. While the majority of the cuttings report as coarse particles, a significant fines fraction will typically also be present. To maximise tool life longevity, the percussive bottom hole assembly requires high quality fluid with minimal solids loading and any recycled fluid needs to have a solids cut point below 40 microns and a concentration less than 400 ppm before it can be used to reenergise the system. This paper presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process shows a strong power law relationship for particle size distributions. This data is critical in optimising solids control strategies and cuttings dewatering techniques. Optimisation of deployable solids control equipment is discussed and how the required centrate clarity was achieved in the presence of pyrite-rich metasediment cuttings. Key results were the successful pre-aggregation of fines through the selection and use of high molecular weight anionic polyacrylamide flocculants and the techniques developed for optimal dosing prior to scroll decanter centrifugation, thus keeping sub 40 micron solids loading within prescribed limits. Experiments on maximising fines capture in the presence of thixotropic drilling fluid additives (e.g. Xanthan gum and other biopolymers) are also discussed. As no core is produced during the drilling process, it is intended that the particle laden returned drilling fluid is used for top-of-hole geochemical and mineralogical assessment. A discussion is therefore presented on the biasing and latency of cuttings representivity by dewatering techniques, as well as the resulting detrimental effects on depth fidelity and accuracy. Data pertaining to the sample biasing with respect to geochemical signatures due to particle size distributions is presented and shows that, depending on the solids control and dewatering techniques used, it can have unwanted influence on top-of-hole analysis. Strategies are proposed to overcome these effects, improving sample quality. Successful solids control and cuttings dewatering for water-powered percussive drilling is presented, contributing towards the successful advancement of coiled tubing based greenfields mineral exploration.
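
As an illustration of the power-law particle size relationship noted above, the following minimal Python sketch fits a Gates-Gaudin-Schuhmann-type power law to cumulative sieve data and estimates the fraction finer than the 40 micron cut point required for fluid reuse. The sieve values are hypothetical, not measurements from the DET CRC trials.

```python
import numpy as np

# Hypothetical sieve data: particle size (microns) and cumulative % passing.
# Illustrative values only, not measurements from the prototype drilling system.
size_um = np.array([38, 75, 150, 300, 600, 1180, 2360])
pct_passing = np.array([4.1, 8.9, 18.0, 35.5, 61.2, 82.4, 97.0])

# Gates-Gaudin-Schuhmann form: P(d) = 100 * (d / d_max)**m.
# Taking logs gives a straight line, so the exponent m is the slope of
# log10(P) versus log10(d).
log_d = np.log10(size_um)
log_p = np.log10(pct_passing)
m, intercept = np.polyfit(log_d, log_p, 1)

# Size modulus d_max (size at 100% passing) recovered from the intercept.
d_max = 10 ** ((np.log10(100) - intercept) / m)

# Fraction finer than the 40 micron cut point relevant to fluid recycling.
fines_frac = 100 * (40.0 / d_max) ** m
print(f"exponent m = {m:.2f}, size modulus = {d_max:.0f} um")
print(f"predicted % passing 40 um = {fines_frac:.1f}%")
```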

Keywords: cuttings, dewatering, flocculation, percussive drilling, solids control

Procedia PDF Downloads 230
117 The BETA Module in Action: An Empirical Study on Enhancing Entrepreneurial Skills through Kearney's and Bloom's Guiding Principles

Authors: Yen Yen Tan, Lynn Lam, Cynthia Lam, Angela Koh, Edwin Seng

Abstract:

Entrepreneurial education plays a crucial role in nurturing future innovators and change-makers. Over time, significant progress has been made in refining instructional approaches to develop the necessary skills among learners effectively. Two highly valuable frameworks, Kearney's "4 Principles of Entrepreneurial Pedagogy" and Bloom's "Three Domains of Learning," serve as guiding principles in entrepreneurial education. Kearney's principles align with experiential and student-centric learning, which are crucial for cultivating an entrepreneurial mindset. The potential synergies between these frameworks hold great promise for enhancing entrepreneurial acumen among students. However, despite this potential, their integration remains largely unexplored. This study aims to bridge this gap by building upon the Business Essentials through Action (BETA) module and investigating its contributions to nurturing the entrepreneurial mindset. This study employs a quasi-experimental mixed-methods approach, combining quantitative and qualitative elements to ensure comprehensive and insightful data. A cohort of 235 students participated, with 118 enrolled in the BETA module and 117 in a traditional curriculum. Their Personal Entrepreneurial Competencies (PECs) were assessed before admission (pre-Y1) and one year into the course (post-Y1) using a comprehensive 55-item PEC questionnaire, enabling measurement of critical traits such as opportunity-seeking, persistence, and risk-taking. Rigorous computations of individual entrepreneurial competencies and overall PEC scores were performed, including a correction factor to mitigate potential self-assessment bias. The orchestration of Kearney's principles and Bloom's domains within the BETA module necessitates a granular examination. Here, qualitative revelations surface, courtesy of structured interviews aligned with contemporary research methodologies. These interviews act as a portal, ushering us into the transformative journey undertaken by students. Meanwhile, the study pivots to explore the BETA module's influence on students' entrepreneurial competencies from the vantage point of faculty members. A symphony of insights emanates from intimate focus group discussions featuring six dedicated lecturers, who share their perceptions, experiences, and reflective narratives, illuminating the profound impact of pedagogical practices embedded within the BETA module. Preliminary findings from ongoing data analysis indicate promising results, showcasing a substantial improvement in entrepreneurial skills among students participating in the BETA module. This study promises not only to elevate students' entrepreneurial competencies but also to illuminate the broader canvas of applicability for Kearney's principles and Bloom's domains. The dynamic interplay of quantitative analyses, proffering precise competency metrics, and qualitative revelations, delving into the nuanced narratives of transformative journeys, engenders a holistic understanding of this educational endeavour. Through a rigorous quasi-experimental mixed-methods approach, this research aims to establish the BETA module's effectiveness in fostering entrepreneurial acumen among students at Singapore Polytechnic, thereby contributing valuable insights to the broader discourse on educational methodologies.
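
As a rough illustration of the PEC scoring described above, the sketch below computes competency scores from item responses and applies a correction factor for self-assessment bias. The item groupings and correction thresholds are illustrative assumptions, not the actual scoring key of the 55-item instrument used in the BETA study.

```python
# Minimal sketch of scoring a PEC-style self-assessment: 55 items rated 1-5,
# ten competencies of five items each, plus a small set of "correction" items
# that flag an overly favourable self-presentation. Groupings and thresholds
# below are placeholders, not the instrument's actual scoring key.

def correction_factor(correction_total):
    # Higher totals on the correction items lead to a larger deduction from
    # every competency score (thresholds are illustrative).
    if correction_total >= 24:
        return 7
    if correction_total >= 22:
        return 5
    if correction_total >= 20:
        return 3
    return 0

def pec_scores(responses, item_map, correction_items):
    """responses: dict item_id -> rating (1-5); item_map: competency -> item ids."""
    correction = correction_factor(sum(responses[i] for i in correction_items))
    corrected = {comp: sum(responses[i] for i in items) - correction
                 for comp, items in item_map.items()}
    corrected["overall_pec"] = sum(corrected.values())
    return corrected

# Example use: a hypothetical 'risk_taking' competency could be mapped to
# items 3, 13, 23, 33 and 43, and scored pre-Y1 and post-Y1 for comparison.
```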

Keywords: entrepreneurial education, experiential learning, pedagogical frameworks, innovative competencies

Procedia PDF Downloads 42
116 Targeting Basic Leucine Zipper Transcription Factor ATF-Like Mediated Immune Cells Regulation to Reduce Crohn’s Disease Fistula Incidence

Authors: Mohammadjavad Sotoudeheian, Soroush Nematollahi

Abstract:

Crohn’s disease (CD) is a chronic inflammation of gastrointestinal segments involving immune dysregulation in genetically susceptible individuals, in response to environmental triggers and to the interaction between the microbiome and the immune system. Uncontrolled inflammation leads to long-term complications, including fibrotic strictures and enteric fistulae. Increased production of Th1 and Th17-cell cytokines and defects in T-regulatory cells have been associated with CD. Th17-cells are essential for protection against extracellular pathogens, but their atypical activity can cause autoimmunity. Intrinsic defects in the control of programmed cell death in the mucosal T-cell compartment are strongly implicated in the pathogenesis of CD. The apoptosis defect in mucosal T-cells in CD has been attributed to an imbalance between Bcl-2 and Bax. The immune system encounters foreign antigens through microbial colonization of mucosal surfaces or infections. In addition, FOSL downregulates expression of IL-26, a cytokine that marks inflammatory Th17-populations in patients suffering from CD. Furthermore, the expression of IL-23 is associated with the basic leucine zipper transcription factor ATF-like (Batf). Batf deficiency has demonstrated the crucial role of Batf in colitis development. Batf and IL-23 mediate their effects by inducing IL-6 production. The strong association of IL-23R, Stat3, and Stat4 with IBD susceptibility points to a critical involvement of T-cells. IL-23R levels in transfer fistula were dependent on the AP-1 transcription factor JunB, which additionally controlled levels of RORγt by facilitating DNA binding of Batf. T lymphocytes lacking JunB failed to induce IL-23- and Th17-mediated experimental colitis, highlighting the relevance of JunB for the IL-23/Th17 pathway. The absence of T-bet causes unrestrained Th17-cell differentiation. T-cells are central to immune-mediated colon fistula formation. Th17-cells in particular are highly prevalent in inflamed IBD tissues, and targeting RORγt is effective in preventing colitis. Intraepithelial lymphocytes (IEL) contain unique T-cell subsets, including cells expressing RORγt. Increased numbers of activated Th17-cells and decreased numbers of T-regulatory cells have been observed in inflamed intestinal tissues. T-cells differentiate in response to many cytokines, including IL-1β, IL-6, IL-23, and TGF-β, into Th17-cells, a process which is critically dependent on Batf. IL-23 promotes Th17-cells in the colon. Batf governs the generation of IL-23-induced IL-23R+ Th17-cells. Batf is necessary for TGF-β/IL-6-induced Th17-polarization. Batf-expressing T-cells are at the core of T-cell-mediated colitis. The human-specific parts of three AP-1 transcription factors, FOSL1, FOSL2, and BATF, are essential during the early stages of Th17 differentiation. BATF supports the Th17 lineage. FOSL1, FOSL2, and BATF occupy regulatory loci of genes in the Th17 lineage cascade. The AP-1 transcription factor Batf has been identified as a regulator of intestinal inflammation and appears to act on pathways within lymphocytes that could, in principle, control the expression of several genes. It shows central regulatory properties over Th17-cell development and is intensely upregulated within IBD-affected tissues. Here, we argue that targeting Batf in IBD appears to be a therapeutic approach that could reduce colitogenic T-cell activity during fistula formation while also addressing inflammation in the gut epithelial cells.

Keywords: immune system, Crohn’s Disease, BATF, T helper cells, Bcl, interleukin, FOSL

Procedia PDF Downloads 126
115 Development of Peptide Inhibitors against Dengue Virus Infection by in Silico Design

Authors: Aussara Panya, Nunghathai Sawasdee, Mutita Junking, Chatchawan Srisawat, Kiattawee Choowongkomon, Pa-Thai Yenchitsomanus

Abstract:

Dengue virus (DENV) infection is a global public health problem, with approximately 100 million infected cases a year. Presently, there is no approved vaccine or effective drug available; therefore, the development of anti-DENV drugs is urgently needed. Clinical reports have previously revealed a positive association between disease severity and viral titer, suggesting that anti-DENV drug therapy could ameliorate disease severity. Although several anti-DENV agents have shown inhibitory activities against DENV infection, to date none of them has reached clinical use in patients. The surface envelope (E) protein of DENV is critical for the viral entry step, which includes attachment and membrane fusion; thus, blocking the envelope protein is an attractive strategy for anti-DENV drug development. In the search for safe anti-DENV agents, this study aimed to identify novel peptide inhibitors of DENV infection that target the E protein, using a structure-based in silico design. Two strategies were used: first, identifying peptide inhibitors that interfere with the membrane fusion process by targeting the hydrophobic pocket on the E protein; and second, destabilizing the virion structure by disrupting the interaction between the envelope and membrane proteins. In the first strategy, molecular docking was used to search for peptide inhibitors that bind specifically to the hydrophobic pocket. In the second strategy, a peptide inhibitor was designed to mimic the ectodomain portion of the membrane protein and thereby disrupt the protein-protein interaction. The designed peptides were tested for their effects on cell viability, to measure peptide toxicity to the cells, and for their ability to inhibit DENV infection in Vero cells. Furthermore, their antiviral effects on viral replication, intracellular protein levels and viral production were assessed by qPCR, cell-based flavivirus immunodetection and immunofluorescence assay. None of the tested peptides showed a significant effect on cell viability. The small peptide inhibitor obtained from molecular docking, Glu-Phe (EF), effectively inhibited DENV infection in a cell culture system. Its strongest effect was observed for DENV2, with a half-maximal inhibitory concentration (IC50) of 96 μM, whereas it only partially inhibited the other serotypes. Treatment of infected cells with 200 µM EF also significantly reduced the viral genome and protein to 83.47% and 84.15%, respectively, corresponding to the reduction in the number of infected cells. An additional approach used a peptide mimicking the membrane (M) protein, namely MLH40. Treatment with MLH40 reduced foci formation in the four individual DENV serotypes (DENV1-4), with IC50 values of 24-31 μM. Further characterization suggested that MLH40 specifically blocked viral attachment to the host membrane, and treatment with 100 μM could diminish 80% of viral attachment. In summary, targeting the hydrophobic pocket and the M-binding site on the E protein with peptide inhibitors could inhibit DENV infection. The results provide proof of concept for the development of therapeutic peptide inhibitors against DENV infection through structure-based design targeting conserved viral proteins.
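
The IC50 values quoted above are typically estimated by fitting a dose-response curve to infection-inhibition data. A minimal Python sketch of such a fit is shown below; the concentration-response values are hypothetical and are not the data reported for EF or MLH40.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical dose-response data: peptide concentration (uM) versus
# % infection relative to the untreated control. Illustrative only.
conc = np.array([12.5, 25.0, 50.0, 100.0, 200.0, 400.0])
infection = np.array([95.0, 82.0, 64.0, 48.0, 27.0, 12.0])

def hill(c, ic50, h):
    """Two-parameter dose-response: 100% infection at c = 0, falling to 0%
    at saturation; c = ic50 gives 50% infection by construction."""
    return 100.0 / (1.0 + (c / ic50) ** h)

(ic50, h), _ = curve_fit(hill, conc, infection, p0=[100.0, 1.0])
print(f"estimated IC50 = {ic50:.0f} uM, Hill slope = {h:.2f}")
```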

Keywords: dengue virus, dengue virus infection, drug design, peptide inhibitor

Procedia PDF Downloads 337
114 Exploring Type V Hydrogen Storage Tanks: Shape Analysis and Material Evaluation for Enhanced Safety and Efficiency Focusing on Drop Test Performance

Authors: Mariam Jaber, Abdullah Yahya, Mohammad Alkhedher

Abstract:

The shift toward sustainable energy solutions increasingly focuses on hydrogen, recognized for its potential as a clean energy carrier. Despite its benefits, hydrogen storage poses significant challenges, primarily due to its low energy density and high volatility. Among the various solutions, pressure vessels designed for hydrogen storage range from Type I to Type V, each tailored for specific needs and benefits. Notably, Type V vessels, with their all-composite, liner-less design, significantly reduce weight and costs while optimizing space and decreasing maintenance demands. This study focuses on optimizing Type V hydrogen storage tanks by examining how different shapes affect performance in drop tests—a crucial aspect of achieving ISO 15869 certification. This certification ensures that if a tank is dropped, it will fail in a controlled manner, ideally by leaking before bursting. While cylindrical vessels are predominant in mobile applications due to their manufacturability and efficient use of space, spherical vessels offer superior stress distribution and require significantly less material thickness for the same pressure tolerance, making them advantageous for high-pressure scenarios. However, spherical tanks are less efficient in terms of packing and more complex to manufacture. Additionally, this study introduces toroidal vessels to assess their performance relative to the more traditional shapes, noting that the toroidal shape offers a more space-efficient option. The research evaluates how different shapes—spherical, cylindrical, and toroidal—affect drop test outcomes when combined with various composite materials and layup configurations. The ultimate goal is to identify optimal vessel geometries that enhance the safety and efficiency of hydrogen storage systems. For our materials, we selected high-performance composites such as Carbon T-700/Epoxy, Kevlar/Epoxy, E-Glass Fiber/Epoxy, and Basalt/Epoxy, configured in various orientations like [0,90]s, [45,-45]s, and [54,-54]. Our tests involved dropping tanks from different angles—horizontal, vertical, and 45 degrees—with an internal pressure of 35 MPa to replicate real-world scenarios as closely as possible. We used finite element analysis and first-order shear deformation theory, conducting tests with the Abaqus Explicit Dynamics software, which is ideal for handling the quick, intense stresses of an impact. The results from these simulations will provide valuable insights into how different designs and materials can enhance the durability and safety of hydrogen storage tanks. Our findings aim to guide future designs, making them more effective at withstanding impacts and safer overall. Ultimately, this research will contribute to the broader field of lightweight composite materials and polymers, advancing more innovative and practical approaches to hydrogen storage. By refining how we design these tanks, we are moving toward more reliable and economically feasible hydrogen storage solutions, further emphasizing hydrogen's role in the landscape of sustainable energy carriers.
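
As a back-of-the-envelope illustration of the shape argument above (a thin spherical shell carries roughly half the membrane stress of a cylinder of the same radius at the same pressure, so it needs about half the wall thickness), the following Python sketch compares required thicknesses. The 35 MPa internal pressure comes from the abstract; the radius and allowable laminate stress are assumed values.

```python
# Thin-wall membrane stresses: hoop stress in a cylinder is sigma = p*r/t,
# membrane stress in a sphere is sigma = p*r/(2*t). For the same allowable
# stress the sphere therefore needs about half the wall thickness.
p = 35.0e6           # internal pressure used in the drop-test study, Pa (35 MPa)
r = 0.175            # assumed vessel radius, m (illustrative)
sigma_allow = 800e6  # assumed allowable laminate stress, Pa (illustrative)

t_cyl = p * r / sigma_allow          # required thickness, cylinder (hoop-governed)
t_sph = p * r / (2.0 * sigma_allow)  # required thickness, sphere

print(f"cylinder wall: {t_cyl * 1e3:.2f} mm, sphere wall: {t_sph * 1e3:.2f} mm")
```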

Keywords: hydrogen storage, drop test, composite materials, type V tanks, finite element analysis

Procedia PDF Downloads 24
113 Rebuilding Beyond Bricks: The Environmental Psychological Foundations of Community Healing After the Lytton Creek Fire

Authors: Tugba Altin

Abstract:

In a time characterized by escalating climate change impacts, communities globally face extreme events with deep-reaching tangible and intangible consequences. At the intersection of these phenomena lies the profound impact on the cultural and emotional connections that individuals forge with their environments. This study casts a spotlight on the Lytton Creek Fire of 2021, showcasing it as an exemplar of both the visible destruction brought by such events and the more covert yet deeply impactful disturbances to place attachment (PA). Defined as the emotional and cognitive bond individuals form with their surroundings, PA is critical in comprehending how such catastrophic events reshape cultural identity and the bond with the land. Against the stark backdrop of the Lytton Creek Fire's devastation, the research seeks to unpack the multilayered dynamics of PA amidst the tangible wreckage and the intangible repercussions such as emotional distress and disrupted cultural landscapes. Delving deeper, it examines how affected populations renegotiate their affiliations with these drastically altered environments, grappling with both the tangible loss of their homes and the intangible challenges to solace, identity, and community cohesion. This exploration is instrumental in the broader climate change narrative, as it offers crucial insights into how these personal-place relationships can influence and shape climate adaptation and recovery strategies. Departing from traditional data collection methodologies, this study adopts an interpretive phenomenological approach enriched by hermeneutic insights and places the experiences of the Lytton community and its co-researchers at its core. Instead of conventional interviews, innovative methods like walking audio sessions and photo elicitation are employed. These techniques allow participants to immerse themselves back into the environment, reviving and voicing their memories and emotions in real-time. Walking audio captures reflections on spatial narratives after the trauma, whereas photo voices encapsulate the intangible emotions, presenting a visual representation of place-based experiences. Key findings emphasize the indispensability of addressing both the tangible and intangible traumas in community recovery efforts post-disaster. The profound changes to the cultural landscape and the subsequent shifts in PA underscore the need for holistic, culturally attuned, and emotionally insightful adaptation strategies. These strategies, rooted in the lived experiences and testimonies of the affected individuals, promise more resonant and effective recovery efforts. The research further contributes to climate change discourse, highlighting the intertwined pathways of tangible reconstruction and the essentiality of emotional and cultural rejuvenation. Furthermore, the use of participatory methodologies in this inquiry challenges traditional research paradigms, pointing to potential evolutionary shifts in qualitative research norms. Ultimately, this study underscores the need for a more integrative approach in addressing the aftermath of environmental disasters, ensuring that both physical and emotional rebuilding are given equal emphasis.

Keywords: place attachment, community recovery, disaster response, sensory responses, intangible traumas, visual methodologies

Procedia PDF Downloads 39
112 Multi-Criteria Assessment of Biogas Feedstock

Authors: Rawan Hakawati, Beatrice Smyth, David Rooney, Geoffrey McCullough

Abstract:

Targets have been set in the EU to increase the share of renewable energy consumption to 20% by 2020, but developments have not occurred evenly across the member states. Northern Ireland is almost 90% dependent on imported fossil fuels. With such high energy dependency, Northern Ireland is particularly susceptible to security-of-supply issues. Fossil fuel use is also linked to greenhouse gas emissions, which the EU plans to reduce by 20% by 2020. The use of indigenously produced biomass could reduce both greenhouse gas emissions and external energy dependence. With a wide range of both crop and waste feedstock potentially available in Northern Ireland, anaerobic digestion has been put forward as a possible solution for renewable energy production, waste management, and greenhouse gas reduction. Not all feedstock, however, is the same, and an understanding of feedstock suitability is important for both plant operators and policy makers. The aim of this paper is to investigate biomass suitability for anaerobic digestion in Northern Ireland. It is also important that decisions are based on solid scientific evidence. For this reason, the methodology used is multi-criteria decision matrix analysis, which takes multiple criteria into account simultaneously and ranks alternatives accordingly. The model uses the weighted sum method, with weights derived from the entropy method, which measures uncertainty using probability theory; the TOPSIS method is then used to carry out the mathematical analysis and provide the final scores. Feedstock currently available in Northern Ireland was classified into two categories: wastes (manure, sewage sludge and food waste) and energy crops, specifically grass silage. To select the most suitable feedstock, methane yield, feedstock availability, feedstock production cost, biogas production, calorific value, produced kilowatt-hours, dry matter content, and carbon-to-nitrogen ratio were assessed. The highest weight (0.249) corresponded to production cost, reflecting the variation from a £41 gate fee to a £22/tonne cost. Based on the calculated weights, grass silage was found to be the most suitable feedstock. A sensitivity analysis was then conducted to investigate the impact of the weights. This analysis used the Pugh matrix method, which relies on the analytical hierarchy process and pairwise comparisons to determine a weighting for each criterion. The results showed that the highest weight (0.193) then corresponded to biogas production, indicating that grass silage and manure are the most suitable feedstocks. Introducing co-digestion of two or more substrates can boost the biogas yield through synergistic effects that favour positive biological interactions between feedstocks. A further benefit of co-digesting manure is that the anaerobic digestion process also acts as a waste management strategy. From the research, it was concluded that energy from agricultural biomass is highly advantageous in Northern Ireland because it would increase the country's production of renewable energy, manage waste production, and limit the production of greenhouse gases (the current contribution from the agriculture sector is 26%). Decision-making methods based on scientific evidence aid policy makers in weighing multiple criteria in a logical, mathematical manner in order to reach a resolution.
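
A minimal Python sketch of the entropy-weighting and TOPSIS steps described above is given below, using a hypothetical feedstock-by-criterion decision matrix; the numbers are placeholders, not the study's data.

```python
import numpy as np

# Rows: feedstocks (grass silage, cattle manure, food waste, sewage sludge).
# Columns: methane yield, biogas production, production cost, dry matter.
# Values are illustrative placeholders, not the study's measurements.
X = np.array([
    [350.0, 160.0, 22.0, 28.0],   # grass silage
    [200.0,  55.0,  5.0,  8.0],   # cattle manure
    [450.0, 120.0, 35.0, 25.0],   # food waste
    [150.0,  45.0, 10.0,  5.0],   # sewage sludge
])
benefit = np.array([True, True, False, True])  # cost is "smaller is better"

# 1. Entropy weighting: criteria whose values vary more across alternatives
#    carry more information and therefore receive a larger weight.
P = X / X.sum(axis=0)
E = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])
w = (1 - E) / (1 - E).sum()

# 2. TOPSIS: vector-normalise, weight, then rank by relative closeness to the
#    ideal (best value per criterion) and anti-ideal alternatives.
V = w * X / np.sqrt((X ** 2).sum(axis=0))
ideal      = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus  = np.sqrt(((V - ideal) ** 2).sum(axis=1))
d_minus = np.sqrt(((V - anti_ideal) ** 2).sum(axis=1))
closeness = d_minus / (d_plus + d_minus)

print("entropy weights :", np.round(w, 3))
print("TOPSIS closeness:", np.round(closeness, 3))  # higher = more suitable
```

Because the final ranking depends on the weights, re-running the same TOPSIS step with weights obtained by a different scheme (as the Pugh matrix/AHP sensitivity analysis does) is a direct way to test the robustness of the ranking.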

Keywords: anaerobic digestion, biomass as feedstock, decision matrix, renewable energy

Procedia PDF Downloads 434
111 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g. animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model, using machine learning algorithms, to aid in forecasting future attacks; the model is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable in which a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire season, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules based on entire seasons.
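
A minimal sketch of the general workflow described above (impute missing features, train a stochastic gradient boosting classifier on four years of data, evaluate AUC on a held-out year) is given below. The file name, column names, and the use of simple mean imputation as a stand-in for the multiple-imputation methods named in the abstract are assumptions for illustration only.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.impute import SimpleImputer
from sklearn.metrics import roc_auc_score

# Hypothetical patrol-grid data; the file and column names are placeholders,
# not the actual Ugandan ranger dataset used in the study.
df = pd.read_csv("poaching_grid.csv")
features = ["animal_density", "slope", "dist_to_road", "patrol_effort"]
train = df[df.year < 2016]      # four years used for training
test = df[df.year == 2016]      # one held-out year for testing

# Mean imputation as a simple stand-in for predictive mean matching or
# random forest multiple imputation.
imputer = SimpleImputer(strategy="mean")
X_train = imputer.fit_transform(train[features])
X_test = imputer.transform(test[features])

# subsample < 1.0 is what makes the gradient boosting *stochastic*.
model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05,
                                   max_depth=3, subsample=0.7, random_state=0)
model.fit(X_train, train["poaching_observed"])

auc = roc_auc_score(test["poaching_observed"],
                    model.predict_proba(X_test)[:, 1])
print(f"held-out AUC: {auc:.3f}")
```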

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 270
110 National Accreditation Board for Hospitals and Healthcare Reaccreditation, the Challenges and Advantages: A Qualitative Case Study

Authors: Narottam Puri, Gurvinder Kaur

Abstract:

Background: The National Accreditation Board for Hospitals & Healthcare Providers (NABH) is India’s apex standard-setting accreditation body in health care, which evaluates and accredits healthcare organizations. NABH requires accredited organizations to become reaccredited every three years. It is often thought that once the initial accreditation is complete, the foundation is set and reaccreditation is a much simpler process. Fortis Hospital, Shalimar Bagh, a part of the Fortis Healthcare group, is a 262-bed, multi-specialty tertiary care hospital. The hospital was successfully accredited in the year 2012. On completion of its first cycle, the hospital underwent a reaccreditation assessment in the year 2015. This paper aims to gain a better understanding of the challenges that accredited hospitals face when preparing for a renewal of their accreditation. Methods: The study was conducted using a cross-sectional mixed-methods approach; semi-structured interviews were conducted with the senior leadership team and staff members, including doctors and nurses. Documents collated by the QA team while preparing for the re-assessment were reviewed to understand the challenges, including data on quality indicators (the method of collection, analysis, trending, and continual incremental improvements made over time), minutes of meetings, amendments made to existing policies, and new policies drafted. Results: The senior leadership had concerns about the cost of accreditation and its impact on the quality of health care services, considering the staff effort and time it consumed. The management was, however, in favor of continuing with the accreditation, since it offered competitive advantage and strengthened community confidence, besides better pay rates from the payors. The clinicians regarded it as an increased non-clinical workload. Doctors felt accountable within a professional framework, to themselves, the patient and family, their peers and their profession, but not to accreditation bodies, and raised concerns about how the quality indicators were measured. The departmental leaders had a positive perception of accreditation. They agreed that it ensured high standards of care and improved the management of their functional areas. However, they were reluctant to spare people for QA activities due to staffing issues. With staff turnover, a lot of work was lost as sticky knowledge and had to be redone. Listing the continual quality improvement initiatives over the previous three years was a challenge in itself. Conclusion: The success of any quality assurance reaccreditation program depends almost entirely on the commitment and interest of the administrators, nurses, paramedical staff, and clinicians. The leader of the quality movement is critical in propelling and building momentum. Leaders need to recognize skepticism and resistance and consider ways in which staff can become positively engaged. Involvement of all the functional owners is the starting point for building ownership of, and accountability for, standards compliance. Creativity plays a very valuable role: communication by mail series, WhatsApp groups, quizzes, events, and any and every other form helps. Leaders must be able to generate interest and commitment without burdening clinical and administrative staff with an activity they neither understand nor believe in.

Keywords: NABH, reaccreditation, quality assurance, quality indicators

Procedia PDF Downloads 207
109 Examining Three Psychosocial Factors of Tax Compliance in Self-Employed Individuals using the Mindspace Framework - Evidence from Australia and Pakistan

Authors: Amna Tariq Shah

Abstract:

Amid the pandemic, the contemporary landscape has experienced accelerated growth in small business activities and an expanding digital marketplace, further exacerbating the issue of non-compliance among self-employed individuals through aggressive tax planning and evasion. This research seeks to address these challenges by developing strategic tax policies that promote voluntary compliance and improve taxpayer facilitation. The study employs the innovative MINDSPACE framework to examine three psychosocial factors—tax communication, tax literacy, and shaming—to optimize policy responses, address administrative shortcomings, and ensure adequate revenue collection for public goods and services. Preliminary findings suggest that incomprehensible communication from tax authorities drives individuals to seek alternative, potentially biased sources of tax information, thereby exacerbating non-compliance. Furthermore, the study reveals low tax literacy among Australian and Pakistani respondents, with many struggling to navigate complex tax processes and comprehend tax laws. Consequently, policy recommendations include simplifying tax return filing and enhancing pre-populated tax returns. In terms of shaming, the research indicates that Australians, being an individualistic society, may not respond well to shaming techniques due to privacy concerns. In contrast, Pakistanis, as a collectivistic society, may be more receptive to naming and shaming approaches. The study employs a mixed-method approach, utilizing interviews and surveys to analyze the issue in both jurisdictions. The use of mixed methods allows for a more comprehensive understanding of tax compliance behavior, combining the depth of qualitative insights with the generalizability of quantitative data, ultimately leading to more robust and well-informed policy recommendations. By examining evidence from opposite jurisdictions, namely a developed country (Australia) and a developing country (Pakistan), the study's applicability is enhanced, providing perspectives from two disparate contexts that offer insights from opposite ends of the economic, cultural, and social spectra. The non-comparative case study methodology offers valuable insights into human behavior, which can be applied to other jurisdictions as well. The application of the MINDSPACE framework in this research is particularly significant, as it introduces a novel approach to tax compliance behavior analysis. By integrating insights from behavioral economics, the framework enables a comprehensive understanding of the psychological and social factors influencing taxpayer decision-making, facilitating the development of targeted and effective policy interventions. This research carries substantial importance as it addresses critical challenges in tax compliance and administration, with far-reaching implications for revenue collection and the provision of public goods and services. By investigating the psychosocial factors that influence taxpayer behavior and utilizing the MINDSPACE framework, the study contributes invaluable insights to the field of tax policy. These insights can inform policymakers and tax administrators in developing more effective tax policies that enhance taxpayer facilitation, address administrative obstacles, promote a more equitable and efficient tax system, and foster voluntary compliance, ultimately strengthening the financial foundation of governments and communities.

Keywords: individual tax compliance behavior, psychosocial factors, tax non-compliance, tax policy

Procedia PDF Downloads 61
108 Risk Factors Associated with Low Back Pain among Active Adults: Cross-Sectional Study among Workers in a Tunisian Public Hospital

Authors: Lamia Bouzgarrou, Irtyah Merchaoui, Amira Omrane, Salma Kammoun, Amine Daafa, Neila Chaari

Abstract:

Background: Currently, low back pain (LBP) is one of the most prevalent public health problems, causing severe morbidity in a large portion of the adult population. It is also associated with heavy direct and indirect costs, in particular those related to absenteeism and early retirement. Health care workers are one of the occupational groups most affected by LBP, especially because of biomechanical and psycho-organizational risk factors. The current study aims to investigate risk factors associated with chronic low back pain among Tunisian caregivers in university hospitals. Methods: Cross-sectional study conducted over a period of 14 months, with a representative sample of caregivers, matched according to age, sex and work department, in two university hospitals in Tunisia. Data collection included items related to socio-professional characteristics, the evaluation of the work ability index (WAI), occupational stress (Karasek job strain questionnaire), quality of life (SF-12), the Nordic musculoskeletal disorders questionnaire, and an examination of spine flexibility (finger-to-ground distance, sit-stand maneuver and equilibrium test). Results: In total, 293 caregivers were included, with a mean age of 42.64 ± 11.65 years. A body mass index (BMI) exceeding 30 was noted in 20.82% of cases. Moreover, no regular physical activity was practiced in 51.9% of cases. In contrast, domestic activity equal to or exceeding 20 hours per week was reported by 38.22%. Job strain was noted in 19.79% of cases, and work capacity was 'low' to 'average' in 27.64% of subjects. During the 12 months preceding the investigation, 65% of caregivers complained of LBP, with pain rated as 'severe' or 'extremely severe' in 54.4% of cases and with a frequency of discomfort exceeding one episode per week in 58.52% of cases. On physical examination, the mean finger-to-ground distance was 7.10 ± 7.5 cm. Caregivers assigned to 'high workload' services had the highest prevalence of LBP (77.4%) compared to other categories of hospital services, with no statistically significant relationship (P = 0.125). LBP prevalence was statistically correlated with female gender (p = 0.01) and impaired work capacity (p < 10⁻³). Moreover, an increased finger-to-ground distance was statistically associated with LBP (p = 0.05), advanced age (p < 10⁻³), professional seniority (p < 10⁻³) and a BMI ≥ 25 (p = 0.001). Furthermore, the other physical tests of spine flexibility were performed less well by workers suffering from LBP, with statistically significant differences (sit-stand maneuver, p = 0.03; equilibrium test, p = 0.01). According to the multivariate analysis, only domestic activity exceeding 20 hours/week, a degraded physical quality of life, and the presence of neck pain were significantly correlated with LBP. The final model explains 36.7% of the variability of this complaint. Conclusion: Our results highlight the elevated prevalence of LBP among caregivers in Tunisian public hospitals and identify both professional and individual predisposing factors. This preliminary analysis supports the need for a multidimensional approach to prevent this critical occupational and public health problem. The preventive strategy should be based both on the improvement of working conditions and on lifestyle modifications and the reinforcement of healthy behaviors in these active populations.
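
A minimal sketch of the kind of multivariable model described above (LBP as the outcome; heavy domestic activity, physical quality of life, and neck pain as predictors) is given below. The file name, variable names and codings are assumptions for illustration, and the reported pseudo R-squared is only loosely analogous to the 'explained variability' quoted for the final model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis dataset; names and codings are illustrative only.
# lbp_12m:      1 if low back pain in the previous 12 months, else 0
# domestic_20h: 1 if domestic activity >= 20 hours/week, else 0
# sf12_pcs:     SF-12 physical component summary score
# neck_pain:    1 if neck pain reported, else 0
df = pd.read_csv("caregivers_lbp.csv")

model = smf.logit("lbp_12m ~ domestic_20h + sf12_pcs + neck_pain", data=df).fit()
print(model.summary())
print("McFadden pseudo R-squared:", round(model.prsquared, 3))
```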

Keywords: health care workers, low back pain, prevention, risk factor

Procedia PDF Downloads 132
107 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe

Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira

Abstract:

Supporting the cross-border, secure and reliable exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors such as eFinance, eJustice and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility Programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 Access Points and SMPs in Europe to provide secure document exchange using the eDelivery DSI (Digital Service Infrastructure) amongst both private and public entities. Moreover, the project demonstrates the feasibility and interest of the solution by providing several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we have followed a methodology that first sets a common background of requirements across the partner countries and the European regulations. Then, the partners have implemented access points in each country, including their service metadata publisher (SMP), to give their clients access to the pan-European network. Finally, we have set up interoperability tests with the other access points of the consortium. The tests include the use of each entity's production-ready information systems to process the data and confirm all steps of the data exchange. For the access points, we have chosen AS4 instead of other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern crypto algorithms, and more authentication types, such as username-password, X.509 and SAML authentication. The main contribution of the MADE project is to open the path for European companies to use eDelivery services for the cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 Profile. It also includes the development and integration of new components, the integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support those profiles, so further efforts will be needed to promote this technology within companies. The consortium includes nine partners. Two of them are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other seven (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in the secure delivery of electronic documents and in information integration brokerage in their respective countries. To achieve cross-border operability, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border eDelivery of documents in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged through Europe, and the project will serve as the base for extending the network to the whole of Europe. This project has been funded under the Connecting Europe Facility Agreement number: INEA/CEF/ICT/A2016/1278042. Action No: 2016-EU-IA-0063.

Keywords: security, e-delivery, e-invoicing, e-document exchange, trust

Procedia PDF Downloads 246
106 A 2-D and 3-D Embroidered Textrode Testing Framework Adhering to ISO Standards

Authors: Komal K., Cleary F., Wells J S.G., Bennett L

Abstract:

Smart fabric garments enable various monitoring applications across sectors such as healthcare, sports and fitness, and the military. Healthcare smart garments monitoring EEG, EMG, and ECG rely on the use of electrodes (dry or wet). However, such electrodes, when used for long-term monitoring, can cause discomfort and skin irritation for the wearer because of their inflexible structure and weight. Ongoing research has been investigating textile-based electrodes (textrodes) as a more comfortable and usable alternative capable of intuitive biopotential monitoring. Progress has been made in this space, but textrodes still face a critical design challenge in maintaining consistent skin contact, which directly impacts signal quality. Furthermore, there is a lack of an ISO-based testing framework to validate electrode designs and assess their ability to achieve enhanced performance, strength, usability, and durability. This study proposes the development and evaluation of an ISO-compliant testing framework for standard 2D and advanced 3D embroidered textrode designs whose unique structure is intended to establish enhanced skin contact for the wearer. The testing framework leverages the ISO standards ISO 13934-1:2013 for tensile and zone-wise strength tests, ISO 13937-2 for tear tests, and ISO 6330 for washing, validating the textrodes' performance, a necessity for wearable health parameter monitoring applications. Five textrodes (C1-C5) were designed using EPC win digitization software. Varying patterns such as running stitches, lock stitches, back-to-back stitches, and moss stitches were used to create the embroidered textrode samples using Madeira HC12 conductive thread with a resistivity of 100 ohm/m. The textrode designs were then fabricated using a ZSK technical embroidery machine. A comparative analysis was conducted based on a series of laboratory tests adhering to ISO compliance requirements. Tests applying strain to the textrodes included: (1) analysis of the electrode's overall surface-area strength; (2) assessment of the robustness of the textrode boundaries; and (3) the assignment of fault test zones to each textrode, in which vertical and horizontal slits of 3 mm were applied to evaluate textrode performance and durability. Specific ISO-compliant washing tests were conducted multiple times on each textrode sample to assess both mechanical and chemical damage. Additionally, abrasion and pilling tests were performed to evaluate mechanical damage on the surface of the textrodes and to compare it with the washing tests. Finally, the textrodes were assessed on the basis of morphological and surface-resistance changes. Results demonstrate that textrode C4, featuring a 3D layered structure consisting of foam, fabric, and conductive thread layers, significantly enhances skin-electrode contact for biopotential recording. The inclusion of a 3D foam layer was particularly effective in maintaining the shape of the electrode during strain tests, making it the top-performing textrode sample. The layered 3D design of textrode C4 therefore ranks highest when tested for durability, reusability, and washability. The ISO testing framework established in this study will support future research, validating the durability and reliability of textrodes for a wide range of applications.

Keywords: smart fabric, textrodes, testing framework, ISO compliant

Procedia PDF Downloads 49
105 Pharmacokinetics of First-Line Tuberculosis Drugs in South African Patients from Kwazulu-Natal: Effects of Pharmacogenetic Variation on Rifampicin and Isoniazid Concentrations

Authors: Anushka Naidoo, Veron Ramsuran, Maxwell Chirehwa, Paolo Denti, Kogieleum Naidoo, Helen McIlleron, Nonhlanhla Yende-Zuma, Ravesh Singh, Sinaye Ngcapu, Nesri Padayatachi

Abstract:

Background: Despite efforts to introduce new drugs and shorter drug regimens for drug-susceptible tuberculosis (TB), the standard first-line treatment has not changed in over 50 years. Rifampicin, isoniazid, and pyrazinamide are critical components of the current standard treatment regimens. Some studies suggest that microbiologic failure and acquired drug resistance are primarily driven by low drug concentrations that result from pharmacokinetic (PK) variability independent of adherence to treatment. Wide between-patient pharmacokinetic variability for rifampin, isoniazid, and pyrazinamide has been reported in prior studies. There may be several reasons for this variability. However, genetic variability in genes coding for drug metabolizing and transporter enzymes have been shown to be a contributing factor for variable tuberculosis drug exposures. Objective: We describe the pharmacokinetics of first-line TB drugs rifampicin, isoniazid, and pyrazinamide and assess the effect of genetic variability in relevant selected drug metabolizing and transporter enzymes on pharmacokinetic parameters of isoniazid and rifampicin. Methods: We conducted the randomized-controlled Improving retreatment success TB trial in Durban, South Africa. The drug regimen included rifampicin, isoniazid, and pyrazinamide. Drug concentrations were measured in plasma, and concentration-time data were analysed using nonlinear-mixed-effects models to quantify the effects of relevant covariates and single nucleotide polymorphisms (SNP’s) of drug metabolizing and transporter genes on rifampicin, isoniazid and pyrazinamide exposure. A total of 25 SNP’s: four NAT2 (used to determine acetylator status), four SLCO1B1, three Pregnane X receptor (NR1), six ABCB1 and eight UGT1A, were selected for analysis in this study. Genotypes were determined for each of the SNP’s using a TaqMan® Genotyping OpenArray™. Results: Among fifty-eight patients studied; 41 (70.7%) were male, 97% black African, 42 (72.4%) HIV co-infected and 40 (95%) on efavirenz-based ART. Median weight, fat-free mass (FFM), and age at baseline were 56.9 kg (interquartile range, IQR: 51.1-65.2), 46.8 kg (IQR: 42.5-50.3) and 37 years (IQR: 31-42), respectively. The pharmacokinetics of rifampicin and pyrazinamide was best described using one-compartment models with first-order absorption and elimination, while for isoniazid two-compartment disposition was used. The median (interquartile range: IQR) AUC (h·mg/L) and Cmax (mg/L) for rifampicin, isoniazid, and pyrazinamide were; 25.62 (23.01-28.53) and 4.85 (4.36-5.40), 10.62 (9.20-12.25) and 2.79 (2.61-2.97), 345.74 (312.03-383.10) and 28.06 (25.01-31.52), respectively. Eighteen percent of patients were classified as rapid acetylators, and 34% and 43% as slow and intermediate acetylators, respectively. Rapid and intermediate acetylator status based on NAT 2 genotype resulted in 2.3 and 1.6 times higher isoniazid clearance than slow acetylators. We found no effects of the SLCO1B1 genotypes on rifampicin pharmacokinetics. Conclusion: Plasma concentrations of rifampicin, isoniazid, and pyrazinamide were low overall in our patients. Isoniazid clearance was high overall and as expected higher in rapid and intermediate acetylators resulting in lower drug exposures. In contrast to reports from previous South African or Ugandan studies, we did not find any effects of the SLCO1B1 or other genotypes tested on rifampicin PK. 
However, our findings are in keeping with more recent studies from Malawi and India emphasizing the need for geographically diverse and adequately powered studies. The clinical relevance of the low tuberculosis drug concentrations warrants further investigation.
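
A minimal sketch of the one-compartment model with first-order absorption and elimination used for rifampicin and pyrazinamide is given below; the dose and parameter values are illustrative and are not the estimates obtained in this cohort.

```python
import numpy as np

def one_compartment_oral(t, dose, ka, cl, v, f=1.0):
    """C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)), with ke = CL/V.
    One-compartment disposition with first-order absorption and elimination."""
    ke = cl / v
    return (f * dose * ka) / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Illustrative rifampicin-like values (not the estimates from this study):
dose, ka, cl, v = 600.0, 1.5, 10.0, 50.0    # mg, 1/h, L/h, L
t = np.linspace(0.0, 24.0, 241)
conc = one_compartment_oral(t, dose, ka, cl, v)

cmax = conc.max()
tmax = t[conc.argmax()]
auc_inf = dose / cl   # AUC(0-inf) = F*D/CL for a linear one-compartment model
print(f"Cmax ~ {cmax:.1f} mg/L at t ~ {tmax:.1f} h, AUC(0-inf) ~ {auc_inf:.0f} h*mg/L")
```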

Keywords: rifampicin, isoniazid pharmacokinetics, genetics, NAT2, SLCO1B1, tuberculosis

Procedia PDF Downloads 163
104 An Intelligence-Led Methodology for Detecting Dark Actors in Human Trafficking Networks

Authors: Andrew D. Henshaw, James M. Austin

Abstract:

Introduction: Human trafficking is an increasingly serious transnational criminal enterprise and social security issue. Despite ongoing efforts to mitigate the phenomenon and a significant expansion of security scrutiny over past decades, it is not receding. This is true for many nations in Southeast Asia, widely recognized as the global hub for trafficked persons, including men, women, and children. Clearly, human trafficking is difficult to address because there are numerous drivers, causes, and motivators for it to persist, such as non-military and non-traditional security challenges, i.e., climate change, global warming displacement, and natural disasters. These make displaced persons and refugees particularly vulnerable. The issue is so large conservative estimates put a dollar value at around $150 billion-plus per year (Niethammer, 2020) spanning sexual slavery and exploitation, forced labor, construction, mining and in conflict roles, and forced marriages of girls and women. Coupled with corruption throughout military, police, and civil authorities around the world, and the active hands of powerful transnational criminal organizations, it is likely that such figures are grossly underestimated as human trafficking is misreported, under-detected, and deliberately obfuscated to protect those profiting from it. For example, the 2022 UN report on human trafficking shows a 56% reduction in convictions in that year alone (UNODC, 2022). Our Approach: To better understand this, our research utilizes a bespoke methodology. Applying a JAM (Juxtaposition Assessment Matrix), which we previously developed to detect flows of dark money around the globe (Henshaw, A & Austin, J, 2021), we now focus on the human trafficking paradigm. Indeed, utilizing a JAM methodology has identified key indicators of human trafficking not previously explored in depth. Being a set of structured analytical techniques that provide panoramic interpretations of the subject matter, this iteration of the JAM further incorporates behavioral and driver indicators, including the employment of Open-Source Artificial Intelligence (OS-AI) across multiple collection points. The extracted behavioral data was then applied to identify non-traditional indicators as they contribute to human trafficking. Furthermore, as the JAM OS-AI analyses data from the inverted position, i.e., the viewpoint of the traffickers, it examines the behavioral and physical traits required to succeed. This transposed examination of the requirements of success delivers potential leverage points for exploitation in the fight against human trafficking in a new and novel way. Findings: Our approach identified new innovative datasets that have previously been overlooked or, at best, undervalued. For example, the JAM OS-AI approach identified critical 'dark agent' lynchpins within human trafficking that are difficult to detect and harder to connect to actors and agents within a network. Our preliminary data suggests this is in part due to the fact that ‘dark agents’ in extant research have been difficult to detect and potentially much harder to directly connect to the actors and organizations in human trafficking networks. Our research demonstrates that using new investigative techniques such as OS-AI-aided JAM introduces a powerful toolset to increase understanding of human trafficking and transnational crime and illuminate networks that, to date, avoid global law enforcement scrutiny.

Keywords: human trafficking, open-source intelligence, transnational crime, human security, international human rights, intelligence analysis, JAM OS-AI, Dark Money

Procedia PDF Downloads 72
103 Numerical Prediction of Crack Width in Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been utilized to study and predict cracking of concrete structures under loading. Finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends, since cracks that exceed the allowable widths are unacceptable in environments that are aggressive to the reinforcing steel. To simulate the crack width, the discrete crack approach was considered by means of a cohesive zone model (CZM) using a function to represent the crack opening. Two dapped-end cases were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups in the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements (SOLID65) capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods, such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces; this technique is the CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode-I-dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was characterized according to the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed in the re-entrant corner of the crack. To validate the proposed approach, the results obtained with this procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also allowed the load-crack width curves to be obtained. In these two cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ±30%. Finally, the orientation of the crack is fundamental for the prediction of crack width. The results regarding the crack width can be considered good from a practical point of view; the load-displacement curve of the test and the location of the crack were reproduced with favorable results.
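
A minimal sketch of a Mode I bilinear traction-separation law of the kind used in the cohesive zone model described above is given below: traction rises linearly to the maximum normal contact stress and then softens linearly to zero at the contact gap at completion of debonding, with the critical fracture energy equal to the area under the curve. The parameter values are illustrative, not those calibrated for the tested dapped-end beams.

```python
import numpy as np

def bilinear_czm(delta, sigma_max, delta_0, delta_f):
    """Mode I bilinear traction-separation law: linear rise to sigma_max at
    delta_0, then linear softening to zero traction at delta_f."""
    t = np.where(delta <= delta_0,
                 sigma_max * delta / delta_0,
                 sigma_max * (delta_f - delta) / (delta_f - delta_0))
    return np.clip(t, 0.0, None)   # no traction once fully debonded

# Illustrative parameters (not calibrated to the tested beams):
sigma_max = 3.0e6     # maximum normal contact stress, Pa
delta_0   = 0.01e-3   # separation at damage initiation, m
delta_f   = 0.12e-3   # contact gap at completion of debonding, m

delta = np.linspace(0.0, 1.5 * delta_f, 200)
traction = bilinear_czm(delta, sigma_max, delta_0, delta_f)

# Critical fracture energy = area under the traction-separation curve.
G_c = 0.5 * sigma_max * delta_f
print(f"G_c ~ {G_c:.1f} J/m^2")
```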

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 148
102 The Ductile Fracture of Armor Steel Targets Subjected to Ballistic Impact and Perforation: Calibration of Four Damage Criteria

Authors: Imen Asma Mbarek, Alexis Rusinek, Etienne Petit, Guy Sutter, Gautier List

Abstract:

Over the past two decades, the automotive, aerospace and army industries have been paying increasing attention to finite element (FE) numerical simulations of the fracture process of their structures. Thanks to numerical simulations, it is nowadays possible to analyze safely, and at a reduced cost, several problems involving costly and dangerous extreme loadings, such as blast or ballistic impact. The present paper is concerned with ballistic impact and perforation problems involving ductile fracture of thin armor steel targets. The target fracture process usually depends on various parameters: the projectile nose shape, the target thickness and its mechanical properties, as well as the impact conditions (friction, oblique/normal impact, etc.). In this work, the investigations are concerned with the normal impact of a conical-nosed projectile on thin armor steel targets. The main aim is to establish a comparative study of four fracture criteria that are commonly used in simulations of the fracture process of structures subjected to extreme loadings such as ballistic impact and perforation. Usually, damage initiation results from a complex physical process that occurs at the micromechanical scale. On a macro scale and according to the following fracture models, the variables on which fracture depends are mainly the stress triaxiality η, the strain rate, the temperature T, and possibly the Lode angle parameter θ. The four failure criteria are: the critical strain to failure model, the Johnson-Cook model, the Wierzbicki model and the Modified Hosford-Coulomb (MHC) model. SEM observations of the fracture surfaces of tension specimens and of armor steel targets impacted at low and high incident velocities show that the fracture of the specimens is ductile. The failure mode of the targets is petalling with crack propagation, and the fracture surfaces are covered with micro-cavities. The parameters of each ductile fracture model were identified for three armor steels, and the applicability of each criterion was evaluated using experimental investigations coupled with numerical simulations. Two loading paths were investigated in this study over a wide range of strain rates: quasi-static and intermediate uniaxial tension, and quasi-static and dynamic double shear testing, covering various values of the stress triaxiality η and of the Lode angle parameter θ. All experiments were conducted on three different armor steel specimens at quasi-static strain rates ranging from 10⁻⁴ to 10⁻¹ s⁻¹ and at three different temperatures ranging from 297 K to 500 K, allowing the influence of temperature on the fracture process to be assessed. Intermediate tension testing was coupled with dynamic double shear experiments conducted on the Hopkinson tube device, allowing the effect of high strain rate on damage evolution and crack propagation to be identified. The aforementioned fracture criteria are implemented into the FE code ABAQUS via a VUMAT subroutine and coupled with suitable constitutive relations to obtain reliable simulations of ballistic impact problems. The calibration of the four damage criteria, as well as a concise evaluation of the applicability of each criterion, is detailed in this work.
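As a concrete example of one of the four criteria, the Johnson-Cook fracture strain combines the triaxiality, strain-rate and temperature dependences mentioned above in a single multiplicative expression. The sketch below uses placeholder constants D1-D5, a reference strain rate and a melting temperature that are assumptions for illustration, not the values calibrated for the three armor steels.

```python
import math

# Minimal sketch of the Johnson-Cook fracture criterion, one of the four
# damage models compared in the paper. The D1..D5 constants below are
# placeholder assumptions, not the constants calibrated for the armor steels.

def jc_fracture_strain(eta, eps_rate, T,
                       D=(0.05, 3.44, -2.12, 0.002, 0.61),
                       eps_rate_ref=1e-4, T_room=297.0, T_melt=1800.0):
    """Equivalent plastic strain at failure as a function of stress
    triaxiality eta, strain rate [1/s] and temperature [K]."""
    D1, D2, D3, D4, D5 = D
    T_star = (T - T_room) / (T_melt - T_room)          # homologous temperature
    triax_term = D1 + D2 * math.exp(D3 * eta)          # triaxiality dependence
    rate_term = 1.0 + D4 * math.log(eps_rate / eps_rate_ref)
    temp_term = 1.0 + D5 * T_star
    return triax_term * rate_term * temp_term

# Damage accumulates as D = sum(d_eps_p / eps_f); failure is assumed at D = 1.
print(jc_fracture_strain(eta=1/3, eps_rate=1e-2, T=297.0))
```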

Keywords: armor steels, ballistic impact, damage criteria, ductile fracture, SEM

Procedia PDF Downloads 297
101 Simulation and Analysis of MEMS-Based Flexible Capacitive Pressure Sensors with COMSOL

Authors: Ding Liangxiao

Abstract:

The technological advancements in Micro-Electro-Mechanical Systems (MEMS) have significantly contributed to the development of new, flexible capacitive pressure sensors, which are pivotal in transforming wearable and medical device technologies. This study employs the sophisticated simulation tools available in COMSOL Multiphysics® to develop and analyze a MEMS-based sensor with a tri-layered design. This sensor comprises top and bottom electrodes made from gold (Au), noted for their excellent conductivity, a middle dielectric layer made from a composite of Silver Nanowires (AgNWs) embedded in Thermoplastic Polyurethane (TPU), and a flexible, durable substrate of Polydimethylsiloxane (PDMS). This research was directed towards understanding how changes in the physical characteristics of the AgNWs/TPU dielectric layer, specifically its thickness and surface area, impact the sensor's operational efficacy. We assessed several key electrical properties: capacitance, electric potential, and membrane displacement under varied pressure conditions. These investigations are crucial for enhancing the sensor's sensitivity and ensuring its adaptability across diverse applications, including health monitoring systems and dynamic user interface technologies. To ensure the reliability of our simulations, we applied the Effective Medium Theory to calculate the dielectric constant of the AgNWs/TPU composite accurately. This approach is essential for predicting how the composite material will perform under different environmental and operational stresses, thus facilitating the optimization of the sensor design for enhanced performance and longevity. Moreover, we explored the potential benefits of innovative three-dimensional structures for the dielectric layer compared to traditional flat designs. Our hypothesis was that 3D configurations might improve the stress distribution and optimize the electrical field interactions within the sensor, thereby boosting its sensitivity and accuracy. Our simulation protocol includes comprehensive performance testing under simulated environmental conditions, such as temperature fluctuations and mechanical pressures, which mirror the actual operational conditions. These tests are crucial for assessing the sensor's robustness and its ability to function reliably over extended periods, ensuring high reliability and accuracy in complex real-world environments. In our current research, although a full dynamic simulation analysis of the three-dimensional structures has not yet been conducted, preliminary explorations through three-dimensional modeling have indicated the potential for mechanical and electrical performance improvements over traditional planar designs. These initial observations emphasize the potential advantages and importance of incorporating advanced three-dimensional modeling techniques in the development of Micro-Electro-Mechanical Systems (MEMS) sensors, offering new directions for the design and functional optimization of future sensors. Overall, this study not only highlights the powerful capabilities of COMSOL Multiphysics® for modeling sophisticated electronic devices but also underscores the potential of innovative MEMS technology in advancing the development of more effective, reliable, and adaptable sensor solutions for a broad spectrum of technological applications.
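For orientation, the quantities discussed above can be roughed out analytically before any COMSOL run: an effective-medium estimate of the composite permittivity feeding a parallel-plate capacitance. The sketch below uses a Maxwell Garnett style mixing rule with illustrative permittivities, fill fraction and geometry; these numbers and the choice of mixing rule are assumptions, not parameters taken from the study.

```python
# Minimal sketch, not the authors' COMSOL model: a parallel-plate estimate of
# the sensor capacitance using a Maxwell Garnett effective-medium permittivity
# for the AgNWs/TPU dielectric. All numerical values are illustrative.

EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def maxwell_garnett(eps_matrix, eps_inclusion, fill_fraction):
    """Effective relative permittivity of dilute inclusions in a matrix."""
    num = eps_inclusion + 2 * eps_matrix + 2 * fill_fraction * (eps_inclusion - eps_matrix)
    den = eps_inclusion + 2 * eps_matrix - fill_fraction * (eps_inclusion - eps_matrix)
    return eps_matrix * num / den

def plate_capacitance(area_m2, thickness_m, eps_r):
    """Ideal parallel-plate capacitance between the Au electrodes."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Large inclusion permittivity stands in for the conductive nanowires (illustrative).
eps_eff = maxwell_garnett(eps_matrix=5.0, eps_inclusion=1000.0, fill_fraction=0.05)
c0 = plate_capacitance(area_m2=1e-6, thickness_m=20e-6, eps_r=eps_eff)  # on the order of a few pF
print(eps_eff, c0)
```

Compressing the dielectric (smaller thickness) raises the capacitance, which is the basic sensing mechanism the simulations quantify.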

Keywords: MEMS, flexible sensors, COMSOL Multiphysics, AgNWs/TPU, PDMS, 3D modeling, sensor durability

Procedia PDF Downloads 20
100 Household Water Practices in a Rapidly Urbanizing City and Its Implications for the Future of Potable Water: A Case Study of Abuja Nigeria

Authors: Emmanuel Maiyanga

Abstract:

Access to sufficiently good quality freshwater has been a global challenge, but more notably in low-income countries, particularly the Sub-Saharan countries, of which Nigeria is one. The urban population is soaring, especially in many low-income countries; the existing centralised water supply infrastructures are ageing and inadequate; moreover, household lifestyles have become more water-demanding. People therefore mostly devise coping strategies where municipal supply is perceived to have failed. This development threatens the future of groundwater and calls for a review of management strategy and research approach. The various issues associated with water demand management in low-income countries, and Nigeria in particular, are well documented in the literature. However, the way people use water daily in households, the reasons they do so, and how this situation is constructing demand among the middle-class population in Abuja, Nigeria is poorly understood. This is what this research aims to unpack. This is achieved by using the social practices research approach (which is based on the Theory of Practices) to understand how this situation impacts on the shared groundwater resource. A qualitative method was used for data gathering. This involved audio-recorded interviews of householders and water professionals in the private and public sectors. It also involved observation, note-taking, and document study. The data were analysed thematically using NVIVO software. The research reveals the major household practices that draw on water at the domestic scale; they include water sourcing, body hygiene and sanitation, laundry, kitchen, and outdoor practices (car washing, domestic livestock farming, and gardening). Among all the practices, water sourcing, body hygiene, kitchen, and laundry practices are identified as impacting most on groundwater, with the scale of impact varying with household peculiarities. Water sourcing practices involve people sourcing mostly from personal boreholes, because the municipal water supply is perceived as inadequate and unreliable in terms of service delivery and water quality, and people prefer easier and unlimited access and control using boreholes. Body hygiene practices reveal that every respondent prefers bucket bathing at least once daily, and the majority bathe twice or more every day. Frequency is determined by the feeling of hotness and dirt on the skin; thus, people bathe to cool down, stay clean, and satisfy perceived social, religious, and hygiene demands. Kitchen practice consumes water significantly as people run the tap for vegetable washing in daily food preparation and for dishwashing after each meal. Laundry practice reveals that most people wash clothes most frequently (twice a week) during hot and dusty weather, and washing by hand in basins and buckets is the most prevalent and the most water-wasting method, due to overdosing of soap. The research also reveals poor water governance as a major cause of the current inadequate municipal water delivery. The implication of poor governance and the widespread use of boreholes is an uncontrolled abstraction of groundwater to satisfy desired household practices, thereby putting the future of the shared aquifer at great risk of total depletion, with attendant multiplying effects on the people and the environment as the population continues to soar.

Keywords: boreholes, groundwater, household water practices, self-supply

Procedia PDF Downloads 111
99 Optimized Electron Diffraction Detection and Data Acquisition in Diffraction Tomography: A Complete Solution by Gatan

Authors: Saleh Gorji, Sahil Gulati, Ana Pakzad

Abstract:

Continuous electron diffraction tomography, also known as microcrystal electron diffraction (MicroED) or three-dimensional electron diffraction (3DED), is a powerful technique which, in combination with cryo-electron microscopy (cryo-EM), can provide atomic-scale 3D information about the crystal structure and composition of different classes of crystalline materials such as proteins, peptides, and small molecules. Unlike the well-established X-ray crystallography method, 3DED does not require large single crystals and can collect accurate electron diffraction data from crystals as small as 50 – 100 nm. This is a critical advantage, as growing larger crystals, as required by X-ray crystallography methods, is often very difficult, time-consuming, and expensive. In most cases, specimens studied via the 3DED method are electron beam sensitive, which means there is a limitation on the maximum electron dose one can use to collect the data required for a high-resolution structure determination. Therefore, collecting data using a conventional scintillator-based, fiber-coupled camera brings additional challenges. This is because of the inherent noise introduced during the electron-to-photon conversion in the scintillator and the transfer of light via the fibers to the sensor, which results in a poor signal-to-noise ratio and requires relatively high, and commonly specimen-damaging, electron dose rates, especially for protein crystals. As in other cryo-EM techniques, damage to the specimen can be mitigated if a direct detection camera is used, which provides a high signal-to-noise ratio at low electron doses. In this work, we have used two classes of such detectors from Gatan, namely the K3® camera (a monolithic active pixel sensor) and Stela™ (which utilizes DECTRIS hybrid-pixel technology), to address this problem. The K3 is an electron counting detector optimized for low-dose applications (like structural biology cryo-EM), and Stela is also a counting electron detector but optimized for diffraction applications with high speed and high dynamic range. Lastly, the data collection workflow, including crystal screening, microscope optics setup (for imaging and diffraction), stage height adjustment at each crystal position, and tomogram acquisition, can be another challenge of the 3DED technique. Traditionally, this has all been done manually or in a partly automated fashion using open-source software and scripting, requiring long hours on the microscope (extra cost) and extensive user interaction with the system. We have recently introduced Latitude® D in DigitalMicrograph® software, which is compatible with all pre- and post-energy-filter Gatan cameras and enables 3DED data acquisition in an automated and optimized fashion. Higher quality 3DED data enable structure determination with higher confidence, while automated workflows allow these to be completed considerably faster than before. Using multiple examples, this work will demonstrate how direct detection electron counting cameras enhance 3DED results (from 3 to better than 1 Angstrom) for protein and small molecule structure determination. We will also show how the Latitude D software facilitates collecting such data in an integrated and fully automated user interface.

Keywords: continuous electron diffraction tomography, direct detection, diffraction, Latitude D, Digitalmicrograph, proteins, small molecules

Procedia PDF Downloads 84
98 Integrating Evidence Into Health Policy: Navigating Cross-Sector and Interdisciplinary Collaboration

Authors: Tessa Heeren

Abstract:

The following proposal pertains to the complex process of successfully implementing health policies that are based on public health research. A systematic review was conducted by the author and faculty at the Cluj School of Public Health in Romania. The reviewed articles covered a wide range of topics, such as barriers and facilitators to multi-sector collaboration, differences in professional cultures, and systemic obstacles. The reviewed literature identified communication, collaboration, user-friendly dissemination, and documentation of processes in the execution of applied research as important themes for the promotion of evidence in the public health decision-making process. This proposal fits into the Academy Health National Health Policy conference because it identifies and examines differences between the worlds of research and politics. Implications and new insights for federal and/or state health policy: Recommendations made based on the findings of this research include using politically relevant levers to promote research (e.g. campaign donors, lobbies, established parties, etc.), modernizing dissemination practices, and reforms in which the involvement of external stakeholders is facilitated without relying on invitations from individual policy makers. Description of how evidence and/or data was or could be used: The reviewed articles illustrated shortcomings and areas for improvement in policy research processes and collaborative development. In general, the evidence base in the field of integrating research into policy lacks critical details of the actual process of developing evidence-based policy. This shortcoming in logistical details creates a barrier to potential replication of the collaborative efforts described in studies. Potential impact of the presentation for health policy: The reviewed articles focused on identifying barriers and facilitators that arise in cross-sector collaboration, rather than the process and impact of integrating evidence into policy. In addition, the type of evidence used in policy was rarely specified, and widely varying interpretations of the definition of evidence complicated overall conclusions. Background: Using evidence to inform public health decision-making processes has been proven effective; however, it is not clear how research is applied in practice. Aims: The objective of the current study was to assess the extent to which evidence is used in the public health decision-making process. Methods: To identify eligible studies, seven bibliographic databases, specifically PubMed, Scopus, Cochrane Library, Science Direct, Web of Science, ClinicalKey, and Health and Safety Science Abstracts, were screened (search dates: 1990 – September 2015); a general internet search was also conducted. Primary research and systematic reviews about the use of evidence in public health policy in Europe were included. The studies considered for inclusion were assessed by two reviewers, who also extracted data on objectives, methods, population, and results. Data were synthesized as a narrative review. Results: Of 2564 articles initially identified, 2525 titles and abstracts were screened. Ultimately, 30 articles fit the research criteria by describing how or why evidence is used/not used in public health policy. The majority of included studies involved interviews and surveys (N=17). Study participants were policy makers, health care professionals, researchers, community members, service users, and experts in public health.

Keywords: cross-sector, dissemination, health policy, policy implementation

Procedia PDF Downloads 207
97 Exploring the Influence of Maternal Self-Discrepancy on Psychological Well-Being: A Study on Middle-Aged Mothers

Authors: Chooi Fong Lee

Abstract:

Background: Maternal psychological well-being has been investigated from various aspects, such as social support and employment status. However, a perspective from self-discrepancy theory has not been employed, and most studies have focused on young mothers; less is understood about the psychological well-being of middle-aged mothers. Objective: To examine the influence of maternal self-discrepancy between the actual and ideal self on maternal role achievement, state anxiety, trait anxiety, and subjective well-being among Japanese middle-aged mothers across their employment status. Method: A pilot study was conducted with 20 mother participants (aged 40-55; 9 regular-employed, 8 non-regular-employed, and 3 homemaker mothers) to assess the viability of the survey questionnaires (Maternal Role Achievement Scale, State-Trait Anxiety Inventory, Subjective Well-being Scale, and a self-report). Participants were volunteers randomly selected from among the college students' mothers. Participants accessed the survey via a designated URL. The self-report questionnaire prompted participants to list up to 3 ideal selves they aspired to be and rate the extent to which their actual selves deviated from their ideal selves on a 7-point scale (1 = not at all; 4 = medium; 7 = extremely). The findings confirmed the validity of the survey questionnaires, indicating their appropriateness for use in subsequent research. Self-discrepancy scores were calculated from the 7-point deviation ratings by summing the ratings and dividing the total by 3. Setting: We ensured participants were randomly selected from the research firm to mitigate bias. The self-report questionnaire was adapted from a validated instrument and underwent rigorous modification and testing in the pilot study. The final sample consisted of 241 participants: 97 regular-employed, 87 non-regular-employed, and 57 homemaker mothers. Result: The reliability coefficient for the discrepancy score is α = .75. The findings indicate that regular-employed mothers tend to exhibit lower self-discrepancy scores than non-regular-employed and homemaker mothers. This discrepancy negatively impacts maternal role achievement, state anxiety, and subjective well-being while positively affecting trait anxiety. Trait anxiety arises when one feels one has not met one's ideal self, as evidenced by higher levels in homemaker mothers, who experience lower state anxiety. Conversely, regular-employed mothers exhibit higher state anxiety but lower trait anxiety, suggesting satisfaction in their professional pursuits despite balancing work and family responsibilities. Full-time maternal roles contribute to lower state anxiety but higher trait anxiety among homemaker mothers due to a lack of personal identity achievement. Non-regular-employed mothers show similarities to homemaker mothers. In self-reports, regular-employed mothers highlight support of and devotion to their children's development, while non-regular-employed mothers seek life fulfillment through part-time work alongside child-rearing duties. Homemaker mothers emphasize qualities like sociability and communication skills, potentially influencing their self-discrepancy scores. Furthermore, hierarchical multiple regression analysis revealed that the discrepancy scores significantly predict subjective well-being. Conclusion: The generalizability of the findings beyond this sample of Japanese mothers may be limited; however, they offer valuable insights into the impact of maternal self-discrepancy on psychological well-being among middle-aged mothers across different employment statuses. Understanding these dynamics becomes crucial as contemporary women increasingly pursue higher education and depart from traditional motherhood norms.
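A small sketch of the score computation described in the Method section may make the arithmetic explicit; the ratings below are made-up illustrative values, and the fixed divisor of 3 follows the description above.

```python
# Minimal sketch of the self-discrepancy score as described: each participant
# lists up to three ideal selves and rates how far the actual self deviates
# from each on a 7-point scale; the ratings are summed and divided by 3.
# The example ratings are illustrative, not data from the study.

def self_discrepancy_score(deviation_ratings, n_ideal_selves=3):
    """Sum the 7-point deviation ratings and divide by the number of listed ideal selves."""
    assert all(1 <= r <= 7 for r in deviation_ratings)
    return sum(deviation_ratings) / n_ideal_selves

print(self_discrepancy_score([5, 3, 6]))   # -> about 4.67 on the 1-7 scale
```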

Keywords: maternal employment, maternal role, self-discrepancy, state-trait anxiety, subjective well-being

Procedia PDF Downloads 24
96 Regulatory and Economic Challenges of AI Integration in Cyber Insurance

Authors: Shreyas Kumar, Mili Shangari

Abstract:

Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.

Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware

Procedia PDF Downloads 13
95 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results

Authors: Athanasios Karasimos, Vasiliki Makri

Abstract:

This research focuses on the design and development of the first text corpus based on Board Game Rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing these extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as such datasets reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games, like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time, challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations. On the other hand, the designers and creators produce rulebooks in which they convey their joy of discovering the hidden potential of language, igniting the imagination, and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, are used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. In enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to take advantage of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression.
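To make the intended corpus workflow concrete, the sketch below shows one simple way a rulebook corpus such as the BGRC could be screened for neologism candidates: tokenize the rulebook texts, count word forms, and keep frequent forms absent from a reference lexicon. The toy texts, the reference word list and the frequency threshold are assumptions for illustration, not the BGRC pipeline itself.

```python
import re
from collections import Counter

# A minimal sketch (not the authors' pipeline) of neologism-candidate
# screening over a rulebook corpus. Inputs are illustrative toy data.

def tokenize(text):
    """Lowercase alphabetic tokens, keeping internal hyphens (e.g. 'deck-building')."""
    return re.findall(r"[a-z]+(?:-[a-z]+)*", text.lower())

def neologism_candidates(rulebook_texts, reference_lexicon, min_freq=3):
    """Frequent word forms that do not appear in the reference lexicon."""
    counts = Counter()
    for text in rulebook_texts:
        counts.update(tokenize(text))
    return {w: c for w, c in counts.items()
            if c >= min_freq and w not in reference_lexicon}

# Illustrative usage with toy data:
lexicon = {"the", "a", "to", "player", "card", "cards", "game", "and", "then", "one", "two"}
texts = ["Each player may exile a card to the scrapyard.",
         "Exile two cards, then unexile one during upkeep.",
         "Exile effects and the scrapyard are checked at upkeep."]
print(neologism_candidates(texts, lexicon, min_freq=2))  # e.g. {'exile': 3, 'scrapyard': 2, 'upkeep': 2}
```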

Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes

Procedia PDF Downloads 71
94 Older Consumer’s Willingness to Trust Social Media Advertising: A Case of Australian Social Media Users

Authors: Simon J. Wilde, David M. Herold, Michael J. Bryant

Abstract:

Social media networks have become a hotbed for advertising activities, due firstly to their increasing consumer/user base and secondly to the ability of marketers to accurately measure ad exposure and consumer-based insights on such networks. More than half of the world’s population (4.8 billion, or 60%) now uses social media, with 150 million new users having come online within the last 12 months (to June 2022). As the use of social media networks by users grows, key business strategies used for interacting with these potential customers have matured, especially social media advertising. Unlike traditional media outlets, social media advertising is highly interactive and digital channel-specific. Social media advertisements are clearly targetable, providing marketers with an extremely powerful marketing tool. Yet despite the measurable benefits afforded to businesses engaged in social media advertising, recent controversies (such as the relationship between Facebook and Cambridge Analytica in 2018) have only heightened the role trust and privacy play within these social media networks. Survey participants were recruited via a reputable online panel survey site and completed a web-based quantitative survey instrument. Respondents to the survey represented social media users from all states and territories within Australia. Completed responses were received from a total of 258 social media users. Survey respondents represented all core age demographic groupings, including Gen Z/Millennials (18-45 years = 60.5% of respondents) and Gen X/Boomers (46-66+ years = 39.5% of respondents). An adapted ADTRUST scale, using a 20-item 7-point Likert scale, measured trust in social media advertising. The ADTRUST scale has been shown to be a valid measure of trust in advertising within traditional media, such as broadcast media and print media, and, more recently, the Internet (as a broader platform). The adapted scale was validated through exploratory factor analysis (EFA), resulting in a three-factor solution. These three factors were named reliability, usefulness and affect, and willingness to rely on. Factor scores (weighted measures) were then calculated for these factors. Factor scores are estimates of the scores survey participants would have received on each of the factors had they been measured directly, with the following results recorded: Reliability = 4.68/7; Usefulness and Affect = 4.53/7; and Willingness to Rely On = 3.94/7. Further statistical analysis (independent samples t-tests) determined the differences in factor scores between the two age groupings (Gen Z/Millennials vs. Gen X/Boomers), with age used as the independent, categorical variable. The results showed the difference in mean scores across all three factors to be statistically significant (p < 0.05) for these two core age groupings: (1) Gen Z/Millennials Reliability = 4.90/7 vs. Gen X/Boomers Reliability = 4.34/7; (2) Gen Z/Millennials Usefulness and Affect = 4.85/7 vs. Gen X/Boomers Usefulness and Affect = 4.05/7; and (3) Gen Z/Millennials Willingness to Rely On = 4.53/7 vs. Gen X/Boomers Willingness to Rely On = 3.03/7. The results clearly indicate that older social media users lack trust in the quality of information conveyed in social media ads when compared to younger, more social media-savvy consumers. This is especially evident with respect to Factor 3 (Willingness to Rely On), whose underlying variables reflect one’s behavioral intent to act based on the information conveyed in advertising. These findings can be useful to marketers, advertisers, and brand managers in that they highlight a critical need to design ‘authentic’ advertisements on social media sites to better connect with these older users, in an attempt to foster positive behavioral responses from within this large demographic group, whose engagement with social media sites continues to increase year on year.
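The analysis pipeline described above (a three-factor EFA followed by independent-samples t-tests on the factor scores) can be sketched as follows. The data are simulated and the factor scores here are standardized latents rather than the weighted 1-7 measures reported in the study; everything in the snippet is illustrative of the method, not a reproduction of the results.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from scipy.stats import ttest_ind

# Minimal sketch with simulated responses: extract three factors from 20
# Likert items, compute factor scores, and compare two age groupings with
# independent-samples t-tests. Data and labels are illustrative assumptions.

rng = np.random.default_rng(0)
n_respondents, n_items = 258, 20
responses = rng.integers(1, 8, size=(n_respondents, n_items)).astype(float)  # 1..7 Likert
age_group = rng.integers(0, 2, size=n_respondents)   # 0 = Gen Z/Millennials, 1 = Gen X/Boomers

fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
scores = fa.fit_transform(responses)                  # one score per respondent per factor

for k, name in enumerate(["Reliability", "Usefulness and Affect", "Willingness to Rely On"]):
    t, p = ttest_ind(scores[age_group == 0, k], scores[age_group == 1, k])
    print(f"{name}: t = {t:.2f}, p = {p:.3f}")
```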

Keywords: social media advertising, trust, older consumers, internet studies

Procedia PDF Downloads 12
93 Adapting to College: Exploration of Psychological Well-Being, Coping, and Identity as Markers of Readiness

Authors: Marit D. Murry, Amy K. Marks

Abstract:

The transition to college is a critical period that affords abundant opportunities for growth in conjunction with novel challenges for emerging adults. During this time, emerging adults are garnering experiences and acquiring hosts of new information that they are required to synthesize and use to inform life-shaping decisions. This stage is characterized by instability and exploration, which necessitates a diverse set of coping skills to successfully navigate and positively adapt to their evolving environment. However, important sociocultural factors result in differences that occur developmentally for minority emerging adults (i.e., emerging adults with an identity that has been or is marginalized). While the transition to college holds vast potential, not all are afforded the same chances, and many individuals enter this stage at varying degrees of readiness. Understanding the nuance and diversity of student preparedness for college, and contextualizing these factors, will better equip systems to support incoming students. Emerging adulthood for ethnic-racial minority students presents itself as an opportunity for growth and resiliency in the face of systemic adversity. Ethnic-racial identity (ERI) is defined as an identity that develops as a function of one’s ethnic-racial group membership. Research continues to demonstrate ERI as a resilience factor that promotes positive adjustment in young adulthood. Adaptive coping responses (e.g., engaging in help-seeking behavior, drawing on personal and community resources) have been identified as possible mechanisms through which ERI buffers youth against stressful life events, including discrimination. Additionally, trait mindfulness has been identified as a significant predictor of general psychological health, and mindfulness practice has been shown to be a self-regulatory strategy that promotes healthy stress responses and adaptive coping strategy selection. The current study employed a person-centered approach to explore emerging patterns across ethnic identity development and psychological well-being criterion variables among college freshmen. Data from 283 incoming college freshmen at Northeastern University were analyzed. The Brief COPE Acceptance and Emotional Support scales, the Five Facet Mindfulness Questionnaire, and the MEIM Exploration and Affirmation measures were used to inform the cluster profiles. The TwoStep auto-clustering algorithm revealed an optimal three-cluster solution (BIC = 848.49), which classified 92.6% (n = 262) of participants in the sample into one of the three clusters. The clusters were characterized as ‘Mixed Adjustment’, ‘Lowest Adjustment’, and ‘Moderate Adjustment’. Cluster composition varied significantly by ethnicity, χ²(2, N = 262) = 7.74 (p = .021), and gender, χ²(2, N = 259) = 10.40 (p = .034). The ‘Lowest Adjustment’ cluster contained the highest proportion of students of color, 41% (n = 32), and male-identifying students, 44.2% (n = 34). Follow-up analyses showed that members of the ‘Moderate Adjustment’ cluster, who reported higher ERI exploration, also reported higher levels of psychological distress, with significantly elevated depression scores (p = .011) and psychological diagnoses of depression (p = .013), anxiety (p = .005), and psychiatric disorders (p = .025). Supporting prior research, students engaging with identity exploration processes often endure more psychological distress. These results indicate that students undergoing identity development may require more socialization and different services beyond the usual strategies.
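The person-centered analysis described above can be approximated in open-source tooling; SPSS-style TwoStep clustering is not available off the shelf, so the sketch below substitutes a Gaussian mixture whose cluster count is selected by BIC, followed by a chi-square test of cluster composition against a demographic variable. All data, codings and cluster counts are simulated assumptions, not the study's results.

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.mixture import GaussianMixture

# Simulated stand-in data: 283 "freshmen" with five coping/well-being scores
# drawn from three separated groups, plus an illustrative binary demographic.
rng = np.random.default_rng(1)
n = 283
true_group = rng.integers(0, 3, size=n)
features = rng.normal(size=(n, 5)) + 3.0 * true_group[:, None]
ethnicity = rng.integers(0, 2, size=n)

# Choose the number of clusters by BIC (lower is better), then assign labels.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(features).bic(features)
       for k in range(1, 6)}
best_k = min(bic, key=bic.get)
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(features)

# Chi-square test of cluster membership against the demographic variable.
table = np.zeros((best_k, 2), dtype=int)
for lab, eth in zip(labels, ethnicity):
    table[lab, eth] += 1
table = table[table.sum(axis=1) > 0]          # guard against empty components
chi2, p, dof, _ = chi2_contingency(table)
print(best_k, round(chi2, 2), round(p, 3))
```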

Keywords: adjustment, coping, college, emerging adulthood, ethnic-racial identity, psychological well-being, resilience

Procedia PDF Downloads 95
92 The Relationship between Fight-Flight-Freeze System, Level of Expressed Emotion in Family, and Emotion Regulation Difficulties of University Students: A Comparison of Students with and without Non-Suicidal Self-Injury (NSSI) Experience

Authors: Hyojung Shin, Munhee Kweon

Abstract:

Non-Suicidal Self-Injury (NSSI) can be defined as an individual directly and intentionally damaging his or her body tissue without intending to die. According to a study conducted by the Korean Ministry of Education in 2018, NSSI is spreading widely among teenagers, with 7.9 percent of all middle school students and 6.4 percent of high school students reporting experience of NSSI. As such, it is understood that NSSI typically first occurs in adolescence. However, NSSI does not necessarily start and stop within a certain period; it may persist for longer. Despite the widespread prevalence of NSSI among teenagers, little is known about the process and maintenance of NSSI among college students from a continuous developmental perspective. Korea's NSSI research trends are mainly focused on individual internal vulnerabilities (high levels of painful emotions/awareness, lack of pain tolerance) and interpersonal vulnerabilities (poor communication skills and social problem solving), and few studies have examined individuals' unique characteristics together with environmental factors, such as underlying or environmental vulnerability factors. In particular, environmental factors are associated with the occurrence of NSSI by acting as vulnerability factors that can interfere with an individual's emotional control, whereas individual factors play a more direct role by contributing to the maintenance of NSSI; it is therefore important to consider both personal and environmental involvement in NSSI. This study focused on the fight-flight-freeze system, the defensive avoidance component of reward sensitivity, as an individual factor, and on the level of expressed emotion in the family as an environmental factor. Wedig and Nock (2007) reported that when parents with a self-critical cognitive style criticize their children, the experience of NSSI increases. A high level of parental criticism is related to an increasing frequency of NSSI acts as well as to more serious levels of NSSI. If normal coping mechanisms fail to control emotions, people seek to overcome emotional difficulties even through NSSI, and emotional disturbances experienced by individuals within unsupportive social relationships increase vulnerability to NSSI. Based on these theories, this study seeks ways to prevent NSSI and to intervene effectively in counseling by verifying the differences between the characteristics of persons with and without NSSI experience. Therefore, the purpose of this research was to examine the relationships among the fight-flight-freeze system (FFFS), the level of expressed emotion in the family, and emotion regulation difficulties, comparing those who had experienced Non-Suicidal Self-Injury (NSSI) with those who had not. The data were collected from university students in Seoul, Korea, and Gyeonggi-do province; 99 participants had experienced NSSI, while 375 had not. The results of this study are as follows. First, the results of t-tests indicated that students with NSSI experience showed significant differences in the fight-flight-freeze system, the level of expressed emotion in the family, and emotion regulation difficulties compared with students without NSSI experience. Second, the fight-flight-freeze system, the level of expressed emotion in the family, and the emotion regulation difficulties of students with NSSI experience showed significant correlations; the correlations were significant only among the freeze system (of the fight-flight-freeze system), the level of expressed emotion in the family, and emotion regulation difficulties. Third, the freeze system and the level of expressed emotion in the family predicted the emotion regulation difficulties of students with NSSI experience, whereas the fight and freeze systems and the level of expressed emotion in the family predicted the emotion regulation difficulties of students without NSSI experience. Lastly, practical implications for counselors and limitations of this study are discussed.
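The prediction analyses reported above amount to regressing emotion regulation difficulties on the freeze system and the level of expressed emotion in the family within each group. The sketch below illustrates that step on simulated data; the sample size, variable names and effect sizes are assumptions, not the study's estimates.

```python
import numpy as np
import statsmodels.api as sm

# Minimal sketch (simulated data, not the survey data) of the regression
# reported: do the freeze system and the level of expressed emotion in the
# family predict emotion regulation difficulties? Effect sizes are illustrative.

rng = np.random.default_rng(2)
n = 99                                             # e.g. the NSSI-experienced group
freeze = rng.normal(size=n)
expressed_emotion = rng.normal(size=n)
emotion_reg_difficulty = 0.4 * freeze + 0.3 * expressed_emotion + rng.normal(size=n)

X = sm.add_constant(np.column_stack([freeze, expressed_emotion]))
model = sm.OLS(emotion_reg_difficulty, X).fit()
print(model.params)                                # intercept and two slopes
print(model.rsquared, model.pvalues)
```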

Keywords: fight-flight-freeze system, level of expressed emotion in family, emotion regulation difficulty, non-suicidal self injury

Procedia PDF Downloads 97
91 City on Fire: An Ethnography of Play and Politics in Johannesburg Nightclubs

Authors: Beth Vale

Abstract:

Academic research has often neglected the city after dark. Surprisingly little consideration has been given to the every night life of cities: the spatial tactics and creative insurgencies of urban residents when night falls. The focus on ‘pleasure’ in the nocturnal city has often negated the subtle politics of night-time play, embedded in expressions of identity, attachment and resistance. This paper investigates Johannesburg nightclubs as sites of quotidian political labour, through which young people contest social space and their place in it, thereby contributing to the city’s affective and socio-political cartography. The tactical remodelling of the nocturnal city through nightclubbing traces lines of desire (material, emotional, sexual), affiliation, and fear. These in turn map onto young people’s expressions of their social and political identities, as well as their attempts at place-making in a ‘post-apartheid’ context. By examining the micro-politics of the city's nightclubs, this paper speaks back to an earlier post-94 literature, which regularly characterised Johannesburg youth as superficial, individualist and idealistic. Similarly, some might position nightclubs as sites of frivolous consumption or liberatory permissiveness. Yet because nightclub spaces are racialised, classed and gendered, historically signified and socially regulated, they are also profoundly political. Through ordinary encounters on the city's dancefloors, young Jo’burgers are imagining, contesting and negotiating their socio-political identities and indeed their claims to the city. Meanwhile, the politics of this generation of youth, who are increasingly critical of the utopian post-apartheid city, are being inserted and coopted into night-time cultures. Data for this study were gathered through five months of ethnographic fieldwork in Johannesburg nightclubs, including over 120 hours of participant observation and in-depth interviews with organisers and partygoers. Interviewees recognised that parties, rather than being simple frivolity, are a cacophony of celebration, mourning, worship, rage, rebellion and attachment. Countering standard associations between partying and escapism, party planners, venue owners and nightclub audiences were infusing night-time infrastructures with the aesthetics of politics and protest. Not unlike parties, local political assemblies so often rely on music, dance, the occupation of space, and a heaving crowd. References to social movements, militancy and anti-establishment sentiment emerged in nightclub themes, dress codes and décor. Metaphors of fire crossed over between party and protest, both of which could be described as having ‘been lit’ or having ‘brought flames’. Moreover, young people’s articulations of the city’s night-time geography, and their place in it, reflected articulations of race, class and ideological affiliation. The location, entrance fees and stylistic choices of one’s chosen club destination demarcated who was welcome, while also signalling membership to a particular politics (whether progressive or materialistic, inclusive or elitist, mainstream or counter-culture). Because of their ability to divide and unite, aggravate and titillate, mask and reveal, club cultures might offer a mirror to the complex socialities of a generation of Jo’burg youth, as they inhabit, and bring into being, a contemporary South African city.

Keywords: affect, Johannesburg, nightclub, nocturnal city, politics

Procedia PDF Downloads 210
90 Flood Risk Assessment for Agricultural Production in a Tropical River Delta Considering Climate Change

Authors: Chandranath Chatterjee, Amina Khatun, Bhabagrahi Sahoo

Abstract:

With the changing climate, precipitation events are intensifying in tropical river basins. Since these river basins are significantly influenced by the monsoonal rainfall pattern, critical impacts are observed on agricultural practices in the downstream river reaches. This study analyses the crop damage and the associated flood risk, in terms of net benefit, in the paddy-dominated tropical Indian delta of the Mahanadi River. The Mahanadi River basin lies in the eastern part of the Indian sub-continent and is greatly affected by the southwest monsoon rainfall extending from June to September. This river delta is highly flood-prone and has suffered recurring high floods, especially after the 2000s. In this study, the lumped conceptual model Nedbør Afstrømnings Model (NAM), from the suite of MIKE models, is used for rainfall-runoff modeling. The NAM model is laterally integrated with the MIKE11-Hydrodynamic (HD) model to route the runoff up to the head of the delta region. To obtain the precipitation-derived future projected discharges at the head of the delta, nine Global Climate Models (GCMs), namely BCC-CSM1.1(m), GFDL-CM3, GFDL-ESM2G, HadGEM2-AO, IPSL-CM5A-LR, IPSL-CM5A-MR, MIROC5, MIROC-ESM-CHEM and NorESM1-M, available in the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive, are considered. These nine GCMs were previously found to best capture the Indian summer monsoon rainfall. Based on the performance of the nine GCMs in reproducing the historical discharge pattern, three GCMs (HadGEM2-AO, IPSL-CM5A-MR and MIROC-ESM-CHEM) are selected, with a higher Taylor Skill Score used as the selection criterion. Thereafter, the 10-year return period design flood is estimated using L-moments based flood frequency analysis for the historical period and three future projected periods (2010-2039, 2040-2069 and 2070-2099) under Representative Concentration Pathways (RCP) 4.5 and 8.5. A non-dimensional hydrograph analysis is performed to obtain the hydrographs for the historical/projected 10-year return period design floods. These hydrographs are forced into the calibrated and validated coupled 1D-2D hydrodynamic model, MIKE FLOOD, to simulate flood inundation in the delta region. Historical and projected flood risk is defined based on the flood inundation simulated by the MIKE FLOOD model and the inundation depth-damage-duration relationship of a normal rice variety cultivated in the river delta. In general, flood risk is expected to increase in all the future projected periods compared to the historical episode. Further, in comparison to the 2010s (2010-2039), an increased flood risk in the 2040s (2040-2069) is shown by all three selected GCMs. However, the flood risk then declines in the 2070s as we move towards the end of the century (2070-2099). The methodology adopted herein for flood risk assessment is one of a kind and may be implemented in any river basin worldwide. The results obtained from this study can help in future flood preparedness by implementing suitable flood adaptation strategies.
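The L-moments based flood frequency step can be illustrated compactly. The sketch below fits a Gumbel (EV1) distribution from the first two sample L-moments of a series of annual maximum discharges and reads off the 10-year return level; the discharge values are made-up illustrative numbers, and the choice of the Gumbel distribution is an assumption, since the study does not state which distribution was fitted.

```python
import math

# Minimal sketch (illustrative annual-maximum discharges, not the Mahanadi
# data): L-moments via probability weighted moments, Gumbel parameters from
# the L-moments, and the design flood for a chosen return period.

EULER_GAMMA = 0.5772156649

def sample_l_moments(values):
    """First two sample L-moments via unbiased probability weighted moments."""
    x = sorted(values)
    n = len(x)
    b0 = sum(x) / n
    b1 = sum((i / (n - 1)) * x[i] for i in range(n)) / n   # ranks 0..n-1
    return b0, 2 * b1 - b0                                  # lambda1, lambda2

def gumbel_return_level(values, return_period):
    lam1, lam2 = sample_l_moments(values)
    alpha = lam2 / math.log(2)                # Gumbel scale from L-moments
    xi = lam1 - EULER_GAMMA * alpha           # Gumbel location
    p_non_exceedance = 1.0 - 1.0 / return_period
    return xi - alpha * math.log(-math.log(p_non_exceedance))

annual_peaks = [8200, 10400, 9100, 15300, 7600, 12800, 11900, 9800, 14100, 10050,
                13400, 8900, 16800, 12100, 9500]            # m^3/s, made-up values
print(gumbel_return_level(annual_peaks, return_period=10))  # 10-year design flood
```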

Keywords: flood frequency analysis, flood risk, global climate models (GCMs), paddy cultivation

Procedia PDF Downloads 53
89 Sustainable Housing and Urban Development: A Study on the Soon-To-Be-Old Population's Impetus to Migrate

Authors: Tristance Kee

Abstract:

With the unprecedented increase in the elderly population globally, it is critical to search for new sustainable housing and urban development alternatives to traditional housing options. This research examines elderly migration patterns from Hong Kong, a high-density city, to Mainland China. The research objectives are to: 1) explore the relationships between soon-to-be-old adults' intentions to move to the Mainland upon retirement and their demographic characteristics; and 2) identify the desired amenities, locational factors and activities expected in the soon-to-be-old generation's retirement housing environment. Primary data were collected through a questionnaire survey conducted using a random sampling method with respondents aged 45-64. The face-to-face survey was completed by 500 respondents. The survey was divided into four sections. The first section focused on respondents' demographic information such as gender, age, educational attainment, monthly income, housing tenure type and their visits to Mainland China. The second section focused on their retirement plans in terms of intended retirement age, prospective retirement funding and retirement housing options. The third section focused on respondents' attitudes toward retiring in the Mainland for housing; it asked about their intentions to migrate to the Mainland upon retirement and their incentives to retire in Hong Kong. The fourth section focused on respondents' ideal housing environment, including preferred housing amenities, desired living environment and retirement activities. The dependent variable in this study was 'respondent's consideration to move to Mainland China upon retirement'. Eight primary independent variables were integrated into the study to identify the correlations between them and the retirement migration plan. The independent variables include: gender, age, marital status, monthly income, present housing tenure type, property ownership in Hong Kong, relationship with the Mainland and the frequency of visiting Mainland China. In addition to the above independent variables, respondents were asked to indicate their retirement plans (retirement age, funding sources and retirement housing options), incentives to migrate for retirement (choices included: property ownership, family relations, cost of living, living environment, medical facilities, government welfare benefits, etc.), and perceived ideal retirement life qualities, including desired amenities (sports, medical and leisure facilities, etc.), desired locational qualities (green open space, convenient transport options and accessibility to urban settings, etc.) and desired retirement activities (home-based leisure, elderly-friendly sports, cultural activities, child care, social activities, etc.). The findings show correlations between the independent variables used and the consideration to migrate for housing options. The two independent variables that indicated a possible correlation were gender and the current frequency of visiting the Mainland. Considering the increasing property prices across the border and strong social relationships, potential retirement migration is a very subjective decision that could vary from person to person. This research adds knowledge to housing research and migration study. Although the research focuses on retirement migration to the Mainland, most of the characteristics identified, including better medical services, government welfare and sound urban amenities, are shared qualities for all sustainable urban development and housing strategies.

Keywords: elderly migration, housing alternative, soon-to-be-old, sustainable environment

Procedia PDF Downloads 194