Search results for: repeated abortion
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 827


47 Generative Design of Acoustical Diffuser and Absorber Elements Using Large-Scale Additive Manufacturing

Authors: Saqib Aziz, Brad Alexander, Christoph Gengnagel, Stefan Weinzierl

Abstract:

This paper explores a generative design, simulation, and optimization workflow for the integration of acoustical diffuser and/or absorber geometry with embedded coupled Helmholtz resonators for full-scale 3D printed building components. Large-scale additive manufacturing in conjunction with algorithmic CAD design tools enables a vast amount of control when creating geometry. This is advantageous given the increasing comfort standards demanded of indoor spaces and the move toward more resource-efficient and sustainable construction methods and materials. The presented methodology highlights these new technological advancements and offers a multimodal and integrative design solution with the potential for immediate application in the AEC industry. In principle, the methodology can be applied to a wide range of structural elements that can be manufactured by additive manufacturing processes. The current paper focuses on a case study of an application for a biaxial load-bearing beam grillage made of reinforced concrete, which allows for a variety of applications through the combination of additively prefabricated semi-finished parts and in-situ concrete supplementation. The semi-prefabricated parts, or formwork bodies, form the basic framework of the supporting structure and at the same time have acoustic absorption and diffusion properties that are precisely acoustically programmed for the space underneath the structure. To this end, a hybrid validation strategy is explored using a digital and cross-platform simulation environment, verified with physical prototyping. The iterative workflow starts with the generation of a parametric design model for the acoustical geometry using the algorithmic visual scripting editor Grasshopper3D inside the building information modeling (BIM) software Revit. 
Various geometric attributes (i.e., bottleneck and cavity dimensions) of the resonator are parameterized and fed to a numerical optimization algorithm, which can modify the geometry with the goal of increasing absorption at resonance and widening the bandwidth of the effective absorption range. Using Rhino.Inside and LiveLink for Revit, the generative model was imported directly into the Multiphysics simulation environment COMSOL. The geometry was further modified and prepared for simulation in a semi-automated process. The incident and scattered pressure fields were simulated, from which the surface normal absorption coefficients were calculated. This reciprocal process was repeated to further optimize the geometric parameters. Subsequently, the numerical models were compared to a set of 3D concrete printed physical twin models, which were tested in a 0.25 m x 0.25 m impedance tube. The empirical results served to improve the starting parameter settings of the initial numerical model. The geometry resulting from the numerical optimization was finally returned to Grasshopper for further implementation in an interdisciplinary study.
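The resonator-tuning step of such a workflow can be illustrated with a minimal sketch. The snippet below is a hypothetical, drastically simplified stand-in for the Grasshopper/COMSOL loop described above: it tunes a single neck length of a classical Helmholtz resonator toward a target resonance frequency by bisection, using the textbook formula f = (c/2π)·√(A/(V·L_eff)). All parameter values and the 1.7·r end-correction factor are illustrative assumptions, not values from the paper.

```python
import math

C = 343.0  # speed of sound in air at ~20 degrees C, m/s

def helmholtz_resonance(neck_radius, neck_length, cavity_volume):
    """Classical Helmholtz resonance f = c/(2*pi) * sqrt(A / (V * L_eff)),
    with an end correction of ~1.7 * radius added to the neck length."""
    area = math.pi * neck_radius ** 2
    l_eff = neck_length + 1.7 * neck_radius
    return C / (2 * math.pi) * math.sqrt(area / (cavity_volume * l_eff))

def tune_neck_length(target_hz, neck_radius, cavity_volume,
                     lo=1e-4, hi=0.5, iters=100):
    """Bisect on neck length: frequency falls monotonically as the neck grows."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if helmholtz_resonance(neck_radius, mid, cavity_volume) > target_hz:
            lo = mid  # resonance still too high -> lengthen the neck
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers: 1-litre cavity, 5 mm neck radius, 120 Hz target.
length = tune_neck_length(120.0, 0.005, 0.001)
```

In the actual workflow a full-wave simulation replaces the closed-form frequency model, and the optimizer adjusts several coupled-resonator parameters at once rather than a single neck length.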

Keywords: acoustical design, additive manufacturing, computational design, multimodal optimization

Procedia PDF Downloads 157
46 Training Hearing Parents in SmiLE Therapy Supports the Maintenance and Generalisation of Deaf Children's Social Communication Skills

Authors: Martina Curtin, Rosalind Herman

Abstract:

Background: Deaf children can experience difficulties with understanding how social interaction works, particularly when communicating with unfamiliar hearing people. Deaf children often struggle with integrating into mainstream, hearing environments. These negative experiences can lead to social isolation, depression, and other mental health difficulties later in life. smiLE Therapy (Schamroth, 2015) is a video-based social communication intervention that aims to teach deaf children skills to confidently communicate with unfamiliar hearing people. Although two previous studies have reported improvements in communication skills immediately post intervention, evidence for maintenance of gains or generalisation of skills (i.e., the transfer of newly learnt skills to untrained situations) has not to date been demonstrated. Parental involvement has been shown to support deaf children’s therapy outcomes. Therefore, this study added parent training to the therapy children received to investigate the benefits to generalisation of children’s skills. Parents were also invited to present their perspective on the training they received. Aims: (1) To assess pupils’ progress from pre- to post-intervention in trained and untrained tasks, (2) to investigate if training parents improved their (a) understanding of their child’s needs and (b) their skills in supporting their child appropriately in smiLE Therapy tasks, (3) to assess if parent training had an impact on the pupil’s ability to (a) maintain their skills in trained tasks post-therapy, and (b) generalise their skills in untrained, community tasks. Methods: This was a mixed-methods, repeated measures study. Thirty-one deaf pupils (aged between 7 and 14) received an hour of smiLE Therapy per week for 6 weeks. Communication skills were assessed pre-, post- and 3 months post-intervention using the Communication Skills Checklist. 
Parents were then invited to attend two training sessions and asked to bring a video of their child communicating in a shop or café. These videos were used to assess whether, after parent training, the child was able to generalise their skills to a new situation. Finally, parents attended a focus group to discuss the effectiveness of the therapy, particularly its wider impact, i.e., more child participation within the hearing community. Results: All children significantly improved their scores following smiLE Therapy and maintained these skills at a high level. Children generalised a high percentage of their newly learnt skills to an untrained situation. Parents reported improved understanding of their child’s needs, of their child’s potential, and of how to support them in real-life situations. Parents observed that their children were more confident and independent when carrying out communication tasks with unfamiliar hearing people. Parents realised they needed to ‘let go’ and embrace their child’s independence and provide more opportunities for them to participate in their community. Conclusions: This study adds to the evidence base on smiLE Therapy; it is an effective intervention that develops deaf children’s ability to interact competently with unfamiliar, hearing communication partners. It also provides preliminary evidence of the benefits of parent training in helping children to generalise their skills to other situations. These findings will be of value to therapists wishing to develop deaf children’s communication skills beyond the therapy setting.

Keywords: deaf children, generalisation, parent involvement, social communication

Procedia PDF Downloads 139
45 Feasibility of Applying a Hydrodynamic Cavitation Generator as a Method for Intensification of Methane Fermentation Process of Virginia Fanpetals (Sida hermaphrodita) Biomass

Authors: Marcin Zieliński, Marcin Dębowski, Mirosław Krzemieniewski

Abstract:

The anaerobic degradation of substrates is limited especially by the rate and effectiveness of the first (hydrolytic) stage of fermentation. This stage may be intensified through pre-treatment of the substrate aimed at disintegrating the solid phase and destroying substrate tissues and cells. The most frequently applied criterion for evaluating disintegration outcomes is the increase in biogas recovery, owing to the possibility of its use for energetic purposes and, simultaneously, recovery of the input energy consumed for pre-treatment of the substrate before fermentation. Hydrodynamic cavitation is one of the methods for organic substrate disintegration with high implementation potential. Cavitation is the formation of discontinuity cavities filled with vapor or gas in a liquid, induced by a pressure drop to a critical value. It arises in a varying pressure field: a region must occur in the flow where the pressure first drops to a value close to the saturated vapor pressure and then increases. The process of cavitation conducted under controlled conditions was found to significantly improve the effectiveness of anaerobic conversion of organic substrates with various characteristics. This phenomenon allows effective damage and disintegration of cellular and tissue structures. Disintegration of structures and release of organic compounds into the dissolved phase has a direct effect on the intensification of biogas production in anaerobic fermentation, on reduced dry matter content in the post-fermentation sludge, as well as on a high degree of its hygienization and its increased susceptibility to dehydration. One device whose efficiency has been confirmed both under laboratory conditions and in systems operating at technical scale is the hydrodynamic cavitation generator. 
Cavitators, agitators and emulsifiers constructed and tested worldwide so far have been characterized by low efficiency and high energy demand. Many of them proved effective under laboratory conditions but failed under industrial ones. The only task successfully realized by these appliances and utilized on a wider scale is the heating of liquids. For this reason, their usability was limited to the function of heating installations. The design of the presented cavitation generator achieves satisfactory energy efficiency and enables its use under industrial conditions in depolymerization processes of biomass with various characteristics. Investigations conducted at laboratory and industrial scale confirmed the effectiveness of applying cavitation in the process of biomass destruction. The use of the cavitation generator in laboratory studies for disintegration of sewage sludge increased biogas production by ca. 30% and shortened the treatment process by ca. 20-25%. The shortening of the technological process and the increase in wastewater treatment plant effectiveness may delay investments aimed at increasing system output. The use of a mechanical cavitator and the application of a repeated cavitation process (4-6 times) enable significant acceleration of biogas production. In addition, mechanical cavitation accelerates increases in COD and VFA levels.

Keywords: hydrodynamic cavitation, pretreatment, biomass, methane fermentation, Virginia fanpetals

Procedia PDF Downloads 434
44 Health Inequalities in the Global South: Identification of Poor People with Disabilities in Cambodia to Generate Access to Healthcare

Authors: Jamie Lee Harder

Abstract:

In the context of rapidly changing social and economic circumstances in the developing world, this paper analyses access to public healthcare for poor people with disabilities in Cambodia. Like other countries of South East Asia, Cambodia is developing at a rapid pace. Cambodia's historical past, however, has set former social policy structures to zero. This past forces Cambodia and its citizens to implement new public health policies that align with the needs of social care, healthcare, and urban planning. In this context, the role of people with disabilities (PwDs) is crucial, as new developments should and can take their specific needs into consideration from the beginning onwards. This paper is based on qualitative research with expert interviews and focus group discussions in Cambodia. During the fieldwork it became clear that the identification tool for the poorest households (HHs) does not count disability as a financial risk of falling into poverty, whether through sickness, higher health expenditures, or lower income resulting from the disability. The social risk group of poor PwDs faces several barriers in accessing public healthcare. Urbanization, socio-economic health status, and opportunities for education all influence social status and have an impact on the health situation of these individuals. Cambodia has various difficulties with providing access to people with disabilities, mostly due to barriers regarding finances, geography, quality of care, poor knowledge about their rights, and negative social and cultural beliefs. Reduced budgets and a lack of prioritization create a need to reorient local communities, international and national non-governmental organizations, and social policy. The poorest HHs are identified with a questionnaire, the IDPoor program, for which the Ministry of Planning is responsible. 
The identified HHs receive an ‘Equity Card’ which provides free-of-charge access to public healthcare centers and hospitals, among other benefits. The dataset usually does not include information about disability status. Four focus group discussions (FGDs) with 28 participants showed various barriers in accessing public healthcare. These barriers go far beyond a missing ramp to access the healthcare center. The contents of the FGDs were confirmed and repeated during the expert interviews with the local Ministries, NGOs, international organizations and private persons working in the field. The participants of the FGDs faced and continue to face high discrimination, low capacity to work and earn their own income, dependency on others, and diminished social competence in their lives. When discussing their health situation, we identified a huge difference between those who have been identified and hold an Equity Card and those who do not. Participants reported high costs without IDPoor identification, positive experiences when going to the health center in terms of attitude and treatment, low satisfaction with specific capacities for treatments, negative rumors, and discrimination, with the consequence that many feared seeking treatment. The analysis of access to public healthcare by risk groups can be adapted to situations in other countries.

Keywords: access, disability, health, inequality, Cambodia

Procedia PDF Downloads 151
43 Dysphagia Tele Assessment Challenges Faced by Speech and Swallow Pathologists in India: Questionnaire Study

Authors: B. S. Premalatha, Mereen Rose Babu, Vaishali Prabhu

Abstract:

Background: Dysphagia must be assessed, either subjectively or objectively, in order to properly address the swallowing difficulty. Providing therapeutic care to patients with dysphagia via tele mode was one approach for providing clinical services during the COVID-19 pandemic. As a result, tele-assessment of dysphagia has increased in India. Aim: This study aimed to identify challenges faced by Indian SLPs while providing tele-assessment to individuals with dysphagia during the outbreak of COVID-19 from 2020 to 2021. Method: The current study was carried out after receiving approval from the institute's institutional review board and ethics committee. The study was cross-sectional in nature and lasted from 2020 to 2021. The study enrolled participants who met its inclusion and exclusion criteria. Based on sample size calculations, it was decided to recruit roughly 246 people. The research was done in three stages: questionnaire development, content validation, and questionnaire administration. Five speech and hearing professionals verified the content of the questionnaire for faults and clarity. Participants received the questionnaire, written in Microsoft Word and then converted to Google Forms, via e-mail and social media platforms such as WhatsApp. SPSS software was used to examine the data. Results: The study's findings were examined in light of the obstacles that Indian SLPs encounter. Only 135 people responded. During the COVID-19 lockdowns, 38% of participants said they did not deal with dysphagia patients. After the lockdown, 70.4% of SLPs kept working with dysphagia patients, while 29.6% did not. The main problems in completing tele-assessment of dysphagia were highlighted from the oromotor examination onwards. 
Around 37.5% of SLPs said they do not undertake the OPME online because of difficulties in conducting the evaluation, such as the need for repeated instructions to patients and family members and trouble visualizing structures in various positions. The majority of SLPs found online assessment inefficient and time-consuming. A larger percentage of SLPs stated that they would not recommend tele-assessment of dysphagia to their colleagues. SLPs' use of dysphagia assessment has decreased as a result of the pandemic. Regarding the amount of food presented, the majority proposed a small amount. Apart from difficulties positioning the patient for assessment and limited cooperation from the family, most SLPs found that Internet speed was a source of concern and a barrier. Hearing impairment and the presence of a tracheostomy in patients with dysphagia proved to be the most difficult conditions to manage online. For patients on NPO status, the majority of SLPs did not advise tele-assessment. Oral food residue was more visible in the anterior region of the oral cavity, and the majority of SLPs reported more anterior than posterior leakage. Even though the majority of SLPs could detect aspiration by coughing, many found it difficult to discern a gurgly quality of voice after swallowing. Conclusion: The current study sheds light on the difficulties that Indian SLPs experience when assessing dysphagia via tele mode, indicating that tele-assessment of dysphagia has yet to gain traction in India.

Keywords: dysphagia, teleassessment, challenges, Indian SLP

Procedia PDF Downloads 136
42 Breast Cancer Therapy-Related Cardiac Dysfunction Identifying in Kazakhstan: Preliminary Findings of the Cohort Study

Authors: Saule Balmagambetova, Zhenisgul Tlegenova, Saule Madinova

Abstract:

Cardiotoxicity associated with anticancer treatment, now defined as cancer therapy-related cardiac dysfunction (CTRCD), accompanies cancer patients and negatively impacts their survivorship. Currently, a cardio-oncological service is being created in Kazakhstan based on the provisions of the European Society of Cardiology (ESC) cardio-oncology guidelines. Within the framework of a pilot project, a cohort study on CTRCD conditions was initiated at the Aktobe Cancer Center. One hundred twenty-eight newly diagnosed breast cancer patients started on doxorubicin and/or trastuzumab were recruited. Echocardiography with global longitudinal strain (GLS) assessment, a biomarker panel (cardiac troponin (cTnI), brain natriuretic peptide (BNP), myeloperoxidase (MPO), galectin-3 (Gal-3), D-dimers, C-reactive protein (CRP)), and other tests were performed at baseline and every three months. Patients were stratified by cardiovascular risk according to the ESC recommendations and allocated into risk groups during the pre-treatment visit. Of them, 10 (7.8%) patients were assigned to the high-risk group, 48 (37.5%) to the medium-risk group, and 70 (54.7%) to the low-risk group, respectively. High-risk patients have been receiving cardioprotective treatment from the outset. Patients were also divided by treatment: 83 (64.8%) in the anthracycline-based group, 13 (10.2%) in the trastuzumab-only group, and 32 (25%) in the mixed anthracycline/trastuzumab group. Mild symptomatic CTRCD was revealed and treated in 2 (1.6%) participants, and a mild asymptomatic variant in 26 (20.5%). Mild asymptomatic conditions are defined as left ventricular ejection fraction (LVEF) ≥50% with a further relative reduction in GLS by >15% from baseline and/or a further rise in cardiac biomarkers. The listed biomarkers were assessed longitudinally in repeated-measures linear regression models during 12 months of observation. 
The associations between changes in biomarkers and CTRCD, and between changes in biomarkers and LVEF, were evaluated. Analysis by risk group revealed statistically significant differences in baseline LVEF scores (p = 0.001), BNP (p = 0.0075), and Gal-3 (p = 0.0073). No statistically significant differences between treatment groups were found at baseline. After 12 months of follow-up, only LVEF values showed a statistically significant difference by risk group (p = 0.0011). When assessing temporal changes in the studied parameters across all treatment groups, there were statistically significant changes from visit to visit for LVEF (p = 0.003), GLS (p = 0.0001), BNP (p < 0.00001), MPO (p < 0.0001), and Gal-3 (p < 0.0001). No moderate or strong correlations were found between biomarker values and LVEF, or between biomarkers and GLS. Among the biomarkers themselves, a moderate, close to strong correlation was established between cTnI and D-dimer (r = 0.65, p < 0.05). The dose-dependent effect of anthracyclines was confirmed: the cumulative dose has a moderate negative impact on GLS values (r = -0.31 for all treatment groups, p < 0.05). The present study found myeloperoxidase to be a promising biomarker of cardiac dysfunction in the mixed anthracycline/trastuzumab treatment group. The hazard of CTRCD increased by 21% (HR 1.21; 95% CI 1.01-1.73) per doubling of the baseline MPO value (p = 0.041). Increases in BNP were also associated with CTRCD (HR per doubling 1.22; 95% CI 1.12-1.69). No cases of chemotherapy discontinuation due to cardiotoxic complications have been recorded. Further observations are needed to gain insight into the ability of biomarkers to predict CTRCD onset.
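The "hazard ratio per doubling" reporting used above can be illustrated with a small, self-contained sketch (not the study's actual analysis code). When a Cox model is fit on a log2-transformed biomarker, exponentiating the coefficient gives the HR per doubling, and the same estimate can be rescaled to any fold change; the numbers below simply reuse the reported MPO estimate of 1.21 for illustration.

```python
import math

def rescale_hazard_ratio(hr_per_doubling, fold_change):
    """Rescale a hazard ratio reported per doubling of a biomarker to an
    arbitrary fold change: HR_fold = HR_doubling ** log2(fold_change)."""
    return hr_per_doubling ** math.log2(fold_change)

# Reusing the reported MPO estimate (HR 1.21 per doubling of baseline value):
hr_tenfold = rescale_hazard_ratio(1.21, 10.0)   # implied HR per 10-fold rise
hr_doubling = rescale_hazard_ratio(1.21, 2.0)   # sanity check: unchanged
```

A doubling-scale HR of 1.21 thus corresponds to an HR of roughly 1.88 per 10-fold rise in the biomarker, which is one reason log-scale reporting is preferred for heavily skewed markers such as MPO.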

Keywords: breast cancer, chemotherapy, cardiotoxicity, Kazakhstan

Procedia PDF Downloads 92
41 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids

Authors: Laith F. Gulli, Nicole M. Mallory

Abstract:

The undertaking to develop a psychometric instrument is monumental. Understanding the relationship between variables and events is important in the structural and exploratory design of psychometric instruments. Considering this, we describe a method used to group, pair and combine multiple Philosophical Assumption statements that assisted in the development of a 13-item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PA)s and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics which were utilized in design and development for (1) information retrieval and categorization, (2) item development, and (3) estimation of interactions among variables and likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education) and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items, as measured by Cronbach’s α, of .81 in the final study of 40 clinicians, calculated with SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degree/addictions certifications) for identification of target populations that practiced within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, belief items (seven items) and a modifiable “barrier from learning” item that consisted of six “choose any” choices. We also conceptualized a close relationship between identifying various degrees and certifications held by Outpatient Substance Abuse Therapists (OSAT) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both the demographic and education domains to trace relationships of therapist education within these two domains. 
The two perceptions domains B1/b1 and B2/b2 represented different but interrelated perceptions from the therapist perspective. The belief items measured therapist perceptions concerning EB-NAR and therapist perceptions using EB-NAR during the beginning of outpatient addictions counseling. The (PA)s were written in simple words and descriptively accurate and concise. We then devised a list of parameters and appropriately matched them to each PA and devised descriptive parametric (PA)s in a domain categorized information grid. Descriptive parametric (PA)s were reduced to simple mathematical symbols. This made it easy to utilize parametric (PA)s into algorithms, combinatorics and clusters to develop larger information grids. By using matching combinatorics we took paired demographic and education domains with a subscript of 1 and matched them to the column with each B domain with subscript 1. Our algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit combining clusters by different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and trace relationships of maximum likelihoods.
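The internal-consistency figure cited above (Cronbach's α of .81 for the seven Likert belief items, computed in SPSS) follows a simple formula that can be reproduced in a short sketch. The toy data below are invented for illustration and are not the study's data.

```python
def sample_variance(xs):
    """Unbiased (n - 1) sample variance, matching SPSS's default."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent (all the same length).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(rows[0])
    item_cols = [[r[i] for r in rows] for i in range(k)]
    totals = [sum(r) for r in rows]
    item_var_sum = sum(sample_variance(col) for col in item_cols)
    return k / (k - 1) * (1 - item_var_sum / sample_variance(totals))

# Invented 4-respondent, 3-item example: perfectly consistent responses
# (every respondent gives the same score on all items) yield alpha = 1.0.
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [5, 5, 5]])
```

Values around .80, as reported for the seven belief items, are conventionally read as good internal consistency for a short scale.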

Keywords: psychometric, parametric, domains, grids, therapists

Procedia PDF Downloads 278
40 Mangroves in the Douala Area, Cameroon: The Challenges of Open Access Resources for Forest Governance

Authors: Bissonnette Jean-François, Dossa Fabrice

Abstract:

The project focuses on analyzing the spatial and temporal evolution of mangrove forest ecosystems near the city of Douala, Cameroon, in response to increasing human and environmental pressures. The selected study area, located in the Wouri River estuary, has a unique combination of economic importance and ecological prominence. The study gathered valuable insights through semi-structured interviews with resource operators and local officials. A thorough analysis of socio-economic data, farmer surveys, and satellite-derived information was carried out using quantitative approaches in Excel and SPSS. Simultaneously, qualitative data were rigorously classified and correlated with other sources. The use of ArcGIS and CorelDraw facilitated the visual representation of the gradual changes seen in various land cover classifications. The research reveals complex processes that characterize the mangrove ecosystems of Manoka and Cape Cameroon Islands. The lack of regulation of urbanization and the continuous growth of infrastructure have led to a significant increase in land conversion, with negative impacts on natural landscapes and forests. Repeated instances of flooding and coastal erosion have further shaped landscape alterations, fostering the proliferation of water and mudflat areas. The unregulated use of mangrove resources is a significant factor in the degradation of these ecosystems. Activities such as the use of wood for smoking and fishing, together with coastal pollution resulting from the absence of waste collection, have had a significant influence. In addition, forest operators contribute to the degradation of vegetation, exacerbating the harmful impact of invasive species on the ecosystem. Strategic interventions are necessary to guarantee the sustainable management of these ecosystems. 
The proposals include advocating for sustainable wood exploitation techniques, supporting regeneration, and enforcing rules to prevent the overexploitation of wood. Implementing these measures can preserve the ecological balance and safeguard the long-term viability of these precious ecosystems. On a conceptual level, this paper uses the framework developed by Elinor Ostrom and her colleagues to investigate the consequences of open access resources, where local actors have not been able to enforce measures to prevent overexploitation of mangrove wood resources. Governmental authorities have demonstrated limited capacity to enforce sustainable management of wood resources and have not been able to establish effective relationships with local fishing communities or with communities involved in the purchase of wood. As a result, wood resources in the mangrove areas remain largely open to access, while authorities monitor neither the volumes of wood extracted nor the methods of exploitation. There have been only limited and sporadic attempts at forest restoration, with no significant effect on mangrove forest dynamics.

Keywords: mangroves, forest management, governance, open access resources, Cameroon

Procedia PDF Downloads 62
39 Disseminating Positive Psychology Resources Online: Current Research and Future Directions

Authors: Warren Jared, Bekker Jeremy, Salazar Guy, Jackman Katelyn, Linford Lauren

Abstract:

Introduction: Positive Psychology research has burgeoned in the past 20 years; however, relatively few evidence-based resources to cultivate positive psychology skills are widely available to the general public. The positive psychology resources at www.mybestself101.org were developed to assist individuals in cultivating well-being using a variety of techniques, including gratitude, purpose, mindfulness, self-compassion, savoring, personal growth, and supportive relationships. These resources are empirically based and are built to be accessible to a broad audience. Key Objectives: This presentation highlights results from two recent randomized intervention studies of specific MBS101 learning modules. A key objective of this research is to empirically assess the efficacy and usability of these online resources. Another objective of this research is to encourage the broad dissemination of online positive psychology resources; thus, recommendations for further research and dissemination will be discussed. Methods: In both interventions, we recruited adult participants using social media advertisements. The participants completed several well-being and positive psychology construct-specific measures (savoring and self-compassion measures) at baseline and post-intervention. Participants in the experimental condition were also given a feedback questionnaire to gather qualitative data on how participants viewed the modules. Participants in the self-compassion study were randomly split between an experimental group, who received the treatment, and a control group, who were placed on a waitlist. There was no control group for the savoring study. Participants were instructed to read content on the module and practice savoring or self-compassion strategies listed in the module for a minimum of twenty minutes a day for 21 days. 
The intervention was semi-structured, as participants were free to choose which module activities they would complete from a menu of research-based strategies. Participants tracked which activities they completed and how long they spent on the modules each day. Results: In the savoring study, participants increased in savoring ability as indicated by multiple measures. In addition, participants increased in well-being from pre- to post-treatment. In the self-compassion study, repeated measures mixed model analyses revealed that compared to waitlist controls, participants who used the MBS101 self-compassion module experienced significant improvements in self-compassion, well-being, and body image with effect sizes ranging from medium to large. Attrition was 10.5% for the self-compassion study and 71% for the savoring study. Overall, participants indicated that the modules were generally helpful, and they particularly appreciated the specific strategy menus. Participants requested more structured course activities, more interactive content, and more practice activities overall. Recommendations: Mybestself101.org is an applied positive psychology research program that shows promise as a model for effectively disseminating evidence-based positive psychology resources that are both engaging and easily accessible. Considerable research is still needed, both to test the efficacy and usability of the modules currently available and to improve them based on participant feedback. Feedback received from participants in the randomized controlled trial led to the development of an expanded, 30-day online course called The Gift of Self-Compassion and an online mindfulness course currently in development called Mindfulness For Humans.
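The "effect sizes ranging from medium to large" reported above are typically expressed as Cohen's d. A minimal sketch of the pooled-standard-deviation computation follows; the scores are invented for illustration and are not the study's data.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled standard
    deviation: d = (mean_a - mean_b) / SD_pooled."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    var_a = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Invented post-test well-being scores: treatment vs. waitlist control.
d = cohens_d([4, 5, 6], [1, 2, 3])  # both groups have SD 1, mean gap 3
```

By Cohen's conventional benchmarks, d around 0.5 is read as a medium effect and d around 0.8 or above as large, which frames the "medium to large" range reported for the self-compassion module.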

Keywords: positive psychology, intervention, online resources, self-compassion, dissemination, online curriculum

Procedia PDF Downloads 204
38 Mathematics Professional Development: Uptake and Impacts on Classroom Practice

Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier

Abstract:

Although studies of teacher professional development (PD) are prevalent, surprisingly most have produced only incremental shifts in teachers’ learning and their impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD and why we often do not see greater changes in learning and practice. This paper is based on a mixed methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. The extent to which the materials produce a beneficial impact on teachers’ mathematics knowledge, classroom practices, and their students’ knowledge in the domain of geometry is considered through a group-randomized experimental design. Included is a close-up examination of a small group of teachers to better understand their interpretations of the workshops and their classroom uptake. The participants included 103 secondary mathematics teachers serving grades 6-12 from two US states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered to teachers, beginning with a one-week institute (Summer 2016) and continuing with four days of PD throughout the academic year. The same facilitator led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD embedded assessments, pre-post-post videotaped classroom observations, and student assessments. 
Additional data were collected from the case study teachers including additional videotaped classroom observations and interviews. Repeated measures ANOVA analyses were used to detect patterns of change in the treatment teachers’ content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found across the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, the LTG PD intervention and instruction that engaged students in mathematical practices both positively and significantly predicted greater student knowledge gains. Teacher knowledge was not a significant predictor. Twelve treatment teachers self-selected to serve as case study teachers to provide additional videotapes in which they felt they were using something from the PD they learned and experienced. Project staff analyzed the videos, compared them to previous videos and interviewed the teachers regarding their uptake of the PD related to content knowledge, pedagogical knowledge and resources used. The full paper will include the case study of Ana to illustrate the factors involved in what teachers take up and use from participating in the LTG PD.

Keywords: geometry, mathematics professional development, pedagogical content knowledge, teacher learning

Procedia PDF Downloads 125
37 Recognizing Human Actions by Multi-Layer Growing Grid Architecture

Authors: Z. Gharaee

Abstract:

Recognizing actions performed by others is important in our daily lives since it is necessary for communicating with others in a proper way. We perceive an action by observing the kinematics of the motions involved in the performance, and we use our experience and concepts to recognize actions correctly. Although building action concepts is a life-long process that is repeated throughout life, we are very efficient in applying our learned concepts when analyzing motions and recognizing actions. Experiments on subjects observing actions performed by an actor show that an action is recognized after only about two hundred milliseconds of observation. In this study, a hierarchical action recognition architecture is proposed using growing grid layers. The first-layer growing grid receives pre-processed data of consecutive 3D postures of joint positions and applies heuristics during the growth phase to allocate areas of the map by inserting new neurons. As a result of training the first-layer growing grid, action pattern vectors are generated by connecting the elicited activations of the learned map. The ordered vector representation layer receives the action pattern vectors and creates time-invariant vectors of key elicited activations. The time-invariant vectors are sent to the second-layer growing grid for categorization; this grid creates the clusters representing the actions. Finally, a one-layer neural network trained with the delta rule labels the action categories in the last layer. System performance has been evaluated in an experiment with the publicly available MSR-Action3D dataset, which contains actions performed using different parts of the human body: Hand Clap, Two Hands Wave, Side Boxing, Bend, Forward Kick, Side Kick, Jogging, Tennis Serve, Golf Swing, Pick Up and Throw. 
The growing grid architecture was trained over several random selections of training and generalization test data, requiring on average 100 epochs for each training of the first-layer growing grid and around 75 epochs for each training of the second-layer growing grid. The average generalization test accuracy is 92.6%. A comparison between the growing grid architecture and a self-organizing map (SOM) architecture in terms of accuracy and learning speed shows that the growing grid architecture is superior in the action recognition task. The SOM architecture completes learning the same dataset of actions in around 150 epochs for each training of the first-layer SOM, while it takes 1200 epochs for each training of the second-layer SOM, and it achieves an average recognition accuracy of 90% on generalization test data. In summary, using the growing grid network preserves the fundamental features of SOMs, such as topographic organization of neurons, lateral interactions, unsupervised learning, and the representation of a high-dimensional input space in lower-dimensional maps. The architecture also benefits from an automatic size-setting mechanism, resulting in higher flexibility and robustness. Moreover, by utilizing growing grids the system automatically obtains prior knowledge of the input space during the growth phase and applies this information to expand the map by inserting new neurons wherever there is high representational demand.
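The layered architecture above builds on standard SOM mechanics: a best-matching-unit (BMU) search followed by a neighborhood update that produces the topographic organization mentioned in the abstract. A minimal sketch of that shared machinery is given below; the growing grid's neuron-insertion heuristic is omitted, and the grid size and learning-rate schedule are illustrative, not the paper's settings.

```python
import numpy as np

def train_som(data, grid_shape=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal SOM training loop: find the best-matching unit (BMU) for each
    input and pull it and its grid neighbors toward the input."""
    rng = np.random.default_rng(seed)
    h, w = grid_shape
    weights = rng.random((h, w, data.shape[1]))
    # grid coordinates of every neuron, shape (h, w, 2)
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)                # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 1e-3   # shrinking neighborhood
        for x in rng.permutation(data):
            bmu = np.unravel_index(
                np.argmin(np.linalg.norm(weights - x, axis=-1)), (h, w))
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            influence = np.exp(-grid_d2 / (2.0 * sigma ** 2))[..., None]
            weights += lr * influence * (x - weights)
    return weights

def best_matching_unit(weights, x):
    """Map an input to the grid cell of its nearest weight vector."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

Topographic organization means nearby inputs land on nearby grid cells; a growing grid replaces the fixed `grid_shape` with neuron insertion wherever representational demand is high.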

Keywords: action recognition, growing grid, hierarchical architecture, neural networks, system performance

Procedia PDF Downloads 157
36 The Applications of Zero Water Discharge (ZWD) Systems for Environmental Management

Authors: Walter W. Loo

Abstract:

China declared “zero discharge” rules which leave no toxics in our living environment and deliver blue sky, green land and clean water to many generations to come. Achieving ZWD will provide conservation of water, soil and energy and a drastic increase in Gross Domestic Product (GDP). Our society’s engine needs a major tune-up; it is sputtering. ZWD is achieved in the world’s space stations: there is no toxic air emission, the water is totally recycled, and solid wastes all come back to earth, all done with solar power under extreme temperature, pressure and zero gravity. ZWD can be achieved on earth under much smaller fluctuations in temperature and pressure and in a normal-gravity environment. ZWD systems are not expensive and will have multiple beneficial returns on investment that are both financially and environmentally acceptable. The paper includes successful case histories since the mid-1970s. ZWD can be applied to the following types of projects: nuclear and coal-fired power plants, with a closed-loop system that will eliminate thermal water discharge; residential communities, with a wastewater treatment sump that recycles water as a secondary water supply; wastewater treatment plants, with complete water recycling including distillation to produce distilled water by a very economical 24-hour solar power plant. Landfill remediation is based on neutralizing landfill gas odor and preventing anaerobic leachate formation; the resulting aerobic condition renders landfill gas emission explosion-proof. Desert development recovers moisture from soil and completes a closed-loop water cycle by solar energy within and underneath an enclosed greenhouse. Salt-alkali land development can be achieved by solar distillation of shallow salty water into distilled water. 
The distilled water can be used for soil washing and irrigation, completing a closed-loop water cycle with energy and water conservation. Heavy metals remediation can be achieved by precipitating dissolved toxic metals below the plant or vegetation root zone using solar electricity, without pumping and treating. Soil and groundwater remediation: abandoned refineries and chemical and pesticide factories can be remediated by in-situ electrobiochemical and bioventing treatment methods without pumping or excavation; toxic organic chemicals are oxidized into carbon dioxide, and heavy metals are precipitated below the plant and vegetation root zone. New water sources: low-temperature distilled water can be recycled for repeated use within a greenhouse environment by solar distillation; nano bubble water can be made from the distilled water with nano bubbles of oxygen, nitrogen and carbon dioxide from air (fertilizer water), which can also eliminate the use of pesticides because the nano oxygen breaks the insect growth chain at the larval stage. Three-dimensional high-yield greenhouses can be constructed with complete water recycling, using the vadose zone soil as a filter, with no farming wastewater discharge.

Keywords: greenhouses, no discharge, remediation of soil and water, wastewater

Procedia PDF Downloads 344
35 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra

Authors: Bitewulign Mekonnen

Abstract:

Context: This paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models are employed to predict glucose concentration: support vector machine regression (SVMR), partial least squares regression (PLSR), extra tree regression (ETR), random forest regression (RFR), extreme gradient boosting (XGBoost), and principal component analysis-neural network (PCA-NN). The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaged scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. 
Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features. Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.

Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network

Procedia PDF Downloads 94
34 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulation, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges of voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times: these repeated voxels incur costly memory operations while adding no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing a triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. 
Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scan-lines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
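The abstract does not give the exact formula for the Slope Consistency Value, so the sketch below uses one plausible formulation of "alignment of a triangle with its primary axis": the fraction of the triangle's unit normal carried by the normal's dominant component, which is 1.0 for an axis-aligned triangle and drops as the triangle tilts. Both the name and the formula here are illustrative assumptions.

```python
import numpy as np

def slope_consistency(v0, v1, v2):
    """Illustrative slope-consistency measure for a triangle: the fraction of
    its unit normal carried by the dominant axis. Returns (axis, value);
    value is 1.0 for an axis-aligned triangle and drops as it tilts."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in (v0, v1, v2))
    n = np.cross(v1 - v0, v2 - v0)
    norm = np.linalg.norm(n)
    if norm == 0.0:
        raise ValueError("degenerate triangle")
    axis = int(np.argmax(np.abs(n)))   # primary axis of the triangle's normal
    return axis, float(abs(n[axis]) / norm)
```

Under this formulation, low values single out steeply tilted triangles, which are exactly the ones where scan-line stepping can leave gaps between adjacent lines.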

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 72
33 Simple Finite-Element Procedure for Modeling Crack Propagation in Reinforced Concrete Bridge Deck under Repetitive Moving Truck Wheel Loads

Authors: Rajwanlop Kumpoopong, Sukit Yindeesuk, Pornchai Silarom

Abstract:

Modeling cracks in concrete is complicated by its strain-softening behavior, which requires the use of sophisticated energy criteria from fracture mechanics to assure stable and convergent solutions in finite-element (FE) analysis, particularly for relatively large structures. For small-scale structures such as beams and slabs, however, a simpler approach that retains some shear stiffness in the cracking plane has been adopted in the literature to model the strain-softening behavior of concrete under monotonically increased loading. According to the shear retaining approach, each element is assumed to be an isotropic material prior to cracking of the concrete. Once an element is cracked, the isotropic element is replaced with an orthotropic element in which the new orthotropic stiffness matrix is formulated with respect to the crack orientation, and a shear transfer factor of 0.5 is used parallel to the crack plane. The shear retaining approach is adopted in this research to model cracks in an RC bridge deck, with some modifications to take into account the effect of repetitive moving truck wheel loads, which cause fatigue cracking of the concrete. The first modification is the introduction of fatigue tests of concrete and reinforcing steel and the Palmgren-Miner linear criterion of cumulative damage into the conventional FE analysis. For a given loading, the number of cycles to failure of each concrete or RC element can be calculated from the fatigue (S-N) curves of concrete and reinforcing steel. The elements with the minimum number of cycles to failure are the failed elements. For the elements that do not fail, damage is accumulated according to the Palmgren-Miner linear criterion of cumulative damage. The stiffness of each failed element is modified and the procedure is repeated until the deck slab fails. The total number of load cycles to failure of the deck slab can then be obtained, from which the S-N curve of the deck slab can be simulated. 
The second modification concerns the shear transfer factor. Moving loading causes continuous rubbing of the crack interfaces, which greatly reduces the shear transfer mechanism. It is therefore conservatively assumed in this study that the analysis is conducted with a shear transfer factor of zero for the case of moving loading. A customized FE program has been developed using the MATLAB software to accommodate these modifications. The developed procedure has been validated against the fatigue test of the 1/6.6-scale AASHTO bridge deck under both fixed-point repetitive loading and moving loading presented in the literature. Results show good agreement both between experimental and simulated S-N curves and between observed and simulated crack patterns. A significant contribution of the developed procedure is a series of S-N relations that can now be simulated at any desired level of cracking, in addition to the experimentally derived S-N relation at failure of the deck slab. This permits the systematic investigation of crack propagation, or deterioration, of RC bridge decks, which appears to be useful information for highway agencies seeking to prolong the life of their bridge decks.
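The damage-accumulation step described above follows directly from the Palmgren-Miner rule, D = Σ nᵢ/Nᵢ, with failure predicted at D ≥ 1. The sketch below shows the rule and a simplified repeat-until-failure loop; element names, cycles-to-failure values, and the cycle-block size are illustrative, and the full FE procedure would additionally re-read N from the S-N curves and modify element stiffnesses after each failure.

```python
def miner_damage(cycle_blocks):
    """Palmgren-Miner linear cumulative damage: D = sum(n_i / N_i).
    cycle_blocks: iterable of (applied cycles n_i, cycles-to-failure N_i).
    Failure is predicted when D >= 1."""
    return sum(n / N for n, N in cycle_blocks)

def cycles_until_first_failure(elements, block_cycles):
    """Advance all elements in fixed cycle blocks until one reaches D >= 1,
    mimicking the repeat-until-failure loop of the FE procedure.
    elements: dict name -> cycles-to-failure N under the current stress state."""
    damage = {e: 0.0 for e in elements}
    total = 0
    while all(d < 1.0 for d in damage.values()):
        for e, N in elements.items():
            damage[e] += block_cycles / N
        total += block_cycles
    return total, damage
```

In the full procedure, the element returned with D ≥ 1 would be "failed" (its stiffness modified) and the loop restarted, so the surviving elements carry their accumulated damage forward.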

Keywords: bridge deck, cracking, deterioration, fatigue, finite-element, moving truck, reinforced concrete

Procedia PDF Downloads 257
32 Use of Zikani’s Ribosome Modulating Agents for Treating Recessive Dystrophic & Junctional Epidermolysis Bullosa with Nonsense Mutations

Authors: Mei Chen, Yingping Hou, Michelle Hao, Soheil Aghamohammadzadeh, Esteban Terzo, Roger Clark, Vijay Modur

Abstract:

Background: Recessive Dystrophic Epidermolysis Bullosa (RDEB) is a genetic skin condition characterized by skin tearing and unremitting blistering upon minimal trauma. Repeated blistering, fibrosis, and scarring lead to aggressive squamous cell carcinoma later in life. RDEB is caused by mutations in the COL7A1 gene encoding collagen type VII (C7), the major component of the anchoring fibrils mediating epidermis-dermis adherence. Nonsense mutations in the COL7A1 gene of a subset of RDEB patients lead to premature termination codons (PTCs). Similarly, most Junctional Epidermolysis Bullosa (JEB) cases are caused by nonsense mutations in the LAMB3 gene encoding the β3 subunit of laminin 332. Currently, there is an unmet need for treatments for RDEB and JEB. Zikani Therapeutics has discovered an array of macrocyclic compounds with ring structures similar to macrolide antibiotics that can facilitate readthrough of nonsense mutations in the COL7A1 and LAMB3 genes by acting as Ribosome Modulating Agents (RMAs). Medicinal chemistry advancements in the synthesis of these macrocyclic compounds have allowed targeting of the human ribosome while preserving the structural elements responsible for the safety and pharmacokinetic profile of clinically used macrolide antibiotics. Methods: C7 expression was used as a measure of readthrough activity in immunoblot assays in two primary human fibroblast lines from RDEB patients (R578X/R578X and R163X/R1683X-COL7A1). Similarly, immunoblot assays in C325X/c.629-12T > A-LAMB3 keratinocytes were used to measure readthrough activity for JEB. The readthrough activity of each compound was measured relative to Gentamicin. An imaging-based fibroblast migration assay was used to assess C7 functionality in RDEB fibroblasts over 16-20 hrs. The incubation period for the above experiments was 48 hrs for RDEB fibroblasts and 72 hrs for JEB keratinocytes. 
Results: Nine RMAs demonstrated increased protein expression in both patient RDEB fibroblast lines. The highest readthrough activity at tested concentrations without cytotoxicity increased protein expression up to 179% of Gentamicin (400 µg/ml), with stronger readthrough activity in R163X/R1683X-COL7A1 fibroblasts. Concurrent with protein expression, the fibroblast hypermotility phenotype observed in RDEB was rescued, with motility reduced by ~35% to WT levels (the same level as cells treated with 690 µM Gentamicin). Laminin β3 expression was also increased by six RMAs in keratinocytes, to 33-83% of Gentamicin (400 µg/ml). Conclusions: To date, nine RMAs have been identified that enhance the expression of functional C7 in a mutation-dependent manner in two different RDEB patient fibroblast backgrounds (R578X/R578X and R163X/R1683X-COL7A1). A further six RMAs have been identified that enhance readthrough of C325X-LAMB3 in JEB patient keratinocytes. Based on the clinical trial we conducted with topical Gentamicin in 2017, Zikani’s RMAs achieve clinically significant levels of readthrough for the treatment of recessive dystrophic and junctional Epidermolysis Bullosa.

Keywords: epidermolysis bullosa, nonsense mutation, readthrough, ribosome modulation

Procedia PDF Downloads 98
31 Synthesis by Mechanical Alloying and Characterization of FeNi₃ Nanoalloys

Authors: Ece A. Irmak, Amdulla O. Mekhrabov, M. Vedat Akdeniz

Abstract:

There is growing interest in the synthesis and characterization of nanoalloys, since the unique chemical and physical properties of nanoalloys can be tuned and, consequently, new structural motifs can be created by varying the type of constituent elements, atomic and magnetic ordering, as well as the size and shape of the nanoparticles. Due to fine-size effects, magnetic nanoalloys have attracted considerable attention for their enhanced mechanical, electrical, optical and magnetic behavior. As an important magnetic nanoalloy system, Fe-Ni based nanoalloys are expected to find widening application in the chemical and aerospace industries and in magnetic biomedical applications. Noble metals have been used in biomedical applications for several years because of their surface plasmon properties; in this respect, iron-nickel nanoalloys are promising materials for magnetic biomedical applications because they show novel properties such as superparamagnetism and surface plasmon resonance. There is also great interest in using Fe-Ni based nanoalloys as radar-absorbing materials in the aerospace and stealth industries due to their high Curie temperature, high permeability and high saturation magnetization with good thermal stability. In this study, FeNi₃ bimetallic nanoalloys were synthesized by mechanical alloying in a planetary high-energy ball mill. In mechanical alloying, micron-size powders are placed into the mill with the milling media; the powders are repeatedly deformed, fractured and alloyed by high-energy collisions under the impact of the balls until the desired composition and particle size are achieved. The experimental studies were carried out in two parts. First, dry mechanical alloying with high-energy dry planetary ball milling was applied to obtain FeNi₃ nanoparticles. Second, dry milling was followed by surfactant-assisted ball milling to observe the effect of surfactant and solvent on the structure, size, and properties of the FeNi₃ nanoalloys. 
In the first part, the iron-nickel powder sample was prepared with a 1:3 iron-to-nickel ratio to produce FeNi₃ nanoparticles, using a 1:10 powder-to-ball weight ratio. To avoid oxidation during milling, the vials were filled with Ar inert gas before milling started. The powders were milled for 80 hours in total, and the synthesis of the FeNi₃ intermetallic nanoparticles by mechanical alloying was achieved within 40 hours. Regarding particle size, it was found that the amount of nano-sized particles rose with increasing milling time. In the second part of the study, dry milling of the Fe and Ni powders with the same stoichiometric ratio was repeated. Then, to prevent agglomeration and to obtain smaller nanoparticles with superparamagnetic behavior, surfactants and a solvent were added to the system after the 40-hour milling time at which mechanical alloying is complete. During surfactant-assisted ball milling, heptane was used as the milling medium, with oleic acid and oleylamine as the surfactants in the high-energy ball milling processes. The alloyed particles were characterized in terms of microstructure, morphology, particle size, and thermal and magnetic properties with respect to milling time by X-ray diffraction, scanning electron microscopy, energy dispersive spectroscopy, vibrating-sample magnetometry, and differential scanning calorimetry.

Keywords: iron-nickel systems, magnetic nanoalloys, mechanical alloying, nanoalloy characterization, surfactant-assisted ball milling

Procedia PDF Downloads 180
30 Effects and Mechanisms of an Online Short-Term Audio-Based Mindfulness Intervention on Wellbeing in Community Settings and How Stress and Negative Affect Influence the Therapy Effects: Parallel Process Latent Growth Curve Modeling of a Randomized Controlled Trial

Authors: Man Ying Kang, Joshua Kin Man Nan

Abstract:

The prolonged pandemic has posed alarming public health challenges to various parts of the world. With face-to-face mental health treatment largely suspended to control virus transmission, online psychological services and self-help mental health kits have become essential. Online self-help mindfulness-based interventions have proved their effects in fostering mental health for different populations across the globe. This paper tests the effectiveness of an online short-term audio-based mindfulness (SAM) program in enhancing wellbeing and dispositional mindfulness and reducing stress and negative affect in community settings in China, and explores possible mechanisms of how dispositional mindfulness, stress, and negative affect influence the intervention effects on wellbeing. Community-dwelling adults were recruited via online social networking sites (e.g., QQ, WeChat, and Weibo). Participants (n=100) were randomized into a mindfulness group (n=50) and a waitlist control group (n=50). In the mindfulness group, participants were advised to spend 10–20 minutes listening to the audio content, including mindful-form practices (e.g., eating, sitting, walking, or breathing), and to practice the daily mindfulness exercises for 3 weeks (a total of 21 sessions); those in the control group received the same intervention after data collection in the mindfulness group. Participants in the mindfulness group filled in the World Health Organization Five Well-Being Index (WHO), Positive and Negative Affect Schedule (PANAS), Perceived Stress Scale (PSS), and Freiburg Mindfulness Inventory (FMI) four times: at baseline (T0) and at 1 (T1), 2 (T2), and 3 (T3) weeks, while those in the waitlist control group filled in the same scales only at pre- and post-intervention. Repeated-measures analysis of variance, paired-sample t-tests, and independent-sample t-tests were used to analyze the variable outcomes of the two groups. 
Parallel process latent growth curve modeling was used to explore longitudinal moderated mediation effects. The dependent variable was the WHO slope from T0 to T3, the independent variable was group (1=SAM, 2=Control), the mediator was the FMI slope from T0 to T3, and the moderators were T0 NA and T0 PSS, examined separately. Moderator effects on the WHO slope were explored at different levels: low T0 NA or T0 PSS (mean - SD), medium (mean), and high (mean + SD). The results showed that SAM significantly improved and predicted higher WHO and FMI slopes, and significantly reduced NA and PSS. The FMI slope positively predicted the WHO slope and partially mediated the relationship between SAM and the WHO slope. Baseline NA and PSS were significant moderators of the paths between SAM and the WHO slope and between SAM and the FMI slope, respectively. In conclusion, SAM was effective in promoting mental wellbeing, positive affect, and dispositional mindfulness, as well as reducing negative affect and stress, in community settings in China. SAM improved wellbeing faster through faster enhancement of dispositional mindfulness, and medium-to-high baseline negative affect and stress buffered the effects of SAM on the speed of wellbeing improvement.

Keywords: mindfulness, negative affect, stress, wellbeing, randomized control trial

Procedia PDF Downloads 109
29 A Peg Board with Photo-Reflectors to Detect Peg Insertion and Pull-Out Moments

Authors: Hiroshi Kinoshita, Yasuto Nakanishi, Ryuhei Okuno, Toshio Higashi

Abstract:

Various kinds of pegboards have been developed and used widely in rehabilitation research and clinics for the evaluation and training of patients' hand function. A common measure on these peg boards is the total performance time assessed by a tester's stopwatch. The introduction of electrical and automatic measurement technology to the apparatus, on the other hand, has been delayed. The present work introduces the development of a pegboard with electric sensors to detect the moments of each peg's insertion and removal. The work also gives fundamental data obtained from a group of healthy young individuals who performed peg transfer tasks using the pegboard developed. Through trials and errors in pilot tests, two 10-hole peg-board boxes, each installed with a small photo-reflector and a DC amplifier at the bottom of every hole, were designed and built by the present authors. The amplified electric analogue signals from the 20 reflectors were automatically digitized at 500 Hz per channel and stored in a PC. The boxes were set on a test table in parallel at different distances (25, 50, 75, and 125 mm) to examine the effect of hole-to-hole distance. Fifty healthy young volunteers (25 of each gender) served as subjects and performed 80 successive fast peg transfers at each distance using their dominant and non-dominant hands. The data gathered showed a clear-cut light interruption/continuation moment for each peg, allowing the pull-out and insertion times of each peg to be determined accurately (no tester's error involved) and precisely (on the order of milliseconds). This further permitted computation of individual peg movement duration (PMD: from peg lift-off to insertion) apart from hand reaching duration (HRD: from peg insertion to lift-off). An accidental drop of a peg led to an exceptionally long ( > mean + 3 SD) PMD, which was readily detected from an examination of the data distribution.
The PMD data were commonly right-skewed, suggesting that the median can be a better estimate of individual PMD than the mean. Repeated-measures ANOVA using the median values revealed significant hole-to-hole distance and hand dominance effects, suggesting that these need to be fixed in the accurate evaluation of PMD. The gender effect was non-significant. Performance consistency was also evaluated using quartile variation coefficient values, which revealed no gender, hole-to-hole distance, or hand dominance effects. Measurement reliability was further examined using intraclass correlation coefficients obtained from 14 subjects who performed the 25 and 125 mm hole distance tasks in two test sessions separated by 7-10 days. Intraclass correlation values between the two tests showed fair reliability for PMD (0.65-0.75) and for HRD (0.77-0.94). We concluded that the sensor peg board developed in the present study could provide accurate (excluding tester's errors) and precise (at a millisecond rate) time information for peg movement separated from that used for hand movement. It could also easily detect and automatically exclude a participant's erroneous execution data from his/her standard data. These would lead to a better evaluation of hand dexterity function compared to the widely used conventional peg boards.
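The outlier rule and the robust summary statistics described above can be sketched as follows; the durations are hypothetical, with one value standing in for an accidental peg drop:

```python
import statistics

def summarise_pmd(durations_ms):
    """Summarise peg movement durations: drop accidental-peg-drop outliers
    (PMD > mean + 3 SD), then report the median (robust for right-skewed
    data) and the quartile variation coefficient. Illustrative sketch."""
    mean = statistics.mean(durations_ms)
    sd = statistics.stdev(durations_ms)
    kept = [d for d in durations_ms if d <= mean + 3 * sd]
    q1, _, q3 = statistics.quantiles(kept, n=4)   # quartiles of the kept data
    return {
        "median_ms": statistics.median(kept),
        "qcv": (q3 - q1) / (q3 + q1),             # quartile variation coefficient
        "n_dropped": len(durations_ms) - len(kept),
    }

# Hypothetical PMDs in milliseconds; 2000 ms simulates an accidental drop
pmd = [200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 2000]
summary = summarise_pmd(pmd)
```

A right-skewed sample like this illustrates why the abstract prefers the median: the single drop inflates the mean far more than the median.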

Keywords: hand, dexterity test, peg movement time, performance consistency

Procedia PDF Downloads 133
28 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game

Authors: Steven W. Carruthers

Abstract:

The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated “good” model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and posttests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences from post-test scores on test Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%.
Mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resultant skewness and kurtosis worsened compared to the raw score parameters; form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and their respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
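The mean-sigma and linear equating adjustments described in this abstract reduce to simple mean/standard-deviation arithmetic. A minimal sketch, with hypothetical anchor-item difficulties and score moments rather than the study's values:

```python
import statistics

def mean_sigma(anchor_b_new, anchor_b_ref):
    """Mean-sigma transformation: rescale new-form IRT difficulty (b)
    parameters onto the reference-form scale via the common anchor items."""
    a = statistics.stdev(anchor_b_ref) / statistics.stdev(anchor_b_new)    # slope
    b = statistics.mean(anchor_b_ref) - a * statistics.mean(anchor_b_new)  # intercept
    return a, b

def linear_equate(x, mean_new, sd_new, mean_ref, sd_ref):
    """Linear equating (mean-slope adjustment) of a raw score x from the
    new form (Form R) onto the reference form (Form Q) scale."""
    return mean_ref + (sd_ref / sd_new) * (x - mean_new)

# Hypothetical anchor-item difficulties on each form (not the study's values)
b_new = [-1.0, -0.5, 0.0, 0.5, 1.0]
b_ref = [-0.8, -0.3, 0.2, 0.7, 1.2]
slope, intercept = mean_sigma(b_new, b_ref)   # apply as b_ref = slope*b_new + intercept

# Hypothetical raw-score moments for the two forms
equated = linear_equate(25, 24, 4, 26, 5)
```

Here the anchor difficulties differ only by a constant shift, so the slope is 1 and the intercept carries the whole adjustment, which is the degenerate case linear equating handles as well.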

Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating

Procedia PDF Downloads 192
27 Influence of Drier Autumn Conditions on Weed Control Based on Soil Active Herbicides

Authors: Juergen Junk, Franz Ronellenfitsch, Michael Eickermann

Abstract:

Appropriate weed management in autumn is a prerequisite for an economically successful harvest in the following year. In Luxembourg, oilseed rape, wheat, and barley are sown from August until October, accompanied by chemical weed control with soil active herbicides, depending on the state of the weeds and the meteorological conditions. Based on regular ground and surface water analyses, high levels of contamination by transformation products of the respective herbicide compounds have been found in Luxembourg. The ideal conditions for incorporating soil active herbicides are single rain events. Weed control may be reduced if application is made when weeds are under drought stress, or if repeated light rain events are followed by dry spells, because the herbicides then tend to bind tightly to the soil particles. These effects have been frequently reported for Luxembourg throughout the last years. In the framework of a multisite long-term field experiment (EFFO), weed monitoring, plant observations, and corresponding meteorological measurements were conducted. Long-term time series (1947-2016) from the SYNOP station Findel-Airport (WMO ID = 06590) showed a decrease in the number of days with precipitation. As the total precipitation amount has not significantly changed, this indicates a trend towards rain events with higher intensity. All analyses are based on decades (10-day periods) for September and October of each individual year. To assess the future meteorological conditions for Luxembourg, two different approaches were applied. First, multi-model ensembles from the CORDEX experiments (spatial resolution ~12.5 km; transient projections until 2100) were analysed for two different Representative Concentration Pathways (RCP8.5 and RCP4.5), covering the time span from 2005 until 2100. The multi-model ensemble approach allows for the quantification of uncertainties and also for assessing the differences between the two emission scenarios.
Second, to assess smaller-scale differences within the country, a high-resolution model projection using the COSMO-CLM model was used (spatial resolution 1.3 km). To account for the higher computational demands caused by the increased spatial resolution, only 10-year time slices were simulated (reference period 1991-2000; near future 2041-2050; far future 2091-2100). Statistically significant trends towards higher air temperatures, +1.6 K for September (+5.3 K in the far future) and +1.3 K for October (+4.3 K), were projected for the near future compared to the reference period. Precipitation simultaneously decreased by 9.4 mm (September) and 5.0 mm (October) in the near future, and by 49 mm (September) and 10 mm (October) in the far future. Besides the monthly values, decades were also analyzed for the two future time periods of the CLM model. For all decades of September and October, the number of days with precipitation decreased in the projected near and far future. Changes in meteorological variables such as air temperature and precipitation have already induced changes in the weed communities (composition, late emergence, etc.) of arable ecosystems in Europe. Therefore, adaptations of agronomic practices as well as effective weed control strategies must be developed to maintain crop yield.
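The decade-based precipitation statistic used throughout this abstract is a simple aggregation over 10-day windows. A sketch with an invented daily record (the 0.1 mm wet-day threshold is an assumption, not taken from the study):

```python
def precip_days_per_decade(daily_mm, wet_threshold=0.1):
    """Count days with precipitation within each 10-day 'decade', the unit
    used for the September/October analyses. The wet-day threshold is an
    assumption for illustration."""
    decades = [daily_mm[i:i + 10] for i in range(0, len(daily_mm), 10)]
    return [sum(1 for mm in decade if mm >= wet_threshold) for decade in decades]

# Hypothetical 30-day September record (mm/day)
september = [0, 5.2, 0, 0, 1.1, 0, 0, 0, 12.0, 0,
             0, 0, 0.4, 0, 0, 7.5, 0, 0, 0, 0,
             3.3, 0, 0, 0, 0, 0, 9.1, 0, 0.2, 0]
wet_days = precip_days_per_decade(september)   # one count per decade
```

Applying this per decade and per year to the 1947-2016 series would yield the trend in precipitation-day counts the abstract reports.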

Keywords: CORDEX projections, dry spells, ensembles, weed management

Procedia PDF Downloads 235
26 The Impact of Riparian Alien Plant Removal on Aquatic Invertebrate Communities in the Upper Reaches of Luvuvhu River Catchment, Limpopo Province

Authors: Rifilwe Victor Modiba, Stefan Hendric Foord

Abstract:

Invasive alien plants (IAPs) have considerable negative impacts on freshwater habitats, and South Africa has implemented the innovative Working for Water (WfW) programme for the systematic removal of these plants, aimed at, amongst other objectives, restoring biodiversity and ecosystem services in these threatened habitats. These restoration processes are expensive and have to be evidence-based. In this study, in-stream macroinvertebrate and adult Odonata assemblages were used as indicators of restoration success by quantifying the response of biodiversity metrics for these two groups to the removal of IAPs in a strategic water resource of South Africa that is extensively invaded. The study consisted of a replicated design that included 45 sampling units, viz. 15 invaded, 15 uninvaded, and 15 cleared sites stratified across the upper reaches of six sub-catchments of the Luvuvhu River catchment, Limpopo Province. Cleared sites were only considered if they had received at least two WfW treatments in the last 3 years. The benthic macroinvertebrate and adult Odonate assemblages in each of these sampling units were surveyed between November and March of 2013/2014 and 2014/2015, respectively. Generalized linear models (GLMs) with a log link function and Poisson error distribution were fitted for abundance and for metrics across the three invasion classes (invaded, cleared, and uninvaded) whose residuals were not normally distributed or had unequal variance. Redundancy analysis (RDA) was done for EPTO genera (Ephemeroptera, Plecoptera, Trichoptera, and Odonata) and adult Odonata species abundance. GLMs were also fitted for the abundance of genera and Odonates associated with the RDA environmental factors. Sixty-four benthic macroinvertebrate families, 57 EPTO genera, and 45 adult Odonata species were recorded across all 45 sampling units. There was no significant difference between the SASS5 total score, ASPT, and family richness of the three invasion classes.
Although clearing had only a weak positive effect on adult Odonate species richness, it had a positive impact on DBI (Dragonfly Biotic Index) scores. These differences were mainly the result of significantly larger DBI scores in the cleared sites as compared to the invaded sites. Results suggest that water quality is positively impacted by repeated clearing, pointing to the importance of follow-up procedures after initial clearing. Adult Odonate diversity, as measured by richness, endemicity, threat, and distribution, responded positively to all forms of clearing. Clearing had a significant impact on Odonate assemblage structure but did not affect EPTO structure. Variation partitioning showed that 21.8% of the variation in EPTO assemblage structure and 16% of the variation in Odonate structure could be explained by spatial and environmental variables. The response of the diversity metrics to clearing increased in significance at finer taxonomic resolutions, particularly for adult Odonates, whose metrics significantly improved with clearing and whose structure responded to both invasion and clearing. The study recommends the use of the DBI for surveying river health when hydraulic biotopes are poor.
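The variation partitioning mentioned above splits the explained variation between predictor sets by simple subtraction of fit statistics. A minimal sketch; the input R-squared values are illustrative, chosen only to be consistent with the 21.8% total explained variation reported for the EPTO assemblage:

```python
def variation_partition(r2_env, r2_space, r2_both):
    """Partition the variation explained by environmental and spatial
    predictor sets into unique and shared fractions, as in variation
    partitioning on an RDA."""
    return {
        "env_only": r2_both - r2_space,            # unique environmental fraction
        "space_only": r2_both - r2_env,            # unique spatial fraction
        "shared": r2_env + r2_space - r2_both,     # jointly explained fraction
        "unexplained": 1.0 - r2_both,
    }

# Hypothetical fractions consistent with 21.8% total explained variation
parts = variation_partition(r2_env=0.15, r2_space=0.12, r2_both=0.218)
```

The four fractions sum to 1 by construction, which is a convenient sanity check when reporting partitions.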

Keywords: DBI, evidence-based conservation, EPTO, macroinvertebrates

Procedia PDF Downloads 186
25 Development and Evaluation of a Cognitive Behavioural Therapy Based Smartphone App for Low Moods and Anxiety

Authors: David Bakker, Nikki Rickard

Abstract:

Smartphone apps hold immense potential as mental health and wellbeing tools. Support can be made easily accessible and can be used in real-time while users are experiencing distress. Furthermore, data can be collected to enable machine learning and automated tailoring of support to users. While many apps have been developed for mental health purposes, few have adhered to evidence-based recommendations, and even fewer have pursued experimental validation. This paper details the development and experimental evaluation of an app, MoodMission, that aims to provide support for low moods and anxiety, help prevent clinical depression and anxiety disorders, and serve as an adjunct to professional clinical supports. MoodMission was designed to deliver cognitive behavioural therapy for specifically reported problems in real-time, momentary interactions. Users report their low moods or anxious feelings to the app along with a subjective units of distress scale (SUDS) rating. MoodMission then provides a choice of 5-10 short, evidence-based mental health strategies called Missions. Users choose a Mission, complete it, and report their distress again. Automated tailoring, gamification, and in-built data collection for analysis of effectiveness were also included in the app’s design. The development process involved construction of an evidence-based behavioural plan, design of the app, building and testing procedures, feedback-informed changes, and a public launch. A randomized controlled trial (RCT) was conducted comparing MoodMission to two other apps and a waitlist control condition. Participants completed measures of anxiety, depression, wellbeing, emotional self-awareness, coping self-efficacy, and mental health literacy at the start of their app use and 30 days later. At the time of submission (November 2016), over 300 participants had participated in the RCT. Data analysis will begin in January 2017. At the time of this submission, MoodMission has over 4000 users.
A repeated-measures ANOVA of 1390 completed Missions reveals that SUDS (0-10) ratings were significantly reduced between pre-Mission ratings (M=6.20, SD=2.39) and post-Mission ratings (M=4.93, SD=2.25), F(1,1389)=585.86, p < .001, np2=.30. This effect was consistent across both low moods and anxiety. Preliminary analyses of the data from the outcome measures surveys reveal improvements across mental health and wellbeing measures as a result of using the app over 30 days, including a significant increase in coping self-efficacy, F(1,22)=5.91, p=.024, np2=.21. Complete results from the RCT in which MoodMission was evaluated will be presented, as will results from the continuous outcome data being recorded by MoodMission. MoodMission was successfully developed and launched, and preliminary analyses suggest that it is an effective mental health and wellbeing tool. In addition to its clinical applications, the app holds promise as a research tool to conduct component analyses of psychological therapies and overcome constraints of laboratory-based studies. The support provided by the app is discreet, tailored, evidence-based, and transcends barriers of stigma, geographic isolation, financial limitations, and low health literacy.
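The effect sizes quoted above (np2, partial eta squared) can be recovered directly from the reported F ratios and degrees of freedom, which is a handy check when reading results like these:

```python
def partial_eta_squared(f_ratio, df_effect, df_error):
    """Recover partial eta squared from a reported F test:
    np2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f_ratio * df_effect) / (f_ratio * df_effect + df_error)

# Values reported in the abstract: SUDS reduction and coping self-efficacy
np2_suds = partial_eta_squared(585.86, 1, 1389)
np2_cse = partial_eta_squared(5.91, 1, 22)
```

Both computed values round to the np2 figures quoted in the abstract (.30 and .21), confirming the reported statistics are internally consistent.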

Keywords: anxiety, app, CBT, cognitive behavioural therapy, depression, eHealth, mission, mobile, mood, MoodMission

Procedia PDF Downloads 271
24 How Can Personal Protective Equipment Be Best Used and Reused: A Human Factors-Based Look at Donning and Doffing Procedures

Authors: Devin Doos, Ashley Hughes, Trang Pham, Paul Barach, Rami Ahmed

Abstract:

Over 115,000 Health Care Workers (HCWs) have died from COVID-19, and millions have been infected while caring for patients. HCWs have filed thousands of safety complaints due to Personal Protective Equipment (PPE) shortages, including concerns about inadequate PPE and PPE reuse. Protocols for donning and doffing PPE remain ambiguous, lack an evidence base, and often result in wide deviations in practice. Deviations from PPE donning and doffing protocols commonly result in self-contamination but have not been thoroughly addressed. No evidence-driven protocols provide guidance on protecting HCWs during periods of PPE reuse. Objective: The aim of this study was to examine safety-related threats and risks to HCWs due to the reuse of PPE among Emergency Department personnel. Method: We conducted a prospective observational study to examine the risks of reusing PPE. First, ED personnel were asked to don and doff PPE in a simulation lab. Each participant was asked to don and doff PPE five times, according to the maximum reuse recommendation set by the Centers for Disease Control and Prevention (CDC). Each participant was video-recorded; the recordings were reviewed and coded independently by at least 2 of the 3 trained coders for safety behaviors and riskiness of actions. A third coder was brought in when agreement between the 2 coders could not be reached. Agreement between coders was high (81.9%), and all disagreements (100%) were resolved via consensus. A bowtie risk assessment chart was constructed to analyze the factors that contribute to the increased risks HCWs face due to PPE use and reuse. Agreement amongst content experts in the fields of Emergency Medicine, Human Factors, and Anesthesiology was used to select aspects of health care that contribute to and mitigate the risks associated with PPE reuse.
Findings: Twenty-eight clinician participants completed five rounds of donning/doffing PPE, yielding 140 PPE donning/doffing sequences. Two emerging threats were associated with behaviors in donning, doffing, and re-using PPE: (i) direct exposure to contaminant, and (ii) transmission/spread of contaminant. Protective behaviors included: hand hygiene, not touching the patient-facing surface of PPE, and ensuring a proper fit and closure of all PPE materials. All participants (100%, n=28) deviated from the CDC recommended order, and most participants (92.85%, n=26) self-contaminated at least once during reuse. Other frequent errors included failure to tie all ties on the PPE (92.85%, n=26) and failure to wash hands after a contamination event occurred (39.28%, n=11). Conclusions: There are wide variation and regular errors in how HCWs don and doff PPE, including while reusing PPE, which led to self-contamination. Some errors were deemed “recoverable”, such as hand washing after touching a patient-facing surface to remove the contaminant. Other errors, such as using a contaminated mask and accidentally spreading contaminant to the neck and face, can lead to compound risks that are unique to repeated PPE use. A more comprehensive understanding of the threats to HCW safety and a complete approach to mitigating the underlying risks, including visualization with risk management tools, may aid future PPE design and workflow and space solutions.
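The coder-agreement figure reported above (81.9%) is raw percent agreement over paired codes; studies of this kind often also report a chance-corrected statistic such as Cohen's kappa. A minimal sketch with invented safe/risky codes, not the study's data:

```python
def percent_agreement(codes_a, codes_b):
    """Raw inter-coder agreement over paired categorical codes."""
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders (Cohen's kappa)."""
    n = len(codes_a)
    p_obs = percent_agreement(codes_a, codes_b)
    categories = set(codes_a) | set(codes_b)
    p_exp = sum((codes_a.count(c) / n) * (codes_b.count(c) / n)
                for c in categories)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical safe/risky codes from two coders over ten observed behaviours
a = ["safe", "risky", "safe", "safe", "risky", "safe", "safe", "risky", "safe", "safe"]
b = ["safe", "risky", "safe", "risky", "risky", "safe", "safe", "safe", "safe", "safe"]
agreement = percent_agreement(a, b)
kappa = cohens_kappa(a, b)
```

Kappa is lower than raw agreement because it discounts matches expected by chance given each coder's marginal code frequencies.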

Keywords: bowtie analysis, health care, PPE reuse, risk management

Procedia PDF Downloads 90
23 An Empirical Examination of Ethnic Differences in the Use and Experience of Child Healthcare Services in New Zealand

Authors: Terryann Clark, Kabir Dasgupta, Sonia Lewycka, Gail Pacheco, Alexander Plum

Abstract:

This paper focused on two main research aims using data from the Growing Up in New Zealand (GUINZ) birth cohort: 1. To examine ethnic differences in life-course trajectories in the use and experience of healthcare services in the early childhood years (namely immunisation, dental checks, and use of General Practitioners (GPs)); 2. To quantify the contribution of relevant explanatory factors to these ethnic differences. Current policy in New Zealand indicates there should be equitable access to healthcare services by ethnicity, in terms of associated direct costs. However, empirical evidence points to persistent ethnic gaps in several domains. For example, the data highlighted that Māori have the lowest immunisation rates across a number of time points in early childhood, despite having a higher antenatal intention to immunise relative to NZ European. Further, NZ European are much more likely to have their first-choice lead maternity caregiver (LMC) and to use child dental services compared to all other ethnicities. Method: This research explored the underlying mechanisms behind ethnic differences in the use and experience of child healthcare services. First, a multivariate regression analysis was used to adjust raw ethnic gaps in child healthcare utilisation by relevant covariates. These included a range of factors encompassing mobility, socio-economic status, mother and child characteristics, household characteristics, and other social aspects. Second, a decomposition analysis was used to assess the proportion of each ethnic gap that can be explained, as well as the main drivers behind the explained component. The analysis for both econometric approaches was repeated for each data time point available: antenatal, 9 months, 2 years, and 4 years post-birth. Results: The following findings emerged: there is consistent evidence that Asian and Pacific peoples have a higher likelihood of child immunisation relative to NZ Europeans and Māori.
This was evident at all time points except one: Pacific peoples had a lower rate relative to NZ European for receiving all first-year immunisations on time. For a number of potential individual and household predictors of healthcare service utilisation, the association is time-variant across early childhood. For example, socio-economic status appears highly relevant for timely immunisations in a child’s first year but is then insignificant for the 15-month immunisations and those at age 4. Social factors play a key role, including discouragement or encouragement regarding child immunisation. When broken down by source, discouragement by family has the largest marginal effect, followed by health professionals; for encouragement, medical professionals have the largest positive influence. Perceived ethnically motivated discrimination by a health professional was significant with respect to both reducing the likelihood of achieving the first-choice LMC and lowering satisfaction levels with the child’s GP. Some ethnic gaps remained largely unexplained, despite the wealth of factors employed as independent variables in the analysis. These included why Pacific mothers are much less likely to achieve their first-choice LMC compared to NZ Europeans, and the gaps for both Māori and Pacific peoples relative to NZ Europeans concerning dental service use.
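The decomposition analysis described in the Method splits each ethnic gap into an "explained" part (differences in characteristics) and an "unexplained" part (differences in coefficients). A simplified two-fold sketch in the style of such decompositions, with entirely hypothetical means and coefficients:

```python
def twofold_decomposition(xbar_a, xbar_b, beta_a, beta_b):
    """Two-fold decomposition of a mean outcome gap between groups A and B:
    'explained' = differences in characteristics weighted by group B's
    coefficients; 'unexplained' = differences in coefficients weighted by
    group A's characteristics. Illustrative sketch only."""
    explained = sum((xa - xb) * bb for xa, xb, bb in zip(xbar_a, xbar_b, beta_b))
    unexplained = sum(xa * (ba - bb) for xa, ba, bb in zip(xbar_a, beta_a, beta_b))
    return explained, unexplained

# Hypothetical inputs: [intercept, one covariate mean] per group
xbar_a, xbar_b = [1.0, 0.6], [1.0, 0.4]       # mean characteristics
beta_a, beta_b = [0.30, 0.50], [0.25, 0.45]   # group-specific coefficients
explained, unexplained = twofold_decomposition(xbar_a, xbar_b, beta_a, beta_b)
```

By construction, the two parts sum exactly to the raw mean gap between the groups, which is what lets the method report "the proportion of each ethnic gap that can be explained".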

Keywords: child health, cohort analysis, ethnic disparities, primary healthcare

Procedia PDF Downloads 149
22 Higher-Level Return to Female Karate Competition Following Multiple Patella Dislocations

Authors: A. Maso, C. Bellissimo, G. Facchinetti, N. Milani, D. Panzin, D. Pogliana, L. Garlaschelli, L. Rivaroli, S. Rivaroli, M. Zurek, J. Konin

Abstract:

A 15-year-old female karate athlete experienced two unilateral patella dislocations: one contact and one non-contact. These injuries kept her from competing as planned at the regional and national competitions, as she was unable to perform at a high level. Despite these injuries and other complicating factors, she was able to modify her training timeline and successfully perform, winning third place at the National Cup. Initial findings were: pain of 8/10 on the numeric rating scale during isometric karate stances, taking the stairs, and long walks; a positive rasp test; palpation pain of 9/10 on the lateral patella joint; pain performing open kinetic chain exercises at 0°-45° and closed kinetic chain exercises at 30°-90°; and retraction/stiffness of the tensor fascia lata, vastus lateralis, and psoas muscles. Foot hyperpronation, an internally rotated femur, and 15° knee flexion were the postural findings. Exercise was prescribed for three days/week for three weeks, including exercise-based rehabilitation and soft tissue mobilization with massage and foam rolling. After three weeks, pain during activities of daily living had improved to 5/10, and soft tissue stiffness had decreased. An additional four weeks of exercise-based rehabilitation was continued. At this time, axial x-rays and TA-GT TAC were taken, and an orthopaedic medical check recommended continuing conservative treatment. At week seven, she performed 2/4 karate position techniques without pain and 2/4 with pain. An isokinetic test performed at week 12 demonstrated a 10% strength deficit and a 6% resistance deficit, both in the left hamstrings, and an 8% strength and resistance surplus in the left quadriceps. No pain was present during activities of daily living or sports activity, allowing return-to-play training to begin. A return-to-play framework was planned in collaboration with her trainer, her father, a physiotherapist, a sports scientist, an osteopath, and a nutritionist.
At 4 and 5 months, both non-athlete and athlete movement quality analysis tests were performed. The plan established a return-to-play goal of 7 months and a highest-level return-to-competition goal of 9 months from the start of rehabilitation. This included three days/week of training and repeated testing of movement quality before return to competition, with detectable improvements from 77% to 93%. From the beginning, the rehabilitation plan emphasized the importance of a team approach. Collaboration with the patient’s father and trainer was important to ensure a safe and timely return to competition. The possibility of achieving the goals was strongly related to orthopaedic decision-making and progress during the first weeks of rehabilitation. Without complications or setbacks, the patient could successfully return to her highest level of competition. The patient returned to participation after five months of rehabilitation and training, and then returned to competition at the national level in nine months. The successful return was the result of a team approach and a compliant patient with clear goals.

Keywords: karate, knee, performance, rehabilitation

Procedia PDF Downloads 105
21 A Review on Cyberchondria Based on Bibliometric Analysis

Authors: Xiaoqing Peng, Aijing Luo, Yang Chen

Abstract:

Background: Cyberchondria, an "emerging risk" of the information era, is an abnormal pattern characterized by excessive or repeated online searches for health-related information and escalating health anxiety, which endangers people's physical and mental health and poses a huge threat to public health. Objective: To explore and discuss the research status, hotspots, and trends of Cyberchondria. Methods: Based on a total of 77 articles regarding "Cyberchondria" extracted from the Web of Science from inception until October 2019, the literature trends, countries, institutions, and hotspots are analyzed by bibliometric analysis, and the concept definition of Cyberchondria, instruments, relevant factors, and treatment and intervention are discussed as well. Results: Since "Cyberchondria" was put forward for the first time in 2001, the last two decades have witnessed a noticeable increase in the amount of literature; during 2014-2019 the count quadrupled to 62, compared with only 15 before 2014, showing that Cyberchondria has become a new theme and hot topic in recent years. The United States was the most active contributor with the largest number of publications (23), followed by England (11) and Australia (11), while the leading institutions were Baylor University (7) and the University of Sydney (7), followed by Florida State University (4) and the University of Manchester (4). The WoS categories "Psychiatry/Psychology" and "Computer/Information Science" were the areas of greatest influence. The definition of Cyberchondria is not yet completely unified internationally, but it is generally considered an abnormal behavioral pattern and emotional state and has been invoked to refer to the anxiety-amplifying effects of online health-related searches.
The first and most frequently cited scale for measuring the severity of Cyberchondria, the Cyberchondria Severity Scale (CSS), was developed in 2014. It conceptualized Cyberchondria as a multidimensional construct consisting of compulsion, distress, excessiveness, reassurance, and mistrust of medical professionals; the last dimension was later shown to be unnecessary for the construct. Since then, Brazilian, German, Turkish, Polish, and Chinese versions have been developed, improved, and culturally adjusted, while the CSS was optimized to a simplified version (CSS-12) in 2019, all of which warrant further verification. The hotspots of Cyberchondria research mainly focus on relevant factors such as intolerance of uncertainty, anxiety sensitivity, obsessive-compulsive disorder, internet addiction, abnormal illness behavior, the Whiteley Index, and problematic internet use, trying to clarify the roles played by "associated factors" and "anxiety-amplifying factors" in the development of Cyberchondria, so as to better understand the aetiological links and pathways in the relationships between hypochondriasis, health anxiety, and online health-related searches. Although the treatment and intervention of Cyberchondria are still at an early stage of exploration, there have been meaningful attempts to seek effective strategies from different aspects, such as online psychological treatment, network technology management, health information literacy improvement, and public health services. Conclusion: Research on Cyberchondria is in its infancy but deserves more attention. A conceptual consensus on Cyberchondria, a refined assessment tool, prospective studies conducted in various populations, and targeted treatments would be the main research directions in the near future.

Keywords: cyberchondria, hypochondriasis, health anxiety, online health-related searches

Procedia PDF Downloads 122
20 Implementation of Cord-Blood Derived Stem Cells in the Regeneration of Two Experimental Models: Carbon Tetrachloride and S. Mansoni Induced Liver Fibrosis

Authors: Manal M. Kame, Zeinab A. Demerdash, Hanan G. El-Baz, Salwa M. Hassan, Faten M. Salah, Wafaa Mansour, Olfat Hammam

Abstract:

Cord blood (CB) derived Unrestricted Somatic Stem Cells (USSCs), with their multipotentiality, hold great promise in liver regeneration. This work aims to evaluate the therapeutic potential of USSCs in two experimental models of chronic liver injury, induced either by S. mansoni infection in balb/c mice or by CCl4 injection in hamsters. Isolation, propagation, and characterization of USSCs from CB samples were performed. USSCs were induced to differentiate into osteoblasts, adipocytes, and hepatocyte-like cells. Cells of the third passage were transplanted in two models of liver fibrosis: (1) CCl4 hamster model: twenty hamsters were induced to liver fibrosis by repeated i.p. injection of 100 μl CCl4/hamster for 8 weeks. This model was designed as: 10 hamsters with liver fibrosis treated with i.h. injection of 3×10⁶ USSCs (USSCs-transplanted group), 10 hamsters with liver fibrosis (pathological control group), and 10 hamsters with healthy livers (normal control group). (2) Murine chronic S. mansoni model: twenty mice were induced to liver fibrosis with S. mansoni cercariae (60 cercariae/mouse) using the tail immersion method and left for 12 weeks. In this model, 10 mice with liver fibrosis were transplanted with an i.v. injection of 1×10⁶ USSCs (USSCs-transplanted group); the other two groups were designed as in the hamster model. Animals were sacrificed 12 weeks after USSCs transplantation, and their liver sections were examined for detection of human hepatocyte-like cells by immunohistochemistry staining. Moreover, liver sections were examined for fibrosis level, and fibrotic indices were calculated. Sera of sacrificed animals were tested for liver functions. CB USSCs, with fibroblast-like morphology, expressed high levels of CD44, CD90, CD73, and CD105 and were negative for CD34, CD45, and HLA-DR. USSCs showed high expression of transcripts for Oct4 and Sox2 and were differentiated in vitro into osteoblasts and adipocytes.
In both animal models, in vitro induced hepatocyte-like cells were confirmed by cytoplasmic expression of glycogen, alpha-fetoprotein, and cytokeratin 18. Livers of the USSCs-transplanted groups showed engraftment with human hepatocyte-like cells, as proved by cytoplasmic expression of human alpha-fetoprotein, cytokeratin 18, and OV6. In addition, livers of this group showed less fibrosis than those of the pathological control group. Liver functions, in the form of serum AST and ALT levels and serum total bilirubin, were significantly lower in the USSCs-transplanted group than in the pathological control group (p < 0.001). Moreover, the fibrotic index was significantly lower (p < 0.001) in the USSCs-transplanted group than in the pathological control group. In addition, liver sections of mice injected i.v. with 1×10⁶ USSCs, stained with either H&E or Sirius red, showed diminished granuloma size and a relative decrease in hepatic fibrosis. Our experimental liver fibrosis models transplanted with CB-USSCs showed liver engraftment with human hepatocyte-like cells as well as signs of liver regeneration in the form of improvement in liver function assays and fibrosis level. These data suggest that human CB-derived USSCs are multipotent stem cells with great potential in regenerative medicine and strengthen the concept of cellular therapy for the treatment of liver fibrosis.

Keywords: cord blood, liver fibrosis, stem cells, transplantation

Procedia PDF Downloads 309
19 A Proposed Treatment Protocol for the Management of Pars Interarticularis Pathology in Children and Adolescents

Authors: Paul Licina, Emma M. Johnston, David Lisle, Mark Young, Chris Brady

Abstract:

Background: Lumbar pars pathology is a common cause of pain in the growing spine. It can be seen in young athletes participating in at-risk sports and can affect sporting performance and long-term health due to its resistance to traditional management. There is currently a lack of consensus on the classification and treatment of pars injuries. Previous systems used CT to stage pars defects but could not assess early stress reactions. A modified classification is proposed that considers findings on MRI, significantly improving early treatment guidance. The treatment protocol is designed for patients aged 5 to 19 years. Method: Clinical screening identifies patients with a low, medium, or high index of suspicion for lumbar pars injury using patient age, sport participation, and pain characteristics. MRI of the at-risk cohort enables augmentation of the existing CT-based classification while avoiding ionising radiation. Patients are classified into five categories based on MRI findings. A type 0 lesion (stress reaction) is present when CT is normal and MRI shows high signal change (HSC) in the pars/pedicle on T2 images. A type 1 lesion represents the ‘early defect’ CT classification. The group previously referred to as a ‘progressive stage’ defect on CT can be split into 2A and 2B categories. 2As have HSC on MRI, whereas 2Bs do not. This distinction is important with regard to healing potential. Type 3 lesions are terminal stage defects on CT, characterised by pseudarthrosis; MRI shows no HSC. Results: Stress reactions (type 0) and acute fractures (types 1 and 2A) can heal and are treated in a custom-made hard brace for 12 weeks. It is initially worn 23 hours per day. At three weeks, patients commence basic core rehabilitation. At six weeks, in the absence of pain, the brace is removed for sleeping. Exercises are progressed to positions of daily living. Patients with continued pain remain braced 23 hours per day without exercise progression until becoming symptom-free.
At nine weeks, patients commence supervised exercises out of the brace for 30 minutes each day. This allows them to re-learn muscular control without the rigid support of the brace. At 12 weeks, bracing ceases and MRI is repeated. For patients with near or complete resolution of bony oedema and healing of any cortical defect, rehabilitation focuses on strength and conditioning and sport-specific exercise for the full return to activity. The length of this final stage is approximately nine weeks but depends on factors such as development and level of sports participation. If significant HSC remains on MRI, a CT scan is considered to definitively assess cortical defect healing. For these patients, return to high-risk sports is delayed for up to three months. Chronic defects (2B and 3) cannot heal and are not braced, and rehabilitation follows traditional protocols. Conclusion: Appropriate clinical screening and imaging with MRI can identify pars pathology early. In those with potential for healing, we propose hard bracing and appropriate rehabilitation as part of a multidisciplinary management protocol. The validity of this protocol will be tested in future studies.
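The five-category decision rule described above can be sketched as a simple function. The CT-finding labels and return strings below are illustrative stand-ins for the imaging categories named in the protocol, not clinical terminology, and this sketch is in no sense clinical advice:

```python
def classify_pars_lesion(ct_finding, mri_hsc):
    """Map combined CT and MRI findings to the modified classification.
    `ct_finding` is None (normal CT), "early", "progressive", or
    "terminal"; `mri_hsc` is True when MRI shows high signal change
    (HSC) in the pars/pedicle on T2 images."""
    if ct_finding is None:
        return "type 0" if mri_hsc else "normal"   # stress reaction
    if ct_finding == "early":
        return "type 1"
    if ct_finding == "progressive":
        return "type 2A" if mri_hsc else "type 2B"
    if ct_finding == "terminal":
        return "type 3"                            # pseudarthrosis, no HSC
    raise ValueError(f"unknown CT finding: {ct_finding!r}")

# Types 0, 1, and 2A have healing potential and enter the bracing pathway;
# 2B and 3 follow traditional rehabilitation.
HEALABLE = {"type 0", "type 1", "type 2A"}
```

Under this sketch, a normal CT with T2 high signal change maps to a stress reaction (type 0), which the protocol braces; a progressive-stage defect without HSC maps to 2B, which it does not.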

Keywords: adolescents, MRI classification, pars interarticularis, treatment protocol

Procedia PDF Downloads 153
18 Thermal Characterisation of Multi-Coated Lightweight Brake Rotors for Passenger Cars

Authors: Ankit Khurana

Abstract:

Sufficient heat storage capacity, or the ability to dissipate heat, is the most decisive parameter for the effective and efficient functioning of friction-based brake disc systems. The primary aim of the research was to analyse the effect of multiple coatings on lightweight disc rotor surfaces, which not only reduces vehicle mass but also augments heat transfer. This research aims to provide the automotive community with a clear view of the thermal aspects of a braking system. The results of the project indicate that, with the advent of modern coating technologies, a brake system's thermal limitations can be overcome; together with forced convection, heat transfer processes can improve drastically, leading to an increased lifetime of the brake rotor. Other advantages of modifying the surface of a lightweight rotor substrate are a reduced overall vehicle weight, a lower risk of thermal brake failure (brake fade and fluid vaporization), longer component life, and lower noise and vibration characteristics. A mathematical model encompassing the thermal characteristics of the proposed coatings and substrate materials was constructed in MATLAB to approximate the heat flux values in free and forced convection environments, resembling a real-time braking event; this could then be carried into a full-scale model of the alloy brake rotor part in ABAQUS. The finite element model of the brake rotor was constructed in a constrained environment such that the nodal temperatures between the contact surfaces of the coatings and the substrate (wrought aluminum alloy) resembled an amalgamated solid brake rotor element. The initial results obtained were for a Plasma Electrolytic Oxidized (PEO) substrate, wherein the aluminum alloy has a hard ceramic oxide layer grown on its surface.
The rotor was modelled and then evaluated in real time for a constant-'g' braking event (based on the mathematical heat flux input and convective surroundings), which revealed the need to deposit a sacrificial conducting coat above the PEO layer in order to inhibit premature thermal degradation of the barrier coating. A Taguchi study was then used to identify the critical factors that may influence the maximum operating temperature of a multi-coated brake disc by simulating brake tests: a) an Alpine descent lasting 50 seconds; b) an Autobahn stop lasting 3.53 seconds; c) six repeated high-speed stops in accordance with FMVSS 135, lasting 46.25 seconds. Thermal barrier coating thickness and vane heat transfer coefficient were the two most influential factors and, owing to their design and manufacturing constraints, a final optimized model was obtained which survived the six-high-speed-stop test as per the FMVSS 135 specifications. The simulation data highlighted the merits of preferring wrought aluminum alloy 7068 over grey cast iron and aluminum metal matrix composite, in combination with the multiple coating depositions.
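As a first-order illustration of the heat flux input driving such a model, the braking power for a constant-deceleration stop can be estimated from the vehicle's kinetic energy loss. All parameter values below (vehicle mass, front-axle brake share, rubbing area) are assumptions for illustration, not figures from the study:

```python
def rotor_heat_flux(mass_kg, v0_ms, decel_ms2, rub_area_m2,
                    front_share=0.7, t_s=0.0):
    """Estimate the instantaneous heat flux into one front rotor during a
    constant-deceleration stop: braking power m*a*v(t) is split between
    the axles (front_share) and the two front rotors, then divided by
    the rubbing (swept friction) area."""
    v = max(v0_ms - decel_ms2 * t_s, 0.0)      # vehicle speed at time t (m/s)
    power_w = mass_kg * decel_ms2 * v          # total braking power (W)
    per_rotor_w = power_w * front_share / 2.0  # one front rotor's share (W)
    return per_rotor_w / rub_area_m2           # heat flux (W/m^2)

# Example: a 1500 kg car braking from 30 m/s at 0.9 g,
# with an assumed 0.04 m^2 rubbing area per rotor.
q0 = rotor_heat_flux(1500, 30.0, 0.9 * 9.81, 0.04)
```

A time-resolved flux of this form (decaying linearly to zero as the vehicle stops) is the kind of boundary condition such a MATLAB model would hand to the ABAQUS thermal analysis.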

Keywords: lightweight brakes, surface modification, simulated braking, PEO, aluminum

Procedia PDF Downloads 408