Search results for: mechanical behavior of bone
124 Female Subjectivity in William Faulkner's Light in August
Authors: Azza Zagouani
Abstract:
Introduction: In the work of William Faulkner, characters often evade the boundaries and categories of patriarchal standards of order. Female characters like Lena Grove and Joanna Burden cross thresholds in attempts to gain liberation, while others fail to do so. They stand as non-conformists and refuse established patterns of feminine behavior, such as marriage and motherhood. They refute submissiveness, domesticity, and abstinence to reshape their own identities. The presence of independent and creative women represents new, unconventional images of female subjectivity. This paper will examine the structures of submission and oppression faced by Lena and Joanna, and will show how, in the end, they reshape themselves and their identities, and disrupt or even destroy patriarchal structures. Objectives: Participants will understand, through the examples of Lena Grove and Joanna Burden, that female subjectivities are constructions and are constantly subject to change. Approaches: Two approaches will be used in the analysis of the subjectivity formation of Lena Grove and Joanna Burden. First, following the arguments propounded by Judith Butler, we explore the ways in which Lena Grove maneuvers around the restrictions and limitations imposed on her without any physical or psychological violence. She does this by properly performing the roles prescribed to her gendered body. Her repetitious performances of these roles are both the constructs designed to confine women and the vehicle for her travel. Her performance parodies the prescriptive roles and thereby reveals that they are cultural constructions. Second, we explore the argument propounded by Kristeva that subjectivity is always in a state of development because we are always changing with changing circumstances. For example, in Light in August, Lena Grove changes the way she defines herself in light of the events of the novel. Kristeva also describes stages of development: the semiotic stage and the symbolic stage. In Light in August, Joanna shows different levels of subjectivity as time passes. Early in the novel, Joanna is very connected to her upbringing. This suggests Kristeva’s concept of the semiotic, in which the daughter identifies closely with her parents. Kristeva relates the semiotic to a strong daughter/mother connection, but in the novel it is a strong daughter/father/grandfather identification instead. Then, as Joanna becomes sexually involved with Joe, she breaks off and seems to go into an identity crisis; this represents Kristeva’s move from the semiotic to the symbolic. When Joanna returns to religious fanaticism, she is returning to a semiotic state. Detailed outline: At the outset of this paper, we will investigate the subjugation of women: social constraints and the formation of the feminine identity in Light in August. Then, through the examples of Lena Grove’s attempt to cross the boundaries of community moralities and Joanna Burden’s refusal to submit to the standards of submissiveness, domesticity, and abstinence, we will reveal the tension between progressive conceptions of individual freedom and the social constraints that limit this freedom. In the second part of the paper, we will underscore the rhetoric of femininity in Light in August: subjugation through naming. The implications of both females’ names offer a powerful contrast between the two different forms of subjectivity. Conclusion: Through Faulkner’s novel, we demonstrate that female subjectivity is an open-ended issue. 
The spiral shaping of its form maintains its character as a process, changing according to different circumstances.
Keywords: female subjectivity, Faulkner’s Light in August, gender, sexuality, diversity
123 Treatment Process of Sludge from Leachate with an Activated Sludge System and Extended Aeration System
Authors: A. Chávez, A. Rodríguez, F. Pinzón
Abstract:
Society is concerned about the environmental, economic, and social impacts generated by solid waste disposal. Landfills are technically designed and operated disposal sites intended to reduce pollution and damage to human health: using engineering principles, the residue is stored in a small area, compacted to reduce its volume, and covered with soil layers, thereby controlling the liquid (leachate) and gases produced by the decomposition of organic matter. Despite planning, site selection, and the monitoring and control of selected processes, the dilemma of leachate remains: its extreme concentration of pollutants devastates soil, flora, and fauna, an aggressive process requiring priority attention. One biological technology is the activated sludge system, used for influents with high pollutant loads. It transforms biodegradable dissolved and particulate matter into CO2, H2O, and sludge; removes suspended and non-settleable solids; converts nutrients such as nitrogen and phosphorus; and degrades heavy metals. The microorganisms that remove organic matter in these processes are generally facultative heterotrophic bacteria, forming heterogeneous populations. It is also possible to find unicellular fungi, algae, protozoa, and rotifers, which process the organic carbon source and oxygen, as well as nitrogen and phosphorus, because these are vital for cell synthesis. The mixture of the substrate, in this case sludge leachate, molasses, and wastewater, is kept aerated by mechanical aeration diffusers. The biological processes work to remove dissolved material (< 45 microns), generating biomass that is easily separated by decantation. The design consists of an artificial support and aeration pumps, favoring the development of denitrifying microorganisms that use the oxygen bound in nitrate, releasing nitrogen (N) in the gas phase and thus avoiding the negative effects of the presence of ammonia or phosphorus. Overall, the activated sludge system involves a hydraulic retention time of about 8 hours, which does not satisfy the demand for nitrification, which occurs on average at an MLSS of 3,000 mg/L. Extended aeration works with detention times greater than 24 hours, a ratio of organic load to biomass inventory under 0.1, and an average residence time (sludge age) of more than 8 days. This project developed a pilot system with sludge leachate from the Doña Juana landfill (RSDJ), located in Bogotá, Colombia, subjected to an activated sludge and extended aeration process in a sequencing batch reactor (SBR), so that the effluent can be discharged into water bodies without ecological collapse. The 30 L system worked with a residence time of 8 days, removing more than 90% of BOD and COD from initial values of 1,720 mg/L and 6,500 mg/L, respectively. By promoting deliberate nitrification, the commercial use of diffused aeration systems for landfill sludge leachate is expected to become possible.
Keywords: sludge, landfill, leachate, SBR
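As a quick check on the reported figures, removal efficiency reduces to simple arithmetic on influent and effluent concentrations. A minimal sketch (Python), where the 92% removal figure is a hypothetical stand-in for the reported "above 90%":

```python
# Removal efficiency: E = (C_in - C_out) / C_in * 100, rearranged for C_out.
def effluent(c_in_mg_l, removal_pct):
    return c_in_mg_l * (1 - removal_pct / 100)

bod_in, cod_in = 1720.0, 6500.0   # influent values from the pilot study (mg/L)
removal = 92.0                    # hypothetical, "above 90%" per the abstract
print(effluent(bod_in, removal))  # ~137.6 mg/L BOD remaining in the effluent
print(effluent(cod_in, removal))  # ~520.0 mg/L COD remaining in the effluent
```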
122 OpenFOAM Based Simulation of High Reynolds Number Separated Flows Using Bridging Method of Turbulence
Authors: Sagar Saroha, Sawan S. Sinha, Sunil Lakshmipathy
Abstract:
The Reynolds-averaged Navier-Stokes (RANS) model is the popular computational tool for the prediction of turbulent flows. Being computationally less expensive than direct numerical simulation (DNS), RANS has received wide acceptance in industry and the research community. However, for high Reynolds number flows, the traditional RANS approach based on the Boussinesq hypothesis cannot capture all the essential flow characteristics, and thus its performance is restricted in high Reynolds number flows of practical interest. RANS performance turns out to be inadequate in regimes like flow over curved surfaces, flows with rapid changes in the mean strain rate, duct flows involving secondary streamlines, and three-dimensional separated flows. In the recent decade, the partially averaged Navier-Stokes (PANS) methodology has gained acceptability among seamless bridging methods of turbulence, placed between DNS and RANS. The PANS methodology, being a scale-resolving bridging method, is inherently more suitable than RANS for simulating turbulent flows. The superior ability of the PANS method has been demonstrated for cases like swirling flows, high-speed mixing environments, and high Reynolds number turbulent flows. In our work, we intend to evaluate PANS for separated turbulent flows past bluff bodies, which are of broad aerodynamic research and industrial interest. PANS equations, being derived from base RANS, inherit the inadequacies of the parent RANS model based on the linear eddy-viscosity model (LEVM) closure. To enhance PANS’ capabilities for simulating separated flows, the shortcomings of the LEVM closure need to be addressed. The inabilities of LEVMs have inspired the development of non-linear eddy-viscosity models (NLEVM). To explore the potential improvement in PANS performance, our study evaluates PANS behavior in conjunction with an NLEVM. Our work can be categorized into three significant steps: (i) extraction of the PANS version of the NLEVM from the RANS model, (ii) testing the model in a homogeneous turbulence environment, and (iii) application and evaluation of the model in the canonical case of separated non-homogeneous flow fields (flow past prismatic bodies and bodies of revolution at high Reynolds number). The PANS version of the NLEVM shall be derived and implemented in OpenFOAM, an open-source solver. The homogeneous flow evaluation will comprise a study of the influence of the PANS filter-width control parameter on the turbulent stresses, homogeneous analysis performed over typical velocity fields, and asymptotic analysis of the Reynolds stress tensor. The non-homogeneous flow case will include a study of mean integrated quantities and various instantaneous flow field features, including wake structures. The performance of PANS + NLEVM shall be compared against LEVM-based PANS and LEVM-based RANS. This assessment will contribute to a significant improvement in the predictive ability of computational fluid dynamics (CFD) tools for massively separated turbulent flows past bluff bodies.
Keywords: bridging methods of turbulence, high Re-CFD, non-linear PANS, separated turbulent flows
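To make the role of the filter-width control parameter concrete: in the standard PANS k-ε formulation, the unresolved eddy viscosity is the RANS value rescaled by fk²/fε, so lowering fk releases more resolved scales. A minimal sketch (Python), not the OpenFOAM implementation itself, illustrating this scaling with the usual Cμ = 0.09 and illustrative k, ε values:

```python
# PANS k-epsilon rescaling (Girimaji-type): k_u = f_k * k, eps_u = f_e * eps,
# hence nu_u = C_mu * k_u**2 / eps_u = (f_k**2 / f_e) * nu_RANS.
C_MU = 0.09

def pans_eddy_viscosity(k, eps, f_k, f_e=1.0):
    """Unresolved eddy viscosity for given RANS k, eps and filter parameters."""
    k_u, eps_u = f_k * k, f_e * eps
    return C_MU * k_u**2 / eps_u

k, eps = 1.5, 0.8                                  # illustrative RANS values
nu_rans = pans_eddy_viscosity(k, eps, f_k=1.0)     # f_k = 1 recovers RANS
for f_k in (1.0, 0.7, 0.4):
    print(f_k, pans_eddy_viscosity(k, eps, f_k) / nu_rans)  # ratio equals f_k**2
```

With fε = 1 (the high-Reynolds-number limit), the eddy viscosity drops as fk², which is the mechanism by which PANS resolves more of the separated wake than the parent RANS closure.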
121 Comparative Studies on the Needs and Development of Autotronic Maintenance Training Modules for the Training of Automobile Independent Workshop Service Technicians in North – Western Region, Nigeria
Authors: Muhammad Shuaibu Birniwa
Abstract:
Automobile independent workshop service technicians (popularly called roadside mechanics) are the technical personnel who repair most of the automobile vehicles in Nigeria. The majority of these mechanics acquired their skills through apprenticeship training. Modern vehicles imported into the country pose great challenges to present automobile technicians, particularly in carrying out maintenance repairs of the latest (autotronic) vehicles, because the technicians lack autotronic skills competency. To find a solution to these problems, research was carried out in the North-Western region of Nigeria to produce suitable maintenance training modules that can be used to train the technicians so that they can upgrade and acquire the competencies needed for successful maintenance repair of the autotronic vehicles running every day on the nation’s roads. A cluster sampling technique was used to obtain a sample from the population, which comprised all autotronic-inclined lecturers, instructors, and independent workshop service technicians within the North-Western region of Nigeria. The seven states in the study area (Jigawa, Kaduna, Kano, Katsina, Kebbi, Sokoto, and Zamfara) served as clusters. Five states, Jigawa, Kano, Katsina, Kebbi, and Zamfara, were randomly selected as the sample. The entire population of the five states, 183 respondents consisting of 44 lecturers, 49 instructors, and 90 autotronic independent workshop service technicians, was used in the study because of its manageable size. 183 copies of the autotronic maintenance training module questionnaires (AMTMQ), with 174 and 149 question items respectively, were administered and collected by the researcher with the help of assistants; they were administered to 44 polytechnic lecturers in departments of mechanical engineering, 49 instructors in skills acquisition centres and polytechnics, and 90 autotronic-inclined master craftsmen of independent workshops. Data collected for answering research questions 1, 3, 4, and 5 were analysed using SPSS software version 22; the grand mean and standard deviation were used to answer the research questions. Analysis of variance (ANOVA) was used to test null hypotheses one (1) to three (3), and the t-test was used to analyse hypotheses four (4) and five (5), all at the 0.05 level of significance. The research revealed that all the objectives, contents/tasks, facilities, delivery systems, and evaluation techniques contained in the questionnaire were required for the development of the autotronic maintenance training modules for independent workshop service technicians in the North-Western zone of Nigeria. The skills upgrade training conducted by the federal government in collaboration with SURE-P, NAC, and SMEDAN was not successful because the educational status of the target population was not considered in drafting the needed training modules. The mode of training used also did not take cognizance of the theoretical preparation of the trainees, especially in basic science, which rendered the programme ineffective and insufficient for the tasks on the ground.
Keywords: autotronics, roadside, mechanics, technicians, independent
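A minimal sketch (Python) of the hypothesis testing described above, a one-way ANOVA across the three respondent groups followed by an unpaired t-test, at the 0.05 level; the Likert scores below are randomly generated placeholders, not AMTMQ data:

```python
# One-way ANOVA across lecturers, instructors, and technicians, plus a t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder mean-response scores per respondent on a 1-5 Likert scale.
lecturers   = rng.normal(4.1, 0.5, 44).clip(1, 5)
instructors = rng.normal(4.0, 0.5, 49).clip(1, 5)
technicians = rng.normal(3.8, 0.6, 90).clip(1, 5)

f_stat, p_anova = stats.f_oneway(lecturers, instructors, technicians)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")   # reject H0 if p < 0.05

t_stat, p_t = stats.ttest_ind(lecturers, technicians, equal_var=False)
print(f"t-test: t = {t_stat:.2f}, p = {p_t:.3f}")
```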
120 Explanation of Sentinel-1 Sigma 0 by Sentinel-2 Products in Terms of Crop Water Stress Monitoring
Authors: Katerina Krizova, Inigo Molina
Abstract:
The ongoing climate change affects various natural processes, resulting in significant changes in human life. Since the planet’s still-growing human population has more or less limited resources, agricultural production has become an issue, and a satisfactory amount of food has to be assured. To achieve this, agriculture is being studied in a very wide context. The main aim is to increase primary production per spatial unit while consuming as few resources as possible. In Europe nowadays, the staple issue is the significantly changing spatial and temporal distribution of precipitation. Recent growing seasons have been considerably affected by long drought periods that have led to quantitative as well as qualitative yield losses. To cope with such conditions, new techniques and technologies are being implemented in current practice. However, behind any assessment of the right management there is always a set of necessary information about plot properties that needs to be acquired. Remotely sensed data have gained attention in recent decades since they provide spatial information about the studied surface based on its spectral behavior. A number of space platforms have been launched carrying various types of sensors. Spectral indices based on calculations with reflectance in visible and NIR bands are nowadays quite commonly used to describe crop status. However, this kind of data still has a major limitation: cloudiness. The relatively frequent revisits of modern satellites cannot be fully utilized since the information is hidden under the clouds. Therefore, microwave remote sensing, which can penetrate the atmosphere, is on the rise today. The scientific literature describes the potential of radar data to estimate key soil (roughness, moisture) and vegetation (LAI, biomass, height) properties. Although all of these are in high demand for agricultural monitoring, crop moisture content is the most important parameter for agricultural drought monitoring. The idea behind this study was to exploit the unique combination of SAR (Sentinel-1) and optical (Sentinel-2) data from one provider (ESA) to describe potential crop water stress during the dry cropping season of 2019 at six winter wheat plots in the central Czech Republic. For the period of January to August, Sentinel-1 and Sentinel-2 images were obtained and processed. Sentinel-1 imagery carries information about C-band backscatter in two polarisations (VV, VH). Sentinel-2 was used to derive vegetation properties (LAI, FVC, NDWI, and SAVI) in support of the Sentinel-1 results. For each term and plot, summary statistics were performed, including precipitation data and soil moisture content obtained through data loggers. Results were presented as summary layouts of VV and VH polarisations and related plots describing the other properties. All plots behaved in accordance with the basic SAR backscatter equation. Considering the needs of practical applications, vegetation moisture content may be assessed using SAR data to predict the drought impact on final product quality and yields independently of cloud cover over the studied scene.
Keywords: precision agriculture, remote sensing, Sentinel-1, SAR, water content
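A minimal sketch (Python/NumPy) of the index calculations named above, assuming Gao's NDWI from the NIR (B8A) and SWIR (B11) bands and the standard soil-adjustment factor L = 0.5 for SAVI; the band arrays are placeholders for atmospherically corrected Sentinel-2 reflectance:

```python
import numpy as np

def ndwi(nir, swir):
    """Normalized Difference Water Index (Gao): sensitive to canopy water content."""
    return (nir - swir) / (nir + swir)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index with soil brightness correction factor L."""
    return (1 + L) * (nir - red) / (nir + red + L)

# Placeholder reflectance arrays standing in for Sentinel-2 B8A, B11, B04 rasters.
b8a = np.array([[0.35, 0.40], [0.38, 0.42]])   # NIR
b11 = np.array([[0.20, 0.22], [0.25, 0.21]])   # SWIR
b04 = np.array([[0.08, 0.07], [0.10, 0.09]])   # red

print(ndwi(b8a, b11))   # higher values suggest a wetter canopy
print(savi(b8a, b04))
```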
119 The 5-HT1A Receptor Biased Agonists, NLX-101 and NLX-204, Elicit Rapid-Acting Antidepressant Activity in Rat Similar to Ketamine and via GABAergic Mechanisms
Authors: A. Newman-Tancredi, R. Depoortère, P. Gruca, E. Litwa, M. Lason, M. Papp
Abstract:
The N-methyl-D-aspartic acid (NMDA) receptor antagonist, ketamine, can elicit rapid-acting antidepressant (RAAD) effects in treatment-resistant patients, but it requires parenteral co-administration with a classical antidepressant under medical supervision. In addition, ketamine can produce serious side effects that limit its long-term use, and there is much interest in identifying RAADs based on ketamine’s mechanism of action but with safer profiles. Ketamine elicits GABAergic interneuron inhibition, glutamatergic neuron stimulation, and, notably, activation of serotonin 5-HT1A receptors in the prefrontal cortex (PFC). Direct activation of the latter receptor subpopulation with selective ‘biased agonists’ may therefore be a promising strategy to identify novel RAADs and, consistent with this hypothesis, the prototypical cortical biased agonist, NLX-101, exhibited robust RAAD-like activity in the chronic mild stress (CMS) model of depression. The present study compared the effects of a novel, selective 5-HT1A receptor biased agonist, NLX-204, with those of ketamine and NLX-101. Materials and methods: The CMS procedure was conducted on Wistar rats; drugs were administered either intraperitoneally (i.p.) or by bilateral intracortical microinjection. Ketamine: 10 mg/kg i.p. or 10 µg/side in PFC; NLX-204 and NLX-101: 0.08 and 0.16 mg/kg i.p. or 16 µg/side in PFC. In addition, interaction studies were carried out with systemic NLX-204 or NLX-101 (each at 0.16 mg/kg i.p.) in combination with intracortical WAY-100635 (selective 5-HT1A receptor antagonist; 2 µg/side) or muscimol (GABA-A receptor agonist; 12.5 ng/side). Anhedonia was assessed by the CMS-induced decrease in sucrose solution consumption; anxiety-like behavior was assessed using the Elevated Plus Maze (EPM), and cognitive impairment was assessed by the Novel Object Recognition (NOR) test. Results: A single administration of NLX-204 was sufficient to reverse the CMS-induced deficit in sucrose consumption, similarly to ketamine and NLX-101. NLX-204 also reduced CMS-induced anxiety in the EPM and abolished CMS-induced NOR deficits. These effects were maintained (EPM and NOR) or enhanced (sucrose consumption) over a subsequent 2-week period of treatment. The anti-anhedonic response to the drugs was also maintained for several weeks following treatment discontinuation, suggesting that they had sustained effects on neuronal networks. A single PFC administration of NLX-204 reversed deficient sucrose consumption, similarly to ketamine and NLX-101. Moreover, the anti-anhedonic activities of systemic NLX-204 and NLX-101 were abolished by coadministration with intracortical WAY-100635 or muscimol. Conclusions: (i) The antidepressant-like activity of NLX-204 in the rat CMS model was as rapid as that of ketamine or NLX-101, supporting the targeting of cortical 5-HT1A receptors with selective biased agonists to achieve RAAD effects. (ii) The anti-anhedonic activity of systemic NLX-204 was mimicked by local administration of the compound in the PFC, confirming the involvement of cortical circuits in its RAAD-like effects. (iii) Notably, the effects of systemic NLX-204 and NLX-101 were abolished by PFC administration of muscimol, indicating that they act by (indirectly) eliciting a reduction in cortical GABAergic neurotransmission. This is consistent with ketamine’s mechanism of action and suggests that there are converging NMDA and 5-HT1A receptor signaling cascades in the PFC underlying the RAAD-like activities of ketamine and NLX-204. 
Acknowledgements: The study was financially supported by NCN grant no. 2019/35/B/NZ7/00787.
Keywords: depression, ketamine, serotonin, 5-HT1A receptor, chronic mild stress
118 Innovation Outputs from Higher Education Institutions: A Case Study of the University of Waterloo, Canada
Authors: Wendy De Gomez
Abstract:
The University of Waterloo is situated in central Canada in the Province of Ontario, one hour from the metropolitan city of Toronto. For over 30 years, it has held Canada’s top spot as the most innovative university and has been consistently ranked among the top 25 computer science and top 50 engineering schools in the world. Waterloo benefits from the federal government’s over 100 domestic innovation policies, which have assisted in the country’s 15th-place global ranking in the World Intellectual Property Organization’s (WIPO) 2022 Global Innovation Index. Yet undoubtedly, the University of Waterloo’s unique characteristics are what propel its innovative creativeness forward. This paper will provide a contextual definition of innovation in higher education and then demonstrate the five operational attributes that contribute to the University of Waterloo’s innovative reputation. The methodology is based on statistical analyses obtained from ranking bodies such as the QS World University Rankings, a secondary literature review related to higher education innovation in Canada, and case studies that exhibit the operationalization of the attributes outlined below. The first attribute is geography. Specifically, the paper investigates the network structure effect of the Toronto-Waterloo high-tech corridor and the industrial relationships built there. The second attribute is University Policy 73, Intellectual Property Rights. This creator-owned policy grants all ownership to the creator/inventor regardless of the use of University of Waterloo property or funding. Essentially, by incentivizing IP ownership by all researchers, further commercialization and entrepreneurship are fostered. Third, this IP policy works hand in hand with world-renowned business incubators such as the Accelerator Centre in the dedicated research and technology park and Velocity, a 14-year-old facility that equips and guides founders to build and scale companies. Communitech, a 25-year-old provincially backed facility in the region, also works closely with the University of Waterloo to build strong teams, access capital, and commercialize products. Fourth, Waterloo’s co-operative education program contributes 31% of all co-op participants to the Canadian economy. Home to the world’s largest co-operative education program, data show that over 7,000 employers from around the world recruit Waterloo students for short- and long-term placements, directly contributing to the students’ ability to learn and optimize essential employment skills when they graduate. Finally, the students themselves at Waterloo are exceptional. The entrance average ranges from the low 80s to the mid-90s depending on the program. In computer, electrical, mechanical, mechatronics, and systems design engineering, to have a 66% chance of acceptance, the applicant’s average must be 95% or above. Individually, none of these five attributes could account for the university’s outstanding track record of innovative creativity, but when bundled into a 1,000-acre, 100-building main campus with 6 academic faculties, 40,000+ students, and over 1,300 world-class faculty, the recipe for success becomes quite evident.
Keywords: IP policy, higher education, economy, innovation
117 Gas-Phase Noncovalent Functionalization of Pristine Single-Walled Carbon Nanotubes with 3D Metal(II) Phthalocyanines
Authors: Vladimir A. Basiuk, Laura J. Flores-Sanchez, Victor Meza-Laguna, Jose O. Flores-Flores, Lauro Bucio-Galindo, Elena V. Basiuk
Abstract:
Noncovalent nanohybrid materials combining carbon nanotubes (CNTs) with phthalocyanines (Pcs) are a subject of increasing research effort, with a particular emphasis on the design of new heterogeneous catalysts, efficient organic photovoltaic cells, lithium batteries, gas sensors, and field-effect transistors, among other possible applications. The possibility of using unsubstituted Pcs for CNT functionalization is very attractive due to their very moderate cost and easy commercial availability. Unfortunately, however, the deposition of unsubstituted Pcs onto nanotube sidewalls through traditional liquid-phase protocols turns out to be very problematic due to the extremely poor solubility of Pcs. On the other hand, the unsubstituted free-base H₂Pc phthalocyanine ligand, as well as many of its transition metal complexes, exhibits very high thermal stability and considerable volatility under reduced pressure, which opens the possibility of their physical vapor deposition onto solid surfaces, including nanotube sidewalls. In the present work, we show the possibility of simple, fast, and efficient noncovalent functionalization of single-walled carbon nanotubes (SWNTs) with a series of 3d metal(II) phthalocyanines Me(II)Pc, where Me = Co, Ni, Cu, and Zn. The functionalization can be performed in a temperature range of 400-500 °C under moderate vacuum and requires only about 2-3 h. The functionalized materials obtained were characterized by means of Fourier-transform infrared (FTIR), Raman, UV-visible and energy-dispersive X-ray spectroscopy (EDS), scanning and transmission electron microscopy (SEM and TEM, respectively), and thermogravimetric analysis (TGA). TGA suggested that the Me(II)Pc weight content is 30%, 17%, and 35% for NiPc, CuPc, and ZnPc, respectively (CoPc exhibited anomalous thermal decomposition behavior). The above values are consistent with those estimated from EDS spectra, namely 24-39%, 27-36%, and 27-44% for CoPc, CuPc, and ZnPc, respectively. A strong increase in the intensity of the D band in the Raman spectra of SWNT‒Me(II)Pc hybrids, as compared to that of pristine nanotubes, implies very strong interactions between Pc molecules and SWNT sidewalls. Very high absolute values of binding energies of 32.46-37.12 kcal/mol, together with the highest occupied and lowest unoccupied molecular orbital (HOMO and LUMO, respectively) distribution patterns, calculated with density functional theory using the Perdew-Burke-Ernzerhof generalized gradient approximation correlation functional in combination with Grimme’s empirical dispersion correction (PBE-D) and the double numerical basis set (DNP), also suggest that the interactions between Me(II) phthalocyanines and nanotube sidewalls are very strong. The authors thank the National Autonomous University of Mexico (grant DGAPA-IN200516) and the National Council of Science and Technology of Mexico (CONACYT, grant 250655) for financial support. The authors are also grateful to Dr. Natalia Alzate-Carvajal (CCADET of UNAM), Eréndira Martínez (IF of UNAM), and Iván Puente-Lee (Faculty of Chemistry of UNAM) for technical assistance with FTIR and TGA measurements and TEM imaging, respectively.
Keywords: carbon nanotubes, functionalization, gas-phase, metal(II) phthalocyanines
116 The Dark History of American Psychiatry: Racism and Ethical Provider Responsibility
Authors: Mary Katherine Hoth
Abstract:
Despite racial and ethnic disparities in American psychiatry being well-documented, there remains an apathetic attitude among nurses and providers within the field toward engaging in active antiracism and providing equitable, recovery-oriented care. It is insufficient to be a “colorblind” nurse or provider and state that all care provided is identical for every patient. Maintaining an attitude of “colorblindness” perpetuates the racism prevalent throughout healthcare and leads to negative patient outcomes. The purpose of this literature review is to highlight how the historical beginnings of psychiatry have evolved into the disparities seen in today’s practice, as well as to provide some insight on methods that providers and nurses can employ to actively challenge these racial disparities. Background: The application of psychiatric medicine to White people versus Black, Indigenous, and other People of Color has been distinctly different as a direct result of chattel slavery and the development of pseudoscientific “diagnoses” in the 19th century. This weaponization of the mental health of Black people continues to this day. Population: The populations discussed are Black, Indigenous, and other People of Color, with a primary focus on Black people’s experiences with their mental health and the field of psychiatry. Methods: A literature review was conducted using CINAHL, EBSCO, MEDLINE, and PubMed databases with the following terms: psychiatry, mental health, racism, substance use, suicide, trauma-informed care, disparities, and recovery-oriented care. Articles were further filtered based on meeting the criteria of peer-reviewed, full-text availability, written in English, and published between 2018 and 2023. Findings: Black patients are more likely to be diagnosed with psychotic disorders and prescribed antipsychotic medications compared to White patients, who are more often diagnosed with mood disorders and prescribed antidepressants. This same disparity is also seen in children and adolescents, where Black children are more likely to be diagnosed with behavior problems such as Oppositional Defiant Disorder (ODD) and White children with the same presentation are more likely to be diagnosed with Attention-Deficit/Hyperactivity Disorder. Medication advertisements for antipsychotics like Haldol as recently as 1974 portrayed a Black man, labeled as “agitated” and “aggressive”, a trope we still see today in police violence cases. The majority of nursing and medical school programs do not provide education on racism and how to actively combat it in practice, leaving many healthcare professionals acutely uneducated and unaware of their own biases and racism, as well as of structural and institutional racism. Conclusions: Racism will continue to grow wherever it is given time, space, and energy. Providers and nurses have an ethical obligation to educate themselves, actively deconstruct their personal racism and bias, and continuously engage in active antiracism by dismantling racism wherever it is encountered, be it structural, institutional, or scientific. Agents of change at the patient care level not only improve the outcomes of Black patients but will also lead the way in ensuring Black, Indigenous, and other People of Color are included in research on methods and medications in psychiatry in the future.
Keywords: disparities, psychiatry, racism, recovery-oriented care, trauma-informed care
115 Assessing P0.1 and Occlusion Pressures in Brain-Injured Patients on Pressure Support Ventilation: A Study Protocol
Authors: S. B. R. Slagmulder
Abstract:
Monitoring inspiratory effort and dynamic lung stress in patients on pressure support ventilation in the ICU is important for protecting against patient self-inflicted lung injury (P-SILI) and diaphragm dysfunction. Strategies to address the detrimental effects of respiratory drive and effort can lead to improved patient outcomes. Two non-invasive estimation methods, occlusion pressure (Pocc) and P0.1, have been proposed for achieving lung- and diaphragm-protective ventilation. However, their relationship and interpretation in neuro-ICU patients are not well understood. P0.1 is the airway pressure measured during a 100-millisecond occlusion of the inspiratory port. It reflects the neural drive from the respiratory centers to the diaphragm and respiratory muscles, indicating the patient's respiratory drive during the initiation of each breath. Occlusion pressure, measured during a brief inspiratory pause against a closed airway, provides information about the inspiratory muscles' strength and the system's total resistance and compliance. Research objective: Understanding the relationship between Pocc and P0.1 in brain-injured patients can provide insights into the interpretation of these values in pressure support ventilation. This knowledge can contribute to determining extubation readiness and optimizing ventilation strategies to improve patient outcomes. The central goal is to assess a study protocol for determining the relationship between Pocc and P0.1 in brain-injured patients on pressure support ventilation and their ability to predict successful extubation. Additionally, comparing these values between brain-damaged and non-brain-damaged patients may provide valuable insights. Key areas of inquiry: 1. How do Pocc and P0.1 values correlate within brain-injury patients undergoing pressure support ventilation? 2. To what extent can Pocc and P0.1 values serve as predictive indicators for successful extubation in patients with brain injuries? 3. What differentiates the Pocc and P0.1 values between patients with brain injuries and those without? Methodology: P0.1 and occlusion pressures are standard measurements for pressure support ventilation patients, taken by attending doctors as per protocol. We utilize electronic patient records for existing data. An unpaired t-test will be conducted to compare P0.1 and Pocc values between the two study groups. Associations between P0.1, Pocc, and other study variables, such as extubation, will be explored with simple regression and correlation analysis. Depending on how the data evolve, subgroup analysis will be performed for patients with and without extubation failure. Results: While it is anticipated that neuro patients may exhibit high respiratory drive, the linkage between such elevation, quantified by P0.1, and successful extubation remains unknown. The analysis will focus on determining the ability of these values to predict successful extubation and their potential impact on ventilation strategies. Conclusion: Further research is pending to fully understand the potential of these indices and their impact on mechanical ventilation in different patient populations and clinical scenarios. Understanding these relationships can aid in determining extubation readiness and tailoring ventilation strategies to improve patient outcomes in this specific patient population. Additionally, it is vital to account for the influence of sedatives, neurological scores, and BMI on respiratory drive and occlusion pressures to ensure a comprehensive analysis.
Keywords: brain damage, diaphragm dysfunction, occlusion pressure, P0.1, respiratory drive
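A minimal sketch (Python) of the planned correlation, regression, and group-comparison statistics; all measurements below are hypothetical placeholders for values extracted from the electronic patient records:

```python
# Correlation and group comparison between P0.1 and Pocc, as described above.
import numpy as np
from scipy import stats

# Hypothetical paired measurements (cmH2O) for brain-injured patients.
p01  = np.array([1.2, 2.5, 3.1, 1.8, 4.0, 2.2, 3.6])
pocc = np.array([5.0, 9.8, 12.1, 7.4, 16.0, 8.9, 13.5])

r, p = stats.pearsonr(p01, pocc)                     # research question 1
slope, intercept, *_ = stats.linregress(p01, pocc)   # simple regression
print(f"r = {r:.2f} (p = {p:.3f}); Pocc ~ {slope:.2f}*P0.1 + {intercept:.2f}")

# Research question 3: unpaired t-test against a non-brain-injured group.
p01_controls = np.array([1.0, 1.5, 2.0, 1.3, 2.4, 1.7])
t, p_t = stats.ttest_ind(p01, p01_controls, equal_var=False)
print(f"t = {t:.2f}, p = {p_t:.3f}")
```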
114 Wear Resistance in Dry and Lubricated Conditions of Hard-anodized EN AW-4006 Aluminum Alloy
Authors: C. Soffritti, A. Fortini, E. Baroni, M. Merlin, G. L. Garagnani
Abstract:
Aluminum alloys are widely used in many engineering applications due to advantages such as high electrical and thermal conductivities, low density, a high strength-to-weight ratio, and good corrosion resistance. However, their low hardness and poor tribological properties still limit their use in industrial fields requiring sliding contacts. Hard anodizing is one of the most common solutions for overcoming the insufficient friction resistance of aluminum alloys. In this work, the tribological behavior of hard-anodized EN AW-4006 aluminum alloy in dry and lubricated conditions was evaluated. Three different hard-anodizing treatments were selected: a conventional one (HA) and two innovative golden hard-anodizing treatments (named G and GP, respectively), which involve sealing the porosity of the anodic aluminum oxide (AAO) with silver ions at different temperatures. Before the wear tests, all AAO layers were characterized by scanning electron microscopy (VPSEM/EDS), X-ray diffractometry, roughness (Ra and Rz), microhardness (HV0.01), nanoindentation, and scratch tests. Wear tests were carried out according to the ASTM G99-17 standard using a ball-on-disc tribometer. The tests were performed in triplicate under a 2 Hz constant-frequency oscillatory motion, a maximum linear speed of 0.1 m/s, normal loads of 5, 10, and 15 N, and a sliding distance of 200 m. A 100Cr6 steel ball 10 mm in diameter was used as the counterpart material. All tests were conducted at room temperature, in dry and lubricated conditions. Considering recent regulations on environmental hazards, four bio-lubricants were considered after assessing their chemical composition (in terms of Unsaturation Number, UN) and viscosity: olive, peanut, sunflower, and soybean oils. The friction coefficient was provided by the equipment. The wear rate of the anodized surfaces was evaluated by measuring the cross-section area of the wear track with a non-contact 3D profilometer. Each area value, obtained as an average of four measurements of cross-section areas along the track, was used to determine the wear volume. The worn surfaces were analyzed by VPSEM/EDS. Finally, in agreement with DoE methodology, a statistical analysis was carried out to identify the factors most influencing the friction coefficients and wear rates. In all conditions, results show that the friction coefficient increased with the normal load. In the wear tests under dry sliding conditions, irrespective of the type of anodizing treatment, metal transfer between the mating materials was observed over the anodic aluminum oxides. During sliding at higher loads, the detachment of the metallic film also caused delamination of some regions of the wear track. In the wear tests under lubricated conditions, the natural oils with high percentages of oleic acid (i.e., olive and peanut oils) maintained high friction coefficients and low wear rates. Irrespective of the type of oil, small microcracks were visible over the AAO layers. Based on the statistical analysis, the type of anodizing treatment and the magnitude of the applied load were the main factors influencing the friction coefficient and wear rate values. Nevertheless, an interaction between bio-lubricants and load magnitude could occur during the tests.
Keywords: hard anodizing treatment, silver ions, bio-lubricants, sliding wear, statistical analysis
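The wear-volume step described above reduces to a short calculation: the mean cross-section area of the track multiplied by the track length gives the wear volume, and normalizing by load and sliding distance gives a specific wear rate. A minimal sketch (Python) with hypothetical profilometer readings; the track length is an assumption, since the abstract does not report the track diameter:

```python
import numpy as np

# Four cross-section areas measured along the wear track (mm^2), hypothetical values.
areas = np.array([0.0021, 0.0024, 0.0019, 0.0022])
track_length = 31.4        # mm; assumption: circumference of a 10 mm diameter track

wear_volume = areas.mean() * track_length           # mm^3
load, sliding_distance = 10.0, 200.0                # N and m, from the test conditions above
specific_wear_rate = wear_volume / (load * sliding_distance)   # mm^3/(N*m)
print(f"V = {wear_volume:.4f} mm^3, k = {specific_wear_rate:.2e} mm^3/(N*m)")
```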
113 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight into and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study aims to use a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and analyzes image content to classify and detect electronic components. The automated electronic component error classification and detection system detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without supervision. With the help of the HoloLens and the ML algorithm, students can reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used to train a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis for component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be trained on the dataset for object recognition and classification. The convolution layers extract image features, which are then classified using the Support Vector Machine (SVM). By adequately labeling and classifying the training data, the model will predict, categorize, and assess whether students place components correctly. As a result, the data acquired through the HoloLens include images of students assembling electronic components. The system constantly checks whether students appropriately position components on the breadboard and connect the components so that the circuit functions. When students misplace any components, the HoloLens predicts the error before the user places the components in the incorrect position and prompts students to correct their mistakes. This hybrid CNN and SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. The approach helps customize the learning experience, which is particularly beneficial in large classes with limited time, and it determines the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
Keywords: augmented reality, machine learning, object recognition, virtual laboratories
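A minimal sketch (Python) of the hybrid CNN + SVM pipeline described above, assuming a pretrained ResNet-18 as a stand-in for the convolutional feature extractor and scikit-learn's SVC as the classifier; the images and labels below are random placeholders for the labeled HoloLens dataset:

```python
# Hybrid pipeline: a frozen CNN extracts image features, an SVM classifies them.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC
from PIL import Image

cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()   # drop the classifier head, keep 512-d features
cnn.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(images):
    """Run each PIL image through the frozen CNN and return feature vectors."""
    with torch.no_grad():
        batch = torch.stack([preprocess(img.convert("RGB")) for img in images])
        return cnn(batch).numpy()

# Random stand-ins for labeled HoloLens breadboard crops (real data: photos of
# resistors, capacitors, transistors, ... with placement labels).
rng = np.random.default_rng(0)
images = [Image.fromarray(rng.integers(0, 255, (224, 224, 3), dtype=np.uint8))
          for _ in range(8)]
labels = ["resistor", "capacitor"] * 4

clf = SVC(kernel="rbf", C=10.0)
clf.fit(extract_features(images), labels)       # CNN features -> SVM decision boundary
print(clf.predict(extract_features(images[:2])))
```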
112 Pharmacokinetic Assessment of Antimicrobial Treatment of Acute Exacerbations of Chronic Obstructive Pulmonary Disease in Hospitalized Patients Colonized with Pseudomonas aeruginosa
Authors: Juliette Begin, Juliano Colapelle, Andrea Taratanu, Daniel Thirion, Amelie Marsot, Bryan A. Ross
Abstract:
Chronic obstructive pulmonary disease (COPD), a leading cause of death globally, is characterized by chronic airflow obstruction and acute exacerbations (AECOPDs) that are often triggered by respiratory infections. Pseudomonas aeruginosa (P. aeruginosa), a potentially serious bacterial cause of AECOPDs, is treated with targeted anti-pseudomonal antibiotics. These select few antimicrobials are often used as first-line therapy in patients who are clinically unwell and/or suspected of P. aeruginosa-related infection prior to confirmation, potentially contributing to antimicrobial resistance. The present study evaluates prescribing practices in patients with a confirmed sputum history of P. aeruginosa admitted for AECOPD at the McGill University Health Centre (MUHC) and treated with anti-pseudomonal antibiotics. Serum antibiotic concentrations were measured from same-day peak, trough, and mid-dose blood sampling after reaching steady state (on or after day 3) and were quantified using ultra-high-performance liquid chromatography (UHPLC). Demographic, clinical, and treatment outcomes were extracted from patient medical charts. Treatment failure was defined as respiratory-related death or mechanical ventilation after ≥3 days of antibiotics; antibiotic therapy extended beyond 2 weeks or a new antibiotic regimen started; or urgent care readmission within 30 days for AECOPD. To date, 9 of 30 planned participants have completed testing: seven received ciprofloxacin, one received meropenem, and one received piperacillin-tazobactam. Due to serum sample batching requirements, the serum ciprofloxacin concentration results for the first 2 of 8 participants are presented at the time of writing. The first participant had serum levels of 5.45 mg/L (T₀), 4.74 mg/L (T₅₀), and 4.49 mg/L (T₁₀₀), while the second had serum levels of 5 mg/L (T₀), 2.6 mg/L (T₅₀), and 2.51 mg/L (T₁₀₀). The pharmacokinetic parameters Cmax (5.18 ± 0.43 mg/L), T₁/₂ (23.56 ± 18.94 hours), and AUC (181.9 ± 155.95 mg·h/L) were higher than reported monograph values and met the target AUC-to-MIC ratio of >125. The patients treated with meropenem and with piperacillin-tazobactam experienced treatment failure. Preliminary results suggest that standard ciprofloxacin dosing in patients experiencing an AECOPD and colonized with P. aeruginosa appears to achieve effective serum concentrations. Final cohort results will inform the pharmacokinetic appropriateness and clinical sufficiency of current AECOPD antimicrobial strategies in P. aeruginosa-colonized patients. This study will guide clinicians in determining the appropriate dosing for AECOPD treatment to achieve therapeutic levels, optimizing outcomes and minimizing adverse effects. It could also highlight the value of routine antibiotic level monitoring in patients with treatment failure to ensure optimal serum concentrations.
Keywords: acute exacerbation, antimicrobial resistance, chronic obstructive pulmonary disease, pharmacokinetics/pharmacodynamics, Pseudomonas aeruginosa
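A minimal sketch (Python) of how such pharmacokinetic parameters can be derived from sparse peak/mid-dose/trough sampling, using participant 1's reported levels; the 12 h dosing interval and the MIC of 0.25 mg/L are assumptions for illustration only:

```python
import numpy as np

# Sparse sampling over one dosing interval: peak, mid-dose, trough.
t = np.array([0.0, 6.0, 12.0])       # hours after peak; 12 h interval assumed
c = np.array([5.45, 4.74, 4.49])     # ciprofloxacin serum levels, mg/L

auc_tau = np.trapz(c, t)             # AUC over the interval (mg*h/L), trapezoidal rule
slope = np.polyfit(t[1:], np.log(c[1:]), 1)[0]
t_half = np.log(2) / -slope          # terminal half-life (h); crude with two points
auc_24 = auc_tau * 24.0 / (t[-1] - t[0])   # scale to a 24 h exposure
print(f"AUC24 = {auc_24:.0f} mg*h/L, AUC24/MIC = {auc_24 / 0.25:.0f}")  # MIC assumed
```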
111 Saline Aspiration Negative Intravascular Test: Mitigating Risk with Injectable Fillers
Authors: Marcelo Lopes Dias Kolling, Felipe Ferreira Laranjeira, Guilherme Augusto Hettwer, Pedro Salomão Piccinini, Marwan Masri, Carlos Oscar Uebel
Abstract:
Introduction: Injectable fillers are among the most common nonsurgical cosmetic procedures, with significant growth yearly. Knowledge of the rheological and mechanical characteristics of fillers, facial anatomy, and injection technique is essential for safety. Concepts such as the use of cannula versus needle, aspiration before injection, and facial danger zones have been well discussed. In the case of an accidental intravascular puncture, the pressure inside the vessel may not be sufficient to push blood into the syringe due to the characteristics of the filler product; this is especially true for calcium hydroxyapatite (CaHA) or hyaluronic acid (HA) fillers with high G’. Since the viscoelastic properties of normal saline are much lower than those of fillers, aspiration with saline prior to filler injection may decrease the risk of a false-negative aspiration and its subsequent catastrophic effects. We discuss a technique that adds an additional safety step to the procedure with saline aspiration prior to injection, a ‘reverse Seldinger’ technique for intravascular access, which we term SANIT: Saline Aspiration Negative Intravascular Test. Objectives: To demonstrate the author’s (PSP) technique, which adds an additional safety step to the process of filler injection with both CaHA and HA in order to decrease the risk of intravascular injection. Materials and Methods: Normal skin cleansing and topical anesthesia with prilocaine/lidocaine cream are performed, and the facial subunits to be treated are marked. A 3 mL Luer lock syringe is filled with 2 mL of 0.9% normal saline and fitted with a 27G needle, which is turned one half rotation. When a cannula is to be used, the Luer lock syringe is attached to a 27G 4 cm single-hole disposable cannula. After skin puncture, the 3 mL syringe is advanced with the plunger pulled back (negative pressure). Progress is made to the desired depth, all the while aspirating. Once the desired location for filler injection is reached, the syringe is exchanged for the syringe containing the filler, securely grabbing the hub of the needle and taking care not to dislodge the needle tip. Prior to this, we remove 0.1 mL of filler to allow space inside the syringe for aspiration. We again aspirate and inject in a retrograde fashion. SANIT is especially useful for CaHA, since its G’ is much higher than that of HA, and thus reflux of blood into the syringe is less likely to occur. Results: The technique has been used safely for the past two years with no adverse events; the increase in cost is negligible (only the cost of 2 mL of normal saline). Over 100 patients (over 300 syringes) have been treated with this technique. The risk of accidental intravascular puncture has been calculated to be between 1:6,410 and 1:40,882 syringes among expert injectors; however, the consequences of intravascular injection can be catastrophic even with board-certified physicians. Conclusions: While the risk of intravascular filler injection is low, the consequences can be disastrous. We believe that adding the SANIT technique can help further mitigate risk with no significant untoward effects, and it could be considered by all performing injectable fillers. Further follow-up is ongoing.
Keywords: injectable fillers, safety, saline aspiration, injectable filler complications, hyaluronic acid, calcium hydroxyapatite
110 Admissibility as a Property of Evidence in Modern Conditions
Authors: Iryna Teslenko
Abstract:
According to the provisions of the current criminal procedural legislation of Ukraine, the issue of the admissibility of evidence is closely related both to the right to a fair trial and to the presumption of innocence. The general rule is that evidence obtained improperly or illegally cannot be taken into account in a court case. Therefore, the evidence base of the prosecution collected at the stage of the pre-trial investigation, and compliance with the requirements of the law during the collection of evidence, are of crucial importance for the criminal process; a violation entails the recognition of the relevant evidence as inadmissible, which can nullify all the efforts of the pre-trial investigation body and the prosecution. The issue of admissibility of evidence in criminal proceedings is therefore fundamentally important and decisive for the entire process. Research on this issue began in December 2021. At that time, there was still no clear understanding of what needed to be conveyed to the scientific community. In February 2022, the lives of all citizens of Ukraine changed completely. A war broke out in the country. At a time when the entire world community is on the path of humanizing society and respecting the rights and freedoms of man and citizen, a military conflict has arisen in the middle of Europe: one country attacked another, and war crimes are being committed. The world still cannot believe it, but it is happening here and now; people are dying, infrastructure is being destroyed, and war crimes are being committed, contrary to signed and ratified international conventions and contrary to all the achievements and development of world law. Since then, the life of the world has divided into before and after February 24, 2022. The world cannot be the same as it was before, and neither can the approach to solving legal issues in the criminal process, in particular, issues of proving the commission of crimes and the involvement of certain persons in their commission. An international criminal has appeared in the humane European world who disregards all norms of law and morality and does not adhere to any principles. Until now, the practice of the European Court of Human Rights and the domestic courts of Ukraine treated such a property of evidence in criminal proceedings as admissibility with a certain formalism. Currently, we have information that the Office of the Prosecutor of the International Criminal Court in The Hague has started an investigation into war crimes in Ukraine and is documenting them. In our opinion, the world cannot allow formalism in bringing a war criminal to justice. There is a war going on in Ukraine, and its cities are under round-the-clock missile fire from the aggressor country, which makes it impossible to carry out certain investigative actions. If, due to formal deficiencies, the collected evidence is declared inadmissible, the guilty may not be punished. This, in turn, sends a message to other terrorists in the world about the impunity of their actions; the system of deterring criminals from committing criminal offenses through the understanding of the inevitability of punishment will collapse, and this will affect world security as a whole and European security in particular. 
Therefore, we believe that the world cannot allow chaos in the issue of general security; there should be a transformation of the general approach to such a property of evidence in the criminal process as admissibility, in order to ensure the inevitability of the punishment of criminals. We believe that the scientific and legal community should not allow criminals to avoid responsibility. The evil that is destroying Ukraine should be punished. We must all together prove that legal norms are not just words written on paper but rules of behavior for all members of society, the non-observance of which leads to mandatory responsibility. Everybody who commits crimes will be punished; this is inevitable, and this principle is the guarantor of world security in the future.
Keywords: admissibility of evidence, criminal process, war, Ukraine
109 Synthesis of Smart Materials Based on Polyaniline Coated Fibers
Authors: Mihaela Beregoi, Horia Iovu, Cristina Busuioc, Alexandru Evanghelidis, Elena Matei, Monica Enculescu, Ionut Enculescu
Abstract:
The nanomaterials field is very attractive to researchers attempting to develop new devices with the same or better properties than micro-sized ones, while reducing reagent and power consumption. In this way, a wide range of nanomaterials have been fabricated and integrated into applications for electronics, optoelectronics, solar cells, tissue reconstruction, and drug delivery. Obviously, the most appealing ones are those dedicated to the medical domain. Different types of nano-sized materials, such as particles, fibers, films, etc., can be synthesized by physical, chemical, or electrochemical methods. One of these techniques is electrospinning, which enables the production of fibers with nanometric dimensions by pumping a polymeric solution into a high electric field; due to electrostatic charging and solvent evaporation, the precursor mixture is converted into nonwoven meshes with different fiber densities and mechanical properties. Moreover, polyaniline is a conducting polymer with interesting optical properties, suitable for displays and electrochromic windows. Polyaniline is also an electroactive polymer that can contract and expand upon electric stimuli, due to the oxidation/reduction reactions taking place in the polymer chains. These two main properties can be exploited to synthesize smart materials that change their dimensions while exhibiting good electrochromic properties. In this context, a poly(methyl methacrylate) solution was spun into webs composed of fibers with diameter values between 500 nm and 1 µm. Further, the polymer meshes were covered with a gold layer in order to make them conductive and also suitable as the working electrode in an electrochemical cell. The gold shell was deposited by DC sputtering. Such metalized fibers can be transformed into smart materials by covering them with a thin layer of conductive polymer. Thus, the webs were coated with a polyaniline film by the electrochemical route, starting from an aqueous solution of aniline and sulfuric acid, where sulfuric acid acts as the oxidizing agent. For the polymerization of aniline, a saturated calomel electrode was employed as the reference, a platinum plate as the counter electrode, and the gold-covered webs as the working electrode. Chronoamperometry was selected as the deposition method for polyaniline, with varying deposition times. Metalized meshes with different fiber densities were used, the transmission ranging between 70 and 80%. The morphological investigation showed that the polyaniline layer has a granular structure for all deposition experiments. As well, some preliminary optical tests were performed using sulfuric acid as the electrolyte, which revealed the change of polyaniline color from green to dark blue when a voltage was applied. In conclusion, new multilayered materials were obtained by a simple approach: merging the benefits of the electrospinning method with polyaniline chemistry. This synthesis method allows the fabrication of structures with reproducible characteristics, suitable for displays or tissue substitutes.
Keywords: electrospinning, fibers, smart materials, polyaniline
108 Developing and Testing a Questionnaire of Music Memorization and Practice
Authors: Diana Santiago, Tania Lisboa, Sophie Lee, Alexander P. Demos, Monica C. S. Vasconcelos
Abstract:
Memorization has long been recognized as an arduous and anxiety-evoking task for musicians, and yet it is an essential aspect of performance. Research shows that musicians are often not taught how to memorize. While the memorization and practice strategies of professionals have been studied, little research has examined how student musicians learn to practice and memorize music in different cultural settings. We present the process of developing and testing a questionnaire of music memorization and musical practice for student musicians in the UK and Brazil. The survey was developed for a cross-cultural research project examining how young orchestral musicians (aged 7–18 years) in different learning environments and cultures engage in instrumental practice and memorization. The questionnaire development involved a UK/US/Brazil research team of music educators and performance science researchers. A pool of items was developed for each aspect of practice and memorization identified, based on the literature and personal experience, and adapted from existing questionnaires. Item development took into consideration the varying levels of cognitive and social development of the target populations, as well as the diverse target learning environments. Items were initially grouped according to a single underlying construct/behavior. The questionnaire comprised three sections: a demographics section, a section on practice (29 items), and a section on memorization (40 items). Next, the response process was considered, and a 5-point Likert scale ranging from ‘always’ to ‘never’, with a verbal label and an image assigned to each response option, was selected, following effective questionnaire design for children and youths. Finally, a pilot study was conducted with young orchestral musicians from diverse learning environments in Brazil and the United Kingdom. Data collection took place in either one-to-one or group settings to accommodate the participants. Cognitive interviews were used to establish response-process validity by confirming the readability and accurate comprehension of the questionnaire items or highlighting the need for item revision. Internal reliability was investigated by measuring the consistency of the item groups using Cronbach’s alpha. The pilot study successfully relied on the questionnaire to generate data about the engagement in instrumental practice and memorization of young musicians of different levels and instruments, across different learning and cultural environments. Interaction analysis of the cognitive interviews undertaken with these participants, however, exposed the fact that certain items, and the response scale, could be interpreted in multiple ways. The questionnaire text was therefore revised accordingly. The low Cronbach’s alpha scores of many item groups indicated another issue with the original questionnaire: its low level of internal reliability. Several reasons for this poor reliability can be suggested, including the issues with item interpretation revealed through interaction analysis of the cognitive interviews, the small number of participants (34), and the elusive nature of the construct in question. The revised questionnaire measures 78 specific behaviors or opinions and provides an efficient means of gathering information about the engagement of young musicians in practice and memorization on a large scale.
Keywords: cross-cultural, memorization, practice, questionnaire, young musicians
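Because the internal reliability analysis rests on Cronbach’s alpha, a minimal sketch of the computation may be useful; the response matrix below is invented for illustration, and the function name is ours, not part of the study’s materials.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) Likert response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 5-point responses (1 = 'never' ... 5 = 'always') from 5 children
responses = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(responses), 3))  # low values flag unreliable item groups
```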
107 Observation on the Performance of Heritage Structures in Kathmandu Valley, Nepal during the 2015 Gorkha Earthquake
Authors: K. C. Apil, Keshab Sharma, Bigul Pokharel
Abstract:
Kathmandu Valley, the capital region of Nepal, houses numerous historical monuments and religious structures, some dating from the 4th century A.D. The city alone is home to seven of UNESCO’s world heritage sites, including various public squares and religious sanctums, which are often regarded as living heritage by historians and archaeological explorers. On April 25, 2015, the capital and nearby areas were struck by the Gorkha earthquake of moment magnitude (Mw) 7.8, followed by the strongest aftershock of moment magnitude (Mw) 7.3 on May 12. This study reports structural failures and collapses of heritage structures in Kathmandu Valley during the earthquake and presents preliminary findings as to their causes. Field reconnaissance was carried out immediately after the main shock and the aftershock at major heritage sites: UNESCO world heritage sites and a number of temples and historic buildings in Kathmandu Durbar Square, Patan Durbar Square, and Bhaktapur Durbar Square. Despite the catastrophe, a significant number of heritage structures stood tall, performing very well during the earthquake. Preliminary reports from the archaeological department suggest that 721 such structures were severely affected nationwide; within the valley alone the number was 444, including 76 structures that collapsed completely. This study presents recorded accelerograms and the geology of Kathmandu Valley. The structural typology and architecture of the heritage structures in the valley are briefly described. Case histories of damaged heritage structures, the damage patterns, and the failure mechanisms are also discussed. It was observed that the performance of heritage structures was influenced by multiple factors, such as structural and architectural typology, configuration and structural deficiencies, local ground site effects and ground motion characteristics, age and maintenance level, and material quality. Most of these heritage structures are of masonry type, using bricks with earth mortar as the bonding agent. The walls’ resistance is mainly compressive, capable of withstanding vertical static gravitational loads but not horizontal dynamic seismic loads. There was no definitive pattern of damage, as most structures behaved as composite structures. Some were extensively damaged in certain locations, while structures with a similar configuration nearby had little or no damage. Among the major heritage structures, Dome, Pagoda (2-, 3- or 5-tiered temples) and Shikhara structures were studied under similar variables. Comparing the varying degrees of damage, Shikhara structures were found to be the most vulnerable, while Dome structures were the most stable, followed by Pagoda structures. The seismic performance of masonry-timber and stone masonry structures was slightly better than that of plain masonry structures. Regular maintenance and periodic seismic retrofitting appear to have played a pivotal role in strengthening the seismic performance of these structures. The study also recommends key measures to strengthen their seismic performance, based on structural analysis, building material behavior and retrofitting details. The results also recognise the importance of documenting traditional knowledge and transferring it, in revised form, to modern technology.
Keywords: Gorkha earthquake, field observation, heritage structure, seismic performance, masonry building
106 Digital Transformation in Fashion System Design: Tools and Opportunities
Authors: Margherita Tufarelli, Leonardo Giliberti, Elena Pucci
Abstract:
The fashion industry’s interest in virtuality is linked, on the one hand, to the emotional and immersive possibilities of digital resources and the languages that result from them and, on the other, to the greater efficiency that can be achieved throughout the value chain. The interaction between digital innovation and deep-rooted manufacturing traditions today translates into a paradigm shift for the entire fashion industry where, for example, the traditional values of industrial secrecy and know-how give way to open and participatory experimentation, and to the complete emancipation of virtual reality from actual ‘reality’. This contribution investigates digitisation in the Italian fashion industry, analysing its opportunities and the critical issues that have hindered its diffusion. There are two reasons why the most common approach in the fashion sector is still analogue: (i) the fashion product lives in close contact with the human body, so the sensory perception of materials plays a central role in both the use and the design of the product, but current technology is not able to reproduce the sense of touch; (ii) volumes are obtained by stitching flat surfaces that, once assembled, can assume almost infinite configurations, given the flexibility of the material. Managing the fit and styling of virtual garments therefore involves a wide range of factors, including mechanical simulation, collision detection, and user interface techniques for garment creation. After briefly reviewing some salient historical milestones in the digital simulation of deformable materials and in user interfaces for constructing the clothing system, the paper describes the operation and possibilities offered by the latest generation of specialised software: parametric avatars and a digital sartorial approach; drawing tools optimised for pattern making; materials simulated in terms of both physical behaviour and aesthetic performance; tools for checking wearability; renderings; and tools and procedures useful to companies both for dialogue with prototyping software and machinery and for managing the archive and the variants to be made. The article demonstrates how developments in technology and digital procedures now make it possible to intervene at different stages of design in the fashion industry, in an integrated and additive process in which the constructed 3D models are usable both in the prototyping and communication of physical products and in exclusively digital uses of 3D models in the new generation of virtual spaces. Mastering such tools requires the acquisition of specific digital skills alongside traditional skills for the design of the clothing system, but the benefits are manifold and applicable to different business dimensions. We are only at the beginning of the global digital transformation: the emergence of new professional figures and design dynamics leaves room for imagination, but in addition to applying digital tools to traditional procedures, traditional fashion know-how needs to be transferred into emerging digital practices to ensure the continuity of the technical-cultural heritage beyond the transformation.
Keywords: digital fashion, digital technology and couture, digital fashion communication, 3D garment simulation
105 A Review on Cyberchondria Based on Bibliometric Analysis
Authors: Xiaoqing Peng, Aijing Luo, Yang Chen
Abstract:
Background: Cyberchondria, an ‘emerging risk’ of the information era, is an abnormal pattern characterized by excessive or repeated online searches for health-related information and escalating health anxiety, which endangers people’s physical and mental health and poses a serious threat to public health. Objective: To explore and discuss the research status, hotspots and trends of Cyberchondria. Methods: Based on 77 articles on ‘Cyberchondria’ extracted from the Web of Science from the database’s inception until October 2019, literature trends, countries, institutions and hotspots were analyzed bibliometrically; the concept and definition of Cyberchondria, measurement instruments, relevant factors, and treatment and intervention are discussed as well. Results: Since ‘Cyberchondria’ was first put forward in 2001, the last two decades have witnessed a noticeable increase in the literature; during 2014–2019 output roughly quadrupled (62 articles, compared with only 15 before 2014), showing that Cyberchondria has become a new theme and hot topic in recent years. The United States was the most active contributor with the most publications (23), followed by England (11) and Australia (11), while the leading institutions were Baylor University (7) and the University of Sydney (7), followed by Florida State University (4) and the University of Manchester (4). The WoS categories ‘Psychiatry/Psychology’ and ‘Computer/Information Science’ were the areas of greatest influence. The definition of Cyberchondria is not yet fully unified internationally, but it is generally considered an abnormal behavioral pattern and emotional state, invoked to refer to the anxiety-amplifying effects of online health-related searches. The first and most frequently cited scale for measuring the severity of Cyberchondria, the Cyberchondria Severity Scale (CSS), was developed in 2014; it conceptualized Cyberchondria as a multidimensional construct consisting of compulsion, distress, excessiveness, reassurance, and mistrust of medical professionals, the last of which was later shown not to be necessary to the construct. Since then, Brazilian, German, Turkish, Polish and Chinese versions have been developed, improved and culturally adjusted, and the CSS was shortened to a 12-item version (CSS-12) in 2019; all of these merit further verification. Research hotspots mainly concern relevant factors such as intolerance of uncertainty, anxiety sensitivity, obsessive-compulsive disorder, internet addiction, abnormal illness behavior, the Whiteley index, and problematic internet use, seeking to clarify the roles played by ‘associated factors’ and ‘anxiety-amplifying factors’ in the development of Cyberchondria and to better understand the aetiological links and pathways between hypochondriasis, health anxiety and online health-related searches. Although treatment and intervention for Cyberchondria are still at an early, exploratory stage, there have been meaningful attempts to find effective strategies from different angles, such as online psychological treatment, network technology management, improvement of health information literacy, and public health services. Conclusion: Research on Cyberchondria is in its infancy but deserves more attention. A conceptual consensus on Cyberchondria, a refined assessment tool, prospective studies in various populations, and targeted treatments would be the main research directions in the near future.
Keywords: cyberchondria, hypochondriasis, health anxiety, online health-related searches
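The publication-trend figures quoted above come from simple bibliometric counts; a sketch of how such counts can be derived from an exported record set is shown below. The CSV file and its column names are assumptions for illustration, not the authors’ actual pipeline.

```python
import pandas as pd

# Hypothetical export of Web of Science records, one row per article
records = pd.read_csv("wos_cyberchondria.csv")   # assumed columns: Year, Country

# Publication counts before 2014 versus 2014-2019
period = records["Year"].apply(lambda y: "2014-2019" if y >= 2014 else "pre-2014")
print(period.value_counts())                     # e.g. 62 vs 15 in the study

# Most active contributing countries
print(records["Country"].value_counts().head(3)) # e.g. USA 23, England 11, Australia 11
```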
104 Childhood Sensory Sensitivity: A Potential Precursor to Borderline Personality Disorder
Authors: Valerie Porr, Sydney A. DeCaro
Abstract:
TARA for borderline personality disorder (BPD), an education and advocacy organization, helps families to deal compassionately and effectively with troubling BPD behaviors. Our psychoeducational programs focus on understanding the underlying neurobiological features of BPD and on evidence-based methodology integrating dialectical behavior therapy (DBT) and mentalization-based therapy (MBT), clarifying the inherent misunderstanding of BPD behaviors and improving family communication. TARA4BPD conducts online surveys, workshops, and topical webinars. For over 25 years, we have collected data from BPD helpline callers. These data drew our attention to particular childhood idiosyncrasies that seem to characterize many of the children who later met the criteria for BPD. The idiosyncrasies we observed, heightened sensory sensitivity and hypervigilance, were included in Adolf Stern’s 1938 definition of ‘borderline’. This aspect of BPD has not been prioritized by personality disorder researchers, who are presently focused on emotion processing and social cognition in BPD. Parents described sleep-reversal problems in infants who, early on, seemed to exhibit dysregulated circadian rhythms. Families describe children as supersensitive to sensory sensations: reactive to specific sounds, with a heightened sense of smell and taste, sensitive to food textures, and unable to tolerate various fabric textures (e.g., seams in socks). They also exhibit high sensitivity to particular words and voice tones. Many have alexithymia and dyslexia. These children are either hypo- or hypersensitive to sensory sensations, including pain, and many suffer from fibromyalgia. BPD reactions to pain have been studied (C. Schmahl), confirming the existence of hyper- and hypo-reactions to pain stimuli in people with BPD. To date, there are little or no data on what comprises a normative range of sensitivity in infants and children. Many parents reported that their children were tested or treated for sensory processing disorder (SPD), learning disorders, and ADHD; SPD is not included in the DSM and is treated by occupational therapists. The overwhelming anecdotal data from thousands of parents of children who later met criteria for BPD led TARA4BPD to develop a sensitivity survey to gather evidence on the possible role of early sensory perception problems as a precursor to BPD, hopefully initiating new directions in BPD research. At present, the research community seems unaware of the role supersensory sensitivity might play as an early indicator of BPD. Parents’ observations of childhood sensitivity, obtained through family interviews, and the results of an extensive online survey on sensory responses across various ages of development will be presented. People with BPD suffer from a sense of isolation and otherness that often results in later interpersonal difficulties. Early identification of supersensitive children, while brain circuits are still developing, might reduce the development of social interaction deficits such as rejection sensitivity, self-referential processing, and negative bias, hallmarks of BPD, ultimately minimizing the maladaptive methods of coping with distress that characterize BPD. Family experiences are an untapped resource for BPD research. It is hoped that these data will give family observations the critical credibility to inform future treatment and research directions.
Keywords: alexithymia, dyslexia, hypersensitivity, sensory processing disorder
103 Improving the Quality of Discussion and Documentation of Advance Care Directives in a Community-Based Resident Primary Care Clinic
Authors: Jason Ceavers, Travis Thompson, Juan Torres, Ramanakumar Anam, Alan Wong, Andrei Carvalho, Shane Quo, Shawn Alonso, Moises Cintron, Ricardo C. Carrero, German Lopez, Vamsi Garimella, German Giese
Abstract:
Introduction: Advance directives (AD) are essential for patients to communicate their wishes when they are no longer able to do so. Ideally, these discussions should not occur for the first time when a patient is hospitalized with an acute life-threatening illness. A large number of patients do not have clearly documented ADs, resulting in misutilization of resources and additional patient harm. This is a nationwide issue, and the Joint Commission includes it among its national quality metrics. Presented here is a proposed protocol to increase the number of documented AD discussions in a community-based, internal medicine residency primary care clinic in South Florida. Methods: The SMART aim for this quality improvement project is to increase documentation of AD discussions in the outpatient setting by 25% within three months among Medicare patients. A survey was sent to stakeholders (clinic attendings, residents, medical assistants, front desk staff, and clinic managers) asking for the three factors they believed contributed most to the low documentation rate of AD discussions. The two most important factors were time constraints and systems issues (such as the lack of a standard method to document ADs, and ADs not being uploaded to the chart), cited by 25% and 21.2% of the 32 survey respondents, respectively. Pre-intervention data from clinic patients in 2020-2021 revealed that 17.05% of patients had clear, actionable ADs documented. To address these issues, an AD pocket card was created to give to patients. One side of the card carries a brief explanation of what ADs are. The other side lists interventions (cardiopulmonary resuscitation, mechanical ventilation, dialysis, tracheostomy, feeding tube) with boxes patients check off to indicate whether they want the intervention, do not want it, do not want to discuss the topic, or need more information. These cards are to be filled out and scanned into the electronic chart for review by the resident before the appointment. Interventions on which patients want more information will be discussed by the provider; if any changes are made, the card will be re-scanned into the chart. After three months, we will review the charts of patients seen in the clinic to determine how many Medicare patients have a pocket card uploaded and how many have AD discussions documented in a progress note or annual wellness note. If there is not enough time for an AD discussion, a follow-up appointment can be scheduled for that discussion. Discussion: ADs are a crucial part of patient care, and failure to understand a patient’s wishes leads to improper utilization of resources, avoidable litigation, and patient harm. Time constraints and systems issues were identified as the two major factors contributing to the lack of AD discussion in our community-based resident primary care clinic. Our project aims to increase the documentation rate for ADs through a simple pocket card intervention. The cards are self-explanatory, easy to read, and allow patients to clearly express which interventions they desire or wish to discuss further with their physician.
Keywords: advance directives, community-based, pocket card, primary care clinic
102 Buoyant Gas Dispersion in a Small Fuel Cell Enclosure: A Comparison Study Using Plain and Pressed Louvre Vent Passive Ventilation Schemes
Authors: T. Ghatauray, J. Ingram, P. Holborn
Abstract:
The transition from a ‘carbon-rich’, fossil-fuel-dependent society to a ‘sustainable’, ‘renewable’, hydrogen-based one will see the deployment of hydrogen fuel cells (HFC) in transport applications and in the generation of heat and power for buildings as part of a decentralised power network. Many deployments will be low-power HFCs for domestic combined heat and power (CHP) and commercial ‘transportable’ HFCs for environmental applications, such as lighting and telephone towers. For broad commercialisation of small fuel cells to be achieved, there needs to be significant confidence in their safety in both domestic and environmental applications. Low-power HFCs are housed in protective steel enclosures. Standard enclosures have plain rectangular ventilation openings intended for the thermal management of electronics, not for the dispersion of a buoyant gas. Degradation of the HFC or of the supply pipework in use could lead to a low-level leak and a build-up of hydrogen gas in the enclosure. Hydrogen’s wide flammable range (4-75%) is a significant safety concern: ineffective enclosure ventilation could allow flammable mixtures to develop, with the risk of explosion. Mechanical ventilation is effective at managing enclosure hydrogen concentrations but drains HFC power and is vulnerable to failure. This is undesirable in low-power and remote installations, where reliable passive ventilation systems are preferred. Passive ventilation depends upon buoyancy-driven flow, with the size, shape and position of the ventilation openings critical for producing predictable flows and maintaining low buoyant gas concentrations. For environmentally sited enclosures, ventilation openings with pressed horizontal and angled louvres are preferred, to protect the HFC and electronics inside. There is an economic cost to adding louvres, but also a safety question: does the use of pressed louvre vents impair enclosure passive ventilation performance compared with plain vents of the same opening area? Comparative tests of pressed louvre and plain vents of the same opening area were undertaken on a small enclosure (0.144 m³). A displacement ventilation arrangement was incorporated, with opposing upper and lower ventilation openings, and a range of vent areas was tested. Helium (used as a safe analogue for hydrogen) was released from a 4 mm nozzle at the base of the enclosure to simulate a hydrogen leak, at leak rates from 1 to 10 lpm. Helium sensors recorded concentrations at eight heights in the enclosure, which was otherwise empty. These tests determined that pressed and angled louvre ventilation openings impaired the passive ventilation flow and increased helium concentrations in the enclosure. The high-level stratified buoyant gas layers were also deeper than with plain vent openings and were within the flammable range. The presence of gas within the flammable range is of concern, particularly as the addition of the fuel cell and electronics would further reduce the available volume and increase concentrations. The opening area of louvre vents would need to be greater than that of equivalent plain vents to achieve comparable ventilation flows, or alternative schemes would need to be considered.
Keywords: enclosure, fuel cell, helium, hydrogen safety, louvre vent, passive ventilation
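For orientation, buoyancy-driven flow through opposed upper and lower openings is often estimated with the textbook envelope-flow relation Q = Cd·A·sqrt(2gH·Δρ/ρ); the sketch below uses illustrative numbers only and is not the authors’ model or data.

```python
import math

def passive_vent_flow(area_m2, height_m, rho_amb=1.20, rho_layer=1.05, cd=0.6):
    """Textbook displacement-ventilation estimate through equal upper and lower
    openings: Q = Cd * A * sqrt(2 * g * H * (rho_amb - rho_layer) / rho_amb)."""
    g = 9.81
    drho = rho_amb - rho_layer   # density deficit of the helium-enriched layer
    return cd * area_m2 * math.sqrt(2.0 * g * height_m * drho / rho_amb)

# Illustrative values: 0.01 m^2 openings separated by 0.5 m in a 0.144 m^3 box
q = passive_vent_flow(0.01, 0.5)
print(f"Estimated flow: {q * 1000:.1f} L/s")
```

In these terms, louvres can be read as reducing the effective discharge coefficient Cd, which is one way to interpret the observed impairment relative to plain vents of the same opening area.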
101 Fully Autonomous Vertical Farm to Increase Crop Production
Authors: Simone Cinquemani, Lorenzo Mantovani, Aleksander Dabek
Abstract:
New technologies in agriculture are opening up new challenges and new opportunities. Among these, robotics, vision, and artificial intelligence are certainly the ones that will make possible a significant leap compared to traditional agricultural techniques. The indoor farming sector, in particular, will benefit the most from these solutions. Vertical farming is a new field of research where mechanical engineering can bring the knowledge and know-how to transform a highly labor-based business into a fully autonomous system. The aim of the research is to develop a multi-purpose, modular, and fully integrated platform for crop production in indoor vertical farming. Activities are based both on hardware development, such as automatic tools to perform different operations on soil and plants, and on research into the extensive use of monitoring techniques based on machine learning algorithms. This paper presents the preliminary results of a research project on a vertical farm living lab designed to (i) develop and test vertical farming cultivation practices, (ii) introduce a very high degree of mechanization and automation that makes all processes replicable, fully measurable, standardized and automated, (iii) develop a coordinated control and management environment for autonomous multiplatform or tele-operated robots carrying out complex tasks in the presence of environmental and cultivation constraints, and (iv) integrate AI-based algorithms as a decision support system to improve production quality. The coordinated management of multiplatform systems still presents innumerable challenges that require a strongly multidisciplinary approach from the design, development, and implementation phases onward. The methodology is based on (i) the development of models capable of describing the dynamics of the various platforms and their interactions, (ii) the integrated design of mechatronic systems able to respond to the needs of the context and to exploit the strengths highlighted by the models, and (iii) implementation and experimental tests performed to assess the real effectiveness of the systems created and to evaluate any weaknesses, so as to proceed with targeted development. To these ends, a fully automated laboratory for growing plants in vertical farming has been developed and tested. The living lab makes extensive use of sensors to determine the overall state of the structure, the crops, and the systems used. The availability of specific measurements for each element involved in the cultivation process makes it possible to evaluate the effects of each variable of interest and allows the creation of a robust model of the system as a whole. The automation of the laboratory is completed by robots that carry out all the necessary operations, from sowing to handling to harvesting. These systems work synergistically thanks to detailed models developed from the information collected, which deepen the knowledge of these types of crops and guarantee the traceability of every action performed on each single plant. To this end, artificial intelligence algorithms have been developed to allow the synergistic operation of all systems.
Keywords: automation, vertical farming, robot, artificial intelligence, vision, control
100 Single Crystal Growth in Floating-Zone Method and Properties of Spin Ladders: Quantum Magnets
Authors: Rabindranath Bag, Surjeet Singh
Abstract:
Materials in which the electrons are strongly correlated provide some of the most challenging and exciting problems in condensed matter physics today. After the discovery of high-critical-temperature superconductivity in layered, two-dimensional copper oxides, many physicists turned their attention to cuprates, leading to an upsurge of interest in the synthesis and physical properties of copper-oxide-based materials. The quest to understand the superconducting mechanism in high-temperature cuprates drew attention to somewhat simpler compounds consisting of spin chains, i.e., one-dimensional lattices of coupled spins. Low-dimensional quantum magnets are of huge contemporary interest in basic science as well as in emerging technologies such as quantum computing, quantum information theory, and heat management in microelectronic devices. Spin ladders are an example of quasi-one-dimensional quantum magnets, providing a bridge between one- and two-dimensional materials. One example of a quasi-one-dimensional spin-ladder compound is Sr14Cu24O41, which exhibits many interesting and exciting physical phenomena characteristic of low-dimensional systems. Very recently, the ladder compound Sr14Cu24O41 was shown to exhibit long-distance quantum entanglement, which is crucial to quantum information theory. It is also well known that hole compensation in this material results in very high (metal-like) anisotropic thermal conductivity at room temperature. These observations suggest that Sr14Cu24O41 is a potential multifunctional material inviting further detailed investigation. Investigating these properties requires large, high-quality single crystals, but these systems melt incongruently, which makes growing such crystals difficult. Hence, we use the TSFZ (Travelling Solvent Floating Zone) method to grow high-quality single crystals of these low-dimensional magnets. In addition, Sr14Cu24O41 has a unique crystal structure (alternating stacks of planes containing edge-sharing CuO2 chains and planes containing two-leg Cu2O3 ladders, with intermediate Sr layers along the b-axis), which is also incommensurate in nature. It exhibits abundant physical phenomena such as spin dimerization, crystallization of charge holes, and charge density waves. Most research so far has involved introducing defects on the A-site (Sr). Apart from A-site (Sr) doping, there are only a few studies discussing B-site (Cu) doping of polycrystalline Sr14Cu24O41, the reason being that there are two possible doping sites for Cu (the CuO2 chain and the Cu2O3 ladder). Therefore, in the present work, crystals (pristine and Cu-site doped) were grown using the TSFZ method by tuning the growth parameters. Laue diffraction images, optical polarized microscopy and scanning electron microscopy (SEM) images confirm the quality of the grown crystals. Here, we report the single crystal growth and the magnetic and transport properties of Sr14Cu24O41 and its lightly doped variants (magnetic and non-magnetic) containing less than 1% Co, Ni, Al and Zn impurities. Since any real system will contain some amount of weak disorder, our studies of these ladder compounds with controlled dilute disorder are significant in the present context.
Keywords: low-dimensional quantum magnets, single crystal, spin-ladder, TSFZ technique
99 Increased Stability of Rubber-Modified Asphalt Mixtures to Swelling, Expansion and Rebound Effect during Post-Compaction
Authors: Fernando Martinez Soto, Gaetano Di Mino
Abstract:
The application of rubber in bituminous mixtures requires attention and care during mixing and compaction. Rubber modifies the mixture’s properties because it reacts within the internal structure of the bitumen at high temperatures (the interaction of solvents with the binder-rubber-aggregate system), changing the performance of the mixture. The main change is an increase in the viscosity and elasticity of the binder due to the larger rubber particle sizes used in the dry process; this positive effect is counteracted, however, by the short mixing times compared with the wet technology, and by the transport, curing and post-compaction stages of the mixtures. Negative effects such as swelling of the rubber particles, rebound of the specimens, and thermal changes caused by differential expansion of the structure inside the mixtures can therefore change the mechanical properties of the rubberized blends. Based on the dry technology, different asphalt-rubber binders using devulcanized or natural rubber (truck and bus tread rubber) have served to demonstrate these effects, and how to mitigate them, in two dense gap-graded rubber-modified asphalt concrete mixes (RUMAC), enhancing the stability, workability and durability of samples compacted with the Superpave gyratory compactor. This paper describes the procedures developed in the Department of Civil Engineering of the University of Palermo between September 2016 and March 2017 for characterizing the post-compaction behavior and mix stability of one conventional mixture (hot mix asphalt without rubber) and two gap-graded rubberized asphalt mixes graded for rail sub-ballast layers, with a nominal aggregate size of Ø22.4 mm according to the European standard. The main purpose of this laboratory research is the application of ambient ground rubber from scrap tires, processed at conventional temperature (20ºC), in hot bituminous mixtures (160-220ºC) as a substitute for 1.5%, 2% and 3% by weight of the total aggregates (3.2%, 4.2% and 6.2%, respectively, by volume of the limestone aggregates, bulk density 2.81 g/cm³), considered as part of the aggregate rather than of the asphalt binder. The reference bituminous mixture was designed with 4% binder and ±3% air voids, manufactured with a conventional B50/70 bitumen at mixing-compaction temperatures of 160ºC-145ºC to guarantee the workability of the mixes. The rubber proportions proposed are 60-40% for the mixtures with 1.5% and 2% rubber, and 20-80% for the mixture with 3% rubber (for example, 60% of Ø0.4-2 mm particles and 40% of Ø2-4 mm particles). The temperature of the asphalt cement is 160-180ºC for mixing and 145-160ºC for compaction, according to the optimal viscosity values from Brookfield viscometer and ‘ring and ball’ penetration tests. The crumb rubber particles act as a rubber aggregate in the mixture, with sizes between 0.4 and 2 mm in the first fraction and 2-4 mm in the second. Ambient ground rubber with a specific gravity of 1.154 g/cm³ is used; the rubber is free of loose fabric, wire and other contaminants. Optimal results reducing the swelling effect were found in real beams and cylindrical specimens of each HMA mixture. The different factors affecting the interaction process, such as temperature, rubber particle size, and the number of cycles and pressures of compaction, are explained.
Keywords: crumb-rubber, gyratory compactor, rebounding effect, superpave mix-design, swelling, sub-ballast railway
98 Automated End of Sprint Detection for Force-Velocity-Power Analysis with GPS/GNSS Systems
Authors: Patrick Cormier, Cesar Meylan, Matt Jensen, Dana Agar-Newman, Chloe Werle, Ming-Chang Tsai, Marc Klimstra
Abstract:
Sprint-derived horizontal force-velocity-power (FVP) profiles can be developed with adequate validity and reliability using satellite (GPS/GNSS) systems. However, FVP metrics are sensitive to small nuances in the data processing procedures, such that minor differences in defining the onset and end of the sprint can produce different FVP metric outcomes. Furthermore, team sports require rapid analysis and feedback of results from multiple athletes, so standardized and automated methods that improve the speed, efficiency and reliability of this process are warranted. The purpose of this study was therefore to compare different methods of sprint-end detection for the development of FVP profiles from 10 Hz GPS/GNSS data, using goodness-of-fit and inter-trial reliability statistics. Seventeen national-team female soccer players performed the FVP protocol, consisting of 2x40 m maximal sprints towards the end of a soccer-specific warm-up in a training session (1020 hPa, wind = 0, temperature = 30°C) on an open grass field. Each player wore a 10 Hz Catapult unit (Vector S7, Catapult Innovations) in a vest pouch between the scapulae. All data were analyzed following common procedures. The variables computed and assessed were the model parameters, estimated maximal sprint speed (MSS) and the acceleration time constant τ, in addition to relative horizontal force (F₀), velocity at zero force (V₀), and relative mechanical power (Pmax). The onset of the sprints was standardized with an acceleration threshold of 0.1 m/s². The sprint-end detection methods were: 1. the time when peak velocity (MSS) was achieved (zero acceleration); 2. the time when velocity had dropped 0.4 m/s below peak; 3. the time when velocity had dropped 0.6 m/s below peak; and 4. the time when the distance integrated from the GPS/GNSS signal reached 40 m. The goodness of fit of each method was determined using the residual sum of squares (RSS), quantifying the error of the FVP modeling of the GPS/GNSS sprint data. Inter-trial reliability (from 2 trials) was assessed using intraclass correlation coefficients (ICC). For goodness of fit, the technique using the time when peak velocity was achieved (zero acceleration) had the lowest RSS values, followed by the -0.4 and -0.6 m/s velocity decays, with the 40 m end giving the highest RSS values. For inter-trial reliability, the techniques defined as the time at (method 1) or shortly after (methods 2 and 3) MSS was achieved had very large to near-perfect ICCs, and the time at the 40 m integrated distance (method 4) had large to very large ICCs. Peak velocity was reached at 29.52 ± 4.02 m. Sport scientists should therefore define the end of the sprint either when peak velocity is reached or shortly afterwards, to improve goodness of fit and achieve reliable between-trial FVP profile metrics, although more robust processing and modeling procedures should be developed in future research to improve sprint model fitting. The protocol was seamlessly integrated into usual training, which shows promise for sprint monitoring in the field with this technology.
Keywords: automated, biomechanics, team-sports, sprint
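A minimal sketch of the mono-exponential velocity model and of end-detection methods 2-3 follows; the trace is synthetic, aerodynamic drag is ignored, and this is a simplified stand-in for the study’s processing pipeline rather than a reproduction of it.

```python
import numpy as np
from scipy.optimize import curve_fit

FS = 10.0  # Hz, GPS/GNSS sampling rate

def mono_exp(t, mss, tau):
    """Mono-exponential sprint model: v(t) = MSS * (1 - exp(-t / tau))."""
    return mss * (1.0 - np.exp(-t / tau))

def trim_end(v, drop=0.4):
    """End-of-sprint detection (methods 2-3): keep samples up to the point
    where velocity has fallen `drop` m/s below its peak."""
    i_peak = int(np.argmax(v))
    below = np.nonzero(v[i_peak:] < v[i_peak] - drop)[0]
    end = i_peak + (below[0] if below.size else v.size - i_peak)
    return v[:end]

def fvp_metrics(v):
    t = np.arange(v.size) / FS
    (mss, tau), _ = curve_fit(mono_exp, t, v, p0=(8.0, 1.0))
    f0 = mss / tau           # relative horizontal force, N/kg (drag ignored)
    pmax = f0 * mss / 4.0    # relative mechanical power, W/kg
    return mss, tau, f0, pmax

# Synthetic noisy sprint: acceleration phase, then a gentle deceleration
t = np.arange(0.0, 6.0, 1.0 / FS)
v = mono_exp(t, 8.5, 1.1) + np.random.normal(0.0, 0.05, t.size)
v[50:] -= np.linspace(0.0, 1.0, v.size - 50)   # athlete slows after ~5 s
print(fvp_metrics(trim_end(v, drop=0.4)))
```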
97 Benefits of High Power Impulse Magnetron Sputtering (HiPIMS) Method for Preparation of Transparent Indium Gallium Zinc Oxide (IGZO) Thin Films
Authors: Pavel Baroch, Jiri Rezek, Michal Prochazka, Tomas Kozak, Jiri Houska
Abstract:
Transparent semiconducting amorphous IGZO films have attracted great attention due to their excellent electrical properties and their possible use in thin-film transistors or photovoltaic applications, as they show 20-50 times higher mobility than amorphous silicon. It is also known that the properties of IGZO films are highly sensitive to process parameters, especially to the oxygen partial pressure. In this study, we compare the properties of transparent semiconducting amorphous indium gallium zinc oxide (IGZO) thin films prepared by conventional sputtering methods with those prepared by high-power impulse magnetron sputtering (HiPIMS). Furthermore, we sought to optimize the electrical and optical properties of the IGZO thin films and to investigate the possibility of applying these coatings to thermally sensitive flexible substrates. We employed DC, pulsed DC, mid-frequency sine wave and HiPIMS power supplies for the magnetron deposition, with the magnetrons equipped with sintered ceramic InGaZnO targets. As oxygen vacancies are considered the main source of carriers in IGZO films, it is expected that as the oxygen partial pressure increases, the number of oxygen vacancies decreases and the film resistivity rises. In all experiments we therefore focused on the effect of oxygen partial pressure, discharge power and pulsed power mode on the electrical, optical and mechanical properties of the IGZO thin films, and also on the thermal load delivered to the substrate. As expected, we observed a very sharp transition between low- and high-resistivity films as a function of oxygen partial pressure when conventional sputtering methods/power supplies were utilized. We therefore established and utilized a HiPIMS sputtering system to widen the operating window and allow better control of the IGZO thin-film properties. It is shown that with this system we can effectively eliminate the steep transition between low- and high-resistivity films exhibited by the DC mode of sputtering, and the electrical resistivity can be controlled over the wide range of 10⁻² to 10⁵ Ω·cm. The highest charge carrier mobility (up to 50 cm²/V·s) was obtained at very low oxygen partial pressures. Utilization of HiPIMS also led to a significant decrease in the thermal load delivered to the substrate, which is beneficial for deposition on thermally sensitive, flexible polymer substrates. The deposition rate as a function of discharge power and oxygen partial pressure was also systematically investigated, and the results of the optical, electrical and structural analyses will be discussed in detail. The most important result demonstrates almost linear control of the IGZO thin-film resistivity with increasing oxygen partial pressure in the HiPIMS mode of sputtering, with highly transparent, low-resistivity films prepared already at low pO₂. It was also found that the HiPIMS technique significantly improved the surface smoothness in the reactive mode of sputtering (with increasing oxygen partial pressure).
Keywords: charge carrier mobility, HiPIMS, IGZO, resistivity
96 Physical Aspects of Shape Memory and Reversibility in Shape Memory Alloys
Authors: Osman Adiguzel
Abstract:
Shape memory alloys belong to a class of smart materials exhibiting a peculiar property called the shape memory effect. This property is characterized by the recoverability of two particular shapes of the material at different temperatures. These materials are called smart materials due to their functionality and their capacity to respond to changes in the environment. Shape memory materials are used as shape memory devices in many interdisciplinary fields such as medicine, bioengineering, metallurgy, the building industry and many fields of engineering. The shape memory effect is performed thermally, by heating and cooling after initial cooling and stressing treatments; this behavior is called thermoelasticity. The effect is based on martensitic transformations, which are characterized by changes in the crystal structure of the material, and it is the result of successive thermally and stress-induced martensitic transformations. Shape memory alloys exhibit thermoelasticity and superelasticity by means of deformation in the low-temperature product phase and in the high-temperature parent phase region, respectively. Superelasticity is performed by stressing and releasing the material in the parent phase region. The loading and unloading paths differ in the stress-strain diagram, and the cycling loop reveals energy dissipation. The strain energy is stored after release, and these alloys are mainly used as deformation-absorbing materials in the control of civil structures subjected to seismic events, owing to their absorption of strain energy during a disaster or earthquake. The thermally induced martensitic transformation occurs on cooling, along with lattice twinning through cooperative movements of atoms by means of lattice-invariant shears: ordered parent phase structures turn into twinned martensite structures, and the twinned structures turn into detwinned structures by means of the stress-induced martensitic transformation when the material is stressed in the martensitic condition. The thermally induced transformation occurs through the cooperative movement of atoms in two opposite <110>-type directions on the {110}-type planes of the austenite matrix, which form the basal plane of the martensite. Copper-based alloys exhibit this property in the metastable β-phase region, which has bcc-based structures in the high-temperature parent phase field. Lattice-invariant shear and twinning are not uniform in copper-based ternary alloys and give rise to the formation of complex layered structures, depending on the stacking sequences on the close-packed planes of the ordered parent phase lattice. In the present contribution, X-ray diffraction and transmission electron microscopy (TEM) studies were carried out on two copper-based alloys, CuAlMn and CuZnAl. The X-ray diffraction profiles and electron diffraction patterns reveal that both alloys exhibit superlattice reflections inherited from the parent phase, owing to the displacive character of the martensitic transformation. X-ray diffractograms taken over a long time interval show that the diffraction angles and the intensities of the diffraction peaks change with aging duration at room temperature. In particular, some of the successive peak pairs providing a special relation between Miller indices move closer to each other. This result points to a rearrangement of atoms in a diffusive manner.
Keywords: shape memory effect, martensitic transformation, reversibility, superelasticity, twinning, detwinning
95 Creation of a Test Machine for the Scientific Investigation of Chain Shot
Authors: Mark McGuire, Eric Shannon, John Parmigiani
Abstract:
Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject the saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of the harvester saw chain can then cause the resulting open chain loop to fracture a second time, due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can produce a projectile consisting of one to several chain links. This projectile, referred to as a chain shot, travels at speeds similar to a bullet but typically has greater mass, and is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine capable of causing chain shot to occur under carefully controlled conditions and of accurately measuring the response. Worldwide, few such test machines exist, and those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of a comprehensive scientific investigation of chain shot. The machine can test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, while accurately measuring the corresponding key technical parameters. The test machine was constructed inside a standard shipping container, providing space for both an operator station and a test chamber. To contain the chain shot under all possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw-chain drive unit and bar-mounting system is modular and can be located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning, and chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated following ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. The measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will yield a scientific understanding of chain shot and, consequently, improved products and greater harvester-operator safety.
Keywords: chain shot, safety, testing, timber harvesters