Search results for: Monte Carlo methods
999 In Silico Modeling of Drugs' Milk/Plasma Ratio in Human Breast Milk Using Structural Descriptors
Authors: Navid Kaboudi, Ali Shayanfar
Abstract:
Introduction: Feeding infants with safe milk from the beginning of their life is an important issue. Drugs used by mothers can alter the composition of milk in ways that are not only unsuitable but even toxic for infants. Consumption of permeable drugs by the mother during this sensitive period could lead to serious side effects in the infant. Due to the ethical restrictions on drug testing in humans, especially in women during lactation, computational approaches based on structural parameters could be useful. The aim of this study is to develop mechanistic models to predict the M/P ratio of drugs during the breastfeeding period based on their structural descriptors. Methods: Two hundred and nine different chemicals with known M/P ratios were used in this study. All drugs were categorized into two groups based on their M/P value according to the Malone classification: 1) drugs with M/P>1, which are considered high risk; 2) drugs with M/P<1, which are considered low risk. Thirty-eight chemical descriptors were calculated with ACD/Labs 6.00 and DataWarrior software in order to assess penetration during the breastfeeding period. Four specific models based on the number of hydrogen bond acceptors, polar surface area, total surface area, and number of acidic oxygens were then established for the prediction; these descriptors can predict penetration with acceptable accuracy. For the remaining compounds of each model (N = 147, 158, 160, and 174 for models 1 to 4, respectively), binary logistic regression with SPSS 21 was carried out in order to obtain a model to predict the penetration ratio of compounds. Only structural descriptors with p-value<0.1 remained in the final model. Results and discussion: Four different models based on the number of hydrogen bond acceptors, polar surface area, and total surface area were obtained in order to predict the penetration of drugs into human milk during the breastfeeding period. About 3-4% of milk consists of lipids, and the amount of lipid increases after parturition. Lipid-soluble drugs diffuse alongside fats from plasma to the mammary glands, so lipophilicity plays a vital role in predicting the penetration class of drugs during the lactation period. The logistic regression models showed that compounds with a number of hydrogen bond acceptors, PSA, and TSA above 5, 90, and 25, respectively, are less permeable to milk because they are less soluble in the milk fat fraction. The pH of milk is acidic, and consequently basic compounds tend to be more concentrated in milk than in plasma, while acidic compounds may reach lower concentrations in milk than in plasma. Conclusion: In this study, we developed four regression-based models to predict the penetration class of drugs during the lactation period. The obtained models can speed up the drug development process and save energy and costs. Milk/plasma ratio assessment of drugs requires multiple steps of animal testing, which has its own ethical issues. QSAR modeling could help scientists reduce the amount of animal testing, and our models are suited to do that.
Keywords: logistic regression, breastfeeding, descriptors, penetration
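As a rough illustration of the kind of binary logistic-regression model described in this abstract, the sketch below classifies compounds into high/low M/P penetration classes from four structural descriptors. The descriptor values, class labels and the use of scikit-learn are assumptions made for demonstration only; the study itself used 209 compounds and SPSS 21.

```python
# Illustrative sketch only: a binary logistic regression classifier of milk/plasma
# (M/P) penetration class from structural descriptors, in the spirit of the models
# described above. The descriptor values and class labels below are invented for
# demonstration; the study used 209 compounds and SPSS 21, not scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: hydrogen-bond acceptors (HBA), polar surface area (PSA), total surface
# area (TSA), number of acidic oxygens -- all values here are hypothetical.
X = np.array([
    [2,  45.0, 210.0, 0],
    [7, 110.0, 320.0, 1],
    [4,  60.0, 250.0, 0],
    [9, 140.0, 400.0, 2],
    [3,  50.0, 230.0, 0],
    [6,  95.0, 310.0, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = high risk (M/P > 1), 0 = low risk (M/P < 1)

model = LogisticRegression().fit(X, y)

# Predicted penetration class and probability for a new (hypothetical) compound.
new_compound = np.array([[5, 90.0, 25.0, 0]])
print(model.predict(new_compound), model.predict_proba(new_compound))
```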
Procedia PDF Downloads 72
998 Impact of 6-Week Brain Endurance Training on Cognitive and Cycling Performance in Highly Trained Individuals
Authors: W. Staiano, S. Marcora
Abstract:
Introduction: It has been proposed that the acute negative effect of mental fatigue (MF) could potentially become a training stimulus for the brain (brain endurance training, BET) to adapt and improve its ability to attenuate MF states during sport competitions. Purpose: The aim of this study was to test the efficacy of 6 weeks of BET on cognitive and cycling tests in a group of well-trained subjects. We hypothesised that the combination of BET and standard physical training (SPT) would increase cognitive capacity and cycling performance by reducing the rating of perceived exertion (RPE) and increasing resilience to fatigue more than SPT alone. Methods: In a randomized controlled trial design, 26 well-trained participants, after a familiarization session, cycled to exhaustion (TTE) at 80% peak power output (PPO) and, after 90 min rest, at 65% PPO, before and after random allocation to 6 weeks of BET or an active placebo control. Cognitive performance was measured using 30 min of a Stroop colour task performed before the cycling performance. During the training, the BET group performed a series of cognitive tasks for a total of 30 sessions (5 sessions per week), with duration increasing from 30 to 60 min per session. The placebo group engaged in breathing relaxation training. Both groups were monitored for physical training and were naïve to the purpose of the study. Physiological and perceptual parameters of heart rate, lactate (LA) and RPE were recorded during the cycling performances, while subjective workload (NASA TLX scale) was measured during the training. Results: Group (BET vs. placebo) x Test (pre-test vs. post-test) mixed-model ANOVAs revealed significant interactions for performance at 80% PPO (p = .038) and 65% PPO (p = .011). In both tests, both groups improved their TTE performance; however, the BET group improved significantly more than placebo. No significant differences were found for heart rate during the TTE cycling tests. LA did not change significantly at rest in either group. However, at completion of the 65% TTE, it was significantly higher (p = 0.043) in the placebo condition compared to BET. RPE measured at ISO-time in BET was significantly lower (80% PPO, p = 0.041; 65% PPO, p = 0.021) compared to placebo. Cognitive results in the Stroop task showed that reaction time decreased at post-test in both groups. However, BET decreased significantly more (p = 0.01) compared to placebo, despite no differences in accuracy. During training sessions, participants in the BET group reported, through NASA TLX questionnaires, consistently and significantly higher (p < 0.01) mental demand compared to placebo. No significant differences were found for physical demand. Conclusion: The results of this study provide evidence that combining BET and SPT seems to be more effective than SPT alone in increasing cognitive and cycling performance in well-trained endurance participants. The cognitive overload produced during the 6-week BET can induce a reduction in the perception of effort at a given power output, thus improving cycling performance. Moreover, it provides evidence that including neurocognitive interventions will benefit athletes by increasing their mental resilience, without affecting their physical training load and routine.
Keywords: cognitive training, perception of effort, endurance performance, neuro-performance
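For a two-group pre/post design like the one above, the Group x Test interaction reported from the mixed-model ANOVA is equivalent to comparing pre-to-post gain scores between groups. A minimal sketch of that equivalence is given below; the TTE values are invented for illustration and are not the study data.

```python
# Minimal sketch: in a 2x2 mixed design (Group x Pre/Post), the Group x Test
# interaction is equivalent to an independent-samples t-test on the pre-to-post
# gain scores (F = t^2). TTE values below are invented, not the study data.
import numpy as np
from scipy import stats

tte_pre_bet  = np.array([620, 700, 580, 660, 640])   # time to exhaustion (s), BET group
tte_post_bet = np.array([760, 850, 700, 800, 790])
tte_pre_pla  = np.array([610, 690, 600, 650, 630])   # placebo group
tte_post_pla = np.array([660, 740, 640, 700, 680])

gain_bet = tte_post_bet - tte_pre_bet
gain_pla = tte_post_pla - tte_pre_pla

t, p = stats.ttest_ind(gain_bet, gain_pla)
print(f"interaction: F = {t**2:.2f}, p = {p:.3f}")  # did BET improve more than placebo?
```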
Procedia PDF Downloads 120
997 Comparison of a Capacitive Sensor Functionalized with Natural or Synthetic Receptors Selective towards Benzo(a)Pyrene
Authors: Natalia V. Beloglazova, Pieterjan Lenain, Martin Hedstrom, Dietmar Knopp, Sarah De Saeger
Abstract:
In recent years, polycyclic aromatic hydrocarbons (PAHs), which represent a hazard to humans and the entire ecosystem, have received increased interest due to their mutagenic, carcinogenic and endocrine disrupting properties. They are formed in all incomplete combustion processes of organic matter and are, as a consequence, ubiquitous in the environment. Benzo(a)pyrene (BaP) is on the priority list published by the US Environmental Protection Agency (US EPA) as the first PAH to be identified as a carcinogen and has often been used as a marker for PAH contamination in general. It can be found in different types of water samples; therefore, the European Commission set a limit value of 10 ng L–1 (10 ppt) for BaP in water intended for human consumption. Generally, different chromatographic techniques are used for PAH determination, but these assays require pre-concentration of the analyte, create large amounts of solvent waste, and are relatively time consuming and difficult to perform on-site. An alternative robust, stand-alone, and preferably cheap solution is needed: for example, a sensing unit which can be submerged in a river to monitor and continuously sample BaP. An affinity sensor based on capacitive transduction was developed. Natural antibodies or their synthetic analogues can be used as ligands. Ideally the sensor should operate independently over a longer period of time, e.g. several weeks or months; therefore the use of molecularly imprinted polymers (MIPs) was discussed. MIPs are synthetic antibodies which are selective for a chosen target molecule. Their robustness allows application in environments for which biological recognition elements are unsuitable or denature. They can be reused multiple times, which is essential to meet the stand-alone requirement. BaP is a highly lipophilic compound and does not contain any functional groups in its structure, thus excluding non-covalent imprinting methods based on ionic interactions. Instead, the MIP syntheses were based on non-covalent hydrophobic and π-π interactions. Different polymerization strategies were compared, and the best results were obtained with MIPs produced using electropolymerization. 4-vinylpyridine (VP) and divinylbenzene (DVB) were used as monomer and cross-linker in the polymerization reaction. The selectivity and recovery of the MIP were compared to a non-imprinted polymer (NIP). Electrodes were functionalized with a natural receptor (monoclonal anti-BaP antibody) and with MIPs selective towards BaP. Different sets of electrodes were evaluated and their properties such as sensitivity, selectivity and linear range were determined and compared. It was found that both receptors can reach a cut-off level comparable to the established ML, and although the antibody showed better cross-reactivity and affinity, the MIPs were the more convenient receptor due to their ability to be regenerated and their stability in river water for up to 7 days.
Keywords: antibody, benzo(a)pyrene, capacitive sensor, MIPs, river water
Procedia PDF Downloads 303
996 Development of a Novel Ankle-Foot Orthotic Using a User Centered Approach for Improved Satisfaction
Authors: Ahlad Neti, Elisa Arch, Martha Hall
Abstract:
Studies have shown that individuals who use ankle-foot orthoses (AFOs) have a high level of dissatisfaction with their current AFOs. Studies point to the focus on technical design, with little attention given to the user perspective, as a source of AFO designs that leave users dissatisfied. To design a new AFO that satisfies users and thereby improves their quality of life, the reasons for their dissatisfaction and their wants and needs for an improved AFO design must be identified. There has been little research into the user perspective on AFO use and desired improvements, so the relationship between AFO design and satisfaction in daily use must be assessed to develop appropriate metrics and constraints prior to designing a novel AFO. To assess the user perspective on AFO design, structured interviews were conducted with 7 individuals (average age of 64.29±8.81 years) who use AFOs. All interviews were transcribed and coded to identify common themes using the Grounded Theory Method in NVivo 12. Qualitative analysis of these results identified sources of user dissatisfaction, such as heaviness, bulk, and uncomfortable material, as well as overall needs and wants for an AFO. Beyond the user perspective, certain objective factors must be considered in the construction of metrics and constraints to ensure that the AFO fulfills its medical purpose. These more objective metrics are rooted in common medical device market and technical standards. Given the large body of research concerning these standards, these objective metrics and constraints were derived through a literature review. Through these two methods, a comprehensive list of metrics and constraints accounting for both the user perspective on AFO design and the AFO's medical purpose was compiled. These metrics and constraints will establish the framework for designing a new AFO that carries out its medical purpose while also improving the user experience. The metrics can be categorized into several overarching areas for AFO improvement. Categories of user-perspective-related metrics include comfort, discreetness, aesthetics, ease of use, and compatibility with clothing. Categories of medical-purpose-related metrics include biomechanical functionality, durability, and affordability. These metrics were used to guide an iterative prototyping process. Six concepts were ideated and compared using system-level analysis. From these six concepts, two concepts – the piano wire model and the segmented model – were selected to move forward into prototyping. Evaluation of non-functional prototypes of the piano wire and segmented models determined that the piano wire model better fulfilled the metrics by offering increased stability, longer durability, fewer points of failure, and a core component strong enough to allow a sock to cover the AFO while maintaining the overall structure. As such, the piano wire AFO has moved forward into the functional prototyping phase, and healthy-subject testing is being designed and participants recruited to conduct design validation and verification.
Keywords: ankle-foot orthotic, assistive technology, human centered design, medical devices
Procedia PDF Downloads 156
995 Exploring Accessible Filmmaking and Video for Deafblind Audiences through Multisensory Participatory Design
Authors: Aikaterini Tavoulari, Mike Richardson
Abstract:
Objective: This abstract presents a multisensory participatory design project, inspired by a deafblind PhD student's ambition to climb Mount Everest. The project aims to explore accessible routes for filmmaking and video content creation, catering to the needs of individuals with hearing and sight loss. By engaging participants from the Southwest area of England, recruited through multiple networks, the project seeks to gather qualitative data and insights to inform the development of inclusive media practices. Design: It will be a community-based participatory research design. The workshop will feature various stations that stimulate different senses, such as scent, touch, sight, hearing as well as movement. Participants will have the opportunity to engage with these multisensory experiences, providing valuable feedback on their effectiveness and potential for enhancing accessibility in filmmaking and video content. Methods: Brief semi-structured interviews will be conducted to collect qualitative data, allowing participants to share their perspectives, challenges, and suggestions for improvement. The participatory design approach emphasizes the importance of involving the target audience in the creative process. By actively engaging individuals with hearing and sight loss, the project aims to ensure that their needs and preferences are central to the development of accessible filmmaking techniques and video content. This collaborative effort seeks to bridge the gap between content creators and diverse audiences, fostering a more inclusive media landscape. Results: The findings from this study will contribute to the growing body of research on accessible filmmaking and video content creation. Via inductive thematic analysis of the qualitative data collected through interviews and observations, the researchers aim to identify key themes, challenges, and opportunities for creating engaging and inclusive media experiences for deafblind audiences. The insights will inform the development of best practices and guidelines for accessible filmmaking, empowering content creators to produce more inclusive and immersive video content. Conclusion: The abstract targets the hybrid International Conference for Disability and Diversity in Canada (January 2025), as this platform provides an excellent opportunity to share the outcomes of the project with a global audience of researchers, practitioners, and advocates working towards inclusivity and accessibility in various disability domains. By presenting this research at the conference in person, the authors aim to contribute to the ongoing discourse on disability and diversity, highlighting the importance of multisensory experiences and participatory design in creating accessible media content for the deafblind community and the community with sensory impairments more broadly.
Keywords: vision impairment, hearing impairment, deafblindness, accessibility, filmmaking
Procedia PDF Downloads 43
994 Nursing Experience in the Intensive Care of a Lung Cancer Patient with Pulmonary Embolism on Extracorporeal Membrane Oxygenation
Authors: Huang Wei-Yi
Abstract:
Objective: This article explores the intensive care nursing experience of a lung cancer patient with pulmonary embolism who was placed on ECMO. Following a sudden change in the patient’s condition and a consensus reached during a family meeting, the decision was made to withdraw life-sustaining equipment and collaborate with the palliative care team. Methods: The nursing period was from October 20 to October 27, 2023. The author monitored physiological data, observed, provided direct care, conducted interviews, performed physical assessments, and reviewed medical records. Together with the critical care team and bypass personnel, a comprehensive assessment was conducted using Gordon's Eleven Functional Health Patterns to identify the patient’s health issues, which included pain related to lung cancer and invasive devices, fear of death due to sudden deterioration, and altered tissue perfusion related to hemodynamic instability. Results: The patient was admitted with fever, back pain, and painful urination. During hospitalization, the patient experienced sudden discomfort followed by cardiac arrest, requiring multiple CPR attempts and ECMO placement. A subsequent CT angiogram revealed a pulmonary embolism. The patient's condition was further complicated by severe pain due to compression fractures, and a diagnosis of terminal lung cancer was unexpectedly confirmed, leading to emotional distress and uncertainty about future treatment. Throughout the critical care process, ECMO was removed on October 24, stabilizing the patient’s body temperature between 36.5-37°C and maintaining a mean arterial pressure of 60-80 mmHg. Pain management, including Morphine 8mg in 0.9% N/S 100ml IV drip q6h PRN and Ultracet 37.5 mg/325 mg 1# PO q6h, kept the pain level below 3. The patient was transferred to the ward on October 27 and discharged home on October 30. Conclusion: During the care period, collaboration with the medical team and palliative care professionals was crucial. Adjustments to pain medication, symptom management, and lung cancer-targeted therapy improved the patient’s physical discomfort and pain levels. By applying the unique functions of nursing and the four principles of palliative care, positive encouragement was provided. Family members, along with social workers, clergy, psychologists, and nutritionists, participated in cross-disciplinary care, alleviating anxiety and fear. The consensus to withdraw ECMO and life-sustaining equipment enabled the patient and family to receive high-quality care and maintain autonomy in decision-making. A follow-up call on November 1 confirmed that the patient was emotionally stable, pain-free, and continuing with targeted lung cancer therapy.
Keywords: intensive care, lung cancer, pulmonary embolism, ECMO
Procedia PDF Downloads 27
993 Cultural Adaptation of an Appropriate Intervention Tool for Mental Health among the Mohawk in Quebec
Authors: Liliana Gomez Cardona, Mary McComber, Kristyn Brown, Arlene Laliberté, Outi Linnaranta
Abstract:
The history of colonialism and more contemporary political issues have resulted in the exposure of the Kanien'kehá:ka of Kahnawake to challenging and even traumatic experiences. Colonization, religious missions, residential schools as well as economic and political marginalization are factors that have challenged the wellbeing and mental health of these populations. In psychiatry, screening for mental illness is often done using questionnaires in which the patient is expected to report how often he/she has certain symptoms. However, the Indigenous view of mental wellbeing may not fit well with this approach. Moreover, biomedical treatments do not always meet the needs of Indigenous people because they do not account for the culture and traditional healing methods that persist in many communities. The objectives were to assess whether the questionnaires commonly used in psychiatry to measure symptoms are appropriate and culturally safe for the Mohawk in Quebec, and to identify the most appropriate tool to assess and promote wellbeing, following the process necessary to improve its cultural sensitivity and safety for the Mohawk population. This is a qualitative, collaborative, and participatory action research project which respects First Nations protocols and the principles of ownership, control, access, and possession (OCAP). Data collection was based on five focus groups with stakeholders working with these populations and members of Indigenous communities. Thematic analysis of the collected data was conducted, and emerging themes were reviewed by an advisory group that led a revision of the content, use, and cultural and conceptual relevance of the instruments. The questionnaires measuring psychiatric symptoms face significant limitations in the local Indigenous context. We present the factors that make these tools not relevant among Mohawks. Although the Growth and Empowerment Measure (GEM) was originally developed among Indigenous peoples in Australia, the Mohawk in Quebec found that this tool comprehends critical aspects of their mental health and wellbeing more respectfully and accurately than questionnaires focused on measuring symptoms. We document the process of cultural adaptation of this tool, which was supported by community members to create a culturally safe tool that helps in growth and empowerment. The cultural adaptation of the GEM provides valuable information about the factors affecting wellbeing and contributes to mental health promotion. This process improves mental health services by giving health care providers useful information about the Mohawk population and their clients. We believe that integrating this tool in interventions can help create a bridge to improve communication between the Indigenous cultural perspective of the patient and the biomedical view of health care providers. Further work is needed to confirm the clinical utility of this tool in psychological and psychiatric intervention along with social and community services.
Keywords: cultural adaptation, cultural safety, empowerment, Mohawks, mental health, Quebec
Procedia PDF Downloads 153
992 Methodological Deficiencies in Knowledge Representation Conceptual Theories of Artificial Intelligence
Authors: Nasser Salah Eldin Mohammed Salih Shebka
Abstract:
Current problematic issues in AI fields are mainly due to those of knowledge representation conceptual theories, which are in turn reflected across the entire scope of cognitive sciences. Knowledge representation methods and tools are derived from theoretical concepts regarding the human scientific perception of the conception, nature, and process of knowledge acquisition, knowledge engineering and knowledge generation. And although these theoretical conceptions were themselves derived from the study of the human knowledge representation process and related theories, some essential factors were overlooked or underestimated, thus causing critical methodological deficiencies in the conceptual theories of human knowledge and knowledge representation conceptions. The evaluation criteria of human cumulative knowledge, from the perspectives of the nature and theoretical aspects of knowledge representation conceptions, are affected greatly by the very materialistic nature of cognitive sciences. This nature causes what we define as methodological deficiencies in the theoretical aspects of knowledge representation concepts in AI. These methodological deficiencies are not confined to applications of knowledge representation theories throughout AI fields, but also extend to the scientific nature of cognitive sciences. The methodological deficiencies we investigated in our work are:
- the segregation between cognitive abilities in knowledge-driven models;
- the insufficiency of the two-valued logic used to represent knowledge, particularly at the machine-language level, in relation to the problematic issues of semantics and meaning theories;
- the deficient consideration of the parameters of (existence) and (time) in the structure of knowledge.
The latter requires that we present a more detailed introduction of the manner in which the meanings of existence and time are to be considered in the structure of knowledge. This does not imply that this is easy to apply in structures of knowledge representation systems, but outlining a deficiency caused by the absence of such essential parameters can be considered an attempt to redefine knowledge representation conceptual approaches or, if that proves impossible, to construct a perspective on the possibility of simulating human cognition on machines. Furthermore, a redirection of the aforementioned expressions is required in order to formulate the exact meaning under discussion. This redirection of meaning alters the role of the existence and time factors to the Framework Environment of the knowledge structure, and therefore of knowledge representation conceptual theories. Findings of our work indicate the necessity to differentiate between two comparative concepts when addressing the relation between the existence and time parameters and the structure of human knowledge. The topics presented throughout the paper can also be viewed as an evaluation criterion to determine AI's capability to achieve its ultimate objectives.
Ultimately, we argue some of the implications of our findings, which suggest that although scientific progress may not have reached its peak, and human scientific evolution may not yet have reached a point where it is possible to discover evolutionary facts about the human brain and detailed descriptions of how it represents knowledge, unless these methodological deficiencies are properly addressed, the future of AI's qualitative progress remains questionable.
Keywords: cognitive sciences, knowledge representation, ontological reasoning, temporal logic
Procedia PDF Downloads 113
991 Evaluating Impact of Teacher Professional Development Program on Students’ Learning
Authors: S. C. Lin, W. W. Cheng, M. S. Wu
Abstract:
This study investigated the connection between a teacher professional development program and students' learning. The study took the Readers' Theater Teaching Program (RTTP) for professional development as an example to inquire how participants apply the new knowledge and skills learned from the RTTP to their teaching practice and how this in turn influences students' learning. The goals of the RTTP included: 1) to enhance teachers' RT content knowledge; 2) to implement RT instruction in teachers' classrooms in response to their professional development; 3) to improve students' reading fluency in the professional development teachers' classrooms. This study was a two-year project. The researchers applied mixed methods to conduct this study, including qualitative inquiry and a one-group pretest-posttest experimental design. In the first year, the study focused on designing and implementing the RTTP and evaluating participants' satisfaction with the RTTP, what they learned, and how they applied it to design their English reading curriculum. In the second year, the study adopted a quasi-experimental design and evaluated how participants' RT instruction influenced their students' learning, including English knowledge, skills, and attitudes. The participants in this study comprised two junior high school English teachers and their students. Data were collected from a number of different sources, including teaching observation, semi-structured interviews, teaching diaries, teachers' professional development portfolios, pre/post RT content knowledge tests, a teacher survey, and students' reading fluency tests. To analyze the data, both qualitative and quantitative data analysis were used. Qualitative data analysis included three stages: organizing data, coding data, and analyzing and interpreting data. Quantitative data analysis included descriptive analysis. The results indicated that the average percentage correct on pre-tests of RT content knowledge was 40.75%, with the two teachers ranging in prior knowledge from 35% to 46% in specific RT content. Post-test RT content scores ranged from 70% to 82% correct, with an average score of 76.50%. That gives teachers an average gain of 35.75% in overall content knowledge as measured by these pre/post exams. Teachers' pre-test scores were lowest in script writing and highest in performing. Script writing was also the content area that showed the highest gains in content knowledge. Moreover, participants held a positive attitude toward the RTTP. They reported that the professional learning community approach applied in the RTTP was beneficial to their professional development. Participants also applied the new skills and knowledge they learned from the RTTP to their practices. The evidence from this study indicated that RT English instruction significantly influenced students' reading fluency and classroom climate. All of the experimental group students made substantial progress in reading fluency after RT instruction. The study also identified several obstacles, and suggestions were made accordingly.
Keywords: teacher’s professional development, program evaluation, readers’ theater, english reading instruction, english reading fluency
Procedia PDF Downloads 398
990 Impact Location From Instrumented Mouthguard Kinematic Data In Rugby
Authors: Jazim Sohail, Filipe Teixeira-Dias
Abstract:
Mild traumatic brain injury (mTBI) within non-helmeted contact sports is a growing concern due to the serious risk of potential injury. Extensive research is being conducted into head kinematics in non-helmeted contact sports utilizing instrumented mouthguards that allow researchers to record accelerations and velocities of the head during and after an impact. This does not, however, allow the location of the impact on the head, and its magnitude and orientation, to be determined. This research proposes and validates two methods to quantify impact locations from instrumented mouthguard kinematic data, one using rigid body dynamics, the other utilizing machine learning. The rigid body dynamics technique focuses on establishing and matching moments from Euler's and torque equations in order to find the impact location on the head. The methodology is validated with impact data collected from a lab test with a dummy head fitted with an instrumented mouthguard. Additionally, a Hybrid III dummy head finite element model was utilized to create synthetic kinematic data sets for impacts from varying locations to validate the impact location algorithm. The algorithm calculates accurate impact locations; however, it requires preprocessing of live data, which is currently done by cross-referencing data timestamps with video footage. The machine learning technique focuses on eliminating this preprocessing step by establishing trends within time-series signals from instrumented mouthguards to determine the impact location on the head. An unsupervised learning technique is used to cluster together impacts within similar regions from an entire time-series signal. The kinematic signals established from mouthguards are converted to the frequency domain before a clustering algorithm is used to cluster together similar signals within a time series that may span the length of a game. Impacts are clustered within predetermined location bins. The same Hybrid III dummy finite element model is used to create impacts that closely replicate on-field impacts in order to create synthetic time-series datasets consisting of impacts in varying locations. These time-series data sets are used to validate the machine learning technique. The rigid body dynamics technique provides a good method to establish the impact location of signals that have already been labeled as true impacts and filtered out of the entire time series. The machine learning technique, by contrast, can be applied to long time-series signal data, but provides the impact location only within predetermined regions on the head. Additionally, the machine learning technique can be used to eliminate false impacts captured by sensors, saving additional time for data scientists using instrumented mouthguard kinematic data, as validating true impacts with video footage would not be required.
Keywords: head impacts, impact location, instrumented mouthguard, machine learning, mTBI
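A minimal sketch of the unsupervised pipeline outlined above: windowed mouthguard kinematic signals are converted to the frequency domain and clustered into a predetermined number of head-location bins. The window length, the FFT-magnitude features, and the choice of k-means are assumptions for illustration, not the authors' exact implementation.

```python
# Rough sketch of the unsupervised approach described above: windowed mouthguard
# kinematic signals are transformed to the frequency domain and clustered into
# impact-location bins. Window length, FFT features, and k-means are assumptions
# made for illustration, not the authors' exact pipeline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder: 200 windows of 6-axis kinematics (3 linear acc + 3 angular vel),
# each 256 samples long, as might be segmented from a full-game time series.
windows = rng.normal(size=(200, 6, 256))

# Frequency-domain features: magnitude spectrum of each axis, flattened per window.
spectra = np.abs(np.fft.rfft(windows, axis=-1))            # shape (200, 6, 129)
features = spectra.reshape(spectra.shape[0], -1)           # shape (200, 774)

# Cluster into a predetermined number of head-location bins (e.g. front, side, rear, crown).
n_location_bins = 4
labels = KMeans(n_clusters=n_location_bins, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))   # how many windows fell into each location bin
```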
Procedia PDF Downloads 217
989 One-Stage Conversion of Adjustable Gastric Band to One-Anastomosis Gastric Bypass Versus Sleeve Gastrectomy : A Single-Center Experience With a Short and Mid-term Follow-up
Authors: Basma Hussein Abdelaziz Hassan, Kareem Kamel, Philobater Bahgat Adly Awad, Karim Fahmy
Abstract:
Background: The laparoscopic adjustable gastric band was one of the most commonly applied bariatric procedures in the last 8 years. However, the failure rate was very high, with approximately 60% of patients not achieving the desired weight loss, and most patients sought another, revisional surgery. We therefore compared two of the most common weight loss procedures performed nowadays: laparoscopic sleeve gastrectomy and laparoscopic one-anastomosis gastric bypass. Objective: To compare weight loss and postoperative outcomes among patients undergoing conversion to laparoscopic one-anastomosis gastric bypass (cOAGB) versus laparoscopic sleeve gastrectomy (cSG) after a failed laparoscopic adjustable gastric band (LAGB). Patients and Methods: A prospective cohort study was conducted from June 2020 to June 2022 at a single medical center and included 77 patients undergoing single-stage conversion to cOAGB vs cSG. Patients were reassessed for weight loss, comorbidity remission, and postoperative complications at 6, 12, and 18 months. Results: There were 77 patients with failed LAGB in our study. Group I comprised 43 patients who underwent cOAGB and Group II comprised 34 patients who underwent cSG. The mean age was 38.58 in the cOAGB group and 39.47 in the cSG group (p=0.389). Of the 77 patients, 10 (12.99%) were males and 67 (87.01%) were females. Regarding body mass index (BMI), the mean BMI was 41.06 in the cOAGB group and 40.5 in the cSG group (p=0.042). The two groups were compared postoperatively with respect to EBWL%, BMI, and comorbidity remission over an 18-month follow-up. BMI was calculated postoperatively at three visits. After 6 months of follow-up, the mean BMI was 34.34 in the cOAGB group and 35.47 in the cSG group (p=0.229). At the 12-month follow-up, the mean BMI was 32.69 in the cOAGB group and 33.79 in the cSG group (p=0.2). Finally, the mean BMI after 18 months of follow-up was 30.02 in the cOAGB group and 31.79 in the cSG group (p=0.001). The differences between the groups were not statistically significant at 6 and 12 months of follow-up (p-values of 0.229 and 0.2, respectively); however, after 18 months of follow-up, patients who underwent cOAGB achieved a significantly lower BMI than those who underwent cSG. Regarding EBWL%, there was a statistically significant difference between the two groups. After 6 months of follow-up, the mean EBWL% was 35.9% in the cOAGB group and 33.14% in the cSG group. At the 12-month follow-up, the mean EBWL% was 52.35 in the cOAGB group and 48.76 in the cSG group (p=0.045). Finally, the mean EBWL% after 18 months of follow-up was 62.06 ±8.68 in the cOAGB group and 55.58 ±10.87 in the cSG group (p=0.005). Regarding comorbidity remission, diabetes mellitus remission was found in 22 (88%) patients in the cOAGB group and 10 (71.4%) patients in the cSG group (p=0.225). Hypertension remission was found in 20 (80%) patients in the cOAGB group and 14 (82.4%) patients in the cSG group (p=1). In addition, dyslipidemia remission was found in 27 (87%) patients in the cOAGB group and 17 (70%) patients in the cSG group (p=0.18). Finally, GERD remission was found in 15 (88.2%) patients in the cOAGB group and 6 (60%) patients in the cSG group (p=0.47). There were no statistically significant differences between the two groups in the postoperative outcomes.
Conclusion: This study suggests that the conversion of LAGB to either cOAGB or cSG can feasibly be performed as a single-stage operation. cOAGB achieved significantly better weight loss results than cSG over the mid-term follow-up; however, there was no significant difference in postoperative complications or in the resolution of comorbidities. Therefore, cOAGB could provide a reliable alternative, but this needs to be substantiated in future long-term studies.
Keywords: laparoscopic, gastric banding, one-anastomosis gastric bypass, sleeve gastrectomy, revisional surgery, weight loss
Procedia PDF Downloads 62
988 Gender Stereotypes in the Media Content as an Obstacle for Elimination of Discrimination against Women in the Republic of Serbia
Authors: Mirjana Dokmanovic
Abstract:
The main topic of this paper is the analysis of the presence of gender stereotypes in media content in the Republic of Serbia with respect to the state's commitments to eliminate discrimination against women. The research methodology included an analysis of the media content of six daily newspapers and two magazines on 28 December 2015 and an analysis of reality TV show programs in 2015 from a gender perspective. The methods also included desk research and a qualitative analysis of the available data, statistics, policy papers, studies, and reports produced by the government, the Ministry of Culture and Information, the Regulatory Body for Electronic Media, the Press Council, the associations of media professionals, the independent human rights bodies, and civil society organizations (CSOs). As a State Signatory to the Convention on the Elimination of All Forms of Discrimination against Women, the Republic of Serbia has adopted numerous measures in this field, including the Law on Equality between Sexes and the national gender equality strategies. Special attention has been paid to eliminating gender stereotypes and prejudices in media content and in the portrayal of women. This practice is forbidden by the Law on Electronic Media, the Law on Public Information and Media, the Law on Public Service Broadcasting and the Bylaw on the Protection of Human Rights in the Provision of Media Services. Despite these commitments, no progress has been achieved in eliminating gender stereotypes in media content. The research indicates that the media perpetuate traditional gender roles and patriarchal patterns. Female politicians, entrepreneurs, academics, scientists, and engineers are very rarely portrayed in the media. On the other hand, the media focus on women as celebrities, singers, and actresses. Women are underrepresented in the pages related to politics and the economy, while they are mostly present in cover stories related to show business, health care, family and household matters. Women are three times more likely than men to be identified on the basis of their family status, as mothers, wives, daughters, etc. Hate speech, misogyny, and violence against women are often present in reality TV shows. The abuse of women and their bodies in advertising is still widely present. Cases of domestic violence are still presented with sensationalism, although progress has been achieved in portraying victims of domestic violence with respect and dignity. Issues related to gender equality and the position of vulnerable groups of women, such as Roma women or rural women, are not visible in the media. This research, as well as the warnings of women's CSOs and independent human rights bodies, indicates the necessity of implementing legal and policy measures in this field consistently and with due diligence. The aim of the paper is to contribute to eliminating gender stereotypes in media content and to advancing gender equality.
Keywords: discrimination against women, gender roles, gender stereotypes, media, misogyny, portraying women in the media, prejudices against women, Republic of Serbia
Procedia PDF Downloads 204
987 Prevention of Preterm Birth and Management of Uterine Contractions with Traditional Korean Medicine: Integrative Approach
Authors: Eun-Seop Kim, Eun-Ha Jang, Rana R. Kim, Sae-Byul Jang
Abstract:
Objective: Preterm labor is the most common antecedent of preterm birth (PTB) and is characterized by regular uterine contractions before 37 weeks of pregnancy together with cervical change. In acute preterm labor, tocolytics are administered as the first-line medication to suppress uterine contractions but rarely delay pregnancy to 37 weeks of gestation. On the other hand, according to Traditional Korean Medicine, PTB is caused by a deficiency of Qi and unnecessary energy in the body of the mother. The aim of this study was to demonstrate the benefit of Traditional Korean Medicine as an adjuvant therapy in the management of early uterine contractions and the prevention of PTB. Methods: This is a case report of a 38-year-old woman (0-0-6-0) hospitalized for irregular uterine contractions and cervical change at 33+3/7 weeks of gestation. Her past history includes chemical pregnancies achieved by artificial reproductive technology (ART), one stillbirth (at 7 weeks) and a laparoscopic surgery for endometriosis. After seven trials of IVF and artificial insemination, she had succeeded in conception via in-vitro fertilization (IVF) with the help of Traditional Korean Medicine (TKM) treatments. Due to irregular uterine contractions and cervical changes, two TKM preparations were prescribed, Gami-Dangguisan and Antae-eum, known to nourish blood and clear away heat. 120 ml of Gami-Dangguisan was given twice a day, morning and evening, along with the same amount of Antae-eum once a day, from 31 August 2013 to 28 November 2013. A tocolytic (ritodrine) was administered as first aid for maintenance of the pregnancy. Information regarding progress until the delivery was collected during the patient's visits. Results: On admission, a cervix of 15 mm in length and a cervical os dilated by 0.5 cm were observed via ultrasonography. 50% cervical effacement was also detected on physical examination. Tocolysis was temporarily maintained. As a supportive therapy, the TKM herbal preparations (Gami-Dangguisan and Antae-eum) were concomitantly given. At 34+2/7 weeks of gestation, however, intermittent uterine contractions appeared (every 5-12 min) on cardiotocography, and vaginal bleeding was also observed at 34+3/7 weeks. However, enhanced tocolysis and continuous administration of the herbal medicine sustained the pregnancy to term. At 37+2/7 weeks, no sign of labor was present and restored cervical length was confirmed. The woman gave birth at term to a healthy infant via vaginal delivery at 39+3/7 gestational weeks. Conclusions: This is the first successful case report of a preterm labor patient administered conventional tocolytic agents together with TKM herbal decoctions, delaying delivery to term. This case deserves attention considering it is rare to maintain gestation to term with tocolytic intervention alone. Our report implies the potential of herbal medicine as an adjuvant therapy for preterm labor treatment. Further studies are needed to assess the safety and efficacy of TKM herbal medicine as a therapeutic alternative for preventing preterm birth.
Keywords: preterm labor, traditional Korean medicine, herbal medicine, integrative treatment, complementary and alternative medicine
Procedia PDF Downloads 371
986 Artificial Neural Network Approach for GIS-Based Soil Macro-Nutrients Mapping
Authors: Shahrzad Zolfagharnassab, Abdul Rashid Mohamed Shariff, Siti Khairunniza Bejo
Abstract:
Conventional methods for soil nutrient mapping are based on laboratory tests of samples that are obtained from surveys. The time and cost involved in gathering and analyzing soil samples are the reasons that researchers use Predictive Soil Mapping (PSM). PSM can be defined as the development of a numerical or statistical model of the relationship among environmental variables and soil properties, which is then applied to a geographic database to create a predictive map. Kriging is a group of geostatistical techniques to spatially interpolate point values at an unobserved location from observations of values at nearby locations. The main problem with using kriging as an interpolator is that it is excessively data-dependent and requires a large number of closely spaced data points. Hence, there is a need to minimize the number of data points without sacrificing the accuracy of the results. In this paper, an Artificial Neural Network (ANN) scheme was used to predict macronutrient values at unsampled points. ANNs have become a popular tool for prediction as they eliminate certain difficulties in soil property prediction, such as non-linear relationships and non-normality. Back-propagation multilayer feed-forward network structures were used to predict nitrogen, phosphorus and potassium values in the soil of the study area. A limited number of samples were used in the training, validation and testing phases of the ANN (pattern recognition structures) to classify soil properties, and the trained network was used for prediction. The soil analysis results of samples collected from the soil survey of block C of Sawah Sempadan, Tanjung Karang rice irrigation project, Selangor, Malaysia, were used. Soil maps were produced by the kriging method using 236 samples (or values) that were a combination of actual values (obtained from real samples) and virtual values (neural network predicted values). For each macronutrient element, three types of maps were generated, with 118 actual and 118 virtual values, 59 actual and 177 virtual values, and 30 actual and 206 virtual values, respectively. To evaluate the performance of the proposed method, for each macronutrient element a base map using 236 actual samples and test maps using 118, 59 and 30 actual samples, respectively, were produced by the kriging method. A set of parameters was defined to measure the similarity of the maps generated with the proposed method, termed the sample reduction method. The results show that the maps generated through the sample reduction method were more accurate than the corresponding test maps produced from a smaller number of real samples. For example, nitrogen maps produced from 118, 59 and 30 real samples have 78%, 62% and 41% similarity, respectively, with the base map (236 samples), and the sample reduction method increased the similarity to 87%, 77% and 71%, respectively. Hence, this method can reduce the number of real samples and substitute ANN-predicted samples to achieve the specified level of accuracy.
Keywords: artificial neural network, kriging, macro nutrient, pattern recognition, precision farming, soil mapping
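The sketch below illustrates the sample-reduction idea described in this abstract: a small back-propagation feed-forward network is trained on the available soil samples and then used to generate "virtual" N, P and K values at unsampled locations, which can be pooled with the real samples before kriging. Using only easting/northing as inputs, and the network size, are assumptions; the abstract does not specify the exact network inputs or architecture.

```python
# Illustrative sketch of the sample-reduction idea: a back-propagation feed-forward
# network trained on sampled points is used to generate "virtual" N, P, K values at
# unsampled locations. Coordinates-only inputs, network size and the values below
# are assumptions for demonstration, not the study's actual data or configuration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Placeholder training data: 30 sampled points with coordinates and measured N, P, K.
xy_sampled = rng.uniform(0, 1000, size=(30, 2))                          # easting, northing (m)
npk_sampled = rng.uniform([0.1, 10, 50], [0.4, 60, 300], size=(30, 3))   # N (%), P, K (mg/kg)

model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(xy_sampled, npk_sampled)

# Predict "virtual" macronutrient values at unsampled locations to pool with the
# real samples before interpolation by kriging.
xy_unsampled = rng.uniform(0, 1000, size=(206, 2))
npk_virtual = model.predict(xy_unsampled)          # shape (206, 3): N, P, K estimates
print(npk_virtual[:3])
```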
Procedia PDF Downloads 70
985 Exploring Drivers and Barriers to Environmental Supply Chain Management in the Pharmaceutical Industry of Ghana
Authors: Gifty Kumadey, Albert Tchey Agbenyegah
Abstract:
(i) Overview and research goal(s): This study aims to address research gaps in the Ghanaian pharmaceutical industry by examining the impact of environmental supply chain management (ESCM) practices on environmental and operational performance. Previous studies have provided inconclusive evidence on the relationship between ESCM practices and environmental and operational performance. The research aims to provide a clearer understanding of the impact of ESCM practices on environmental and operational performance in the context of the Ghanaian pharmaceutical industry. Limited research has been conducted on ESCM practices in developing countries, particularly in Africa. The study aims to bridge this gap by examining the drivers and barriers specific to the pharmaceutical industry in Ghana. The research aims to analyze the impact of ESCM practices on the achievement of the Sustainable Development Goals (SDGs) in the Ghanaian pharmaceutical industry, focusing on SDGs 3, 12, 13, and 17. It also explores the potential for partnerships and collaborations to advance ESCM practices in the pharmaceutical industry. The research hypotheses suggest that pressure from stakeholders positively influences the adoption of ESCM practices in the Ghanaian pharmaceutical industry. By addressing these goals, the study aims to contribute to sustainable development initiatives and offer practical recommendations to enhance ESCM practices in the industry. (ii) Research methods and data: This study uses a quantitative research design to examine the drivers and barriers to environmental supply chain management in the pharmaceutical industry in Accra. The sample size is approximately 150 employees, comprising senior and middle-level managers from the pharmaceutical industry of Ghana. A purposive sampling technique is used to select participants with relevant knowledge and experience in environmental supply chain management. Data will be collected using a structured questionnaire with Likert-scale responses. Descriptive statistics will be used to analyze the data and provide insights into current practices and their impact on environmental and operational performance. (iii) Preliminary results and conclusions: Main contributions: identifying drivers/barriers to ESCM in Ghana's pharmaceutical industry, evaluating current ESCM practices, examining the impact on performance, providing practical insights, and contributing to knowledge on ESCM in the Ghanaian context. The research contributes to SDGs 3, 9, and 12 by promoting sustainable practices and responsible consumption in the industry. The study found that government rules and regulations are the most critical drivers for ESCM adoption, with senior managers playing a significant role. However, employee and competitor pressures have a lesser impact. The industry has made progress in implementing certain ESCM practices, but there is room for improvement in areas like green distribution and reverse logistics. The study emphasizes the importance of government support, management engagement, and comprehensive implementation of ESCM practices in the industry. Future research should focus on overcoming barriers and challenges to effective ESCM implementation.
Keywords: environmental supply chain, sustainable development goal, ghana pharmaceutical industry, government regulations
Procedia PDF Downloads 94
984 Evaluation of Trabectedin Safety and Effectiveness at a Tertiary Cancer Center at Qatar: A Retrospective Analysis
Authors: Nabil Omar, Farah Jibril, Oraib Amjad
Abstract:
Purpose: Trabectedin is a potent marine-derived antineoplastic drug which binds to the minor groove of DNA, bending the DNA towards the major groove and resulting in a changed conformation that interferes with several DNA transcription factors, repair pathways and cell proliferation. Trabectedin was approved by the European Medicines Agency (EMA; London, UK) for the treatment of adult patients with advanced-stage soft tissue sarcomas in whom treatment with anthracyclines and ifosfamide has failed, or who are not candidates for these therapies. The recommended dosing regimen is 1.5 mg/m2 IV over 24 hours every 3 weeks. The purpose of this study was to comprehensively review available data on the safety and efficacy of trabectedin used as indicated for patients at a tertiary cancer center in Qatar. Methods: A medication administration report generated in the electronic health record identified all patients who received trabectedin between November 1, 2015 and November 1, 2017. This retrospective chart review evaluated the indication for trabectedin use, compliance with the administration protocol and the recommended monitoring parameters, the number of patients who improved on the drug and continued treatment, the number of patients who discontinued treatment due to side effects, and the reported side effects. Progress and discharge notes were used to record side effects experienced during trabectedin therapy. A total of 3 patients were reviewed. Results: Two of the 3 patients who received trabectedin were receiving it for non-FDA- and non-EMA-approved indications: metastatic rhabdomyosarcoma and stage IV ovarian cancer with poor prognosis. Only one patient received it as indicated, for leiomyosarcoma of the left ureter with metastases to the liver, lungs and bone. None of the patients continued the therapy, due to the development of serious side effects. One patient stopped the medication after one cycle due to disease progression and transient hepatic toxicity; another had disease progression and developed a 12% reduction in LVEF after 12 cycles of trabectedin; and the third patient, who is deceased, had disease progression on trabectedin after the 10th cycle, which was administered through a peripheral line, resulting in extravasation and left-arm cellulitis requiring debridement. Regarding monitoring parameters, the three patients had an ECHO and creatine phosphokinase (CPK) measured at baseline, but these were not monitored during treatment as recommended. Conclusion: Utilizing this medication as indicated, with the appropriate monitoring parameters performed as recommended, can benefit the patients who receive it. It is important to reinforce intravenous administration via a central intravenous line, the reassessment of left ventricular ejection fraction (LVEF) by echocardiogram or multigated acquisition (MUGA) scan at 2- to 3-month intervals until therapy is discontinued, and the measurement of CPK and LFT levels prior to each administration of trabectedin.
Keywords: trabectedin, drug-use evaluation, safety, effectiveness, adverse drug reaction, monitoring
Procedia PDF Downloads 143
983 Multicomponent Positive Psychology Intervention for Health Promotion of Retirees: A Feasibility Study
Authors: Helen Durgante, Mariana F. Sparremberger, Flavia C. Bernardes, Debora D. DellAglio
Abstract:
Health promotion programmes for retirees, based on Positive Psychology perspectives for the development of strengths and virtues, demand broadened empirical investigation in Brazil. In the case of evidence-based applied research, it is suggested that feasibility studies be conducted prior to efficacy trials of an intervention, in order to identify and rectify possible faults in its design and implementation. The aim of this study was to evaluate the feasibility of a multicomponent Positive Psychology programme for health promotion of retirees, based on Cognitive Behavioural Therapy and Positive Psychology perspectives. The programme structure included six weekly group sessions (two hours each) encompassing strengths such as Values and self-care, Optimism, Empathy, Gratitude, Forgiveness, and Meaning of life and work. The feasibility criteria evaluated were: Demand, Acceptability, Satisfaction with the programme and with the moderator, Comprehension/Generalization of contents, Evaluation of the moderator (Social Skills and Integrity/Fidelity), Adherence, and programme implementation. Overall, 11 retirees (F = 11), age range 54-75, from the metropolitan region of Porto Alegre-RS-Brazil took part in the study. The instruments used were: a Qualitative Admission Questionnaire; the Moderator Field Diary; the Programme Evaluation Form to assess participants' satisfaction with the programme and with the moderator (a six-item 4-point Likert scale) and Comprehension/Generalization of contents (a three-item 4-point Likert scale); and the Observers' Evaluation Form to assess the moderator's Social Skills (a five-item 4-point Likert scale), Integrity/Fidelity (a 10-item 4-point Likert scale), and participants' Adherence (a nine-item 5-point Likert scale). Qualitative data were analyzed using content analysis. Descriptive statistics as well as intraclass correlation coefficients were used for quantitative data and inter-rater reliability analysis. The results revealed high demand (N = 55 interested people) and acceptability (n = 10 concluded the programme, with an overall 88.3% frequency rate), satisfaction with the programme and with the moderator (X = 3.76, SD = .34), and participants' self-reported Comprehension/Generalization of the contents provided in the programme (X = 2.82, SD = .51). In terms of the moderator's Social Skills (X = 3.93; SD = .40; ICC = .752 [CI = .429-.919]), Integrity/Fidelity (X = 3.93; SD = .31; ICC = .936 [CI = .854-.981]), and participants' Adherence (X = 4.90; SD = .29; ICC = .906 [CI = .783-.969]), evaluated by two independent observers present in each session of the programme, descriptive and intraclass correlation results were considered adequate. Structural changes were introduced in the intervention design and implementation methods, and items were removed from questionnaires and evaluation forms. The obtained results were satisfactory, allowing changes to be made for further efficacy trials of the programme. Results are discussed taking cultural and contextual demands in Brazil into account.
Keywords: feasibility study, health promotion, positive psychology intervention, programme evaluation, retirees
Procedia PDF Downloads 195
982 Multicenter Evaluation of the ACCESS HBsAg and ACCESS HBsAg Confirmatory Assays on the DxI 9000 ACCESS Immunoassay Analyzer, for the Detection of Hepatitis B Surface Antigen
Authors: Vanessa Roulet, Marc Turini, Juliane Hey, Stéphanie Bord-Romeu, Emilie Bonzom, Mahmoud Badawi, Mohammed-Amine Chakir, Valérie Simon, Vanessa Viotti, Jérémie Gautier, Françoise Le Boulaire, Catherine Coignard, Claire Vincent, Sandrine Greaume, Isabelle Voisin
Abstract:
Background: Beckman Coulter, Inc. has recently developed fully automated assays for the detection of HBsAg on a new immunoassay platform. The objective of this European multicenter study was to evaluate the performance of the ACCESS HBsAg and ACCESS HBsAg Confirmatory assays† on the recently CE-marked DxI 9000 ACCESS Immunoassay Analyzer. Methods: The clinical specificity of the ACCESS HBsAg and HBsAg Confirmatory assays was determined using HBsAg-negative samples from blood donors and hospitalized patients. The clinical sensitivity was determined using presumed HBsAg-positive samples. Sample HBsAg status was determined using a CE-marked HBsAg assay (Abbott ARCHITECT HBsAg Qualitative II, Roche Elecsys HBsAg II, or Abbott PRISM HBsAg assay) and a CE-marked HBsAg confirmatory assay (Abbott ARCHITECT HBsAg Qualitative II Confirmatory or Abbott PRISM HBsAg Confirmatory assay) according to manufacturer package inserts and pre-determined testing algorithms. The false initial reactive rate was determined on fresh hospitalized patient samples. The sensitivity for the early detection of HBV infection was assessed internally on thirty (30) seroconversion panels. Results: Clinical specificity was 99.95% (95% CI, 99.86 – 99.99%) on 6047 blood donors and 99.71% (95% CI, 99.15 – 99.94%) on 1023 hospitalized patient samples. A total of six (6) samples were found false positive with the ACCESS HBsAg assay. None were confirmed for the presence of HBsAg with the ACCESS HBsAg Confirmatory assay. Clinical sensitivity on 455 HBsAg-positive samples was 100.00% (95% CI, 99.19 – 100.00%) for the ACCESS HBsAg assay alone and for the ACCESS HBsAg Confirmatory assay. The false initial reactive rate on 821 fresh hospitalized patient samples was 0.24% (95% CI, 0.03 – 0.87%). Results obtained on 30 seroconversion panels demonstrated that the ACCESS HBsAg assay had sensitivity performance equivalent to that of the Abbott ARCHITECT HBsAg Qualitative II assay, with an average difference of 0.13 bleeds from the first reactive bleed. All bleeds found reactive in the ACCESS HBsAg assay were confirmed in the ACCESS HBsAg Confirmatory assay. Conclusion: The newly developed ACCESS HBsAg and ACCESS HBsAg Confirmatory assays from Beckman Coulter have demonstrated high clinical sensitivity and specificity, equivalent to currently marketed HBsAg assays, as well as a low false initial reactive rate. †Pending achievement of CE compliance; not yet available for in vitro diagnostic use. 2023-11317 Beckman Coulter and the Beckman Coulter product and service marks mentioned herein are trademarks or registered trademarks of Beckman Coulter, Inc. in the United States and other countries. All other trademarks are the property of their respective owners. Keywords: dxi 9000 access immunoassay analyzer, hbsag, hbv, hepatitis b surface antigen, hepatitis b virus, immunoassay
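Specificity and sensitivity figures with exact 95% confidence intervals, as reported in this abstract, are typically obtained from a Clopper-Pearson calculation on the count of correctly classified samples. The sketch below illustrates that calculation with hypothetical counts (e.g., 6044 of 6047 blood-donor samples non-reactive); it is not the study's analysis code.

```python
# Minimal sketch: Clopper-Pearson exact 95% CI for clinical specificity,
# using hypothetical counts for illustration.
from scipy.stats import beta

def exact_ci(successes: int, n: int, alpha: float = 0.05):
    """Two-sided Clopper-Pearson interval for a binomial proportion."""
    lower = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lower, upper

n_donors = 6047          # cohort size (from the abstract)
true_negatives = 6044    # hypothetical count of correctly non-reactive samples
spec = true_negatives / n_donors
lo, hi = exact_ci(true_negatives, n_donors)
print(f"specificity = {spec:.2%}, 95% CI {lo:.2%} - {hi:.2%}")
```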
Procedia PDF Downloads 90
981 Temporal Estimation of Hydrodynamic Parameter Variability in Constructed Wetlands
Authors: Mohammad Moezzibadi, Isabelle Charpentier, Adrien Wanko, Robert Mosé
Abstract:
The calibration of hydrodynamic parameters for subsurface constructed wetlands (CWs) is a sensitive process since highly non-linear equations are involved in unsaturated flow modeling. CW systems are engineered systems designed to favour natural treatment processes involving wetland vegetation, soil, and their microbial flora. Their significant efficiency at reducing the ecological impact of urban runoff has recently been demonstrated in the field. Numerical flow modeling in a vertical variably saturated CW is here carried out by implementing the Richards model by means of a mixed hybrid finite element method (MHFEM), particularly well adapted to the simulation of heterogeneous media, and the van Genuchten-Mualem parametrization. For validation purposes, MHFEM results were compared to those of HYDRUS (software based on a finite element discretization). As van Genuchten-Mualem soil hydrodynamic parameters depend on water content, their estimation has been the subject of considerable experimental and numerical study. In particular, the sensitivity analysis performed with respect to the van Genuchten-Mualem parameters reveals a predominant influence of the shape parameters α and n and the saturated conductivity of the filter on the piezometric heads during saturation and desaturation. Modeling issues arise when the soil reaches oven-dry conditions. Particular attention should also be paid to boundary condition modeling (surface ponding or evaporation) to be able to tackle different sequences of rainfall-runoff events. For proper parameter identification, large field datasets would be needed. As these are usually not available, notably due to the randomness of the storm events, we thus propose a simple, robust and low-cost numerical method for the inverse modeling of the soil hydrodynamic properties. Among such methods, the variational data assimilation technique introduced by Le Dimet and Talagrand is applied. To that end, it is implemented by applying automatic differentiation (AD) to augment computer codes with derivative computations. Note that very little effort is needed to obtain the differentiated code using the on-line Tapenade AD engine. Field data are collected for a three-layered CW located in Strasbourg (Alsace, France) at the water edge of the urban water stream Ostwaldergraben, during several months. Identification experiments are conducted by comparing measured and computed piezometric heads by means of a least-squares objective function. The temporal variability of the hydrodynamic parameters is then assessed and analyzed. Keywords: automatic differentiation, constructed wetland, inverse method, mixed hybrid FEM, sensitivity analysis
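Since the calibration described in this abstract hinges on the van Genuchten-Mualem parametrization, a short sketch of the retention and hydraulic conductivity functions may help readers unfamiliar with it. The parameter values below are illustrative placeholders, not the fitted values for the Ostwaldergraben filter.

```python
# Minimal sketch of the van Genuchten-Mualem functions used in Richards-equation
# modeling; parameter values are illustrative only.
import numpy as np

def vg_theta(h, theta_r, theta_s, alpha, n):
    """Water content theta(h) for pressure head h (h < 0 in the unsaturated zone)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + np.abs(alpha * h) ** n) ** (-m)      # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def vgm_K(h, Ks, alpha, n, l=0.5):
    """Unsaturated hydraulic conductivity K(h) (Mualem model, tortuosity l)."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + np.abs(alpha * h) ** n) ** (-m)
    return Ks * Se ** l * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

h = np.linspace(-5.0, -0.01, 5)                      # pressure heads [m]
print(vg_theta(h, theta_r=0.05, theta_s=0.40, alpha=3.0, n=2.0))
print(vgm_K(h, Ks=1e-4, alpha=3.0, n=2.0))
```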
Procedia PDF Downloads 164
980 2,7-diazaindole as a Potential Photophysical Probe for Excited State Deactivation Processes
Authors: Simran Baweja, Bhavika Kalal, Surajit Maity
Abstract:
Photoinduced tautomerization reactions have been the centre of attention in the scientific community over the past several decades because of their significance in various biological systems. 7-azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. To the best of our knowledge, extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of the above molecule, such as 2,7- and 2,6-diazaindole, are proposed to have even better photophysical properties due to the presence of an aza group at the 2-position. However, there are a few studies in the solution phase which suggest the relevance of these molecules, but there are no experimental studies reported in the gas phase yet. In our current investigation, we present the first gas-phase spectroscopic data of 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). In this work, we have employed state-of-the-art laser spectroscopic methods such as laser-induced fluorescence excitation (LIF), dispersed fluorescence (DF), resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of the bare 2,7-DAI is found to be positioned at 33910 cm-1, whereas the origin band corresponding to the S1 ← S0 transition of 2,7-DAI-H2O is positioned at 33074 cm-1. The red-shifted transition in the case of the solvent cluster suggests the enhanced feasibility of excited state hydrogen/proton transfer. The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, which is significantly higher than that previously reported for 7AI (8.11 eV), making it a comparatively complex molecule to study. The ionization potential is reduced by 0.14 eV in the case of the 2,7-DAI-H2O cluster (8.78 eV) compared to that of 2,7-DAI. Moreover, on comparison with the available literature values for 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red shifted by -729 and -280 cm-1, respectively. The ground and excited state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared spectra (FDIR) and resonant ion-dip infrared spectroscopy (IDIR) and were obtained at 3523 and 3467 cm-1, respectively. The lower value of νNH in the electronic excited state of 2,7-DAI implies the higher acidity of the group compared to the ground state. Moreover, we have carried out extensive computational analysis, which suggests that the energy barrier in the excited state decreases significantly as the number of catalytic solvent molecules (S = H2O, NH3) and the polarity of the solvent molecules increase. We found that the ammonia molecule is a better candidate for hydrogen transfer than water because of its higher gas-phase basicity. Further studies are underway to understand the excited state dynamics and photochemistry of such N-rich chromophores. Keywords: photoinduced tautomerization reactions, gas phase spectroscopy, IR-UV double resonance spectroscopy, resonant two-photon ionization time of flight mass spectrometry (2C-R2PI)
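As a quick cross-check on the magnitudes quoted in this abstract, origin-band positions in cm⁻¹ can be converted to electronvolts with E = hcν̃ (1 cm⁻¹ ≈ 1.2398 × 10⁻⁴ eV). The short sketch below performs this conversion and the shift arithmetic for the reported band positions; it is purely illustrative.

```python
# Minimal sketch: convert the reported S1 <- S0 origin bands to eV and
# compute the cluster-induced shift.
CM1_TO_EV = 1.239841984e-4      # 1 cm^-1 expressed in eV (hc)

origin_bare = 33910.0           # 2,7-DAI origin band [cm^-1], from the abstract
origin_water = 33074.0          # 2,7-DAI-H2O origin band [cm^-1]

shift = origin_water - origin_bare
print(f"bare origin:    {origin_bare * CM1_TO_EV:.3f} eV")
print(f"cluster origin: {origin_water * CM1_TO_EV:.3f} eV")
print(f"shift on hydration: {shift:.0f} cm^-1 (a red shift)")
```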
Procedia PDF Downloads 86
979 Human Dental Pulp Stem Cells Attenuate Streptozotocin-Induced Parotid Gland Injury in Rats
Authors: Gehan ElAkabawy
Abstract:
Background: Diabetes mellitus causes severe deterioration of almost all the organs and systems of the body, as well as significant damage to the oral cavity. The oral changes are mainly related to salivary gland dysfunction characterized by hyposalivation and xerostomia, which significantly reduce diabetic patients’ quality of life. Human dental pulp stem cells represent a promising source for cell-based therapies, owing to their easy, minimally invasive surgical access and high proliferative capacity. It was reported that the trophic support mediated by dental pulp stem cells can rescue the functional and structural alterations of damaged salivary glands. However, potential differentiation and paracrine effects of human dental pulp stem cells in diabetes-induced parotid gland damage have not been previously investigated. Our study aimed to investigate the therapeutic effects of intravenous transplantation of human dental pulp stem cells (hDPSCs) on parotid gland injury in a rat model of streptozotocin (STZ)-induced type 1 diabetes. Methods: Thirty Sprague-Dawley male rats were randomly categorised into three groups: control, diabetic (STZ), and transplanted (STZ+hDPSCs). hDPSCs or vehicle was injected into the tail vein 7 days after STZ injection. The fasting blood glucose levels were monitored weekly. A glucose tolerance test was performed, and the parotid gland weight, salivary flow rate, oxidative stress indices, parotid gland histology, and caspase-3, vascular endothelial growth factor (VEGF), and proliferating cell nuclear antigen (PCNA) expression in parotid tissues were assessed 28 days post-transplantation. Results: Transplantation of hDPSCs downregulated blood glucose, improved the salivary flow rate, and reduced oxidative stress. The cells migrated to, survived, and differentiated into acinar, ductal, and myoepithelial cells in the STZ-injured parotid gland. Moreover, they downregulated the expression of caspase-3 and upregulated the expression of VEGF and PCNA, likely exerting pro-angiogenic and antiapoptotic effects and promoting endogenous regeneration. In addition, the transplanted cells enhanced the parotid nitric oxide (NO)-tetrahydrobiopterin (BH4) pathway. Conclusions: Our results show that hDPSCs can migrate to and survive within the STZ-injured parotid gland, where they prevent its functional and morphological damage by restoring normal glucose levels, differentiating into parotid cell populations, and stimulating paracrine-mediated regeneration. Thus, hDPSCs may have therapeutic potential in the treatment of diabetes-induced parotid gland injury. Keywords: dental pulp stem cells, diabetes, streptozotocin, parotid gland
Procedia PDF Downloads 196
978 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection
Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa
Abstract:
Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. The support vector machine (SVM) is one of the most widely used kernel-based statistical learning algorithms. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. Often, the data are not linearly separable. SVMs map the data into a higher-dimensional space in which they become linearly separable, while the kernel trick allows all the computations to be performed in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, the SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied for clustering the point cloud. Secondly, the resulting clusters are incorporated in the SVM classifier. The radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored. It consists of finding the parameters (C and γ) leading to the best overall accuracy using grid search and 5-fold cross-validation. The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area located in residential parts of the city of Vaihingen in southern Germany. The ground class and three other classes belonging to roof superstructures are considered, i.e., a total of 4 classes. The training and test sets are selected randomly several times. The obtained results demonstrated that parameter selection can orient the search to a restricted interval of (C, γ) that can be further explored, but does not systematically lead to the optimal rates. The SVM classifier with selected hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier using parameter selection for LiDAR data compared to the other classifiers. Keywords: classification, airborne LiDAR, parameters selection, support vector machine
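The parameter-selection step described in this abstract, a grid search over C and γ with 5-fold cross-validation for an RBF-kernel SVM, is commonly implemented along the following lines. The feature matrix and grid values here are synthetic placeholders, not the actual ALS features or search ranges used in the study.

```python
# Minimal sketch: RBF-SVM classification with (C, gamma) selected by grid
# search and 5-fold cross-validation; data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))            # per-cluster features (placeholder)
y = rng.integers(0, 4, size=1000)         # 4 classes: ground + 3 roof classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

param_grid = {"C": [1, 10, 100, 1000], "gamma": [1e-3, 1e-2, 1e-1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)

print("best (C, gamma):", search.best_params_)
print("overall accuracy on the test set:", search.score(X_test, y_test))
```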
Procedia PDF Downloads 147
977 Enhancing Students' Utilization of Written Corrective Feedback through Teacher-Student Writing Conferences: A Case Study in English Writing Instruction
Authors: Tsao Jui-Jung
Abstract:
Previous research findings have shown that most students do not fully utilize the written corrective feedback provided by teachers (Stone, 2014). This common phenomenon renders teachers' written corrective feedback largely ineffective. As Ellis (2010) points out, the effectiveness of written corrective feedback depends on the level of student engagement with it. Therefore, it is crucial to understand how students utilize the written corrective feedback from their teachers. Previous studies have confirmed the positive impact of teacher-student writing conferences on students' engagement in the writing process and their writing abilities (Hum, 2021; Nosratinia & Nikpanjeh, 2019; Wong, 1996; Yeh, 2016, 2019). However, due to practical constraints such as time limitations, this instructional activity is not fully utilized in writing classrooms (Alfalagg, 2020). Therefore, to address this research gap, the purpose of this study was to explore several aspects of teacher-student writing conferences, including the frequency of meaning negotiation (i.e., comprehension checks, confirmation checks, and clarification checks) and teacher scaffolding techniques (i.e., feedback, prompts, guidance, explanations, and demonstrations) in teacher-student writing conferences, examining students’ self-assessment of their writing strengths and weaknesses in post-conference journals and their experiences with teacher-student writing conferences (i.e., interaction styles, communication levels, how teachers addressed errors, and overall perspectives on the conferences), and gathering insights from their responses to open-ended questions in the final stage of the study (i.e., their preferences and reasons for different written corrective feedback techniques used by teachers and their perspectives and suggestions on teacher-student writing conferences). Data collection methods included transcripts of audio recordings of teacher-student writing conferences, students’ post-conference journals, and open-ended questionnaires. The participants of this study were sophomore students enrolled in an English writing course for a duration of one school year. Key research findings are as follows: Firstly, in terms of meaning negotiation, students attempted to clearly understand the corrective feedback provided by the teacher-researcher twice as often as the teacher-researcher attempted to clearly understand the students' writing content. Secondly, the most commonly used scaffolding technique in the conferences was prompting (indirect feedback). Thirdly, the majority of participants believed that teacher-student writing conferences had a positive impact on their writing abilities. Fourthly, most students preferred direct feedback from the teacher-researcher, as it directly pointed out their errors and saved them time in revision. However, some students still preferred indirect feedback, as they believed it encouraged them to think and self-correct. Based on the research findings, this study proposes effective teaching recommendations for English writing instruction aimed at optimizing teaching strategies and enhancing students' writing abilities. Keywords: written corrective feedback, student engagement, teacher-student writing conferences, action research
Procedia PDF Downloads 77
976 Validation of Mapping Historical Linked Data to International Committee for Documentation (CIDOC) Conceptual Reference Model Using Shapes Constraint Language
Authors: Ghazal Faraj, András Micsik
Abstract:
Shapes Constraint Language (SHACL), a World Wide Web Consortium (W3C) language, expresses well-defined constraints as RDF graphs, named "shape graphs". These shape graphs validate other Resource Description Framework (RDF) graphs, which are called "data graphs". The structural features of SHACL permit generating a variety of conditions to evaluate string matching patterns, value types, and other constraints. Moreover, the framework of SHACL supports high-level validation by expressing more complex conditions in languages such as the SPARQL Protocol and RDF Query Language (SPARQL). SHACL includes two parts: SHACL Core and SHACL-SPARQL. SHACL Core includes all shapes that cover the most frequent constraint components, while SHACL-SPARQL is an extension that allows SHACL to express more complex customized constraints. Validating the efficacy of dataset mapping is an essential component of data reconciliation mechanisms, as linking different datasets is an ongoing process. The conventional validation methods are the semantic reasoner and SPARQL queries. The former checks formalization errors and data type inconsistencies, while the latter detects data contradictions. After executing SPARQL queries, the retrieved information needs to be checked manually by an expert. However, this methodology is time-consuming and inaccurate, as it does not test the mapping model comprehensively. Therefore, there is a serious need for a new methodology that covers all validation aspects of linking and mapping diverse datasets. Our goal is to develop a new approach to achieve optimal validation outcomes. The first step towards this goal is implementing SHACL to validate the mapping between the International Committee for Documentation (CIDOC) Conceptual Reference Model (CRM) and one of its ontologies. To initiate this project successfully, a thorough understanding of both source and target ontologies was required. Subsequently, the proper environment to run SHACL and its shape graphs was determined. As a case study, we performed SHACL validation over a CIDOC-CRM dataset after running a Pellet reasoner via the Protégé program. The applied validation falls under multiple categories: a) data type validation, which checks whether the source data is mapped to the correct data type, for instance, whether a birthdate is assigned to xsd:dateTime and linked to the Person entity via the crm:P82a_begin_of_the_begin property; b) data integrity validation, which detects inconsistent data, for instance, whether a person's birthdate occurred before any of the linked event creation dates. The expected results of our work are: 1) highlighting validation techniques and categories, and 2) selecting the most suitable techniques for those various categories of validation tasks. The next step is to establish a comprehensive validation model and generate SHACL shapes automatically. Keywords: SHACL, CIDOC-CRM, SPARQL, validation of ontology mapping
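To make the data-type validation category in this abstract concrete, the sketch below encodes a single SHACL property shape, requiring that the value of crm:P82a_begin_of_the_begin be an xsd:dateTime, and runs it with the pySHACL library. The namespaces, instance data, and library usage are illustrative assumptions rather than the project's actual shapes or code.

```python
# Minimal sketch (assumed pySHACL usage): validate that a birthdate mapped via
# crm:P82a_begin_of_the_begin is typed as xsd:dateTime.
from rdflib import Graph
from pyshacl import validate

shapes_ttl = """
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:BirthDateShape a sh:NodeShape ;
    sh:targetSubjectsOf crm:P82a_begin_of_the_begin ;
    sh:property [
        sh:path crm:P82a_begin_of_the_begin ;
        sh:datatype xsd:dateTime ;
        sh:maxCount 1 ;
    ] .
"""

data_ttl = """
@prefix crm: <http://www.cidoc-crm.org/cidoc-crm/> .
@prefix ex:  <http://example.org/> .

ex:birthOfPerson1 crm:P82a_begin_of_the_begin "1843-05-17" .   # plain string -> violation
"""

conforms, _, report = validate(Graph().parse(data=data_ttl, format="turtle"),
                               shacl_graph=Graph().parse(data=shapes_ttl, format="turtle"))
print(conforms)   # False: the literal is not an xsd:dateTime
print(report)
```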
Procedia PDF Downloads 253
975 Modification of a Commercial Ultrafiltration Membrane by Electrospray Deposition for Performance Adjustment
Authors: Elizaveta Korzhova, Sebastien Deon, Patrick Fievet, Dmitry Lopatin, Oleg Baranov
Abstract:
Filtration with nanoporous ultrafiltration membranes is an attractive option to remove ionic pollutants from contaminated effluents. Unfortunately, commercial membranes are not necessarily suitable for specific applications, and their modification by polymer deposition is a fruitful way to adapt their performance accordingly. Many methods are usually used for surface modification, but a novel technique based on electrospray is proposed here. Various quantities of polymers were deposited on a commercial membrane, and the impact of the deposit on filtration performance is investigated and discussed in terms of charge and hydrophobicity. Electrospray deposition is a technique that has not been used for membrane modification up to now. It consists of spraying small drops of polymer solution under a high voltage applied between the needle containing the solution and the metallic support on which the membrane is stuck. The advantage of this process lies in the small quantities of polymer that can be coated on the membrane surface compared with the immersion technique. In this study, various quantities (from 2 to 40 μL/cm²) of solutions containing two charged polymers (13 mmol/L of monomer unit), namely polyethyleneimine (PEI) and polystyrene sulfonate (PSS), were sprayed on a negatively charged polyethersulfone membrane (PLEIADE, Orelis Environment). The efficacy of the polymer deposition was then investigated by estimating ion rejection, permeation flux, zeta-potential and contact angle before and after the polymer deposition. Firstly, contact angle (θ) measurements show that the surface hydrophilicity is notably improved by coating with both PEI and PSS. Moreover, it was highlighted that the contact angle decreases monotonically with the amount of sprayed solution. Additionally, the hydrophilicity enhancement was proved to be better with PSS (from 62 to 35°) than with PEI (from 62 to 53°). Values of zeta-potential (ζ) were estimated by measuring the streaming current generated by a pressure difference on both sides of a channel made by clamping two membranes. The ζ-values demonstrate that the deposits of PSS (negative at pH=5.5) allow an increase of the negative membrane charge, whereas the deposits of PEI (positive) lead to a positive surface charge. Zeta-potential measurements also emphasize that the sprayed quantity has little impact on the membrane charge, except for very low quantities (2 μL/cm²). The cross-flow filtration of salt solutions containing mono- and divalent ions demonstrates that polymer deposition allows a strong enhancement of ion rejection. For instance, it is shown that the rejection of a salt containing a divalent cation can be increased from 1 to 20% and even to 35% by depositing 2 and 4 μL/cm² of PEI solution, respectively. This observation is consistent with the reversal of the membrane charge induced by PEI deposition. Similarly, the increase of negative charge induced by PSS deposition leads to an increase of NaCl rejection from 5 to 45% due to electrostatic repulsion of the Cl- ion by the negative surface charge. Finally, a notable fall in the permeation flux due to the polymer layer coated on the surface was observed, and the best polymer concentration in the sprayed solution remains to be determined to optimize performance. Keywords: ultrafiltration, electrospray deposition, ion rejection, permeation flux, zeta-potential, hydrophobicity
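As a rough illustration of the quantities involved in this abstract, the sketch below estimates the amount of monomer units deposited per cm² from the sprayed volume and solution concentration, and computes the observed rejection R = 1 − Cp/Cf. The permeate and feed concentrations are hypothetical and are not taken from the study.

```python
# Minimal sketch: monomer amount deposited per unit area and observed ion
# rejection; the permeate/feed concentrations are hypothetical.
sprayed_volume_uL_per_cm2 = 4.0          # within the 2-40 uL/cm^2 range used
monomer_conc_mmol_per_L = 13.0           # 13 mmol/L of monomer unit

# 1 uL = 1e-6 L and 1 mmol = 1e6 nmol, so nmol of monomer deposited per cm^2:
deposit_nmol_per_cm2 = sprayed_volume_uL_per_cm2 * 1e-6 * monomer_conc_mmol_per_L * 1e6
print(f"deposited monomer: {deposit_nmol_per_cm2:.1f} nmol/cm^2")

def observed_rejection(c_permeate, c_feed):
    """Observed rejection R = 1 - Cp/Cf."""
    return 1.0 - c_permeate / c_feed

print(f"R = {observed_rejection(c_permeate=5.5, c_feed=10.0):.0%}")   # hypothetical NaCl data
```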
Procedia PDF Downloads 187
974 Development of a One Health and Comparative Medicine Curriculum for Medical Students
Authors: Aliya Moreira, Blake Duffy, Sam Kosinski, Kate Heckman, Erika Steensma
Abstract:
Introduction: The One Health initiative promotes recognition of the interrelatedness between people, animals, plants, and their shared environment. The field of comparative medicine studies the similarities and differences between humans and animals for the purpose of advancing medical sciences. Currently, medical school education is narrowly focused on human anatomy and physiology, but as the COVID-19 pandemic has demonstrated, a holistic understanding of health requires comprehension of the interconnection between health and the lived environment. To prepare future physicians for unique challenges from emerging zoonoses to climate change, medical students can benefit from exposure to and experience with One Health and Comparative Medicine content. Methods: In January 2020, an elective course for medical students on One Health and Comparative Medicine was created to provide medical students with the background knowledge necessary to understand the applicability of animal and environmental health in medical research and practice. The 2-week course was continued in January 2021, with didactic and experiential activities taking place virtually due to the COVID-19 pandemic. In response to student feedback, lectures were added to expand instructional content on zoonotic and wildlife diseases for the second iteration of the course. Other didactic sessions included interprofessional lectures from 20 physicians, veterinarians, public health professionals, and basic science researchers. The first two cohorts of students were surveyed regarding One Health and Comparative Medicine concepts at the beginning and conclusion of the course. Results: 16 medical students have completed the comparative medicine course thus far, with 87.5% (n=14) completing pre- and post-course evaluations. 100% of student respondents indicated little to no exposure to comparative medicine or One Health concepts during medical school. Following the course, 100% of students felt familiar or very familiar with comparative medicine and One Health concepts. To assess course efficacy, questions were evaluated on a five-point Likert scale. 100% agreed or strongly agreed that learning Comparative Medicine and One Health topics augmented their medical education. 100% agreed or strongly agreed that a course covering this content should be regularly offered to medical students. Conclusions: Data from the student evaluation surveys demonstrate that the Comparative Medicine course was successful in increasing medical student knowledge of Comparative Medicine and One Health. Results also suggest that interprofessional training in One Health and Comparative Medicine is applicable and useful for medical trainees. Future iterations of this course could capitalize on the inherently interdisciplinary nature of these topics by enrolling students from veterinary and public health schools into a longitudinal course. Such recruitment may increase the course’s value by offering multidisciplinary student teams the opportunity to conduct research projects, thereby both strengthening the individual learning experience and sparking future interprofessional research ventures. Overall, these efforts to educate medical students in One Health topics should be reproducible at other institutions, preparing more future physicians for the diverse challenges they will encounter in practice. Keywords: medical education, interprofessional instruction, one health, comparative medicine
Procedia PDF Downloads 108
973 The Impact of Gestational Weight Gain on Subclinical Atherosclerosis, Placental Circulation and Neonatal Complications
Authors: Marina Shargorodsky
Abstract:
Aim: Gestational weight gain (GWG) has been related to altered future weight-gain curves and an increased risk of obesity later in life. Obesity may contribute to vascular atherosclerotic changes as well as the excess cardiovascular morbidity and mortality observed in these patients. Noninvasive arterial testing, such as ultrasonographic measurement of carotid intima-media thickness (IMT), is considered a surrogate for systemic atherosclerotic disease burden and is predictive of cardiovascular events in asymptomatic individuals as well as recurrent events in patients with known cardiovascular disease. Currently, there is no consistent evidence regarding the vascular impact of excessive GWG. The present study was designed to investigate the impact of GWG on early atherosclerotic changes during late pregnancy, using IMT, as well as on placental vascular circulation, inflammatory lesions, and pregnancy outcomes. Methods: The study group consisted of 59 pregnant women who gave birth and underwent a placental histopathological examination at the Department of Obstetrics and Gynecology, Edith Wolfson Medical Center, Israel, in 2019. According to the IOM guidelines, the study group was divided into two groups: Group 1 included 32 women with pregnancy weight gain within the recommended range; Group 2 included 27 women with excessive weight gain during pregnancy. The IMT was measured from non-diseased intimal and medial wall layers of the carotid artery on both sides, visualized by high-resolution 7.5 MHz ultrasound (Apogee CX Color, ATL). Placental histology subdivided placental findings into lesions consistent with maternal vascular and fetal vascular malperfusion, as well as inflammatory responses of maternal and fetal origin, according to the criteria of the Society for Pediatric Pathology. Results: IMT levels differed between groups and were significantly higher in Group 1 compared to Group 2 (0.7+/-0.1 vs 0.6+/-0.1, p=0.028). Multiple regression analysis of IMT included variables based on their associations in univariate analyses with a backward approach. Included in the model were pre-gestational BMI, HDL cholesterol and fasting glucose. The model was significant (p=0.001) and correctly classified 64.7% of study patients. In this model, pre-pregnancy BMI remained a significant independent predictor of subclinical atherosclerosis assessed by IMT (OR 4.314, 95% CI 0.0599-0.674, p=0.044). Among placental lesions related to fetal vascular malperfusion, villous changes consistent with fetal thrombo-occlusive disease (FTOD) were significantly higher in Group 1 than in Group 2 (p=0.034). In conclusion, the present study demonstrated that excessive weight gain during pregnancy is associated with an adverse effect on early stages of subclinical atherosclerosis, placental vascular circulation and neonatal complications. The precise mechanism for these vascular changes, as well as the overall clinical impact of weight control during pregnancy on IMT, placental vascular circulation, and pregnancy outcomes, deserves further investigation. Keywords: obesity, pregnancy, complications, weight gain
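This abstract reports an odds ratio and a classification rate, which correspond to a binary (logistic-type) regression of IMT status on the retained covariates. The sketch below shows how such a model is commonly fitted; the data are entirely synthetic and stand in for the study variables only as an illustration.

```python
# Minimal sketch: binary regression of subclinical-atherosclerosis status on
# pre-pregnancy BMI, HDL cholesterol and fasting glucose; data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 59
df = pd.DataFrame({
    "bmi": rng.normal(26, 4, n),
    "hdl": rng.normal(60, 12, n),
    "glucose": rng.normal(85, 10, n),
})
# Synthetic outcome: 1 = IMT above the cohort median (illustrative only)
logit_p = -8 + 0.3 * df["bmi"] - 0.01 * df["hdl"] + 0.0 * df["glucose"]
df["high_imt"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = sm.Logit(df["high_imt"], sm.add_constant(df[["bmi", "hdl", "glucose"]])).fit(disp=0)
print(np.exp(model.params))            # odds ratios
print(np.exp(model.conf_int()))        # 95% CIs for the odds ratios
pred = (model.predict() >= 0.5).astype(int)
print("correctly classified:", (pred == df["high_imt"]).mean())
```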
Procedia PDF Downloads 53
972 Variability Studies of Seyfert Galaxies Using Sloan Digital Sky Survey and Wide-Field Infrared Survey Explorer Observations
Authors: Ayesha Anjum, Arbaz Basha
Abstract:
Active Galactic Nuclei (AGN) are the actively accreting centers of galaxies that host supermassive black holes. AGN emit radiation at all wavelengths and also show variability across all wavelength bands. The analysis of flux variability tells us about the morphology of the region emitting the radiation. Some of the major classes of AGN are: (a) blazars, with featureless spectra, subclassified into BL Lacertae objects, Flat Spectrum Radio Quasars (FSRQs), and others; (b) Seyferts, with prominent emission line features, classified into Broad Line and Narrow Line Seyferts of Type 1 and Type 2; and (c) quasars and other types. The Sloan Digital Sky Survey (SDSS) is an optical telescope based in New Mexico, USA, that has observed and classified billions of objects based on automated photometric and spectroscopic methods. A sample of blazars is obtained from the third Fermi catalog. For variability analysis, we searched for light curves for these objects in the Wide-Field Infrared Survey Explorer (WISE) and Near-Earth Object WISE (NEOWISE) surveys in two bands: W1 (3.4 microns) and W2 (4.6 microns), reducing the final sample to 256 objects. These objects are classified into 155 BL Lacs, 99 FSRQs, and 2 Narrow Line Seyferts, namely PMN J0948+0022 and PKS 1502+036. Mid-infrared variability studies of these objects would be a contribution to the literature. With this as motivation, the present work is focused on studying the final sample of 256 objects in general and the Seyferts in particular. Owing to the fact that the classification is automated, SDSS has misclassified these objects as quasars, galaxies, and stars. Reasons for the misclassification are explained in this work. The variability analysis of these objects is done using the methods of flux amplitude variability and excess variance. The sample consists of observations in both the W1 and W2 bands. PMN J0948+0022 is observed between MJD 57154.79 and 58810.57, and PKS 1502+036 between MJD 57232.42 and 58517.11, which amounts to a period of over six years. The data are divided into different epochs spanning not more than 1.2 days. In all the epochs, the sources are found to be variable in both W1 and W2 bands. This confirms that the objects are variable in mid-infrared wavebands on both long and short timescales. Also, the sources are examined for color variability. Objects show either a bluer-when-brighter (BWB) trend or a redder-when-brighter (RWB) trend. A possible explanation for the BWB behavior shown by the present objects is that the longer-wavelength radiation emitted by the source can be suppressed by the high-energy radiation from the central source. Another result is that the smallest radius of the emission source is about one light-day, since the epoch span used in this work is one day. The mass of the black holes at the centers of these sources is found to be less than or equal to 10⁸ solar masses. Keywords: active galaxies, variability, Seyfert galaxies, SDSS, WISE
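The excess-variance analysis mentioned in this abstract has a standard form: the normalized excess variance and the fractional variability amplitude of a light curve. The sketch below computes both for a hypothetical W1 light curve; the flux and error values are placeholders, not the WISE/NEOWISE data used in the study.

```python
# Minimal sketch: normalized excess variance and fractional variability for a
# mid-infrared light curve; flux values are hypothetical placeholders.
import numpy as np

flux = np.array([10.2, 10.8, 9.9, 11.5, 10.4, 12.0, 9.7, 10.9])   # e.g. W1 fluxes (mJy)
err = np.array([0.3, 0.3, 0.4, 0.3, 0.3, 0.4, 0.3, 0.3])          # measurement errors

mean = flux.mean()
s2 = flux.var(ddof=1)                  # sample variance of the light curve
mse = np.mean(err ** 2)                # mean square measurement error

sigma_nxs2 = (s2 - mse) / mean**2      # normalized excess variance
f_var = np.sqrt(max(sigma_nxs2, 0.0))  # fractional variability amplitude
print(f"sigma_NXS^2 = {sigma_nxs2:.4f}, F_var = {f_var:.3f}")
```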
Procedia PDF Downloads 129
971 Treatment and Diagnostic Imaging Methods of Fetal Heart Function in Radiology
Authors: Mahdi Farajzadeh Ajirlou
Abstract:
Prior evidence of normal cardiac anatomy is desirable to relieve the anxiety of patients with a family history of congenital heart disease, or to offer the option of early termination of the pregnancy or close follow-up should a cardiac anomaly be proved. Fetal heart detection plays an important part in the assessment of the fetus, and it can reflect fetal heart function, which is regulated by the central nervous system. Acquisition of ventricular volume and inflow data would be useful to quantify valve regurgitation and ventricular function and to determine the degree of cardiovascular compromise in fetal conditions at risk for hydrops fetalis. This study discusses imaging the fetal heart with transvaginal ultrasound, Doppler ultrasound, three-dimensional ultrasound (3DUS) and four-dimensional (4D) ultrasound, spatiotemporal image correlation (STIC), magnetic resonance imaging, and cardiac catheterization. Doppler ultrasound (DUS) imaging is a real-time modality that images blood vessels and soft tissues particularly well. DUS imaging can show the shape of the fetus, but it cannot show whether the fetus is hypoxic or distressed. Spatiotemporal image correlation (STIC) enables the acquisition of a volume of data synchronized with the beating heart. The automated volume acquisition is made possible by the array in the transducer performing a slow single sweep, recording a single 3D data set corresponding to numerous 2D frames one behind the other. The volume acquisition can be done as static 3D, online 4D (direct volume scan, live 3D ultrasound, or so-called 4D (3D/4D)), or spatiotemporal image correlation (STIC; off-line 4D, which is a cyclic volume scan). Fetal cardiovascular MRI would appear to be an ideal approach to the noninvasive investigation of the impact of abnormal cardiovascular hemodynamics on antenatal brain growth and development. Still, there are practical limitations to the use of conventional MRI for fetal cardiovascular assessment, including the small size and high heart rate of the human fetus, the lack of conventional cardiac gating methods to synchronize data acquisition, and the potential corruption of MRI data due to maternal respiration and unpredictable fetal movements. Fetal cardiac MRI has the potential to complement ultrasound in detecting cardiovascular malformations and extracardiac lesions. Fetal cardiac intervention (FCI), a set of minimally invasive catheter interventions, is a new and evolving technique that allows in-utero treatment of a subset of severe forms of congenital heart disease. In special cases, it may be possible to modify the natural history of congenital heart disorders. It is entirely possible that future generations will ‘repair’ congenital heart disease in utero using nanotechnologies or remotely computer-guided micro-robots that work at the cellular level. Keywords: fetal, cardiac MRI, ultrasound, 3D, 4D, heart disease, invasive, noninvasive, catheter
Procedia PDF Downloads 40
970 The Toxic Effects of Kynurenine Metabolites on SH-SY5Y Neuroblastoma Cells
Authors: Susan Hall, Gary D. Grant, Catherine McDermott, Devinder Arora
Abstract:
Introduction/Aim: The kynurenine pathway is thought to play an important role in the pathophysiology of numerous neurodegenerative diseases, including depression, Alzheimer’s disease, and Parkinson’s disease. Numerous neuroactive compounds, including the neurotoxic 3-hydroxyanthranilic acid, 3-hydroxykynurenine and quinolinic acid and the neuroprotective kynurenic acid and picolinic acid, are produced through the metabolism of kynurenine, and the neurotoxic metabolites are thought to be the causative agents responsible for neurodegeneration. The toxicity of 3-hydroxykynurenine, 3-hydroxyanthranilic acid and quinolinic acid has been widely evaluated and demonstrated in primary cell cultures, but to date only 3-hydroxykynurenine and 3-hydroxyanthranilic acid have been shown to cause toxicity in immortal tumour cells. The aim of this study was to evaluate the toxicity of kynurenine metabolites, both individually and in combination, on SH-SY5Y neuroblastoma cells after 24 and 72 h exposure in order to explore a cost-effective model to study their neurotoxic effects and potential protective agents. Methods: SH-SY5Y neuroblastoma cells were exposed to various concentrations of the neuroactive kynurenine metabolites, both individually and in combination, for 24 and 72 h, and viability was subsequently evaluated using the Resazurin (Alamar blue) proliferation assay. Furthermore, the effects of these compounds, alone and in combination, on specific death pathways, including apoptosis, necrosis and free radical production, were evaluated using various assays. Results: Consistent with the literature, toxicity was shown with short-term 24-hour treatments at 1000 μM concentrations for both 3-hydroxykynurenine and 3-hydroxyanthranilic acid. Combinations of kynurenine metabolites showed modest toxicity towards SH-SY5Y neuroblastoma cells in a concentration-dependent manner. Specific cell death pathways, including apoptosis, necrosis and free radical production, were shown to be increased after both 24 and 72 h exposure of SH-SY5Y neuroblastoma cells to 3-hydroxykynurenine and 3-hydroxyanthranilic acid and to various combinations of neurotoxic kynurenine metabolites. Conclusion: It is well documented that neurotoxic kynurenine metabolites show toxicity towards primary human neurons in the nanomolar to low micromolar concentration range. Our results show that the concentrations required to cause significant cell death are in the range of 1000 µM for 3-hydroxykynurenine and 3-hydroxyanthranilic acid, and toxicity of quinolinic acid towards SH-SY5Y cells could not be demonstrated. This differs significantly from the toxicities observed in primary human neurons. Combinations of the neurotoxic metabolites were shown to have modest toxicity towards these cells, with increased toxicity and activation of cell death pathways observed after 72 h exposure. This study suggests that the 24 h model is unsuitable for use in neurotoxicity studies; however, the 72 h model better reflects the observations of studies using primary human neurons and may provide a cost-effective model to assess possible protective agents against kynurenine metabolite toxicities. Keywords: kynurenine metabolites, neurotoxicity, quinolinic acid, SH-SY5Y neuroblastoma
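Viability data from a resazurin assay of the kind described in this abstract are often summarized by fitting a four-parameter logistic concentration-response curve. The sketch below shows such a fit on synthetic data; it is not the analysis actually performed in the study, and the data points are invented for illustration only.

```python
# Minimal sketch: four-parameter logistic fit of viability vs. concentration;
# the data points are synthetic, for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([10, 50, 100, 250, 500, 1000, 2000])        # uM
viability = np.array([98, 96, 93, 85, 70, 52, 35])          # % of untreated control (synthetic)

popt, _ = curve_fit(four_pl, conc, viability,
                    p0=[20, 100, 800, 1.0], maxfev=10000)
bottom, top, ic50, hill = popt
print(f"estimated IC50 ~ {ic50:.0f} uM (Hill slope {hill:.2f})")
```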
Procedia PDF Downloads 417