Search results for: academic results
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 38721

201 An Impact Assessment of Festive Events on Sustainable Cultural Heritage: İdrisyayla Village

Authors: Betül Gelengül Eki̇mci̇, Semra Günay Aktaş

Abstract:

Festive events, habitual activities celebrated on a specified date by a local community, contribute to the recognition of the region. The main function of festive events is to gather people in an annual celebration, creating an atmosphere of understanding and the opportunity to participate in the joy of life. At the same time, festive events may serve as special occasions on which immigrants return home to celebrate with their family and community, reaffirming their identity and their link to the community’s traditions. Festivals also support the local economy by bringing different visitors to the region. The tradition of “Beet Brewing-Molasses Production,” which is held in İdrisyayla Village, is an intangible cultural heritage with customs, traditions, and rituals carrying traces of the cuisine culture of Rumelian immigrants of the Ottoman era. After the beet harvest in autumn, beet molasses syrup is made by traditional production methods with the cooperation of the local community. The festive brewing process provides for the transmission of knowledge and experience to the younger generations. Making molasses, which is a laborious process, is accompanied by folk games such as "sayacı," a vital element of the festive events performed in İdrisyayla; the performance provides an enjoyable time and supports motivation. Like other forms of intangible cultural heritage, the Beet Brewing-Molasses festival in İdrisyayla is threatened by rapid urbanisation, migration of the young generation, industrialisation and environmental change. Because they depend on the broad participation of practitioners, the festive events are threatened with gradual disappearance due to the changes communities undergo in modern societies. Ensuring the continuity of festive events often requires the mobilization of large numbers of individuals and of the social, political and legal institutions and mechanisms of society. In 2015, an intangible cultural heritage research project titled "İdrisyayla Molasses Process" was managed by the Eskişehir Governorship, the City Directorate of Culture and Tourism and Anadolu University; project members took part in the festival organization to promote sustainability, make it visible, encourage the broadest public participation possible, and ensure public awareness of its cultural importance. To preserve the originality of, and encourage participation in, the festive events in İdrisyayla, local associations, researchers and institutions created a foundation that supports festive events such as the "sayacı" folk game. Practitioners have found new opportunities to market İdrisyayla molasses production. A publicity programme through the press and an exhibition made it possible to stress the cultural importance of the festive events in İdrisyayla Village. The research reported here used a survey analysis to evaluate the effect of the festival after the 2015 event in İdrisyayla Village. Particular attention was paid to the importance of the cultural aspects of the festival. Based on a survey of more than a hundred festival attendees, several recommendations are made to festival planners. Results indicate that the variety of festive activities and the products offered for sale are very important to attendees. Local participants care more about product sales than about cultural heritage.

Keywords: agritourism, cultural tourism, festival, sustainable cultural heritage

Procedia PDF Downloads 221
200 Harnessing the Benefits and Mitigating the Challenges of Neurosensitivity for Learners: A Mixed Methods Study

Authors: Kaaryn Cater

Abstract:

People vary in how they perceive, process, and react to internal, external, social, and emotional environmental factors; some are more sensitive than others. Highly sensitive people have a highly reactive nervous system and are more impacted by positive and negative environmental conditions (Differential Susceptibility). Further, some sensitive individuals are disproportionately able to benefit from positive and supportive environments without necessarily suffering negative impacts in less supportive environments (Vantage Sensitivity). Environmental sensitivity is underpinned by physiological, genetic, and personality/temperamental factors, and the phenotypic expression of high sensitivity is Sensory Processing Sensitivity. The hallmarks of Sensory Processing Sensitivity are deep cognitive processing, emotional reactivity, high levels of empathy, noticing environmental subtleties, a tendency to observe in new and novel situations, and a propensity to become overwhelmed when over-stimulated. Several educational advantages associated with high sensitivity include creativity, enhanced memory, divergent thinking, giftedness, and metacognitive monitoring. High sensitivity can also lead to some educational challenges, particularly managing multiple conflicting demands and negotiating low sensory thresholds. A mixed methods study was undertaken. In the first, quantitative, study, participants completed the Perceived Success in Study Survey (PSISS) and the Highly Sensitive Person Scale (HSPS-12). The inclusion criterion was current or previous postsecondary education experience. The survey was presented on social media, and snowball recruitment was employed (n=365). The Excel spreadsheets were uploaded to the Statistical Package for the Social Sciences (SPSS, version 26), and descriptive statistics found normal distribution. T-tests and analysis of variance (ANOVA) calculations found no difference in the responses of demographic groups, and Principal Components Analysis and post-hoc Tukey calculations identified positive associations between high sensitivity and three of the five PSISS factors. Further ANOVA calculations found positive associations between the PSISS and two of the three sensitivity subscales. This study included a response field to register interest in further research. Respondents who scored in the 70th percentile on the HSPS-12 were invited to participate in a semi-structured interview. Thirteen interviews were conducted remotely (12 female). Reflexive inductive thematic analysis was employed to analyse the data, and a descriptive approach was employed to present data reflective of participant experience. The results of this study found that highly sensitive students prioritize work-life balance; employ a range of practical metacognitive study and self-care strategies; value independent learning; connect with learning that is meaningful; and are bothered by aspects of the physical learning environment, including lighting, noise, and indoor environmental pollutants. There is a dearth of research investigating sensitivity in the educational context, and these studies highlight the need to promote widespread education sector awareness of environmental sensitivity, and the need to include sensitivity in sector and institutional diversity and inclusion initiatives.
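As a sketch of the group comparison described above, a one-way ANOVA with post-hoc Tukey testing could be run in Python as follows; the column names and the data file are hypothetical stand-ins (the study itself used SPSS 26):

```python
# Sketch only: illustrates the ANOVA + post-hoc Tukey workflow described above.
# Column names ("hsps_total", "psiss_factor1") and the CSV file are hypothetical.
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("survey_responses.csv")  # hypothetical export of the survey data

# Split respondents into low / medium / high sensitivity groups by HSPS-12 tertiles
df["sensitivity_group"] = pd.qcut(df["hsps_total"], q=3, labels=["low", "medium", "high"])

# One-way ANOVA: does perceived study success differ across sensitivity groups?
groups = [g["psiss_factor1"].values for _, g in df.groupby("sensitivity_group")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc Tukey HSD to see which pairs of groups differ
tukey = pairwise_tukeyhsd(endog=df["psiss_factor1"], groups=df["sensitivity_group"])
print(tukey.summary())
```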

Keywords: differential susceptibility, highly sensitive person, learning, neurosensitivity, sensory processing sensitivity, vantage sensitivity

Procedia PDF Downloads 65
199 Holistic Urban Development: Incorporating Both Global and Local Optimization

Authors: Christoph Opperer

Abstract:

The rapid urbanization of modern societies and the need for sustainable urban development demand innovative solutions that meet both individual and collective needs while addressing environmental concerns. To address these challenges, this paper presents a study that explores the potential of spatial and energetic/ecological optimization to enhance the performance of urban settlements, focusing on both architectural and urban scales. The study focuses on the application of biological principles and self-organization processes in urban planning and design, aiming to achieve a balance between ecological performance, architectural quality, and individual living conditions. The research adopts a case study approach, focusing on a 10-hectare brownfield site in the south of Vienna. The site is surrounded by a small-scale built environment as an appropriate starting point for the research and design process. However, the selected urban form is not a prerequisite for the proposed design methodology, as the findings can be applied to various urban forms and densities. The methodology used in this research involves dividing the overall building mass and program into individual small housing units. A computational model has been developed to optimize the distribution of these units, considering factors such as solar exposure/radiation, views, privacy, proximity to sources of disturbance (such as noise), and minimal internal circulation areas. The model also ensures that existing vegetation and buildings on the site are preserved and incorporated into the optimization and design process. The model allows for simultaneous optimization at two scales, architectural and urban design, which have traditionally been addressed sequentially. This holistic design approach leads to individual and collective benefits, resulting in urban environments that foster a balance between ecology and architectural quality. The results of the optimization process demonstrate a seemingly random distribution of housing units that, in fact, is a densified hybrid between traditional garden settlements and allotment settlements. This urban typology is selected due to its compatibility with the surrounding urban context, although the presented methodology can be extended to other forms of urban development and density levels. The benefits of this approach are threefold. First, it allows for the determination of ideal housing distribution that optimizes solar radiation for each building density level, essentially extending the concept of sustainable building to the urban scale. Second, the method enhances living quality by considering the orientation and positioning of individual functions within each housing unit, achieving optimal views and privacy. Third, the algorithm's flexibility and robustness facilitate the efficient implementation of urban development with various stakeholders, architects, and construction companies without compromising its performance. The core of the research is the application of global and local optimization strategies to create efficient design solutions. By considering both, the performance of individual units and the collective performance of the urban aggregation, we ensure an optimal balance between private and communal benefits. By promoting a holistic understanding of urban ecology and integrating advanced optimization strategies, our methodology offers a sustainable and efficient solution to the challenges of modern urbanization.
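The unit-distribution logic described above could be prototyped along the following lines; this is a minimal sketch of a weighted-score placement heuristic under simplified assumptions (a flat grid, a south-facing solar proxy, a spacing penalty), not the authors' actual computational model:

```python
# Minimal sketch of a housing-unit placement heuristic on a discretised site grid.
# Objective weights and scoring rules are illustrative assumptions, not the study's model.
import random

GRID = 20          # site discretised into a 20 x 20 grid of candidate positions
N_UNITS = 30       # number of small housing units to place
PRIVACY_DIST = 2   # minimum preferred spacing between units (grid cells)

def solar_score(cell):
    # Assumption: cells further south (larger row index) receive more sun.
    return cell[0] / (GRID - 1)

def layout_score(layout):
    score = sum(solar_score(c) for c in layout)
    # Penalise units that crowd each other (proxy for privacy / mutual shading).
    for i, a in enumerate(layout):
        for b in layout[i + 1:]:
            if abs(a[0] - b[0]) + abs(a[1] - b[1]) < PRIVACY_DIST:
                score -= 1.0
    return score

random.seed(0)
cells = [(r, c) for r in range(GRID) for c in range(GRID)]
layout = random.sample(cells, N_UNITS)

# Simple local search: move one unit at a time, keep the move if the score improves.
for _ in range(5000):
    i = random.randrange(N_UNITS)
    candidate = layout.copy()
    candidate[i] = random.choice(cells)
    if len(set(candidate)) == N_UNITS and layout_score(candidate) > layout_score(layout):
        layout = candidate

print("final layout score:", round(layout_score(layout), 2))
```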

Keywords: sustainable development, self-organization, ecological performance, solar radiation and exposure, daylight, visibility, accessibility, spatial distribution, local and global optimization

Procedia PDF Downloads 66
198 Prevalence and Diagnostic Evaluation of Schistosomiasis in School-Going Children in Nelson Mandela Bay Municipality: Insights from Urinalysis and Point-of-Care Testing

Authors: Maryline Vere, Wilma ten Ham-Baloyi, Lucy Ochola, Opeoluwa Oyedele, Lindsey Beyleveld, Siphokazi Tili, Takafira Mduluza, Paula E. Melariri

Abstract:

Schistosomiasis, caused by the Schistosoma (S.) haematobium and Schistosoma (S.) mansoni parasites, poses a significant public health challenge in low-income regions. Diagnosis typically relies on identifying specific urine biomarkers such as haematuria, protein, and leukocytes for S. haematobium, while the Point-of-Care Circulating Cathodic Antigen (POC-CCA) assay is employed for detecting S. mansoni. Urinalysis and the POC-CCA assay are favoured for their rapid, non-invasive nature and cost-effectiveness. However, traditional diagnostic methods such as Kato-Katz and urine filtration lack sensitivity in low-transmission areas, which can lead to underreporting of cases and hinder effective disease control efforts. Therefore, in this study, urinalysis and the POC-CCA assay were utilised to diagnose schistosomiasis effectively among school-going children in Nelson Mandela Bay Municipality. This was a cross-sectional study with a total of 759 children, aged 5 to 14 years, who provided urine samples. Urinalysis was performed using urinary dipstick tests, which measure multiple parameters, including haematuria, protein, leukocytes, bilirubin, urobilinogen, ketones, pH, specific gravity and other biomarkers. Urinalysis was performed by dipping the strip into the urine sample and observing colour changes on specific reagent pads. The POC-CCA test was conducted by applying a drop of urine onto a cassette containing CCA-specific antibodies; the presence of a visible test line indicated a positive result for S. mansoni infection. Descriptive statistics were used to summarize urine parameters, and Pearson correlation coefficients (r) were calculated to analyze associations among urine parameters using R software (version 4.3.1). Among the 759 children, the prevalence of S. haematobium using haematuria as a diagnostic marker was 33.6%. Additionally, leukocytes were detected in 21.3% of the samples, and protein was present in 15%. The prevalence of positive POC-CCA test results for S. mansoni was 3.7%. Urine parameters exhibited low to moderate associations, suggesting complex interrelationships. For instance, specific gravity and pH showed a negative correlation (r = -0.37), indicating that higher specific gravity was associated with lower pH. Weak correlations were observed between haematuria and pH (r = -0.10), bilirubin and ketones (r = 0.14), protein and bilirubin (r = 0.13), and urobilinogen and pH (r = 0.12). A mild positive correlation was found between leukocytes and blood (r = 0.23), reflecting some association between these inflammation markers. In conclusion, the study identified a significant prevalence of schistosomiasis among school-going children in Nelson Mandela Bay Municipality, with S. haematobium detected through haematuria and S. mansoni identified using the POC-CCA assay. The detection of leukocytes and protein in urine samples provides critical biomarkers for schistosomiasis infection, reinforcing the presence of schistosomiasis in the study area when considered alongside haematuria. These urine parameters are indicative of inflammatory responses associated with schistosomiasis, underscoring the necessity for effective diagnostic methodologies. Such findings highlight the importance of comprehensive diagnostic assessments to accurately identify and monitor schistosomiasis prevalence and its associated health impacts. The significant burden of schistosomiasis in this population highlights the urgent need to develop targeted control interventions to effectively reduce its prevalence in the study area.
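The pairwise correlation analysis summarised above (performed by the authors in R 4.3.1) can be illustrated with a short Python sketch; the column names and data file below are hypothetical stand-ins for the dipstick parameters:

```python
# Sketch of the pairwise Pearson correlation analysis described above.
# The study used R (version 4.3.1); this Python version with hypothetical
# column names is for illustration only.
import pandas as pd

df = pd.read_csv("urinalysis_results.csv")  # hypothetical data file
params = ["haematuria", "protein", "leukocytes", "bilirubin",
          "urobilinogen", "ketones", "ph", "specific_gravity"]

# Pairwise Pearson correlation coefficients among the dipstick parameters
corr = df[params].corr(method="pearson")
print(corr.round(2))

# Example: the reported association between specific gravity and pH
print("specific gravity vs pH:", corr.loc["specific_gravity", "ph"].round(2))
```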

Keywords: schistosomiasis, urinalysis, haematuria, POC-CCA

Procedia PDF Downloads 20
197 Comprehensive Literature Review of the Humanistic Burden of Clostridium (Clostridioides) difficile Infection

Authors: Caroline Seo, Jennifer Stephens, Kirstin H. Heinrich

Abstract:

Background: Clostridioides (formerly Clostridium) difficile is an anaerobic, spore-forming bacterium; C. difficile infection (CDI) has manifestations including diarrhea, pseudomembranous colitis and toxic megacolon. Despite general understanding that CDI may be associated with marked burden on patients’ health, there has been limited information available on the humanistic burden of CDI. The objective of this literature review was to summarize the published data on the humanistic burden of CDI globally, in order to better inform future research efforts and increase awareness of the patient perspective in this disease. Methods: A comprehensive literature review of the past 15 years (2002-2017) was conducted using MEDLINE, Embase and the Cumulative Index of Nursing and Allied Health Literature. Additional searches were conducted from conference proceedings (2015-2017). Articles selected were studies specifically designed to examine the humanistic burden of illness associated with adult patients with CDI. Results: Of 3,325 articles or abstracts identified, 33 remained after screening and full text review. Sixty percent (60%) were published in 2016 or 2017. Data from the United States or Western Europe were most common; data from Brazil, Canada, China and Spain also exist. Thirteen (13) studies used validated patient-reported outcomes instruments, mostly the EQ-5D utility and SF-36 generic instruments. Three (3) studies used CDI-specific instruments (CDiff32, CDI-DaySyms). The burden of CDI impacts patients in multiple health-related quality of life (HRQOL) domains. The SF-36 domains with the largest decrements compared to other GI diarrheal diseases (IBS-D and Crohn’s) were role physical, physical functioning, vitality, social functioning, and role emotional. Reported EQ-5D utilities for CDI ranged from 0.35-0.42, compared to 0.65 in Crohn’s and 0.72 in IBS-D. The majority of papers addressed physical functioning and mental health domains (67% for both). Across various studies, patients reported weakness, lack of appetite, sleep disturbance, functional dependence, and decreased activities of daily living due to the continuous diarrhea. Due to lack of control over this infection, CDI also impacts the psychological and emotional quality of life of the patients. Patients reported feelings of fear, anxiety, frustration, depression, and embarrassment. Additionally, the type of disease (primary vs. recurrent) may impact mental health: one study indicated a decrement in SF-36 mental scores in patients with recurrent CDI in comparison to patients with primary CDI. Other domains highlighted by these studies include pain (27%), social isolation (27%), vitality and fatigue (24%), self-care (9%), and caregiver burden (0%). Two studies addressed work productivity, with one of these studies reporting that CDI patients had the highest work productivity and activity impairment scores among the gastrointestinal diseases. No study specifically included caregiver self-report; however, three studies did mention patients’ worry about how their diagnosis of CDI would impact family, caregivers, and/or friends. Conclusions: Despite being a serious public health issue, there has been a paucity of research on HRQOL among those with CDI. While progress is being made, gaps exist in understanding the burden on patients, caregivers, and families. Future research is warranted to aid understanding of the CDI patient perspective.

Keywords: burden, Clostridioides, difficile, humanistic, infection

Procedia PDF Downloads 136
196 Feasibility of Implementing Digital Healthcare Technologies to Prevent Disease: A Mixed-Methods Evaluation of a Digital Intervention Piloted in the National Health Service

Authors: Rosie Cooper, Tracey Chantler, Ellen Pringle, Sadie Bell, Emily Edmundson, Heidi Nielsen, Sheila Roberts, Michael Edelstein, Sandra Mounier Jack

Abstract:

Introduction: In line with the National Health Service’s (NHS) long-term plan, the NHS is looking to implement more digital health interventions. This study explores a case study in this area: a digital intervention used by NHS Trusts in London to consent adolescents for Human Papilloma Virus (HPV) immunisation. Methods: The electronic consent intervention was implemented in 14 secondary schools in inner city London. These schools were statistically matched with 14 schools from the same area that were consenting using paper forms. Schools were matched on deprivation and English as an additional language. Consent form return rates and HPV vaccine uptake were compared quantitatively between intervention and matched schools. Data from observations of immunisation sessions and school feedback forms were analysed thematically. Individual and group interviews were undertaken with implementers, parents, and adolescents, and a focus group was held with adolescents; these data were analysed thematically. Results: Twenty-eight schools (14 e-consent schools and 14 paper consent schools) comprising 3219 girls (1733 in paper consent schools and 1486 in e-consent schools) were included in the study. The proportion of pupils eligible for free school meals, the proportion with English as an additional language, and students' ethnicity profile were similar between the e-consent and paper consent schools. Return of consent forms was not increased by the implementation of the e-consent intervention. There was no difference in the proportion of pupils that were vaccinated at the scheduled vaccination session between the paper (n=14) and e-consent (n=14) schools (80.6% vs. 81.3%, p=0.93). The transition to using the system was not straightforward: whilst schools and staff understood the potential benefits, they found it difficult to adapt to new ways of working which removed some level of control from schools. Part of the reason for lower consent form return in e-consent schools was that some parents found the intervention difficult to use due to limited access to the internet, difficulty opening the weblink, language barriers, and, in some cases, the system closing a few days prior to sessions. Adolescents also highlighted the potential for e-consent interventions to by-pass their information needs. Discussion: We would advise caution against dismissing the e-consent intervention because it did not achieve its goal of increasing the return of consent forms. Given the problems of embedding a new service, it was encouraging that HPV vaccine uptake remained stable. Introducing change requires stakeholders to understand, buy in, and work together with others. Schools and staff understood the potential benefits of using e-consent but found the new ways of working removed some level of control from schools, which they found hard to adapt to, possibly suggesting that implementing digital technology will require an embedding process. Conclusion: The future direction of the NHS will require implementation of digital technology. Obtaining electronic consent from parents could help streamline school-based adolescent immunisation programmes. Findings from this study suggest that when implementing new digital technologies, it is important to allow for a period of embedding to enable them to become incorporated into everyday practice.

Keywords: consent, digital, immunisation, prevention

Procedia PDF Downloads 146
195 National Accreditation Board for Hospitals and Healthcare Reaccreditation, the Challenges and Advantages: A Qualitative Case Study

Authors: Narottam Puri, Gurvinder Kaur

Abstract:

Background: The National Accreditation Board for Hospitals & Healthcare Providers (NABH) is India’s apex standard-setting accrediting body in health care, which evaluates and accredits healthcare organizations. NABH requires accredited organizations to become reaccredited every three years. It is often thought that once the initial accreditation is complete, the foundation is set and reaccreditation is a much simpler process. Fortis Hospital, Shalimar Bagh, a part of the Fortis Healthcare group, is a 262-bed, multi-specialty tertiary care hospital. The hospital was successfully accredited in the year 2012. On completion of its first cycle, the hospital underwent a reaccreditation assessment in the year 2015. This paper aims to gain a better understanding of the challenges that accredited hospitals face when preparing for a renewal of their accreditations. Methods: The study was conducted using a cross-sectional mixed methods approach; semi-structured interviews were conducted with the senior leadership team and staff members, including doctors and nurses. Documents collated by the QA team while preparing for the re-assessment, such as the data on quality indicators (the method of collection, analysis, trending, and continual incremental improvements made over time), minutes of the meetings, amendments made to the existing policies, and new policies drafted, were reviewed to understand the challenges. Results: The senior leadership had a concern about the cost of accreditation and its impact on the quality of health care services, considering the staff effort and time it consumed. The management was however in favor of continuing with the accreditation since it offered competitive advantage and strengthened community confidence, besides better pay rates from the payors. The clinicians regarded it as an increased non-clinical workload. Doctors felt accountable within a professional framework, to themselves, the patient and family, their peers and to their profession, but not to accreditation bodies, and raised concerns on how the quality indicators were measured. The departmental leaders had a positive perception of accreditation. They agreed that it ensured high standards of care and improved management of their functional areas. However, they were reluctant to spare people for the QA activities due to staffing issues. With staff turnover, a lot of sticky knowledge was lost and work had to be redone. Listing the continual quality improvement initiatives over the last 3 years was a challenge in itself. Conclusion: The success of any quality assurance reaccreditation program depends almost entirely on the commitment and interest of the administrators, nurses, paramedical staff, and clinicians. The leader of the quality movement is critical in propelling and building momentum. Leaders need to recognize skepticism and resistance and consider ways in which staff can become positively engaged. Involvement of all the functional owners is the starting point towards building ownership and accountability for standards compliance. Creativity plays a very valuable role; communication in any and every form helps, whether mail series, WhatsApp groups, quizzes, or events. Leaders must be able to generate interest and commitment without burdening clinical and administrative staff with an activity they neither understand nor believe in.

Keywords: NABH, reaccreditation, quality assurance, quality indicators

Procedia PDF Downloads 224
194 A Study on the Relation among Primary Care Professionals Serving Disadvantaged Community, Socioeconomic Status, and Adverse Health Outcome

Authors: Chau-Kuang Chen, Juanita Buford, Colette Davis, Raisha Allen, John Hughes, James Tyus, Dexter Samuels

Abstract:

During the post-Civil War era, the city of Nashville, Tennessee, had the highest mortality rate in the country. The elevated death and disease among ex-slaves were attributable to the unavailability of healthcare. To address the paucity of healthcare services, the College, an institution with the mission of educating minority professionals and serving the underserved population, was established in 1876. This study was designed to assess whether the College has accomplished its mission of serving underserved communities and contributed to the elimination of health disparities in the United States. The study objective was to quantify the impact of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities, which, in turn, was significantly associated with a health professional shortage score partly designated by the U.S. Department of Health and Human Services. Various statistical methods were used to analyze the alumni data for the years 1975-2013. K-means cluster analysis was utilized to classify individual medical and dental graduates into cluster groups of practice communities (Disadvantaged or Non-disadvantaged Communities). Discriminant analysis was implemented to verify the classification accuracy of the cluster analysis. An independent t-test was performed to detect significant mean differences in clustering and criterion variables between Disadvantaged and Non-disadvantaged Communities, which confirms the “content” validity of the cluster analysis model. A chi-square test was used to assess whether the proportion of cluster groups (Disadvantaged vs. Non-disadvantaged Communities) was consistent with that of practicing specialties (primary care vs. non-primary care). Finally, a partial least squares (PLS) path model was constructed to explore the “construct” validity of the analytics model by providing the magnitude of the effects of socioeconomic status and adverse health outcome on primary care professionals serving disadvantaged communities. Social ecological theory, along with the statistical models mentioned, was used to establish the relationship between medical and dental graduates (primary care professionals serving disadvantaged communities) and their social environments (socioeconomic status, adverse health outcome, health professional shortage score). Based on the social ecological framework, it was hypothesized that the impact of socioeconomic status and adverse health outcomes on primary care professionals serving disadvantaged communities could be quantified. Also, the relation of primary care professionals serving disadvantaged communities to a health professional shortage score can be measured. Adverse health outcome (adult obesity rate, age-adjusted premature mortality rate, and percent of people diagnosed with diabetes) could be affected by the latent variable, namely socioeconomic status (unemployment rate, poverty rate, percent of children in free lunch programs, and percent of uninsured adults). The study results indicated that approximately 83% (3,192/3,864) of the College’s medical and dental graduates from 1975 to 2013 were practicing in disadvantaged communities. In addition, the PLS path modeling demonstrated that primary care professionals serving disadvantaged communities was significantly associated with socioeconomic status and adverse health outcome (p < .001). In summary, the majority of medical and dental graduates from the College provide primary care services to disadvantaged communities with low socioeconomic status and high adverse health outcomes, which demonstrates that the College has fulfilled its mission.
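A compressed sketch of the clustering and association steps described above might look as follows in Python; the feature names and input file are hypothetical placeholders, and the discriminant analysis and PLS path modeling used in the study are omitted here:

```python
# Sketch of the K-means clustering and chi-square steps described above.
# Feature names and the input file are hypothetical placeholders.
import pandas as pd
from sklearn.cluster import KMeans
from scipy.stats import chi2_contingency

df = pd.read_csv("alumni_practice_data.csv")  # hypothetical alumni dataset
features = ["unemployment_rate", "poverty_rate", "free_lunch_pct", "uninsured_pct",
            "obesity_rate", "premature_mortality_rate", "diabetes_pct"]

# K-means with two clusters: disadvantaged vs non-disadvantaged practice communities
km = KMeans(n_clusters=2, n_init=10, random_state=0)
df["community_cluster"] = km.fit_predict(df[features])

# Chi-square test: is cluster membership associated with practicing specialty
# (primary care vs non-primary care)?
table = pd.crosstab(df["community_cluster"], df["primary_care"])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```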

Keywords: disadvantaged community, K-means cluster analysis, PLS path modeling, primary care

Procedia PDF Downloads 550
193 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has been recently proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result from Process II depends highly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm voxels), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a Probability Distribution Function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven Deep Learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training as well as real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.
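The single-step, data-driven mapping described above can be pictured with a toy PyTorch sketch; this is not the authors' network (a full 3D U-Net) and it uses random tensors in place of the synthetic MRI training set:

```python
# Toy sketch of training a 3D convolutional network to map MRI measurements to
# iron concentration maps. The real work used a 3D U-Net and a large synthetic
# training set; here random tensors stand in for the data.
import torch
import torch.nn as nn

model = nn.Sequential(                      # small stand-in for the 3D U-Net
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=1),        # voxel-wise iron concentration estimate
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(10):                      # a real run would iterate over thousands of heads
    mri = torch.randn(2, 1, 32, 32, 32)     # synthetic MRI patches (batch, channel, D, H, W)
    iron = torch.randn(2, 1, 32, 32, 32)    # corresponding ground-truth iron maps
    pred = model(mri)
    loss = loss_fn(pred, iron)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.4f}")
```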

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 136
192 Sensorless Machine Parameter-Free Control of Doubly Fed Reluctance Wind Turbine Generator

Authors: Mohammad R. Aghakashkooli, Milutin G. Jovanovic

Abstract:

The brushless doubly-fed reluctance generator (BDFRG) is an emerging, medium-speed alternative to a conventional wound rotor slip-ring doubly-fed induction generator (DFIG) in wind energy conversion systems (WECS). It can provide competitive overall performance and similarly low failure rates of a typically 30% rated back-to-back power electronics converter in 2:1 speed ranges, but with the following important reliability and cost advantages over DFIG: the maintenance-free operation afforded by its brushless structure, 50% synchronous speed with the same number of rotor poles (allowing the use of a more compact and more efficient two-stage gearbox instead of a vulnerable three-stage one), and superior grid integration properties including simpler protection for the low voltage ride through compliance of the fractional converter due to the comparatively higher leakage inductances and lower fault currents. Vector controlled pulse-width-modulated converters generally feature a much lower total harmonic distortion relative to hysteresis counterparts with variable switching rates and as such have been a predominant choice for BDFRG (and DFIG) wind turbines. Eliminating a shaft position sensor, which is often required for control implementation in this case, would be desirable to address the associated reliability issues. This fact has largely motivated the recent growing research on sensorless methods and the development of various rotor position and/or speed estimation techniques for this purpose. The main limitation of all the observer-based control approaches for grid-connected wind power applications of the BDFRG reported in the open literature is the requirement for pre-commissioning procedures and prior knowledge of the machine inductances, which are usually difficult to accurately identify by off-line testing. A model reference adaptive system (MRAS) based sensorless vector control scheme to be presented will overcome this shortcoming. The true machine parameter independence of the proposed field-oriented algorithm, offering robust, inherently decoupled real and reactive power control of the grid-connected winding, is achieved by on-line estimation of the inductance ratio that the underlying MRAS observer for rotor angular velocity and position relies upon. Such an observer configuration will be more practical to implement and clearly preferable to the existing machine parameter dependent solutions, especially bearing in mind that, with very few modifications, it can be adapted for commercial DFIGs, with immediately obvious further industrial benefits and prospects for this work. The excellent encoder-less controller performance with maximum power point tracking in the base speed region will be demonstrated by realistic simulation studies using large-scale BDFRG design data and verified by experimental results on a small laboratory prototype of the WECS emulation facility.
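The general idea of an MRAS estimator, in which a reference model output is compared against an adaptive model driven by the speed estimate and the error feeds a PI adaptation law, can be sketched generically as below; this is a textbook-style illustration on synthetic signals, not the parameter-free BDFRG observer proposed in the paper:

```python
# Generic illustration of the MRAS principle: a reference signal from the "machine"
# is compared with the output of an adaptive model driven by the speed estimate,
# and a PI adaptation law drives the estimate until the two agree.
# Simplified sketch only; not the proposed parameter-free BDFRG observer.
import math

dt = 1e-4
omega_true = 2 * math.pi * 10.0     # true electrical speed, rad/s (illustrative)
omega_hat = 0.0                     # speed estimate
theta_hat = 0.0                     # estimated angle of the adaptive model
kp, ki = 200.0, 5000.0              # PI adaptation gains (illustrative values)
integral = 0.0

for k in range(20000):
    t = k * dt
    # Reference model: orthogonal signals measured from the machine
    ref_sin, ref_cos = math.sin(omega_true * t), math.cos(omega_true * t)
    # Adaptive model: the same signals reconstructed from the estimated angle
    adp_sin, adp_cos = math.sin(theta_hat), math.cos(theta_hat)
    # Cross-product error is proportional to the angle (hence speed) mismatch
    error = ref_sin * adp_cos - ref_cos * adp_sin
    integral += error * dt
    omega_hat = kp * error + ki * integral      # PI adaptation law
    theta_hat += omega_hat * dt                 # advance the adaptive model

print(f"estimated speed: {omega_hat:.1f} rad/s (true: {omega_true:.1f})")
```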

Keywords: brushless doubly fed reluctance generator, model reference adaptive system, sensorless vector control, wind energy conversion

Procedia PDF Downloads 62
191 Computer-Integrated Surgery of the Human Brain, New Possibilities

Authors: Ugo Galvanetto, Pirto G. Pavan, Mirco Zaccariotto

Abstract:

The discipline of Computer-integrated surgery (CIS) will provide equipment able to improve the efficiency of healthcare systems and, which is more important, clinical results. Surgeons and machines will cooperate in new ways that will extend surgeons’ ability to train, plan and carry out surgery. Patient specific CIS of the brain requires several steps: 1 - Fast generation of brain models. Based on image recognition of MR images and equipped with artificial intelligence, image recognition techniques should differentiate among all brain tissues and segment them. After that, automatic mesh generation should create the mathematical model of the brain in which the various tissues (white matter, grey matter, cerebrospinal fluid …) are clearly located in the correct positions. 2 – Reliable and fast simulation of the surgical process. Computational mechanics will be the crucial aspect of the entire procedure. New algorithms will be used to simulate the mechanical behaviour of cutting through cerebral tissues. 3 – Real time provision of visual and haptic feedback A sophisticated human-machine interface based on ergonomics and psychology will provide the feedback to the surgeon. The present work will address in particular point 2. Modelling the cutting of soft tissue in a structure as complex as the human brain is an extremely challenging problem in computational mechanics. The finite element method (FEM), that accurately represents complex geometries and accounts for material and geometrical nonlinearities, is the most used computational tool to simulate the mechanical response of soft tissues. However, the main drawback of FEM lies in the mechanics theory on which it is based, classical continuum Mechanics, which assumes matter is a continuum with no discontinuity. FEM must resort to complex tools such as pre-defined cohesive zones, external phase-field variables, and demanding remeshing techniques to include discontinuities. However, all approaches to equip FEM computational methods with the capability to describe material separation, such as interface elements with cohesive zone models, X-FEM, element erosion, phase-field, have some drawbacks that make them unsuitable for surgery simulation. Interface elements require a-priori knowledge of crack paths. The use of XFEM in 3D is cumbersome. Element erosion does not conserve mass. The Phase Field approach adopts a diffusive crack model instead of describing true tissue separation typical of surgical procedures. Modelling discontinuities, so difficult when using computational approaches based on classical continuum Mechanics, is instead easy for novel computational methods based on Peridynamics (PD). PD is a non-local theory of mechanics formulated with no use of spatial derivatives. Its governing equations are valid at points or surfaces of discontinuity, and it is, therefore especially suited to describe crack propagation and fragmentation problems. Moreover, PD does not require any criterium to decide the direction of crack propagation or the conditions for crack branching or coalescence; in the PD-based computational methods, cracks develop spontaneously in the way which is the most convenient from an energy point of view. Therefore, in PD computational methods, crack propagation in 3D is as easy as it is in 2D, with a remarkable advantage with respect to all other computational techniques.
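To make the bond-based idea concrete, a minimal 1D peridynamics sketch is given below; the material constants and critical stretch are illustrative placeholders, and real surgical-simulation models are of course far richer:

```python
# Minimal 1D bond-based peridynamics sketch: a bar of material points interacting
# through bonds within a horizon; bonds break when their stretch exceeds a critical
# value, which is how discontinuities (cuts, cracks) emerge without remeshing.
# All material constants here are illustrative, not tissue properties.
import numpy as np

n = 101                 # number of material points
dx = 1e-3               # spacing between points, m
horizon = 3.015 * dx    # interaction radius
c = 1e12                # bond micro-modulus (illustrative)
s0 = 0.01               # critical stretch: bonds break beyond this
rho = 1000.0            # density, kg/m^3
dt = 1e-8               # time step, s

x = np.arange(n) * dx                     # reference positions
u = np.zeros(n)                           # displacements
v = np.zeros(n)                           # velocities

# Build the bond list (pairs of points within each other's horizon)
bonds = [(i, j) for i in range(n) for j in range(i + 1, n) if abs(x[j] - x[i]) <= horizon]
intact = {b: True for b in bonds}

for step in range(2000):
    f = np.zeros(n)
    for (i, j) in bonds:
        if not intact[(i, j)]:
            continue
        xi = x[j] - x[i]                  # reference bond length
        eta = u[j] - u[i]                 # relative displacement
        stretch = eta / xi                # 1D bond stretch
        if abs(stretch) > s0:
            intact[(i, j)] = False        # bond failure = emerging discontinuity
            continue
        force = c * stretch * dx          # pairwise force density times volume
        f[i] += force
        f[j] -= force
    f[-1] += 1e6                          # constant traction pulling the right end
    v += dt * f / rho
    u += dt * v
    u[0] = 0.0                            # clamp the left end

print("tip displacement:", u[-1])
```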

Keywords: computational mechanics, peridynamics, finite element, biomechanics

Procedia PDF Downloads 80
190 Impact of Increased Radiology Staffing on After-Hours Radiology Reporting Efficiency and Quality

Authors: Peregrine James Dalziel, Philip Vu Tran

Abstract:

Objective / Introduction: Demand for radiology services from Emergency Departments (ED) continues to increase, with greater demands placed on radiology staff providing reports for the management of complex cases. Queuing theory indicates that wide variability of process time with the random nature of request arrival increases the probability of significant queues. This can lead to delays in the time-to-availability of radiology reports (TTA-RR) and potentially impaired ED patient flow. In addition, the greater “cognitive workload” of higher volumes may lead to reduced productivity and increased errors. We sought to quantify the potential ED flow improvements obtainable from increased radiology providers serving 3 public hospitals in Melbourne, Australia. We sought to assess the potential productivity gains, quality improvement and the cost-effectiveness of increased labor inputs. Methods & Materials: The Western Health Medical Imaging Department moved from single resident coverage on weekend days (8:30 am-10:30 pm) to a limited period of two-resident coverage (1 pm-6 pm) on both weekend days. The TTA-RR for weekend CT scans was calculated from the PACS database for the 8-month period symmetrically around the date of the staffing change. A multivariate linear regression model was developed to isolate the improvement in TTA-RR between the two 4-month periods. Daily and hourly scan volume at the time of each CT scan was calculated to assess the impact of varying department workload. To assess any improvement in report quality/errors, a random sample of 200 studies was assessed to compare the average number of clinically significant over-read addendums to reports between the 2 periods. Cost-effectiveness was assessed by comparing the marginal cost of additional staffing against a conservative estimate of the economic benefit of improved ED patient throughput, using the Australian national insurance rebate for private ED attendance as a revenue proxy. Results: The primary resident on call and the type of scan accounted for most of the explained variability in time to report availability (R²=0.29). Increasing daily and hourly volumes were associated with increased TTA-RR (1.5 min (p<0.01) and 4.8 min (p<0.01), respectively, per additional scan ordered within each time frame). Reports were available 25.9 minutes sooner on average in the 4 months post-implementation of double coverage (p<0.01), with an additional 23.6-minute improvement when 2 residents were on-site concomitantly (p<0.01). The aggregate average improvement in TTA-RR was 24.8 hours per weekend day. This represents the increased decision-making time available to ED physicians and potential improvement in ED bed utilisation. 5% of reports from the intervention period contained clinically significant addendums vs 7% in the single resident period, but this was not statistically significant (p=0.7). The marginal cost was less than the anticipated economic benefit, assuming a 50% capture of the improved TTA-RR in patient disposition and using the lowest available national insurance rebate as a proxy for economic benefit. Conclusion: TTA-RR improved significantly during the period of increased staff availability, both during the specific period of increased staffing and throughout the day. Increased labor utilisation is cost-effective compared with the potential improved productivity for ED cases requiring CT imaging.
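A condensed sketch of the kind of regression described above is shown below; the variable names and data file are hypothetical placeholders, and the authors' actual model specification is not reproduced here:

```python
# Sketch of a multivariate linear regression for report turnaround time (TTA-RR),
# in the spirit of the analysis described above. Column names and the data file
# are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("weekend_ct_reports.csv")  # hypothetical PACS extract

# TTA-RR (minutes) modelled on staffing period, on-site double coverage,
# scan type, reporting resident, and scan volume at the time of the scan.
model = smf.ols(
    "tta_rr_minutes ~ C(double_staffing_period) + C(two_residents_on_site)"
    " + C(scan_type) + C(resident) + hourly_volume + daily_volume",
    data=df,
).fit()
print(model.summary())
```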

Keywords: workflow, quality, administration, CT, staffing

Procedia PDF Downloads 112
189 The Potential of Rhizospheric Bacteria for Mycotoxigenic Fungi Suppression

Authors: Vanja Vlajkov, Ivana Pajčin, Mila Grahovac, Marta Loc, Dragana Budakov, Jovana Grahovac

Abstract:

The rhizosphere soil refers to the plant roots' dynamic environment, characterized by the high biological activity of its inhabitants. Rhizospheric bacteria are recognized as effective biocontrol agents and considered cardinal in alternative strategies for securing ecological plant disease management. The need to suppress fungal pathogens is an urgent task, not only because of the direct economic losses caused by infection but also due to their ability to produce mycotoxins with harmful effects on human health. Aspergillus and Fusarium species are well-known producers of toxigenic metabolites with a high capacity to colonize crops and enter the food chain. Bacteria belonging to the Bacillus genus have been recognized as plant-beneficial species in agricultural practice and identified as plant growth-promoting rhizobacteria (PGPR). Despite this incontestable potential, the full commercialization of microbial biopesticides is in the preliminary phase. Thus, there is a constant need for estimating the suitability of novel strains to be used as the central point of a viable bioprocess leading to market-ready product development. In the present study, 76 potential producing strains were isolated from the rhizosphere soil, sampled from different localities in the Autonomous Province of Vojvodina, Republic of Serbia. The selective isolation process started by resuspending 1 g of soil samples in 9 ml of saline and incubating at 28 °C for 15 minutes at 150 rpm. After homogenization, thermal treatment at 100 °C for 7 minutes was performed. Dilution series (10⁻¹-10⁻³) were prepared, and 500 µl of each was inoculated on nutrient agar plates and incubated at 28 °C for 48 h. Pure cultures of morphologically different strains presumptively belonging to the Bacillus genus were obtained by the spread-plate technique. The cultivation of the isolated strains was carried out in Erlenmeyer flasks for 96 h, at 28 °C, 170 rpm. The antagonistic activity screening included two phytopathogenic fungi as test microorganisms: Aspergillus sp. and Fusarium sp. The mycelial growth inhibition was estimated based on the antimicrobial activity testing of cultivation broth by the diffusion method. For the Aspergillus sp., the highest antifungal activity was recorded for the isolates Kro-4a and Mah-1a. In contrast, for the Fusarium sp., the following 15 isolates exhibited the highest antagonistic effect: Par-1, Par-2, Par-3, Par-4, Kup-4, Paš-1b, Pap-3, Kro-2, Kro-3a, Kro-3b, Kra-1a, Kra-1b, Šar-1, Šar-2b and Šar-4. One-way ANOVA was performed to determine the statistical significance of the antagonists' effect on inhibition zone diameter. Duncan's multiple range test was conducted to define homogeneous groups of antagonists with the same level of statistical significance regarding the antimicrobial activity of the tested cultivation broth against the tested pathogens. The study results have pointed out the significant in vitro potential of the isolated strains to be used as biocontrol agents for the suppression of the tested mycotoxigenic fungi. Further research should include the identification and detailed characterization of the most promising isolates and the mode of action of the selected strains as biocontrol agents. The following research should also involve bioprocess optimization steps to fully reach the selected strains' potential as microbial biopesticides and to design cost-effective biotechnological production.
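The significance testing step described above can be sketched briefly in Python; scipy provides the one-way ANOVA, while Duncan's multiple range test (used by the authors) is not available there, so a Tukey HSD grouping is shown instead as a stand-in, with hypothetical column names:

```python
# Sketch of the significance testing described above: one-way ANOVA on inhibition
# zone diameters across antagonist isolates. Column names are hypothetical, and
# Tukey HSD stands in for the Duncan's multiple range test used in the study.
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("inhibition_zones.csv")  # hypothetical: columns "isolate", "zone_mm"

groups = [g["zone_mm"].values for _, g in df.groupby("isolate")]
f_stat, p_value = f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc grouping of isolates with similar antagonistic effect
print(pairwise_tukeyhsd(endog=df["zone_mm"], groups=df["isolate"]).summary())
```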

Keywords: Bacillus, biocontrol, bioprocess, mycotoxigenic fungi

Procedia PDF Downloads 196
188 Long-Term Tillage, Lime Matter and Cover Crop Effects under Heavy Soil Conditions in Northern Lithuania

Authors: Aleksandras Velykis, Antanas Satkus

Abstract:

Clay loam and clay soils are typical for northern Lithuania. These soils are susceptible to physical degradation in the case of intensive use of heavy machinery for field operations. However, clayey soils having poor physical properties by origin require more intensive tillage to maintain proper physical condition for grown crops. Therefore not only choice of suitable tillage system is very important for these soils in the region, but also additional search of other measures is essential for good soil physical state maintenance. Research objective: To evaluate the long-term effects of different intensity tillage as well as its combinations with supplementary agronomic practices on improvement of soil physical conditions and environmental sustainability. The experiment examined the influence of deep and shallow ploughing, ploughless tillage, combinations of ploughless tillage with incorporation of lime sludge and cover crop for green manure and application of the same cover crop for mulch without autumn tillage under spring and winter crop growing conditions on clay loam (27% clay, 50% silt, 23% sand) Endocalcaric Endogleyic Cambisol. Methods: The indicators characterizing the impact of investigated measures were determined using the following methods and devices: Soil dry bulk density – by Eijkelkamp cylinder (100 cm3), soil water content – by weighing, soil structure – by Retsch sieve shaker, aggregate stability – by Eijkelkamp wet sieving apparatus, soil mineral nitrogen – in 1 N KCL extract using colorimetric method. Results: Clay loam soil physical state (dry bulk density, structure, aggregate stability, water content) depends on tillage system and its combination with additional practices used. Application of cover crop winter mulch without tillage in autumn, ploughless tillage and shallow ploughing causes the compaction of bottom (15-25 cm) topsoil layer. However, due to ploughless tillage the soil dry bulk density in subsoil (25-35 cm) layer is less compared to deep ploughing. Soil structure in the upper (0-15 cm) topsoil layer and in the seedbed (0-5 cm), prepared for spring crops is usually worse when applying the ploughless tillage or cover crop mulch without autumn tillage. Application of lime sludge under ploughless tillage conditions helped to avoid the compaction and structure worsening in upper topsoil layer, as well as increase aggregate stability. Application of reduced tillage increased soil water content at upper topsoil layer directly after spring crop sowing. However, due to reduced tillage the water content in all topsoil markedly decreased when droughty periods lasted for a long time. Combination of reduced tillage with cover crop for green manure and winter mulch is significant for preserving the environment. Such application of cover crops reduces the leaching of mineral nitrogen into the deeper soil layers and environmental pollution. This work was supported by the National Science Program ‘The effect of long-term, different-intensity management of resources on the soils of different genesis and on other components of the agro-ecosystems’ [grant number SIT-9/2015] funded by the Research Council of Lithuania.

Keywords: clay loam, endocalcaric endogleyic cambisol, mineral nitrogen, physical state

Procedia PDF Downloads 226
187 Integrated Mathematical Modeling and Advance Visualization of Magnetic Nanoparticle for Drug Delivery, Drug Release and Effects to Cancer Cell Treatment

Authors: Norma Binti Alias, Che Rahim Che The, Norfarizan Mohd Said, Sakinah Abdul Hanan, Akhtar Ali

Abstract:

This paper discusses the transportation of magnetic drug targeting through blood within vessels, tissues and cells. Three integrated mathematical models are discussed and analyzed for the concentration of drug and blood flow through magnetic nanoparticles. Cell therapy has brought advancement in the field of nanotechnology to fight against tumors. The systematic therapeutic effect on single cells can reduce the growth of cancer tissue. This nanoscale phenomenon can be measured and modelled by identifying some parameters and applying fundamental principles of mathematical modeling and simulation. The mathematical modeling of single cell growth depends on three types of cell densities: proliferative, quiescent and necrotic cells. The aim of this paper is to enhance the simulation of the three types of models. The first model represents the transport of drugs by coupled partial differential equations (PDEs) of 3D parabolic type in a cylindrical coordinate system. This model is integrated with non-Newtonian flow equations, with blood flow as the medium for the transportation system, and with the magnetic force on the magnetic nanoparticles. The interaction of the magnetic force with the drug's magnetic properties produces induced currents, and the applied magnetic field yields forces which tend to slow the movement of blood and bring the drug to the cancer cells. Nanoscale devices allow the drug to leave the blood vessels, spread out through the tissue and access the cancer cells. The second model is the transport of drug nanoparticles from the vascular system to a single cell. The treatment of the vascular system involves the identification of parameters such as magnetic nanoparticle targeted delivery, blood flow, momentum transport, density and viscosity of the drug and blood medium, intensity of the magnetic field, and the radius of the capillary. Based on two discretization techniques, the finite difference method (FDM) and the finite element method (FEM), the set of integrated models is transformed into a series of grid points to obtain a large system of equations. The third model is a single cell density model involving three sets of first order PDEs for proliferating, quiescent and necrotic cells changing over time and space in Cartesian coordinates, regulated by different rates of nutrient consumption. The model presents proliferative and quiescent cell growth that depends on some parameter changes, with the necrotic cells emerging as the tumor core. Some numerical schemes for solving the system of equations are compared and analyzed. Simulation and computation of the discretized model are supported by the Matlab and C programming languages on a single processing unit. Some numerical results and analyses of the algorithms are presented in the form of informative tables, multiple graphs and multidimensional visualization. In conclusion, the integration of the three types of mathematical modeling and the comparison of numerical performance indicate a superior tool and analysis for solving the complete magnetic drug delivery system, with significant effects on the growth of the targeted cancer cell.
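As a toy illustration of the finite difference discretization mentioned above, the sketch below solves a 1D advection-diffusion equation for drug concentration along a vessel; the geometry, coefficients and boundary conditions are placeholders, far simpler than the coupled 3D cylindrical models in the paper:

```python
# Toy finite difference (FDM) sketch: 1D advection-diffusion of drug concentration
# c(x, t) along a vessel segment, dc/dt = D d2c/dx2 - v dc/dx.
# Coefficients, geometry and boundary conditions are illustrative placeholders only.
import numpy as np

L = 0.01          # vessel segment length, m
nx = 101
dx = L / (nx - 1)
D = 1e-9          # diffusion coefficient, m^2/s (illustrative)
v = 1e-4          # blood advection velocity, m/s (illustrative; magnetic retardation ignored)
dt = min(0.4 * dx * dx / D, 0.5 * dx / v)   # explicit stability limits

c = np.zeros(nx)
c[0] = 1.0        # constant drug concentration injected at the inlet

for _ in range(5000):
    lap = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # central second difference
    adv = (c[1:-1] - c[:-2]) / dx                  # upwind first difference
    c[1:-1] = c[1:-1] + dt * (D * lap - v * adv)
    c[0] = 1.0                                     # inlet (Dirichlet)
    c[-1] = c[-2]                                  # outlet (zero-gradient)

print("concentration at mid-vessel:", round(float(c[nx // 2]), 4))
```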

Keywords: mathematical modeling, visualization, PDE models, magnetic nanoparticle drug delivery model, drug release model, single cell effects, avascular tumor growth, numerical analysis

Procedia PDF Downloads 428
186 Comparative Proteomic Profiling of Planktonic and Biofilms from Staphylococcus aureus Using Tandem Mass Tag-Based Mass Spectrometry

Authors: Arifur Rahman, Ardeshir Amirkhani, Honghua Hu, Mark Molloy, Karen Vickery

Abstract:

Introduction and Objectives: Staphylococcus aureus and coagulase-negative staphylococci account for approximately 65% of infections associated with medical devices and are well known for their biofilm-forming ability. Biofilm-related infections are extremely difficult to eradicate owing to their high tolerance to antibiotics and host immune defences. Currently, there is no efficient method for early biofilm detection. A better understanding, enabling detection of biofilm-specific proteins in vitro and in vivo, can be achieved by studying planktonic cultures and different growth phases of biofilms using a proteome analysis approach. Our goal was to construct a reference map of planktonic and biofilm-associated proteins of S. aureus. Methods: The S. aureus reference strain (ATCC 25923) was used to grow a 24-hour planktonic culture, a 3-day wet biofilm (3DWB), and a 12-day wet biofilm (12DWB). Bacteria were grown in tryptic soy broth (TSB) liquid medium. Planktonic growth used late logarithmic phase bacteria, and the Centers for Disease Control (CDC) biofilm reactor was used to grow the 3-day and 12-day hydrated biofilms, respectively. Samples were subjected to reduction, alkylation and digestion steps prior to multiplex labelling using the Tandem Mass Tag (TMT) 10-plex reagent (Thermo Fisher Scientific). The labelled samples were pooled and fractionated by high pH RP-HPLC, which was followed by loading of the fractions on a nanoflow UPLC system (Eksigent UPLC system, AB SCIEX). Mass spectrometry (MS) data were collected on an Orbitrap Elite (Thermo Fisher Scientific) mass spectrometer. Protein identification and relative quantitation of protein levels were performed using Proteome Discoverer (version 1.3, Thermo Fisher Scientific). After the extraction of protein ratios with Proteome Discoverer, additional processing and statistical analysis were done using the TMTPrePro R package. Results and Discussion: The present study showed that considerable proteomic differences exist between planktonic cultures and biofilms of S. aureus. We identified 1636 total extracellular secreted proteins, of which 350 and 137 proteins in 3DWB and 12DWB, respectively, showed significant abundance variation from the planktonic preparation. Of these, proteins simultaneously up-regulated in both 3DWB and 12DWB in this biofilm producer included extracellular matrix-binding protein Ebh, enolase, transketolase, triosephosphate isomerase, chaperonin, peptidase, pyruvate kinase, hydrolase, aminotransferase, ribosomal proteins, acetyl-CoA acetyltransferase, DNA gyrase subunit A, glycine glycyltransferase and others. Conversely, proteins simultaneously down-regulated in both 3DWB and 12DWB included alpha- and delta-hemolysin, lipoteichoic acid synthase, enterotoxin I, serine protease, lipase, clumping factor B, regulatory protein Spx, phosphoglucomutase, and others. In addition, we also identified a large percentage of hypothetical proteins, including unique proteins. Therefore, a comprehensive knowledge of the planktonic and biofilm-associated proteins identified in S. aureus will provide a basis for future studies on the development of vaccines and diagnostic biomarkers. Conclusions: In this study, we constructed an initial reference map of planktonic and biofilm-associated proteins at various growth phases, which might be helpful for diagnosing biofilm-associated infections.

Keywords: bacterial biofilms, CDC bioreactor, S. aureus, mass spectrometry, TMT

Procedia PDF Downloads 171
185 Canadian Undergraduate and Graduate Nursing Students: Interest in Education in Medical and Recreational Cannabis for Practice and Career Development

Authors: Margareth S. Zanchetta, Kateryna Metersky, Valerie Tan, Charissa Cordon, Stephanie Lucchese, Yana Siganevich, Prasha Sivasundaram, Truong Binh Nguyen, Imran Qureshi

Abstract:

Because cannabis-based therapy is a new area of practice, Canadian nurses possess knowledge gaps regarding the use of cannabis-based therapies by clients/patients. Education related to medical cannabis (MC) and recreational cannabis (RC) is required to promote nurses’ competency and confidence in supporting clients/patients using MC/RC toward the improvement of health outcomes. A team composed of nursing researchers and undergraduate/graduate students implemented a national survey to explore this theme with the population of undergraduate, graduate (MN and NP), and Post-Diploma (RN Bridging) nursing students enrolled in Canadian university nursing programs. Upon Research Ethics Board approval, survey recruitment was supported by major nursing stakeholders. The research questions were: (a) What are the most preferred sources of information on MC/RC for nursing students? (b) What factors and preferred learning modalities could increase interest in learning about MC/RC? and (c) What are the future career plans of nursing students, and how would they consider the prospective use of cannabis in their practice? The survey was available from Sept. 2022 to Feb. 2023, hosted on a remote platform. An original questionnaire (English-French) was composed of 18 multiple-choice questions and 2 open-ended questions. Sociodemographic information and closed-ended responses were compiled as descriptive statistics, while narrative accounts will be analysed through thematic analysis. Respondents (n=153) were from 7 Canadian provinces and were national (99%) and international (1%) students; the majority of respondents (61%) were in the age range of 21-30 years old. Results indicated that respondents perceive a gap in the undergraduate curriculum on the topics of MC/RC (91%) and that their learning needs include regulations (90%), data on effectiveness (88%), dosing best practices (86%), contraindications (83%), and clinical and medical indications (76%). Respondents reported motivation to learn more about MC/RC through online lectures/videos (65%), e-learning modules or online interactive training (61%), workshops (51%), webinars (36%), and social media (35%). Their primary career-related motivations regarding MC/RC knowledge include enhancing nursing practice (76%), learning about this growing scope of practice (61%), keeping up to date and responding to scientific curiosity (59%), learning about evidence-based practice (59%), and utilizing alternative forms of medical treatment (37%). Respondents indicated that the integration of topics on cannabis into any course in the undergraduate and/or graduate curriculum would increase their desire to learn about MC/RC as much as exposure within a clinical setting would (75%). The emerging trend in the set of narrative responses (n=130) suggests that respondents believe educational MC/RC content should be integrated into core nursing courses. Respondents also urged educators to be well-informed about evidence-based practice related to MC/RC and to reflect upon the stigma and biases surrounding its use. Future knowledge dissemination and translation activities include scholarly products and presentations to stimulate discussion amongst nursing faculty and students, as well as nurses in clinical settings. The goal is to mobilise talents and build collaboration for the development of a socially responsive curriculum on MC/RC competency to address the education-related expectations of all these social actors.

Keywords: Canada, medical cannabis, nursing education, nursing graduate student, nursing undergraduate student, online survey, recreational cannabis

Procedia PDF Downloads 90
184 La0.80Ag0.15MnO3 Magnetic Nanoparticles for Self-Controlled Magnetic Fluid Hyperthermia

Authors: Marian Mihalik, Kornel Csach, Martin Kovalik, Matúš Mihalik, Martina Kubovčíková, Maria Zentková, Martin Vavra, Vladimír Girman, Jaroslav Briančin, Marija Perovic, Marija Boškovic, Magdalena Fitta, Robert Pelka

Abstract:

Current nanomaterials for use in biomedicine are based mainly on iron oxides and on present knowledge of magnetic nanostructures. Manganites represent an alternative class of materials that can also be used. Manganites and their unique electronic properties have been extensively studied in the last decades, not only due to fundamental interest but also due to possible applications of colossal magnetoresistance, the magnetocaloric effect, and ferroelectric properties. It was found that the oxygen-reduction reaction on perovskite oxides is intimately connected with the metal-ion e_g orbital occupation. The effect of oxygen deviation from the stoichiometric composition on the crystal structure has been studied very carefully by many authors on LaMnO₃. Depending on oxygen content, the crystal structure changes from orthorhombic to rhombohedral for an oxygen content of 3.1. In the case of hole-doped manganites, the change from the orthorhombic crystal structure, which is typical for La1-xCaxMnO3-based manganites, to the rhombohedral crystal structure (La1-xMxMnO₃-based materials, where M = K, Ag, and Sr) results in an enormous increase of the Curie temperature. In our paper, we study the effect of oxygen content on the crystal structure and the thermal and magnetic properties (including the magnetocaloric effect) of the La1-xAgxMnO₃ nanoparticle system. The oxygen content of the samples was tuned by heat treatment in different thermal regimes and in various environments (air, oxygen, argon). Water-based nanosuspensions of La0.80Ag0.15MnO₃ magnetic particles with a Curie temperature of about 43 °C were prepared by two different approaches. The first used a laboratory circulation mill for milling the powder in the presence of sodium dodecyl sulphate (SDS), followed by centrifugation. The second nanosuspension was prepared using an agate bowl, etching in citric acid and HNO3, an ultrasound homogeniser, centrifugation, and dextran 40 kDa or 15 kDa as surfactant. Electrostatic stabilisation obtained by the first approach did not offer long-term kinetic and aggregation colloidal stability and was unable to compensate for attractive forces between particles under a magnetic field. By the second approach, we prepared a suspension oversaturated with dextran 40 kDa for steric stabilisation, with evidence of superparamagnetic behaviour. The disadvantages of this approach were the low concentration of nanoparticles and the imperfect coverage of the nanoparticles, which affected the stability of the ferrofluids. Strong steric stabilisation was observable under alkaline conditions at pH ≈ 10. Application of dextran 15 kDa led to a relatively stable ferrofluid with pH around physiological conditions, but disaggregation of the powder by HNO₃ was not effective enough, the average size of fragments was too large, about 150 nm, and we did not see any signature of superparamagnetic behaviour. The prepared ferrofluids were characterised by scanning and transmission electron microscopy, thermogravimetry, magnetization, and AC susceptibility measurements. Specific absorption rate measurements were undertaken on powder as well as on ferrofluids in order to estimate the potential application of ferrofluids based on La₀.₈₀Ag₀.₁₅MnO₃ magnetic particles for hyperthermia. Our complex study also contains an investigation of the biocompatibility and potential biohazard of this material.
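
The specific absorption rate mentioned above is commonly estimated calorimetrically from the initial heating slope of the suspension in an alternating magnetic field. The sketch below shows that standard calculation with placeholder numbers; none of the values are measurements from this study.

```python
# Illustrative calorimetric SAR estimate: SAR = (C_fluid * m_fluid / m_magnetic) * dT/dt,
# where dT/dt is the initial heating slope under the alternating field.
# All numbers below are placeholder assumptions, not data from this work.
import numpy as np

time_s = np.array([0, 30, 60, 90, 120])            # s, hypothetical sampling times
temp_c = np.array([25.0, 25.9, 26.8, 27.6, 28.5])  # °C, hypothetical readings

c_fluid = 4185.0      # J/(kg·K), specific heat of a water-based carrier (assumed)
m_fluid = 1.0e-3      # kg of suspension in the calorimeter (assumed)
m_magnetic = 5.0e-6   # kg of magnetic particles in that volume (assumed)

dT_dt = np.polyfit(time_s, temp_c, 1)[0]           # K/s, initial-slope approximation
sar = c_fluid * m_fluid / m_magnetic * dT_dt       # W per kg of magnetic material
print(f"dT/dt = {dT_dt:.4f} K/s, SAR ≈ {sar:.0f} W/kg")
```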

Keywords: manganites, magnetic nanoparticles, oxygen content, magnetic phase transition, magnetocaloric effect, ferrofluid, hyperthermia

Procedia PDF Downloads 89
183 A Computational Investigation of Potential Drugs for Cholesterol Regulation to Treat Alzheimer’s Disease

Authors: Marina Passero, Tianhua Zhai, Zuyi (Jacky) Huang

Abstract:

Alzheimer’s disease has become a major public health issue, as indicated by the growing number of Americans living with Alzheimer’s disease. After decades of extensive research in Alzheimer’s disease, only seven drugs have been approved by the Food and Drug Administration (FDA) to treat Alzheimer’s disease. Five of these drugs were designed to treat the dementia symptoms, and only two drugs (i.e., Aducanumab and Lecanemab) target the progression of Alzheimer’s disease, especially the accumulation of amyloid-beta plaques. However, controversy has surrounded the accelerated approvals of both Aducanumab and Lecanemab, especially over the safety and side effects of these two drugs. There is still an urgent need for further drug discovery to target the biological processes involved in the progression of Alzheimer’s disease. Excessive cholesterol has been found to accumulate in the brains of those with Alzheimer’s disease. Cholesterol can be synthesized in both the blood and the brain, but the majority of biosynthesis in the adult brain takes place in astrocytes and is then transported to the neurons via ApoE. The blood-brain barrier separates cholesterol metabolism in the brain from that in the rest of the body. Various proteins contribute to the metabolism of cholesterol in the brain, which offers potential targets for Alzheimer’s treatment. In the astrocytes, SREBP cleavage-activating protein (SCAP) binds to Sterol Regulatory Element-binding Protein 2 (SREBP2) in order to transport the complex from the endoplasmic reticulum to the Golgi apparatus. Cholesterol is secreted out of the astrocytes by the ATP-Binding Cassette A1 (ABCA1) transporter. Lipoprotein receptors such as triggering receptor expressed on myeloid cells 2 (TREM2) internalize cholesterol into the microglia, while lipoprotein receptors such as low-density lipoprotein receptor-related protein 1 (LRP1) internalize cholesterol into the neuron. Cytochrome P450 Family 46 Subfamily A Member 1 (CYP46A1) converts excess cholesterol to 24S-hydroxycholesterol (24S-OHC). Cholesterol has been shown to directly affect the production of amyloid-beta and tau proteins. The addition of cholesterol to the brain promotes the activity of beta-site amyloid precursor protein cleaving enzyme 1 (BACE1), secretase, and amyloid precursor protein (APP), which all aid in amyloid-beta production. The reduction of cholesterol esters in the brain has been found to reduce phosphorylated tau levels in mice. In this work, a computational pipeline was developed to identify the protein targets involved in cholesterol regulation in the brain and, further, to identify chemical compounds as inhibitors of a selected protein target. Since extensive evidence shows a strong correlation between brain cholesterol regulation and Alzheimer’s disease, a detailed literature review on genes and pathways related to brain cholesterol synthesis and regulation was first conducted in this work. An interaction network was then built for those genes so that the top gene targets could be identified. The involvement of these genes in Alzheimer’s disease progression was discussed, followed by an investigation of existing clinical trials for those targets. A ligand-protein docking program was finally developed to screen 1.5 million chemical compounds for the selected protein target. A machine learning program was developed to evaluate and predict the binding interaction between chemical compounds and the protein target. The results from this work pave the way for further drug discovery to regulate brain cholesterol to combat Alzheimer’s disease.
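
The gene-network step described above can be illustrated with a small sketch: build an interaction graph over cholesterol-regulation genes and rank nodes by a simple centrality measure to nominate top targets. The edge list below is a toy example built from genes named in the abstract, not the curated network used in the study.

```python
# Illustrative sketch of the gene-network ranking step: build an interaction graph for
# brain-cholesterol genes named in the abstract and rank nodes by degree centrality.
# The edge list is a toy example, not the curated interaction network used in the study.
import networkx as nx

edges = [
    ("SREBP2", "SCAP"), ("SREBP2", "LDLR"), ("ABCA1", "APOE"),
    ("APOE", "LRP1"), ("APOE", "TREM2"), ("CYP46A1", "SREBP2"),
    ("LRP1", "APP"), ("BACE1", "APP"),
]
g = nx.Graph(edges)

centrality = nx.degree_centrality(g)
top_targets = sorted(centrality, key=centrality.get, reverse=True)[:3]
print("Top candidate targets by degree centrality:", top_targets)
```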

Keywords: Alzheimer’s disease, drug discovery, ligand-protein docking, gene-network analysis, cholesterol regulation

Procedia PDF Downloads 74
182 A Tool to Provide Advanced Secure Exchange of Electronic Documents through Europe

Authors: Jesus Carretero, Mario Vasile, Javier Garcia-Blas, Felix Garcia-Carballeira

Abstract:

Supporting secure and reliable cross-border exchange of data and documents and promoting data interoperability are critical for Europe to enhance sectors such as eFinance, eJustice, and eHealth. This work presents the status and results of the European project MADE, a research project funded by the Connecting Europe Facility Programme, to provide secure e-invoicing and e-document exchange systems among European countries in compliance with the eIDAS Regulation (Regulation EU 910/2014 on electronic identification and trust services). The main goal of MADE is to develop six new AS4 Access Points and SMPs in Europe to provide secure document exchange using the eDelivery DSI (Digital Service Infrastructure) amongst both private and public entities. Moreover, the project demonstrates the feasibility and value of the solution by providing several months of interoperability among the providers of the six partners in different EU countries. To achieve those goals, we have followed a methodology that first sets a common background for the requirements in the partner countries and the European regulations. Then, the partners have implemented access points in each country, including their service metadata publisher (SMP), to give their clients access to the pan-European network. Finally, we have set up interoperability tests with the other access points of the consortium. The tests will include the use of each entity's production-ready information systems that process the data, to confirm all steps of the data exchange. For the access points, we have chosen AS4 instead of other existing alternatives because it supports multiple payloads, native web services, pulling facilities, lightweight client implementations, modern cryptographic algorithms, and more authentication types, such as username-password, X.509, and SAML authentication. The main contribution of the MADE project is to open the path for European companies to use eDelivery services with cross-border exchange of electronic documents following PEPPOL (Pan-European Public Procurement Online), based on the e-SENS AS4 profile. It also includes the development and integration of new components, the integration of new and existing logging and traceability solutions, and maintenance tool support for PKI. Moreover, we have found that most companies are still not ready to support those profiles, so further efforts will be needed to promote this technology among companies. The consortium includes the following 9 partners. Of these, 2 are research institutions: University Carlos III of Madrid (coordinator) and Universidad Politecnica de Valencia. The other 7 (EDICOM, BIZbrains, Officient, Aksesspunkt Norge, eConnect, LMT group, Unimaze) are private entities specialized in the secure delivery of electronic documents and information integration brokerage in their respective countries. To achieve cross-border operability, they will include AS4 and SMP services in their platforms according to the EU Core Service Platform. The MADE project is instrumental in testing the feasibility of cross-border document eDelivery in Europe. If successful, not only e-invoices but many other types of documents will be securely exchanged across Europe, and the project will be the basis for extending the network to the whole of Europe. This project has been funded under the Connecting Europe Facility Agreement number: INEA/CEF/ICT/A2016/1278042. Action No: 2016-EU-IA-0063.
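
AS4 exchanges rely on message-level security with X.509 credentials. The sketch below shows, in generic terms, how a document payload can be signed and verified with an RSA key pair using the Python cryptography library; it only illustrates the underlying primitive and is not the MADE access-point implementation or the e-SENS AS4 profile itself.

```python
# Generic illustration of message-level signing with an RSA key pair, the kind of
# X.509-based primitive that AS4 message security builds on. This is NOT the MADE
# access-point code or the e-SENS AS4 profile; it only sketches the underlying idea.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

payload = b"<Invoice>...</Invoice>"  # placeholder e-document payload

# Sender signs the payload; the signature travels with the message.
signature = private_key.sign(payload, padding.PKCS1v15(), hashes.SHA256())

# Receiver verifies integrity and origin; verify() raises InvalidSignature on failure.
public_key.verify(signature, payload, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```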

Keywords: security, e-delivery, e-invoicing, e-document exchange, trust

Procedia PDF Downloads 265
181 Pharmacokinetics of First-Line Tuberculosis Drugs in South African Patients from Kwazulu-Natal: Effects of Pharmacogenetic Variation on Rifampicin and Isoniazid Concentrations

Authors: Anushka Naidoo, Veron Ramsuran, Maxwell Chirehwa, Paolo Denti, Kogieleum Naidoo, Helen McIlleron, Nonhlanhla Yende-Zuma, Ravesh Singh, Sinaye Ngcapu, Nesri Padayatachi

Abstract:

Background: Despite efforts to introduce new drugs and shorter drug regimens for drug-susceptible tuberculosis (TB), the standard first-line treatment has not changed in over 50 years. Rifampicin, isoniazid, and pyrazinamide are critical components of the current standard treatment regimens. Some studies suggest that microbiologic failure and acquired drug resistance are primarily driven by low drug concentrations that result from pharmacokinetic (PK) variability independent of adherence to treatment. Wide between-patient pharmacokinetic variability for rifampicin, isoniazid, and pyrazinamide has been reported in prior studies. There may be several reasons for this variability; however, genetic variability in genes coding for drug-metabolizing and transporter enzymes has been shown to be a contributing factor to variable tuberculosis drug exposures. Objective: We describe the pharmacokinetics of the first-line TB drugs rifampicin, isoniazid, and pyrazinamide and assess the effect of genetic variability in relevant selected drug-metabolizing and transporter enzymes on the pharmacokinetic parameters of isoniazid and rifampicin. Methods: We conducted the randomized controlled Improving Retreatment Success TB trial in Durban, South Africa. The drug regimen included rifampicin, isoniazid, and pyrazinamide. Drug concentrations were measured in plasma, and concentration-time data were analysed using nonlinear mixed-effects models to quantify the effects of relevant covariates and single nucleotide polymorphisms (SNPs) of drug-metabolizing and transporter genes on rifampicin, isoniazid, and pyrazinamide exposure. A total of 25 SNPs were selected for analysis in this study: four NAT2 (used to determine acetylator status), four SLCO1B1, three pregnane X receptor (NR1), six ABCB1, and eight UGT1A. Genotypes were determined for each of the SNPs using a TaqMan® Genotyping OpenArray™. Results: Among the fifty-eight patients studied, 41 (70.7%) were male, 97% were black African, 42 (72.4%) were HIV co-infected, and 40 (95%) were on efavirenz-based ART. Median weight, fat-free mass (FFM), and age at baseline were 56.9 kg (interquartile range, IQR: 51.1-65.2), 46.8 kg (IQR: 42.5-50.3), and 37 years (IQR: 31-42), respectively. The pharmacokinetics of rifampicin and pyrazinamide were best described using one-compartment models with first-order absorption and elimination, while for isoniazid a two-compartment disposition model was used. The median (interquartile range: IQR) AUC (h·mg/L) and Cmax (mg/L) were 25.62 (23.01-28.53) and 4.85 (4.36-5.40) for rifampicin, 10.62 (9.20-12.25) and 2.79 (2.61-2.97) for isoniazid, and 345.74 (312.03-383.10) and 28.06 (25.01-31.52) for pyrazinamide, respectively. Eighteen percent of patients were classified as rapid acetylators, and 34% and 43% as slow and intermediate acetylators, respectively. Rapid and intermediate acetylator status based on NAT2 genotype resulted in 2.3 and 1.6 times higher isoniazid clearance, respectively, than in slow acetylators. We found no effects of the SLCO1B1 genotypes on rifampicin pharmacokinetics. Conclusion: Plasma concentrations of rifampicin, isoniazid, and pyrazinamide were low overall in our patients. Isoniazid clearance was high overall and, as expected, higher in rapid and intermediate acetylators, resulting in lower drug exposures. In contrast to reports from previous South African or Ugandan studies, we did not find any effects of the SLCO1B1 or other genotypes tested on rifampicin PK. However, our findings are in keeping with more recent studies from Malawi and India, emphasizing the need for geographically diverse and adequately powered studies. The clinical relevance of the low tuberculosis drug concentrations warrants further investigation.
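
As a back-of-the-envelope illustration of the one-compartment, first-order absorption and elimination model used above for rifampicin and pyrazinamide, the sketch below simulates a concentration-time profile and reads off Cmax and AUC numerically. The dose and rate constants are generic placeholders, not parameter estimates from this cohort.

```python
# Illustrative one-compartment model with first-order absorption and elimination:
# C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t)).
# Parameter values are generic placeholders, not estimates from this study.
import numpy as np

F, dose_mg = 1.0, 600.0   # bioavailability and oral dose (assumed)
ka, ke = 1.5, 0.25        # 1/h absorption and elimination rate constants (assumed)
V = 50.0                  # L, apparent volume of distribution (assumed)

t = np.linspace(0, 24, 481)                                    # hours
conc = F * dose_mg * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

cmax = conc.max()
tmax = t[conc.argmax()]
auc = ((conc[1:] + conc[:-1]) / 2 * np.diff(t)).sum()          # trapezoidal AUC, 0-24 h
print(f"Cmax ≈ {cmax:.1f} mg/L at {tmax:.1f} h, AUC0-24 ≈ {auc:.0f} h·mg/L")
```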

Keywords: rifampicin, isoniazid, pharmacokinetics, genetics, NAT2, SLCO1B1, tuberculosis

Procedia PDF Downloads 186
180 Impact of Lack of Testing on Patient Recovery in the Early Phase of COVID-19: Narratively Collected Perspectives from a Remote Monitoring Program

Authors: Nicki Mohammadi, Emma Reford, Natalia Romano Spica, Laura Tabacof, Jenna Tosto-Mancuso, David Putrino, Christopher P. Kellner

Abstract:

Introductory Statement: The onset of the COVID-19 pandemic created an unprecedented need for the rapid development, distribution, and application of infection testing. However, despite the impressive mobilization of resources, individuals had extremely limited access to tests, particularly during the initial months of the pandemic (March-April 2020) in New York City (NYC). Access to COVID-19 testing is crucial to understanding patients’ illness experiences and integral to the development of COVID-19 standard-of-care protocols, especially in the context of overall access to healthcare resources. Succinct Description of Basic Methodologies: Eighteen patients in a COVID-19 remote patient monitoring program (Precision Recovery within the Mount Sinai Health System) were interviewed regarding their experience with COVID-19 during the first wave (March-May 2020) of the COVID-19 pandemic in New York City. Patients were asked about their experiences navigating COVID-19 diagnoses, the health care system, and their recovery process. Transcribed interviews were analyzed for thematic codes, using grounded theory to guide the identification of emergent themes and codebook development through an iterative process. Data coding was performed using NVivo12. References for the domain “testing” were then extracted and analyzed for themes and statistical patterns. Clear Indication of Major Findings of the Study: 100% of participants (18/18) referenced COVID-19 testing in their interviews, with a total of 79 references across the 18 transcripts (average: 4.4 references/interview; 2.7% interview coverage). 89% of participants (16/18) discussed the difficulty of access to testing, including denial of testing without high severity of symptoms, geographical distance to the testing site, and lack of testing resources at healthcare centers. Participants shared varying perspectives on how the lack of certainty regarding their COVID-19 status affected their course of recovery. One participant shared that, because she never tested positive, she was shielded from anxiety and fear, given the death toll in NYC. Another group of participants shared that not having a concrete status to share with family, friends, and professionals affected how seriously onlookers took their symptoms. Furthermore, the absence of a positive test barred some individuals from access to treatment programs and employment support. Concluding Statement: Lack of access to COVID-19 testing in the first wave of the pandemic in NYC was a prominent element of patients’ illness experience, particularly during their recovery phase. While for some the lack of concrete results was protective, most emphasized the invalidating effect this had on the perception of illness for both self and others. COVID-19 testing is now widely accessible; however, those who are unable to demonstrate a positive test result but who are still presumed to have had COVID-19 in the first wave must continue to adapt to and live with the effects of this gap in knowledge and care on their recovery. Future efforts are required to ensure that patients do not face barriers to care due to the lack of testing and are reassured regarding their access to healthcare. Affiliations: 1. Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, NY; 2. Abilities Research Center, Department of Rehabilitation and Human Performance, Icahn School of Medicine at Mount Sinai, New York, NY.

Keywords: accessibility, COVID-19, recovery, testing

Procedia PDF Downloads 193
179 Chronic Impact of Silver Nanoparticle on Aerobic Wastewater Biofilm

Authors: Sanaz Alizadeh, Yves Comeau, Arshath Abdul Rahim, Sunhasis Ghoshal

Abstract:

The application of silver nanoparticles (AgNPs) in personal care products and various household and industrial products has resulted in inevitable environmental exposure to such engineered nanoparticles (ENPs). Ag ENPs, released via household and industrial wastes, reach water resource recovery facilities (WRRFs), yet the fate and transport of ENPs in WRRFs and their potential risk to biological wastewater processes are poorly understood. Accordingly, our main objective was to elucidate the impact of long-term continuous exposure to AgNPs on the biological activity of an aerobic wastewater biofilm. The fate, transport, and toxicity of 10 μg.L-1 and 100 μg.L-1 PVP-stabilized AgNPs (50 nm) were evaluated in an attached-growth biological treatment process, using lab-scale moving bed bioreactors (MBBRs). Two MBBR systems for organic matter removal were fed with a synthetic influent and operated at a hydraulic retention time (HRT) of 180 min and a 60% volumetric filling ratio of Anox-K5 carriers with a specific surface area of 800 m2/m3. Both reactors were operated for 85 days after reaching steady-state conditions to develop a mature biofilm. The impact of AgNPs on the biological performance of the MBBRs was characterized over a period of 64 days in terms of the filtered biodegradable COD (SCOD) removal efficiency, biofilm viability, and key enzymatic activities (α-glucosidase and protease). The AgNPs were quantitatively characterized using single-particle inductively coupled plasma mass spectrometry (spICP-MS), determining simultaneously the particle size distribution, particle concentration, and dissolved silver content in influent, bioreactor, and effluent samples. The generation of reactive oxygen species (ROS) and the resulting oxidative stress were assessed as the proposed toxicity mechanism of AgNPs. Results indicated that a low concentration of AgNPs (10 μg.L-1) did not significantly affect the SCOD removal efficiency, whereas a significant reduction in treatment efficiency (37%) was observed at 100 μg.L-1 AgNPs. Neither the viability nor the enzymatic activities of the biofilm were affected at 10 μg.L-1 AgNPs, but the higher concentration of AgNPs induced cell membrane integrity damage, resulting in a 31% loss of viability and reductions of the α-glucosidase and protease enzymatic activities by 31% and 29%, respectively, over the 64-day exposure period. The elevated intracellular ROS in the biofilm at the higher AgNPs concentration over time was consistent with the reduced biological performance of the biofilm, confirming the occurrence of nanoparticle-induced oxidative stress in the heterotrophic biofilm. The spICP-MS analysis demonstrated a decrease in the nanoparticle concentration over the first 25 days, indicating significant partitioning of AgNPs into the biofilm matrix in both reactors. The concentration of nanoparticles increased in the effluent of both reactors after 25 days, however, indicating a decreased retention capacity of AgNPs in the biofilm. The observed significant detachment of biofilm also contributed to a higher release of nanoparticles due to the cell-wall-destabilizing properties of AgNPs as an antimicrobial agent. The removal efficiency of PVP-AgNPs and the biological responses of the biofilm were a function of nanoparticle concentration and exposure time. This study contributes to a better understanding of the fate and behavior of AgNPs in biological wastewater processes, providing key information that can be used to predict the environmental risks of ENPs in aquatic ecosystems.
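
In single-particle ICP-MS, each detected pulse is converted to a particle mass via a calibration (transport efficiency and ionic sensitivity), and the equivalent spherical diameter then follows from the material density as d = (6m/πρ)^(1/3). The sketch below illustrates that conversion with placeholder calibration values, not the calibration used in this study.

```python
# Illustrative spICP-MS conversion from pulse intensity to equivalent spherical diameter:
# mass per event m = counts / (transport_efficiency * sensitivity), then
# d = (6*m / (pi * rho))**(1/3). Calibration values are placeholders, not this study's.
import numpy as np

pulse_counts = np.array([250.0, 350.0, 800.0, 150.0])  # hypothetical detector counts per event

transport_efficiency = 0.05        # fraction of nebulised sample reaching the plasma (assumed)
sensitivity_counts_per_fg = 1.0e4  # detector response per femtogram of Ag (assumed)
rho_ag = 1.049e-20                 # g/nm^3, density of silver (10.49 g/cm^3)

mass_fg = pulse_counts / (transport_efficiency * sensitivity_counts_per_fg)  # fg per particle
mass_g = mass_fg * 1e-15

diameter_nm = (6.0 * mass_g / (np.pi * rho_ag)) ** (1.0 / 3.0)
print(np.round(diameter_nm, 1))  # equivalent spherical diameters in nm
```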

Keywords: biofilm, silver nanoparticle, single particle ICP-MS, toxicity, wastewater

Procedia PDF Downloads 268
178 Cluster Randomized Trial of 'Ready to Learn': An After-School Literacy Program for Children Starting School

Authors: Geraldine Macdonald, Oliver Perra, Nina O’Neill, Laura Neeson, Kathryn Higgins

Abstract:

Background: Despite improvements in recent years, almost one in six children in Northern Ireland (NI) leaves primary school without achieving the expected level in English and Maths. By early adolescence, this ratio is one in five. In 2010-11, around 9000 pupils in NI had failed to achieve the required standard in literacy and numeracy by the time they left full-time education. This paper reports the findings of an experimental evaluation of a programme designed to improve the educational outcomes of a cohort of children starting primary school in areas of high social disadvantage in Northern Ireland. The intervention: ‘Ready to Learn’ comprised two key components: a literacy-rich after-school programme (one hour after school, three days per week), and a range of activities and support to promote the engagement of parents with their children’s learning, in school and at home. The intervention was delivered between September 2010 and August 2013. Study aims and objectives: The primary aim was to assess whether, and to what extent, ‘Ready to Learn’ improved the literacy of socially disadvantaged children entering primary schools compared with children in schools without access to the programme. Secondary aims included assessing the programme’s impact on children’s social, emotional and behavioural regulation, and on parents’ engagement with their children’s learning. In total, 505 children (almost all) participated in the baseline assessment for the study, with good retention over seven sweeps of data collection. Study design: The intervention was evaluated by means of a cluster randomized trial, with schools as the unit of randomization and analysis. It included a qualitative component designed to examine process and implementation, and to explore the concept of parental engagement. Sixteen schools participated, with nine randomized to the experimental group. As well as outcome data relating to children, 134 semi-structured interviews were conducted with parents over the three years of the study, together with 88 interviews with school staff. Results: Given the children’s ages, not all measures used were direct measures of reading. Findings point to a positive impact of “Ready to Learn” on children’s reading achievement: comprehension and fluency, as assessed by the York Assessment of Reading Comprehension (YARC), and decoding, as assessed using the Word Recognition and Phonic Skills test (WRaPS3). Effects were not large, but evidence suggests that it is unusual for an after-school programme to demonstrate clear effects on reading skills. No differences were found on three other measures of literacy-related skills: the British Picture Vocabulary Scale (BPVS-II), the Naming Speed and Non-word Reading Tests from the Phonological Assessment Battery (PhAB), or Concepts about Print (CAP) – the last due to an age-related ceiling effect. No differences were found between the two groups on measures of social, emotional and behavioural regulation, and, due to low levels of participation, it was not possible directly to assess the contribution of the parent component to children’s outcomes. The qualitative data highlighted conflicting concepts of engagement between parents and school staff. Ready to Learn is a promising intervention that merits further support and evaluation.
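
Because schools were the unit of randomization and analysis, one simple way to respect the clustering is to aggregate pupil outcomes to school-level means before comparing arms. The sketch below illustrates that cluster-level comparison with invented scores; it is not the trial's data or its full analysis model.

```python
# Illustrative cluster-level analysis for a school-randomized trial: average the outcome
# within each school (the unit of randomization), then compare arm means across schools.
# Scores below are invented for illustration; this is not the trial's data or full model.
import pandas as pd
from scipy import stats

pupils = pd.DataFrame({
    "school": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],
    "arm":    ["intervention"] * 6 + ["control"] * 6,
    "yarc":   [102, 98, 95, 101, 104, 99, 94, 96, 93, 97, 92, 95],  # hypothetical reading scores
})

school_means = pupils.groupby(["school", "arm"], as_index=False)["yarc"].mean()
intervention = school_means.loc[school_means["arm"] == "intervention", "yarc"]
control = school_means.loc[school_means["arm"] == "control", "yarc"]

t, p = stats.ttest_ind(intervention, control)
print(f"cluster-level difference = {intervention.mean() - control.mean():.1f}, p = {p:.3f}")
```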

Keywords: after-school, education, literacy, parental engagement

Procedia PDF Downloads 379
177 Salicornia bigelovii, a Promising Halophyte for Biosaline Agriculture: Lessons Learned from a 4-Year Field Study in United Arab Emirates

Authors: Dionyssia Lyra, Shoaib Ismail

Abstract:

Salinization of natural resources constitutes a significant component of the degradation forces that lead to the depletion of productive lands and freshwater reserves. The global extent of salt-affected soils is approximately 7% of the earth’s land surface and is expanding. The problems of excessive salt accumulation are most widespread in coastal, arid, and semi-arid regions, where agricultural production is substantially hindered. The use of crops that can withstand highly saline conditions is extremely interesting in such a context. Salt-loving plants, or ‘halophytes’, thrive when grown in hostile saline conditions where traditional crops cannot survive. Salicornia bigelovii, a halophytic crop with multiple uses (vegetable, forage, biofuel), has demonstrated remarkable adaptability to the harsh climatic conditions prevailing in dry areas, with great potential for its expansion. Since 2011, the International Center for Biosaline Agriculture (ICBA) has been working with the Masdar Institute (MI) and King Abdullah University of Science & Technology (KAUST) to look into the potential for growing S. bigelovii under hot and dry conditions. Through the projects undertaken, 50 different S. bigelovii genotypes were assessed under highly saline conditions. The overall goal was to select the best-performing S. bigelovii populations in terms of seed and biomass production for future breeding. Specific objectives included: 1) evaluation of selected S. bigelovii genotypes for various agronomic and growth parameters under field conditions, 2) seed multiplication of S. bigelovii using saline groundwater, and 3) acquisition of inbred lines for further breeding. Field trials were conducted for four consecutive years at ICBA headquarters. During the first year, one Salicornia population was evaluated for seed and biomass production at different salinity levels, fertilizer treatments, and planting methods. All growth parameters and the biomass productivity of this Salicornia population showed an optimum in terms of both salinity level and fertilizer application. During the second year, 46 Salicornia populations (obtained from KAUST and the Masdar Institute) were evaluated for 24 growth parameters and irrigated with groundwater through drip irrigation. The plant material originated from wild collections. Six populations were also assessed for their growth performance under full-strength seawater. The Salicornia populations were highly variable for all characteristics under study for both irrigation treatments, indicating that there is a large pool of genetic information available for breeding. Irrigation with the highest level of salinity had a negative impact on agronomic performance. The maximum seed yield obtained was 2 t/ha at 20 dS/m (groundwater treatment) at a 25 cm x 25 cm planting distance. The best-performing Salicornia populations for fresh biomass and seed yield were selected for the following season. After continuous selection, the best-performing Salicornia populations will be adopted for scaling-up options. Taking into account the results of the production field trials, Salicornia expansion will be targeted in coastal areas of the Arabian Peninsula. As a crop with high biofuel and forage potential, its cultivation can improve the livelihoods of local farmers.

Keywords: biosaline agriculture, genotypes selection, halophytes, Salicornia bigelovii

Procedia PDF Downloads 407
176 Structured-Ness and Contextual Retrieval Underlie Language Comprehension

Authors: Yao-Ying Lai, Maria Pinango, Ashwini Deo

Abstract:

While grammatical devices are essential to language processing, how comprehension utilizes cognitive mechanisms is less emphasized. This study addresses this issue by probing the complement coercion phenomenon: an entity-denoting complement following verbs like begin and finish receives an eventive interpretation. For example, (1) “The queen began the book” receives an agentive reading like (2) “The queen began [reading/writing/etc.…] the book.” Such sentences engender additional processing cost in real-time comprehension. The traditional account attributes this cost to an operation that coerces the entity-denoting complement to an event, assuming that these verbs require eventive complements. However, on closer examination, examples like “Chapter 1 began the book” undermine this assumption. An alternative, the Structured Individual (SI) hypothesis, proposes that the complement following aspectual verbs (AspV; e.g. begin, finish) is conceptualized as a structured individual, construed as an axis along various dimensions (e.g. spatial, eventive, temporal, informational). The composition of an animate subject and an AspV such as (1) engenders an ambiguity between an agentive reading along the eventive dimension like (2), and a constitutive reading along the informational/spatial dimension like (3) “[The story of the queen] began the book,” in which the subject is interpreted as a subpart of the complement denotation. Comprehenders need to resolve the ambiguity by searching contextual information, resulting in additional cost. To evaluate the SI hypothesis, a questionnaire was employed. Method: Target AspV sentences such as “Shakespeare began the volume.” were preceded by one of the following types of context sentence: (A) Agentive-biasing, in which an event was mentioned (…writers often read…), (C) Constitutive-biasing, in which a constitutive meaning was hinted at (Larry owns collections of Renaissance literature.), (N) Neutral context, which allowed both interpretations. Thirty-nine native speakers of English were asked to (i) rate each context-target sentence pair on a 1~5 scale (5 = fully understandable), and (ii) choose possible interpretations for the target sentence given the context. The SI hypothesis predicts that comprehension is harder in the Neutral condition than in the biasing conditions, because no contextual information is provided to resolve the ambiguity. Also, comprehenders should obtain the specific interpretation corresponding to the context type. Results: (A) Agentive-biasing and (C) Constitutive-biasing were rated higher than the (N) Neutral condition (p< .001), while all conditions were within the acceptable range (> 3.5 on the 1~5 scale). This suggests that, when relevant contextual information is lacking, semantic ambiguity decreases comprehensibility. The interpretation task shows that the participants selected the biased agentive/constitutive reading for conditions (A) and (C), respectively. For the Neutral condition, the agentive and constitutive readings were chosen equally often. Conclusion: These findings support the SI hypothesis: the meaning of AspV sentences is conceptualized as a parthood relation involving structured individuals. We argue that semantic representation makes reference to spatial structured-ness (an abstracted axis). To obtain an appropriate interpretation, comprehenders utilize contextual information to enrich the conceptual representation of the sentence in question. This study connects semantic structure to human conceptual structure and provides a processing model that incorporates contextual retrieval.
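
The rating comparison reported above (biasing contexts rated higher than the neutral context) can be illustrated with a small analysis sketch on made-up ratings; the numbers below are not the questionnaire data, and the tests shown are only one reasonable way to analyse such ordinal ratings.

```python
# Illustrative analysis of 1~5 comprehensibility ratings across the three context
# conditions (A: agentive-biasing, C: constitutive-biasing, N: neutral).
# Ratings below are invented for illustration, not the study's data.
from scipy import stats

ratings = {
    "A": [5, 4, 5, 4, 5, 4, 5, 5],
    "C": [4, 5, 4, 5, 4, 5, 5, 4],
    "N": [4, 3, 4, 3, 4, 4, 3, 4],
}

# Omnibus test across conditions, then the key biasing-vs-neutral contrasts.
h, p = stats.kruskal(ratings["A"], ratings["C"], ratings["N"])
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")
for cond in ("A", "C"):
    u, p_pair = stats.mannwhitneyu(ratings[cond], ratings["N"], alternative="greater")
    print(f"{cond} > N: U = {u:.1f}, p = {p_pair:.4f}")
```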

Keywords: ambiguity resolution, contextual retrieval, spatial structured-ness, structured individual

Procedia PDF Downloads 333
175 Large-Scale Simulations of Turbulence Using Discontinuous Spectral Element Method

Authors: A. Peyvan, D. Li, J. Komperda, F. Mashayek

Abstract:

Turbulence can be observed in a variety of fluid motions in nature and industrial applications. Recent investment in high-speed aircraft and propulsion systems has revitalized fundamental research on turbulent flows. In these systems, capturing chaotic fluid structures with different length and time scales is accomplished through the Direct Numerical Simulation (DNS) approach, since it accurately simulates flows down to the smallest dissipative scales, i.e., the Kolmogorov scales. The discontinuous spectral element method (DSEM) is a high-order technique that uses spectral functions for approximating the solution. The DSEM code has been developed by our research group over the course of more than two decades. Recently, the code has been improved to run large cases on the order of billions of solution points. Running big simulations requires a considerable amount of RAM. Therefore, the DSEM code must be highly parallelized and able to start on multiple computational nodes of an HPC cluster with distributed memory. However, some pre-processing procedures, such as determining global element information, creating a global face list, and assigning the global partitioning and element connection information of the domain for communication, must be done sequentially with a single processing core. A separate code has been written to perform the pre-processing procedures on a local machine. It extracts from the mesh file the minimum amount of information required for the DSEM code to start in parallel and stores it in pre-files. It packs integer-type information in a stream binary format into pre-files that are portable between machines. The files are generated to ensure fast read performance on different file systems, such as Lustre and the General Parallel File System (GPFS). A new subroutine has been added to the DSEM code to read the startup files using parallel MPI I/O for Lustre, in a way that lets each MPI rank acquire its information from the file in parallel. In the case of GPFS, on each computational node a single MPI rank reads data from the file, which is generated specifically for that computational node, and sends them to the other ranks on the node using point-to-point non-blocking MPI communication. This way, communication takes place locally on each node, and messages do not cross the switches of the cluster. The read subroutine has been tested on Argonne National Laboratory’s Mira (GPFS), the National Center for Supercomputing Applications’ Blue Waters (Lustre), the San Diego Supercomputer Center’s Comet (Lustre), and UIC’s Extreme (Lustre). The tests showed that one file per node is suited to GPFS, and parallel MPI I/O is the best choice for the Lustre file system. The DSEM code relies on heavily optimized linear algebra operations, such as matrix-matrix and matrix-vector products, for the calculation of the solution in every time step. For this, the code can make use of either its own matrix math library, BLAS, Intel MKL, or ATLAS. This fact and the discontinuous nature of the method make the DSEM code run efficiently in parallel. The results of weak scaling tests performed on Blue Waters showed scalable and efficient performance of the code in parallel computing.
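
The Lustre-style startup read described above (every MPI rank pulling its own slice of a packed integer pre-file with parallel MPI I/O) can be sketched with mpi4py. This is a generic illustration, not the DSEM Fortran implementation, and the file name and layout are assumptions.

```python
# Generic sketch of the Lustre-style startup read: every MPI rank reads its own slice of a
# packed 32-bit-integer pre-file with collective MPI I/O. This mirrors the idea described
# above but is not the DSEM implementation; the file name and layout are assumptions.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

fh = MPI.File.Open(comm, "startup.pre", MPI.MODE_RDONLY)
total_bytes = fh.Get_size()
n_ints = total_bytes // 4                       # file assumed to hold int32 records

# Split the records evenly; each rank computes its own byte offset into the file.
counts = [n_ints // size + (r < n_ints % size) for r in range(size)]
offset_bytes = sum(counts[:rank]) * 4

local = np.empty(counts[rank], dtype=np.int32)
fh.Read_at_all(offset_bytes, local)             # collective read, no rank-0 bottleneck
fh.Close()

print(f"rank {rank} read {local.size} integers starting at byte {offset_bytes}")
```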

Keywords: computational fluid dynamics, direct numerical simulation, spectral element, turbulent flow

Procedia PDF Downloads 133
174 Effect of Cerebellar High Frequency rTMS on the Balance of Multiple Sclerosis Patients with Ataxia

Authors: Shereen Ismail Fawaz, Shin-Ichi Izumi, Nouran Mohamed Salah, Heba G. Saber, Ibrahim Mohamed Roushdi

Abstract:

Background: Multiple sclerosis (MS) is a chronic, inflammatory, mainly demyelinating disease of the central nervous system that is more common in young adults. Cerebellar involvement is one of the most disabling lesions in MS and is usually a sign of disease progression. The cerebellum plays a major role in the planning, initiation, and organization of movement via its influence on the motor cortex and corticospinal outputs. Therefore, it contributes to controlling movement, motor adaptation, and motor learning, in addition to its vast connections with other major pathways controlling balance, such as the cerebellopropriospinal and cerebellovestibular pathways. Hence, stimulating the cerebellum with facilitatory protocols should add to motor control and balance function. Non-invasive brain stimulation, both repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS), has recently emerged as an effective neuromodulation approach to influence motor and nonmotor functions of the brain. Anodal tDCS has been shown to improve motor skill learning and motor performance beyond the training period. Similarly, rTMS, when used at high frequency (>5 Hz), has a facilitatory effect on the motor cortex. Objective: Our aim was to determine the effect of high-frequency rTMS over the cerebellum in improving balance and functional ambulation of multiple sclerosis patients with ataxia. Patients and methods: This was a randomized, single-blinded, placebo-controlled prospective trial on 40 patients. Both groups received 12 sessions over the cerebellum, given three times per week for four weeks and each followed by an intensive exercise training program: the active group (N=20) received real rTMS sessions, and the control group (N=20) received sham rTMS using a placebo program designed for this treatment. The active group protocol used 10 Hz rTMS over the cerebellar vermis, with a work period of 5 s, 25 trains, and an intertrain interval of 25 s, for a total of 1250 pulses per session. The exercise program for both groups included generalized strengthening exercises, endurance and aerobic training, trunk and abdominal exercises, generalized balance training exercises, and task-oriented training such as boxing. The Modified ICARS was used as the primary outcome measure, and static posturography was performed with patients tested both with open and with closed eyes. Secondary outcome measures included the Expanded Disability Status Scale (EDSS) and the 8-meter walk test (8MWT). Results: The active group showed significant improvements in all the functional scales (modified ICARS, EDSS, and the 8-meter walk test), in addition to significant differences in static posturography with open eyes, while the control group did not show such differences. Conclusion: Cerebellar high-frequency rTMS could be effective in the functional improvement of balance in MS patients with ataxia.
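
The stimulation dose in the active protocol follows directly from the train parameters; the small worked check below confirms the per-session pulse count and estimates the stimulation time per session from the values stated above.

```python
# Worked check of the active rTMS protocol: 10 Hz trains of 5 s, 25 trains per session,
# 25 s intertrain interval. Values are taken from the protocol described above.
frequency_hz = 10
train_duration_s = 5
n_trains = 25
intertrain_interval_s = 25

pulses_per_session = frequency_hz * train_duration_s * n_trains
session_duration_s = n_trains * train_duration_s + (n_trains - 1) * intertrain_interval_s

print(f"pulses per session: {pulses_per_session}")                       # 1250, matching the protocol
print(f"stimulation time per session: {session_duration_s / 60:.1f} min")  # about 12 min
```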

Keywords: brain neuromodulation, high frequency rTMS, cerebellar stimulation, multiple sclerosis, balance rehabilitation

Procedia PDF Downloads 90
173 Optimizing Machine Learning Algorithms for Defect Characterization and Elimination in Liquids Manufacturing

Authors: Tolulope Aremu

Abstract:

The key process steps used to produce liquid detergent products, such as formulation, mixing, filling, and packaging, can introduce defects that might compromise product quality, consumer safety, and operational efficiency. Real-time identification and characterization of such defects are of prime importance for maintaining high standards and reducing waste and costs. Usually, defect detection is performed by human inspection or rule-based systems, which are time-consuming, inconsistent, and error-prone. The present study overcomes these limitations by optimizing machine learning algorithms for defect characterization in the liquid detergent manufacturing process. Performance testing of various machine learning models, namely Support Vector Machines, Decision Trees, Random Forests, and Convolutional Neural Networks, was carried out on the detection and classification of defects such as incorrect viscosity, color deviations, improper bottle filling, and packaging anomalies. These algorithms benefited significantly from a variety of optimization techniques, including hyperparameter tuning and ensemble learning, which greatly improved detection accuracy while minimizing false positives. The study is built on a rich dataset of defect types and production parameters consisting of more than 100,000 samples, which also includes real-time sensor data, imaging technologies, and historic production records. The results show that optimized machine learning models significantly improve defect detection compared to traditional methods. For instance, the CNNs reached 98% and 96% accuracy in detecting packaging anomalies and bottle-filling inconsistencies, respectively, after fine-tuning with real-time imaging data, which reduced false positives by about 30%. The optimized SVM model achieved 94% accuracy in detecting formulation defects such as viscosity variation and color variation. These performance metrics represent a large improvement in defect detection accuracy compared to the roughly 80% level achieved so far by rule-based systems. Moreover, with real-time data processing, the optimized models speed up defect characterization, bringing detection time below 15 seconds from an average of 3 minutes with manual inspection. This time saving is combined with a 25% reduction in production downtime because of proactive defect identification, which can save millions annually in recall and rework costs. Integrating real-time machine learning-driven monitoring also drives predictive maintenance and corrective measures, for a 20% improvement in overall production efficiency. Therefore, the optimization of machine learning algorithms for defect characterization gives liquid detergent companies scalability and efficiency and raises operational performance to higher levels of product quality. In general, this method could be applied in several industries within the fast-moving consumer goods sector, leading to an improved quality control process.
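
The hyperparameter-tuning step described above can be illustrated with a compact scikit-learn sketch: an SVM defect classifier tuned with cross-validated grid search. Synthetic data stands in for the study's 100,000-sample production dataset, and the parameter grid is only an example.

```python
# Illustrative sketch of SVM hyperparameter tuning for defect classification with
# cross-validated grid search. Synthetic data stands in for the production dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for sensor/imaging features labelled defective vs. non-defective.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

pipeline = make_pipeline(StandardScaler(), SVC())
param_grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_train, y_train)

print("best parameters:", search.best_params_)
print(f"held-out accuracy: {search.score(X_test, y_test):.3f}")
```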

Keywords: liquid detergent manufacturing, defect detection, machine learning, support vector machines, convolutional neural networks, defect characterization, predictive maintenance, quality control, fast-moving consumer goods

Procedia PDF Downloads 18
172 Laboratory and Numerical Hydraulic Modelling of Annular Pipe Electrocoagulation Reactors

Authors: Alejandra Martin-Dominguez, Javier Canto-Rios, Velitchko Tzatchkov

Abstract:

Electrocoagulation is a water treatment technology that consists of generating coagulant species in situ by electrolytic oxidation of sacrificial anode materials triggered by electric current. It removes suspended solids, heavy metals, emulsified oils, bacteria, colloidal solids and particles, soluble inorganic pollutants, and other contaminants from water, offering an alternative to the use of metal salts or polymers and polyelectrolyte addition for breaking stable emulsions and suspensions. The method essentially consists of passing the water being treated through pairs of consumable conductive metal plates in parallel, which act as monopolar electrodes, commonly known as ‘sacrificial electrodes’. Physicochemical, electrochemical, and hydraulic processes are involved in the efficiency of this type of treatment. While the physicochemical and electrochemical aspects of the technology have been extensively studied, little is known about the influence of the hydraulics. However, the hydraulic process is fundamental for the reactions that take place at the electrode boundary layers and for the coagulant mixing. Electrocoagulation reactors can be open (with a free water surface) or closed (pressurized). Independently of the type of reactor, hydraulic head loss is an important factor for its design. The present work focuses on the study of the total hydraulic head loss and the flow velocity and pressure distribution in electrocoagulation reactors with single or multiple concentric annular cross sections. An analysis of the head loss produced by hydraulic wall shear friction and by accessories (minor head losses) is presented and compared to the head loss measured on a semi-pilot-scale laboratory model for different flow rates through the reactor. The tests included laminar, transitional, and turbulent flow. The observed head loss was also compared to the head loss predicted by several known conceptual, theoretical, and empirical equations specific to flow in concentric annular pipes. Four single concentric annular cross section configurations and one multiple concentric annular cross section reactor configuration were studied. The theoretical head loss was higher than that observed in the laboratory model in some of the tests and lower in others, depending also on the assumed value of the wall roughness. Most of the theoretical models assume that the fluid elements in all annular sections have the same velocity and that the flow is steady, uniform, and one-dimensional, with the same pressure and velocity profiles in all reactor sections. To check the validity of such assumptions, a computational fluid dynamics (CFD) model of the concentric annular pipe reactor was implemented using the ANSYS Fluent software, demonstrating that the pressure and flow velocity distributions inside the reactor are actually not uniform. Based on the analysis, the equations that best predict the head loss in single and multiple annular sections were identified. Other factors that may impact the head loss, such as the generation of coagulants and gases during the electrochemical reaction, the accumulation of hydroxides inside the reactor, and the change of the electrode material with time, are also discussed. The results can be used as tools for the design and scale-up of electrocoagulation reactors, to be integrated into new or existing water treatment plants.
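
The friction component of the head loss can be estimated for a concentric annulus with the Darcy–Weisbach equation, using the hydraulic diameter D_h = D_outer − D_inner. The sketch below shows that estimate for laminar and turbulent regimes with placeholder dimensions; it reproduces the kind of simple conceptual model the laboratory measurements were compared against, not the reactors studied here.

```python
# Illustrative Darcy-Weisbach head-loss estimate for flow in a concentric annulus using
# the hydraulic diameter D_h = D_outer - D_inner. Dimensions, flow rate, and roughness are
# placeholders; this is only the kind of simple model the laboratory data were compared with.
import math

d_outer, d_inner = 0.10, 0.06       # m, annulus diameters (assumed)
length = 1.5                        # m, reactor length (assumed)
flow_rate = 8.0e-4                  # m^3/s (assumed)
nu, g = 1.0e-6, 9.81                # water kinematic viscosity (m^2/s), gravity (m/s^2)
roughness = 1.5e-6                  # m, assumed wall roughness

area = math.pi / 4 * (d_outer**2 - d_inner**2)
d_h = d_outer - d_inner             # hydraulic diameter of the annulus
velocity = flow_rate / area
reynolds = velocity * d_h / nu

if reynolds < 2300:                 # laminar regime
    friction = 64.0 / reynolds
else:                               # turbulent regime: Swamee-Jain explicit approximation
    friction = 0.25 / math.log10(roughness / (3.7 * d_h) + 5.74 / reynolds**0.9) ** 2

head_loss = friction * (length / d_h) * velocity**2 / (2 * g)
print(f"Re = {reynolds:.0f}, f = {friction:.4f}, friction head loss = {head_loss * 1000:.1f} mm")
```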

Keywords: electrocoagulation reactors, hydraulic head loss, concentric annular pipes, computational fluid dynamics model

Procedia PDF Downloads 218