Search results for: evaluation of digital evidence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12078

1368 Biodegradable Polymeric Vesicles Containing Magnetic Nanoparticles, Quantum Dots and Anticancer Drugs for Drug Delivery and Imaging

Authors: Fei Ye, Åsa Barrefelt, Manuchehr Abedi-Valugerdi, Khalid M. Abu-Salah, Salman A. Alrokayan, Mamoun Muhammed, Moustapha Hassan

Abstract:

With appropriate encapsulation in functional nanoparticles, drugs are more stable in the physiological environment, and their kinetics can be more carefully controlled and monitored. Furthermore, targeted drug delivery can be developed to improve chemotherapy in cancer treatment, not only by enhancing intracellular uptake by target cells but also by reducing adverse effects in non-target organs. Inorganic imaging agents, delivered together with anticancer drugs, enhance the local imaging contrast and provide precise diagnosis as well as evaluation of therapy efficacy. We have developed biodegradable polymeric vesicles as a nanocarrier system for multimodal bio-imaging and anticancer drug delivery. The poly(lactic-co-glycolic acid) (PLGA) vesicles were fabricated by encapsulating inorganic imaging agents, namely superparamagnetic iron oxide nanoparticles (SPION) and manganese-doped zinc sulfide (Mn:ZnS) quantum dots (QDs), together with the anticancer drug busulfan, into PLGA nanoparticles via an emulsion-evaporation method. T2-weighted magnetic resonance imaging (MRI) of PLGA-SPION-Mn:ZnS phantoms exhibited enhanced negative contrast with an r2 relaxivity of approximately 523 s⁻¹ mM⁻¹ Fe. Murine macrophage (J774A) cellular uptake of PLGA vesicles was detectable by fluorescence imaging at 2 h and reached maximum intensity after 24 h of incubation. The drug delivery ability of the PLGA vesicles was demonstrated in vitro by the release of busulfan. Degradation of the PLGA vesicles was studied in vitro, showing that approximately 32% was degraded into lactic and glycolic acid over a period of 5 weeks. The biodistribution of PLGA vesicles was investigated in vivo by MRI in a rat model. A change of contrast in the liver could be visualized by MRI 7 min after injection of the PLGA vesicles, and maximal signal loss was detected 4 h post-injection. Histological studies showed that the presence of PLGA vesicles in organs shifted from the lungs to the liver and spleen over time.

Keywords: biodegradable polymers, multifunctional nanoparticles, quantum dots, anticancer drugs

Procedia PDF Downloads 456
1367 Evaluation of Different Food Baits by Using Kill Traps for the Control of Lesser Bandicoot Rat (Bandicota bengalensis) in Field Crops of Pothwar Plateau, Pakistan

Authors: Nadeem Munawar, Iftikhar Hussain, Tariq Mahmood

Abstract:

The lesser bandicoot rat (Bandicota bengalensis) is widely distributed and a serious agricultural pest in Pakistan. It is well adapted to the rice-wheat-sugarcane cropping systems of Punjab, Sindh and Khyber Pakhtunkhwa and the wheat-groundnut cropping system of the Pothwar area, thus inflicting heavy losses on these crops. The comparative efficacies of four food baits (onion, guava, potato and peanut butter smeared on bread/chapatti) were tested in multiple feeding tests for kill trapping of this rat species in the Pothwar Plateau between October 2013 and July 2014 at the sowing, tillering, flowering and maturity stages of wheat, groundnut and millet crops. The results revealed that guava was the most preferred bait compared to the other three, presumably due to its particular taste and smell. Among the four tested baits, guava also scored the highest trapping success of 16.94 ± 1.42 percent, followed by peanut butter, potato, and onion with trapping successes of 10.52 ± 1.30, 7.82 ± 1.21 and 4.5 ± 1.10 percent, respectively. Across crop stages and seasons, the highest trapping success was achieved at the maturity stages of the crops, presumably due to higher surface activity of the rat resulting from favorable climatic conditions, good shelter, and food abundance. Moreover, the maturity stage of the wheat crop coincided with the spring breeding season, and the maturity stages of millet and groundnut matched the monsoon/autumn breeding peak of the lesser bandicoot rat in the Pothwar area. The preferred order among the four baits tested was guava > peanut butter > potato > onion. The study recommends that farmers periodically carry out rodent trapping at the beginning of each crop season and during the non-breeding seasons of this rodent pest, when populations are low in numbers and restricted to crop boundary vegetation, particularly during very hot and cold months.

Keywords: Bandicota bengalensis, efficacy, food baits, Pothwar

Procedia PDF Downloads 251
1366 An Ethno-Scientific Approach for Restoration of South Indian Heritage Rice Varieties

Authors: A. Sathya, C. Manojkumar, D. Visithra

Abstract:

The South Indian peninsula has a rich diversity of both heritage and conventional rice varieties. With the prime focus set on high yield and increased productivity, a number of traditional/heritage rice varieties have dwindled into the forgotten past. At present, in the face of climate change, the hybrids and conventional varieties struggle for sustainable yield. The need for copious irrigation and high nutrient inputs for the hybrids and conventional varieties has pushed the farming and research community to resort to heritage rice varieties for their sturdy survival capability. An ethno-scientific effort has been made in the Cauvery delta tracts of South India to restore these traditional/heritage rice varieties. A closer field-level performance evaluation under organic conditions has been undertaken for 10 heritage rice varieties. The morpho-agronomic characterization across vegetative and reproductive stages has revealed a pattern of variation in duration, plant height, number of tillers, productive tillers, etc. The shortest duration was recorded for a variety with the vernacular name ‘Arubadaam kuruvai’. A traditional rice variety called ‘Maapillai samba’ is claimed to impart instant energy; the supernatant water of its overnight-soaked cooked rice is traditionally consumed as a source of instant energy. The physico-chemical analysis of this variety is being carried out to explore this claimed nutritional boosting ability. A wide spectrum of nutritional characters, including palatability and marketability preferences, has also been analyzed for all 10 heritage rice varieties. A ‘Farmer’s harvest day festival’ was organized, providing an opportunity for the Cauvery delta farmers to identify the special features of these standing golden ripe paddy varieties and exchange their views on them directly. Their ethnic knowledge, pooled with the scientific investigations undertaken on these 10 heritage rice varieties of South India, will be discussed in detail, highlighting perspectives on the pathway to the resurrection and restoration of this heritage of the past.

Keywords: biodiversity, conservation, heritage, rice, traditional, varieties

Procedia PDF Downloads 410
1365 Incidence of Lymphoma and Gonorrhea Infection: A Retrospective Study

Authors: Diya Kohli, Amalia Ardeljan, Lexi Frankel, Jose Garcia, Lokesh Manjani, Omar Rashid

Abstract:

Gonorrhea is the second most common sexually transmitted disease (STD) in the United States of America. Gonorrhea affects the urethra, rectum, or throat, and the cervix in females. Lymphoma is a cancer of the immune network called the lymphatic system, which includes the lymph nodes/glands, spleen, thymus gland, and bone marrow. Lymphoma can affect many organs in the body. When a lymphocyte develops a genetic mutation, it proliferates rapidly, producing many mutated lymphocytes. Multiple studies have explored the incidence of cancer in people infected with STDs such as Gonorrhea. For instance, the studies conducted by Wang Y-C et al. and Caini S et al. established a direct correlation between Gonorrhea infection and the incidence of prostate cancer. We hypothesized that Gonorrhea infection also increases the incidence of Lymphoma in patients. This research study aimed to evaluate the correlation between Gonorrhea infection and the incidence of Lymphoma. The data for the research were provided by a Health Insurance Portability and Accountability Act (HIPAA) compliant national database. This database was used to compare patients infected with Gonorrhea with those who were not infected and to establish a correlation with the prevalence of Lymphoma using ICD-10 and ICD-9 codes. Access to the database was granted by Holy Cross Health, Fort Lauderdale, for academic research. Standard statistical methods were applied throughout. The query, covering January 2010 to December 2019, resulted in 254 and 808 Lymphoma patients in the infected and control groups, respectively. The two groups were matched by age range and CCI score. The incidence of Lymphoma was 0.998% (254 patients out of 25,455) in the Gonorrhea group (patients infected with Gonorrhea who were Lymphoma positive) compared to 3.174% (808 patients out of 25,455) in the control group (patients negative for Gonorrhea but with Lymphoma). This was statistically significant with a p-value < 2.2×10⁻¹⁶ and an OR = 0.431 (95% CI 0.381-0.487). The patients were then matched by antibiotic treatment to avoid treatment bias. The incidence of Lymphoma was 1.215% (82 patients out of 6,748) in the Gonorrhea group compared to 2.949% (199 patients out of 6,748) in the control group. This was statistically significant with a p-value < 5.4×10⁻¹⁰ and an OR = 0.468 (95% CI 0.367-0.596). The study shows a statistically significant correlation between Gonorrhea and a reduced incidence of Lymphoma. Further evaluation is recommended to assess the potential of Gonorrhea in reducing Lymphoma.
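As a rough illustration of how the reported odds ratios and confidence intervals are obtained, the sketch below computes an odds ratio with a Wald 95% CI from a 2×2 table. The cell counts are reconstructed from the incidences quoted above and are assumptions for illustration only; they will not exactly reproduce the matched-analysis OR reported by the authors.

```python
from math import log, sqrt, exp

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a / b) / (c / d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    lo, hi = exp(log(or_) - z * se), exp(log(or_) + z * se)
    return or_, lo, hi

# Counts reconstructed from the quoted incidences (assumed, not the raw data)
cases_exposed, n_exposed = 254, 25455     # lymphoma cases in the gonorrhea group
cases_control, n_control = 808, 25455     # lymphoma cases in the control group
print(odds_ratio_ci(cases_exposed, n_exposed - cases_exposed,
                    cases_control, n_control - cases_control))
```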

Keywords: gonorrhea, lymphoma, STDs, cancer, ICD

Procedia PDF Downloads 175
1364 The Impact of Inconclusive Results of Thin Layer Chromatography for Marijuana Analysis and Its Implication on Forensic Laboratory Backlog

Authors: Ana Flavia Belchior De Andrade

Abstract:

Forensic laboratories all over the world face a great challenge in overcoming waiting times and backlogs in many different areas. Many aspects contribute to this situation, such as an increase in drug complexity, an increase in the number of exams requested, and cuts in funding that limit laboratories' hiring capacity. Altogether, these facts pose an essential challenge for forensic chemistry laboratories: keeping both quality and response time within acceptable limits. In this paper we analyze how the backlog affects test results and, in the end, the whole judicial system. In this study, data from marijuana samples seized by the Federal District Civil Police in Brazil between the years 2013 and 2017 were tabulated and the results analyzed and discussed. In the last five years, the number of requested exams increased from 822 in February 2013 to 1358 in March 2018, representing an increase of 32% in 5 years, a rise of more than 6% per year. Meanwhile, our data show that the number of performed exams did not grow at the same rate. Output has stagnated because, with the current technology and analysis routine, the laboratory is already running at full capacity. Marijuana detection is the most prevalent exam requested, representing almost 70% of all exams. In this study, data from 7,110 (seven thousand one hundred and ten) marijuana samples were analyzed. Regarding waiting time, most exams (77%) were performed within 60 days of receipt, although some samples (0.65%) waited up to 30 months before being examined. When the marijuana exam is delayed, we notice an increase in inconclusive results using thin-layer chromatography (TLC). Our data show that if a marijuana sample is stored for more than 18 months, inconclusive results rise from 2% to 7%, and when storage exceeds 30 months, inconclusive rates increase to 13%. This is probably because Cannabis plants and preparations undergo oxidation during storage, resulting in a decrease in the content of Δ9-tetrahydrocannabinol (Δ9-THC). An inconclusive result triggers other procedures (e.g., GC/MS analysis) that require at least two more working hours of our analysts, and the report is delayed by at least one day. These additional procedures considerably increase the running cost of a forensic drug laboratory, especially when the backlog is significant, as inconclusive results tend to increase with waiting time. Financial aspects are not the only ones to be considered regarding backlogged cases; there are also social issues, as legal procedures can be delayed and prosecution of serious crimes can be unsuccessful. Delays may slow investigations and endanger public safety by giving criminals more time on the street to re-offend. This situation also implies a considerable cost to society: if the exam takes a long time to be performed, an inconclusive result can turn into a negative one, and a criminal can be absolved on the basis of flawed expert evidence.

Keywords: backlog, forensic laboratory, quality management, accreditation

Procedia PDF Downloads 101
1363 Hands-on Tools to Improve Knowledge, Confidence and Skill of Clinical Disaster Providers

Authors: Lancer Scott

Abstract:

Purpose: High-quality clinical disaster medicine requires providers working collaboratively to care for multiple patients in chaotic environments; however, many providers lack adequate training. To address this deficit, we created a competency-based, 5-hour Emergency Preparedness Training (EPT) curriculum using didactics, small-group discussion, and kinetic learning. The goal was to evaluate the effect of a short course on improving provider knowledge, confidence and skills in disaster scenarios. Methods: Diverse groups of medical university students, health care professionals, and community members were enrolled between 2011 and 2014. The course consisted of didactic lectures, small-group exercises, and two live, multi-patient mass casualty incident (MCI) scenarios. The outcome measures were based on core competencies and performance objectives developed by a curriculum task force and assessed via trained facilitator observation, pre- and post-testing, and a course evaluation. Results: 708 participants were trained between November 2011 and August 2014, including 49.9% physicians, 31.9% medical students, 7.2% nurses, and 11% from various other healthcare professions. 100% of participants completed the pre-test and 71.9% completed the post-test, with the average proportion of correct answers increasing from 39% to 60%. Following didactics, trainees met 73% and 96% of performance objectives for the two small-group exercises and 68.5% and 61.1% of performance objectives for the two MCI scenarios. Average trainee self-assessment of both overall knowledge and skill with clinical disasters improved from 33/100 to 74/100 (overall knowledge) and from 33/100 to 77/100 (overall skill). The course assessment was completed by 34.3% of participants, of whom 91.5% highly recommended the course. Conclusion: A relatively short, intensive EPT course can improve the ability of a diverse group of disaster care providers to respond effectively to mass casualty scenarios.

Keywords: clinical disaster medicine, training, hospital preparedness, surge capacity, education, curriculum, research, performance, student, physicians, nurses, health care providers, health care

Procedia PDF Downloads 180
1362 Family Medicine Residents in End-of-Life Care

Authors: Goldie Lynn Diaz, Ma. Teresa Tricia G. Bautista, Elisabeth Engeljakob, Mary Glaze Rosal

Abstract:

Introduction: Residents are expected to convey unfavorable news, discuss prognoses, relieve suffering, and address do-not-resuscitate orders, yet some report a lack of competence in providing this type of care. Recognizing this need, Family Medicine residency programs are incorporating end-of-life care, from symptom and pain control to counseling and humanistic qualities, as core proficiencies in training. Objective: This study determined the competency of Family Medicine residents from various institutions in Metro Manila in rendering care for the dying. Materials and Methods: Trainees completed a Palliative Care Evaluation tool to assess their degree of confidence in patient and family interactions, patient management, and attitudes towards hospice care. Results: Remarkably, only a small fraction of participants were confident in independently managing terminal delirium and dyspnea. Fewer than 30% of residents could do the following without supervision: discussing medication effects and patient wishes after death, coping with pain, vomiting and constipation, and reacting to limited patient decision-making capacity. Half of the respondents had confidence in supporting the patient or a family member when they become upset. The majority expressed confidence in many end-of-life care skills provided supervision, coaching and consultation were available. Most trainees believed that pain medication should be given as needed to terminally ill patients. There was also uncertainty as to the most appropriate person to make end-of-life decisions. These attitudes may be influenced by personal beliefs rooted in cultural upbringing as well as by personal experiences with death in the family, which may also affect their participation and confidence in caring for the dying. Conclusion: Enhancing the quality and quantity of end-of-life care experiences during residency, with sufficient supervision and role modeling, may lead to knowledge and skill improvement and help ensure quality of care. Fostering bedside learning opportunities during residency is an appropriate venue for teaching interventions in end-of-life care education.

Keywords: end of life care, geriatrics, palliative care, residency training skill

Procedia PDF Downloads 243
1361 Diversity of Rhopalocera in Different Vegetation Types of PC Hills, Philippines

Authors: Sean E. Gregory P. Igano, Ranz Brendan D. Gabor, Baron Arthur M. Cabalona, Numeriano Amer E. Gutierrez

Abstract:

Distribution patterns and abundance of butterflies respond in the long term to variations in habitat quality. Studying butterfly populations provides evidence on how vegetation types influence their diversity. In this research, the Rhopalocera diversity of PC Hills was assessed to provide information on diversity trends in varying vegetation types. PC Hills, located in Palo, Leyte, Philippines, is a relatively undisturbed area with forests and rivers. Despite being situated near inhabited villages, the area is observed to have a potentially rich butterfly population. To assess Rhopalocera species richness and diversity, a transect sampling technique was applied to monitor and document butterflies. Transects were placed in locations that can be mapped, described and relocated easily. Three transects measuring three hundred meters each, with a 5-meter diameter, were established based on the different vegetation types present. The three main vegetation types identified were agroecosystem (transect 1), dipterocarp forest (transect 2), and riparian (transect 3). Sample collections were done only from 9:00 A.M. to 3:00 P.M. under warm and bright weather, with no more than moderate winds and no rain. When weather conditions did not permit collection, it was moved to another day. A GPS receiver was used to record the location of the selected sample sites and the coordinates at which each sample was collected. Morphological analysis was done in the first phase of the study to identify the voucher specimens to the lowest taxonomic level possible, using butterfly identification guides and species lists as references. In the second phase, DNA barcoding will be used to further identify the voucher specimens to the species level. After eight (8) sampling sessions, seven hundred forty-two (742) individuals were seen, and twenty-two (22) Rhopalocera genera were identified through morphological identification. The genus Ypthima (family Nymphalidae) and the genera Eurema and Leptosia (family Pieridae) were the most dominant taxa observed. Twenty (20) of the thirty-one (31) voucher specimens have already been identified to the species level using DNA barcoding. The Shannon-Wiener index showed that the highest diversity was observed in the third transect (H’ = 2.947), followed by the second transect (H’ = 2.6317), with the lowest in the first transect (H’ = 1.767). This indicates that butterflies are more likely to inhabit dipterocarp and riparian vegetation types than agroecosystems, which influences their species composition and diversity. Moreover, the presence of a river in the riparian vegetation supported its diversity value, since butterflies tend to fly into areas near rivers. Species identification of the remaining voucher specimens will be done in order to compute the overall species richness of PC Hills. Further butterfly sampling sessions in PC Hills are recommended for a more reliable diversity trend and to discover more butterfly species. Expanding the research by assessing the Rhopalocera diversity of other locations should be considered, along with studying factors other than vegetation type that affect butterfly species composition.
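For readers unfamiliar with the diversity metric quoted above, a minimal Shannon-Wiener index calculation is sketched below; the per-species counts are hypothetical, since the abstract does not report raw abundance data.

```python
from math import log

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln(p_i))."""
    n = sum(counts)
    return -sum((c / n) * log(c / n) for c in counts if c > 0)

# Hypothetical per-species abundances for one transect (not the study's data)
transect_counts = [120, 85, 60, 42, 30, 18, 12, 8, 5, 3]
print(round(shannon_wiener(transect_counts), 3))
```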

Keywords: distribution patterns, DNA barcoding, morphological analysis, Rhopalocera

Procedia PDF Downloads 130
1360 An Approach to Determine Proper Daylighting Design Solution Considering Visual Comfort and Lighting Energy Efficiency in High-Rise Residential Building

Authors: Zehra Aybike Kılıç, Alpin Köknel Yener

Abstract:

Daylight is a powerful driver in terms of improving human health, enhancing productivity and creating sustainable solutions by minimizing energy demand. A proper daylighting system not only provides a pleasant and attractive visual and thermal environment, but also reduces lighting energy consumption and heating/cooling energy loads through optimization of aperture size, glazing type and solar control strategy, which are the major design parameters of daylighting system design. Particularly in high-rise buildings, where large openings that allow maximum daylight and view out are preferred, evaluation of daylight performance considering the major parameters of the building envelope design becomes crucial for ensuring occupants’ comfort and improving energy efficiency. Moreover, it is increasingly necessary to examine the daylighting design of high-rise residential buildings, considering the share of residential buildings in the construction sector, the duration of occupation and the changing space requirements. This study aims to identify a proper daylighting design solution, considering window area, glazing type and solar control strategy, for a high-rise residential building in terms of visual comfort and lighting energy efficiency. The dynamic simulations are carried out with DIVA for Rhino version 4.1.0.12. The results are evaluated with Daylight Autonomy (DA) to demonstrate daylight availability in the space and Daylight Glare Probability (DGP) to describe the visual comfort conditions related to glare. Furthermore, the lighting energy consumption in each scenario is analyzed to determine the optimum solution that reduces lighting energy consumption while optimizing daylight performance. The results revealed that reducing lighting energy consumption while providing visual comfort in buildings is only possible with proper daylighting design decisions regarding glazing type, transparency ratio and solar control devices.
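For context, Daylight Autonomy is conventionally defined as the fraction of occupied hours in which daylight alone meets a target illuminance (e.g., 300 lux). A minimal sketch of that calculation on assumed hourly sensor values is given below; in the study itself, DA is obtained from the DIVA simulations.

```python
def daylight_autonomy(illuminance_lux, occupied_mask, threshold=300.0):
    """Fraction of occupied hours in which daylight alone reaches the
    target illuminance (a common definition of DA300)."""
    occupied = [e for e, occ in zip(illuminance_lux, occupied_mask) if occ]
    met = sum(1 for e in occupied if e >= threshold)
    return met / len(occupied) if occupied else 0.0

# Assumed hourly illuminance for one sensor point over one day (lux)
hours = [0, 50, 120, 350, 600, 800, 750, 400, 200, 80, 10, 0]
occupied = [False, False, True, True, True, True, True, True, True, True, False, False]
print(daylight_autonomy(hours, occupied))  # 0.625 for these assumed values
```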

Keywords: daylighting, glazing type, lighting energy efficiency, residential building, solar control strategy, visual comfort

Procedia PDF Downloads 162
1359 Partnerships for Environmental Sustainability: An Effective Multistakeholder Governance Regime for Oil and Gas Producing Areas

Authors: Joy Debski

Abstract:

Due to the varying degrees of the problem posed by global warming, environmental sustainability dominates international discourse. International initiatives' aims and expectations have proven particularly challenging to put into practice in developing nations. To reduce human exploitation of the environment, stricter measures are urgently needed; however, putting them into practice has proven difficult. Relatively recent information from the Climate Accountability Institute and academic researchers shows that fossil fuel companies are major contributors to the climate crisis. Host communities in oil and gas-producing areas, particularly in developing nations, have grown hostile toward both oil and gas companies and government policies. It is now essential that the three main stakeholders—government, the oil and gas sector, and host communities—cooperate to achieve the shared objective of environmental sustainability. This research therefore advocates a governance system for Nigeria that facilitates achieving the goal of environmental sustainability. This objective is pursued by examining the main institutional framework for environmental sustainability, evaluating the strategies used by major oil companies to increase stakeholder engagement in environmental sustainability, and examining the involvement of host communities in environmental sustainability. The study reveals that, while environmental sustainability is important to the identified stakeholders, it is challenging to accomplish without an informed synergy. Hence, the research advocates the centralisation of CSR through a CSR commission for environmental sustainability. The commission's mandate is to facilitate, partner with, and endorse companies. The commission is strongly advised to incorporate host community liaison offices into the process of negotiating contracts with oil and gas firms, as well as to play a facilitative role in helping firms adhere to both domestic and international regulations. The recommendations can help Nigerian policymakers improve their so far unsuccessful efforts to pass CSR legislation. Through the research-proposed CSR department, which has competent training and stakeholder engagement strategies, oil and gas companies can enhance and centralise their goals for environmental sustainability. Finally, the CSR Commission's expertise would give host communities more leverage when negotiating their memoranda of understanding with oil and gas companies.

Keywords: environmental sustainability, corporate social responsibility, CSR, oil and gas, Nigeria

Procedia PDF Downloads 69
1358 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than ~65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs classification as well as redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte-Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
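To make the template-matching idea concrete, the sketch below shows a much-simplified version of probabilistic template classification: each template's Gaussian likelihood given the observed colors is computed and summed per class to form class probabilities. The template values are invented, and the sketch omits the redshift estimation and priors used in the actual method.

```python
import numpy as np

def classify_by_templates(obs_colors, obs_errors, template_colors, template_classes):
    """Class probabilities from a chi-square (Gaussian) comparison of observed
    colors with a template library, summing likelihoods within each class."""
    obs = np.asarray(obs_colors, dtype=float)
    err = np.asarray(obs_errors, dtype=float)
    chi2 = np.sum(((template_colors - obs) / err) ** 2, axis=1)
    likelihood = np.exp(-0.5 * chi2)
    classes = np.unique(template_classes)
    post = {c: likelihood[template_classes == c].sum() for c in classes}
    total = sum(post.values())
    return {c: p / total for c, p in post.items()}

# Toy example: three-color photometry against a tiny, made-up template set
templates = np.array([[0.2, 0.5, 1.1], [0.8, 1.4, 2.0], [0.1, 0.1, 0.3]])
labels = np.array(["galaxy", "galaxy", "quasar"])
print(classify_by_templates([0.25, 0.55, 1.0], [0.1, 0.1, 0.1], templates, labels))
```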

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 56
1357 Bariatric Surgery Referral as an Alternative to Fundoplication in Obese Patients Presenting with GORD: A Retrospective Hospital-Based Cohort Study

Authors: T. Arkle, D. Pournaras, S. Lam, B. Kumar

Abstract:

Introduction: Fundoplication is widely recognised as the best surgical option for gastro-oesophageal reflux disease (GORD) in the general population. However, there is controversy surrounding the use of conventional fundoplication in obese patients. Whilst intra-operative failure of fundoplication, including wrap disruption, is reportedly higher in obese individuals, the more significant issue is symptom recurrence post-surgery. Could a bariatric procedure be considered in obese patients to manage weight, to treat the GORD, and also to reduce the risk of recurrence? Roux-en-Y gastric bypass, a widely performed bariatric procedure, has been shown to be highly successful both in controlling GORD symptoms and in weight management in obese patients. Furthermore, NICE has published clear guidelines on eligibility for bariatric surgery, with the main criteria being class 3 obesity, or class 2 obesity with significant co-morbidities that would improve with weight loss. This study aims to identify the proportion of patients who underwent conventional fundoplication for GORD and/or hiatus hernia and would have been eligible for bariatric surgery referral according to NICE guidelines. Methods: All patients who underwent fundoplication procedures for GORD and/or hiatus hernia repair at a single NHS foundation trust over a 10-year period were identified using the Trust’s health records database. Pre-operative patient records were used to find BMI and the presence of significant co-morbidities at the time of consideration for surgery. This information was compared to NICE guidelines to determine potential eligibility for bariatric surgical referral at the time of initial surgical intervention. Results: A total of 321 patients underwent fundoplication procedures between January 2011 and December 2020; 133 (41.4%) had available data for BMI or to allow BMI to be estimated. Of those 133, 40 patients (30%) had a BMI greater than 30 kg/m², and 7 (5.3%) had a BMI >35 kg/m². One patient (0.75%) had a BMI >40 and would therefore be automatically eligible according to NICE guidelines. 4 further patients had significant co-morbidities, such as hypertension and osteoarthritis, that would likely be improved by weight management surgery and therefore also indicated eligibility for referral. Overall, 3.75% (5/133) of patients undergoing conventional fundoplication procedures would have been eligible for bariatric surgical referral; these patients were all female, and the average age was 60.4 years. Conclusions: Based on this Trust’s experience, around 4% of patients undergoing fundoplication with available BMI data would have been eligible for bariatric surgical intervention. Based on current evidence, among class 2/3 obese patients there is likely to have been a notable proportion with recurrent disease, potentially requiring further intervention. These patients may have benefited more from undergoing bariatric surgery, for example a Roux-en-Y gastric bypass, addressing both their obesity and GORD. Use of patients’ written notes to obtain BMI data for the 188 patients with missing BMI data, and further analysis of outcomes following fundoplication in all patients, assessing the incidence of recurrent disease, will be undertaken to strengthen the conclusions.

Keywords: bariatric surgery, GORD, Nissen fundoplication, NICE guidelines

Procedia PDF Downloads 48
1356 Identification of Three Strategies to Enhance University Students’ Professional Identity, Using Hierarchical Regression Analysis

Authors: Alba Barbara-i-Molinero, Rosalia Cascon-Pereira, Ana Beatriz Hernandez

Abstract:

Students’ transitions from high school to university are challenged by the lack of continuity between the two contexts. This mismatch directly affects students by generating feelings of anxiety and uncertainty, which increases dropout rates and reduces students’ academic success. This discontinuity arises because ‘transitions concern a restructuring of what the person does and who the person perceives him or herself to be’. Hence, identity becomes essential in these transitions. Generally, identity is the answer to questions such as: who am I? or who are we? It is composed of personal identity and of as many social identities as groups the individual feels he or she is a part of. A case in point for constructing a social identity is identification with a profession. For this reason, one way to ease the tension generated during transitions is to apply strategies oriented to enhancing students’ professional identity at their point of entry to the higher education institution. That would create a sense of continuity between the high school and higher education contexts, increasing their Professional Identity Strength. To develop strategies oriented to enhancing students’ professional identity, it is important to analyze what influences it. Several factors influence Professional Identity (e.g., professional status, the recommendation of family and peers, the academic environment, or the chosen bachelor degree). There is a gap in the literature analyzing the impact of these factors across more than one bachelor degree. In this regard, our study takes an additional step by evaluating the influence of several factors on Professional Identity using a cohort of university students from multiple degrees, aged 17-19 years. To do so, we used hierarchical regression analyses to assess the impact of the following factors: External Motivation Conditionals (EMC), Educational Experience Conditionals (EEC) and Personal Motivational Conditionals (PMC). After conducting the analyses, we found that the assessed factors influenced students’ professional identity differently according to their bachelor degree and discipline. For example, PMC and EMC positively affected science students, while architecture, law and economics, and engineering students were influenced only by PMC. Based on these influences, we propose three different strategies aimed at enhancing students’ professional identity in the short and long term. These strategies are: to enhance students’ professional identity before their incorporation to university through campuses and icebreaker activities; to apply recruitment strategies aimed at providing realistic information about the bachelor degree; and to incorporate different activities, such as in-vitro, in-situ and self-directed activities, aimed at enhancing students’ professional identity longitudinally from within the university. From these results, theoretical contributions and practical implications arise. First, we contribute to the literature by identifying which factors influence students from different bachelor degrees, since there is still no evidence on this. Second, using the obtained results as a benchmark, we contribute from a practical perspective by proposing several alternative strategies to increase students’ professional identity strength, aiming to ease their transition from high school to higher education.
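As a sketch of the hierarchical (block-wise) regression procedure referred to above, the example below enters predictor blocks step by step and reports the increment in R² at each step. The data and the variable names standing in for the EMC, EEC and PMC scales are simulated for illustration; they are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data; 'emc', 'eec', 'pmc' are illustrative stand-ins for the
# study's scales and 'pis' for Professional Identity Strength.
rng = np.random.default_rng(0)
df = pd.DataFrame({"emc": rng.normal(size=200),
                   "eec": rng.normal(size=200),
                   "pmc": rng.normal(size=200)})
df["pis"] = 0.2 * df["emc"] + 0.4 * df["pmc"] + rng.normal(size=200)

def r_squared(cols):
    X = sm.add_constant(df[cols])
    return sm.OLS(df["pis"], X).fit().rsquared

# Blocks entered hierarchically; delta R2 shows the variance explained
# by each newly entered block over and above the previous ones.
blocks = [["emc"], ["emc", "eec"], ["emc", "eec", "pmc"]]
previous = 0.0
for cols in blocks:
    current = r_squared(cols)
    print(cols, "R2 =", round(current, 3), "delta R2 =", round(current - previous, 3))
    previous = current
```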

Keywords: professional identity, higher education, educational strategies, students

Procedia PDF Downloads 130
1355 Drug Delivery Nanoparticles of Amino Acid Based Biodegradable Polymers

Authors: Sophio Kobauri, Tengiz Kantaria, Temur Kantaria, David Tugushi, Nina Kulikova, Ramaz Katsarava

Abstract:

Nanosized, environmentally responsive materials are of special interest for various applications, including targeted drug delivery, and have considerable potential for the treatment of many human diseases. The important technological advantages of nanoparticles (NPs) used as drug carriers (nanocontainers) are their high stability, high carrier capacity, the feasibility of encapsulating both hydrophilic and hydrophobic substances, as well as the high variety of possible administration routes, including oral application and inhalation. NPs can also be designed to allow controlled (sustained) drug release from the matrix. These properties of NPs enable improvement of drug bioavailability and might allow a decrease in drug dosage. The targeted and controlled administration of drugs using NPs might also help to overcome drug resistance, which is one of the major obstacles in the control of epidemics. Various degradable and non-degradable polymers of both natural and synthetic origin have been used for NP construction. Among the most promising for the design of NPs are amino acid-based biodegradable polymers (AABBPs), which can be cleared from the body after fulfilling their function. The AABBPs are composed of naturally occurring and non-toxic building blocks such as α-amino acids, fatty diols and dicarboxylic acids. The particles designed from these polymers are expected to have improved bioavailability along with high biocompatibility. The present work deals with a systematic study of the preparation of NPs from AABBPs by the cost-effective polymer deposition/solvent displacement method. The influence of the nature and concentration of surfactants, the concentration of the organic phase (polymer solution), the organic phase/aqueous (water) phase ratio, as well as some other factors, on the size of the fabricated NPs has been studied. It was established that, depending on the conditions used, the NP size could be tuned within 40-330 nm. As the next step of this research, an evaluation of the biocompatibility and bioavailability of the synthesized NPs has been performed using two stable human cell culture lines, HeLa and A549. This part of the study is still in progress.

Keywords: amino acids, biodegradable polymers, nanoparticles (NPs), non-toxic building blocks

Procedia PDF Downloads 419
1354 Interdigitated Flexible Li-Ion Battery by Aerosol Jet Printing

Authors: Yohann R. J. Thomas, Sébastien Solan

Abstract:

Conventional battery technology involves the assembly of electrode/separator/electrode by standard techniques such as stacking or winding, depending on the format size. In that type of battery, coating or pasting techniques are used only for the electrode process. These processes are suited for large-scale production of batteries and perfectly adapted to a wide range of application requirements. Nevertheless, demand is rising for easier and more cost-efficient production modes and for flexible, custom-shaped and efficient small-sized batteries. Thin-film, printable batteries are one of the key areas for printed electronics. In the frame of the European BASMATI project, we are investigating the feasibility of a new lithium-ion battery design: an interdigitated planar core design. A polymer substrate is used to produce bendable and flexible rechargeable accumulators. Directly and fully printed batteries make it possible to interconnect the accumulator with other electronic functions, for example organic solar cells (harvesting function), printed sensors (autonomous sensors) or RFID (communication function), on a common substrate to produce fully integrated, thin and flexible new devices. To fulfil those specifications, a high-resolution printing process has been selected: aerosol jet printing. To fit the parameters of this process, we worked on nanomaterial formulations for current collectors and electrodes. In addition, an advanced printed polymer electrolyte was developed to be implemented directly in the printing process, in order to avoid the liquid electrolyte filling step and to improve safety and flexibility. Results: Three different current collectors have been studied and printed successfully. An ink of commercial copper nanoparticles was formulated and printed, and flash sintering was then applied to the interdigitated design. A gold ink was also printed; the resulting material was partially self-sintered and did not require any high-temperature post-treatment. Finally, carbon nanotubes were also printed with high resolution and well-defined patterns. Different electrode materials were formulated and printed according to the interdigitated design. For cathodes, NMC and LFP were successfully printed. For anodes, LTO and graphite were shown to be good candidates for the fully printed battery. The electrochemical performance of those materials was evaluated in a standard coin cell with a lithium-metal counter electrode, and the results are similar to those obtained with a traditional ink formulation and process. A jellified plastic-crystal solid-state electrolyte was developed and showed performance comparable to classical liquid carbonate electrolytes with two different materials. In our future developments, focus will be put on several tasks. In the first place, we will synthesize and formulate new specific nanomaterials based on metal oxides. Then a fully printed device will be produced and its electrochemical performance will be evaluated.

Keywords: high resolution digital printing, lithium-ion battery, nanomaterials, solid-state electrolytes

Procedia PDF Downloads 230
1353 An Automatic Large Classroom Attendance Conceptual Model Using Face Counting

Authors: Sirajdin Olagoke Adeshina, Haidi Ibrahim, Akeem Salawu

Abstract:

Large lecture theatres cannot be covered by a single camera, but rather require a multicamera setup, because of their size, shape, and seating arrangements, whereas an ordinary classroom can be captured with a single camera. Therefore, the design and implementation of a multicamera setup for a large lecture hall were considered. Researchers have emphasized the impact of class attendance on students’ academic performance. However, the traditional method of taking attendance is below standard, especially for large lecture theatres, because of the student population, the time required, the sophistication and exhaustiveness involved, and the possibility of manipulation. An automated large-classroom attendance system is, therefore, imperative. The common approach in such systems is face detection and recognition, where known student faces are captured and stored for recognition purposes. This approach requires constant face database updates due to constant changes in facial features. Alternatively, face counting can be performed by cropping the localized faces in the video or image into a folder and then counting them. This research aims to develop a face localization-based approach to detect student faces in classroom images captured using a multicamera setup. A selected Haar-like feature cascade face detector, trained with an asymmetric goal of minimizing the False Rejection Rate (FRR) relative to the False Acceptance Rate (FAR), was applied on a Raspberry Pi 4B. A relationship between the two factors (FRR and FAR) was established using a constant (λ) as a trade-off between them for automatic adjustment during training. An evaluation of the proposed approach against the conventional AdaBoost on classroom datasets shows an 8% improvement in TPR (the result of a low FRR) and a 7% reduction in FRR. The average learning speed of the proposed approach was also improved, with an execution time of 1.19 s per image compared to 2.38 s for the improved AdaBoost. Consequently, the proposed approach achieved 97% TPR with an overhead constraint time of 22.9 s, compared to 46.7 s for the improved AdaBoost, when evaluated on images obtained from a large lecture hall (DK5) at USM.
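To illustrate the face-counting step on a single captured frame, the sketch below uses OpenCV's stock frontal-face Haar cascade rather than the asymmetrically trained detector described above; the image path and detection parameters are assumptions.

```python
import cv2

def count_faces(image_path):
    """Count detected faces in a classroom image using a stock Haar cascade."""
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                      minSize=(24, 24))
    return len(faces)

# Hypothetical frame from one camera of the multicamera setup
print(count_faces("lecture_hall_cam1.jpg"))
```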

Keywords: automatic attendance, face detection, haar-like cascade, manual attendance

Procedia PDF Downloads 58
1352 The Evaluation of Antioxidant and Antimicrobial Activities of Essential Oil and Aqueous, Methanol, Ethanol, Ethyl Acetate and Acetone Extracts of Hypericum scabrum

Authors: A. Heshmati, M. Y Alikhani, M. T. Godarzi, M. R. Sadeghimanesh

Abstract:

Herbal essential oils and extracts are good sources of natural antioxidant and antimicrobial compounds, and Hypericum is one of the potential sources of these compounds. In this study, the antioxidant and antimicrobial activities of the essential oil and the aqueous, methanol, ethanol, ethyl acetate and acetone extracts of Hypericum scabrum were assessed. Flowers of Hypericum scabrum were collected from the mountains surrounding Hamadan province and, after drying in the shade, the essential oil of the plant was extracted with a Clevenger apparatus, and the water, methanol, ethanol, ethyl acetate and acetone extracts were obtained by maceration. Essential oil compounds were identified using GC-MS. The Folin-Ciocalteu and aluminum chloride (AlCl3) colorimetric methods were used to measure the phenolic acid and flavonoid contents, respectively. Antioxidant activity was evaluated using DPPH and FRAP assays. The minimum inhibitory concentration (MIC) and the minimum bactericidal/fungicidal concentration (MBC/MFC) of the essential oil and extracts were evaluated against Staphylococcus aureus, Bacillus cereus, Pseudomonas aeruginosa, Salmonella typhimurium, Aspergillus flavus and Candida albicans. The essential oil yield was 0.35%; the lowest and highest extract yields were obtained for the ethyl acetate and water extracts, respectively. The major component of the essential oil was α-pinene (46.35%). The methanol extract had the highest phenolic acid content (95.65 ± 4.72 µg gallic acid equivalent/g dry plant) and flavonoid content (25.39 ± 2.73 µg quercetin equivalent/g dry plant). The percentage of DPPH radical inhibition showed a positive correlation with the concentration of essential oil or extract. The methanol and ethanol extracts had the highest DPPH radical inhibition. The essential oil and extracts of Hypericum had antimicrobial activity against the microorganisms studied in this research. The MIC and MBC values for the essential oil were in the range of 25-25.6 and 25-50 μg/mL, respectively. For the extracts, these values were 1.5625-100 and 3.125-100 μg/mL, respectively. The methanol extract had the highest antimicrobial activity. The essential oil and extracts of Hypericum scabrum, especially the methanol extract, have good antimicrobial and antioxidant activity, and they can be used to control oxidation and inhibit the growth of pathogenic and spoilage microorganisms. In addition, they can be used as substitutes for synthetic antioxidant and antimicrobial compounds.
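For reference, the percentage of DPPH radical inhibition mentioned above is conventionally computed from the absorbance of the control and the sample; a minimal sketch with assumed absorbance values follows.

```python
def dpph_inhibition(a_control, a_sample):
    """Percent DPPH radical scavenging from absorbance readings."""
    return (a_control - a_sample) / a_control * 100.0

# Assumed absorbance values, not taken from the study
print(round(dpph_inhibition(0.82, 0.31), 1))  # 62.2 % inhibition
```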

Keywords: antimicrobial, antioxidant, extract, hypericum

Procedia PDF Downloads 306
1351 Experimental Study of Infill Walls with Joint Reinforcement Subjected to In-Plane Lateral Load

Authors: J. Martin Leal-Graciano, Juan J. Pérez-Gavilán, A. Reyes-Salazar, J. H. Castorena, J. L. Rivera-Salas

Abstract:

The experimental results on the global behavior of twelve 1:2 scaled reinforced concrete frames subjected to in-plane lateral load are presented. The main objective was to generate experimental evidence about the use of steel bars within mortar bed joints as shear reinforcement in infill walls. Similar to the Canadian and New Zealand standards, the Mexican code includes specifications for this type of reinforcement. However, these specifications were obtained through experimental studies of load-bearing walls, mainly confined walls. Little information is found in the existing literature about the effects of joint reinforcement on the seismic behavior of infill masonry walls. Consequently, the Mexican code establishes the same equations to estimate the contribution of joint reinforcement for both confined walls and infill walls. Confined masonry construction and a reinforced concrete frame infilled with masonry walls have similar appearances. However, substantial differences exist between these two construction systems, mainly related to the sequence of construction and to how these structures support vertical and lateral loads. To achieve the stated objective, ten reinforced concrete frames with masonry infill walls were built and tested in pairs, with both specimens in a pair having identical characteristics except that one of them included joint reinforcement. The variables between pairs were the type of units, the size of the columns of the frame, and the aspect ratio of the wall. All cases included tie columns and tie beams on the perimeter of the wall to anchor the joint reinforcement. In addition, two bare frames with characteristics identical to the infilled frames were tested. The purpose was to investigate the effects of the infill wall on the behavior of the system under in-plane lateral load. Furthermore, the experimental results were compared with the predictions of the Mexican code. All the specimens were tested as cantilevers under reversible cyclic lateral load. To simulate gravity load, a constant vertical load was applied on top of the columns. The results indicate that the contribution of the joint reinforcement to lateral strength depends on the size of the columns of the frame. Larger columns produce a failure mode that is predominantly a sliding mode. Sliding inhibits the formation of new inclined cracks, which are necessary to activate (deform) the joint reinforcement. Regarding the effects of joint reinforcement on the performance of confined masonry walls, many findings were confirmed for infill walls. This type of reinforcement increases the lateral strength of the wall, produces more distributed cracking, and reduces the width of the cracks. Moreover, it reduces the ductility demand of the system at maximum strength. The prediction of the lateral strength provided by the Mexican code is appropriate in some cases; however, the effect of the size of the columns on the contribution of joint reinforcement needs to be better understood.

Keywords: experimental study, infill wall, infilled frame, masonry wall

Procedia PDF Downloads 163
1350 Locally Produced Solid Biofuels – Carbon Dioxide Emissions and Competitiveness with Conventional Ways of Individual Space Heating

Authors: Jiri Beranovsky, Jaroslav Knapek, Tomas Kralik, Kamila Vavrova

Abstract:

The paper presents the results of research focused on the complex aspects of using intentionally grown biomass on agricultural land for the production of solid biofuels as an alternative for individual household heating. The study primarily deals with the analysis of the CO2 emissions of the biomass logistics cycle for the production of energy pellets. Growing, harvesting, transport and storage are evaluated in the pellet production cycle. The aim is also to take into account the consumption profile over the year in terms of heating of common family houses, which are the typical end-market segment for these fuels. It is assumed that, in family houses, bio-pellets can substitute typical fossil fuels such as brown coal, old wood-burning heating devices, and also electric boilers. One of the technologies competing with pellets is the heat pump. The results show the CO2 emissions related to the considered fuels and the technologies for their utilization. The comparative analysis covers biopellets from intentionally grown biomass, brown coal, natural gas, and electricity used in electric boilers and heat pumps. The analysis combines the CO2 emissions of the individual fuels with the costs of their utilization. The cost of biopellets from intentionally grown biomass is derived from economic models of individual energy crop plantations. At the same time, the restrictions imposed by EU legislation through the Ecodesign requirements on fuels, combustion equipment and NOx emissions are discussed. Preliminary results of the analyses show that, to achieve the competitiveness of pellets produced from intentionally grown biomass, it would be necessary either to significantly increase the ecological tax on coal (from about 0.3 to 3-3.5 EUR/GJ) or to multiply the agricultural subsidy per area. In addition to the Czech Republic, the results are also relevant for other countries, such as Bulgaria and Poland, which also have a high proportion of solid fuels in household heating.

Keywords: CO2 emissions, heating costs, energy crop, pellets, brown coal, heat pumps, economical evaluation

Procedia PDF Downloads 98
1349 Fuzzy Decision Making to the Construction Project Management: Glass Facade Selection

Authors: Katarina Rogulj, Ivana Racetin, Jelena Kilic

Abstract:

In this study, the fuzzy logic approach (FLA) was developed for construction project management (CPM) under uncertainty and duality. The focus was on decision making in selecting the type of glass facade for a residential-commercial building in the main design. The adoption of fuzzy sets makes it possible to reflect construction managers’ level of confidence in subjective judgments, and thus robustness of the system can be achieved. An α-cuts method was utilized for discretizing the fuzzy sets in the FLA. This method can carry all uncertain information through the optimization process, taking the values of this information into account. Furthermore, the FLA provides in-depth analyses of diverse policy scenarios related to various levels of economic aspects when it comes to valid decision making in construction projects. The developed approach is applied to CPM to demonstrate its applicability. By analyzing glass facade materials, variants were defined. The development of the FLA for CPM involved the relevant construction project stakeholders, who took part in defining the criteria used to evaluate each variant. Using the fuzzy Decision-Making Trial and Evaluation Laboratory (DEMATEL) method, a comparison of the glass facade variants was conducted. In this way, a ranking of the variants according to their priority for inclusion in the main design is obtained. The concept was tested on a residential-commercial building in the city of Rijeka, Croatia. The newly developed methodology was then compared with the existing one. The aim of the research was to define an approach that will improve current judgments and decisions regarding the material selection of building facades, one of the most important architectural and engineering tasks in the main design. The advantage of the new methodology compared to the old one is that it includes the subjective side of the managers’ decisions as an inevitable factor in every decision-making process. The proposed approach can help construction project managers to identify the desired type of glass facade according to their preferences and practical conditions, as well as facilitate in-depth analyses of the trade-offs between economic efficiency and architectural design.
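As a rough sketch of how a fuzzy DEMATEL ranking can be computed, the example below defuzzifies triangular fuzzy judgments by their centroid and then runs the crisp DEMATEL steps (normalization, total-relation matrix, prominence and relation). The judgment values are invented, and the defuzzify-first simplification differs from more elaborate fuzzy DEMATEL variants (e.g., CFCS).

```python
import numpy as np

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number (l, m, u)."""
    l, m, u = tfn
    return (l + m + u) / 3.0

def dematel(direct):
    """Crisp DEMATEL: normalize the direct-influence matrix, compute the
    total-relation matrix T = D (I - D)^-1, then prominence and relation."""
    A = np.asarray(direct, dtype=float)
    D = A / A.sum(axis=1).max()
    T = D @ np.linalg.inv(np.eye(len(A)) - D)
    prominence = T.sum(axis=1) + T.sum(axis=0)   # overall importance of each criterion
    relation = T.sum(axis=1) - T.sum(axis=0)     # net cause (+) / effect (-)
    return T, prominence, relation

# Invented expert judgments for three criteria as triangular fuzzy numbers (0-4 scale)
fuzzy_judgments = [[(0, 0, 0), (2, 3, 4), (1, 2, 3)],
                   [(1, 2, 3), (0, 0, 0), (2, 3, 4)],
                   [(0, 1, 2), (1, 2, 3), (0, 0, 0)]]
crisp = [[defuzzify(x) for x in row] for row in fuzzy_judgments]
T, prominence, relation = dematel(crisp)
print(prominence.round(2), relation.round(2))
```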

Keywords: construction projects management, DEMATEL, fuzzy logic approach, glass façade selection

Procedia PDF Downloads 118
1348 Impact of 6-Week Brain Endurance Training on Cognitive and Cycling Performance in Highly Trained Individuals

Authors: W. Staiano, S. Marcora

Abstract:

Introduction: It has been proposed that the acute negative effects of mental fatigue (MF) could become a training stimulus for the brain (brain endurance training, BET), allowing it to adapt and improve its ability to attenuate MF states during sport competitions. Purpose: The aim of this study was to test the efficacy of 6 weeks of BET on cognitive and cycling tests in a group of well-trained subjects. We hypothesised that the combination of BET and standard physical training (SPT) would increase cognitive capacity and cycling performance, by reducing the rating of perceived exertion (RPE) and increasing resilience to fatigue, more than SPT alone. Methods: In a randomized controlled trial design, 26 well-trained participants, after a familiarization session, cycled to exhaustion (TTE) at 80% peak power output (PPO) and, after 90 min of rest, at 65% PPO, before and after random allocation to 6 weeks of BET or an active placebo control. Cognitive performance was measured using 30 min of a Stroop colour task performed before the cycling tests. During the training period, the BET group performed a series of cognitive tasks over a total of 30 sessions (5 sessions per week), with duration increasing from 30 to 60 min per session. The placebo group engaged in breathing-relaxation training. Both groups were monitored for physical training and were naïve to the purpose of the study. Physiological and perceptual parameters (heart rate, lactate (LA) and RPE) were recorded during the cycling tests, while subjective workload (NASA TLX scale) was measured during the training. Results: Group (BET vs. placebo) x Test (pre-test vs. post-test) mixed-model ANOVAs revealed a significant interaction for performance at 80% PPO (p = .038) and at 65% PPO (p = .011). In both tests, both groups improved their TTE performance; however, the BET group improved significantly more than placebo. No significant differences were found for heart rate during the TTE cycling tests. LA did not change significantly at rest in either group. However, at completion of the 65% PPO TTE, it was significantly higher (p = 0.043) in the placebo condition than in BET. RPE measured at iso-time was significantly lower in BET (80% PPO, p = 0.041; 65% PPO, p = 0.021) compared to placebo. Cognitive results in the Stroop task showed that reaction time decreased at post-test in both groups; however, the BET group decreased significantly more (p = 0.01) than placebo, despite no differences in accuracy. During the training sessions, participants in the BET group reported, through NASA TLX questionnaires, consistently and significantly higher (p < 0.01) mental demand ratings compared to placebo. No significant differences were found for physical demand. Conclusion: The results of this study provide evidence that combining BET and SPT is more effective than SPT alone in increasing cognitive and cycling performance in well-trained endurance participants. The cognitive overload produced during the 6 weeks of BET can induce a reduction in the perception of effort at a given power output and thus improve cycling performance. Moreover, it provides evidence that including neurocognitive interventions will benefit athletes by increasing their mental resilience, without affecting their physical training load and routine.

Keywords: cognitive training, perception of effort, endurance performance, neuro-performance

Procedia PDF Downloads 106
1347 Contextual Toxicity Detection with Data Augmentation

Authors: Julia Ive, Lucia Specia

Abstract:

Understanding and detecting toxicity is an important problem to support safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These studies have analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They have also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case the previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available does not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist slurs, etc.), so that context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contains labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). Regarding the contextual detection models, we posit that their poor performance is due to limitations in both the data they are trained on (the problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
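
To make the architectural idea concrete, here is a minimal PyTorch sketch of a hierarchical classifier that first encodes each utterance in a thread and then runs a sequence model over the utterance embeddings before classifying the target tweet; the layer sizes, vocabulary handling, and module names are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class HierarchicalToxicityClassifier(nn.Module):
    """Encode each utterance, then model the conversation as a sequence of utterance vectors."""
    def __init__(self, vocab_size=30000, emb_dim=128, utt_dim=256, ctx_dim=256, n_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Utterance-level encoder: bidirectional GRU over tokens.
        self.utt_encoder = nn.GRU(emb_dim, utt_dim // 2, batch_first=True, bidirectional=True)
        # Conversation-level encoder: GRU over utterance embeddings (context first, target last).
        self.ctx_encoder = nn.GRU(utt_dim, ctx_dim, batch_first=True)
        self.classifier = nn.Linear(ctx_dim, n_classes)

    def forward(self, token_ids):
        # token_ids: (batch, n_utterances, n_tokens)
        b, u, t = token_ids.shape
        tokens = self.embedding(token_ids.view(b * u, t))          # (b*u, t, emb)
        _, h = self.utt_encoder(tokens)                            # h: (2, b*u, utt_dim/2)
        utt_vecs = torch.cat([h[0], h[1]], dim=-1).view(b, u, -1)  # (b, u, utt_dim)
        _, ctx = self.ctx_encoder(utt_vecs)                        # ctx: (1, b, ctx_dim)
        return self.classifier(ctx.squeeze(0))                     # logits for the target tweet

# Example: a batch of 2 conversations, each with 3 utterances of 10 tokens.
model = HierarchicalToxicityClassifier()
dummy = torch.randint(1, 30000, (2, 3, 10))
print(model(dummy).shape)  # torch.Size([2, 2])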

Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing

Procedia PDF Downloads 150
1346 Changes in Kidney Tissue at Postmortem Magnetic Resonance Imaging Depending on the Time of Fetal Death

Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh

Abstract:

All cases of stillbirth are subject to postmortem examination, since it is necessary to establish the cause of death as well as to inform the prognosis of future pregnancies and their outcomes. Determination of the time of death, that is, the period from the time of death until the birth of the fetus, is an important issue addressed during the examination of the body of a stillborn. The time of fetal death is determined based on the assessment of the severity of maceration. The aim was to study the possibilities of postmortem magnetic resonance imaging (MRI) for determining the time of intrauterine fetal death based on the evaluation of maceration in the kidney. We conducted MRI-morphological comparisons of 7 dead fetuses (18-21 gestational weeks), 26 stillbirths (22-39 gestational weeks), and the bodies of 15 newborns who died at the age of 2 hours to 36 days. Postmortem 3T MRI was performed before autopsy. The signal intensities of the kidney tissue (SIK), pleural fluid (SIF), and external air (SIA) were determined on T1-WI and T2-WI. Macroscopic and histological signs of maceration severity and time of death were evaluated at autopsy. Based on the results of the morphological study, the degree of maceration varied from 0 to 4. In 13 cases, the time of intrauterine death was up to 6 hours; in 2 cases, 6-12 hours; in 4, 12-24 hours; in 9, 2-3 days; in 3, 1 week; and in 2, 1.5-2 weeks. In the 15 deceased newborns, signs of maceration were, naturally, absent. Based on the SIK, SIF, and SIA data from the MR tomograms, we calculated the coefficient of MR-maceration (M). The time of intrauterine death (MR-t, hours) was calculated by our formula: MR-t = 16.87 + 95.38×M² − 75.32×M. A direct positive correlation between MR-t and autopsy data was obtained for those who died at 22-40 gestational weeks with a time of death of no more than 1 week. Maceration in antenatal fetal death is characterized by changes in the T1-WI and T2-WI signals at postmortem MRI. The calculation of MR-t allows the time of intrauterine death to be defined accurately, within one week, for stillbirths at 22-40 gestational weeks. Thus, our study convincingly demonstrates that radiological methods can be used for postmortem study of the bodies, in particular the bodies of stillborns, to determine the time of intrauterine death. Postmortem MRI allows for an objective and sufficiently accurate analysis of pathological processes with the possibility of their documentation, storage, and analysis after the burial of the body.
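
For clarity, the reported regression can be expressed directly as a small Python helper; the maceration coefficient passed to it is an arbitrary illustrative value, not a measurement from the study.

def mr_t(m: float) -> float:
    """Estimated time of intrauterine death (hours) from the MR-maceration coefficient M,
    using the regression reported in the abstract: MR-t = 16.87 + 95.38*M**2 - 75.32*M."""
    return 16.87 + 95.38 * m ** 2 - 75.32 * m

# Illustrative value of the maceration coefficient (not from the study's data).
print(f"MR-t at M = 0.5: {mr_t(0.5):.1f} hours")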

Keywords: intrauterine death, maceration, postmortem MRI, stillborn

Procedia PDF Downloads 111
1345 The Ethical Imperative of Corporate Social Responsibility Practice and Disclosure by Firms in Nigeria Delta Swamplands: A Qualitative Analysis

Authors: Augustar Omoze Ehighalua, Itotenaan Henry Ogiri

Abstract:

As a mono-product economy, Nigeria relies largely on oil revenues for its foreign exchange earnings, and the exploration activities of firms operating in the Niger Delta region have left in their wake tales of environmental degradation, poverty, and misery. This has, no doubt, created corporate social responsibility issues in the region. The focus of this research is the critical evaluation of the ethical response to corporate social responsibility (CSR) practice by firms operating in the Niger Delta swamplands. While CSR is becoming more popular in developed societies, with effective practice guidelines and reporting benchmarks, there is a relatively low level of awareness in Nigeria, and existing international guidelines are only selectively applicable to supporting CSR practice there. This study, having identified the lack of a CSR institutional framework, attempts to develop an ethically driven CSR transparency benchmark embedded within a regulatory framework based on international best practices. The research adopts a qualitative methodology and makes use of primary data collected through semi-structured interviews conducted across the six core states of the Niger Delta region. More importantly, the study adopts an inductive, interpretivist philosophical paradigm that reveals deep phenomenological insights into what local communities, civil society, and government officials consider a good ethical benchmark for responsible CSR practice by organizations. Institutional theory provides the main theoretical foundation, complemented by stakeholder and legitimacy theories. The NVivo software was used to analyze the data collected. This study shows that ethical responsibility is lacking in CSR practice by firms in the Niger Delta region of Nigeria. Furthermore, the findings indicate that environmental, health and safety, human rights, and labour issues are fundamental to developing an effective CSR practice guideline for Nigeria. The study has implications for public policy formulation as well as for managerial practice.

Keywords: corporate social responsibility, CSR, ethics, firms, Niger-Delta Swampland, Nigeria

Procedia PDF Downloads 92
1344 Efficacy of Botulinum Toxin in Alleviating Pain Syndrome in Stroke Patients with Upper Limb Spasticity

Authors: Akulov M. A., Zaharov V. O., Jurishhev P. E., Tomskij A. A.

Abstract:

Introduction: Spasticity is a severe consequence of stroke, leading to profound disability, decreased quality of life, and decreased rehabilitation efficacy [4]. Spasticity is often associated with pain syndrome, arising from joint damage of the paretic limbs (postural arthropathy) or painful spasm of the paretic limb muscles. It is generally accepted that injection of botulinum toxin into a spastic muscle decreases muscle tone and improves the range of motion of the paretic limb, which is accompanied by pain alleviation. Study aim: To evaluate the change in pain syndrome intensity after injections of botulinum toxin A (Xeomin) in stroke patients with upper limb spasticity. Patients and methods: Twenty-one patients aged 47-74 years were evaluated. Inclusion criteria were: acute stroke 4-7 months before inclusion in the study, leading to spasticity of the wrist and/or finger flexors, elbow flexors, or forearm pronators, associated with severe pain syndrome. Patients received Xeomin as monotherapy at 90-300 U, according to the spasticity pattern. Efficacy evaluation was performed using the Ashworth scale, the Disability Assessment Scale (DAS), the caregiver burden scale, and a global treatment benefit assessment at weeks 2, 4, 8, and 12. The efficacy criterion was the decrease in pain by week 4 on the PQLS and VAS. Results: The study revealed a significant improvement in the measured indices after 4 weeks of treatment, which persisted until week 12. Xeomin is effective in reducing the muscle tone of the wrist, finger, and elbow flexors and the forearm pronators. By the 4th week of treatment, we observed a significant improvement on the DAS (p < 0.05), the Ashworth scale (1-2 points) in all patients (p < 0.05), and the caregiver burden scale (p < 0.05). A significant decrease in pain by the 4th week of treatment was observed on both the PQLS (p < 0.05) and VAS (p < 0.05). No adverse effects were registered. Conclusion: Xeomin is an effective treatment for pain syndrome in upper limb spasticity after stroke. Xeomin treatment leads to a significant improvement on the PQLS and VAS.

Keywords: botulinum toxin, pain syndrome, spasticity, stroke

Procedia PDF Downloads 295
1343 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes almost as much time to data quality processes as to the rest of the work, while a data project without data quality awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because the expectations differ according to the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and therefore take a long time to discuss and define quality expectations and measurements. Therefore, this study aimed at developing meaningful indicators to describe the overall data quality of each dataset, allowing quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we identified indicators that can be measured directly from the data within the datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required in the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed set of indicators and measurements contained ten indicators; (2) based on statistical characteristics, the ten indicators can be reduced to four dimensions; and (3) the resulting composite indicator, the SDQI, can describe the overall quality of each dataset and can separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall, meaningful description of data quality within datasets. The SDQI can be used to assess all data in a data project, to estimate effort, and to set priorities. It also works well with agile methods, by using the SDQI for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
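
As an illustration of the aggregation idea, here is a minimal Python sketch that reduces a set of indicator scores to dimensions with factor analysis, combines them into a weighted composite score, and bins datasets into three quality levels; the indicator matrix, weights, and thresholds are illustrative assumptions, not those of the study.

import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Hypothetical matrix: 100 datasets x 10 data quality indicators, each scored in [0, 1]
# (e.g., completeness, uniqueness, validity, ... - names are placeholders).
indicator_scores = rng.uniform(size=(100, 10))

# Aggregate the ten indicators into four dimensions via factor analysis.
fa = FactorAnalysis(n_components=4, random_state=0)
dimensions = fa.fit_transform(indicator_scores)

# Weight the dimensions (here by assumed preparation-effort / usability weights).
weights = np.array([0.4, 0.3, 0.2, 0.1])

# Aggregate the dimensions into a single composite score per dataset.
composite = dimensions @ weights

# Bin into three quality levels using illustrative tercile thresholds.
low, high = np.quantile(composite, [1 / 3, 2 / 3])
labels = np.where(composite >= high, "Good Quality",
                  np.where(composite >= low, "Acceptable Quality", "Poor Quality"))
print(dict(zip(*np.unique(labels, return_counts=True))))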

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 120
1342 Evaluation of Indoor Radon as Air Pollutant in Schools and Control of Exposure of the Children

Authors: Kremena Ivanona, Bistra Kunovska, Jana Djunova, Desislava Djunakova, Zdenka Stojanovska

Abstract:

In recent decades, the general public has become increasingly interested in the impact of air pollution on health. Currently, numerous studies aim to identify pollutants in the indoor environments where people carry out their daily activities. Indoor pollutants can be of both natural and artificial origin. With regard to natural pollutants, special attention is paid to natural radioactivity. In recent years, radon has been one of the most studied indoor pollutants because it makes the greatest contribution to human exposure to natural radionuclides. It is a known fact that lung cancer can be caused by radon exposure, which is the second most important risk factor for the disease after smoking. The main objective of the study under the National Science Fund of Bulgaria, in the framework of grant No КП-06-Н23/1/07.12.2018, is to evaluate indoor radon as an important air pollutant in school buildings in order to reduce children's exposure. The measurements were performed in 48 schools located in 55 buildings in one Bulgarian administrative district (Kardjaly). Nuclear track detectors (CR-39) were used for the measurements. The arithmetic and geometric means of the radon concentrations were AM = 140 Bq/m3 and GM = 117 Bq/m3, respectively. In 51 school rooms, the radon level was greater than 200 Bq/m3, and in 28 rooms, located in 17 school buildings (30% of the investigated buildings), it exceeded the national reference level of 300 Bq/m3 defined in the Bulgarian ordinance on radiation protection. The statistically significant difference in radon concentration between municipalities (KW, p < 0.001) showed that the most likely reason for the differences between the groups is the geographical location of the buildings and the possible influence of the geological composition. The combined effect of the year of construction (technical condition of the buildings) and the energy efficiency measures was considered. The radon concentrations in buildings where energy efficiency measures have been implemented are higher than those in buildings where they have not been performed. This result confirms the need to investigate radon levels before conducting energy efficiency measures in buildings. Corrective measures for reducing radon levels have been recommended in school buildings with high radon levels in order to decrease the children's exposure.
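
A minimal Python sketch of the kind of summary computed here (arithmetic and geometric means, and counts of rooms above the reference levels); the concentration values are randomly generated placeholders, not the measured data.

import numpy as np

rng = np.random.default_rng(1)
# Placeholder radon concentrations (Bq/m3) for a set of school rooms, log-normally distributed.
radon = rng.lognormal(mean=np.log(117), sigma=0.6, size=200)

am = radon.mean()                     # arithmetic mean
gm = np.exp(np.log(radon).mean())     # geometric mean
above_200 = int((radon > 200).sum())
above_300 = int((radon > 300).sum())  # rooms exceeding the national reference level

print(f"AM = {am:.0f} Bq/m3, GM = {gm:.0f} Bq/m3, >200: {above_200} rooms, >300: {above_300} rooms")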

Keywords: air pollution, indoor radon, children exposure, schools

Procedia PDF Downloads 154
1341 Evaluation of the Self-Organizing Map and the Adaptive Neuro-Fuzzy Inference System Machine Learning Techniques for the Estimation of Crop Water Stress Index of Wheat under Varying Application of Irrigation Water Levels for Efficient Irrigation Scheduling

Authors: Aschalew C. Workneh, K. S. Hari Prasad, C. S. P. Ojha

Abstract:

The crop water stress index (CWSI) is a cost-effective, non-destructive, and simple technique for tracking the onset of crop water stress. This study investigated the feasibility of the CWSI derived from canopy temperature for detecting the water status of wheat crops. Artificial intelligence (AI) techniques have become increasingly popular in recent years for determining the CWSI. In this study, the performance of two AI techniques, the adaptive neuro-fuzzy inference system (ANFIS) and self-organizing maps (SOM), is compared in determining the CWSI of wheat crops. Field experiments were conducted for varying irrigation water applications during two seasons, in 2022 and 2023, at the irrigation field laboratory of the Civil Engineering Department, Indian Institute of Technology Roorkee, India. The ANFIS- and SOM-simulated CWSI values were compared with the experimentally calculated CWSI (EP-CWSI). Multiple regression analysis was used to determine the upper and lower CWSI baselines. The upper CWSI baseline was found to be a function of crop height and wind speed, while the lower CWSI baseline was a function of crop height, air vapor pressure deficit, and wind speed. The performance of ANFIS and SOM was compared based on mean absolute error (MAE), mean bias error (MBE), root mean squared error (RMSE), index of agreement (d), Nash-Sutcliffe efficiency (NSE), and coefficient of determination (R²). Both models successfully estimated the CWSI of the wheat crop, with high correlation coefficients and low statistical errors. However, ANFIS (R² = 0.81, NSE = 0.73, d = 0.94, RMSE = 0.04, MAE = 0.00-1.76, and MBE = -2.13-1.32) outperformed the SOM model (R² = 0.77, NSE = 0.68, d = 0.90, RMSE = 0.05, MAE = 0.00-2.13, and MBE = -2.29-1.45). Overall, the results suggest that ANFIS is a more reliable tool than SOM for accurately determining the CWSI in wheat crops.
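
To make the quantities concrete, here is a minimal Python sketch that computes an empirical CWSI from canopy-air temperature differences and the upper/lower baselines, and evaluates model predictions with the error statistics listed above (MAE, MBE, RMSE, d, NSE, R²); the numerical values are illustrative placeholders, and the baseline treatment is an assumption in the spirit of the empirical (Idso-type) CWSI rather than the study's fitted equations.

import numpy as np

def cwsi(dT, dT_lower, dT_upper):
    """Empirical CWSI: position of the canopy-air temperature difference dT between
    the lower (non-stressed) and upper (fully stressed) baselines."""
    return (dT - dT_lower) / (dT_upper - dT_lower)

def evaluate(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    mae = np.mean(np.abs(err))
    mbe = np.mean(err)
    rmse = np.sqrt(np.mean(err ** 2))
    nse = 1 - np.sum(err ** 2) / np.sum((obs - obs.mean()) ** 2)  # Nash-Sutcliffe efficiency
    d = 1 - np.sum(err ** 2) / np.sum((np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)  # index of agreement
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2
    return {"MAE": mae, "MBE": mbe, "RMSE": rmse, "NSE": nse, "d": d, "R2": r2}

# Illustrative canopy-air temperature differences (degC) and baseline values.
dT = np.array([1.2, 2.5, 3.8, 0.4, 4.6])
obs_cwsi = cwsi(dT, dT_lower=-1.0, dT_upper=5.0)
sim_cwsi = obs_cwsi + np.random.default_rng(2).normal(0, 0.03, obs_cwsi.size)  # mock model output
print(evaluate(obs_cwsi, sim_cwsi))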

Keywords: adaptive neuro-fuzzy inference system, canopy temperature, crop water stress index, self-organizing map, wheat

Procedia PDF Downloads 35
1340 Global Supply Chain Tuning: Role of National Culture

Authors: Aleksandr S. Demin, Anastasiia V. Ivanova

Abstract:

Purpose: The current economy tends to increase the influence of digital technologies and diminish the human role in management. However, it is impossible to deny that a person still leads a business with their own set of values and priorities. The article aims to link the peculiarities of national culture with the characteristics of the supply chain, using the quantitative measures of national culture obtained by scholars of comparative management (Hofstede, House, and others). Design/Methodology/Approach: The research is based on secondary cross-country comparison data obtained by Prof. Hofstede and in the GLOBE project. These data are used to design different aspects of the supply chain at both the cross-functional and inter-organizational levels. The connection between a range of principles in general management (role assignment, customer service prioritization, coordination of supply chain partners) and in comparative management (acknowledgment of the national peculiarities of the country in which the company operates) is shown through economic and mathematical models, mainly linear programming models. Findings: The combination of the team management wheel concept, the business processes of the global supply chain, and the national culture characteristics lets a transnational corporation form a supply chain crew balanced in costs, functions, and personality. To elaborate an effective customer service policy and logistics strategy for the distribution of goods and services in the country under review, two approaches are offered. The first approach relies exclusively on the customers' interests in the place of operation, while the second also takes into account the position of the transnational corporation and its previous experience in order to reconcile the organizational and national cultures. It is advised to assess the effect of integration practice on the achievement of a specific supply chain goal in a specific location via the type of correlation (positive, negative, none) and the values of the national culture indices. Research Limitations: The models developed are intended to be used by transnational companies and business firms located in several nationally different areas. Some of the inputs used to illustrate the application of the methods offered are simulated; the numerical measurements should therefore be used with caution. Practical Implications: The research can be of great interest to supply chain managers who are responsible for engineering global supply chains in a transnational corporation and for subsequent international business activities. The methods, tools, and approaches suggested can also be used by top managers searching for new sources of competitiveness and may be suitable for all staff members interested in national culture traits. Originality/Value: The elaborated methods of decision making with regard to the national environment provide a mathematical and economic basis for finding a comprehensive solution.
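
As a toy illustration of the optimization flavour of these models, the following Python sketch assigns supply chain roles to candidate country teams so that total cost minus a culture-fit bonus is minimized; the role names, cost figures, culture-fit scores, and weight are invented placeholders, not values from the study.

import numpy as np
from scipy.optimize import linear_sum_assignment

roles = ["planning", "sourcing", "distribution", "customer service"]
teams = ["team_A", "team_B", "team_C", "team_D"]

# Hypothetical operating cost of giving each role (row) to each team (column), arbitrary units.
cost = np.array([
    [4.0, 6.0, 5.5, 7.0],
    [5.0, 4.5, 6.0, 6.5],
    [6.0, 5.0, 4.0, 5.5],
    [7.0, 6.5, 5.0, 4.5],
])

# Hypothetical culture-fit scores (e.g., derived from Hofstede/GLOBE indices); higher is better.
fit = np.array([
    [0.9, 0.4, 0.6, 0.3],
    [0.5, 0.8, 0.4, 0.6],
    [0.4, 0.6, 0.9, 0.5],
    [0.3, 0.5, 0.6, 0.8],
])

# Net objective: cost minus a weighted culture-fit bonus; solve the resulting assignment problem.
weight = 2.0
row, col = linear_sum_assignment(cost - weight * fit)
for r, c in zip(row, col):
    print(f"{roles[r]} -> {teams[c]} (cost {cost[r, c]}, fit {fit[r, c]})")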

Keywords: logistics integration, logistics services, multinational corporation, national culture, team management, service policy, supply chain management

Procedia PDF Downloads 91
1339 A Complex Network Approach to Structural Inequality of Educational Deprivation

Authors: Harvey Sanchez-Restrepo, Jorge Louca

Abstract:

Equity and education are a major focus of government policies around the world due to their relevance for addressing the Sustainable Development Goals launched by UNESCO. In this research, we developed a primary analysis of a data set of more than one hundred educational and non-educational factors associated with learning, coming from a census-based large-scale assessment carried out in Ecuador covering 1,038,328 students, their families, teachers, and school directors throughout 2014-2018. Each participating student was assessed by a standardized computer-based test. Learning outcomes were calibrated through item response theory with a two-parameter logistic model to obtain raw scores, which were then rescaled and synthesized into a learning index (LI). Our objective was to develop a network for modelling educational deprivation and to analyze the structure of inequality gaps, as well as their relationship with socioeconomic status, school financing, and students' ethnicity. Results from the model show that 348,270 students did not develop the minimum skills (prevalence rate = 0.215), and that Afro-Ecuadorian, Montuvio, and Indigenous students exhibited the highest prevalence, with 0.312, 0.278, and 0.226, respectively. Regarding the socioeconomic status of students (SES), the modularity classes show clearly that the system is out of equilibrium: the first decile (the poorest) exhibits a prevalence rate of 0.386, while the rate for decile ten (the richest) is 0.080, showing a strong negative relationship between learning and SES given by R = –0.58 (p < 0.001). Another interesting and unexpected result is the average weighted degree (426.9) for both private and public schools attended by Afro-Ecuadorian students, groups that obtained the highest PageRank (0.426), indicating that they suffer the highest educational deprivation, attributable to discrimination, even when belonging to the richest decile. The model also identified the factors that explain deprivation, through the highest PageRank and the greatest degree of connectivity for the first decile: a financial bonus for attending school, computer access, internet access, number of children, living with at least one parent, access to books, reading books, phone access, time for homework, teachers arriving late, paid work, positive expectations about schooling, and mother's education. These results provide accurate and clear knowledge about the variables affecting the poorest students and the inequalities they produce, from which needs profiles can be defined, as well as actions on the factors that can be influenced. Finally, these results confirm that network analysis is fundamental for educational policy, especially when linking reliable microdata with social macro-parameters, because it allows us to infer how gaps in educational achievement are driven by students' context when assigning resources.
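
Two of the computational steps mentioned, calibrating responses with a two-parameter logistic (2PL) IRT model and ranking factors with PageRank on a network, are sketched below in Python; the item parameters, abilities, and the small factor graph are invented placeholders, not the study's data.

import numpy as np
import networkx as nx

def p_correct_2pl(theta, a, b):
    """Probability of a correct response under the 2PL model:
    P(theta) = 1 / (1 + exp(-a * (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative abilities and item parameters (discrimination a, difficulty b).
theta = np.array([-1.0, 0.0, 1.5])
a, b = 1.2, 0.3
print(p_correct_2pl(theta, a, b))

# Illustrative weighted network of deprivation-related factors.
G = nx.Graph()
G.add_weighted_edges_from([
    ("deprivation", "internet access", 0.8),
    ("deprivation", "books access", 0.6),
    ("deprivation", "paid work", 0.5),
    ("internet access", "computer access", 0.9),
    ("books access", "mother education", 0.4),
])
ranks = nx.pagerank(G, weight="weight")
print(sorted(ranks.items(), key=lambda kv: kv[1], reverse=True))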

Keywords: complex network, educational deprivation, evidence-based policy, large-scale assessments, policy informatics

Procedia PDF Downloads 105