Search results for: health system services
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26853

1053 Getting It Right Before Implementation: Using Simulation to Optimize Recommendations and Interventions After Adverse Event Review

Authors: Melissa Langevin, Natalie Ward, Colleen Fitzgibbons, Christa Ramsey, Melanie Hogue, Anna Theresa Lobos

Abstract:

Description: Root Cause Analysis (RCA) is used by health care teams to examine adverse events (AEs) to identify causes, which then leads to recommendations for prevention. Despite widespread use, RCA has limitations. Best practices have not been established for implementing recommendations or tracking the impact of interventions after AEs. During phase 1 of this study, we used simulation to analyze two fictionalized AEs that occurred in hospitalized paediatric patients to identify and understand how the errors occurred, and generated recommendations to mitigate and prevent recurrences. Scenario A involved an error of commission (inpatient drug error), and Scenario B involved detecting an error that had already occurred (critical care drug infusion error). The recommendations generated were: improved drug labeling, specialized drug kits, alert signs, and clinical checklists. Aim: To use simulation to optimize interventions recommended after critical event analysis, prior to implementation in the clinical environment. Methods: Suggested interventions from Phase 1 were designed and tested through scenario simulation in the clinical environment (medicine ward or pediatric intensive care unit). Each scenario was simulated 8 times. Recommendations were tested using different, voluntary teams, and each scenario was debriefed to understand why the error was repeated despite interventions and how interventions could be improved. Interventions were modified with subsequent simulations until recommendations were felt to have an optimal effect and data saturation was achieved. Along with concrete suggestions for design and process change, qualitative data pertaining to employee communication and hospital standard work was collected and analyzed. Results: Each scenario had a total of three interventions to test. In scenario 1, the error was reproduced in the initial two iterations and mitigated following key intervention changes. In scenario 2, the error was identified immediately in all cases where the intervention checklist was utilized properly. Independently of intervention changes and improvements, the simulation was beneficial in identifying which of these should be prioritized for implementation, and it highlighted that even the potential solutions most frequently suggested by participants did not always translate into error prevention in the clinical environment. Conclusion: We conclude that interventions that help to change process (epinephrine kit or mandatory checklist) were more successful at preventing errors than passive interventions (signage, changes in memory aids). Given that even the most successful interventions needed modifications and subsequent re-testing, simulation is key to optimizing suggested changes. Simulation is a safe, practice-changing modality for institutions to use prior to implementing recommendations from RCA following AE reviews.

Keywords: adverse events, patient safety, pediatrics, root cause analysis, simulation

Procedia PDF Downloads 150
1052 Dynamic-cognition of Strategic Mineral Commodities; An Empirical Assessment

Authors: Carlos Tapia Cortez, Serkan Saydam, Jeff Coulton, Claude Sammut

Abstract:

Strategic mineral commodities (SMC), both energy commodities and metals, have long been fundamental for human beings. There is a strong, long-run relation between the mineral resources industry and society's evolution, with the provision of primary raw materials becoming one of the most significant drivers of economic growth. Given mineral resources' relevance for the entire economy and society, an understanding of SMC market behaviour to simulate price fluctuations has become crucial for governments and firms. As with any human activity, SMC price fluctuations are affected by economic, geopolitical, environmental, technological and psychological issues, where cognition has a major role. Cognition is defined as the capacity to store information in memory, process it, and make decisions for problem-solving or human adaptation. Thus, it has a significant role in systems that exhibit dynamic equilibrium through time, such as economic growth. Cognition not only allows understanding past behaviours and trends in SMC markets but also supports future expectations of demand/supply levels and prices, although speculation is unavoidable. Technological development may also be defined as a cognitive system. Since the Industrial Revolution, technological developments have had a significant influence on SMC production costs and prices, likewise allowing co-integration between commodities and market locations. This suggests a close relation between structural breaks, technology and price evolution. SMC price forecasting has commonly been addressed by econometric and Gaussian-probabilistic models. Econometric models may incorporate the relationship between variables; however, they are static, which leads to an incomplete picture of price evolution through time. Gaussian-probabilistic models may evolve through time; however, price fluctuations are addressed under the assumption of random behaviour and normal distribution, which seems to be far from the real behaviour of both markets and prices. Random fluctuation ignores the evolution of market events and the technical and temporal relation between variables, giving the illusion of controlled future events. The normal distribution underestimates price fluctuations by using restricted ranges, curtailing decision making to a pre-established space. A proper understanding of SMC price dynamics, taking into account the historical-cognitive relation between economic, technological and psychological factors over time, is fundamental in attempting to simulate prices. The aim of this paper is to discuss the SMC market cognition hypothesis and empirically demonstrate its dynamic-cognitive capacity. Three of the largest and most traded SMCs (oil, copper and gold) will be assessed to examine economic, technological and psychological cognition, respectively.
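As a concrete illustration of the Gaussian assumption the abstract critiques, the sketch below simulates a commodity price path with i.i.d. normally distributed log-returns. This is an illustrative toy model, not the authors' method; all parameter values are hypothetical.

```python
import math
import random

def simulate_price_path(p0, mu, sigma, steps, seed=42):
    """One price path under the Gaussian random-walk assumption:
    log-returns drawn i.i.d. from a normal distribution. Real SMC
    prices show fat tails and structural breaks this model misses."""
    rng = random.Random(seed)
    path = [p0]
    for _ in range(steps):
        r = rng.gauss(mu, sigma)        # normally distributed log-return
        path.append(path[-1] * math.exp(r))
    return path

# Hypothetical parameters: 250 trading days, 2% daily volatility.
path = simulate_price_path(p0=60.0, mu=0.0, sigma=0.02, steps=250)
```

Because every return is drawn from the same narrow normal range, extreme jumps of the kind seen in real oil, copper, or gold markets are effectively ruled out, which is exactly the underestimation of fluctuations the abstract points to.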

Keywords: commodity price simulation, commodity price uncertainties, dynamic-cognition, dynamic systems

Procedia PDF Downloads 458
1051 Lamivudine Continuation/Tenofovir Add-on Adversely Affects Treatment Response among Lamivudine Non-Responder HIV-HBV Co-Infected Patients from Eastern India

Authors: Ananya Pal, Neelakshi Sarkar, Debraj Saha, Dipanwita Das, Subhashish Kamal Guha, Bibhuti Saha, Runu Chakravarty

Abstract:

Presently, tenofovir disoproxil fumarate (TDF) is the most effective anti-viral agent for the treatment of hepatitis B virus (HBV) in individuals co-infected with HIV and HBV, as TDF suppresses both wild-type and lamivudine (3TC)-resistant HBV. However, suboptimal response to TDF was recently reported in HIV-HBV co-infected individuals with prior 3TC therapy from different countries. The incidence of 3TC-resistant HBV strains is quite high in HIV-HBV co-infected patients experiencing long-term anti-retroviral therapy (ART) in eastern India. In spite of this risk, most patients with long-term 3TC treatment are continued on the same anti-viral agent in this country. Only a few have received TDF in addition to 3TC in the ART regimen, since TDF became available in India for the treatment of HIV-infected patients in 2012. In this preliminary study, we investigated the virologic and biochemical parameters among HIV-HBV co-infected patients who are non-responders to 3TC treatment during the continuation of 3TC or TDF add-on to 3TC in their ART regimen. Fifteen HIV-HBV co-infected patients who experienced long-term 3TC (mean duration 36.87 ± 24.08 months) were identified with high HBV viremia (> 20,000 IU/ml) or harbouring 3TC-resistant HBV. These patients, receiving ART from the School of Tropical Medicine Kolkata, the main ART centre in eastern India, were followed up semi-annually for the next three visits. Different virologic parameters were studied, including quantification of plasma HBV load by real-time PCR, detection of hepatitis B e antigen (HBeAg) by commercial ELISA, and detection of anti-viral resistance mutations by sequencing. During the three follow-ups, 86%, 47%, and 43% of study subjects were on 3TC mono-therapy (mean treatment duration 41.54±18.84, 49.67±11.67, and 54.17±12.37 months, respectively), whereas 14%, 53%, and 57% experienced TDF in addition to 3TC (mean treatment duration 4.5±2.12, 16.56±11.06, and 23±4.07 months, respectively). Mean CD4 cell count in patients receiving 3TC tended to be lower during the third follow-up as compared to the first and the second [520.67±380.30 (1st), 454.8±196.90 (2nd), and 397.5±189.24 (3rd) cells/mm³], and a similar trend was seen in patients experiencing TDF in addition to 3TC [334.5±330.218 (1st), 476.5±194.25 (2nd), and 461.17±269.89 (3rd) cells/mm³]. Serum HBV load increased during successive follow-ups of patients on 3TC mono-therapy. Initiation of TDF lowered serum HBV load among 3TC non-responders at the time of the second visit (< 2,000 IU/ml); interestingly, during the third follow-up, mean HBV viremia increased by >1 log IU/ml (mean 3.56±2.84 log IU/ml). Persistence of 3TC-resistant double and triple mutations was also observed in both treatment regimens. Mean serum alanine aminotransferase remained elevated in these patients during this follow-up study. Persistence of high HBV viraemia and 3TC-resistant HBV mutations during the continuation of 3TC might pose a major public health threat in India. The inclusion of TDF in the ART regimen of 3TC non-responder HIV-HBV co-infected patients showed adverse treatment response in terms of virologic and biochemical parameters. Therefore, serious attention is necessary for proper management of long-term 3TC-experienced HIV-HBV co-infected patients with high HBV viraemia or 3TC-resistant HBV mutants in India.

Keywords: HBV, HIV, TDF, 3TC-resistant

Procedia PDF Downloads 374
1050 Assessment Environmental and Economic of Yerba Mate as a Feed Additive on Feedlot Lamb

Authors: Danny Alexander R. Moreno, Gustavo L. Sartorello, Yuli Andrea P. Bermudez, Richard R. Lobo, Ives Claudio S. Bueno, Augusto H. Gameiro

Abstract:

Meat production is a significant sector for Brazil's economy; however, the agricultural segment has been criticized for its negative impacts on the environment, which contribute to climate change. Therefore, the implementation of nutritional strategies that can improve the environmental performance of livestock is essential. This research aimed to estimate the environmental impact and profitability of the use of yerba mate extract (Ilex paraguariensis) as an additive in the feeding of feedlot lambs. Thirty-six castrated male lambs (average weight of 23.90 ± 3.67 kg and average age of 75 days) were randomly assigned to four experimental diets with different levels of inclusion of yerba mate extract (0, 1, 2, and 4%) on a dry matter basis. The animals were confined for fifty-three days and fed a 60:40 corn silage to concentrate ratio. As an indicator of environmental impact, the carbon footprint (CF) was measured as kg of CO₂ equivalent (CO₂-eq) per kg of body weight produced (BWP). Greenhouse gas (GHG) emissions such as methane (CH₄) from enteric fermentation were calculated using the sulfur hexafluoride (SF₆) tracer gas technique, while CH₄ and nitrous oxide (N₂O) emissions from feces and urine, and carbon dioxide (CO₂) emissions from concentrate and silage processing, were estimated using the Intergovernmental Panel on Climate Change (IPCC) methodology. To estimate profitability, the gross margin was used, which is the total revenue minus the total cost; the latter is composed of the purchase of animals and feed. The boundaries of this study considered only the lamb fattening system. Enteric CH₄ emission from the lambs was the largest source of on-farm GHG emissions (47%-50%), followed by CH₄ and N₂O emissions from manure (10%-20%) and CO₂ emissions from the concentrate, silage, and fossil energy (5%-17%). The treatment that generated the least environmental impact was the group with 4% yerba mate extract (YME), which showed a 3% reduction in total GHG emissions relative to the control (1462.5 and 1505.5 kg CO₂-eq, respectively). However, the scenario with 1% YME showed a 7% increase in emissions compared to the control group. Regarding CF, the treatment with 4% YME had the lowest value (4.1 kg CO₂-eq/kg BW) compared with the other groups. Nevertheless, although the 4% YME inclusion scenario showed the lowest CF, the gross margin decreased by 36% compared to the control group (0% YME), due to the cost of YME as a feed additive. The results showed that the extract has potential for use in reducing GHG emissions. However, the cost of implementing this input as a mitigation strategy increased the production cost. Therefore, it is important to develop policy strategies that help reduce the acquisition costs of inputs that contribute to the environmental and economic benefit of the livestock sector.
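The two indicators used in this study reduce to simple arithmetic. The sketch below shows both; the example values are made up for illustration and are not data from the study.

```python
def carbon_footprint(total_ghg_kg_co2eq, body_weight_produced_kg):
    """Carbon footprint (CF): kg CO2-eq per kg of body weight produced."""
    return total_ghg_kg_co2eq / body_weight_produced_kg

def gross_margin(total_revenue, animal_purchase_cost, feed_cost):
    """Gross margin: total revenue minus total cost, where cost is
    animal purchase plus feed, matching the study's system boundary."""
    return total_revenue - (animal_purchase_cost + feed_cost)

# Hypothetical example values, not data from the study:
cf = carbon_footprint(400.0, 100.0)        # 4.0 kg CO2-eq/kg
gm = gross_margin(1000.0, 600.0, 250.0)    # 150.0
```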

Keywords: meat production, natural additives, profitability, sheep

Procedia PDF Downloads 137
1049 Wood as a Climate Buffer in a Supermarket

Authors: Kristine Nore, Alexander Severnisen, Petter Arnestad, Dimitris Kraniotis, Roy Rossebø

Abstract:

Natural materials like wood absorb and release moisture; thus, wood can buffer the indoor climate. When used wisely, this buffering potential can counteract the outdoor climate's influence on the building. The mass of moisture involved in the buffering is defined as the potential hygrothermal mass, which can serve as energy storage in a building. This works like a natural heat pump, where the moisture actively damps the diurnal changes. In Norway, the ability of wood to act as a climate-buffering material is being tested in several buildings with extensive use of wood, including supermarkets. This paper defines the potential of hygrothermal mass in a supermarket building, including the chosen ventilation strategy and how the climate impact of the building is reduced. The building is located above the Arctic Circle, 50 m from the coastline, in Valnesfjord. Built in 2015, it has a shopping area of 975 m², including toilets and the entrance. The climate of the area is polar according to the Köppen classification, but the supermarket still needs cooling on hot summer days. In order to contribute to the total energy balance, wood needs dynamic influence to activate its hygrothermal mass. Drying and moistening of the wood are energy intensive, and this energy potential can be exploited. Examples are using solar heat for drying instead of heating the indoor air, and using raw air with high enthalpy that allows dry wooden surfaces to absorb moisture and release latent heat. Weather forecasts are used to define the need for future cooling or heating; thus, the potential energy buffering of the wood can be optimized with intelligent ventilation control. The ventilation control in Valnesfjord includes the weather forecast and historical data: a five-day forecast and a two-day history. This is to prevent adjustments to smaller weather changes. The ventilation control has three regimes. During summer, moisture is retained to damp solar radiation through drying. In the wintertime, moist air is let into the shopping area to contribute to the heating. When the temperature is lowered during the night, the moisture absorbed in the wood slows down the cooling. The ventilation system is shut down during the closing hours of the supermarket in this period. During autumn and spring, a regime of either storing moisture or drying out according to the weather prognoses is applied. To ensure indoor climate quality, measurements of CO₂ and VOC overrule the low-energy control if needed. Verified simulations of the Valnesfjord building will form a basic model for investigating wood as a climate-regulating material in other climates as well. Future knowledge on the hygrothermal mass potential of materials is promising. By including the time-dependent buffer capacity of materials, building operators can achieve optimal efficiency of their ventilation systems. The use of wood as a climate-regulating material, through its potential hygrothermal mass and connected to weather prognoses, may provide up to 25% energy savings related to heating, cooling, and ventilation of a building.
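A minimal sketch of the control idea described above, assuming the seasonal regimes and the five-day-forecast/two-day-history inputs from the text; the function name, mode labels, and the autumn/spring comparison rule are hypothetical, not the installed controller's logic.

```python
def ventilation_mode(history_temps, forecast_temps, season):
    """Choose a moisture strategy from a 2-day temperature history and
    a 5-day forecast. Seasonal regimes follow the description in the
    text; the mean-comparison rule for autumn/spring is an assumption."""
    if season == "summer":
        return "retain_moisture"    # damp solar gains through drying
    if season == "winter":
        return "release_moisture"   # moist air contributes to heating
    # autumn/spring: store or dry according to the weather prognosis
    hist_mean = sum(history_temps) / len(history_temps)
    fore_mean = sum(forecast_temps) / len(forecast_temps)
    return "store_moisture" if fore_mean > hist_mean else "dry_out"
```

Averaging over the full history and forecast windows reflects the stated aim of not reacting to smaller weather changes.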

Keywords: climate buffer, energy, hygrothermal mass, ventilation, wood, weather forecast

Procedia PDF Downloads 213
1048 Designing Sustainable and Energy-Efficient Urban Network: A Passive Architectural Approach with Solar Integration and Urban Building Energy Modeling (UBEM) Tools

Authors: A. Maghoul, A. Rostampouryasouri, MR. Maghami

Abstract:

Urban design and power network planning have been gaining momentum in recent years. The integration of renewable energy with urban design is widely regarded as an increasingly important response to climate change and energy security. Through the use of passive strategies and solar integration with Urban Building Energy Modeling (UBEM) tools, architects and designers can create high-quality designs that meet the needs of clients and stakeholders. To determine the most effective ways of combining renewable energy with urban development, we analyze the relationship between urban form and renewable energy production. The procedures involved in this practice include passive solar gain (in building design and urban design), solar integration, location strategy, and 3D modeling, with a case study conducted in Tehran, Iran. The study emphasizes the importance of spatial and temporal considerations in the development of sector coupling strategies for solar power establishment in arid and semi-arid regions. The substation considered in the research consists of two parallel transformers, 13 lines, and 38 connection points. Each urban load connection point is equipped with 500 kW of solar PV capacity and 1 kWh of battery energy storage (BES) to store excess power generated from solar, injecting it into the urban network during peak periods. The simulations and analyses were performed in EnergyPlus software. Passive solar gain involves maximizing the amount of sunlight that enters a building to reduce the need for artificial lighting and heating. Solar integration involves integrating solar photovoltaic (PV) power into smart grids to reduce emissions and increase energy efficiency. Location strategy is crucial to maximize the utilization of solar PV in an urban distribution feeder. Additionally, 3D models are made in Revit; they are key components of decision-making in areas including climate change mitigation, urban planning, and infrastructure. We applied these strategies in this research, and the results show that it is possible to create sustainable and energy-efficient urban environments. Furthermore, demand response programs can be used in conjunction with solar integration to optimize energy usage and reduce strain on the power grid. This study highlights the influence of ancient Persian architecture on Iran's urban planning system, as well as the potential for reducing pollutants in building construction. Additionally, the paper explores advances in eco-city planning and development and the emerging practices and strategies for integrating sustainability goals.
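To make the storage scheme concrete, here is a hedged toy dispatch rule for a single connection point (500 kW PV, 1 kWh BES), assuming one-hour timesteps so kW and kWh are numerically interchangeable. It sketches the stated store-then-inject-at-peak behaviour; it is not the EnergyPlus model used in the paper.

```python
def dispatch(load_kw, pv_kw, soc_kwh, capacity_kwh=1.0, peak=False):
    """One-hour timestep: charge the battery with excess PV, and
    discharge into the network only during peak periods.
    Returns (net power to grid in kW, new state of charge in kWh);
    negative grid values mean imports from the network."""
    surplus = pv_kw - load_kw
    if surplus > 0:                              # excess solar: charge first
        charge = min(surplus, capacity_kwh - soc_kwh)
        soc_kwh += charge
        grid_kw = surplus - charge               # remainder exported
    else:                                        # deficit: discharge at peak
        discharge = min(-surplus, soc_kwh) if peak else 0.0
        soc_kwh -= discharge
        grid_kw = surplus + discharge
    return grid_kw, soc_kwh
```

Off-peak deficits are simply imported, so the small battery is reserved for shaving the peak, mirroring the behaviour described above.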

Keywords: energy-efficient urban planning, sustainable architecture, solar energy, sustainable urban design

Procedia PDF Downloads 74
1047 Augmenting Navigational Aids: The Development of an Assistive Maritime Navigation Application

Authors: A. Mihoc, K. Cater

Abstract:

On the bridge of a ship, officers look for visual aids to guide navigation in order to reconcile the outside world with the position communicated by the digital navigation system. Aids to navigation include lighthouses, lightships, sector lights, beacons, buoys, and others. They are designed to help navigators calculate their position, establish their course, or avoid dangers. In poor visibility and dense traffic areas, it can be very difficult to identify these critical aids. This paper presents the use of Augmented Reality (AR) as a means to present digital information about these aids to support navigation. To date, nautical mobile AR applications have been limited to the leisure industry. If proved viable, this prototype can facilitate the creation of similar applications that could help commercial officers with navigation. Adopting a user-centered design approach, the team developed the prototype based on insights from initial research carried out on board several ships. The prototype, built on a Nexus 9 tablet with Wikitude, features a head-up display of the navigational aids (lights) in the area, presented in AR, and a bird's-eye view mode presented on a simplified map. The application employs the aids-to-navigation data managed by Hydrographic Offices and the tablet's sensors: GPS, gyroscope, accelerometer, compass, and camera. Sea trials on board a Navy ship and a commercial ship revealed the end users' interest in using the application and the possibility of presenting further data in AR. The application calculates the GPS position of the ship and the bearing and distance to the navigational aids, all with a high level of accuracy. However, testing highlighted several issues which need to be resolved as the prototype is developed further. The prototype stretched the capabilities of Wikitude, loading over 500 objects during tests in a major port. This overloaded the display and required over 45 seconds to load the data. Therefore, extra filters for the navigational aids are being considered in order to declutter the screen. At night, the camera is not powerful enough to distinguish all the lights in the area. Also, magnetic interference from the bridge of the ship generated a continuous compass error in the AR display that varied between 5 and 12 degrees. The deviation of the compass was consistent over the whole testing duration, so the team is now looking at allowing users to manually calibrate the compass. For the use of AR in professional maritime contexts, further development of existing AR tools and hardware is expected to be needed. Designers will also need to apply a user-centered design approach in order to create better interfaces and display technologies for enhanced solutions to aid navigation.
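The distance and bearing computation mentioned above is standard spherical geometry; a sketch using the haversine formula follows. It illustrates the kind of calculation the prototype performs from the ship's GPS fix and an aid's charted coordinates, not the app's actual code.

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine) and initial bearing
    in degrees from the ship (lat1, lon1) to an aid (lat2, lon2)."""
    R = 6371000.0                      # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = (math.cos(p1) * math.sin(p2)
         - math.sin(p1) * math.cos(p2) * math.cos(dlon))
    bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
    return dist, bearing
```

Note that this gives a true bearing; correcting the tablet's magnetic compass reading against it is exactly where the 5-12 degree interference error reported above would surface.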

Keywords: compass error, GPS, maritime navigation, mobile augmented reality

Procedia PDF Downloads 328
1046 The Effect of Technology on Skin Development and Progress

Authors: Haidy Weliam Megaly Gouda

Abstract:

Dermatology is often a neglected specialty in low-resource settings despite the high morbidity associated with skin disease. This becomes even more significant in association with HIV infection, as dermatological conditions are more common and aggressive in HIV-positive patients. African countries have the highest HIV infection rates, and skin conditions are frequently misdiagnosed and mismanaged because of a lack of dermatological training and educational material. The frequent lack of diagnostic tests in the African setting renders basic clinical skills all the more vital. This project aimed to improve the diagnosis and treatment of skin disease in the HIV population in a district hospital in Malawi. A basic dermatological clinical tool was developed and produced in collaboration with local staff, based on available literature and data collected from clinics. The aim was to improve diagnostic accuracy and provide guidance for the treatment of skin disease in HIV-positive patients. A literature search within Embase, Medline, and Google Scholar was performed and supplemented with data obtained from attending five antiretroviral clinics. From the literature, conditions were selected for inclusion in the resource if they were described as specific to, more prevalent in, or more extensive in the HIV population, or as having more adverse outcomes when developing in HIV patients. Resource-appropriate treatment options were decided using Malawian Ministry of Health guidelines and textbooks specific to African dermatology. After the collection of data and discussion with local clinical and pharmacy staff, a list of 15 skin conditions was included, and a booklet was created using a simple layout of a picture, a diagnostic description of the disease, and treatment options. Clinical photographs were collected from local clinics (with the full consent of the patients) or from the book 'Common Skin Diseases in Africa' (permission granted if fully acknowledged and used in a not-for-profit capacity). The tool was evaluated by the local staff alongside an educational teaching session on skin disease. This project aimed to reduce uncertainty in diagnosis and provide guidance for appropriate treatment in HIV patients by gathering information into one practical and manageable resource. To further this project, we hope to review the effectiveness of the tool in practice.

Keywords: prevalence and pattern of skin diseases, impact on quality of life, rural Nepal, interventions, quality switched ruby laser, skin color river blindness, clinical signs, circularity index, grey level run length matrix, grey level co-occurrence matrix, local binary pattern, object detection, ring detection, shape identification

Procedia PDF Downloads 60
1045 Effect of Internet Addiction on Dietary Behavior and Lifestyle Characteristics among University Students

Authors: Hafsa Kamran, Asma Afreen, Zaheer Ahmed

Abstract:

Internet addiction, a mental health disorder emerging over the last two decades, is manifested by an inability to control internet use, leading to academic, social, physiological and/or psychological difficulties. The present study aimed to assess the levels of internet addiction among university students in Lahore and to explore the effects of internet addiction on their dietary behavior and lifestyle. It was an analytical cross-sectional study. Data was collected from October to December 2016 from students of four universities selected through a two-stage sampling method. There were 500 participants, and 13 questionnaires were rejected due to incomplete information. Levels of internet addiction (IA) were calculated using the Young Internet Addiction Test (YIAT). Data was also collected on students' demographics, lifestyle factors, and dietary behavior using a self-reported questionnaire. Data was analyzed using SPSS (version 21). The chi-square test was applied to evaluate the relationship between variables. Results of the study revealed that 10% of the population had severe internet addiction, while moderate internet addiction was present in 42%. Higher prevalence was found among males (11% vs. 8%), private sector university students (p = 0.008), and engineering students (p = 0.000). The lifestyle habits of internet addicts were of significantly poorer quality than those of normal users (p = 0.05). Internet addiction was found to be associated with less physical activity (p = 0.025), shorter duration of physical activity (p = 0.016), more disorganized sleep patterns (p = 0.023), shorter duration of sleep (p = 0.019), being more tired and sleepy in class (p = 0.033), and spending more time on the internet as compared to normal users. Severe and moderate internet addicts were also found to be more overweight and obese than normal users (p = 0.000). The dietary behavior of internet addicts was significantly poorer than that of normal users. Internet addicts were found to skip breakfast more often than normal users (p = 0.039). Common reasons for meal skipping were lack of time and snacking between meals (p = 0.000). They also had increased meal sizes (p = 0.05) and a habit of snacking while using the internet (p = 0.027). Fast food (p = 0.016) and fried items (p = 0.05) were the most consumed snacks, while carbonated beverages (p = 0.019) were the most consumed beverages among internet addicts. Internet addicts were found to consume fewer than the recommended daily servings of dairy (p = 0.008) and fruits (p = 0.000) and more servings of the meat group (p = 0.025) than their non-addicted counterparts. In conclusion, this study demonstrated that internet addicts have unhealthy dietary behavior and inappropriate lifestyle habits. University students should be educated regarding the importance of a balanced diet and healthy lifestyle, which are critical for effective primary prevention of numerous chronic degenerative diseases. Furthermore, it is necessary to raise awareness concerning the adverse effects of internet addiction among youth and their parents.
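The chi-square test used throughout this analysis is straightforward to sketch for a 2x2 table. This pure-Python version (Pearson statistic, no continuity correction) is illustrative; the counts are made up, not the study's data.

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table,
    e.g. addiction status (rows) against breakfast skipping (columns)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    r1, r2 = a + b, c + d
    c1, c2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, r1, c1), (b, r1, c2), (c, r2, c1), (d, r2, c2)):
        expected = row * col / n        # expected count under independence
        chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts: skip/don't-skip breakfast for addicts vs. normal users.
stat = chi_square_2x2(((10, 20), (20, 10)))
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain the p-values reported above (in practice via SPSS, as the study did, or a statistics library).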

Keywords: dietary behavior, internet addiction, lifestyle, university students

Procedia PDF Downloads 200
1044 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods

Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard

Abstract:

Non-invasive sampling is an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulation of the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. Those errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with those errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons between them. A comparison of methods on datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets were obtained from the Dryad digital repository. Three different matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes, and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed, which is not surprising given the similarity of their pairwise likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods. The different clustering system and error model of ETLM seem to lead to a more rigorous selection, although the processing time and interface friendliness of ETLM were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimations when compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity, meaning different capture rates between individuals. In these examples, homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An amplified analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be more appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimations, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
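For orientation, the quantity Capwire and BayesN estimate can be sketched with the classical two-sample Lincoln-Petersen estimator. The function below is that textbook formula, not either of the compared methods (both of which additionally model capture histories and, in Capwire's case, heterogeneous capture rates).

```python
def lincoln_petersen(n1, n2, m2):
    """Two-sample capture-recapture estimate of population size:
    n1 individuals identified in the first sampling session, n2 in
    the second, m2 seen in both. N ~= n1 * n2 / m2. Assumes equal
    capture rates (homogeneity) -- the very assumption that capture-rate
    heterogeneity violates, as discussed for BayesN above."""
    if m2 == 0:
        raise ValueError("no recaptures: estimate is undefined")
    return n1 * n2 / m2
```

For example, 50 genotypes in the first session, 40 in the second, and 10 recaptures would give an estimate of 200 individuals.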

Keywords: algorithms, genetics, matching, population

Procedia PDF Downloads 142
1043 Analysis of the Outcome of the Treatment of Osteoradionecrosis in Patients after Radiotherapy for Head and Neck Cancer

Authors: Petr Daniel Kovarik, Matt Kennedy, James Adams, Ajay Wilson, Andy Burns, Charles Kelly, Malcolm Jackson, Rahul Patil, Shahid Iqbal

Abstract:

Introduction: Osteoradionecrosis (ORN) is a recognised toxicity of radiotherapy (RT) for head and neck cancer (HNC). Existing literature lacks any generally accepted definition and staging system for this toxicity. Objective: The objective is to analyse the outcome of the surgical and nonsurgical treatments of ORN. Material and Method: Data on 2303 patients treated for HNC with radical or adjuvant RT or RT-chemotherapy from January 2010 - December 2021 were retrospectively analysed. Median follow-up for the whole group of patients was 37 months (range 0–148 months). Results: ORN developed in 185 patients (8.1%). The location of ORN was as follows: mandible=170, maxilla=10, and extra oral cavity=5. Multiple ORNs developed in 7 patients. The 5 patients with extra oral cavity ORN were excluded from treatment analysis, as the management is different. In the 180 patients with oral cavity ORN, median follow-up was 59 months (range 5–148 months). ORN healed in 106 patients; treatment failed in 74 patients (improving=10, stable=43, and deteriorating=21). Median healing time was 14 months (range 3-86 months). Notani staging is available for 158 patients with jaw ORN with no previous surgery to the mandible (Notani class I=56, Notani class II=27, and Notani class III=76). 28 ORNs (mandible=27, maxilla=1; Notani class I=23, Notani II=3, Notani III=1) healed spontaneously, with a median healing time of 7 months (range 3–46 months). In 20 patients, ORN developed after dental extraction, and in 1 patient in the neomandible after radical surgery as part of the primary treatment. In 7 patients, ORN developed and spontaneously healed in irradiated bone with no previous surgical/dental intervention. Radical resection of the ORN (segmentectomy, hemi-mandibulectomy with fibula flap) was performed in 43 patients (all mandible; Notani II=1, Notani III=39; Notani class was not established in 3 patients as ORN developed in the neomandible).
27 patients healed (63%); 15 patients failed (improving=2, stable=5, deteriorating=8). The median time from resection to healing was 6 months (range 2–30 months). 109 patients (mandible=100, maxilla=9; Notani I=3, Notani II=23, Notani III=35; Notani class was not established in 9 patients as ORN developed in the maxilla/neomandible) were treated conservatively using a combination of debridement, antibiotics, and Pentoclo. 50 patients healed (46%), with a median healing time of 14 months (range 3–70 months); 59 patients are recorded with persistent ORN (improving=8, stable=38, deteriorating=13). Of the 109 patients treated conservatively, 13 were treated with Pentoclo only (all mandible; Notani I=6, Notani II=3, Notani III=3, 1 patient with neomandible). In total, 8 patients healed (61.5%); treatment failed in 5 patients (stable=4, deteriorating=1). Median healing time was 14 months (range 4–24 months). Extra orally (n=5), 3 cases of ORN were in the auditory canal and 2 in the mastoid. ORN healed in one patient (auditory canal) after 32 months. Treatment failed in 4 patients (improving=3, stable=1). Conclusion: The outcome of the treatment of ORN remains, in general, poor. Every effort should therefore be made to minimise the risk of development of this devastating toxicity.

Keywords: head and neck cancer, radiotherapy, osteoradionecrosis, treatment outcome

Procedia PDF Downloads 91
1042 The Influence of Active Breaks on the Attention/Concentration Performance in Eighth-Graders

Authors: Christian Andrä, Luisa Zimmermann, Christina Müller

Abstract:

Introduction: The positive relation between physical activity and cognition is well known. Relevant studies show that, in everyday school life, active breaks can lead to improvement in certain abilities (e.g., attention and concentration). A beneficial effect is in particular attributed to moderate activity. It is still unclear whether active breaks are beneficial after relatively short phases of cognitive load and whether the postulated effects of activity really have an immediate impact. The objective of this study was to verify whether an active break after 18 minutes of cognitive load leads to enhanced attention/concentration performance, compared to inactive breaks with voluntary mobile phone activity. Methodology: For this quasi-experimental study, 36 students [age: 14.0 (mean value) ± 0.3 (standard deviation); male/female: 21/15] of a secondary school were tested. In week 1, every student’s maximum heart rate (Hfmax) was determined through maximum effort tests conducted during physical education classes. The task was to run 3 laps of 300 m with increasing subjective effort (lap 1: 60%, lap 2: 80%, lap 3: 100% of the maximum performance capacity). Furthermore, the first attention/concentration tests (D2-R) took place (pretest). The groups were matched on the basis of the pretest results. During weeks 2 and 3, crossover testing was conducted, comprising 18 minutes of cognitive preload (test for concentration performance, KLT-R), a break, and an attention/concentration test after a 2-minute transition. Different 10-minute breaks (active break: moderate physical activity at 65% Hfmax; inactive break: mobile phone activity) took place between preloading and transition. Major findings: In general, the different break interventions had no impact on the concentration test results (symbols processed after physical activity: 185.2 ± 31.3 / after inactive break: 184.4 ± 31.6; errors after physical activity: 5.7 ± 6.3 / after inactive break: 7.0 ± 7.2).
There was, however, a noticeable development of the values over the testing periods. Although no difference in the number of processed symbols was detected (active/inactive break: period 1: 49.3 ± 8.8/46.9 ± 9.0; period 2: 47.0 ± 7.7/47.3 ± 8.4; period 3: 45.1 ± 8.3/45.6 ± 8.0; period 4: 43.8 ± 7.8/44.6 ± 8.0), error rates decreased successively after physical activity and increased gradually after an inactive break (active/inactive break: period 1: 1.9 ± 2.4/1.2 ± 1.4; period 2: 1.7 ± 1.8/1.5 ± 2.0; period 3: 1.2 ± 1.6/1.8 ± 2.1; period 4: 0.9 ± 1.5/2.5 ± 2.6; p = .012). Conclusion: Considering only the study’s overall results, the hypothesis must be dismissed. However, a more differentiated evaluation shows that the error rates decreased after active breaks and increased after inactive breaks. Evidently, the effects of the active intervention occur with a delay. The 2-minute transition (regeneration time) used for this study seems to be insufficient due to the longer adaptation time of the cardiovascular system in untrained individuals, which might initially affect concentration capacity. To use the positive effects of physical activity for teaching and learning processes, physiological characteristics must also be considered. Only then can optimum performance be ensured.

Keywords: active breaks, attention/concentration test, cognitive performance capacity, heart rate, physical activity

Procedia PDF Downloads 314
1041 Reinforcing The Nagoya Protocol through a Coherent Global Intellectual Property Framework: Effective Protection for Traditional Knowledge Associated with Genetic Resources in Biodiverse African States

Authors: Oluwatobiloba Moody

Abstract:

On October 12, 2014, the Nagoya Protocol, negotiated by Parties to the Convention on Biological Diversity (CBD), entered into force. The Protocol was negotiated to implement the third objective of the CBD, which relates to the fair and equitable sharing of benefits arising from the utilization of genetic resources (GRs). The Protocol aims to ‘protect’ GRs and traditional knowledge (TK) associated with GRs from ‘biopiracy’, through the establishment of a binding international regime on access and benefit sharing (ABS). In reflecting on the question of ‘effectiveness’ in the Protocol’s implementation, this paper argues that the underlying problem of ‘biopiracy’, which the Protocol seeks to address, goes beyond the ABS regime. Rather, it thrives on factors emanating from the global intellectual property (IP) regime. It contends that biopiracy therefore constitutes an international problem of ‘borders’ as much as of ‘regimes’ and that, while the implementation of the Protocol may effectively address the ‘trans-border’ issues which have hitherto troubled African provider countries in establishing regulatory mechanisms, it remains unable to address the ‘trans-regime’ issues related to the eradication of biopiracy, especially those involving the IP regime. This is due to the glaring incoherence between the Nagoya Protocol’s implementation and the existing global IP system. In arriving at conclusions, the paper examines the ongoing related discussions within the IP regime, specifically those within the WIPO Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore (IGC) and the WTO TRIPS Council. It concludes that the Protocol’s effectiveness in protecting TK associated with GRs is conditional on the attainment of outcomes, within the ongoing negotiations of the IP regime, which can be implemented coherently with the Nagoya Protocol.
It proposes specific ways to achieve this coherence. Three main methodological steps were incorporated in the paper’s development. First, a review of data accumulated over a two-year period arising from the coordination of six important negotiating sessions of the WIPO Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore. In this respect, the research benefits from reflections on the political, institutional, and substantive nuances which have coloured the IP negotiations and which provide both the context and subtext to the emerging texts. Second, a desktop review of the history, nature, and significance of the Nagoya Protocol, using relevant primary and secondary literature from international and national sources. Third, a comparative analysis of selected biopiracy cases, undertaken to establish the inseparability of the IP regime and the ABS regime in conceptualizing and developing solutions to biopiracy. A comparative analysis of selected African regulatory mechanisms for the protection of TK (those of Kenya, South Africa, and Ethiopia, and the ARIPO Swakopmund Protocol) is also undertaken.

Keywords: biopiracy, intellectual property, Nagoya protocol, traditional knowledge

Procedia PDF Downloads 428
1040 Characterization of Anisotropic Deformation in Sandstones Using Micro-Computed Tomography Technique

Authors: Seyed Mehdi Seyed Alizadeh, Christoph Arns, Shane Latham

Abstract:

Geomechanical characterization of rocks in detail, and its possible implications for flow properties, is an important aspect of the reservoir characterization workflow. In order to gain more understanding of the microstructure evolution of reservoir rocks under stress, a series of axisymmetric triaxial tests was performed on two different analogue rock samples. In-situ compression tests were coupled with high-resolution micro-computed tomography to elucidate the changes in the pore/grain network of the rocks under pressurized conditions. Two outcrop sandstones were chosen for the current study, representing the differing cementation states of a well-consolidated and a weakly-consolidated granular system, respectively. High-resolution images were acquired while the rocks deformed in a purpose-built compression cell. A detailed analysis of the 3D images in each series of step-wise compression tests (up to the failure point) was conducted, including the registration of the deformed specimen images with the reference pristine dry rock image. Digital Image Correlation (DIC) based on the intensity of the registered 3D subsets, together with particle tracking, was utilized to map the displacement fields in each sample. The results suggest a complex architecture of the localized shear zone in the well-cemented Bentheimer sandstone, whereas for the weakly-consolidated Castlegate sandstone no discernible shear band could be observed even after macroscopic failure. Post-mortem imaging of a sister plug from the friable rock after continuous compression reveals signs of a shear-band pattern. This suggests that for friable sandstones at small scales, the loading mode may affect the pattern of deformation. Prior to mechanical failure, the continuum digital image correlation approach can reasonably capture the kinematics of deformation. As failure occurs, however, discrete image correlation (i.e.
particle tracking) proves superior both in tracking the grains and in quantifying their kinematics (in terms of translations/rotations) at any stage of compaction. An attempt was made to quantify the displacement field in compression using continuum Digital Image Correlation, which is based on correlating the intensities of the reference and secondary images. Such an approach has previously been applied only to unconsolidated granular systems under pressure. We apply this technique to sandstones with various degrees of consolidation. This element of novelty sets the results of this study apart from previous attempts to characterize the deformation pattern in consolidated sands.
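The continuum DIC idea described above, finding each subset's displacement by maximizing an intensity correlation between reference and deformed images, can be sketched in 2D. This is a hypothetical integer-shift illustration on synthetic data, not the authors' 3D implementation:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size image subsets."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def subset_displacement(ref, deformed, top, left, size, search=5):
    """Find the integer (dy, dx) shift of a reference subset in the
    deformed image by exhaustive ZNCC search within +/- search pixels."""
    template = ref[top:top + size, left:left + size]
    best, best_c = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > deformed.shape[0] \
                    or x + size > deformed.shape[1]:
                continue
            c = zncc(template, deformed[y:y + size, x:x + size])
            if c > best_c:
                best_c, best = c, (dy, dx)
    return best

# Synthetic check: shift a random image by (2, 3) pixels, then recover
# that displacement for one subset.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
deformed = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
print(subset_displacement(ref, deformed, top=20, left=20, size=16))  # (2, 3)
```

Repeating the search over a grid of subsets yields a displacement field; real DIC codes add sub-pixel interpolation and, in 3D, volumetric subsets.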

Keywords: deformation mechanism, displacement field, shear behavior, triaxial compression, X-ray micro-CT

Procedia PDF Downloads 188
1039 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, international recommendations strongly advise setting up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom, CIRS 062QA, and the QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy within +/- 5 HU of the value at commissioning; field uniformity within +/- 10 HU in selected ROIs; and the complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of no more than 5%.
Spatial and contrast resolution tests must comply with the results obtained at commissioning; otherwise, the machine requires service. The result of the image noise test must fall within 20% of the baseline value. Slice thickness must meet manufacturer specifications, and patient table stability under longitudinal transfer of the loaded table must not show a vertical deviation of more than 2 mm. Conclusion: The implemented QA tests gave an overall basic understanding of CT simulator functionality and its clinical effectiveness in radiation treatment planning. The legal requirement on the clinic is to set up its own QA programme with minimum testing, but it remains the user’s decision whether additional testing, as recommended by international organizations, will be implemented so as to improve the overall quality of the radiation treatment planning procedure, since the quality of the CT images used for radiation treatment planning influences the delineation of the tumor, the calculation accuracy of the treatment planning system, and, finally, the delivery of radiation treatment to the patient.
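The tolerances listed above can be collected into a simple pass/fail routine. This is a hypothetical sketch of such an institutional QA check; the field names and sample values are invented:

```python
def ct_qa_report(measured, baseline):
    """Check CT-simulator QA values against the tolerances stated above:
    CT number within +/- 5 HU of the commissioning value, uniformity ROIs
    within +/- 10 HU, CT-to-ED curve points within 5% of commissioning,
    and image noise within 20% of the baseline value."""
    report = {}
    report["ct_number"] = abs(measured["water_hu"] - baseline["water_hu"]) <= 5.0
    report["uniformity"] = all(abs(roi - measured["water_hu"]) <= 10.0
                               for roi in measured["uniformity_rois_hu"])
    report["ct_to_ed"] = all(abs(m - b) <= 0.05 * abs(b)
                             for m, b in zip(measured["ed_curve"],
                                             baseline["ed_curve"]))
    report["noise"] = (abs(measured["noise"] - baseline["noise"])
                       <= 0.20 * baseline["noise"])
    report["pass"] = all(report.values())
    return report

# Invented daily readings vs. commissioning baseline.
measured = {"water_hu": 2.1, "uniformity_rois_hu": [1.0, -3.5, 4.2, 0.8],
            "ed_curve": [0.00, 0.26, 0.52, 1.10], "noise": 5.4}
baseline = {"water_hu": 0.0,
            "ed_curve": [0.00, 0.25, 0.50, 1.08], "noise": 5.0}
print(ct_qa_report(measured, baseline)["pass"])  # True
```

Any single failed tolerance flips the overall result to a fail, which mirrors how a daily QA sheet is typically scored.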

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 529
1038 Relationship Demise After Having Children: An Analysis of Abandonment and Nuclear Family Structure vs. Supportive Community Cultures

Authors: John W. Travis

Abstract:

There is an epidemic of couples separating after a child is born into a family, generally with the father leaving emotionally or physically in the first few years after birth. This separation creates high levels of stress for both parents, especially the primary parent, leaving her (or him) less available to the infant for healthy attachment and nurturing. The deterioration of the couple’s bond leaves parents increasingly under-resourced, and the dependent child in a compromised environment, with an increased likelihood of developing an attachment disorder. Objectives: To understand the dynamics of a couple once the additional and extensive demands of a newborn are added to a nuclear family structure, and to identify effective ways to support all members of the family to thrive. Qualitative studies interviewed men, women, and couples after pregnancy and during the early years as a family, regarding key destructive factors as well as effective tools for the couple to retain a strong bond. In-depth analysis of a few cases, including the author’s own experience, reveals deeper insights about subtle factors, replicated in wider studies. In a self-assessment survey, many fathers report feeling abandoned due to the close bond of the mother-baby unit and, in turn, withdrawing themselves, leaving the mother without the support and closeness that would resource her for the baby. Fathers report various types of abandonment, from their partner to their mother, with whom they did not experience adequate connection as a child. The study identified a key destructive factor to be unrecognized wounding from childhood that was carried into the relationship. The study culminated in the naming of Male Postpartum Abandonment Syndrome (MPAS), describing the epidemic in industrialized cultures where the nuclear family is the primary configuration.
A growing family system often collapses without a minimum number of adult caregivers per infant, approximately four (3.87), which allows for proper healing and caretaking. In cases with no additional family or community beyond one or two parents, the layers of abandonment and trauma result in the deterioration of the couple’s relationship and ultimately of the family structure. The solution includes engaging the community in support of new families. The study identified (and recommends) specific resources to assist couples in recognizing and healing trauma and disconnection at multiple levels. Recommendations include wider awareness and availability of resources for healing childhood wounds, and greater community-building efforts to support couples so that the whole family can thrive.

Keywords: abandonment, attachment, community building, family and marital functioning, healing childhood wounds, infant wellness, intimacy, marital satisfaction, relationship quality, relationship satisfaction

Procedia PDF Downloads 225
1037 Assessing Sydney Tar Ponds Remediation and Natural Sediment Recovery in Nova Scotia, Canada

Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer

Abstract:

Sydney Harbour, Nova Scotia has long been subject to effluent and atmospheric inputs of metals, polycyclic aromatic hydrocarbons (PAHs), and polychlorinated biphenyls (PCBs) from a large coking operation and steel plant that operated in Sydney for nearly a century until its closure in 1988. Contaminated effluents from the industrial site resulted in the creation of the Sydney Tar Ponds, one of Canada’s largest contaminated sites. Since the closure, there have been several attempts to remediate this former industrial site, and finally, in 2004, the governments of Canada and Nova Scotia committed to remediating the site to reduce potential ecological and human health risks to the environment. The Sydney Tar Ponds and Coke Ovens cleanup project has become the most prominent remediation project in Canada today. As an integral part of remediation of the site (which consisted of solidification/stabilization and associated capping of the Tar Ponds), an extensive multiple-media environmental effects program was implemented to assess what effects remediation had on the surrounding environment and, in particular, harbour sediments. Additionally, longer-term natural sediment recovery rates of select contaminants predicted for the harbour sediments were compared to current conditions. During remediation, potential contributions to sediment quality in addition to remedial efforts were evaluated, including a significant harbour dredging project, propeller wash from harbour traffic, storm events, adjacent loading/unloading of coal, and municipal wastewater treatment discharges. Two sediment sampling methodologies, a sediment grab and a gravity corer, were also compared to evaluate the detection of subtle changes in sediment quality. Results indicated that the overall spatial distribution pattern of historical contaminants remained unchanged, although at much lower concentrations than previously reported, due to natural recovery.
Measurements of sediment indicator parameter concentrations confirmed that natural recovery rates of Sydney Harbour sediments were in broad agreement with predicted concentrations, in spite of ongoing remediation activities. Overall, most measured parameters in sediments showed little temporal variability during three years of remediation compared to baseline, even when using different sampling methodologies, except for significant increases in total PAH concentrations detected during one year of remediation monitoring. The data confirmed the effectiveness of the mitigation measures implemented during construction with respect to harbour sediment quality, despite other anthropogenic activities and the dynamic nature of the harbour.

Keywords: contaminated sediment, monitoring, recovery, remediation

Procedia PDF Downloads 235
1036 Prevalence of Antibiotic Resistant Enterococci in Treated Wastewater Effluent in Durban, South Africa and Characterization of Vancomycin and High-Level Gentamicin-Resistant Strains

Authors: S. H. Gasa, L. Singh, B. Pillay, A. O. Olaniran

Abstract:

Wastewater treatment plants (WWTPs) have been implicated as a leading reservoir for antibiotic resistant bacteria (ARB), including Enterococcus spp., and antibiotic resistance genes (ARGs) worldwide. Enterococci are a group of clinically significant bacteria that have gained much attention as a result of their antibiotic resistance. They play a significant role as a principal cause of nosocomial infections and in the dissemination of antimicrobial resistance genes in the environment. The main objective of this study was to ascertain the role of WWTPs in Durban, South Africa as potential reservoirs of antibiotic resistant Enterococci (ARE) and their related ARGs. Furthermore, the antibiogram and resistance gene profiles of Enterococci recovered from treated wastewater effluent and receiving surface water in Durban were also investigated. Using the membrane filtration technique, Enterococcus selective agar, and selected antibiotics, ARE were enumerated in samples (influent, activated sludge, before chlorination, and final effluent) collected from two WWTPs, as well as from upstream and downstream of the receiving surface water. Two hundred Enterococcus isolates recovered from the treated effluent and receiving surface water were identified by biochemical and PCR-based methods, and their antibiotic resistance profiles were determined by the Kirby-Bauer disc diffusion assay, while PCR-based assays were used to detect the presence of resistance and virulence genes. A high prevalence of ARE was obtained at both WWTPs, with values reaching a maximum of 40%. The influent and activated sludge samples contained the greatest prevalence of ARE, with lower values observed in the before- and after-chlorination samples. Of the 44 vancomycin- and high-level gentamicin-resistant isolates, 11 were identified as E. faecium, 18 as E. faecalis, and 4 as E. hirae, while 11 were classified as “other” Enterococcus species.
High-level resistance to gentamicin (39%) and to vancomycin (61%) was recorded among the species tested. The most commonly detected virulence gene was gelE (44%), followed by asa1 (40%), while cylA and esp were detected in only 2% of the isolates. The most prevalent aminoglycoside resistance genes were aac(6')-Ie-aph(2''), aph(3')-IIIa, and ant(6')-Ia, detected in 43%, 45%, and 41% of the isolates, respectively. A positive correlation was observed between phenotypic resistance to high levels of aminoglycosides and the presence of all aminoglycoside resistance genes. Glycopeptide resistance genes, vanB (37%) and vanC-1 (25%), and macrolide resistance genes, ermB (11%) and ermC (54%), were detected in the isolates. These results show the need for more efficient wastewater treatment and disposal in order to prevent the release of virulent and antibiotic resistant Enterococcus species and to safeguard public health.
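The kind of phenotype/genotype correlation reported above can be quantified with a phi coefficient on a 2x2 table. The counts below are invented for illustration and are not the study's raw data:

```python
import math

def phi_coefficient(table):
    """Phi coefficient for a 2x2 table [[a, b], [c, d]]:
    rows = phenotype (resistant / susceptible),
    cols = resistance gene detected (yes / no).
    Ranges from -1 to 1; values near 1 indicate strong agreement."""
    (a, b), (c, d) = table
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom else 0.0

# Hypothetical counts for 44 isolates: high-level gentamicin resistance
# versus detection of an aminoglycoside resistance gene.
table = [[16, 1],   # resistant:    gene present / gene absent
         [3, 24]]   # susceptible:  gene present / gene absent
print(round(phi_coefficient(table), 2))  # 0.82
```

A value this high would correspond to the positive phenotype/genotype correlation described in the abstract; a formal analysis would add a significance test (e.g., Fisher's exact test).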

Keywords: antibiogram, enterococci, gentamicin, vancomycin, virulence signatures

Procedia PDF Downloads 218
1035 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability

Authors: Chin-Chia Jane

Abstract:

In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time stays under the travel time limitation. This work is pioneering: whereas the existing literature evaluates travel time reliability via a single optimal path, the proposed QoS measure focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is assigned a travel time weight of 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v. The newly generated nodes u and v are perfect (failure-free) nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left.
The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments were conducted on a benchmark network with 11 nodes and 21 arcs. Five travel time limitations and five demand requirements were set to compute the QoS value. For comparison, we also tested the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In this work, a transportation network is analyzed by an extended flow network model in which each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
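The node-splitting transformation described above, which turns per-unit node transit times into arc costs so that a min-cost max-flow computation applies, can be sketched on a toy network. This is an illustration using successive shortest paths on invented data; it does not implement the paper's state-vector decomposition:

```python
def min_cost_max_flow(n, edges, s, t):
    """Successive-shortest-path min-cost max-flow.
    edges: list of (u, v, capacity, cost). Returns (max_flow, total_cost)."""
    graph = [[] for _ in range(n)]
    for u, v, cap, cost in edges:
        graph[u].append([v, cap, cost, len(graph[v])])      # forward edge
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])   # residual edge
    flow = total_cost = 0
    while True:
        # Bellman-Ford shortest path by cost on the residual network
        dist = [float("inf")] * n
        dist[s], prev = 0, [None] * n
        for _ in range(n - 1):
            for u in range(n):
                if dist[u] == float("inf"):
                    continue
                for i, (v, cap, cost, _) in enumerate(graph[u]):
                    if cap > 0 and dist[u] + cost < dist[v]:
                        dist[v], prev[v] = dist[u] + cost, (u, i)
        if dist[t] == float("inf"):
            return flow, total_cost
        # Bottleneck capacity along the augmenting path, then augment
        f, v = float("inf"), t
        while v != s:
            u, i = prev[v]
            f, v = min(f, graph[u][i][1]), u
        v = t
        while v != s:
            u, i = prev[v]
            graph[u][i][1] -= f
            graph[v][graph[u][i][3]][1] += f
            v = u
        flow += f
        total_cost += f * dist[t]

BIG = 10 ** 9
# Node splitting: intermediate nodes a, b with per-unit travel times 2 and 5
# become a_in->a_out and b_in->b_out arcs carrying the time as arc cost.
# Node ids: s=0, t=1, a_in=2, a_out=3, b_in=4, b_out=5.
edges = [
    (0, 2, 3, 0),    # s -> a_in, capacity 3, travel time 0
    (2, 3, BIG, 2),  # a_in -> a_out, cost = node travel time 2
    (3, 1, 3, 0),    # a_out -> t
    (0, 4, 4, 0),    # s -> b_in, capacity 4
    (4, 5, BIG, 5),  # b_in -> b_out, cost = node travel time 5
    (5, 1, 4, 0),    # b_out -> t
]
print(min_cost_max_flow(6, edges, 0, 1))  # (7, 26)
```

Here the max flow is 7 units and the total travel time is 3*2 + 4*5 = 26; comparing that total against the travel time limitation is the feasibility check at the heart of the QoS measure.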

Keywords: quality of service, reliability, transportation network, travel time

Procedia PDF Downloads 220
1034 Perception of Tactile Stimuli in Children with Autism Spectrum Disorder

Authors: Kseniya Gladun

Abstract:

Tactile stimulation of the dorsal side of the wrist can have a strong impact on our attitude toward physical objects, producing pleasant or unpleasant impressions. This study explored different aspects of tactile perception to investigate atypical touch sensitivity in children with autism spectrum disorder (ASD). The study included 40 children with ASD and 40 healthy children aged 5 to 9 years. We recorded rsEEG (sampling rate of 250 Hz) during 20 min using the EEG amplifier “Encephalan” (Medicom MTD, Taganrog, Russian Federation) with 19 AgCl electrodes placed according to the International 10–20 System. The electrodes placed on the left and right mastoids served as joint references under a unipolar montage. EEG was recorded from the following sites: frontal (Fp1-Fp2; F3-F4), temporal anterior (T3-T4), temporal posterior (T5-T6), parietal (P3-P4), and occipital (O1-O2). Subjects were passively touched with 4 types of tactile stimuli on the left wrist. Stimuli were presented with a velocity of about 3–5 cm per second. The stimulus materials and procedure were chosen as the most "pleasant," "rough," "prickly", and "recognizable": a soft cosmetic brush ("pleasant"), a rough shoe brush ("rough"), a Wartenberg pinwheel roller ("prickly"), and, for the cognitive tactile stimulation, letters traced by finger (mostly the patient’s name) ("recognizable"). To designate stimulus onset and offset, we marked the moments when the touch began and ended; the stimulation was manual, so synchronization was not precise enough for event-related measures. EEG epochs were cleaned of eye movements by an ICA-based algorithm in the EEGLAB plugin for MatLab 7.11.0 (MathWorks Inc.). Muscle artifacts were removed by manual data inspection. The response to tactile stimuli differed significantly between children with ASD and healthy children, and it also depended on the type of tactile stimulus and the severity of ASD.
The amplitude of the alpha rhythm increased in the parietal region in response to the pleasant stimulus only; for the other stimulus types ("rough," "prickly", "recognizable"), no amplitude difference was observed. Correlation dimension D2 was higher in healthy children compared to children with ASD (main effect, ANOVA). In the ASD group, D2 was lower for pleasant and unpleasant stimuli compared to the background in the right parietal area. Hilbert-transform changes in the frequency of the theta rhythm were found only for rough tactile stimulation, compared with healthy participants, and only in the right parietal area. Children with autism spectrum disorder and healthy children thus responded to tactile stimulation differently, with specific frequency distributions of the alpha and theta bands in the right parietal area. Our data therefore support the hypothesis that rsEEG may serve as a sensitive index of altered neural activity caused by ASD. Children with autism have difficulty distinguishing the emotional stimuli ("pleasant," "rough," "prickly", and "recognizable").
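The Hilbert-transform frequency analysis mentioned above can be sketched as follows. This is a minimal illustration of instantaneous frequency estimation on a synthetic signal, assuming the input is already band-pass filtered to the band of interest; it is not the authors' full pipeline:

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(signal, fs):
    """Instantaneous frequency (Hz) from the Hilbert analytic signal,
    the quantity used to track theta-band frequency changes."""
    analytic = hilbert(signal)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)

# Synthetic check: a pure 6 Hz (theta-band) sine sampled at 250 Hz,
# the study's sampling rate, should yield ~6 Hz instantaneous frequency.
fs = 250
t = np.arange(0, 4, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
inst_f = instantaneous_frequency(theta, fs)
print(round(float(np.median(inst_f)), 1))  # ~6.0
```

On real EEG, the same computation applied to a theta-filtered channel gives a time course of frequency whose condition-wise changes can then be compared between groups.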

Keywords: autism, tactile stimulation, Hilbert transform, pediatric electroencephalography

Procedia PDF Downloads 250
1033 A Clinical Cutoff to Identify Metabolically Unhealthy Obese and Normal-Weight Phenotype in Young Adults

Authors: Lívia Pinheiro Carvalho, Luciana Di Thommazo-Luporini, Rafael Luís Luporini, José Carlos Bonjorno Junior, Renata Pedrolongo Basso Vanelli, Manoel Carneiro de Oliveira Junior, Rodolfo de Paula Vieira, Renata Trimer, Renata G. Mendes, Mylène Aubertin-Leheudre, Audrey Borghi-Silva

Abstract:

Rationale: Cardiorespiratory fitness (CRF) and functional capacity in young obese and normal-weight people are associated with metabolic and cardiovascular diseases and mortality. However, it remains unclear whether a metabolically healthy (MH) or at-risk (AR) phenotype influences cardiorespiratory fitness in this vulnerable population, not only in obese adults but also in normal-weight people. The HOMA insulin resistance index (HI) and the leptin-adiponectin ratio (LA) are strong markers for characterizing these phenotypes, which we hypothesized to be associated with physical fitness. We also hypothesized that an easy and feasible exercise test could identify a subpopulation at risk of developing metabolic and related disorders. Methods: Thirty-nine sedentary men and women (20–45 y; BMI 18.5–30 kg·m⁻²) underwent a clinical evaluation, including the six-minute step test (ST), a well-validated and reliable test for young people. Body composition was assessed by tetrapolar bioimpedance in a fasting state and, for women, in the follicular phase. A maximal cardiopulmonary exercise test, as well as the ST, evaluated the oxygen uptake at the peak of the test (VO2peak) using an Oxycon Mobile ergospirometer. Lipids, glucose, and insulin were analysed, and serum leptin and adiponectin were quantified from blood samples by ELISA. Volunteers were divided into two groups, AR or MH, according to an HI cutoff of 1.95 previously determined in the literature. The t-test was used for comparisons between groups, Pearson's test to correlate the main variables, and ROC analysis to find the number of up-and-down cycles in the ST (SC) that discriminates AR (p<0.05). Results: Higher LA and fat mass (FM) and lower HDL, SC, leg lean mass (LM) and VO2peak were found in AR than in MH. Significant correlations were found between VO2peak and SC (r=0.80) as well as between LA and FM (r=0.87), VO2peak (r=-0.73), and SC (r=-0.65). The area under the curve showed moderate accuracy (0.75) of SC<173 to discriminate the AR phenotype. 
Conclusion: Our study found that at-risk obese and normal-weight subjects showed an unhealthy metabolism as well as poor CRF and functional daily activity capacity. Additionally, a simple, low-cost functional test associated with the above-mentioned aspects is able to identify 'at risk' subjects for primary intervention, with important clinical and health implications.
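The ROC step in the Methods (deriving a step-cycle cutoff that discriminates the AR phenotype) can be sketched as below, assuming scikit-learn is available; the step-cycle counts are synthetic values for illustration only, not the study's data. Because lower SC indicates risk, the negated cycle count is used as the score for the positive (AR) class.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical step-cycle counts (SC): lower values in the at-risk (AR=1) group.
sc_healthy = rng.normal(185, 12, 20)   # MH group
sc_at_risk = rng.normal(165, 12, 20)   # AR group
sc = np.concatenate([sc_healthy, sc_at_risk])
ar = np.concatenate([np.zeros(20), np.ones(20)])

# Lower SC indicates risk, so score the AR class with -SC.
fpr, tpr, thresholds = roc_curve(ar, -sc)
auc = roc_auc_score(ar, -sc)
youden = tpr - fpr                       # Youden's J picks the optimal operating point
cutoff = -thresholds[np.argmax(youden)]  # back on the original SC scale
```

On the study's data this procedure would yield the reported AUC of 0.75 and the SC<173 cutoff.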

Keywords: aerobic capacity, exercise, fitness, metabolism, obesity, 6MST

Procedia PDF Downloads 353
1032 Active Vibration Reduction for a Flexible Structure Bonded with Sensor/Actuator Pairs on Efficient Locations Using a Developed Methodology

Authors: Ali H. Daraji, Jack M. Hale, Ye Jianqiao

Abstract:

With the extensive use of high-specific-strength structures to optimise loading capacity and material cost in aerospace and most engineering applications, much effort has been expended to develop intelligent structures for active vibration reduction and structural health monitoring. These structures are highly flexible, have inherently low internal damping, and are associated with large vibrations and long decay times. Modifying such structures by adding lightweight piezoelectric sensors and actuators at efficient locations, integrated with an optimal control scheme, is considered an effective solution for structural vibration monitoring and control. The size and location of the sensors and actuators are important research topics, in terms of their effects on the level of vibration detection and reduction and the amount of energy required by a controller. Several methodologies have been presented to determine the optimal location of a limited number of sensors and actuators for small-scale structures. However, these studies have tackled the problem directly, measuring a fitness function based on eigenvalues and eigenvectors for numerous combinations of sensor/actuator pair locations and converging on an optimal set using heuristic optimisation techniques such as genetic algorithms. This is computationally expensive for both small- and large-scale structures when a number of sensor/actuator (s/a) pairs must be optimised to suppress multiple vibration modes. This paper proposes an efficient method to determine optimal locations for a limited number of sensor/actuator pairs for active vibration reduction of a flexible structure, based on the finite element method and Hamilton's principle. 
The current work takes the simplified approach of modelling a structure with sensors at all candidate locations, subjecting it to an external force to excite the various modes of interest, and noting the locations of the sensors giving the largest average percentage sensor effectiveness, measured by dividing each sensor's output voltage by the maximum for each mode. The methodology was implemented for a cantilever plate under external force excitation to find the optimal distribution of six sensor/actuator pairs to suppress the first six modes of vibration. The resulting optimal sensor locations agree well with published optimal locations, but are obtained with much less computational effort and higher effectiveness. Furthermore, it is shown that collocated sensor/actuator pairs placed at these locations give very effective active vibration reduction under an optimal linear quadratic control scheme.
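The effectiveness measure described above (each sensor's output voltage divided by the per-mode maximum, averaged over modes) reduces to a small array computation. The sketch below illustrates it with a hypothetical voltage matrix standing in for finite-element results; the matrix values and the number of modes/locations are assumptions for demonstration.

```python
import numpy as np

# Hypothetical sensor output voltages: rows = vibration modes, columns = candidate locations.
V = np.array([
    [0.2, 0.9, 0.5, 0.1],
    [0.8, 0.3, 0.7, 0.2],
    [0.4, 0.6, 1.0, 0.3],
])

# Percentage effectiveness per mode: each sensor's voltage over that mode's maximum.
eff = 100.0 * np.abs(V) / np.abs(V).max(axis=1, keepdims=True)
avg_eff = eff.mean(axis=0)              # average over modes, per candidate location
ranking = np.argsort(avg_eff)[::-1]     # best candidate locations first
```

The top entries of `ranking` would then be chosen as the s/a pair locations, avoiding the combinatorial search that heuristic methods require.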

Keywords: optimisation, plate, sensor effectiveness, vibration control

Procedia PDF Downloads 230
1031 Demographic Assessment and Evaluation of Degree of Lipid Control in High Risk Indian Dyslipidemia Patients

Authors: Abhijit Trailokya

Abstract:

Background: Cardiovascular diseases (CVDs) are the major cause of morbidity and mortality in both developed and developing countries. Many clinical trials have demonstrated that lowering low-density lipoprotein cholesterol (LDL-C) reduces the incidence of coronary and cerebrovascular events across a broad spectrum of patients at risk. Guidelines for the management of patients at risk have been established in Europe and North America; they advocate progressively lower LDL-C targets and more aggressive use of statin therapy. In Indian patients, comprehensive data on dyslipidemia management and its treatment outcomes are inadequate. There is a lack of information on existing treatment patterns, the profile of the patients being treated, and the factors that determine treatment success or failure in achieving desired goals. Purpose: The present study was planned to determine lipid control status in high-risk dyslipidemic patients treated with lipid-lowering therapy in India. Methods: This cross-sectional, non-interventional, single-visit program was conducted across 483 sites in India and enrolled male and female patients with high-risk dyslipidemia, aged 18 to 65 years, who had visited their physician at a hospital or healthcare centre for a routine health check-up. The percentage of high-risk dyslipidemic patients achieving an adequate LDL-C level (<70 mg/dL) on lipid-lowering therapy, and the association of lipid parameters with patient characteristics, comorbid conditions, and lipid-lowering drugs, were analysed. Results: 3089 patients were enrolled in the study, of whom 64% were males. LDL-C data were available for 95.2% of the patients; only 7.7% of these patients achieved LDL-C levels <70 mg/dL on lipid-lowering therapy, which may be due to inability to follow therapeutic plans, poor compliance, or inadequate counselling by the physician. 
A physician's lack of awareness of recent treatment guidelines may also contribute to patients' poor adherence, through not adequately explaining the benefits and risks of a medication and not giving consideration to the patient's lifestyle and the cost of the medication. Statins were the most commonly used anti-dyslipidemic drugs across the population. A higher proportion of patients had the comorbid conditions of CVD and diabetes mellitus across all dyslipidemic patients. Conclusion: As per the European Society of Cardiology guidelines, the ideal LDL-C level in high-risk dyslipidemic patients should be less than 70 mg/dL. In the present study, only 7.7% of the patients achieved LDL-C levels <70 mg/dL on lipid-lowering therapy, which is very low. Most high-risk dyslipidemic patients in India are on a suboptimal dosage of statin, so more aggressive, higher-dosage statin therapy may be required to achieve target LDL-C levels in high-risk Indian dyslipidemic patients.

Keywords: cardiovascular disease, diabetes mellitus, dyslipidemia, LDL-C, lipid lowering drug, statins

Procedia PDF Downloads 200
1030 Conditional Relation between Migration, Demographic Shift and Human Development in India

Authors: Rakesh Mishra, Rajni Singh, Mukunda Upadhyay

Abstract:

Over the last few decades, the focus of development in India has shifted toward the working population. There has been a paradigm shift in the development approach with the realization that the present demographic dividend has to be harnessed for sustainable development. Rapid urbanization and improved socioeconomic conditions have catalyzed various forms of migration, resulting in a massive transfer of workforce between states. The workforce of any country plays a crucial role in the development both of the places migrants have left and of the places where they currently reside. In India, people are found to migrate from relatively less developed states to well-urbanized, developed states to satisfy their needs. Linking migration to HDI at the place of destination, the regression coefficient (β̂) shows a positive association between them: the higher the HDI of a place, the higher the chance of earning there and hence the more likely migrants are to choose that place as a destination, and vice versa. The push factor is, however, offset by the cost of rearing a family, discouraging in-migrants from the metro cities or megacities of these states while increasing their mobility toward suburban areas. The main objective of the study is to examine the role of migration in determining the dividend at the place of destination, as well as for people at their place of usual residence, with special focus on highly urbanized states in India. The observed patterns of Indian migrants point to some new theories in the making. On analyzing the demographic dividend of these places, we find that Uttar Pradesh provides the maximum dividend to Maharashtra, West Bengal and Delhi, and the demographic dividend of migrants is quite comparable to the natives' share of the demographic dividend in these places. 
On analyzing data from the National Sample Survey 64th round and the Census of India 2001, we observed that for males in rural areas the share of unemployed persons declined by 9 percentage points (from 45% before migration to 36% after migration), and for females in rural areas the decline was nearly 12 percentage points (from 79% before migration to 67% after migration). The share of unemployed males in both rural and urban areas, which was substantial before migration, was reduced after migration, while the share of unemployed females in both rural and urban areas remained almost negligible both before and after migration. The increase in the number of employed after migration thus indicates changes in associated cofactors, such as health and education, at the place of destination and, arithmetically, at the place from which they migrated out. This paper presents evidence on the patterns of prevailing migration dynamics and the corresponding demographic benefits in India and its states, examines trends and effects, and discusses plausible explanations.

Keywords: migration, demographic shift, human development index, multilevel analysis

Procedia PDF Downloads 386
1029 Strategies of Translation: Unlocking the Secret of 'Locksley Hall'

Authors: Raja Lahiani

Abstract:

'Locksley Hall' is a poem that Lord Alfred Tennyson (1809-1892) published in 1842. It is believed to be his first attempt to face as a poet some of the most painful of his experiences, as it is a study of his rising out of sickness into health, conquering his selfish sorrow by faith and hope. So far, in Victorian scholarship as in modern criticism, 'Locksley Hall' has been studied and approached as a canonical Victorian English poem. The aim of this project is to prove that some strategies of translation were used in this poem in such a way as to guarantee its assimilation into the English canon and hence efface to a large extent its Arabic roots. In its relationship with its source text, 'Locksley Hall' is at the same time mimetic and imitative. As part of the terminology used in translation studies, ‘imitation’ means almost the exact opposite of what it means in ordinary English. By adopting an imitative procedure, a translator would do something totally different from the original author, wandering far and freely from the words and sense of the original text. An imitation is thus aimed at an audience which wants the work of the particular translator rather than the work of the original poet. Hallam Tennyson, the poet’s biographer, asserts that 'Locksley Hall' is a simple invention of place, incidents, and people, though he notes that he remembers the poet claiming that Sir William Jones’ prose translation of the Mu‘allaqat (pre-Islamic poems) gave him the idea of the poem. A comparative work would prove that 'Locksley Hall' mirrors a great deal of Tennyson’s biography and hence is not a simple invention of details as asserted by his biographer. It would be challenging to prove that 'Locksley Hall' shares so many details with the Mu‘allaqat, as declared by Tennyson himself, that it needs to be studied as an imitation of the Mu‘allaqat of Imru’ al-Qays and ‘Antara in addition to its being a poem in its own right. 
Thus, the main aim of this work is to unveil the imitative and mimetic strategies Tennyson used in composing 'Locksley Hall.' It is equally important that this project investigates the acculturating, assimilative tools the poet used to root his poem in its Victorian English literary, cultural and spatiotemporal settings. This work adopts a comparative methodology, with comparison conducted at several levels. The poem will be contextualized in its Victorian English literary framework, and alien details related to structure, socio-spatial setting, imagery and sound effects will be compared to Arabic poems from the Mu‘allaqat collection. This will determine whether the poem is a translation, an adaptation, an imitation or a genuine work. The ultimate objective of the project is to unveil in this canonical poem a new dimension that has long been either marginalized or ignored. By proving that 'Locksley Hall' is an imitation of classical Arabic poetry, the project aspires to consolidate its literary value and open up new gates for accessing it.

Keywords: comparative literature, imitation, Locksley Hall, Lord Alfred Tennyson, translation, Victorian poetry

Procedia PDF Downloads 199
1028 Hybrid versus Cemented Fixation in Total Knee Arthroplasty: Mid-Term Follow-Up

Authors: Pedro Gomes, Luís Sá Castelo, António Lopes, Marta Maio, Pedro Mota, Adélia Avelar, António Marques Dias

Abstract:

Introduction: Total Knee Arthroplasty (TKA) has contributed to improving patients' quality of life, although it has been associated with complications including component loosening and polyethylene wear. To prevent these complications, various fixation techniques have been employed. Hybrid TKA, with a cemented tibial and a cementless femoral component, has shown favourable outcomes, although consensus in the literature is still lacking. Objectives: To evaluate the clinical and radiographic results of hybrid versus cemented TKA at an average 5-year follow-up and to analyse the survival rates. Methods: A retrospective study of 125 TKAs performed in 92 patients at our institution between 2006 and 2008, with a minimum follow-up of 2 years. The same prosthesis was used in all knees. Hybrid TKA fixation was performed in 96 knees, with a mean follow-up of 4.8±1.7 years (range, 2–8.3 years), and 29 TKAs received fully cemented fixation, with a mean follow-up of 4.9±1.9 years (range, 2–8.3 years). Selection for hybrid fixation was nonrandomized and based on femoral component fit. The Oxford Knee Score (OKS, 0–48) was used for clinical assessment and the Knee Society Roentgenographic Evaluation Scoring System for the radiographic outcome. The survival rate was calculated using the Kaplan-Meier method, with failure defined as revision of either the tibial or the femoral component, for aseptic causes and for all causes (aseptic and infection). Analysis of survivorship data was performed using the log-rank test. SPSS (v22) was used for statistical analysis. Results: The hybrid group consisted of 72 females (75%) and 24 males (25%), with a mean age of 64±7 years (range, 50–78 years). The preoperative diagnosis was osteoarthritis (OA) in 94 knees (98%), rheumatoid arthritis (RA) in 1 knee (1%) and posttraumatic arthritis (PTA) in 1 knee (1%). The fully cemented group consisted of 23 females (79%) and 6 males (21%), with a mean age of 65±7 years (range, 47–78 years). 
The preoperative diagnosis was OA in 27 knees (93%) and PTA in 2 knees (7%). The Oxford Knee Scores were similar between the 2 groups (hybrid 40.3±2.8 versus cemented 40.2±3). The percentage of radiolucencies seen on the femoral side was slightly higher in the cemented group (20.7%) than in the hybrid group (11.5%, p=0.223). In the cemented group there were significantly more Zone 4 radiolucencies than in the hybrid group (13.8% versus 2.1%, p=0.026). Revisions for all causes were performed in 4 of the 96 hybrid TKAs (4.2%) and 1 of the 29 cemented TKAs (3.5%). The reason for revision was aseptic loosening in 3 hybrid TKAs and 1 cemented TKA; revision was performed for infection in 1 hybrid TKA. The hybrid group demonstrated a 7-year survival rate of 93% for all-cause failure and 94% for aseptic loosening. No significant difference in survivorship was seen between the groups for all-cause or aseptic failures. Conclusions: Hybrid TKA yields intermediate-term results and survival rates similar to those of fully cemented total knee arthroplasty and remains a viable option in knee joint replacement surgery.
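The Kaplan-Meier product-limit estimate used for the survival rates above can be sketched in a few lines of NumPy. This is a minimal illustration with hypothetical follow-up data (the study itself used SPSS); an event of 1 marks a revision, 0 marks a knee censored at last follow-up.

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate; events: 1 = revision (failure), 0 = censored.
    Returns (time, survival) pairs at each distinct failure time."""
    order = np.argsort(times)
    times, events = np.asarray(times, float)[order], np.asarray(events)[order]
    surv, s = [], 1.0
    for t in np.unique(times[events == 1]):
        n_at_risk = np.sum(times >= t)                  # knees still under observation
        d = np.sum((times == t) & (events == 1))        # failures at this time
        s *= 1.0 - d / n_at_risk
        surv.append((float(t), s))
    return surv

# Hypothetical follow-up (years) for eight knees, two of which were revised.
times  = [2.0, 3.1, 4.8, 5.5, 6.0, 7.0, 7.2, 8.3]
events = [0,   1,   0,   1,   0,   0,   0,   0]
km = kaplan_meier(times, events)
```

A log-rank test between the hybrid and cemented groups would then compare observed versus expected failures at each of these failure times.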

Keywords: hybrid, survival rate, total knee arthroplasty, orthopaedic surgery

Procedia PDF Downloads 592
1027 The Comparison of Physical Fitness across Age and Gender in the Lithuanian Primary School Students: Population-Based Cross-Sectional Study

Authors: Arunas Emeljanovas, Brigita Mieziene, Vida Cesnaitiene, Ingunn Fjortoft, Lise Kjonniksen

Abstract:

Background: Gender differences in physical fitness have been tracked in many studies, with smaller effects in preschool children and increasing differences between genders across age. In Lithuania, at the population level, secular trends in physical fitness have been observed every ten years over the last two decades for 11–18-year-old students. However, there is an apparent lack of such epidemiological studies among primary school students. Assessing and monitoring physical fitness from an early age is particularly important for developing and strengthening the physical abilities of youths for future health benefits. The goal of the current study was to identify age and gender differences in anthropometric measures and in musculoskeletal, motor and cardiorespiratory fitness in Lithuanian primary school children. Methods: The study included 3456 1st-4th grade students aged 6 to 10 years. The data reliably represent the population of primary school children in Lithuania. Among them, 1721 (49.8 percent) were boys. Physical fitness was measured with the 9-item test battery developed by Fjørtoft and colleagues (2011). Height and weight were measured and body mass index (BMI) was calculated. Student's t-test evaluated differences in physical fitness between boys and girls, and ANOVA was performed to identify differences across ages. Results: All anthropometric and fitness means identified as significantly different were better in boys than in girls, and in older than in younger students (p < .05). Among anthropometric measures, height was greater in boys aged 7 through 9 years. Weight and BMI differed between boys and girls only at 8 years of age. Means of height and weight increased significantly across all ages. Among musculoskeletal fitness tests, means of the standing broad jump, throwing a tennis ball and pushing a medicine ball differed between genders within each age group and across all ages. 
Differences between genders were less likely in motor fitness than in musculoskeletal or cardiorespiratory fitness. Differences in mean 10 x 5 m shuttle run times between genders occurred at ages 6, 9 and 10 years; in the 20 m run at ages 6 and 9 years; and in climbing wall bars at ages 9 and 10. Means of the Reduced Cooper test, representing cardiorespiratory fitness, differed between genders within each age group but did not differ between ages 6 and 8 or 7 and 8 years in boys, or between ages 7 and 8 years in girls. Conclusion: In general, the current study confirms the gender differences in musculoskeletal, motor and cardiorespiratory fitness found in other studies across the world in primary school and older children. The observed gender differences might be explained by higher physical activity in boys than in girls. As explained in the previous literature, older boys and girls performed better than younger ones because the components of fitness change as a function of growth, maturation, and development, and of the interactions among these three processes.
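The between-gender comparison described in the Methods (Student's t-test per age group and measure) can be sketched as follows, assuming SciPy is available; the standing-broad-jump distances below are synthetic values for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
# Hypothetical standing-broad-jump distances (cm) for one age group of boys and girls.
boys  = rng.normal(125, 15, 60)
girls = rng.normal(112, 15, 60)

# Independent two-sample t-test, as used per age group in the study.
t_stat, p_value = ttest_ind(boys, girls)
```

In the study this test would be repeated for each of the 9 fitness items within each age group, with ANOVA then comparing means across the five ages.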

Keywords: primary school children, motor fitness, musculoskeletal fitness, cardiovascular fitness

Procedia PDF Downloads 207
1026 A New Index for the Differential Diagnosis of Morbid Obese Children with and without Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Metabolic syndrome (MetS) is a severe health problem that is common among obese individuals. The components of MetS are rather stable in adults compared with those discussed for children. Due to this ambiguity, how to diagnose MetS in morbid obese (MO) children is still a matter of discussion. For this purpose, a formula that facilitates the diagnosis of MetS in MO children was investigated. The aim of this study was to develop a formula capable of discriminating MO children with and without MetS findings. The study population comprised MO children whose age- and sex-dependent body mass index (BMI) percentiles were above 99. Metabolic syndrome components were also determined. Elevated systolic and diastolic blood pressures (SBP and DBP), elevated fasting blood glucose (FBG), elevated triglycerides (TRG), and/or depressed high density lipoprotein cholesterol (HDL-C), in addition to central obesity, were listed as MetS components for each child. The presence of at least two of these components confirmed a case as MetS. Two groups were constituted: the first comprised forty-two MO children without MetS components; the second comprised forty-four MO children with at least two MetS components. Anthropometric measurements, including weight, height, and waist and hip circumferences, were performed following physical examination. Body mass index and homeostatic model assessment of insulin resistance values were calculated. Informed consent forms were obtained from the parents of the children, and the Institutional Non-Interventional Ethics Committee approved the study design. Blood pressure values were recorded. Routine biochemical analyses, including FBG, insulin (INS), TRG, and HDL-C, were performed. The performance and clinical utility of the Diagnostic Obesity Notation Model Assessment Metabolic Syndrome Index (DONMA MetS index) [(INS/FBG)/(HDL-C/TRG)*100] were tested. 
Appropriate statistical tests were applied to the study data; a p value smaller than 0.05 was defined as significant. Metabolic syndrome index values were 41.6±5.1 in the MO group and 104.4±12.8 in the MetS group. Corresponding HDL-C values were 54.5±13.2 mg/dl and 44.2±11.5 mg/dl. There were statistically significant differences between the groups (p<0.001). Upon evaluation of the correlations between the MetS index and HDL-C values, a much stronger negative correlation was found in the MetS group (r=-0.515; p=0.001) than in the MO group (r=-0.371; p=0.016). From these findings, it was concluded that the statistical significance of the difference between the MO and MetS groups was highly acceptable for this recently introduced MetS index, as expected, owing to the involvement of all of the biochemically defined MetS components in the index. This is particularly important because each of the four parameters used in the formula is a cardiac risk factor. Aside from discriminating MO children with and without MetS findings, the MetS index introduced in this study is important from the cardiovascular-risk point of view in the MetS group of children.
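The index formula above is explicit enough to compute directly. The sketch below implements it exactly as stated in the abstract; the input values are hypothetical, chosen only to illustrate the calculation (typical units would be µU/mL for insulin and mg/dL for the other three), and are not data from the study.

```python
def mets_index(ins, fbg, hdl_c, trg):
    """DONMA MetS index as defined in the abstract: (INS/FBG)/(HDL-C/TRG) * 100.
    Rises with insulin resistance (INS/FBG) and with a worsening lipid
    profile (lower HDL-C, higher TRG)."""
    return (ins / fbg) / (hdl_c / trg) * 100.0

# Hypothetical illustrative values, not study data.
example = mets_index(ins=15.0, fbg=100.0, hdl_c=45.0, trg=150.0)  # -> 50.0
```

Note how all four biochemically defined MetS components enter the formula, which is what drives the separation between the reported group means (41.6 in MO versus 104.4 in MetS).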

Keywords: children, fasting blood glucose, high density lipoprotein cholesterol, index, insulin, metabolic syndrome, morbid obesity, triglycerides

Procedia PDF Downloads 90
1025 Ozonation as an Effective Method to Remove Pharmaceuticals from Biologically Treated Wastewater of Different Origin

Authors: Agne Jucyte Cicine, Vytautas Abromaitis, Zita Rasuole Gasiunaite, I. Vybernaite-Lubiene, D. Overlinge, K. Vilke

Abstract:

Pharmaceutical pollution of aquatic environments has become a growing concern. Various active pharmaceutical ingredient (API) residues, hormones, antibiotics, and/or psychiatric drugs have already been discovered in different environmental compartments. Because existing wastewater treatment technologies remove APIs ineffectively, an underestimated amount can enter the ecosystem via discharged treated wastewater. In particular, psychiatric compounds such as carbamazepine (CBZ) and venlafaxine (VNX) persist in the effluent even post-treatment. These pharmaceuticals therefore often exceed safe environmental levels and pose risks to the aquatic environment, particularly to sensitive ecosystems such as the Baltic Sea. CBZ, known for its chemical stability and long biodegradation time, accumulates in the environment, threatening aquatic life and human health through the food chain. As the use of medication rises, there is an urgent need for advanced wastewater treatment to reduce pharmaceutical contamination and meet future regulatory requirements. In this study, we tested advanced oxidation with ozone to remove two commonly used psychiatric drugs (carbamazepine and venlafaxine) from biologically treated wastewater effluent. Additionally, general water quality parameters (suspended matter (SPM), dissolved organic carbon (DOC), chemical oxygen demand (COD)) and bacterial presence were analyzed. Three wastewater treatment plants (WWTPs) representing different anthropogenic pressures were selected: 1) resort, 2) resort and residential, and 3) residential, industrial, and resort. Wastewater samples for the experiment were collected during the summer season after mechanical and biological treatment and were ozonated for 5, 10, and 15 minutes. The initial dissolved ozone concentration of 7.3±0.7 mg/L was held constant during all the experiments. 
Pharmaceutical levels in this study exceeded the predicted no-effect concentrations (PNEC) of 500 and 90 ng L⁻¹ for CBZ and VNX, respectively, in all WWTPs, except for CBZ in WWTP 1. Initial CBZ contamination was lower in WWTP 1 (427.4 ng L⁻¹) than in WWTP 2 (1266.5 ng L⁻¹) and WWTP 3 (119.2 ng L⁻¹). VNX followed a similar trend, with concentrations of 341.2, 361.4, and 390.0 ng L⁻¹ for WWTPs 1, 2, and 3, respectively. CBZ was no longer detected in the effluent after 5 minutes of ozonation in any of the WWTPs. In contrast, VNX was still detected after 5, 10, and 15 minutes of ozone treatment, although below the limit of quantification (LOQ, <5 ng L⁻¹). Additionally, general pollution by SPM, DOC, and COD, as well as bacterial contamination, was notably reduced after 5 minutes of ozone treatment, and no bacterial growth was observed. Although initial pharmaceutical levels exceeded PNECs, indicating ongoing environmental risks, ozonation demonstrated high efficiency in reducing pharmaceutical and general contamination in wastewaters with different pollution matrices.
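Using the initial CBZ concentrations above and the <5 ng L⁻¹ quantification limit as a conservative residual, a lower bound on the apparent removal efficiency follows from simple arithmetic. This is an illustrative calculation, not one reported in the abstract.

```python
# Initial CBZ concentrations (ng/L) from the abstract, and the quantification
# limit used as a conservative post-ozonation residual.
initial = {"WWTP1": 427.4, "WWTP2": 1266.5}
loq = 5.0

# Lower bound on removal efficiency (%): actual removal is at least this high,
# since the true residual is below the quantification limit.
removal = {site: 100.0 * (1.0 - loq / c0) for site, c0 in initial.items()}
```

Even for the least contaminated plant the bound exceeds 98%, consistent with the abstract's conclusion that 5 minutes of ozonation suffices for CBZ.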

Keywords: Baltic Sea, ozonation, pharmaceuticals, wastewater treatment plants

Procedia PDF Downloads 19
1024 2,7-Diazaindole as a Photophysical Probe for Excited State Hydrogen/Proton Transfer

Authors: Simran Baweja, Bhavika Kalal, Surajit Maity

Abstract:

Photoinduced tautomerization reactions have been the centre of attention of the scientific community over the past several decades because of their significance in various biological systems. 7-Azaindole (7AI) is considered a model system for DNA base pairing and for understanding the role of such tautomerization reactions in mutations. To the best of our knowledge, extensive studies have been carried out on 7-azaindole and its solvent clusters exhibiting proton/hydrogen transfer in both the solution and gas phases. Derivatives of this molecule, such as 2,7- and 2,6-diazaindole, are proposed to have even better photophysical properties due to the aza group at the 2-position. However, while solution-phase studies suggest the relevance of these molecules, no experimental gas-phase studies have been reported yet. In our current investigation, we present the first gas-phase spectroscopic data for 2,7-diazaindole (2,7-DAI) and its solvent cluster (2,7-DAI-H2O). We employed state-of-the-art laser spectroscopic methods, namely laser-induced fluorescence excitation (LIF), dispersed fluorescence (DF), two-color resonant two-photon ionization time-of-flight mass spectrometry (2C-R2PI), photoionization efficiency spectroscopy (PIE), and IR-UV double resonance spectroscopy, i.e., fluorescence-dip infrared spectroscopy (FDIR) and resonant ion-dip infrared spectroscopy (IDIR), to understand the electronic structure of the molecule. The origin band corresponding to the S1 ← S0 transition of bare 2,7-DAI is found at 33910 cm⁻¹, whereas the origin band of the S1 ← S0 transition of 2,7-DAI-H2O lies at 33074 cm⁻¹. The red-shifted transition of the solvent cluster suggests enhanced feasibility of excited-state hydrogen/proton transfer. 
The ionization potential of the 2,7-DAI molecule is found to be 8.92 eV, significantly higher than that previously reported for 7AI (8.11 eV), making it a comparatively difficult molecule to study. The ionization potential is reduced by 0.14 eV in the 2,7-DAI-H2O cluster (8.78 eV) compared to bare 2,7-DAI. Moreover, compared with the available literature values for 7AI, we found the origin bands of 2,7-DAI and 2,7-DAI-H2O to be red-shifted by 729 and 280 cm⁻¹, respectively. The ground- and excited-state N-H stretching frequencies of the 2,7-DAI molecule were determined using fluorescence-dip infrared (FDIR) and resonant ion-dip infrared (IDIR) spectra and were obtained at 3523 and 3467 cm⁻¹, respectively. The lower value of νNH in the electronically excited state of 2,7-DAI implies a higher acidity of the group compared to the ground state. Moreover, extensive computational analysis suggests that the energy barrier in the excited state reduces significantly as the number of catalytic solvent molecules (S = H2O, NH3) and the polarity of the solvent molecules increase. We found that ammonia is a better candidate for hydrogen transfer than water because of its higher gas-phase basicity. Further studies are underway to understand the excited-state dynamics and photochemistry of such N-rich chromophores.
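As a quick sanity check on the magnitudes above, the hydration-induced red shift of the origin band can be converted from wavenumbers to electronvolts using the standard factor hc ≈ 1.2398×10⁻⁴ eV per cm⁻¹. The shift of about 836 cm⁻¹ corresponds to roughly 0.10 eV of excited-state stabilization, comparable in scale to the 0.14 eV reduction in ionization potential on hydration.

```python
# S1 <- S0 origin bands (cm^-1) reported in the abstract.
bare, cluster = 33910.0, 33074.0
shift_cm = bare - cluster          # red shift of the cluster relative to bare 2,7-DAI

CM_TO_EV = 1.239841984e-4          # 1 cm^-1 expressed in eV (hc)
shift_ev = shift_cm * CM_TO_EV     # same shift in electronvolts
```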

Keywords: excited state hydrogen transfer, supersonic expansion, gas phase spectroscopy, IR-UV double resonance spectroscopy, laser induced fluorescence, photoionization efficiency spectroscopy

Procedia PDF Downloads 73