Search results for: average value at risk
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10099
619 Reliability of 2D Motion Analysis System for Sagittal Plane Lower Limb Kinematics during Running

Authors: Seyed Hamed Mousavi, Juha M. Hijmans, Reza Rajabi, Ron Diercks, Johannes Zwerver, Henk van der Worp

Abstract:

Introduction: Running is one of the most popular sports activities. Improper sagittal plane ankle, knee, and hip kinematics are considered to be associated with increased injury risk in runners. Motion-assessing smartphone applications are increasingly used to measure kinematics in both field and laboratory settings, as they are cheaper, more portable, more accessible, and easier to use than 3D motion analysis systems. The aims of this study are 1) to compare the results of a 3D gait analysis system and the Coach's Eye (CE) app; 2) to evaluate the test-retest and intra-rater reliability of the CE app for the sagittal plane hip, knee, and ankle angles at touchdown and toe-off while running. Method: Twenty subjects participated in this study. Sixteen reflective markers and cluster markers were attached to each subject's body. Subjects were asked to run at a self-selected speed on a treadmill. Twenty-five seconds of running were collected for analysis of the kinematics of interest. To measure sagittal plane hip, knee, and ankle joint angles at touchdown (TD) and toe-off (TO), the mean of the first ten acceptable consecutive strides was calculated for each angle. A smartphone (Samsung Note5, Android) was placed on the right side of the subject so that the whole body was filmed simultaneously with the 3D gait system during running. All subjects repeated the task at the same running speed after a short interval of 5 minutes. The CE app, installed on the smartphone, was used to measure the sagittal plane hip, knee, and ankle joint angles at touchdown and toe-off of the stance phase. Results: The intraclass correlation coefficient (ICC) was used to assess test-retest and intra-rater reliability. To analyze the agreement between the 3D and 2D outcomes, Bland-Altman plots were used.
The ICC values were: ankle at TD (TRR=0.80, IRR=0.94), ankle at TO (TRR=0.90, IRR=0.97), knee at TD (TRR=0.78, IRR=0.98), knee at TO (TRR=0.90, IRR=0.96), hip at TD (TRR=0.75, IRR=0.97), and hip at TO (TRR=0.87, IRR=0.98). The Bland-Altman plots, displaying the mean difference (MD) and ±2 standard deviations of the MD (2SDMD) between the 3D and 2D outcomes, gave: ankle at TD (MD=3.71, +2SDMD=8.19, -2SDMD=-0.77), ankle at TO (MD=-1.27, +2SDMD=6.22, -2SDMD=-8.76), knee at TD (MD=1.48, +2SDMD=8.21, -2SDMD=-5.25), knee at TO (MD=-6.63, +2SDMD=3.94, -2SDMD=-17.19), hip at TD (MD=1.51, +2SDMD=9.05, -2SDMD=-6.03), and hip at TO (MD=-0.18, +2SDMD=12.22, -2SDMD=-12.59). Discussion: The ability to reproduce measurements accurately is valuable in the performance and clinical assessment of joint angles. The results of this study showed that the intra-rater and test-retest reliability of the CE app for all kinematics measured is excellent (ICC ≥ 0.75). The Bland-Altman plots show large differences in values for the ankle at TD and the knee at TO. Measuring the ankle at TD by 2D gait analysis depends on the plane of movement: since ankle motion at TD mostly occurs outside the sagittal plane, the measurements can differ as the foot progression angle at TD increases during running. The difference in values for the knee at TO can depend on how the 3D system and the rater detect TO during the stance phase of running.
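As an illustrative sketch of the agreement analysis described above, the Bland-Altman limits can be computed from paired 2D and 3D measurements. The data below are hypothetical, not from the study; only the method (MD and MD ± 2 SD of the paired differences) follows the abstract.

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Mean difference (MD) and MD ± 2 SD limits of agreement."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    md = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return md, md + 2 * sd, md - 2 * sd

# Hypothetical paired joint-angle measurements (degrees), NOT study data
three_d = [12.1, 10.5, 13.2, 11.8, 12.9]
two_d = [9.0, 8.2, 10.1, 8.8, 9.5]
md, upper, lower = bland_altman(three_d, two_d)
```

A systematic bias shows up as an MD far from zero, as reported here for the ankle at TD.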

Keywords: reliability, running, sagittal plane, two dimensional

Procedia PDF Downloads 189
618 Wood Dust and Nanoparticle Exposure among Workers during a New Building Construction

Authors: Atin Adhikari, Aniruddha Mitra, Abbas Rashidi, Imaobong Ekpo, Jefferson Doehling, Alexis Pawlak, Shane Lewis, Jacob Schwartz

Abstract:

Building construction in the US involves numerous wooden structures. Wood is routinely used in walls, floor framing, stair framing, and landings. Cross-laminated timber is currently being used as a construction material for tall buildings. Numerous workers are involved in these timber-based constructions, and wood dust is one of their most common occupational exposures. Wood dust is a complex substance composed of cellulose, polyoses, and other substances. According to US OSHA, exposure to wood dust is associated with a variety of adverse health effects among workers, including dermatitis, allergic respiratory effects, mucosal and non-allergic respiratory effects, and cancers. The amount and size of the particles released as wood dust differ according to the operations performed on the wood. For example, shattering of wood during sanding produces finer particles than does chipping in sawing and milling. To our knowledge, how the shattering, cutting, and sanding of wood and wood slabs during new building construction release fine particles and nanoparticles is largely unknown. The general belief is that the dust generated during timber cutting and sanding consists mostly of large particles. Consequently, little attention has been given to the submicron ultrafine and nanoparticles generated and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study using a newly developed nanoparticle monitor and conventional particle counters. The study was conducted at a large new building construction site in southern Georgia, primarily during the framing of wooden side walls, inner partition walls, and landings.
Exposure levels of nanoparticles (n = 10) were measured by a newly developed nanoparticle counter (TSI NanoScan SMPS Model 3910) at four distances (5, 10, 15, and 30 m) from the work location. Other airborne particles (number of particles/m³), including PM2.5 and PM10, were monitored using a 6-channel (0.3, 0.5, 1.0, 2.5, 5.0, and 10 µm) particle counter at 15 m, 30 m, and 75 m in both upwind and downwind directions. Mass concentrations of PM2.5 and PM10 (µg/m³) were measured using a DustTrak Aerosol Monitor. Temperature and relative humidity levels were recorded. Wind velocity was measured by a hot-wire anemometer. The concentration ranges of nanoparticles for the 13 particle sizes were: 11.5 nm: 221 – 816/cm³; 15.4 nm: 696 – 1735/cm³; 20.5 nm: 879 – 1957/cm³; 27.4 nm: 1164 – 2903/cm³; 36.5 nm: 1138 – 2640/cm³; 48.7 nm: 938 – 1650/cm³; 64.9 nm: 759 – 1284/cm³; 86.6 nm: 705 – 1019/cm³; 115.5 nm: 494 – 1031/cm³; 154 nm: 417 – 806/cm³; 205.4 nm: 240 – 471/cm³; 273.8 nm: 45 – 92/cm³; and 365.2 nm:

Keywords: wood dust, industrial hygiene, aerosol, occupational exposure
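As a small illustration of working with such binned data, the reported per-channel ranges can be summed to bound the total nanoparticle number concentration across bins. This is a sketch, not part of the study's analysis; the 365.2 nm channel is omitted because its value is truncated in the source.

```python
# Reported concentration ranges (particles/cm^3) per size bin (nm)
ranges = {
    11.5: (221, 816), 15.4: (696, 1735), 20.5: (879, 1957),
    27.4: (1164, 2903), 36.5: (1138, 2640), 48.7: (938, 1650),
    64.9: (759, 1284), 86.6: (705, 1019), 115.5: (494, 1031),
    154.0: (417, 806), 205.4: (240, 471), 273.8: (45, 92),
}

def total_concentration(bounds):
    """Sum the lower and upper bounds across bins for an overall range."""
    low = sum(lo for lo, _ in bounds.values())
    high = sum(hi for _, hi in bounds.values())
    return low, high

low, high = total_concentration(ranges)
```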

Procedia PDF Downloads 175
617 The Use of Rotigotine to Improve Hemispatial Neglect in Stroke Patients at the Charing Cross Neurorehabilitation Unit

Authors: Malab Sana Balouch, Meenakshi Nayar

Abstract:

Hemispatial neglect is a common disorder primarily associated with right hemispheric stroke, occurring in up to 82% of cases in the acute phase. Affected individuals fail to acknowledge or respond to people and objects in their left field of vision due to deficits in attention and awareness. Persistent hemispatial neglect significantly impedes post-stroke recovery, leading to longer hospital stays, increased functional dependency, longer-term disability in activities of daily living, and increased risk of falls. Recently, evidence has emerged for the use of the dopamine agonist rotigotine in neglect. The aim of our Quality Improvement Project (QIP) is to evaluate and improve the current protocols and practice in the assessment, documentation, and management of neglect and rotigotine use at the Charing Cross Neurorehabilitation Unit (CNRU). In addition, it sheds light on rotigotine use in the management of hemispatial neglect and paves the way for future research in the field. Our QIP was based in the CNRU. All patients admitted to the CNRU with a right-sided stroke from 2 February 2018 to 2 February 2021 were included. Each patient's multidisciplinary team report and hospital notes were searched for information, including bio-data, fulfilment of the inclusion criteria (having hemispatial neglect), and data related to rotigotine use: whether or not the drug was administered, any contraindications in patients who did not receive it, and any therapeutic benefits (subjective or objective improvement in neglect) in those who did. Data were entered into an Excel sheet, and statistical analysis was performed in SPSS 20.0. Of 80 patients with right-sided strokes, 72.5% were infarcts and 27.5% were hemorrhagic, with the vast majority of both types in the middle cerebral artery (MCA) territory.
A total of 31 (38.8%) of our patients were noted to have hemispatial neglect, with the highest number of cases associated with MCA strokes; almost half of our patients with MCA strokes suffered from neglect. Neglect was more common in male patients. Of the 31 patients with visuospatial neglect, only 16% actually received rotigotine; of these, 80% showed objective improvement on neglect tests and 20% reported subjective improvement. After thoroughly reviewing the neglect-associated documentation, the following recommendations were put in place. We plan to liaise with the occupational therapy team at our rehabilitation unit to set a battery of tests to be performed on all patients presenting with neglect, and we recommend clear documentation of the outcome of each neglect screen. We also plan to create two proformas: one for the therapy team, to aid systematic documentation of neglect screens done before and after rotigotine administration, and a second for the medical team, with clear documentation of rotigotine use, its benefits, and any contraindications if not administered.

Keywords: hemispatial Neglect, right hemispheric stroke, rotigotine, neglect, dopamine agonist

Procedia PDF Downloads 65
616 The Impact of Tourism on the Intangible Cultural Heritage of Pilgrim Routes: The Case of El Camino de Santiago

Authors: Miguel Angel Calvo Salve

Abstract:

This qualitative and quantitative study identifies the impact of tourism pressure on the intangible cultural heritage of the pilgrim route of El Camino de Santiago (Saint James Way) and proposes an approach to a sustainable tourism model for such cultural routes. Since 1993, the Spanish section of the pilgrim route of El Camino de Santiago has been on the World Heritage List. In 1994, the International Committee on Cultural Routes (CIIC-ICOMOS) initiated its work with the goal of studying, preserving, and promoting cultural routes and their significance as a whole. The ICOMOS Charter on Cultural Routes (2008) likewise pointed out the importance of both tangible and intangible heritage and the need for a holistic vision in preserving these important cultural assets. Tangible elements provide physical confirmation of the existence of a cultural route, while the intangible elements give sense and meaning to it as a whole. The intangible assets of a cultural route are key to understanding its significance and associated heritage values. Like many pilgrim routes, the Route to Santiago, as the result of a long evolutionary process, exhibits and is supported by intangible assets, including hospitality, cultural and religious expressions, music, literature, and artisanal trade, among others. A large increase in pilgrims walking the route with very different aims, together with tourism pressure, has shown how fragile and vulnerable the dynamic links between the intangible cultural heritage and the local inhabitants along El Camino are. The economic benefits for communities along cultural routes are commonly fundamental to the micro-economies of the people living there, substituting traditional productive activities, which in turn modifies and impacts the surrounding environment and the route itself.
Consumption of heritage is one of the major issues for the sustainable preservation promoted with the intention of revitalizing these sites and places. The adaptation of local communities to new conditions aimed at preserving and protecting existing heritage has had a significant impact on their immaterial inheritance. Based on questionnaires administered to pilgrims, tourists, and local communities along El Camino during the peak season, and using official statistics from the Galician Pilgrim's Office, this study identifies the risks and threats to El Camino de Santiago as a cultural route. The threats visible today due to the impact of mass tourism include the transformation of tangible heritage, consumerism of the intangible, changes in local activities, loss of authenticity of symbols and spiritual significance, and the transformation of pilgrimage into a tourism 'product', among others. The study also proposes measures to mitigate these impacts and better preserve this type of cultural heritage. It will therefore help the route's service providers and policymakers to better preserve the cultural route as a whole and ultimately improve the experience of pilgrims.

Keywords: cultural routes, El Camino de Santiago, impact of tourism, intangible heritage

Procedia PDF Downloads 65
615 Solutions to Reduce CO2 Emissions in Autonomous Robotics

Authors: Antoni Grau, Yolanda Bolea, Alberto Sanfeliu

Abstract:

Mobile robots can be used in many different applications, including mapping, search and rescue, reconnaissance, hazard detection, carpet cleaning, and exploration. However, they are limited by their reliance on traditional energy sources such as electricity and oil, which cannot always provide a convenient energy source in all situations. In an ever more eco-conscious world, solar energy offers the most environmentally clean option of all energy sources. Electricity presents threats of pollution resulting from its production process, and oil poses a huge threat to the environment: not only does the combustion needed to produce energy release toxic emissions (for instance, CO2), but there is also the ever-present risk of oil spillages and damage to ecosystems. Solar energy can help to mitigate carbon emissions by replacing more carbon-intensive sources of heat and power. The challenge of this work is to propose the design and implementation of electric battery recharge stations. These recharge docks are based on the use of renewable energy, namely solar energy (with photovoltaic panels), with the objective of reducing CO2 emissions. In this paper, a comparative study of CO2 emission production (from the use of different energy sources: natural gas, gas oil, fuel oil, and solar panels) in the charging process of Segway PT batteries is carried out. For the study with solar energy, a photovoltaic panel and a buck-boost DC/DC block have been used. Specifically, the STP005S-12/Db solar panel was used in our experiments: a 5 Wp photovoltaic (PV) module with 36 serially connected monocrystalline cells. With these elements, a battery recharge station was built to recharge the robot batteries. For the energy storage of the DC/DC block, a series of ultracapacitors was used.
Due to the variation of the PV panel output with temperature and irradiation, the non-integer behavior of the ultracapacitors, and the non-linearities of the whole system, the authors used a fractional control method to ensure that the solar panels supply the maximum allowed power and recharge the robots in the least time. Greenhouse gas emissions from electricity production vary due to regional differences in source fuel. The impact of an energy technology on the climate can be characterised by its carbon emission intensity, a measure of the amount of CO2, or CO2 equivalent, emitted per unit of energy generated. In our work, coal is the most hazardous fossil fuel, producing 53% more gas emissions than natural gas and 30% more than fuel oil. Moreover, it is remarkable that existing fossil fuel technologies produce high carbon emission intensity through the combustion of carbon-rich fuels, whilst renewable technologies such as solar produce little or no emissions during operation but may incur emissions during manufacture. Solar energy can thus help to mitigate carbon emissions.
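The relative intensities quoted above can be sketched numerically. The sketch below uses an arbitrary reference value for coal (the abstract gives only percentages, not absolute intensities) and backs out the implied intensities of natural gas and fuel oil.

```python
def intensity_from_coal(coal_intensity, coal_pct_more):
    """Back out a fuel's carbon emission intensity, given that coal's
    intensity is coal_pct_more percent higher than that fuel's."""
    return coal_intensity / (1 + coal_pct_more / 100.0)

coal = 100.0  # arbitrary reference units of CO2-equivalent per unit of energy
gas = intensity_from_coal(coal, 53)   # coal emits 53% more than natural gas
oil = intensity_from_coal(coal, 30)   # coal emits 30% more than fuel oil
```

On this scale, natural gas comes out around 65 and fuel oil around 77 relative to coal at 100, consistent with the ordering stated in the abstract.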

Keywords: autonomous robots, CO2 emissions, DC/DC buck-boost, solar energy

Procedia PDF Downloads 409
614 Adverse Drug Reactions Monitoring in the Northern Region of Zambia

Authors: Ponshano Kaselekela, Simooya O. Oscar, Lunshano Boyd

Abstract:

The Copperbelt University Health Services (CBUHS) was designated by the Zambia Medicines Regulatory Authority (ZAMRA), formerly the Pharmaceutical Regulatory Authority (PRA), as a regional pharmacovigilance centre to carry out drug safety monitoring in four provinces in Zambia. The CBUHS's mandate included stimulating the reporting of adverse drug reactions (ADRs), as well as collecting and collating ADR reports from health institutions in the four provinces. This report covers the researchers' experiences from May 2008 to September 2016. The main objectives were 1) to monitor ADRs in the Zambian population, 2) to disseminate information to all health professionals in the region advising that CBU Health was a centre for reporting ADRs in the region, 3) to monitor polypharmacy as well as the benefit-risk profile of medicines, 4) to generate independent, evidence-based recommendations on the safety of medicines, 5) to support ZAMRA in formulating safety-related regulatory decisions for medicines, and 6) to communicate findings with all key stakeholders. The methodology involved monthly visits by the CBUHS to health institutions in the programme areas from May 2008 to September 2016. Activities included holding discussions with health workers, distributing ADR forms, and collecting ADR reports. Once collected, these reports were documented and assessed at the CBUHS, and a report was prepared for ZAMRA on a quarterly basis. At ZAMRA, serious ADRs were noted and recommendations made to the Ministry of Health of the Republic of Zambia. The results show that 2,600 ADR reports were received at the regional pharmacovigilance centre. Most of the ADR reports received were due to antiretroviral drugs, with a few from anti-malarial drugs such as artemether/lumefantrine (Coartem®). Three hundred and twelve ADRs were entered into the Uppsala Monitoring Centre's WHO VigiFlow for further analysis.
It was concluded that, in general, 2008-16 were exciting years for the pharmacovigilance group at the CBUHS. From a very tentative beginning, great strides were made and contacts established with healthcare facilities in the region. The researchers were encouraged by the support received from the Copperbelt University management, the motivation provided by ZAMRA, and, most importantly, the enthusiasm of health workers in all the healthcare facilities visited. As a centre for drug safety in Zambia, the results show that it achieved its objectives of monitoring ADRs as well as preventing them. However, the centre faces critical challenges caused by erratic funding, which prevents the smooth running of the programme.

Keywords: adverse drug reactions, drug safety, monitoring, pharmacovigilance

Procedia PDF Downloads 191
613 The Impact of Professional Development on Teachers’ Instructional Practice

Authors: Karen Koellner, Nanette Seago, Jennifer Jacobs, Helen Garnier

Abstract:

Although studies of teacher professional development (PD) are prevalent, surprisingly most have produced only incremental shifts in teachers' learning and in their impact on students. There is a critical need to understand what teachers take up and use in their classroom practice after attending PD, and why we often do not see greater changes in learning and practice. This paper is based on a mixed-methods efficacy study of the Learning and Teaching Geometry (LTG) video-based mathematics professional development materials. The extent to which the materials produce a beneficial impact on teachers' mathematics knowledge, classroom practices, and their students' knowledge in the domain of geometry is considered through a group-randomized experimental design. In this study, we examine a small group of teachers to better understand their interpretations of the workshops and their classroom uptake. The participants included 103 secondary mathematics teachers serving grades 6-12 in two states in different regions. Randomization was conducted at the school level, with 23 schools and 49 teachers assigned to the treatment group and 18 schools and 54 teachers assigned to the comparison group. The case study examination included twelve treatment teachers. PD workshops for treatment teachers began in Summer 2016. Nine full days of professional development were offered, beginning with a one-week institute (Summer 2016) followed by four days of PD throughout the academic year. The same facilitator led all of the workshops, after completing a facilitator preparation process that included a multi-faceted assessment of fidelity. The overall impact of the LTG PD program was assessed from multiple sources: two teacher content assessments, two PD-embedded assessments, pre-post-post videotaped classroom observations, and student assessments. Additional data were collected from the case study teachers, including additional videotaped classroom observations and interviews.
Repeated measures ANOVA analyses were used to detect patterns of change in the treatment teachers' content knowledge before and after completion of the LTG PD, relative to the comparison group. No significant effects were found between the two groups of teachers on the two teacher content assessments. Teachers were rated on the quality of their mathematics instruction captured in videotaped classroom observations using the Math in Common Observation Protocol. On average, teachers who attended the LTG PD intervention improved their ability to engage students in mathematical reasoning and to provide accurate, coherent, and well-justified mathematical content. In addition, both the LTG PD intervention and instruction that engaged students in mathematical practices positively and significantly predicted greater student knowledge gains; teacher knowledge was not a significant predictor. The twelve treatment teachers self-selected to serve as case study teachers and provided additional videotapes in which they felt they were using something they had learned and experienced in the PD. Project staff analyzed the videos, compared them to previous videos, and interviewed the teachers regarding their uptake of the PD related to content knowledge, pedagogical knowledge, and resources used.
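As a simplified sketch of the treatment-versus-comparison logic behind such an analysis (a gain-score contrast, not the full repeated measures ANOVA used in the study), with entirely hypothetical scores:

```python
from statistics import mean

def mean_gain(pre, post):
    """Average pre-to-post change in assessment score for one group."""
    return mean(b - a for a, b in zip(pre, post))

# Hypothetical content-assessment scores (arbitrary scale), NOT study data
treatment_pre, treatment_post = [10, 12, 11, 13], [12, 13, 12, 15]
comparison_pre, comparison_post = [11, 12, 10, 13], [11, 13, 10, 14]

# Difference in mean gains between treatment and comparison groups
effect = (mean_gain(treatment_pre, treatment_post)
          - mean_gain(comparison_pre, comparison_post))
```

A repeated measures ANOVA additionally partitions within-subject and between-group variance; this sketch only captures the headline contrast.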

Keywords: teacher learning, professional development, pedagogical content knowledge, geometry

Procedia PDF Downloads 158
612 Clinical Validation of C-PDR Methodology for Accurate Non-Invasive Detection of Helicobacter pylori Infection

Authors: Suman Som, Abhijit Maity, Sunil B. Daschakraborty, Sujit Chaudhuri, Manik Pradhan

Abstract:

Background: Helicobacter pylori is a common and important human pathogen and the primary cause of peptic ulcer disease and gastric cancer. Currently, H. pylori infection is detected by both invasive and non-invasive methods, but diagnostic accuracy is not yet satisfactory. Aim: To set an optimal diagnostic cut-off value for the 13C-urea breath test (13C-UBT) to detect H. pylori infection, and to evaluate a novel c-PDR methodology to overcome the inconclusive grey zone. Materials and Methods: All 83 subjects first underwent upper-gastrointestinal endoscopy followed by a rapid urease test and histopathology; depending on these results, we classified 49 subjects as H. pylori positive and 34 as negative. After an overnight fast, patients took 4 g of citric acid in 200 ml of water, and 10 minutes after ingestion of this test meal a baseline exhaled breath sample was collected. Thereafter, an oral dose of 75 mg of 13C-urea dissolved in 50 ml of water was given, and breath samples were collected at 15-minute intervals up to 90 minutes and analysed by laser-based high-precision cavity-enhanced spectroscopy. Results: We studied the excretion kinetics of the 13C isotope enrichment (expressed as δDOB13C ‰) of the exhaled breath samples and found maximum enrichment around 30 minutes in H. pylori positive patients; this is due to acid-mediated stimulation of urease enzyme activity, with maximum acidification occurring within 30 minutes. No such significant isotopic enrichment was observed in H. pylori negative individuals. Using a Receiver Operating Characteristic (ROC) curve, an optimal diagnostic cut-off value of δDOB13C ‰ = 3.14 was determined at 30 minutes, exhibiting 89.16% accuracy. To overcome the grey-zone problem, we then explored the percentage dose of 13C recovered per hour, i.e., 13C-PDR (%/hr), and the cumulative percentage dose of 13C recovered, i.e., c-PDR (%), in the exhaled breath samples for the present 13C-UBT.
We further explored the diagnostic accuracy of the 13C-UBT by constructing a ROC curve using c-PDR (%) values; an optimal cut-off value was estimated to be c-PDR = 1.47 (%) at 60 minutes, exhibiting 100% diagnostic sensitivity, 100% specificity, and 100% accuracy of the 13C-UBT for detection of H. pylori infection. We also elucidated the gastric emptying process of the present 13C-UBT for H. pylori positive patients: the maximal emptying rate was found at 36 minutes, and the half-emptying time at 45 minutes. Conclusions: The present study demonstrates the importance of the c-PDR methodology in overcoming the grey-zone problem of the 13C-UBT, allowing accurate determination of infection without risk of diagnostic errors and making the 13C-UBT a sufficiently robust and novel method for accurate and fast non-invasive diagnosis of H. pylori infection for large-scale screening purposes.
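The c-PDR quantity described above can be sketched as a cumulative (trapezoidal) integration of the hourly recovery rate over the sampling times, thresholded at the reported 60-minute cut-off of 1.47%. The breath-sample series below is hypothetical; only the construction of c-PDR from 13C-PDR follows the abstract.

```python
def cumulative_pdr(times_min, pdr_per_hr):
    """Cumulative % dose recovered: trapezoidal integration of PDR (%/hr)
    over the sampling times (converted from minutes to hours)."""
    total, out = 0.0, [0.0]
    for i in range(1, len(times_min)):
        dt_hr = (times_min[i] - times_min[i - 1]) / 60.0
        total += 0.5 * (pdr_per_hr[i] + pdr_per_hr[i - 1]) * dt_hr
        out.append(total)
    return out

def classify(c_pdr_at_60, cutoff=1.47):
    """Positive for H. pylori if c-PDR at 60 min exceeds the reported cutoff."""
    return c_pdr_at_60 >= cutoff

# Hypothetical breath-sample series at 15-minute intervals, NOT study data
times = [0, 15, 30, 45, 60]
pdr = [0.0, 1.8, 3.0, 2.4, 1.6]  # %/hr
cum = cumulative_pdr(times, pdr)
```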

Keywords: 13C-Urea breath test, c-PDR methodology, grey zone, Helicobacter pylori

Procedia PDF Downloads 296
611 The Analysis of Gizmos Online Program as Mathematics Diagnostic Program: A Story from an Indonesian Private School

Authors: Shofiayuningtyas Luftiani

Abstract:

Some private schools in Indonesia have started integrating the online program Gizmos into the teaching-learning process. Gizmos was developed to supplement the existing curriculum by integrating it into instructional programs. The program features inquiry-based simulations, in which students explore using a worksheet while teachers use the teacher guidelines to direct and assess students' performance. In this study, the discussion of Gizmos highlights its features as an assessment medium for mathematics learning for secondary school students. The discussion is based on a case study and a literature review from the Indonesian context. The purpose of applying Gizmos as an assessment medium refers to diagnostic assessment. As part of the diagnostic assessment, teachers review the student exploration sheets, analyse the students' difficulties in particular, and consider the findings when planning the future learning process. This assessment is important because teachers need data about students' persistent weaknesses. Additionally, the program helps build students' understanding through its interactive simulations. Currently, the assessment over-emphasizes students' answers in the worksheet against the provided answer keys, while students perform their skills in translating the question, running the simulation, and answering the question. However, assessment should involve multiple perspectives on and sources of students' performance, since teachers should adjust instructional programs to the complexity of students' learning needs and styles. Consequently, an approach to improving the assessment components is selected to challenge the current assessment. The purpose of this challenge is to involve not only cognitive diagnosis but also the analysis of skills and errors.
Concerning the selected setting for this diagnostic assessment, which combines cognitive diagnosis, skills analysis, and error analysis, teachers should create an assessment rubric. The rubric plays an important role as a guide providing a set of criteria for the assessment. Without a precise rubric, the teacher risks ineffectively documenting and following up on data about students at risk of failure. Furthermore, teachers who employ Gizmos for diagnostic assessment might encounter obstacles. Based on the conditions of assessment in the selected setting, the obstacles involve time constraints, reluctance to take on a higher teaching burden, and students' behavior. Consequently, the teacher who chooses Gizmos with these approaches has to plan, implement, and evaluate the assessment. The main point of this assessment is not the result of the students' worksheets; rather, diagnostic assessment is a two-stage process: prompting, and then effectively following up on, both individual weaknesses and those of the learning process. Ultimately, the discussion of Gizmos as a medium for diagnostic assessment refers to the effort to improve the mathematics learning process.

Keywords: diagnostic assessment, error analysis, Gizmos online program, skills analysis

Procedia PDF Downloads 171
610 Controlled Nano Texturing in Silicon Wafer for Excellent Optical and Photovoltaic Properties

Authors: Deb Kumar Shah, M. Shaheer Akhtar, Ha Ryeon Lee, O-Bong Yang, Chong Yeal Kim

Abstract:

Crystalline silicon (Si) solar cells are the most renowned photovoltaic technology and are well established commercially; most of the solar panels installed globally use crystalline Si modules. At present, the major share of the photovoltaic (PV) market belongs to c-Si solar cells, but the cost of c-Si panels is still very high compared with other PV technologies. In order to reduce the cost of Si solar panels, a few necessary steps are to be considered, such as low-cost Si manufacturing, cheap antireflection coating materials, and inexpensive solar panel manufacturing. It is known that the antireflection (AR) layer in a c-Si solar cell is an important component that reduces Fresnel reflection and improves the overall conversion efficiency. A bare Si wafer exhibits about 30% reflection because it poses two major intrinsic drawbacks: spectral mismatch loss and high Fresnel reflection loss due to the high contrast in refractive index between air and silicon. In recent years, much research has been devoted to finding effective and low-cost AR materials. Silicon nitride (SiNx) is a well-known AR material in commercial c-Si solar cells due to its good deposition on and interaction with passivated Si surfaces. However, SiNx AR deposition is usually performed by the expensive plasma-enhanced chemical vapor deposition (PECVD) process, which has several demerits, such as difficult handling and plasma damage to the Si substrate when secondary electrons collide with the wafer surface during AR coating. It is therefore very important to explore new, low-cost, and effective AR deposition processes to cut the manufacturing cost of c-Si solar cells. It can also be realized that a nano-texturing process, like the growth of nanowires, nanorods, nanopyramids, or nanopillars on the Si wafer, can provide low reflection at the surface of Si wafer-based solar cells.
Such nanostructures can enhance the antireflection property by providing a larger surface area and effective light trapping. In this work, we report the development of crystalline Si solar cells without an AR layer. The silicon wafer was modified by growing nanowire-like Si nanostructures using a controlled wet etching method and used directly for the fabrication of a Si solar cell without an AR coating. The nanostructures on the Si wafer were optimized in terms of size, length and density by changing the etching conditions. Well-defined, aligned wire-like structures were achieved at etching times of 20 to 30 min. The prepared Si nanostructures displayed a minimum reflectance of ~1.64% at 850 nm, with an average reflectance of ~2.25% over the 400-1000 nm wavelength range. The nanostructured Si wafer based solar cells achieved power conversion efficiency comparable to that of c-Si solar cells with a SiNx AR layer. This study confirms that the reported controlled wet etching method is an easy, facile way to prepare wire-like nanostructures on Si wafers with low reflectance across the whole visible region, which holds great promise for developing low-cost c-Si solar cells without an AR layer.
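The reflectance figures quoted above (minimum ~1.64% at 850 nm, average ~2.25% over 400-1000 nm) are band statistics of a measured spectrum. A minimal sketch of how such values could be extracted, assuming hypothetical wavelength/reflectance arrays from a spectrophotometer (not the authors' analysis code):

```python
import numpy as np

def reflectance_stats(wavelengths_nm, reflectance_pct, lo=400, hi=1000):
    """Minimum and band-averaged reflectance over [lo, hi] nm,
    as reported for the nanostructured Si surface."""
    wl = np.asarray(wavelengths_nm, dtype=float)
    r = np.asarray(reflectance_pct, dtype=float)
    mask = (wl >= lo) & (wl <= hi)       # keep only the reported band
    wl, r = wl[mask], r[mask]
    r_min = r.min()
    wl_min = wl[np.argmin(r)]            # wavelength of minimum reflectance
    r_avg = r.mean()                     # simple average over the band
    return r_min, wl_min, r_avg
```

With a denser, evenly sampled spectrum, `r.mean()` approximates the band-averaged reflectance the abstract reports.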

Keywords: chemical etching, conversion efficiency, silicon nanostructures, silicon solar cells, surface modification

Procedia PDF Downloads 116
609 Provider Perceptions of the Effects of Current U.S. Immigration Enforcement Policies on Service Utilization in a Border Community

Authors: Isabel Latz, Mark Lusk, Josiah Heyman

Abstract:

The rise of restrictive U.S. immigration policies and their strengthened enforcement has reportedly caused concern among providers about inadvertent effects on service utilization in Latinx and immigrant communities. This study presents perceptions on this issue from twenty service providers in health care, mental health, nutrition assistance, legal assistance, and immigrant advocacy in El Paso, Texas. All participants were experienced professionals, fifteen of them in CEO, COO, executive director, or equivalent positions, based at organizations that provide services for immigrant and/or low-income populations in a bi-national border community. Quantitative and qualitative data were collected by two primary investigators via semi-structured telephone interviews with an average length of 20 minutes. A survey script with closed and open-ended questions inquired about participants’ demographic information and their perceptions of how immigration enforcement policies under the current federal administration have affected their work and patient or client populations. Quantitative and qualitative data were analyzed to produce descriptive statistics and identify salient themes, respectively. Nearly all respondents stated that their work has been negatively (N=13) or both positively and negatively (N=5) affected by current immigration enforcement policies. Negative effects were most commonly related to immigration enforcement-related fear and uncertainty among patient or client populations. Positive effects most frequently referred to a sense of increased community organizing and greater cooperation among organizations. Similarly, the majority of service providers reported either an increase (N=8) or a decrease (N=6) in service utilization due to changes in immigration enforcement policies.
Increased service needs were primarily related to a need for public education about immigration enforcement policy changes, information about how new policies impact individuals’ service eligibility, legal status, and civil rights, as well as a need to correct misinformation. Decreased service utilization was primarily related to fear-related service avoidance. While providers observed changes in service utilization among undocumented immigrants and mixed-immigration status families, in particular, participants also noted ‘spillover’ effects on the larger Latinx community, including legal permanent and temporary residents, refugees or asylum seekers, and U.S. citizens. This study reveals preliminary insights into providers’ widespread concerns about the effects of current immigration enforcement policies on health, social, and legal service utilization among Latinx individuals. Further research is necessary to comprehensively assess impacts of immigration enforcement policies on service utilization in Latinx and immigrant communities. This information is critical to address gaps in service utilization and prevent an exacerbation of health disparities among Latinx, immigrant, and border populations. In a global climate of rising nationalism and xenophobia, it is critical for policymakers to be aware of the consequences of immigration enforcement policies on the utilization of essential services to protect the well-being of minority and immigrant communities.

Keywords: immigration enforcement, immigration policy, provider perceptions, service utilization

Procedia PDF Downloads 129
608 Computational Fluid Dynamics (CFD) Calculations of the Wind Turbine with an Adjustable Working Surface

Authors: Zdzislaw Kaminski, Zbigniew Czyz, Krzysztof Skiba

Abstract:

This paper discusses the CFD simulation of the flow around the rotor of a vertical axis wind turbine. Unlike experiments, numerical simulation enables project assumptions to be validated at the design stage, avoiding the costly preparation of a model or prototype for a bench test. CFD simulation makes it possible to compare the aerodynamic forces acting on the rotor working surfaces and to define operational parameters such as the torque or power generated by the turbine assembly. This research focused on a rotor whose blades can modify their working surface, i.e. the surface absorbing wind kinetic energy. The rotor operates by adjusting the angular aperture α of the top and bottom parts of the blades mounted on an axis: as the angular aperture α increases, the working surface that absorbs wind kinetic energy also increases. Turbine operation is characterized by parameters such as the angular aperture of the blades, power, torque, and rotational speed for a given wind speed, and these parameters affect the efficiency of the assembly. The distribution of forces acting on the working surfaces of our turbine changes with the angular velocity of the rotor. Moreover, the resultant of the forces acting on the advancing and retreating blades should be as high as possible. This paper is part of research aimed at improving the efficiency of the rotor assembly. Using simulation, the courses of the above parameters were therefore studied over three full rotations, individually for each blade, for three angular apertures of the blade working surfaces (30°, 60°, 90°), at three wind speeds (4 m/s, 6 m/s, 8 m/s) and rotor speeds ranging from 100 to 500 rpm. Characteristics of the torque coefficient and power as a function of time were then created for each blade separately and for the entire rotor, together with the correlation between turbine rotor power and wind speed for varied values of rotor rotational speed.
By processing these data, the correlation between the power of the turbine rotor and its rotational speed was specified for each angular aperture of the working surfaces, and the optimal values, i.e. those giving the highest output power at given wind speeds, were read off. The research yields the basic characteristics of turbine rotor power as a function of wind speed for the three angular apertures of the blades. Given the nature of rotor operation, the growth in turbine output can be estimated as the angular aperture of the blades increases. Controlled adjustment of the angle α enables smooth adjustment of the power generated by the turbine rotor. At high wind speeds, this type of adjustment allows the output power to remain at the same level (by reducing the angle α) with no risk of damaging the structure. This work has been financed by the Polish Ministry of Science and Higher Education.
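The search for the optimal rotor speed described above amounts to scanning the simulated power values at each wind speed; the dimensionless power coefficient is a standard way to compare such operating points across wind speeds. A minimal sketch, not the authors' code:

```python
import numpy as np

RHO = 1.225  # air density at sea level, kg/m^3 (assumed)

def power_coefficient(power_w, wind_speed, swept_area):
    """Cp = P / (0.5 * rho * A * v^3): the fraction of the wind's
    kinetic power that the rotor converts to output power."""
    return power_w / (0.5 * RHO * swept_area * wind_speed ** 3)

def optimal_rpm(rpms, powers):
    """Pick the rotor speed giving the highest simulated output power
    for one wind speed and one blade angular aperture."""
    rpms, powers = np.asarray(rpms), np.asarray(powers)
    return rpms[np.argmax(powers)]
```

Repeating `optimal_rpm` for each wind speed and each angular aperture α reproduces the kind of power-versus-speed characteristics the paper extracts.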

Keywords: computational fluid dynamics, numerical analysis, renewable energy, wind turbine

Procedia PDF Downloads 201
607 Measuring Emotion Dynamics on Facebook: Associations between Variability in Expressed Emotion and Psychological Functioning

Authors: Elizabeth M. Seabrook, Nikki S. Rickard

Abstract:

Examining time-dependent measures of emotion, such as variability, instability, and inertia, provides critical and complementary insights into mental health status. Observing changes in the pattern of emotional expression over time could act as a tool to identify meaningful shifts between psychological well- and ill-being. From a practical standpoint, however, examining emotion dynamics day-to-day is likely to be burdensome and invasive. Utilizing social media data as a facet of lived experience can provide real-world, temporally specific access to emotional expression. Emotional language on social media may provide accurate and sensitive insights into individual and community mental health and well-being, particularly when the focus is placed on the within-person dynamics of online emotion expression. The objective of the current study was to examine the dynamics of emotional expression on the social network platform Facebook for active users and their relationship with psychological well- and ill-being. It was expected that greater positive and negative emotion variability, instability, and inertia would be associated with poorer psychological well-being and greater depression symptoms. Data were collected using a smartphone app, MoodPrism, which delivered demographic questionnaires and psychological inventories assessing depression symptoms and psychological well-being, and collected the Status Updates of consenting participants. MoodPrism also delivered an experience sampling methodology in which participants completed items assessing positive affect, negative affect, and arousal daily for a 30-day period. The numbers of positive and negative words in posts were extracted and automatically collated by MoodPrism, and the relative proportions of positive and negative words out of the total words written in posts were then calculated. Preliminary analyses have been conducted with the data of 9 participants.
While these analyses are underpowered due to the sample size, they reveal trends whereby greater variability in the emotional valence expressed in posts is positively associated with greater depression symptoms (r(9) = .56, p = .12), as is greater instability in emotional valence (r(9) = .58, p = .099). A full analysis, using time-series techniques to explore the Facebook data set, will be presented at the conference. Identifying the features of emotion dynamics (variability, instability, inertia) that are relevant to mental health in social media emotional expression is a fundamental step in creating automated screening tools for mental health that are temporally sensitive, unobtrusive, and accurate. The current findings show how monitoring basic social network characteristics over time can provide greater depth in predicting risk and changes in depression and positive well-being.
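The three emotion-dynamics features named above have standard operationalizations in the affect-dynamics literature: variability as the within-person standard deviation, instability as the (root) mean squared successive difference, and inertia as the lag-1 autocorrelation. A sketch under those assumptions (the abstract does not state the exact definitions used):

```python
import numpy as np

def variability(series):
    """Within-person variability: sample standard deviation of the
    daily emotion-valence series."""
    return np.std(series, ddof=1)

def instability(series):
    """Instability: root mean squared successive difference (RMSSD),
    which is sensitive to day-to-day jumps, not just overall spread."""
    diffs = np.diff(series)
    return np.sqrt(np.mean(diffs ** 2))

def inertia(series):
    """Inertia: lag-1 autocorrelation, i.e. how strongly today's
    emotion predicts tomorrow's."""
    x = np.asarray(series, dtype=float)
    x0 = x[:-1] - x[:-1].mean()
    x1 = x[1:] - x[1:].mean()
    return np.sum(x0 * x1) / np.sqrt(np.sum(x0 ** 2) * np.sum(x1 ** 2))
```

Applied to the daily proportion of positive (or negative) words per participant, these three functions yield the predictors whose correlations with depression symptoms are reported above.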

Keywords: emotion, experience sampling methods, mental health, social media

Procedia PDF Downloads 234
606 Valuing Social Sustainability in Agriculture: An Approach Based on Social Outputs’ Shadow Prices

Authors: Amer Ait Sidhoum

Abstract:

Interest in sustainability has gained ground among practitioners, academics and policy-makers due to growing stakeholder awareness of environmental and social concerns. This is particularly true for agriculture. However, relatively little research has been conducted on quantifying social sustainability and the contribution of social issues to agricultural production efficiency. The main objective of this research is to propose a method for evaluating the prices of social outputs, more precisely their shadow prices, while allowing for the stochastic nature of agricultural production, that is, for production uncertainty. In this article, the assessment of social outputs’ shadow prices is conducted within the methodological framework of nonparametric Data Envelopment Analysis (DEA). An output-oriented directional distance function (DDF) is implemented to represent the technology of a sample of Catalan arable crop farms and to derive the efficiency scores. The overall production technology of our sample is assumed to be the intersection of two different sub-technologies: the first models the production of random desirable agricultural outputs, while the second reflects the social outcomes of agricultural activities. Once a nonparametric production technology has been represented, the DDF primal approach can be used for efficiency measurement, while shadow prices are drawn from the dual representation of the DDF. Computing shadow prices is a way to assign an economic value to non-marketed social outcomes. Our research uses cross-sectional, farm-level data collected in 2015 from a sample of 180 Catalan arable crop farms specialized in the production of cereals, oilseeds and protein (COP) crops. Our results suggest that the sample farms show high performance scores, from 85% for the bad state of nature to 88% for the normal and ideal crop-growing conditions.
This suggests that farm performance improves as crop growth conditions improve. Results also show that the average shadow prices of the desirable state-contingent output and of social outcomes are positive for both efficient and inefficient farms, suggesting that the production of desirable marketable outputs and of non-marketable outputs makes a positive contribution to farm production efficiency. Results further indicate that social outputs’ shadow prices are contingent upon the growing conditions, following an upward trend as crop-growing conditions improve. This finding suggests that efficient farms prefer to allocate more resources to the production of desirable outputs than to social outcomes. To our knowledge, this study represents the first attempt to compute shadow prices of social outcomes while accounting for the stochastic nature of the production technology. Our findings suggest that the decision-making process of efficient farms in dealing with social issues is stochastic and strongly dependent on the growth conditions. This implies that policy-makers should adjust their instruments according to the stochastic environmental conditions. An optimal redistribution of rural development support, increasing the public payment as crop growth conditions improve, would likely enhance the effectiveness of public policies.
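For readers unfamiliar with the machinery, an output-oriented DDF efficiency score is computed as a small linear program, and the shadow prices the paper refers to are the dual multipliers of that program. A minimal single-technology sketch under constant returns to scale (the paper's actual model, with state-contingent outputs and a separate social sub-technology, is richer than this):

```python
import numpy as np
from scipy.optimize import linprog

def ddf_score(X, Y, k, g_y=None):
    """Output-oriented directional distance function under CRS.
    beta = 0 means DMU (farm) k is on the frontier; beta > 0 means its
    outputs could still be expanded by beta units of direction g_y."""
    n, m = X.shape                      # n farms (DMUs), m inputs
    s = Y.shape[1]                      # s outputs
    if g_y is None:
        g_y = Y[k]                      # direction = own output vector
    # decision variables: [beta, lambda_1 .. lambda_n]; maximise beta
    c = np.zeros(1 + n)
    c[0] = -1.0                         # linprog minimises, so minimise -beta
    A_ub, b_ub = [], []
    for r in range(s):                  # sum_j lam_j * y_jr >= y_kr + beta * g_r
        A_ub.append(np.concatenate(([g_y[r]], -Y[:, r])))
        b_ub.append(-Y[k, r])
    for i in range(m):                  # sum_j lam_j * x_ji <= x_ki
        A_ub.append(np.concatenate(([0.0], X[:, i])))
        b_ub.append(X[k, i])
    res = linprog(c, A_ub=np.vstack(A_ub), b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]
```

The dual values attached to the output constraints of this LP are the (relative) shadow prices; in the paper they are the ones attached to the non-marketed social outcomes.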

Keywords: data envelopment analysis, shadow prices, social sustainability, sustainable farming

Procedia PDF Downloads 111
605 Influence of a Cationic Membrane in a Double Compartment Filter-Press Reactor on the Atenolol Electro-Oxidation

Authors: Alan N. A. Heberle, Salatiel W. Da Silva, Valentin Perez-Herranz, Andrea M. Bernardes

Abstract:

Contaminants of emerging concern are widely used substances, such as pharmaceutical products. These compounds pose a risk to both wildlife and human life, since they are not completely removed from wastewater by conventional wastewater treatment plants. In the environment they can be harmful even at low concentrations (µg or ng/L), causing bacterial resistance, endocrine disruption and cancer, among other effects. One of the most commonly taken medicines for cardiocirculatory diseases is atenolol (ATL), a β-blocker that is toxic to aquatic life. It is therefore necessary to implement a methodology capable of degrading ATL and thus avoiding environmental detriment. A very promising technology is advanced electrochemical oxidation (AEO), whose mechanisms are based on the electrogeneration of reactive radicals (mediated oxidation) and/or on direct discharge of the substance by electron transfer from the contaminant to the electrode surface (direct oxidation). Hydroxyl (HO•) and sulfate (SO₄•⁻) radicals can be generated, depending on the reaction medium. In addition, under some conditions the peroxydisulfate ion (S₂O₈²⁻) is generated from pairs of SO₄•⁻ radicals. The radicals, the ion, and direct contaminant discharge can all break down the molecule, resulting in degradation and/or mineralization. However, the ATL molecule and its byproducts can still remain in the treated solution. Efforts can therefore be made to improve the AEO process, one of them being the use of a cationic membrane to separate the cathodic (reduction) compartment of the reactor from the anodic (oxidation) one. The aim of this study is to investigate the influence of a cationic membrane (Nafion®-117) separating the cathodic and anodic compartments of an AEO reactor. The reactor studied was a filter-press cell operated in batch recirculation mode at a flow rate of 60 L/h.
The anode was Nb/BDD2500 and the cathode stainless steel, both two-dimensional, with a geometric surface area of 100 cm². The solution feeding the anodic compartment was prepared with 100 mg/L ATL, using 4 g/L Na₂SO₄ as supporting electrolyte. The cathodic compartment contained a solution of 71 g/L Na₂SO₄, and the membrane was placed between the two solutions. Applied current densities (iₐₚₚ) of 5, 20 and 40 mA/cm² were studied over a 240-minute treatment time. The ATL decay was analyzed by ultraviolet spectroscopy (UV/Vis), and mineralization was determined by measuring total organic carbon (TOC) in a TOC-L CPH Shimadzu analyzer. Without the membrane, iₐₚₚ of 5, 20 and 40 mA/cm² resulted in 55, 87 and 98% ATL degradation, respectively, at the end of the treatment time. With the membrane, the degradation at the same iₐₚₚ was 90, 100 and 100%, reaching maximum degradation in 240, 120 and 40 min, respectively. Mineralization without the membrane, for the same iₐₚₚ, was 40, 55 and 72%, respectively, at 240 min, whereas with the membrane all tested iₐₚₚ reached 80% mineralization, differing only in the time needed for maximum mineralization (240, 150 and 120 min, respectively). The membrane thus increased ATL oxidation, probably because it prevented the reduction of oxidant ions (S₂O₈²⁻) at the cathode surface.
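The degradation and mineralization percentages reported here are relative decays of the monitored signals (UV/Vis absorbance for ATL, TOC for mineralization). A trivial helper, given as illustration only:

```python
def removal_percent(initial, final):
    """Percent removal of a monitored quantity, e.g. ATL absorbance
    (degradation) or total organic carbon (mineralization)."""
    return 100.0 * (initial - final) / initial
```

For example, an ATL signal falling from its initial value to 2% of it corresponds to the 98% degradation quoted for 40 mA/cm² without the membrane.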

Keywords: contaminants of emerging concern, advanced electrochemical oxidation, atenolol, cationic membrane, double compartment reactor

Procedia PDF Downloads 118
604 Distribution of Micro Silica Powder at a Ready Mixed Concrete

Authors: Kyong-Ku Yun, Dae-Ae Kim, Kyeo-Re Lee, Kyong Namkung, Seung-Yeon Han

Abstract:

Micro silica is collected as a by-product of silicon and ferrosilicon alloy production in electric arc furnaces using highly pure quartz, wood chips, coke and the like. It consists of about 85% silicon and has spherical particles with an average particle size of 150 μm. The bulk density of micro silica varies from 150 to 700 kg/m³, and its fineness ranges from 150,000 to 300,000 cm²/g. The amorphous structure and high silicon oxide content of micro silica, together with its large surface area (about 20 m²/g), induce an active reaction with the calcium hydroxide (Ca(OH)₂) generated by cement hydration, forming calcium silicate hydrate (C-S-H). Micro silica tends to act as a filler because of its fine particles and spherical shape. These particles are not covered by water and fit well into the spaces between the relatively coarse cement grains, though they do not by themselves fluidize the concrete. On the contrary, water demand increases, since micro silica particles tend to absorb water because of their large surface area. The overall effect of micro silica depends on the amount added, together with other parameters such as the water-to-(cement + micro silica) ratio and the availability of superplasticizer. This research studied cellular sprayed concrete, a method that directly re-produces ready mixed concrete into a high-performance concrete at the job site. It can reduce construction costs by adding cellular foam and micro silica to the ready mixed concrete truck in the field. Micro silica, which is difficult to mix in the field because of its high fineness, can thus be added and dispersed in the concrete by increasing the fluidity of the ready mixed concrete through the surface activity of the cellular foam. The increased air content converges to a certain level after spraying, and the remixing of powders during spraying also produces high-performance concrete.
Since no field mixing equipment is used, construction costs decrease, and the method can be applied after installing a special spray machine on a commercial pump car. The use of special equipment is therefore minimized, providing economic feasibility through the utilization of existing equipment. This study was carried out to evaluate a highly reliable method of confirming dispersion in high-performance cellular sprayed concrete. A mixture of 25 mm coarse aggregate and river sand was used in the concrete. In addition, silica fume and foam were applied, silica fume dispersion was checked as a function of foam mixing, and the mean and standard deviation were obtained; the coefficient of variation was then calculated to evaluate the dispersion. Comparison and analysis before and after spraying were conducted for the experimental variables of 21 L and 35 L of foam with 7% and 14% silica fume, respectively. The experiment proceeded with foam and silica fume as variables: a specimen was cast for each variable, and a five-day sample was taken from each specimen for EDS testing. The study covered the experimental materials, plan and mix design, test methods, and equipment for evaluating dispersion as a function of micro silica and foam content.
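The dispersion evaluation described above (mean and standard deviation of EDS measurements, then the coefficient of variation) can be sketched as follows, with hypothetical silicon counts standing in for the EDS data:

```python
import numpy as np

def coefficient_of_variation(samples):
    """CV = sample standard deviation / mean. A lower CV across EDS
    sampling points indicates more uniform silica fume dispersion."""
    samples = np.asarray(samples, dtype=float)
    return np.std(samples, ddof=1) / np.mean(samples)
```

Comparing the CV of specimens before and after spraying, for each foam/silica fume combination, is exactly the before/after dispersion comparison the study performs.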

Keywords: micro silica, distribution, ready mixed concrete, foam

Procedia PDF Downloads 203
603 Prevalence of Behavioral and Emotional Problems in School Going Adolescents in India

Authors: Anshu Gupta, Charu Gupta

Abstract:

Background: Adolescence is the transitional period between puberty and adulthood. It is marked by immense turmoil in the emotional and behavioral spheres. Adolescents are at risk of an array of behavioral and emotional problems, resulting in impairments of social, academic and vocational functioning. Conflicts in the family and the inability of parents to cope with the changing demands of an adolescent have a negative impact on the overall development of the child. This augurs ill for the individual’s future, resulting in depression, delinquency and suicide, among other problems. Aim: The aim of the study was to assess the prevalence of behavioral and emotional problems in school going adolescents aged 13 to 15 years residing in Ludhiana city. Method: A total of 1380 school children in the age group of 13 to 15 years were assessed with the adolescent health screening questionnaire (FAPS) and the Youth Self-Report (2001) questionnaire. Statistical significance was ascertained by t-test, chi-square test (χ²) and ANOVA, as appropriate. Results: A considerably high prevalence of behavioral and emotional problems was found in school going adolescents (26.5%), higher in girls (31.7%) than in boys (24.4%). In boys, the problem rate was highest in the 13-year age group (28.2%), followed by a significant decline by age 14 (24.2%) and age 15 (19.6%). In girls, the rate was likewise highest at age 13 (32.4%), followed by a marginal decline at age 14 (31.8%) and age 15 (30.2%). Demographic factors were noncontributory. Internalizing syndrome (22.4%) was the most common problem, followed by the neither-internalizing-nor-externalizing group (17.6%). In the internalizing group, most students (26.5%) were observed to be anxious/depressed. Social problems were the most frequent (10.6%) in the neither-internalizing-nor-externalizing group.
Aggressive behavior was the most common problem (8.4%) in the externalizing group. Internalizing problems, mainly anxiety and depression, were more common in females (30.6%) than males (24.6%), while more boys (16%) than girls (13.4%) were reported to suffer from externalizing disorders. A critical review of the data showed that most of the adolescents had poor knowledge of reproductive health: almost 36% reported friends and the electronic media as their sources of information on sexual and reproductive health. A high percentage of adolescents reported worrying about sexual abuse (20.2%), the majority of them girls (93.6%), reflecting poorly on the social setup in the country. About 41% of adolescents reported being concerned about body weight, most of them girls (92.4%). Up to 14.5% reported thoughts of using alcohol or drugs, perhaps due to the easy availability of substances of abuse in this part of the country, and 12.8% (mostly girls) reported suicidal thoughts. Summary/conclusion: There is a high prevalence of emotional and behavioral problems among school-going adolescents. Resolution of these problems during adolescence is essential for attaining a healthy adulthood. The need of the hour is to spread awareness among caregivers and to formulate effective management strategies, including a school mental health programme.
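The group comparisons above rely on the chi-square test, which for a 2x2 prevalence table (e.g. affected/unaffected by sex) reduces to a one-line formula. A sketch with hypothetical counts, since the abstract reports only percentages, not the raw cell counts:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    e.g. a/b = affected/unaffected girls, c/d = affected/unaffected boys."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

The statistic is then compared against the chi-square distribution with one degree of freedom; a perfectly balanced table yields 0.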

Keywords: adolescence, behavioral, emotional, internalizing problem

Procedia PDF Downloads 268
602 The Product Innovation Using Nutraceutical Delivery System on Improving Growth Performance of Broiler

Authors: Kitti Supchukun, Kris Angkanaporn, Teerapong Yata

Abstract:

The product innovation using a nutraceutical delivery system to improve the growth performance of broilers is product planning and development aimed at the antibiotic-ban policies introduced in local and global livestock production systems. Restricting the use of antibiotics can reduce the quality of chicken meat and increase pathogenic bacterial contamination. Although other alternatives have been used to replace antibiotics, their efficacy has been inconsistent, reflected in low chicken growth performance and contaminated products. The product innovation aims to deliver the selected active ingredients into the body effectively. The product was tested at pharmaceutical laboratory scale and at farm scale for market feasibility in order to create a product innovation using the nutraceutical delivery system model. The model establishes product standardization and a traceable quality control process for farmers. The study used mixed methods, starting with a qualitative method to identify the farmers’ (consumers’) demands and the product standard, after which a quantitative method was used to develop and conclude the findings regarding technology acceptance and product performance. The survey was sent to different organizations by random sampling of the entrepreneur population, including integrated broiler farms, broiler farms, and other related organizations. The mixed-method results, both qualitative and quantitative, verify the demands of users and lead users, since they provide information about the industry standard and technology preferences, support developing the right product for the market, and suggest solutions to the industry’s problems. The product innovation selected nutraceutical ingredients that can address the following problems in livestock: bactericidal action, anti-inflammation, gut health, and antioxidant activity.
The combination of the selected nutraceuticals with nanostructured lipid carrier (NLC) technology aims to improve the chemical and pharmaceutical components by converting the active ingredients into nanoparticles, which are released at the targeted location at an accurate concentration. The active ingredients in nanoparticle form are more stable, elicit antibacterial activity against pathogenic Salmonella spp. and E. coli, balance gut health, and show antioxidant and anti-inflammatory activity. The experimental results proved that the nutraceuticals have antioxidant and antibacterial activity and also increase the average daily gain (ADG) and reduce the feed conversion ratio (FCR). The results also show a significantly higher European Performance Index, which can increase farmers’ profit when exporting. The product innovation will be tested with technology acceptance management methods among farmers and industry. Analyses of broiler production and commercialization are useful for reducing the importation of animal supplements. Most importantly, the product innovation is protected by intellectual property.
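ADG, FCR and the European Performance Index cited above are standard flock metrics; a sketch using the common EPEF formula, which is assumed here since the abstract does not give its exact definition:

```python
def average_daily_gain(final_kg, initial_kg, days):
    """ADG: body-weight gain per bird per day, kg/day."""
    return (final_kg - initial_kg) / days

def feed_conversion_ratio(feed_kg, gain_kg):
    """FCR: feed consumed per unit of weight gain (lower is better)."""
    return feed_kg / gain_kg

def european_performance_index(livability_pct, weight_kg, age_days, fcr):
    """EPEF = livability% * mean live weight (kg) / (age in days * FCR) * 100.
    Higher values mean better overall flock performance."""
    return livability_pct * weight_kg / (age_days * fcr) * 100.0
```

For example, a flock with 96% livability, 2.5 kg mean weight at 40 days and an FCR of 1.6 scores an EPEF of 375; improving ADG or FCR raises this index directly.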

Keywords: nutraceutical, nano structure lipid carrier, anti-microbial drug resistance, broiler, Salmonella

Procedia PDF Downloads 162
601 The Gut Microbiome in Cirrhosis and Hepatocellular Carcinoma: Characterization of Disease-Related Microbial Signature and the Possible Impact of Life Style and Nutrition

Authors: Lena Lapidot, Amir Amnon, Rita Nosenko, Veitsman Ella, Cohen-Ezra Oranit, Davidov Yana, Segev Shlomo, Koren Omry, Safran Michal, Ben-Ari Ziv

Abstract:

Introduction: Hepatocellular carcinoma (HCC) is one of the leading causes of cancer-related mortality worldwide, and liver Cirrhosis is the main predisposing risk factor for its development. The factor(s) influencing disease progression from Cirrhosis to HCC remain unknown. The gut microbiota has recently emerged as a major player in different liver diseases, but its association with HCC is still poorly understood. Moreover, there might be an important association between the gut microbiota, nutrition, lifestyle and the progression of Cirrhosis and HCC. The aim of our study was to characterize the gut microbial signature, in association with lifestyle and nutrition, of patients with Cirrhosis, HCC-Cirrhosis and healthy controls. Design: Stool samples were collected from 95 individuals (30 patients with HCC, 38 patients with Cirrhosis and 27 age-, gender- and BMI-matched healthy volunteers). All participants answered lifestyle and food frequency questionnaires. 16S rRNA sequencing of fecal DNA was performed (MiSeq, Illumina). Results: There was a significant decrease in alpha diversity in patients with Cirrhosis (qvalue=0.033) and in patients with HCC-Cirrhosis (qvalue=0.032) compared to healthy controls. Compared with Cirrhosis alone, the microbiota of patients with HCC-Cirrhosis was characterized by a significant overrepresentation of Clostridium (pvalue=0.024) and CF231 (pvalue=0.010) and lower abundance of Alphaproteobacteria (pvalue=0.039) and Verrucomicrobia (pvalue=0.036) at several taxonomic levels: Verrucomicrobiae, Verrucomicrobiales, Verrucomicrobiaceae and the genus Akkermansia (pvalue=0.039). Furthermore, an analysis of predicted metabolic pathways (KEGG level 2) revealed a significant decrease in the diversity of metabolic pathways, one of which was amino acid metabolism, in patients with HCC-Cirrhosis (qvalue=0.015) compared to controls.
Furthermore, investigating the lifestyle and nutrition habits of patients with HCC-Cirrhosis, we found significant correlations between intake of artificial sweeteners and Verrucomicrobia (qvalue=0.12), high sugar intake and Synergistetes (qvalue=0.021), and high BMI and the pathogen Campylobacter (qvalue=0.066). Furthermore, overweight in patients with HCC-Cirrhosis modified bacterial diversity (qvalue=0.023) and composition (qvalue=0.033). Conclusions: To the best of our knowledge, we present the first report of the gut microbial composition in patients with HCC-Cirrhosis compared with Cirrhotic patients and healthy controls. We have demonstrated that there are significant differences in the gut microbiome of patients with HCC-Cirrhosis compared to Cirrhotic patients and healthy controls. Our findings are all the more notable because the bacteria significantly increased in HCC-Cirrhosis, Clostridium and CF231, were not influenced by diet and lifestyle, implying that this change is due to the development of HCC. Further studies are needed to confirm these findings and assess causality.
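The alpha diversity reported above as decreased in both patient groups is typically summarized per sample by an index such as Shannon's H computed from taxon counts. A minimal sketch (the study's actual 16S rRNA pipeline may use a different index or rarefaction scheme):

```python
import math

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i) over taxa with nonzero counts;
    a lower H indicates a less diverse gut community."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)
```

Computing H per stool sample and comparing the group distributions (e.g. Cirrhosis vs. controls) yields the kind of alpha-diversity contrast the q-values above refer to.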

Keywords: Cirrhosis, Hepatocellular carcinoma, life style, liver disease, microbiome, nutrition

Procedia PDF Downloads 108
600 The Valuable Triad of Adipokine Indices to Differentiate Pediatric Obesity from Metabolic Syndrome: Chemerin, Progranulin, Vaspin

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Obesity is associated with cardiovascular disease risk factors and metabolic syndrome (MetS). In this study, associations between adipokines and adipokine as well as obesity indices were evaluated. Plasma adipokine levels may vary with body adipose tissue mass. Besides, when obesity is considered an inflammatory disease, adipokines may play some role in this process. The ratios of proinflammatory adipokines to adiponectin may act as highly sensitive indicators of body adipokine status. The aim of the study is to present some adipokine indices thought to be helpful for the evaluation of childhood obesity, and to determine the best discriminators in the diagnosis of MetS. 80 prepubertal children (aged 6-9.5 years) included in the study were divided into three groups: 30 children with normal weight (NW), 25 morbid obese (MO) children and 25 MO children with MetS. Physical examinations were performed. Written informed consent forms were obtained from the parents. The study protocol was approved by the Ethics Committee of Namik Kemal University Medical Faculty. Anthropometric measurements, such as weight, height, waist circumference (C), hip C, head C and neck C, were recorded. Values for body mass index (BMI), diagnostic obesity notation model assessment index-II (D2 index) as well as waist-to-hip and head-to-neck ratios were calculated. Adiponectin, resistin, leptin, chemerin, vaspin and progranulin assays were performed by ELISA. Adipokine-to-adiponectin ratios were obtained. SPSS Version 20 was used for the evaluation of data; p values ≤ 0.05 were accepted as statistically significant. Values of BMI, D2 index, and waist-to-hip and head-to-neck ratios did not differ between MO and MetS groups (p ≥ 0.05). Except for progranulin (p ≤ 0.01), similar patterns were observed for plasma levels of each adipokine. There was no difference in vaspin or resistin levels between NW and MO groups.
Significantly increased leptin-to-adiponectin, chemerin-to-adiponectin and vaspin-to-adiponectin values were noted in MO in comparison with NW. The most valuable adipokine index was progranulin-to-adiponectin (p ≤ 0.01). This index was strongly correlated with the vaspin-to-adiponectin ratio in all groups (p ≤ 0.05). There was no correlation between vaspin-to-adiponectin and chemerin-to-adiponectin in the NW group. However, a correlation existed in the MO group (r = 0.486; p ≤ 0.05), and a much stronger correlation (r = 0.609; p ≤ 0.01) was observed in the MetS group between these two adipokine indices. No correlations were detected between vaspin and progranulin or between vaspin and chemerin levels. Correlation analyses showed a unique profile confined to MetS children. Adiponectin was found to be correlated with waist-to-hip (r = -0.435; p ≤ 0.05) as well as head-to-neck (r = 0.541; p ≤ 0.05) ratios only in MetS children. In this study, it was investigated whether adipokine indices have priority over adipokine levels. In conclusion, vaspin-to-adiponectin, progranulin-to-adiponectin and chemerin-to-adiponectin, along with waist-to-hip and head-to-neck ratios, were the optimal combinations. Adiponectin and the waist-to-hip, head-to-neck, vaspin-to-adiponectin and chemerin-to-adiponectin ratios had appropriate discriminatory capability for MetS children.
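The adipokine indices and correlations reported above reduce computationally to ratio construction followed by Pearson correlation. A minimal sketch with hypothetical plasma values (the numbers and units below are illustrative assumptions, not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical plasma levels for five children (arbitrary units)
vaspin      = [0.6, 0.9, 1.2, 1.5, 2.0]
progranulin = [40, 55, 70, 88, 110]
adiponectin = [12, 10, 9, 7, 6]

# Adipokine-to-adiponectin indices
vaspin_adp = [v / a for v, a in zip(vaspin, adiponectin)]
prog_adp   = [p / a for p, a in zip(progranulin, adiponectin)]
print(round(pearson_r(vaspin_adp, prog_adp), 3))
```

Because adiponectin appears in the denominator of both indices, such ratios tend to correlate strongly, which is one reason index-to-index correlations need careful interpretation.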

Keywords: adipokine indices, metabolic syndrome, obesity indices, pediatric obesity

Procedia PDF Downloads 198
599 Vehicle Timing Motion Detection Based on Multi-Dimensional Dynamic Detection Network

Authors: Jia Li, Xing Wei, Yuchen Hong, Yang Lu

Abstract:

Detecting vehicle behavior has always been a focus of intelligent transportation, but with the explosive growth in the number of vehicles and the complexity of the road environment, the vehicle behavior videos captured by traditional surveillance can no longer satisfy the study of vehicle behavior. The traditional method of manually labeling vehicle behavior is too time-consuming and labor-intensive, while existing object detection and tracking algorithms have poor practicability and a low behavioral location detection rate. This paper proposes a vehicle behavior detection algorithm based on a dual-stream convolution network and a multi-dimensional video dynamic detection network. In the videos, the straight-line behavior of the vehicle defaults to background behavior; changing lanes, turning and turning around are set as target behaviors. The purpose of this model is to automatically mark the target behavior of the vehicle in untrimmed videos. First, the target behavior proposals in the long video are extracted through the dual-stream convolution network. The model uses a dual-stream convolutional network to generate a one-dimensional action score waveform, and then extracts segments with scores above a given threshold M as preliminary vehicle behavior proposals. Second, the preliminary proposals are pruned and identified using the multi-dimensional video dynamic detection network. Drawing on hierarchical reinforcement learning, the multi-dimensional network includes a Timer module and a Spacer module, where the Timer module mines temporal information in the video stream and the Spacer module extracts spatial information in the video frame. The Timer and Spacer modules are implemented by Long Short-Term Memory (LSTM) networks and start from an all-zero hidden state. The Timer module uses the Transformer mechanism to extract timing information from the video stream and extracts features by linear mapping and other methods.
Finally, the model fuses temporal and spatial information and obtains the location and category of the behavior through the softmax layer. This paper uses recall and precision to measure the performance of the model. Extensive experiments show that, on the dataset of this paper, the proposed model has obvious advantages over the existing state-of-the-art behavior detection algorithms. When the Temporal Intersection over Union (TIoU) threshold is 0.5, the Average Precision (AP) reaches 36.3% (against 21.5% for the baselines). In summary, this paper proposes a vehicle behavior detection model based on a multi-dimensional dynamic detection network, introducing spatial and temporal information to extract vehicle behaviors in long videos. Experiments show that the proposed algorithm is advanced and accurate in vehicle timing behavior detection. In the future, the focus will be on simultaneously detecting the timing behavior of multiple vehicles in complex traffic scenes (such as a busy street) while ensuring accuracy.
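The first stage described above, turning a one-dimensional action-score waveform into preliminary proposals by thresholding at M, can be sketched as follows; the score values and threshold here are illustrative assumptions, not the paper's settings.

```python
def extract_proposals(scores, threshold):
    """Group consecutive frames whose action score exceeds the threshold
    into (start, end) behavior proposals (end exclusive)."""
    proposals, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                      # segment opens
        elif s < threshold and start is not None:
            proposals.append((start, i))   # segment closes
            start = None
    if start is not None:                  # segment still open at end of video
        proposals.append((start, len(scores)))
    return proposals

# Per-frame action scores from the (assumed) dual-stream network
scores = [0.1, 0.2, 0.8, 0.9, 0.7, 0.2, 0.1, 0.6, 0.8, 0.3]
print(extract_proposals(scores, 0.5))  # [(2, 5), (7, 9)]
```

Each returned interval would then be pruned and classified by the second-stage network.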

Keywords: vehicle behavior detection, convolutional neural network, long short-term memory, deep learning

Procedia PDF Downloads 116
598 Evaluation of Groundwater Quality and Contamination Sources Using Geostatistical Methods and GIS in Miryang City, Korea

Authors: H. E. Elzain, S. Y. Chung, V. Senapathi, Kye-Hun Park

Abstract:

Groundwater is considered a significant source for drinking and irrigation purposes in Miryang city, owing to the limited number of surface water reservoirs and high seasonal variations in precipitation. Population growth, together with the expansion of agricultural land use and industrial development, may affect the quality and management of groundwater. This research utilized multidisciplinary geostatistical approaches such as multivariate statistics, factor analysis, cluster analysis and kriging in order to identify the hydrogeochemical processes and characterize the factors controlling the groundwater geochemistry distribution, developing risk maps from data obtained by chemical investigation of groundwater samples in the study area. A total of 79 samples were collected and analyzed using an atomic absorption spectrometer (AAS) for major and trace elements. Chemical maps of groundwater in a 2-D spatial Geographic Information System (GIS) provided a powerful tool for detecting potential sites of groundwater contamination. The GIS-based maps showed that higher rates of contamination were observed in the central and southern areas, with relatively less in the northern and southwestern parts. This could be attributed to the effects of irrigation, residual saline water, municipal sewage and livestock wastes. At well elevations above 85 m, the scatter diagram indicates that the groundwater of the research area was mainly influenced by saline water and NO3. pH measurements revealed slightly acidic conditions due to atmospheric CO2 dissolved in the soil, while saline water had a major impact on the higher values of TDS and EC.
Based on the cluster analysis results, the groundwater was categorized into three groups: the CaHCO3 type of fresh water, the NaHCO3 type slightly influenced by seawater, and the Ca-Cl and Na-Cl types, which are heavily affected by saline water. The most predominant water type in the study area was CaHCO3. Contamination sources and chemical characteristics were identified from the factor analysis interrelationships and the cluster analysis. The chemical elements belonging to factor 1 were related to the effect of seawater, while the elements of factor 2 were associated with agricultural fertilizers. The degree, distribution and location of groundwater contamination were mapped using kriging methods. Thus, the geostatistical model provided more accurate results for identifying the sources of contamination and evaluating the groundwater quality. GIS was also a creative tool to visualize and analyze the issues affecting water quality in Miryang city.

Keywords: groundwater characteristics, GIS chemical maps, factor analysis, cluster analysis, Kriging techniques

Procedia PDF Downloads 158
597 Impact of Climatic Hazards on the Jamuna River Fisheries and Coping and Adaptation Strategies

Authors: Farah Islam, Md. Monirul Islam, Mosammat Salma Akter, Goutam Kumar Kundu

Abstract:

The continuous variability of climate and the risk associated with it have a significant impact on fisheries, leading to global concern for about half a billion fishery-based livelihoods. Though mounting evidence on the impacts of climate change on fishery-based livelihoods and their socioeconomic conditions exists in the context of Bangladesh, the country's inland fisheries sector remains in a neglected corner compared to the coastal areas, which are in the spotlight due to their higher vulnerability to climatic hazards. The available research on inland fisheries, particularly river fisheries, has focused mainly on fish production, pollution, fishing gear, fish biodiversity and the livelihoods of fishers. This study assesses the impacts of climate variability and change on the Jamuna (a transboundary river called the Brahmaputra in India) River fishing communities and their coping and adaptation strategies. It uses primary data collected from the Kalitola Ghat and Debdanga fishing communities of the Jamuna River during May, August and December 2015, through semi-structured interviews, oral history interviews, key informant interviews, focus group discussions and an impact matrix, as well as secondary data. The study found that both communities are exposed to storms, floods and land erosion, which impact fishery-based livelihood assets, strategies and outcomes. The impact matrix shows that human and physical capitals are more affected by climate hazards, which in turn affect financial capital. Both communities have been responding to these exposures through multiple coping and adaptation strategies. The coping strategies include building dams with soil, putting jute sacks on the yard, taking shelter on a boat or embankment, making a raised platform or 'Kheua' and taking up temporary jobs, while the adaptation strategies include permanent migration, changing livelihood activities and strategies, changing fishing practices and building more robust houses.
The study shows that migration is the most common adaptation strategy for the fishers, and it has resulted in mostly positive outcomes for the migrants. However, this migration has impacted negatively on the livelihoods of the fishers remaining in the communities. In sum, the Jamuna River fishing communities have been impacted by several climatic hazards, and they have traditionally coped with or adapted to the impacts, but these responses are not sufficient to maintain sustainable livelihoods and fisheries. In the coming decades, this situation may become worse, as predicted by the latest scientific research, and an enhanced level of response will be needed.

Keywords: climatic hazards, impacts and adaptation, fisherfolk, the Jamuna River

Procedia PDF Downloads 301
596 Excess Body Fat as a Store Toxin Affecting the Glomerular Filtration and Excretory Function of the Liver in Patients after Renal Transplantation

Authors: Magdalena B. Kaziuk, Waldemar Kosiba, Marek J. Kuzniewski

Abstract:

Introduction: Adipose tissue is a typical place for the storage of water-insoluble toxins in the body. It is a connective tissue whose intercellular substance consists of fat; in people with low physical activity, its level should be 18-25% for women and 13-18% for men. Based on the fat distribution in the body, two types of obesity are distinguished: android (visceral, abdominal) and gynoid (gluteal-femoral, peripheral). Abdominal obesity increases the risk of complications of cardiovascular diseases and of impaired renal and liver function, and, through its influence on metabolic disorders, lipid metabolism, diabetes and hypertension, leads to the emergence of the metabolic syndrome. Thus, obesity will especially overload kidney function in patients after transplantation. Aim: An attempt was made to estimate the impact of the amount of fat tissue on transplanted kidney function and the excretory function of the liver in patients after Ktx. Material and Methods: The study included 108 patients (50 females, 58 males, age 46.5 +/- 12.9 years) with an active kidney transplant more than 3 months after transplantation. Body composition was analyzed using electrical bioimpedance (BIA) and anthropometric measurements. Basal metabolic rate (BMR), muscle mass, total body water content and the amount of body fat were estimated. Information about physical activity was obtained during the clinical examination. Nutritional status and type of obesity were determined using the Waist-to-Height Ratio (WHtR) and Waist-to-Hip Ratio (WHR). The excretory function of the transplanted kidney was rated by calculating the estimated glomerular filtration rate (eGFR) using the MDRD formula. Liver function was rated by total bilirubin and alanine aminotransferase (ALT) levels in serum. Haemolytic uremic syndrome (HUS) was excluded in our patients.
Results: 19.44% of patients were underweight, 22.37% were of normal weight, 11.11% were overweight, and the rest were obese (49.08%). People with an android stature had a lower eGFR compared with those with a gynoid stature (p = 0.004). All patients with obesity had body fat elevated by a few to several percentage points. The higher the body fat percentage, the lower the patients' eGFR (p < 0.001). Elevated ALT levels significantly correlated with a high fat content (p < 0.02). Conclusion: An increased amount of body fat, particularly in the case of android obesity, can be a predictor of kidney and liver damage. Obese patients should therefore have more frequent diagnostic monitoring of the function of these organs, together with intensive dietary and pharmacological management and regular physical activity adapted to the current physical condition of patients after transplantation.
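The eGFR calculation mentioned in the Methods uses the MDRD formula; a sketch of a common four-variable MDRD variant (with the IDMS-traceable coefficient 175) is given below. The abstract does not state which MDRD variant the authors used, and the example patient values are hypothetical.

```python
def egfr_mdrd(scr_mg_dl, age, female=False, black=False):
    """Four-variable MDRD estimate of GFR in mL/min/1.73 m^2,
    using the IDMS-traceable coefficient 175."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age ** -0.203
    if female:
        egfr *= 0.742   # sex adjustment factor
    if black:
        egfr *= 1.212   # race adjustment factor in the classic formula
    return egfr

# Hypothetical 46-year-old female transplant recipient, serum creatinine 1.4 mg/dL
print(round(egfr_mdrd(1.4, 46, female=True), 1))
```

Because serum creatinine enters with a negative exponent, any creatinine rise from an overloaded graft lowers the estimate sharply, which is what makes eGFR a sensitive marker in this cohort.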

Keywords: obesity, body fat, kidney transplantation, glomerular filtration rate, liver function

Procedia PDF Downloads 449
595 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India

Authors: Susmita Ghosh

Abstract:

The catchment area of the Damodar River, India, experiences seasonal rains due to the south-west monsoon every year, and depending upon the intensity of the storms, floods occur. During the monsoon season, the rainfall in the area is mainly due to active monsoon conditions. The upstream reach of the Damodar river system has five dams that store water for various purposes, viz. irrigation, hydro-power generation, municipal supplies and, last but not least, flood moderation. The downstream reach of the Damodar River, known as the Lower Damodar region, however, suffers severely and frequently from floods due to heavy monsoon rainfall and also releases from the upstream reservoirs. Therefore, an effective flood management study is required to know in depth the nature and extent of the flood, water logging and erosion related problems, affected areas, and damages in the Lower Damodar region by conducting a mathematical model study. The design flood, or design discharge, is needed as input to the model to generate several scenarios from the simulation runs; the ultimate aim is to achieve a sustainable flood management scheme from the several alternatives. There are various methods for estimating the flood discharges to be carried through the rivers and their tributaries for quick drainage from areas inundated due to drainage congestion and excess rainfall. In the present study, flood frequency analysis is performed to decide the design flood discharge of the study area. This, on the other hand, is limited by the availability of a long peak flood record for determining the appropriate type of probability density function correctly. If sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. The floods of different frequencies for the Damodar have been calculated with five candidate distributions, i.e., generalized extreme value, extreme value-I, Pearson type III, log-Pearson type III and normal.
An annual peak discharge series is available at Durgapur barrage for the period 1979 to 2013 (35 years) and is subjected to frequency analysis. The primary objective of the flood frequency analysis is to relate the magnitude of extreme events to their frequencies of occurrence through the use of probability distributions. The design floods for return periods of 10, 15 and 25 years at Durgapur barrage are estimated by the flood frequency method. It is necessary to develop flood hydrographs for these floods to facilitate the mathematical model studies that determine the depth and extent of inundation, etc. The null hypothesis that the distributions fit the data at 95% confidence is checked with a goodness-of-fit test, i.e., the chi-square test. The goodness-of-fit test reveals that all five distributions show a good fit to the sample population and are therefore accepted. However, there is considerable variation in the flood estimates across the distributions, so it is considered prudent to average the results of the five distributions for the required frequencies. The inundated area from past data matches well with the flood obtained in this way.
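As one concrete instance of the frequency analysis described above, the extreme value-I (Gumbel) distribution can be fitted to an annual peak series by the method of moments and evaluated at the required return periods. The peak discharge series below is hypothetical, not the Durgapur record, and the other four candidate distributions would be fitted analogously.

```python
import math

def gumbel_flood(peaks, T):
    """Return-period flood estimate from an annual peak series using the
    Gumbel (EV1) distribution fitted by the method of moments:
    x_T = mean + K(T) * std, with Chow's frequency factor K."""
    n = len(peaks)
    mean = sum(peaks) / n
    std = (sum((q - mean) ** 2 for q in peaks) / (n - 1)) ** 0.5
    # Frequency factor for return period T
    K = -(math.sqrt(6) / math.pi) * (0.5772 + math.log(math.log(T / (T - 1))))
    return mean + K * std

# Hypothetical annual peak discharges (m^3/s) at a barrage
peaks = [3100, 4500, 2800, 6100, 5200, 3900, 7400, 4100, 3300, 5800]
for T in (10, 15, 25):
    print(T, round(gumbel_flood(peaks, T)))
```

Averaging such estimates from the five fitted distributions, as the abstract proposes, smooths out the distribution-to-distribution variation in the design flood.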

Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management

Procedia PDF Downloads 186
594 Accelerated Carbonation of Construction Materials by Using Slag from Steel and Metal Production as Substitute for Conventional Raw Materials

Authors: Karen Fuchs, Michael Prokein, Nils Mölders, Manfred Renner, Eckhard Weidner

Abstract:

Due to its high CO₂ emissions, the energy consumption for the production of sand-lime bricks is of great concern. In particular, the production of quicklime from limestone and the energy consumption for hydrothermal curing contribute to high CO₂ emissions. Hydrothermal curing is carried out under a saturated steam atmosphere at about 15 bar and 200°C for 12 hours. Therefore, we are investigating the opportunity to replace quicklime and sand in the production of building materials with different types of slag, a calcium-rich waste from steel production. We are also investigating the possibility of substituting CO₂ curing for conventional hydrothermal curing. Six different slags (Linz-Donawitz (LD), ferrochrome (FeCr), ladle (LS), stainless steel (SS), ladle furnace (LF), electric arc furnace (EAF)) provided by "thyssenkrupp MillServices & Systems GmbH" were ground at "Loesche GmbH". Cylindrical blocks with a diameter of 100 mm were pressed at 12 MPa. The composition of the blocks varied between pure slag and mixtures of slag and sand. The effects of pressure, temperature, and time on the CO₂ curing process were studied in a 2-liter high-pressure autoclave. Pressures between 0.1 and 5 MPa, temperatures between 25 and 140°C, and curing times between 1 and 100 hours were considered. The quality of the CO₂-cured blocks was determined by "Ruhrbaustoffwerke GmbH & Co. KG" through compressive strength measurements. The degree of carbonation was determined by total inorganic carbon (TIC) and X-ray diffraction (XRD) measurements. The pH trends in the cross-section of the blocks were monitored using phenolphthalein as a liquid pH indicator. The parameter set that yielded the best-performing material was tested on all slag types. In addition, the method was scaled up to steel slag-based building blocks (240 mm x 115 mm x 60 mm) provided by "Ruhrbaustoffwerke GmbH & Co. KG" and CO₂-cured in a 20-liter high-pressure autoclave.
The results show that CO₂ curing of building blocks consisting of pure wetted LD slag leads to severe cracking of the cylindrical specimens. The high CO₂ uptake leads to an expansion of the specimens. However, if LD slag is used only proportionally to replace quicklime completely and sand proportionally, dimensionally stable bricks with high compressive strength are produced. The tests to determine the optimum pressure and temperature show 2 MPa and 50°C as promising parameters for the CO₂ curing process. At these parameters and after 3 h, the compressive strength of LD slag blocks reaches the highest average value of almost 50 N/mm². This is more than double that of conventional sand-lime bricks. Longer CO₂ curing times do not result in higher compressive strengths. XRD and TIC measurements confirmed the formation of carbonates. All tested slag-based bricks show higher compressive strengths compared to conventional sand-lime bricks. However, the type of slag has a significant influence on the compressive strength values. The results of the tests in the 20-liter plant agreed well with the results of the 2-liter tests. With its comparatively moderate operating conditions, the CO₂ curing process has a high potential for saving CO₂ emissions.

Keywords: CO₂ curing, carbonation, CCU, steel slag

Procedia PDF Downloads 94
593 Adaptation Measures as a Response to Climate Change Impacts and Associated Financial Implications for Construction Businesses by the Application of a Mixed Methods Approach

Authors: Luisa Kynast

Abstract:

It is obvious that buildings and infrastructure are highly impacted by climate change (CC). Both the design and the materials of buildings need to be resilient to weather events in order to shelter humans, animals, or goods. Just as buildings and infrastructure are exposed to weather events, the construction process itself is generally carried out outdoors, without protection from extreme temperatures, heavy rain, or storms. The production process is restricted by technical limitations on processing materials with machines and by the physical limitations of human beings ("outdoor workers"). In the future, due to CC, average weather patterns are expected to change, and extreme weather events are expected to occur more frequently and with greater intensity, and will therefore have a greater impact on production processes and on the construction businesses themselves. This research aims to examine this impact by analyzing the association between responses to CC and the financial performance of businesses within the construction industry. After embedding the above-depicted field of research in resource dependency theory, a literature review was conducted to expound the state of research concerning a contingent relation between climate change adaptation measures (CCAM) and corporate financial performance for construction businesses. The examined studies prove that this field is rarely investigated, especially for construction businesses. Therefore, reports submitted to the Carbon Disclosure Project (CDP) were analyzed by applying content analysis using the software tool MAXQDA. 58 construction companies, located worldwide, could be examined. To proceed even more systematically, a coding scheme analogous to findings in the literature was adopted. The qualitative analysis was then quantified, and a regression analysis incorporating corporate financial data was conducted.
The results stress adaptation measures as a response to CC as a crucial means of handling climate change impacts (CCI) by mitigating risks and exploiting opportunities. In the CDP reports, the majority of answers stated increasing costs/expenses as a result of implemented measures; a link to sales/revenue was rarely drawn, though where it was, CCAM were connected to increasing sales/revenues. This presumption is supported by the results of the regression analysis, in which a positive short-run effect of implemented CCAM on construction businesses' financial performance was ascertained. These findings refer to appropriate responses in terms of the number of CCAM implemented. Nevertheless, businesses still show a reluctant attitude toward implementing CCAM, which was confirmed by findings in the literature as well as in the CDP reports. Businesses mainly associate CCAM with costs and expenses rather than with an effect on their corporate financial performance. Companies mostly underrate the effect of CCI, overrate the costs and expenditures for the implementation of CCAM, and completely neglect the pay-off. Therefore, this research shall create a basis for bringing CC to the (financial) attention of corporate decision-makers, especially within the construction industry.

Keywords: climate change adaptation measures, construction businesses, financial implication, resource dependency theory

Procedia PDF Downloads 132
592 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar

Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo

Abstract:

The future of solar energy in Qatar is evolving steadily; hence, high-quality spatial solar radiation data is of utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data are produced by physical and statistical models, while ground data are collected by solar radiation measurement stations. Ground data are of high quality, but they are limited to a distributed set of point locations, with high costs of installation and maintenance for the ground stations. Satellite solar radiation data, on the other hand, are continuous and available across geographical locations, but they are less accurate than ground data. To exploit the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either data source alone. The popular satellite database National Solar Radiation Database, NSRDB (PSM V3 model, spatial resolution 4 km), is chosen here for merging with ground-measured solar radiation in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS (ESRI). The workflow of the algorithm combines regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points.
A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriged residuals obtained after fitting the semi-variogram model were added to the NSRDB values predicted by the regression model to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, the ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has thus proven to be highly accurate. An additional method is also proposed here to generate calibrated maps using the regression and kriging model, and then to use the calibrated model to generate solar radiation maps from the explanatory variable alone when not enough historical ground data are available for long-term analysis. The NRMSE values obtained after comparison of the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
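The merging workflow above (OLS regression on station pairs, then spatial interpolation of the residuals added back to the regression prediction) can be sketched in simplified form. Here inverse-distance weighting stands in for the Empirical Bayesian Kriging step, and the station coordinates and GHI values are hypothetical.

```python
def ols_fit(x, y):
    """Ordinary least squares intercept and slope for y ~ a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return my - b * mx, b

def idw(stations, values, pt, power=2):
    """Inverse-distance interpolation of station residuals at a target point
    (a simple stand-in for kriging of the residuals)."""
    wsum, num = 0.0, 0.0
    for (sx, sy), r in zip(stations, values):
        d = (((sx - pt[0]) ** 2 + (sy - pt[1]) ** 2) ** (power / 2)) or 1e-12
        wsum += 1 / d
        num += r / d
    return num / wsum

# Hypothetical station coordinates, satellite GHI and ground GHI (kWh/m^2/day)
stations  = [(0, 0), (1, 0), (0, 1), (1, 1)]
satellite = [5.8, 6.1, 5.9, 6.3]
ground    = [6.0, 6.2, 6.1, 6.5]

a, b = ols_fit(satellite, ground)
residuals = [g - (a + b * s) for s, g in zip(satellite, ground)]
# Merged prediction at a new grid cell with satellite value 6.0
pred = a + b * 6.0 + idw(stations, residuals, (0.5, 0.5))
print(round(pred, 2))
```

The interpolated residual surface corrects the regression's local bias, which is the mechanism behind the NRMSE drop after merging reported above.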

Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB

Procedia PDF Downloads 79
591 Therapeutic Challenges in Treatment of Adults Bacterial Meningitis Cases

Authors: Sadie Namani, Lindita Ajazaj, Arjeta Zogaj, Vera Berisha, Bahrije Halili, Luljeta Hasani, Ajete Aliu

Abstract:

Background: The outcome of bacterial meningitis is strongly related to the resistance of bacterial pathogens to the initial antimicrobial therapy. The objective of the study was to analyze the initial antimicrobial therapy, the resistance of meningeal pathogens and the outcome of adult bacterial meningitis cases. Materials/Methods: This prospective study enrolled 46 adults older than 16 years of age, treated for bacterial meningitis during the years 2009 and 2010 at the infectious diseases clinic in Prishtinë. Patients were categorized into specific age groups: >16-26 years of age (10 patients), >26-60 years of age (25 patients) and >60 years of age (11 patients). All p-values < 0.05 were considered statistically significant. Data were analyzed using Stata 7.1 and SPSS 13. Results: During the two-year study period, 46 patients (28 males) were treated for bacterial meningitis. 33 patients (72%) had a confirmed bacterial etiology: 13 meningococci, 11 pneumococci, 7 gram-negative bacilli (Ps. aeruginosa 2, Proteus sp. 2, Acinetobacter sp. 2 and Klebsiella sp. 1 case) and 2 staphylococci isolates were found. Neurological complications developed in 17 patients (37%), and the overall mortality rate was 13% (6 deaths). The neurological complications observed were: cerebral abscess (7/46; 15.2%), cerebral edema (4/46; 8.7%), haemiparesis (3/46; 6.5%), recurrent seizures (2/46; 4.3%), and single cases of sinus cavernosus thrombosis, facial nerve palsy and decerebration (1/46; 2.1% each). The most common meningeal pathogens were meningococcus in the youngest age group, gram-negative bacilli in the second age group and pneumococcus in the elderly age group. Initial single-agent antibiotic therapy (ceftriaxone) was used in 17 patients (37%): in 60% of patients in the youngest age group and in 44% of cases in the second age group. 29 patients (63%) were treated with initial dual-agent antibiotic therapy: ceftriaxone in combination with vancomycin or ampicillin.
Ceftriaxone and ampicillin were the most commonly used antibiotics for initial empirical therapy in adults > 50 years of age. All adults > 60 years of age were treated with initial dual-agent antibiotic therapy, as this age group had the highest mortality rate (27%) and the highest rate of adverse outcome (64%). Resistance of pathogens to antimicrobials was recorded in cases caused by gram-negative bacilli and was associated with a trend toward greater risk of developing neurological complications (p = 0.09). None of the gram-negative bacilli were resistant to carbapenems; all were resistant to ampicillin, while 5/7 isolates were resistant to cephalosporins. No resistance of meningococci or pneumococci to beta-lactams was recorded. There were no statistically significant differences across age groups in the occurrence of neurological complications (p > 0.05), the resistance of meningeal pathogens to antimicrobials (p > 0.05), or the initial antimicrobial therapy (one vs. two antibiotics). Conclusions: Initial antibiotic therapy with ceftriaxone alone or in combination with vancomycin or ampicillin did not cover cases caused by gram-negative bacilli.
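The age-group comparisons reported above (complication rates compared across three groups, significance threshold p < 0.05) are typically done with a chi-square test of independence. A minimal sketch, using invented counts purely for illustration (the abstract gives only the overall total of 17 patients with complications, not the per-group split):

```python
# Chi-square test of neurological complications across age groups.
# The per-group counts below are hypothetical, chosen only to sum to the
# totals reported in the abstract (17 with complications, 29 without).
from scipy.stats import chi2_contingency

table = [
    [3, 9, 5],   # with complications:    16-26 y, 26-60 y, >60 y (hypothetical)
    [7, 16, 6],  # without complications: 16-26 y, 26-60 y, >60 y (hypothetical)
]
stat, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {stat:.2f}, dof = {dof}, p = {p:.3f}")
```

With a 2x3 table the test has 2 degrees of freedom; a p-value above 0.05 would match the abstract's finding of no significant difference across age groups.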

Keywords: adults, bacterial meningitis, outcomes, therapy

Procedia PDF Downloads 161
590 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity for nitrites. There is therefore a need for an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, used in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and used to minimize the cross-sensitivities of these wavelengths to interfering species, thus achieving precise and reliable measurements in the presence of various interference ions. The measured absorbance data are input to the trained model, which predicts the nitrite concentration of the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each providing narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. 
This simple optical design allows the absorbance of the sample to be measured at all three wavelengths. To train the regression model, absorbances of nitrite ions, alone and in combination with various interference ions, are first obtained at the three UV wavelengths using a conventional spectrophotometer. The spectrophotometric data are then fed to different regression algorithms, which are trained and evaluated for high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, far cheaper than a commercial spectrophotometer. The ML algorithm helps reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrite. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
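The core idea, mapping three-wavelength absorbance readings to a nitrite concentration via a trained regression model, can be sketched with a simple linear model on synthetic Beer-Lambert-style data. All coefficients and noise levels below are invented for illustration; the paper's actual model is trained on spectrophotometer measurements:

```python
# Sketch: linear regression from 3-wavelength absorbances to nitrite
# concentration. Absorptivities (eps) and noise are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical nitrite absorptivities at 295, 310, and 357 nm
eps = np.array([0.8, 1.2, 0.5])

# Training set: known concentrations (ppm) spanning the 0.1-100 ppm range
c_train = np.linspace(0.1, 100.0, 50)
# Beer-Lambert: A = eps * c, plus small noise standing in for interference
A_train = np.outer(c_train, eps) + rng.normal(0.0, 0.2, (50, 3))

# Fit a linear model with intercept by least squares
X = np.column_stack([A_train, np.ones(len(c_train))])
coef, *_ = np.linalg.lstsq(X, c_train, rcond=None)

# Predict concentration for a new absorbance reading (true value: 40 ppm)
A_new = 40.0 * eps
pred = np.array([*A_new, 1.0]) @ coef
print(f"predicted nitrite: {pred:.1f} ppm")
```

In the reported system the regression serves mainly to untangle the cross-sensitivities of the three wavelengths to interfering ions, which a single-wavelength calibration curve cannot do.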

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 65