Search results for: imaging sensitivity measurement
585 Development of a Fire Analysis Drone for Smoke Toxicity Measurement for Fire Prediction and Management
Authors: Gabrielle Peck, Ryan Hayes
Abstract:
This research presents the design and creation of a drone gas analyser, aimed at addressing the need for independent data collection and analysis of gas emissions during large-scale fires, particularly wasteland fires. The analyser drone, comprising a lightweight gas analysis system attached to a remote-controlled drone, enables the real-time assessment of smoke toxicity and the monitoring of gases released into the atmosphere during such incidents. The key components of the analyser unit included two gas line inlets connected to glass wool filters, a pump with regulated flow controlled by a mass flow controller, and electrochemical cells for detecting nitrogen oxides, hydrogen cyanide, and oxygen levels. Additionally, a non-dispersive infrared (NDIR) analyser is employed to monitor carbon monoxide (CO), carbon dioxide (CO₂), and hydrocarbon concentrations. Thermocouples can be attached to the analyser to monitor temperature, as well as McCaffrey probes combined with pressure transducers to monitor air velocity and wind direction. These additions allow for monitoring of the large fire and can be used for predictions of fire spread. The innovative system not only provides crucial data for assessing smoke toxicity but also contributes to fire prediction and management. The remote-controlled drone's mobility allows for safe and efficient data collection in proximity to the fire source, reducing the need for human exposure to hazardous conditions. The data obtained from the gas analyser unit facilitates informed decision-making by emergency responders, aiding in the protection of both human health and the environment. This abstract highlights the successful development of a drone gas analyser, illustrating its potential for enhancing smoke toxicity analysis and fire prediction capabilities. The integration of this technology into fire management strategies offers a promising solution for addressing the challenges associated with wildfires and other large-scale fire incidents. The project's methodology and results contribute to the growing body of knowledge in the field of environmental monitoring and safety, emphasizing the practical utility of drones for critical applications.Keywords: fire prediction, drone, smoke toxicity, analyser, fire management
Procedia PDF Downloads 87
584 Verification of Dosimetric Commissioning Accuracy of Flattening Filter Free Intensity Modulated Radiation Therapy and Volumetric Modulated Therapy Delivery Using Task Group 119 Guidelines
Authors: Arunai Nambi Raj N., Kaviarasu Karunakaran, Krishnamurthy K.
Abstract:
The purpose of this study was to create American Association of Physicists in Medicine (AAPM) Task Group 119 (TG 119) benchmark plans for flattening filter free (FFF) beam deliveries of intensity modulated radiation therapy (IMRT) and volumetric modulated arc therapy (VMAT) in the Eclipse treatment planning system. The planning data were compared with the flattening filter (FF) IMRT and VMAT plan data to verify the dosimetric commissioning accuracy of FFF deliveries. AAPM TG 119 proposed a set of test cases called multi-target, mock prostate, mock head and neck, and C-shape to ascertain the overall accuracy of IMRT planning, measurement, and analysis. We used these test cases to investigate the performance of the Eclipse treatment planning system for flattening filter free beam deliveries. For these test cases, we generated two sets of treatment plans, the first using 7–9 IMRT fields and the second using a two-arc VMAT technique, for each of the beam deliveries (6 MV FF, 6 MV FFF, 10 MV FF and 10 MV FFF). The planning objectives and doses were set as described in TG 119. The dose prescriptions for multi-target, mock prostate, mock head and neck, and C-shape were taken as 50, 75.6, 50 and 50 Gy, respectively. The point dose (mean dose to the contoured chamber volume) at the specified positions/locations was measured using a compact (CC-13) ion chamber. The composite planar dose and per-field gamma analysis were measured with an IMatriXX Evaluation 2D array using OmniPro IMRT software (version 1.7b). FFF beam deliveries of IMRT and VMAT plans were comparable to flattening filter beam deliveries, and our planning and quality assurance results matched the TG 119 data. AAPM TG 119 test cases are useful for generating FFF benchmark plans. From the data obtained in this study, we conclude that the commissioning of FFF IMRT and FFF VMAT delivery was within the limits of TG-119 and that the performance of the Eclipse treatment planning system for FFF plans was satisfactory.
Keywords: flattening filter free beams, intensity modulated radiation therapy, task group 119, volumetric modulated arc therapy
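The per-field gamma analysis referred to above compares measured and calculated planar dose distributions point by point. For reference, the standard gamma index of Low et al. is reproduced below; the abstract does not state the acceptance criteria actually used, so the distance-to-agreement tolerance Δd_M and dose-difference tolerance ΔD_M (commonly 3 mm / 3%) should be read as assumptions.

```latex
% Gamma index (Low et al.): a measured point r_m passes if gamma <= 1
\Gamma(\vec{r}_m,\vec{r}_c) =
  \sqrt{\frac{\lVert \vec{r}_c-\vec{r}_m \rVert^{2}}{\Delta d_M^{2}}
      + \frac{\left[D_c(\vec{r}_c)-D_m(\vec{r}_m)\right]^{2}}{\Delta D_M^{2}}},
\qquad
\gamma(\vec{r}_m) = \min_{\{\vec{r}_c\}} \Gamma(\vec{r}_m,\vec{r}_c)
```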
Procedia PDF Downloads 145
583 Structures and Analytical Crucibles in Nigerian Indigenous Art Music
Authors: Albert Oluwole Uzodimma Authority
Abstract:
Nigeria is a diverse nation with a rich cultural heritage that has produced numerous art musicians and a vast range of art songs. The compositional styles, tonal rhythm, text rhythm, word painting, and text-tone relationship vary extensively from one dialect to another, indicating the need for standardized tools for the structural and analytical deconstruction of Nigerian indigenous art music. The purpose of this research is to examine the structures of Nigerian indigenous art music and outline some crucibles for analyzing it, by investigating how dialectical inflection influences the choice of text tone, scale mode, tonal rhythm, and the general ambiance of Nigerian art music. The research used a structured questionnaire to collect data from 50 musicologists, out of which 41 responded. The study's focus was on the works of two prominent twentieth-century composers, Stephen Olusoji, and Nwamara Alvan-Ikoku, titled "Oyigiyigi" and "O Chineke, Inozikwa omee," respectively. The data collected was presented in percentages using pie charts and tables. The study shows that in Nigerian Indigenous music, several aspects are to be considered for proper analysis, such as linguistic sensitivity, dialectical inflection influences text-tone relationship, text rhythm and tonal rhythm, which help to convey the proper meanings of messages in songs. It also highlights the lack of standardized rubrics for analysis, which necessitated the proposal of robust criteria for analyzing African music, known as Neo-Eclectic-Crucibles. Hinging on eclectic approach, this research makes significant contributions to music scholarship by addressing the need for standardized tools and crucibles for the structural and analytical deconstruction of Nigerian indigenous art music. It provides a template for further studies leading to standardized rubrics for analyzing African music. This research collected data through a structured questionnaire and analyzed it using pie charts and tables to present the findings accurately. The analysis focused on the respondents' perspectives on the research objectives and structural analysis of two indigenous music compositions by Olusoji and Nwamara. This research answers the questions on the structures and analytical crucibles used in Nigerian indigenous art music, how dialectical inflection influences text-tone relationship, scale mode, tonal rhythm, and the general ambiance of Nigerian art music. This paper demonstrates the need for standardized tools and crucibles for the structural and analytical deconstruction of Nigerian indigenous art music. It highlights several aspects that are crucial to analyzing Nigerian indigenous music and proposes the Neo-Eclectic-Crucibles criteria for analyzing African music. The contribution of this research to music scholarship is significant, providing a template for further studies and research in the field.Keywords: art-music, crucibles, dialectical inflections, indigenous, text-tone, tonal rhythm, word-painting
Procedia PDF Downloads 99
582 Clinical Validation of C-PDR Methodology for Accurate Non-Invasive Detection of Helicobacter pylori Infection
Authors: Suman Som, Abhijit Maity, Sunil B. Daschakraborty, Sujit Chaudhuri, Manik Pradhan
Abstract:
Background: Helicobacter pylori is a common and important human pathogen and the primary cause of peptic ulcer disease and gastric cancer. Currently, H. pylori infection is detected by both invasive and non-invasive methods, but the diagnostic accuracy is not up to the mark. Aim: To set up an optimal diagnostic cut-off value for the 13C-Urea Breath Test (13C-UBT) to detect H. pylori infection and to evaluate a novel c-PDR methodology to overcome the inconclusive grey zone. Materials and Methods: All 83 subjects first underwent upper-gastrointestinal endoscopy followed by rapid urease test and histopathology, and depending on these results we classified 49 subjects as H. pylori positive and 34 as negative. After an overnight fast, patients took 4 g of citric acid in 200 ml of water, and 10 minutes after ingestion of the test meal a baseline exhaled breath sample was collected. Thereafter, an oral dose of 75 mg 13C-urea dissolved in 50 ml of water was given, and breath samples were collected up to 90 minutes at 15-minute intervals and analysed by laser-based, high-precision cavity-enhanced spectroscopy. Results: We studied the excretion kinetics of 13C isotope enrichment (expressed as δDOB13C ‰) in the exhaled breath samples and found maximum enrichment around 30 minutes for H. pylori positive patients; this is due to acid-mediated stimulation of urease enzyme activity, with maximum acidification occurring within 30 minutes, whereas no such significant isotopic enrichment was observed for H. pylori negative individuals. Using a Receiver Operating Characteristic (ROC) curve, an optimal diagnostic cut-off value of δDOB13C ‰ = 3.14 was determined at 30 minutes, exhibiting 89.16% accuracy. To overcome the grey-zone problem, we explored the percentage dose of 13C recovered per hour, i.e. 13C-PDR (%/hr), and the cumulative percentage dose of 13C recovered, i.e. c-PDR (%), in exhaled breath samples for the present 13C-UBT. We further explored the diagnostic accuracy of the 13C-UBT by constructing a ROC curve using c-PDR (%) values, and an optimal cut-off value was estimated to be c-PDR = 1.47 (%) at 60 minutes, exhibiting 100% diagnostic sensitivity, 100% specificity and 100% accuracy of the 13C-UBT for the detection of H. pylori infection. We also elucidated the gastric emptying process of the present 13C-UBT for H. pylori positive patients: the maximal emptying rate was found at 36 minutes, and the half-emptying time was found to be 45 minutes. Conclusions: The present study demonstrates the importance of the c-PDR methodology in overcoming the grey-zone problem of the 13C-UBT for accurate determination of infection without any risk of diagnostic errors, making it a sufficiently robust and novel method for accurate and fast non-invasive diagnosis of H. pylori infection for large-scale screening purposes.
Keywords: 13C-Urea breath test, c-PDR methodology, grey zone, Helicobacter pylori
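The cumulative percentage dose recovered can be obtained by integrating the 13C-PDR curve over time. The sketch below assumes the per-interval PDR values have already been derived from the δDOB measurements (the conversion constants are not given in the abstract) and applies the reported 60-minute cut-off of 1.47%; the breath-test values themselves are illustrative only.

```python
import numpy as np

def cumulative_pdr(times_min, pdr_percent_per_hr):
    """Cumulative percentage dose of 13C recovered (c-PDR, %) obtained by
    trapezoidal integration of the 13C-PDR (%/hr) curve over time."""
    times_hr = np.asarray(times_min, dtype=float) / 60.0
    pdr = np.asarray(pdr_percent_per_hr, dtype=float)
    increments = 0.5 * (pdr[1:] + pdr[:-1]) * np.diff(times_hr)
    return np.concatenate(([0.0], np.cumsum(increments)))

# Hypothetical breath-test series sampled every 15 min up to 90 min
times = [0, 15, 30, 45, 60, 75, 90]
pdr = [0.0, 1.2, 2.4, 1.8, 1.1, 0.7, 0.4]   # illustrative values only

cpdr = cumulative_pdr(times, pdr)
cpdr_60 = cpdr[times.index(60)]
CUTOFF_60MIN = 1.47   # optimal c-PDR cut-off (%) at 60 min reported in the study
print(f"c-PDR at 60 min = {cpdr_60:.2f}% ->",
      "H. pylori positive" if cpdr_60 >= CUTOFF_60MIN else "H. pylori negative")
```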
Procedia PDF Downloads 300
581 The ‘Quartered Head Technique’: A Simple, Reliable Way of Maintaining Leg Length and Offset during Total Hip Arthroplasty
Authors: M. Haruna, O. O. Onafowokan, G. Holt, K. Anderson, R. G. Middleton
Abstract:
Background: Requirements for satisfactory outcomes following total hip arthroplasty (THA) include restoration of femoral offset, version, and leg length. Various techniques have been described for restoring these biomechanical parameters, with leg length restoration being the most predominantly described. We describe a “quartered head technique” (QHT) which uses a stepwise series of femoral head osteotomies to identify and preserve the centre of rotation of the femoral head during THA in order to ensure reconstruction of leg length, offset and stem version, such that hip biomechanics are restored as near to normal as possible. This study aims to identify whether using the QHT during hip arthroplasty effectively restores leg length and femoral offset to within acceptable parameters. Methods: A retrospective review of 206 hips was carried out, leaving 124 hips in the final analysis. Power analysis indicated a minimum of 37 patients required. All operations were performed using an anterolateral approach by a single surgeon. All femoral implants were cemented, collarless, polished double taper CPT® stems (Zimmer, Swindon, UK). Both cemented, and uncemented acetabular components were used (Zimmer, Swindon, UK). Leg length, version, and offset were assessed intra-operatively and reproduced using the QHT. Post-operative leg length and femoral offset were determined and compared with the contralateral native hip, and the difference was then calculated. For the determination of leg length discrepancy (LLD), we used the method described by Williamson & Reckling, which has been shown to be reproducible with a measurement error of ±1mm. As a reference, the inferior margin of the acetabular teardrop and the most prominent point of the lesser trochanter were used. A discrepancy of less than 6mm LLD was chosen as acceptable. All peri-operative radiographs were assessed by two independent observers. Results: The mean absolute post-operative difference in leg length from the contralateral leg was +3.58mm. 84% of patients (104/124) had LLD within ±6mm of the contralateral limb. The mean absolute post-operative difference in offset from contralateral leg was +3.88mm (range -15 to +9mm, median 3mm). 90% of patients (112/124) were within ±6mm offset of the contralateral limb. There was no statistical difference noted between observer measurements. Conclusion: The QHT provides a simple, inexpensive yet effective method of maintaining femoral leg length and offset during total hip arthroplasty. Combining this technique with pre-operative templating or other techniques described may enable surgeons to reduce even further the discrepancies between pre-operative state and post-operative outcome.Keywords: leg length discrepancy, technical tip, total hip arthroplasty, operative technique
Procedia PDF Downloads 79
580 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV
Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran
Abstract:
Ortho-rectification is the process of geometrically correcting an aerial image such that the scale is uniform. The ortho-image formed from the process is corrected for lens distortion, topographic relief, and camera tilt. This can be used to measure true distances, because it is an accurate representation of the Earth’s surface. Ortho-rectification and geo-referencing are essential to pin point the exact location of targets in video imagery acquired at the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified with the same co-ordinate system as an existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered, that is, each video frame must carry the necessary camera information before performing the ortho-rectification process. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area. This can then be used for comparison with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with multi-sensor system. The basic procedures for the real-time ortho-rectification are: (1) Decompilation of video stream into individual frames; (2) Finding of interior camera orientation parameters; (3) Finding the relative exterior orientation parameters for each video frames with respect to each other; (4) Finding the absolute exterior orientation parameters, using self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric mapping, which can be compared with a well referenced existing digital map for the purpose of georeferencing and aerial surveillance. A test field located in Abuja, Nigeria was used for testing our method. Fifteen minutes video and telemetry data were collected using the UAV and the data collected were processed using the four-step ortho-rectification procedure. The results demonstrated that the geometric measurement of the control field from ortho-images are more reliable than those from original perspective photographs when used to pin point the exact location of targets on the video imagery acquired by the UAV. The 2-D planimetric accuracy when compared with the 6 control points measured by a GPS receiver is between 3 to 5 meters.Keywords: geo-referencing, ortho-rectification, video frame, self-calibration
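A skeleton of the four-step procedure might look as follows. cv2.VideoCapture is the standard OpenCV frame-grabbing API, while the three orientation helpers are hypothetical placeholders that only illustrate the data flow from the video stream to a mosaicked map; they are not the authors' implementation.

```python
import cv2

def decompile_video(path, step=1):
    """Step 1: split the video stream into individual frames."""
    cap, frames, idx = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)
        idx += 1
    cap.release()
    return frames

def interior_orientation(calibration_report):
    """Step 2: focal length, principal point and lens distortion (placeholder)."""
    ...

def relative_orientation(frames):
    """Step 3: exterior orientation of each frame relative to its neighbours (placeholder)."""
    ...

def absolute_orientation(relative_params, telemetry):
    """Step 4: absolute exterior orientation via self-calibration adjustment (placeholder)."""
    ...

# frames = decompile_video("uav_survey.mp4")
# io_params = interior_orientation("camera_calibration.txt")
# rel_params = relative_orientation(frames)
# abs_params = absolute_orientation(rel_params, "telemetry.log")
# each rectified frame would then be mosaicked into a 2-D planimetric map
```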
Procedia PDF Downloads 477
579 Simulation of Maximum Power Point Tracking in a Photovoltaic System: A Circumstance Using Pulse Width Modulation Analysis
Authors: Asowata Osamede
Abstract:
Optimizing the output power gain of stand-alone photovoltaic (PV) systems is one of the major focuses of PV research in recent times, owing to the low carbon emissions and efficiency of such systems. Power failures or outages from commercial providers generally hinder development in the public and private sectors and thus limit the development of industries. A well-structured PV system is therefore important for an efficient and cost-effective monitoring system. The purpose of this paper is to validate the maximum power point of an off-grid PV system, taking into consideration the most effective tilt and orientation angles for PV panels in the southern hemisphere. The paper analyzes the system using a solar charger with MPPT from a pulse width modulation (PWM) perspective; the power conditioning device chosen is a solar charger with MPPT. The practical setup consists of a PV panel set to an orientation angle of 0° north, with corresponding tilt angles of 36°, 26° and 16°. The load employed in this set-up is three lead-acid batteries (LAB). The percentage of time spent fully charged, charging and not charging is observed for all three batteries. The results obtained in this research are used to draw conclusions that provide a benchmark for researchers and scientists worldwide, giving an indication of the best tilt and orientation angles for the maximum power point in a basic off-grid PV system. A quantitative analysis is employed in this research. Quantitative research tends to focus on measurement and proof, and inferential statistics are frequently used to generalize what is found about the study sample to the population as a whole. This involves selecting and defining the research question, deciding on a study type, deciding on the data collection tools, selecting the sample and its size, and analyzing, interpreting and validating findings. Preliminary results, which include regression analysis (normal probability plot and residual plot using a sixth-order polynomial), showed the maximum power point of the system. The 36° tilt angle provided the best average on-time for maximum power point tracking, which in turn put the system into a pulse width modulation stage.
Keywords: power-conversion, meteonorm, PV panels, DC-DC converters
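The sixth-order polynomial regression mentioned above can be reproduced with a short script of the following kind; the hourly power readings are illustrative stand-ins, not the measured data from the study.

```python
import numpy as np

# Hypothetical output power (W) of the 36-degree panel over a day, sampled hourly
hours = np.arange(6, 19)  # 06:00 to 18:00
power = np.array([5, 22, 48, 71, 88, 96, 99, 95, 86, 68, 45, 20, 4], dtype=float)

coeffs = np.polyfit(hours, power, deg=6)     # sixth-order polynomial fit
poly = np.poly1d(coeffs)

fine_t = np.linspace(hours.min(), hours.max(), 500)
t_max = fine_t[np.argmax(poly(fine_t))]
print(f"Fitted maximum power ~{poly(t_max):.1f} W at ~{t_max:.1f} h")

residuals = power - poly(hours)              # data behind the residual plot
```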
Procedia PDF Downloads 146
578 Cost Based Analysis of Risk Stratification Tool for Prediction and Management of High Risk Choledocholithiasis Patients
Authors: Shreya Saxena
Abstract:
Background: Choledocholithiasis is a common complication of gallstone disease. Risk scoring systems exist to guide the need for further imaging or endoscopy in managing choledocholithiasis. We completed an audit to review the American Society for Gastrointestinal Endoscopy (ASGE) scoring system for the prediction and management of choledocholithiasis against current practice at a tertiary hospital, to assess its utility in resource optimisation. We have now conducted a cost-focused sub-analysis on patients categorised as high-risk for choledocholithiasis according to the guidelines, to determine any associated cost benefits. Method: Data from our prior audit were used to retrospectively identify thirteen patients considered high-risk for choledocholithiasis. Their ongoing management was mapped against the guidelines. Individual costs for the key investigations were obtained from our hospital financial data. Total costs for the different management pathways identified in clinical practice were calculated and compared against the predicted costs associated with the recommendations in the guidelines. We excluded the cost of laparoscopic cholecystectomy and considered a set figure for per-day hospital admission related expenses. Results: Based on our previous audit data, we identified a 77% positive predictive value for the ASGE risk stratification tool in determining patients at high risk of choledocholithiasis. 47% (6/13) had a magnetic resonance cholangiopancreatography (MRCP) prior to endoscopic retrograde cholangiopancreatography (ERCP), whilst 53% (7/13) went straight to ERCP. The average length of stay in the hospital was 7 days, with an additional day and a cost of £328.00 (£117 for ERCP) for patients awaiting an MRCP prior to ERCP. Per-day hospital admission was valued at £838.69. When calculating total cost, we assumed all patients had admission bloods and ultrasound done as the gold standard. In doing an MRCP prior to ERCP, there was a 130% increase in investigation cost incurred (£580.04 vs £252.04) per patient. When also considering hospital admission and the average length of stay, this amounted to an additional £1166.69 per patient. We then calculated the exact costs incurred by the department, over a three-month period, for all patients, for the key investigations and procedures done in the management of choledocholithiasis. This was compared to an estimated cost derived from the recommended pathways in the ASGE guidelines. Overall, an 81% saving (£2048.45) was associated with following the guidelines compared to clinical practice. Conclusion: MRCP is the most expensive test associated with the diagnosis and management of choledocholithiasis. The ASGE guidelines recommend endoscopy without an MRCP in patients stratified as high-risk for choledocholithiasis. Our audit, which focused on assessing the utility of the ASGE risk scoring system, showed it to be relatively reliable for identifying high-risk patients. Our cost analysis has shown significant cost savings per patient, and with respect to the average length of stay, associated with direct endoscopy rather than an additional MRCP; part of this is because of the increased average length of stay associated with waiting for an MRCP. The above data support the ASGE guidelines for the management of patients at high risk of choledocholithiasis from a cost perspective.
The only caveat is our small data set, which may affect the validity of our average length of hospital stay figures and hence the total cost calculations.
Keywords: cost-analysis, choledocholithiasis, risk stratification tool, general surgery
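A quick arithmetic check of the per-patient figures quoted above (all values in GBP, taken directly from the abstract):

```python
# Reported per-patient figures and a check of the 130% increase and the
# additional cost of the MRCP-first pathway.
cost_direct_ercp = 252.04      # investigations when going straight to ERCP
cost_mrcp_then_ercp = 580.04   # investigations when MRCP precedes ERCP
per_day_admission = 838.69     # one extra inpatient day while awaiting MRCP

increase = (cost_mrcp_then_ercp - cost_direct_ercp) / cost_direct_ercp
print(f"Investigation cost increase: {increase:.0%}")                 # ~130%

extra_per_patient = (cost_mrcp_then_ercp - cost_direct_ercp) + per_day_admission
print(f"Additional cost per patient: GBP {extra_per_patient:.2f}")    # 1166.69
```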
Procedia PDF Downloads 97
577 Exploring the Applicability of a Rapid Health Assessment in India
Authors: Claudia Carbajal, Jija Dutt, Smriti Pahwa, Sumukhi Vaid, Karishma Vats
Abstract:
ASER Centre, the research and assessment arm of Pratham Education Foundation sees measurement as the first stage of action. ASER uses primary research to push and give empirical foundations to policy discussions at a multitude of levels. At a household level, common citizens use a simple assessment (a floor-level test) to measure learning across rural India. This paper presents the evidence on the applicability of an ASER approach to the health sector. A citizen-led assessment was designed and executed that collected information from young mothers with children up to a year of age. The pilot assessments were rolled-out in two different models: Paid surveyors and student volunteers. The survey covered three geographic areas: 1,239 children in the Jaipur District of Rajasthan, 2,086 in the Rae Bareli District of Uttar Pradesh, and 593 children in the Bhuj Block in Gujarat. The survey tool was designed to study knowledge of health-related issues, daily practices followed by young mothers and access to relevant services and programs. It provides insights on behaviors related to infant and young child feeding practices, child and maternal nutrition and supplementation, water and sanitation, and health services. Moreover, the survey studies the reasons behind behaviors giving policy-makers actionable pathways to improve implementation of social sector programs. Although data on health outcomes are available, this approach could provide a rapid annual assessment of health issues with indicators that are easy to understand and act upon so that measurements do not become an exclusive domain of experts. The results give many insights into early childhood health behaviors and challenges. Around 98% of children are breastfed, and approximately half are not exclusively breastfed (for the first 6 months). Government established diet diversity guidelines are met for less than 1 out of 10 children. Although most households are satisfied with the quality of drinking water, most tested households had contaminated water.Keywords: citizen-led assessment, rapid health assessment, Infant and Young Children Feeding, water and sanitation, maternal nutrition, supplementation
Procedia PDF Downloads 168
576 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost
Authors: German Osma, Gabriel Ordonez
Abstract:
The production of metal-rubber spares for vehicles is a sequential process that consists of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. Energy efficiency analysis in these cases mostly focuses on each machine or production step, but it is not common to study the quality achieved by the production process from an aggregated-value viewpoint, which can be used as a quality measurement for determining the impact on the environment. In this paper, the theory of exergetic cost is used to determine the aggregated exergy of three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. The manufacturing of these spares is based on a batch production technique, and therefore the use of this theory for discontinuous flows is proposed, starting from single models of workstations; subsequently, the complete exergy model of each product is built using flowcharts. These models represent the exergy flows between components in the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of the raw materials (the aggregated exergy value) and the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses and a rubber mixer. The thermoeconomic analysis was done by workstation and by spare: the first describes the operation of the components of each machine and where the exergy losses occur, while the second estimates the exergy-aggregated value for the finished product and the wasted feedstock. Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while this value in the thermal workstations is less than 5%, and that each effective exergy-aggregated value is one-thirtieth of the total exergy required for the operation of the manufacturing process, which amounts to approximately 2 MJ. These shortcomings are caused mainly by the technical limitations of the machines, the oversizing of the metal feedstock, which demands more mechanical transformation work, and the low thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. From the information established in this case, it is possible to appreciate the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.
Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling
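For reference, the exergy efficiency figures quoted above follow the standard definition of useful (aggregated) exergy over exergy supplied; this is the textbook relation, not a formula reproduced from the paper.

```latex
% Exergy efficiency of a workstation: useful (aggregated) exergy over exergy supplied
\eta_{ex} = \frac{\dot{E}x_{\mathrm{product}}}{\dot{E}x_{\mathrm{supplied}}}
          = 1 - \frac{\dot{E}x_{\mathrm{destroyed}} + \dot{E}x_{\mathrm{losses}}}{\dot{E}x_{\mathrm{supplied}}}
```

On this reading, an aggregated value of one-thirtieth of the total exergy demanded corresponds to an overall process effectiveness of roughly 3%.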
Procedia PDF Downloads 169
575 Crowdsensing Project in the Brazilian Municipality of Florianópolis for the Number of Visitors Measurement
Authors: Carlos Roberto De Rolt, Julio da Silva Dias, Rafael Tezza, Luca Foschini, Matteo Mura
Abstract:
The seasonal population fluctuation presents a challenge to touristic cities since the number of inhabitants can double according to the season. The aim of this work is to develop a model that correlates the waste collected with the population of the city and also allow cooperation between the inhabitants and the local government. The model allows public managers to evaluate the impact of the seasonal population fluctuation on waste generation and also to improve planning resource utilization throughout the year. The study uses data from the company that collects the garbage in Florianópolis, a Brazilian city that presents the profile of a city that attracts tourists due to numerous beaches and warm weather. The fluctuations are caused by the number of people that come to the city throughout the year for holidays, summer time vacations or business events. Crowdsensing will be accomplished through smartphones with access to an app for data collection, with voluntary participation of the population. Crowdsensing participants can access information collected in waves for this portal. Crowdsensing represents an innovative and participatory approach which involves the population in gathering information to improve the quality of life. The management of crowdsensing solutions plays an essential role given the complexity to foster collaboration, establish available sensors and collect and process the collected data. Practical implications of this tool described in this paper refer, for example, to the management of seasonal tourism in a large municipality, whose public services are impacted by the floating of the population. Crowdsensing and big data support managers in predicting the arrival, permanence, and movement of people in a given urban area. Also, by linking crowdsourced data to databases from other public service providers - e.g., water, garbage collection, electricity, public transport, telecommunications - it is possible to estimate the floating of the population of an urban area affected by seasonal tourism. This approach supports the municipality in increasing the effectiveness of resource allocation while, at the same time, increasing the quality of the service as perceived by citizens and tourists.Keywords: big data, dashboards, floating population, smart city, urban management solutions
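A minimal sketch of the kind of model described above, relating waste collected to the number of people present, assuming weekly waste tonnage and reference headcounts are available; all figures are invented for illustration.

```python
import numpy as np

waste_tonnes = np.array([900, 950, 1400, 1800, 1750, 1000], dtype=float)  # weekly waste collected
population_k = np.array([450, 470, 700, 900, 880, 500], dtype=float)      # known headcount (thousands)

slope, intercept = np.polyfit(waste_tonnes, population_k, deg=1)          # simple linear model

def estimate_population(tonnes):
    """Estimate the floating population (thousands) from waste collected."""
    return slope * tonnes + intercept

print(f"Estimated population for 1600 t/week: {estimate_population(1600):.0f} thousand")
```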
Procedia PDF Downloads 287
574 Alternative Islamic Finance Channels and Instruments: An Evaluation of the Potential and Considerations in Light of Sharia Principles
Authors: Tanvir A. Uddin, Blake Goud
Abstract:
Emerging trends in FinTech-enabled alternative finance, which includes channels and instruments emerging outside the traditional financial system, heralds unprecedented opportunities to improve financial intermediation and increase access to finance. With widespread criticism of the mainstream Islamic banking and finance sector as either mimicking the conventional system, failing to achieve inclusive growth or both, industry stakeholders are turning to technology to show that finance can be done differently. This paper will outline the critical elements for successful deployment of technology to maximize benefit and minimize potential for harm from introduction of Islamic FinTech and propose recommendations for Islamic financial institutions, FinTech companies, regulators and other stakeholders who are integrating or who are considering introducing FinTech solutions. The paper will present an overview of literature, present relevant case studies and summarize the lessons from interviews conducted with Islamic FinTech founders from around the world. With growing central bank concerns about leveraged loans and ballooning private credit markets globally (estimated at $1.4 trillion), current and future Islamic FinTech operators are at risk of contributing to the problems they aim to solve by operating in a 'shadow banking' system. The paper will show that by systematising a robust theory of change linked to positive outcomes, utilising objective impact frameworks (e.g., the Impact Measurement Project) and instilling a risk management culture that is proactive about potential social harm (e.g., irresponsible lending), FinTech can enable the Islamic finance industry to support positive social impact and minimize harm in support of the maqasid. The adoption of FinTech within the Islamic finance context is still at a nascent stage and the recommendations we provide based on the limited experience to date will help address some of the major cross-cutting issues related to FinTech. Further research will be needed to elucidate in more detail issues relating to individual sectors and countries within the broader global Islamic finance industry.Keywords: alternative finance, FinTech, Islamic finance, maqasid, theory of change
Procedia PDF Downloads 151
573 Control Performance Simulation and Analysis for Microgravity Vibration Isolation System Onboard Chinese Space Station
Authors: Wei Liu, Shuquan Wang, Yang Gao
Abstract:
The Microgravity Science Experiment Rack (MSER) will be onboard the TianHe (TH) spacecraft, planned to be launched in 2018. TH is one module of the Chinese Space Station. The Microgravity Vibration Isolation System (MVIS), which is MSER's core part, is used to isolate disturbance from TH and provide a high-level microgravity environment for the science experiment payload. MVIS is a two-stage vibration isolation system consisting of a Follow Unit (FU) and an Experiment Support Unit (ESU). The FU is linked to MSER by umbilical cables, and the ESU is suspended within the FU without physical connection. The FU's position and attitude relative to TH are measured by a binocular vision measuring system, and its acceleration and angular velocity are measured by accelerometers and gyroscopes. Air-jet thrusters are used to generate the force and moment that control the FU's motion. The measurement module on the ESU contains a set of Position-Sense-Detectors (PSD) sensing the ESU's position and attitude relative to the FU, and accelerometers and gyroscopes sensing the ESU's acceleration and angular velocity. Electro-magnetic actuators are used to control the ESU's motion. Firstly, the linearized equations of the FU's motion relative to TH and of the ESU's motion relative to the FU are derived, laying the foundation for control system design and simulation analysis. Subsequently, two control schemes are proposed: in one, the ESU tracks the FU and the FU tracks TH, shortened as E-F-T; in the other, the FU tracks the ESU and the ESU tracks TH, shortened as F-E-T. In addition, the motion spaces are constrained within ±15 mm and ±2° between the FU and the ESU, and within ±300 mm between the FU and TH or between the ESU and TH. A Proportional-Integral-Derivative (PID) controller is designed to control the FU's position and attitude. The ESU's controller includes an acceleration feedback loop and a relative position feedback loop: a Proportional-Integral (PI) controller is designed in the acceleration feedback loop to reduce the ESU's acceleration level, and a PID controller in the relative position feedback loop is used to avoid collision. Finally, simulations of E-F-T and F-E-T are performed considering a variety of uncertainties, disturbances and motion space constraints. The simulation results of E-F-T showed that the control performance was 0 to -20 dB for vibration frequencies from 0.01 to 0.1 Hz, with vibration attenuated by 40 dB per ten octaves above 0.1 Hz. The simulation results of F-E-T showed that vibration was attenuated by 20 dB per ten octaves from 0.01 Hz.
Keywords: microgravity science experiment rack, microgravity vibration isolation system, PID control, vibration isolation performance
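A minimal discrete PID sketch of the kind of relative-position loop described above (for example, the FU position controller). The gains, time step, mass and the simple double-integrator plant are illustrative assumptions, not the flight values.

```python
class PID:
    """Textbook discrete PID controller."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

dt = 0.01                              # s, control step (assumed)
pid = PID(kp=4.0, ki=0.2, kd=6.0, dt=dt)
pos, vel, mass = 0.010, 0.0, 50.0      # 10 mm initial offset, hypothetical unit mass in kg

for _ in range(3000):                  # 30 s of simulated regulation toward zero offset
    force = pid.update(0.0 - pos)      # actuator command from relative position error
    acc = force / mass                 # double-integrator plant
    vel += acc * dt
    pos += vel * dt

print(f"Residual relative displacement after 30 s: {pos * 1000:.2f} mm")
```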
Procedia PDF Downloads 159
572 Analyzing the Causes of Amblyopia among Patients in Tertiary Care Center: Retrospective Study in King Faisal Specialist Hospital and Research Center
Authors: Hebah M. Musalem, Jeylan El-Mansoury, Lin M. Tuleimat, Selwa Alhazza, Abdul-Aziz A. Al Zoba
Abstract:
Background: Amblyopia is a condition that affects the visual system, causing a decrease in visual acuity without a known underlying pathology. It is due to abnormal visual development in childhood or infancy. Most importantly, the vision loss is preventable or reversible with the right kind of intervention in most cases. Strabismus, sensory defects, and anisometropia are all well-known causes of amblyopia; amblyopia caused by ocular misalignment (strabismus) is considered the most common form worldwide. The risk of developing amblyopia increases in premature children and in children who are developmentally delayed or who had brain lesions affecting the visual pathway. The prevalence of amblyopia varies between 2 and 5% worldwide according to the literature. Objective: To determine the different causes of amblyopia in pediatric patients seen in the ophthalmology clinic of a tertiary care center, i.e. King Faisal Specialist Hospital and Research Center (KFSH&RC). Methods: This is a hospital-based, random retrospective study based on reviewing patients' files in the Ophthalmology Department of KFSH&RC in Riyadh city, Kingdom of Saudi Arabia. Inclusion criteria: amblyopic pediatric patients who attended the clinic from 2015 to 2016 and who were between 6 months and 18 years old. Exclusion criteria: patients above 18 years of age and any patient too uncooperative to obtain an accurate visual acuity or a proper refraction. Detailed ocular and medical histories were recorded. The examination protocol included a full ocular exam, full cycloplegic refraction, visual acuity measurement, and ocular motility and strabismus evaluation. All data were organized in tables and graphs and analyzed by a statistician. Results: Our preliminary results will be discussed on the spot by our corresponding author. Conclusions: In this study, we focused on utilizing various examination techniques, which enhanced our results and highlighted a distinct correlation between amblyopia and its causes. This paper's recommendations emphasize critical testing protocols to be followed for amblyopic patients, especially in tertiary care centers.
Keywords: amblyopia, amblyopia causes, amblyopia diagnostic criterion, amblyopia prevalence, Saudi Arabia
Procedia PDF Downloads 158
571 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of few nanoseconds is not feasible, a novel front-end electronics allowing for sampling in a voltage domain at four thresholds was developed. To take fully advantage of these fast signals a novel scheme of recovery of the waveform of the signal, based on ideas from the Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of sparse representation is evaluated based on the linear transformation of the training set of waveform of the signals by using the Principal Component Analysis (PCA) decomposition. Beside the advantage of including the additional information from training signals, a further benefit of the TR approach is that the problem of signal recovery has an optimal solution which can be determined explicitly. Moreover, from the Bayes theory the properties of regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculations of the signal recovery error. It has been proven that an average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of the single detection module of the J-PET detector built out from the 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered waveform of the signals, instead of samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that spatial resolution evaluated based on information from four voltage levels, without a recovery of the waveform of the signal, is equal to 1.05 cm. After the application of an information from four voltage levels to the recovery of the signal waveform, the spatial resolution is improved to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw-signal. The spatial resolution calculated under these conditions is equal to 0.93 cm. It is very important information since, limiting the number of threshold levels in the electronic devices to four, leads to significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where a prior knowledge about the signals of interest may be utilized.Keywords: plastic scintillators, positron emission tomography, statistical analysis, tikhonov regularization
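A minimal numerical sketch of the recovery idea described above: the waveform is expanded in a PCA basis learned from training signals, and the coefficients are estimated from the few threshold samples by Tikhonov-regularized least squares, which has the closed-form solution used below. The signal model, sampling matrix and regularization weight are illustrative assumptions, not the J-PET values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 100, 8, 10                  # waveform length, samples per signal, PCA components

# Stand-in training waveforms; in practice these would be registered signal shapes
train = rng.standard_normal((500, n)).cumsum(axis=1)
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:k].T                          # n x k PCA basis: x ~ mean + B @ c

# Sampling operator picking m points of the waveform (stand-in for threshold crossings)
A = np.zeros((m, n))
A[np.arange(m), rng.choice(n, m, replace=False)] = 1.0

x_true = train[0]
y = A @ x_true + 0.01 * rng.standard_normal(m)    # the few noisy samples

lam = 1e-2                            # Tikhonov regularization weight (assumed)
M = A @ B                             # measurements act on the PCA coefficients
c_hat = np.linalg.solve(M.T @ M + lam * np.eye(k), M.T @ (y - A @ mean))
x_hat = mean + B @ c_hat              # recovered waveform

print("relative recovery error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```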
Procedia PDF Downloads 445
570 Effect of Human Resources Accounting on Financial Performance of Banks in Nigeria
Authors: Oti Ibiam, Alexanda O. Kalu
Abstract:
Human resource accounting is the process of identifying and measuring data about human resources and communicating this information to interested parties in order to make meaningful investment decisions. In recent times, firms' focus has shifted to human resource accounting so as to ensure efficiency and effectiveness in their operations. This study focused on the effect of human resource accounting on the financial performance of banks in Nigeria. The problem that led to the study revolves around the current trend whereby Nigerian banks do not efficiently account for the input of human resources in their annual statements: instead of capitalizing human resources in their statement of financial position, they expense them in their income statement, thereby reducing their profit after tax. The broad objective of this study is to determine the extent to which human resource accounting affects the financial performance and value of Nigerian banks. The study is considered significant because, universally, there are still grey areas to be sorted out on the subject of human resource accounting. In the bid to achieve the study objectives, the researcher gathered data from sixteen commercial banks. Data were collected from both primary and secondary sources using an ex-post facto research design. The data collected were then tabulated and analyzed using multiple regression analysis. The result of hypothesis one revealed that there is a significant relationship between capitalized human resource cost and post-capitalization profit before tax of banks in Nigeria. The finding of hypothesis two revealed that the association between capitalized human resource cost and post-capitalization net worth of banks in Nigeria is significant. The finding of hypothesis three revealed that there is a significant difference between pre- and post-capitalization profit before tax of banks in Nigeria. The study concludes that human resource accounting positively influenced the financial performance of banks in Nigeria within the period under study. It is recommended that standards be set for human resource identification and measurement in the banking sector, and that the management of commercial banks in Nigeria develop a proper appreciation of human resource accounting; this will enable managers to take the right decisions regarding investment in human resources. The study also recommends that policies on enhancing the post-capitalization profit before tax of banks in Nigeria pay close attention to capitalized human resource cost, net worth and total assets, as these variables significantly influenced the post-capitalization profit before tax of the studied banks. The limitation of the study centres on the limited number of years and companies adopted for the study.
Keywords: capitalization, human resources cost, profit before tax, net worth
Procedia PDF Downloads 149
569 Effect of Amount of Crude Fiber in Grass or Silage to the Digestibility of Organic Matter in Suckler Cow Feeding Systems
Authors: Scholz Heiko, Kuhne Petra, Heckenberger Gerd
Abstract:
Problems during the calving period (December to May) are often associated with a high body condition score (BCS) at this time. At the end of the grazing period (frequently after early weaning), however, an increase in BCS can often be observed under German conditions. In the last eight weeks before calving, the body condition should be reduced or at least not increased. Rations with a higher amount of crude fiber can be used (rations with straw or late-mown grass silage). Fermentative digestion of fiber is slow and incomplete, which is why the fermentative process in the rumen can be reduced over a long feeding time. Viewed in this context, the feed intake of suckler cows (8 weeks before calving) on different rations, and the fermentation in the rumen, should be checked by taking rumen fluid. Eight suckler cows (Charolais) were fed a total mixed ration (TMR) in the last eight weeks before calving and grass silage after calving. The amount of crude fiber in the TMR (grass silage, straw, mineral) before calving was varied by the addition of straw (30% [TMR1] vs. 60% [TMR2] of dry matter). After calving, the cows were fed grass silage [GS] ad libitum, and the last measurement of rumen fluid took place on the pasture [PS]. Rumen fluid, plasma, body weight, and backfat thickness were collected. Rumen fluid pH was assessed using an electronic pH meter. Volatile fatty acids (VFA), sedimentation, methylene blue and the number of infusoria were measured. From these 4 parameters, an "index of rumen fermentation" [IRF] was formed. Fixed effects of treatment (TMR1, TMR2, GS and PS) and number of lactations (3-7 lactations) were analyzed by ANOVA using SPSS version 25.0 (significance at p ≤ 5%). Rumen fluid pH was significantly influenced by the variants (TMR1: 6.6; TMR2: 6.9; GS: 6.6; PS: 6.9) but was not affected by the other effects. The IRF indicated disturbed fermentation in the rumen when feeding TMR1 and TMR2 with a high amount of crude fiber (score > 10.0 points) and a very good fermentation environment during grazing on the pasture (score: 6.9 points). Furthermore, significant differences were found for VFA, methylene blue and the number of infusoria. The use of rations with a high amount of crude fiber from weaning to calving may cause deviations from undisturbed fermentation in the rumen and adversely affect the utilization of feed in the rumen.
Keywords: suckler cow, feeding systems, crude fiber, digestibility of organic matter
Procedia PDF Downloads 143
568 Religious Fundamentalism Prescribes Requirements for Marriage and Reproduction
Authors: Steven M. Graham, Anne V. Magee
Abstract:
Most world religions have sacred texts and traditions that provide instruction about and definitions of marriage, family, and family duties and responsibilities. Given that religious fundamentalism (RF) is defined as the belief that these sacred texts and traditions are literally and completely true to the exclusion of other teachings, RF should be predictive of the attitudes one holds about these topics. The goals of the present research were to: (1) explore the extent to which people think that men and women can be happy without marriage, a significant sexual relationship, a long-term romantic relationship, and having children; (2) determine the extent to which RF is associated with these beliefs; and, (3) to determine how RF is associated with considering certain elements of a relationship to be necessary for thinking of that relationship as a marriage. In Study 1, participants completed a reliable and valid measure of RF and answered questions about the necessity of various elements for a happy life. Higher RF scores were associated with the belief that both men and women require marriage, a sexual relationship, a long-term romantic relationship, and children in order to have a happy life. In Study 2, participants completed these same measures and the pattern of results replicated when controlling for overall religiosity. That is, RF predicted these beliefs over and above religiosity. Additionally, participants indicated the extent to which a variety of characteristics were necessary to consider a particular relationship to be a marriage. Controlling for overall religiosity, higher RF scores were associated with the belief that the following were required to consider a relationship a marriage: religious sanctification, a sexual component, sexual monogamy, emotional monogamy, family approval, children (or the intent to have them), cohabitation, and shared finances. Interestingly, and unexpectedly, higher RF scores were correlated with less importance placed on mutual consent in order to consider a relationship a marriage. RF scores were uncorrelated with the importance placed on legal recognition or lifelong commitment and these null findings do not appear to be attributable to ceiling effects or lack of variability. These results suggest that RF constrains views about both the importance of marriage and family in one’s life and also the characteristics required to consider a relationship a proper marriage. This could have implications for the mental and physical health of believers high in RF, either positive or negative, depending upon the extent to which their lives correspond to these templates prescribed by RF. Additionally, some of these correlations with RF were substantial enough (> .70) that the relevant items could serve as a brief, unobtrusive measure of RF. Future research will investigate these possibilities.Keywords: attitudes about marriage, fertility intentions, measurement, religious fundamentalism
Procedia PDF Downloads 117
567 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing
Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari
Abstract:
A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with the current advances in the methods used for both phenotypic and genotypic identification of bacteria, there is still a need to develop method(s) that enhance the outcome of bacteriology laboratories in accuracy and time. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded for use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with reverse translation of 6 randomly chosen amino acids in order to limit the number of generated primers. In addition, software was designed to scan the downloaded chromosomes for similarities using the generated primers, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique. On the other hand, most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising. Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primer(s) that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that identifies multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using an alignment strategy, which enhances diagnostic performance in comparison to traditional molecular assays, in the present method the generated primers can be used to identify an organism before the draft sequence is completed. The generated primers can also be used to build a bank of primers for easy access when identifying bacteria.
Keywords: bacteria chromosome, bacterial identification, sequence, primer generation
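A toy version of the primer-generation-and-scanning idea (the study used Visual Basic and SQL Server; Python is used here purely for illustration): reverse-translate a 6-residue peptide into all 18-base primers, then count exact occurrences in downloaded chromosome sequences and flag single-hit primers as unique. The codon table is truncated and the chromosome string is a toy example, not real data.

```python
from itertools import product

# Back-translation table for a few amino acids only (full table omitted for brevity)
CODONS = {
    "M": ["ATG"], "W": ["TGG"],
    "F": ["TTT", "TTC"], "K": ["AAA", "AAG"],
    "D": ["GAT", "GAC"], "E": ["GAA", "GAG"],
}

def reverse_translate(peptide):
    """All 18-base primers encoding a 6-residue peptide (one codon choice per residue)."""
    choices = [CODONS[aa] for aa in peptide]
    return ["".join(codons) for codons in product(*choices)]

def count_hits(primer, chromosomes):
    """Number of exact occurrences of the primer across all chromosome sequences."""
    return sum(chromosome.count(primer) for chromosome in chromosomes.values())

# chromosomes = {"E_coli_K12": open("NC_000913.fna").read().replace("\n", ""), ...}
chromosomes = {"toy_chromosome": "ATGTGGTTTAAAGATGAA" * 3 + "ATGTGGTTCAAAGATGAA"}

for primer in reverse_translate("MWFKDE"):
    if count_hits(primer, chromosomes) == 1:
        print("unique primer:", primer)
```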
Procedia PDF Downloads 191
566 Viability Analysis of a Centralized Hydrogen Generation Plant for Use in Oil Refining Industry
Authors: C. Fúnez Guerra, B. Nieto Calderón, M. Jaén Caparrós, L. Reyes-Bozo, A. Godoy-Faúndez, E. Vyhmeister
Abstract:
The global energy system is experiencing a change of scenery. Unstable energy markets, an increasing focus on climate change and its sustainable development is forcing businesses to pursue new solutions in order to ensure future economic growth. This has led to the interest in using hydrogen as an energy carrier in transportation and industrial applications. As an energy carrier, hydrogen is accessible and holds a high gravimetric energy density. Abundant in hydrocarbons, hydrogen can play an important role in the shift towards low-emission fossil value chains. By combining hydrogen production by natural gas reforming with carbon capture and storage, the overall CO2 emissions are significantly reduced. In addition, the flexibility of hydrogen as an energy storage makes it applicable as a stabilizer in the renewable energy mix. The recent development in hydrogen fuel cells is also raising the expectations for a hydrogen powered transportation sector. Hydrogen value chains exist to a large extent in the industry today. The global hydrogen consumption was approximately 50 million tonnes (7.2 EJ) in 2013, where refineries, ammonia, methanol production and metal processing were main consumers. Natural gas reforming produced 48% of this hydrogen, but without carbon capture and storage (CCS). The total emissions from the production reached 500 million tonnes of CO2, hence alternative production methods with lower emissions will be necessary in future value chains. Hydrogen from electrolysis is used for a wide range of industrial chemical reactions for many years. Possibly, the earliest use was for the production of ammonia-based fertilisers by Norsk Hydro, with a test reactor set up in Notodden, Norway, in 1927. This application also claims one of the world’s largest electrolyser installations, at Sable Chemicals in Zimbabwe. Its array of 28 electrolysers consumes 80 MW per hour, producing around 21,000 Nm3/h of hydrogen. These electrolysers can compete if cheap sources of electricity are available and natural gas for steam reforming is relatively expensive. Because electrolysis of water produces oxygen as a by-product, a system of Autothermal Reforming (ATR) utilizing this oxygen has been analyzed. Replacing the air separation unit with electrolysers produces the required amount of oxygen to the ATR as well as additional hydrogen. The aim of this paper is to evaluate the technical and economic potential of large-scale production of hydrogen for oil refining industry. Sensitivity analysis of parameters such as investment costs, plant operating hours, electricity price and sale price of hydrogen and oxygen are performed.Keywords: autothermal reforming, electrolyser, hydrogen, natural gas, steam methane reforming
Procedia PDF Downloads 210565 The Environmental Impact of Sustainability Dispersion of Chlorine Releases in Coastal Zone of Alexandra: Spatial-Ecological Modeling
Authors: Mohammed El Raey, Moustafa Osman Mohammed
Abstract:
Spatial-ecological modeling relates sustainable dispersion to social development. Sustainability within a spatial-ecological model gives attention to urban environments in design review management to comply with Earth’s System. Natural exchange patterns of ecosystems follow consistent, periodic cycles that preserve energy and material flows in Earth’s System. The probabilistic risk assessment (PRA) technique is utilized to assess the safety of the industrial complex. The other analytical approach is Failure Mode and Effects Analysis (FMEA) for critical components. The plant safety parameters are identified for engineering topology as employed in the safety assessment of industrial ecology. In particular, the most severe accidental release of hazardous gas is postulated, analyzed and assessed in the industrial region. The IAEA safety assessment procedure is used to estimate the duration and rate of discharge of liquid chlorine. The ecological model of plume dispersion width and chlorine gas concentration in the downwind direction is determined using the Gaussian plume model in urban and rural areas and presented with SURFER®. The prediction of accident consequences is traced as risk contour lines of concentration. The local greenhouse effect is predicted, with relevant conclusions. The spatial-ecological model also predicts distribution schemes from the perspective of pollutants, considering multiple factors in a multi-criteria analysis. The data extend input–output analysis to evaluate spillover effects, and Monte Carlo simulations and sensitivity analysis were conducted. Their unique structure is balanced within “equilibrium patterns”, such as the biosphere, and collectively forms a composite index of many distributed feedback flows. These dynamic structures are related through their physical and chemical properties and enable a gradual, prolonged incremental pattern. While this spatial model structure argues from ecology, resource savings, static load design, financial and other pragmatic reasons, the outcomes are not decisive from an artistic/architectural perspective. The hypothesis is an attempt to unify analytic and analogical spatial structure for developing urban environments using optimization software, applied as an example of an integrated industrial structure where the process is based on engineering topology as an optimization approach to systems ecology.Keywords: spatial-ecological modeling, spatial structure orientation impact, composite structure, industrial ecology
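For readers unfamiliar with the Gaussian plume model referred to above, the sketch below evaluates the standard ground-level form of the equation for a hypothetical continuous chlorine release. The emission rate, wind speed, release height and the Briggs-type rural dispersion coefficients for neutral stability are illustrative assumptions, not the study's inputs or results.

```python
import numpy as np

def gaussian_plume_ground(Q, u, x, y, H):
    """Ground-level (z = 0) concentration in kg/m^3 from a continuous point source.

    Q: emission rate (kg/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind distance (m), H: effective release height (m).
    Dispersion coefficients: Briggs-type rural fits for neutral stability (assumed).
    """
    sigma_y = 0.08 * x / np.sqrt(1.0 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1.0 + 0.0015 * x)
    return (Q / (2.0 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * 2.0 * np.exp(-H**2 / (2.0 * sigma_z**2)))   # ground-reflection term at z = 0

# Hypothetical release: 5 kg/s of chlorine at 3 m height, 4 m/s wind.
for x in (100, 500, 1000, 2000):
    c = gaussian_plume_ground(Q=5.0, u=4.0, x=x, y=0.0, H=3.0)
    print(f"x = {x:5d} m   centerline concentration = {c * 1e3:8.3f} g/m^3")
```

Contour maps like those produced with SURFER® follow from evaluating the same function on an (x, y) grid.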
Procedia PDF Downloads 79564 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia
Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger
Abstract:
Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the scoring systems proposed by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's and a radiologist’s interpretation of NVC on MRI. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients’ hospital records and the neurosurgeon’s correspondence from perioperative clinic reviews. Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. Scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 1.81 (95%CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon’s impression of NVC had an OR of 2.96 (95%CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 3.41 (95%CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon’s impression of NVC had an OR of 3.96 (95%CI 3.01-4.65, p=0.042). Conclusion: The composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon’s interpretation of NVC on MRI, when compared with the radiologist’s, had a greater correlation with pain-free outcomes 1 year post-MVD.Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia
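To make the statistical procedure concrete, the sketch below fits a logistic regression of a dichotomised composite score (e.g. a Panczykowski score above the cutoff) against a binary pain-free outcome and reports the odds ratio with its 95% confidence interval, in the same form as the results quoted above. The data are synthetic and the effect size is an arbitrary assumption; this illustrates the method only, not the study's dataset.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Synthetic illustration only -- not the study data.
n = 95
score_above_cutoff = rng.integers(0, 2, size=n)              # e.g. Panczykowski score > 3
logit_p = -0.4 + 0.6 * score_above_cutoff                    # assumed effect size
pain_free = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))  # 1 = pain-free at 1 year

X = sm.add_constant(score_above_cutoff.astype(float))
fit = sm.Logit(pain_free, X).fit(disp=False)

odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR for score above cutoff: {odds_ratio:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f}), p = {fit.pvalues[1]:.3f}")
```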
Procedia PDF Downloads 72563 The Use of Brachytherapy in the Treatment of Liver Metastases: A Systematic Review
Authors: Mateusz Bilski, Jakub Klas, Emilia Kowalczyk, Sylwia Koziej, Katarzyna Kulszo, Ludmiła Grzybowska-Szatkowska
Abstract:
Background: Liver metastases are a common complication of primary solid tumors and significantly reduce patient survival. In the era of increasing diagnosis of oligometastatic disease and oligoprogression, methods of local treatment of metastases, i.e. metastasis-directed therapy (MDT), are becoming more important, and implementation of such treatment can be considered for liver metastases. To date, the mainstay of treatment for oligometastatic disease has been surgical resection, but not all patients qualify for the procedure. As an alternative to surgical resection, radiotherapy techniques have become available, including stereotactic body radiation therapy (SBRT) and high-dose interstitial brachytherapy (iBT). iBT is an invasive method that delivers very high doses of radiation from the inside of the tumor outwards. This technique provides better tumor coverage than SBRT while having little impact on surrounding healthy tissue and eliminates some concerns involving respiratory motion. Methods: We conducted a systematic review of the scientific literature on the use of brachytherapy in the treatment of liver metastases from 2018 to 2023 using PubMed and ResearchGate, according to PRISMA rules. Results: From 111 articles, 18 publications containing information on 729 patients with liver metastases were selected. iBT has been shown to provide high rates of tumor control. Among 14 patients with 54 unresectable RCC liver metastases, after iBT the LTC was 92.6% during a median follow-up of 10.2 months, and PFS was 3.4 months. In an analysis of 167 patients after treatment with a single fractional dose of 15-25 Gy with brachytherapy, at 6- and 12-month follow-up, LRFS rates of 88.4-88.7% and 70.7-71.5%, PFS of 78.1% and 53.8%, and OS of 92.3-96.7% and 76.3-79.6%, respectively, were achieved. No serious complications were observed in any of the patients. Distant intrahepatic progression occurred later in patients with unresectable liver metastases after brachytherapy (PFS: 19.80 months) than in HCC patients (PFS: 13.50 months). A significant difference in LRFS between CRC patients (84.1% vs. 50.6%) and other histologies (92.4% vs. 92.4%) was noted, suggesting that a higher treatment dose is necessary for CRC patients. The average target dose for metastatic colorectal cancer was 40-60 Gy (compared to 100-250 Gy for HCC). To better assess sensitivity to therapy and predict side effects, it has been suggested that humoral mediators be evaluated. It was also shown that baseline levels of TNF-α, MCP-1 and VEGF, as well as NGF and CX3CL, correlated with both tumor volume and radiation-induced liver damage, one of the most serious complications of iBT, indicating their potential role as biomarkers of therapy outcome. Conclusions: The use of brachytherapy in the treatment of liver metastases of various cancers appears to be an interesting and relatively safe therapeutic alternative to surgery. An important challenge remains the selection of an appropriate brachytherapy method and radiation dose for the initial tumor type from which the metastasis originated.Keywords: liver metastases, brachytherapy, CT-HDRBT, iBT
Procedia PDF Downloads 113562 Pathogenic Escherichia Coli Strains and Their Antibiotic Susceptibility Profiles in Cases of Child Diarrhea at Addis Ababa University, College of Health Sciences, Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia
Authors: Benyam Zenebe, Tesfaye Sisay, Gurja Belay, Workabeba Abebe
Abstract:
Background: The prevalence and antibiogram of pathogenic E. coli strains, which cause diarrhea, vary from region to region, and even within countries in the same geographical area. In Ethiopia, diagnostic approaches to E. coli-induced diarrhea in children less than five years of age are not standardized. The aim of this study was to determine the involvement of pathogenic E. coli strains in child diarrhea and to determine the antibiograms of the isolates in children less than 5 years of age with diarrhea at Addis Ababa University College of Health Sciences Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia. Methods: A purposive study that included 98 diarrheic children less than five years of age was conducted at Addis Ababa University College of Health Sciences, Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia to detect pathogenic E. coli biotypes. Stool culture was used to identify presumptive E. coli isolates. Presumptive isolates were confirmed by biochemical tests, and antimicrobial susceptibility tests were performed on confirmed E. coli isolates by the disk diffusion method. DNA was extracted from confirmed isolates by a heating method and subjected to polymerase chain reaction (PCR) for the presence of virulence genes. Amplified PCR products were analyzed by agarose gel electrophoresis. Data on child demographics and clinical conditions were collected using administered questionnaires. The prevalence of E. coli strains among the diarrheic children, the prevalence of pathogenic strains among the E. coli isolates along with their susceptibility profiles, and the distribution of pathogenic E. coli biotypes among age groups and between the sexes were determined using descriptive statistics. Results: Out of 98 stool specimens collected from diarrheic children less than 5 years of age, 75 presumptive E. coli isolates were identified by culture; further confirmation by biochemical tests showed that only 56 of the isolates were E. coli; 29 of the isolates were found in male children and 27 of them in female children. Of these E. coli isolates, 25 pathotypes belonging to different classes of pathogenic strains (STEC, EPEC, EHEC, EAEC) were detected by using the PCR technique. Pathogenic E. coli exhibited high rates of antibiotic resistance to many of the antibiotics tested. Moreover, they exhibited multiple drug resistance. Conclusion: This study found that the isolation rate of E. coli and the involvement of antibiotic-resistant pathogenic E. coli in diarrheic children are prominent, and hence focus should be given to the diagnosis and antimicrobial sensitivity testing of pathogenic E. coli at Addis Ababa University College of Health Sciences Tikur Anbessa Specialized Hospital. Among the antibiotics tested, cefotetan could be a drug of choice to treat E. coli.Keywords: antibiotic susceptibility profile, children, diarrhea, E. coli, pathogenic
Procedia PDF Downloads 231561 Airborne CO₂ Lidar Measurements for Atmospheric Carbon and Transport: America (ACT-America) Project and Active Sensing of CO₂ Emissions over Nights, Days, and Seasons 2017-2018 Field Campaigns
Authors: Joel F. Campbell, Bing Lin, Michael Obland, Susan Kooi, Tai-Fang Fan, Byron Meadows, Edward Browell, Wayne Erxleben, Doug McGregor, Jeremy Dobler, Sandip Pal, Christopher O'Dell, Ken Davis
Abstract:
The Active Sensing of CO₂ Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center instrument funded by NASA’s Science Mission Directorate that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO₂) mixing ratios in support of the NASA ASCENDS mission. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. The ACES design demonstrates advanced technologies critical for developing an airborne simulator and a spaceborne instrument with lower size, mass, and power requirements on the platform and with improved performance. The Atmospheric Carbon and Transport – America (ACT-America) is an Earth Venture Suborbital-2 (EVS-2) mission sponsored by the Earth Science Division of NASA’s Science Mission Directorate. A major objective is to enhance knowledge of the sources/sinks and transport of atmospheric CO₂ through the application of remote and in situ airborne measurements of CO₂ and other atmospheric properties across spatial and temporal scales. ACT-America consists of five campaigns to measure regional carbon and evaluate transport under various meteorological conditions in three regional areas of the Continental United States. Regional CO₂ distributions in the lower atmosphere were observed from the C-130 aircraft by the Harris Corp. Multi-Frequency Fiber Laser Lidar (MFLL) and the ACES lidar. The airborne lidars provide unique data that complement the more traditional in situ sensors. This presentation shows the applications of CO₂ lidars in support of these science needs.Keywords: CO₂ measurement, IMCW, CW lidar, laser spectroscopy
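The column measurement principle shared by IM-CW and pulsed integrated-path differential-absorption (IPDA) lidars can be summarised in a few lines. The sketch below applies the generic two-wavelength (online/offline) relation, not the ACES or MFLL retrieval algorithm; the received powers, transmitted energies and the integrated weighting function are illustrative assumptions.

```python
import numpy as np

def differential_optical_depth(p_on, p_off, e_on, e_off):
    """Round-trip differential optical depth from received powers P and
    transmitted energies E at the online and offline wavelengths."""
    return 0.5 * np.log((p_off * e_on) / (p_on * e_off))

def column_xco2_ppm(delta_od, iwf_per_ppm):
    """Column-average dry-air CO2 mixing ratio, given a precomputed integrated
    weighting function (per ppm) assumed known from meteorological data."""
    return delta_od / iwf_per_ppm

# Hypothetical signals and weighting function.
d_od = differential_optical_depth(p_on=0.82, p_off=1.00, e_on=1.0, e_off=1.0)
print(f"differential optical depth = {d_od:.4f}")
print(f"XCO2 ~ {column_xco2_ppm(d_od, iwf_per_ppm=2.45e-4):.1f} ppm")
```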
Procedia PDF Downloads 160560 The Levels of Neurosteroid 7β-Hydroxy-Epiandrosterone in Men and Pregnant Women
Authors: J. Vitku, L. Kolatorova, T. Chlupacova, J. Heracek, M. Hill, M. Duskova, L. Starka
Abstract:
Background: 7β-hydroxy-epiandrosterone (7β-OH-EpiA) is an endogenous steroid that has been shown to exert neuroprotective and anti-inflammatory effects in vitro as well as in animal models. However, to the best of our knowledge, no information is available about the concentration of this androgen metabolite in the human population. The aim of the study was to measure and compare levels of 7β-OH-EpiA in men and pregnant women in different biological fluids and to evaluate the relationship between 7β-OH-EpiA in men and their sperm quality. Methods: First, a sensitive isotope-dilution high-performance liquid chromatography-mass spectrometry method for measurement of 7β-OH-EpiA in different biological fluids was developed. Validation of the method met the requirements of FDA guidelines. Afterwards, 7β-OH-EpiA was analysed in the plasma and seminal plasma of 191 men with different degrees of infertility (healthy men, lightly infertile men, moderately infertile men, severely infertile men). Furthermore, the levels of 7β-OH-EpiA were measured in the plasma of 34 pregnant women in the 37th week of gestation and in the corresponding cord plasma, which reflects steroid levels in the fetus. Results: Concentrations of 7β-OH-EpiA in seminal plasma were significantly higher in severely infertile men in comparison with healthy men and lightly infertile men. The same trend was observed when blood plasma was evaluated. Furthermore, plasma 7β-OH-EpiA correlated negatively with sperm concentration (-0.215; p < 0.01) and total sperm count (-0.15; p < 0.05). Seminal 7β-OH-EpiA was negatively associated with motility (-0.26; p < 0.01), progressively motile sperm (-0.233; p < 0.01) and non-progressively motile sperm (-0.188; p < 0.05). Plasma 7β-OH-EpiA levels in men were generally higher in comparison with pregnant women. Levels of 7β-OH-EpiA were under the lower limit of quantification (LLOQ) in the majority of samples from pregnant women and cord plasma. Only 4 plasma samples from pregnant women and 7 cord blood plasma samples were above the LLOQ, and these were in the range of units of pg/ml. Conclusion: Based on the available information, this is the first study measuring 7β-OH-EpiA in human samples. 7β-OH-EpiA is associated with lower sperm quality, and its role in this field is certainly worth exploring thoroughly. Interestingly, levels of 7β-OH-EpiA in pregnant women were extremely low despite the fact that steroid levels, including androgens, are generally higher during pregnancy. Acknowledgements: This work was supported by the project MH CR 17-30528 A from the Czech Health Research Council, MH CZ - DRO (Institute of Endocrinology - EU, 00023761) and by the MEYS CR (OP RDE, Excellent research - ENDO.CZ).Keywords: 7β-hydroxy-epiandrosterone, steroid, sperm quality, pregnancy
Procedia PDF Downloads 253559 Prevalence and Associated Risk Factors of Age-Related Macular Degeneration in the Retina Clinic at a Tertiary Center in Makkah Province, Saudi Arabia: A Retrospective Record Review
Authors: Rahaf Mandura, Fatmah Abusharkh, Layan Kurdi, Rahaf Shigdar, Khadijah Alattas
Abstract:
Introduction: Age-related macular degeneration (AMD) in older individuals is a serious health issue that severely impacts the quality of life of millions globally. In 2020, AMD was the fourth leading cause of blindness worldwide. The global prevalence of AMD is estimated to be around 8.7%. AMD is a progressive disease involving the macular region of the retina, and it has a complex pathophysiology. Retinal pigment epithelium (RPE) cell dysfunction is a crucial step in the pathway leading to irreversible degeneration of photoreceptors, with yellowish lipid-rich, protein-containing drusen deposits accumulating between Bruch's membrane and the RPE. Furthermore, lipofuscinogenesis, drusogenesis, inflammation, and neovascularization are the four main processes responsible for the formation of the two types of AMD: the wet (exudative, neovascular) and dry (non-exudative, geographic atrophy) types. We retrospectively evaluated the prevalence of AMD among patients visiting the retina clinic at King Abdulaziz University Hospital (Jeddah, Makkah Province, Saudi Arabia) to identify the commonly associated risk factors of AMD. Methods: The records of 3,067 individuals from 2017 to 2021 were reviewed. Of these, 1,935 satisfied the inclusion criteria and were included in this study. We excluded all patients below 18 years of age and those who did not undergo fundus imaging or did not attend their booked appointments, follow-ups, treatments, and referrals. Results: The prevalence of AMD among the patients was 4%. The age of patients with AMD was significantly greater than that of those without AMD (72.4 ± 9.8 years vs. 57.2 ± 15.5 years; p < 0.001). Participants with a family history of AMD tended to have the disease more than those without such a history (85.7% vs. 45%; p = 0.043). Ex- and current smokers were more likely to have AMD than non-smokers (34% and 18.6% vs. 7.2%; p < 0.001). Patients with hypertension and those without type 1 diabetes were at a higher risk of developing AMD than those without hypertension (5.5% vs. 2.8%; p = 0.002) and those with type 1 diabetes (4.2% vs. 0.8%; p = 0.040), respectively. In contrast, sex, nationality, type 2 diabetes, and an abnormal lipid profile were not significantly associated with AMD. Regarding the clinical characteristics of AMD cases, most cases (70.4%) were of the dry type and affected both eyes (77.2%). The disease duration was ≥5 years in 43.1% of the patients. The most frequent chronic diseases associated with AMD were type 2 diabetes (69.1%), hypertension (61.7%), and dyslipidemia (18.5%). Conclusion: In summary, our single tertiary center study showed that AMD is widely prevalent in Jeddah, Saudi Arabia (4%) and linked to a wide range of risk factors. Some of these are modifiable risk factors that can be adjusted to help reduce the occurrence of AMD. Furthermore, this study has shown the importance of screening and follow-up of family members of patients with AMD to promote early detection of and intervention for AMD. We recommend conducting further research on AMD in Saudi Arabia. Concerning the study design, a community-based cross-sectional study would be more helpful for assessing the disease's prevalence. Finally, recruiting a larger sample size is required for more accurate estimation.Keywords: age-related macular degeneration, prevalence, risk factor, dry AMD
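As a small illustration of the descriptive analysis reported above, the sketch below computes an overall prevalence and a Pearson chi-square test on a 2×2 table of hypertension versus AMD. The cell counts are hypothetical, chosen only to roughly echo the percentages quoted in the abstract; they are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = hypertension yes/no, columns = AMD yes/no.
table = [[40, 690],    # hypertensive: 40 AMD cases out of 730 (~5.5%)
         [34, 1171]]   # non-hypertensive: 34 AMD cases out of 1205 (~2.8%)

chi2, p, dof, expected = chi2_contingency(table)
total_amd = table[0][0] + table[1][0]
total_n = sum(sum(row) for row in table)
print(f"overall AMD prevalence = {100 * total_amd / total_n:.1f}%")
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```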
Procedia PDF Downloads 40558 Spectrum of Bacteria Causing Oral and Maxillofacial Infections and Their Antibiotic Susceptibility among Patients Attending Muhimbili National Hospital
Authors: Sima E. Rugarabamu, Mecky I. Matee, Elison N. M. Simon
Abstract:
Background: In Tanzania, bacteriological studies of the etiological agents of oro-facial infections are very limited, and very few have investigated anaerobes. The aim of this study was to determine the spectrum of bacterial agents involved in oral and maxillofacial infections in patients attending Muhimbili National Hospital, Dar es Salaam, Tanzania. Method: This was a hospital-based descriptive cross-sectional study conducted in the Department of Oral and Maxillofacial Surgery of the Muhimbili National Hospital in Dar es Salaam, Tanzania, from 1st January 2014 to 31st August 2014. Seventy (70) patients with various forms of oral and maxillofacial infections were recruited for the study. The study participants were interviewed using a prepared questionnaire after giving their consent. Pus aspirate was cultured on blood agar, chocolate agar and MacConkey agar and incubated aerobically at 37°C. Imported blood agar was used for anaerobic culture, with plates incubated at 37°C in anaerobic jars in an atmosphere generated using commercial gas-generating kits in accordance with the manufacturer’s instructions. Plates were incubated at 37°C for 24 hours for aerobic cultures and 48 hours for anaerobic cultures. Gram-negative rods were identified using API 20E, while all other isolates were identified by conventional biochemical tests. Antibiotic sensitivity testing of the isolated aerobic and anaerobic bacteria was performed by disk diffusion, agar dilution and E-test using routine and commercially available antibiotics used to treat oro-facial infections. Results: The study comprised 41 (58.5%) males and 29 (41.5%) females with a mean age of 32 years (SD ±15.1) and a range of 19 to 70 years. A total of 161 bacterial strains were isolated from specimens obtained from 70 patients, an average of 2.3 isolates per patient. Of these, 103 were aerobic organisms and 58 were strict anaerobes. A complex mix of strict anaerobes and facultative anaerobes accounted for 87% of all infections. The most frequent aerobes isolated were Streptococcus spp. 70 (70%), followed by Staphylococcus spp. 18 (18%). Other organisms such as Klebsiella spp. 4 (4%), Proteus spp. 5 (5%) and Pseudomonas spp. 2 (2%) were also seen. The anaerobic group was dominated by Prevotella spp. 25 (43%), followed by Peptostreptococcus spp. 18 (31%); other isolates were Pseudomonas spp. 2 (1%), black-pigmented Porphyromonas spp. 4 (5%), Fusobacterium spp. 3 (3%) and Bacteroides spp. 5 (8%). The majority of these organisms were sensitive to amoxicillin (98%), gentamycin (89%), and ciprofloxacin (100%). A 40% resistance to metronidazole was observed in Bacteroides spp.; otherwise, this drug and others displayed good activity against anaerobes. Conclusions: Oral and maxillofacial infections at Muhimbili National Hospital are mostly caused by Streptococcus spp. and Prevotella spp. Strict anaerobes accounted for 36% of all isolates. The profile of isolates should assist in selecting empiric therapy for infections of the oral and maxillofacial region. Inclusion of antimicrobial agents against anaerobic bacteria is highly recommended.Keywords: bacteria, oral and maxillofacial infections, antibiotic susceptibility, Tanzania
Procedia PDF Downloads 330557 Ankle Fracture Management: A Unique Cross Departmental Quality Improvement Project
Authors: Langhit Kurar, Loren Charles
Abstract:
Introduction: In light of the recently published BOAST 12 guidance (August 2016) on the management of ankle fractures, this project aimed to highlight key discrepancies throughout the care trajectory, from admission to the point of discharge, at a district general hospital. A wide breadth of data covering three key domains (accident and emergency, radiology, and orthopaedic surgery) was subsequently stratified, and recommendations on note documentation and outpatient follow-up were made. Methods: A retrospective twelve-month audit was conducted reviewing the results of ankle fracture management in 37 patients. The inclusion criterion was all patients seen at the Darent Valley Hospital (DVH) emergency department with radiographic evidence of an ankle fracture. The exclusion criteria were all patients managed solely by nursing staff or having sustained a purely ligamentous injury. Medical notes, including discharge summaries, and the PACS online radiographic tool were used for data extraction. Results: Cross-examination of the A & E domain revealed limited awareness of the recently published BOAST 12 guidance, including the requirements to document skin integrity and neurovascular assessment. This had direct implications, as it would have changed the surgical plan for acutely compromised patients. The majority of results obtained from the radiographic domain were satisfactory, with appropriate X-rays taken in over 95% of cases. However, due to time pressures within A & E, patients were often left in a backslab without a post-manipulation X-ray. Poorly reduced fractures were subsequently left for a long period, resulting in swollen ankles and a delay to surgical intervention. This had knock-on implications for prolonged inpatient stay, resulting in hospital-acquired co-morbidity, including pressure sores. Discussion: The audit has highlighted several areas for improvement throughout the care trajectory, from review in the emergency department to follow-up as an outpatient. This has prompted the creation of an algorithm to ensure that patients with significant fractures presenting to the emergency department are seen promptly and treatment is expedited as per the recent guidance. This includes timing for X-rays taken in A & E. Re-audit has shown significant improvement in both documentation at the time of presentation and appropriate follow-up strategies. Within the orthopaedic domain, we are in the process of creating an ankle fracture pathway to ensure imaging and weight-bearing status are made clear to the consulting clinicians in an outpatient setting. Significance/Clinical Relevance: As a result of the ankle fracture algorithm, we have adapted the BOAST 12 guidance to shape an intrinsic pathway that not only improves patient management within the emergency department but also creates a standardised format for follow-up.Keywords: ankle, fracture, BOAST, radiology
Procedia PDF Downloads 178556 Isolation, Identification and Measurement of Cottonseed Oil Gossypol in the Treatment of Drug-Resistant Cutaneous Leishmaniasis
Authors: Sara Taghdisi, Mehrosadat Mirmohammadi, Mostafa Mokhtarian, Mohammad Hossein Pazandeh
Abstract:
Leishmaniasis is one of the 10 most important diseases listed by the World Health Organization, causing health problems in more than 90 countries. Over one billion people on almost every continent are at risk of these diseases. The present human study was performed to evaluate the therapeutic effect of the cotton plant on cutaneous leishmaniasis lesions. Firstly, the cotton seeds were cleaned and ground into smaller particles. In the second step, the oil was extracted from the seeds by the cold-press method. In order to separate the bioactive compound, the oil was saponified and its gossypol was hydrolyzed and crystallized. Finally, the therapeutic effect of cottonseed oil on cutaneous leishmaniasis was investigated. In the current project, gossypol was extracted from the cottonseed oil of Golestan beach varieties by a liquid-liquid extraction method over 120 minutes in the presence of phosphoric acid, then crystallized in darkness using acetic acid and isolated as gossypol acetic acid. The efficiency of the extracted crystal was obtained at 1.28 ± 0.12. The cotton plant could thus be efficient in the treatment of cutaneous leishmaniasis. This double-blind, randomized, controlled clinical trial was performed on 88 cases of leishmaniasis wounds. Patients were randomly divided into two groups of 44 cases, and both groups received conventional treatment. In addition to the usual treatment (glucantime), the first group received cottonseed oil and the control group received placebo. The results of the present study showed that the lesion surface before the intervention and in the first to fourth weeks after the intervention was not significantly different between the two groups (p-value > 0.05), but the lesion surface in the intervention group in the eighth and twelfth weeks was smaller than in the control group (p-value < 0.05). This study showed that the improvement of leishmaniasis lesions using the topical cotton plant preparation in the eighth and twelfth weeks after the intervention was significantly greater than in the control group. The most common chemical drugs for cutaneous leishmaniasis treatment are sodium stibogluconate and meglumine antimonate, which not only have relatively many side effects but to which some species of the Leishmania genus have also become resistant. Therefore, a plant-based bioactive compound such as cottonseed oil can be useful, with fewer side effects.Keywords: cottonseed oil, crystallization, gossypol, leishmaniasis
Procedia PDF Downloads 58