Search results for: optimal scheme
599 Dosimetry in Interventional Radiology Examinations for Occupational Exposure Monitoring
Authors: Ava Zarif Sanayei, Sedigheh Sina
Abstract:
Interventional radiology (IR) uses imaging guidance, including X-rays and CT scans, to deliver therapy precisely. Most IR procedures are performed under local anesthesia and start with a small needle being inserted through the skin, which may be called pinhole surgery or image-guided surgery. There is increasing concern about radiation exposure during interventional radiology procedures due to procedure complexity. The basic aim of optimizing radiation protection, as outlined in ICRP 139, is to strike a balance between image quality and radiation dose while maximizing benefits, ensuring that diagnostic interpretation is satisfactory. This study aims to estimate the equivalent doses to the main trunk of the body for the interventional radiologist and the superintendent using LiF:Mg,Ti (TLD-100) chips at the IR department of a hospital in Shiraz, Iran. In the initial stage, the dosimeters were calibrated using various phantoms. Afterward, a group of dosimeters was prepared and then used for three months. To measure the personal equivalent dose to the body, three TLD chips were placed in a tissue-equivalent batch and worn under a protective lead apron. At the end of this period, the TLDs were read out by a TLD reader. The results revealed that these individuals received equivalent doses of 387.39 and 145.11 µSv, respectively. The findings of this investigation revealed that the total radiation exposure of the staff was below the annual limit for occupational exposure. Nevertheless, it is imperative to implement appropriate radiation protection measures. Although the dose received by the interventional radiologist is noticeable, this may be attributable to the use of conventional equipment with over-couch X-ray tubes for interventional procedures.
It is therefore important to use dedicated equipment and protective means, such as glasses and screens, whenever compatible with the intervention, when they are available, or to have them fitted to the equipment if they are not present. Based on the results, inappropriate positioning of staff contributed to the increased dose to the radiologist. Manufacturing and installing movable lead curtains with a thickness of 0.25 millimeters can effectively minimize the radiation dose to the body. Providing adequate training on radiation safety principles, particularly for technologists, can be an optimal approach to further decreasing exposure.
Keywords: interventional radiology, personal monitoring, radiation protection, thermoluminescence dosimetry
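As a quick illustrative check of the dose figures above (a sketch, not part of the study; the 20 mSv/year ICRP occupational limit is assumed here to be the relevant reference value), the three-month TLD readings can be annualized and compared to the limit:

```python
# Sketch: annualize the three-month TLD readings reported in the abstract and
# compare them to an assumed ICRP occupational limit of 20 mSv/year.
# The readings (in microsievert) come from the abstract; the choice of limit
# is an assumption about which ICRP quantity applies.

QUARTERLY_DOSE_USV = {"interventional_radiologist": 387.39, "superintendent": 145.11}
ANNUAL_LIMIT_USV = 20_000  # 20 mSv expressed in microsievert

def annualized_dose(quarterly_usv: float) -> float:
    """Scale a three-month reading to a full year (4 quarters)."""
    return quarterly_usv * 4

for role, dose in QUARTERLY_DOSE_USV.items():
    yearly = annualized_dose(dose)
    print(f"{role}: {yearly:.2f} uSv/year, "
          f"{100 * yearly / ANNUAL_LIMIT_USV:.2f}% of the annual limit")
```

Even annualized, both readings remain well under the assumed limit, consistent with the abstract's conclusion.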
Procedia PDF Downloads 62
598 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic
Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx
Abstract:
Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been identified in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was made with the open source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume Of Fluid (VOF) method was used (interFoam) to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior using Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam.
Those simulations shed new light on the cone behavior within the foam and allow the computation of the shearing, pressure, and velocity of the fluid, enabling a better evaluation of the efficiency of the cones as foam breakers. This study helps clarify the mechanisms behind foam breaker performance, at least in part, using modern CFD techniques.
Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM
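The Herschel-Bulkley constitutive law used to model the foam can be sketched in a few lines; the parameter values below (yield stress tau0, consistency K, flow index n) are illustrative assumptions, not the study's fitted values:

```python
# Sketch of the Herschel-Bulkley model for a shear-thinning fluid:
# shear stress tau = tau0 + K * gamma_dot**n, apparent viscosity eta = tau / gamma_dot.
# All parameter values here are illustrative assumptions.

def herschel_bulkley_stress(gamma_dot: float, tau0: float, K: float, n: float) -> float:
    """Shear stress tau = tau0 + K * gamma_dot**n (valid for gamma_dot > 0)."""
    return tau0 + K * gamma_dot ** n

def apparent_viscosity(gamma_dot: float, tau0: float, K: float, n: float) -> float:
    """Apparent viscosity eta = tau / gamma_dot; diverges as gamma_dot -> 0."""
    return herschel_bulkley_stress(gamma_dot, tau0, K, n) / gamma_dot

# With n < 1 the apparent viscosity decreases with shear rate (shear thinning):
low_shear = apparent_viscosity(1.0, tau0=10.0, K=5.0, n=0.4)
high_shear = apparent_viscosity(100.0, tau0=10.0, K=5.0, n=0.4)
print(low_shear, high_shear)  # viscosity drops as the shear rate rises
```

This is the behavior that distinguishes the foam model here from the Newtonian fluids used in most of the prior literature: near the fast-moving cone wall the apparent viscosity is much lower than in the quiescent foam.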
Procedia PDF Downloads 205
597 Purification and Characterization of a Novel Extracellular Chitinase from Bacillus licheniformis LHH100
Authors: Laribi-Habchi Hasiba, Bouanane-Darenfed Amel, Drouiche Nadjib, Pausse André, Mameri Nabil
Abstract:
Chitin, a linear 1,4-linked N-acetyl-d-glucosamine (GlcNAc) polysaccharide, is the major structural component of fungal cell walls, insect exoskeletons, and the shells of crustaceans. It is one of the most abundant naturally occurring polysaccharides and has attracted tremendous attention in the fields of agriculture, pharmacology, and biotechnology. Each year, a vast amount of chitin waste is released from the aquatic food industry, where crustaceans (prawn, crab, shrimp, and lobster) constitute one of the main agricultural products. This creates a serious environmental problem. This linear polymer can be hydrolyzed by bases, acids, or enzymes such as chitinase. In this context, an extracellular chitinase (ChiA-65) was produced and purified from the newly isolated strain LHH100. Pure protein was obtained after heat treatment and ammonium sulphate precipitation followed by Sephacryl S-200 chromatography. Based on matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF/MS) analysis, the purified enzyme is a monomer with a molecular mass of 65,195.13 Da. The sequence of the 27 N-terminal residues of the mature ChiA-65 showed high homology with family-18 chitinases. Optimal activity was achieved at pH 4 and 75 °C. Among the inhibitors and metals tested, p-chloromercuribenzoic acid, N-ethylmaleimide, Hg2+ and Hg+ completely inhibited enzyme activity. Chitinase activity was high on colloidal chitin, glycol chitin, glycol chitosan, chitotriose, and chitooligosaccharide. Chitinase activity towards synthetic substrates of the form p-NP-(GlcNAc)n (n = 2-4) was in the order p-NP-(GlcNAc)2 > p-NP-(GlcNAc)4 > p-NP-(GlcNAc)3. Our results suggest that ChiA-65 preferentially hydrolyzed the second glycosidic link from the non-reducing end of (GlcNAc)n. ChiA-65 obeyed Michaelis-Menten kinetics, with Km and kcat values of 0.385 mg colloidal chitin/ml and 5000 s−1, respectively.
ChiA-65 exhibited remarkable biochemical properties, suggesting that this enzyme is suitable for the bioconversion of chitin waste.
Keywords: Bacillus licheniformis LHH100, characterization, extracellular chitinase, purification
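The Michaelis-Menten constants reported above can be plugged directly into the rate law; a minimal sketch (the enzyme concentration is an illustrative assumption, not a value from the study):

```python
# Sketch of the Michaelis-Menten rate law with the constants reported for ChiA-65
# (Km = 0.385 mg colloidal chitin/ml, kcat = 5000 1/s). The enzyme concentration
# default below is an illustrative assumption.

KM = 0.385   # mg colloidal chitin / ml
KCAT = 5000  # 1/s

def reaction_rate(substrate: float, enzyme: float = 1.0) -> float:
    """v = kcat * [E] * [S] / (Km + [S]); approaches kcat*[E] as [S] >> Km."""
    return KCAT * enzyme * substrate / (KM + substrate)

# At [S] = Km the rate is exactly half of Vmax = kcat * [E]:
print(reaction_rate(KM))  # 2500.0
```

The half-maximal rate at [S] = Km is the defining property of the model and a quick sanity check on any reported (Km, kcat) pair.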
Procedia PDF Downloads 437
596 Optimization of Waste Plastic to Fuel Oil Plants' Deployment Using Mixed Integer Programming
Authors: David Muyise
Abstract:
Mixed Integer Programming (MIP) is an approach that involves the optimization of a range of decision variables in order to minimize or maximize a particular objective function. The main objective of this study was to apply the MIP approach to optimize the deployment of waste plastic to fuel oil processing plants in Uganda. The processing plants are meant to reduce plastic pollution by pyrolyzing the waste plastic into a cleaner fuel that can be used to power diesel/paraffin engines, so as (1) to reduce the negative environmental impacts associated with plastic pollution and (2) to narrow the energy gap by utilizing the fuel oil. A programming model was established and tested in two case study applications: small-scale applications in rural towns and large-scale deployment across major cities in the country. In order to design the supply chain, optimal decisions on the types of waste plastic to be processed, the size, location, and number of plants, and the downstream fuel applications were made concurrently based on the payback period, investor requirements for capital cost, and the production cost of fuel and electricity. The model comprises qualitative data gathered from waste plastic pickers at landfills and from potential investors, and quantitative data obtained from primary research. The study found that a distributed system is suitable for small rural towns, whereas a centralized system is only suitable for big cities. The small towns of Kalagi, Mukono, Ishaka, and Jinja were found to be ideal locations for the deployment of distributed processing systems, whereas the cities of Kampala, Mbarara, and Gulu were found to be the ideal locations to initially deploy the centralized pyrolysis technology system.
We conclude that the model findings will be most useful to investors, engineers, plant developers, and municipalities interested in waste-plastic-to-fuel processing in Uganda and elsewhere in developing economies.
Keywords: mixed integer programming, fuel oil plants, optimisation of waste plastics, plastic pollution, pyrolyzing
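The plant-siting decision at the heart of the model can be illustrated as a tiny integer program, solved here by brute-force enumeration instead of a MIP solver. All towns, costs, and capacities below are illustrative assumptions; the study's actual model also chose plant sizes and downstream fuel applications:

```python
# Sketch: pick the cheapest subset of candidate sites whose combined processing
# capacity meets demand - the 0/1 "open a plant here?" decision of a MIP,
# enumerated exhaustively for a toy instance. All numbers are illustrative.

from itertools import combinations

SITES = {  # candidate site -> (fixed cost, capacity in tonnes of plastic/day)
    "Kalagi": (100, 5), "Mukono": (120, 6), "Ishaka": (110, 5), "Jinja": (130, 7),
}
DEMAND = 11  # tonnes/day of waste plastic that must be processed

def best_deployment():
    """Return the cheapest subset of sites whose combined capacity meets demand."""
    best, best_cost = None, float("inf")
    for r in range(1, len(SITES) + 1):
        for subset in combinations(SITES, r):
            capacity = sum(SITES[s][1] for s in subset)
            cost = sum(SITES[s][0] for s in subset)
            if capacity >= DEMAND and cost < best_cost:
                best, best_cost = subset, cost
    return best, best_cost

print(best_deployment())
```

A real MIP solver scales this same formulation to many sites, plant sizes, and transport costs, which brute force cannot.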
Procedia PDF Downloads 129
595 Multiscale Process Modeling of Ceramic Matrix Composites
Authors: Marianna Maiaru, Gregory M. Odegard, Josh Kemppainen, Ivan Gallegos, Michael Olaya
Abstract:
Ceramic matrix composites (CMCs) are typically used in applications that require long-term mechanical integrity at elevated temperatures. CMCs are usually fabricated using a polymer precursor that is initially polymerized in situ with fiber reinforcement, followed by a series of cycles of pyrolysis to transform the polymer matrix into a rigid glass or ceramic. The pyrolysis step typically generates volatile gases, which creates porosity within the polymer matrix phase of the composite. Subsequent cycles of monomer infusion, polymerization, and pyrolysis are often used to reduce the porosity and thus increase the durability of the composite. Because of the significant expense of such iterative processing cycles, new generations of CMCs with improved durability and manufacturability are difficult and expensive to develop using standard Edisonian approaches. The goal of this research is to develop a computational process-modeling-based approach that can be used to design the next generation of CMC materials with optimized material and processing parameters for maximum strength and efficient manufacturing. The process modeling incorporates computational modeling tools, including molecular dynamics (MD), to simulate the material at multiple length scales. Results from MD simulation are used to inform the continuum-level models, linking molecular-level characteristics (material structure, temperature) to bulk-level performance (strength, residual stresses). Processing parameters are optimized such that process-induced residual stresses are minimized and laminate strength is maximized. The multiscale process modeling method developed in this research can play a key role in the development of future CMCs for high-temperature and high-strength applications.
By combining multiscale computational tools and process modeling, new manufacturing parameters can be established for optimal fabrication and performance of CMCs for a wide range of applications.
Keywords: digital engineering, finite elements, manufacturing, molecular dynamics
Procedia PDF Downloads 98
594 Analysis of Socio-Economics of Tuna Fisheries Management (Thunnus Albacares Marcellus Decapterus) in Makassar Waters Strait and Its Effect on Human Health and Policy Implications in Central Sulawesi-Indonesia
Authors: Siti Rahmawati
Abstract:
Indonesia has experienced a long period of monetary economic crisis, followed by an upward trend in the price of fuel oil. This situation impacts all aspects of the tuna fishing community. For instance, the basic needs of fishing communities increase while purchasing power falls, leading to economic and social instability as well as affecting the health of fishermen's households. To understand this, the AHP method is applied to identify the model of tuna fisheries management priorities, the cold chain marketing channel, and the utilization levels that impact human health. The study is designed as development research with 180 respondents. The data were analyzed by the Analytical Hierarchy Process (AHP) method. The development of the tuna fishery business can improve production productivity through economic empowerment activities for coastal communities, improving the competitiveness of products, developing fish processing centers, and providing internal capital for the development of an optimal fishery business. From the economic aspect, the fishery business is attractive because its benefit-cost ratio is 2.86. This means that over the 10-year economic life of the project it can work well, since B/C > 1, and therefore the investment is economically viable. From the health aspect, tuna can reduce the risk of dying from heart disease by 50% because tuna contains selenium. The consumption of 100 g of tuna meets 52.9% of the body's selenium requirement and activates the antioxidant enzyme glutathione peroxidase, which can protect the body from free radicals and various cancers. The results of the analytic hierarchy process show that the quality of tuna products is the top priority for export quality, as well as quality control, in order to compete in the global market.
The implementation of the policy can increase the income of fishermen, reduce the poverty of fishermen's households, and have an impact on the health of people who have a high risk of disease.
Keywords: management of tuna, social, economic, health
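The benefit-cost test cited above (B/C = 2.86 over a 10-year project life, viable when B/C > 1) can be sketched with discounted cash flows; the yearly flow figures and discount rate below are illustrative assumptions:

```python
# Sketch of a benefit-cost ratio over a project's life: ratio of discounted
# benefits to discounted costs. The cash flows and 8% discount rate are
# illustrative assumptions; the abstract reports only the final B/C = 2.86.

def benefit_cost_ratio(benefits, costs, rate):
    """Ratio of present-valued benefits to present-valued costs (yearly flows)."""
    pv = lambda flows: sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))
    return pv(benefits) / pv(costs)

# With equal yearly flows the discount factors cancel, so the ratio reduces to
# total benefits / total costs:
ratio = benefit_cost_ratio([286] * 10, [100] * 10, rate=0.08)
print(round(ratio, 2))  # 2.86
```

With uneven flows the discounting no longer cancels, which is why the timing of benefits matters to project viability.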
Procedia PDF Downloads 316
593 Evaluation of Pozzolanic Properties of Micro and Nanofillers Origin from Waste Products
Authors: Laura Vitola, Diana Bajare, Genadijs Sahmenko, Girts Bumanis
Abstract:
About 8% of the world's CO2 emissions are produced by the concrete industry; therefore, replacing cement in concrete compositions with additives exhibiting pozzolanic activity would have a significant environmental impact. A material that contains silica (SiO2) or amorphous silica together with aluminium oxide (Al2O3) is called a pozzolana-type additive in the concrete industry. Pozzolana additives can be obtained from the recycling industry and from various production by-products, such as processed bulb borosilicate (DRL type) and lead (LB type) glass, coal combustion bottom ash, used brick pieces, and biomass ash, thus solving a disposal problem of worldwide importance while making practical use of materials previously considered unusable. In the literature, there is no concise method for quickly evaluating the pozzolanic activity of waste products without extensive research involving the production of numerous concrete compositions and samples. It is also important to understand which parameters should be measured to characterize the efficiency of waste products. Simple methods for increasing the pozzolanic activity of different types of waste products are also determined. The aim of this study is to evaluate the effectiveness of different types of waste materials and industrial by-products (coal combustion bottom ash, biomass ash, waste glass, waste kaolin, and calcined illite clays) and to determine which parameters have the greatest impact on pozzolanic activity. By using materials that were previously considered unusable and landfilled in the concrete industry, basic disposal problems will be partially solved. The optimal methods for treating waste materials and industrial by-products were identified with the purpose of increasing their pozzolanic activity and producing substitutes for cement in the concrete industry.
The use of the mentioned pozzolans allows replacing up to 20% of the required cement amount without reducing the compressive strength of the concrete.
Keywords: cement substitutes, micro and nano fillers, pozzolanic properties, specific surface area, particle size, waste products
Procedia PDF Downloads 427
592 Thermo-Mechanical Processing Scheme to Obtain Micro-Duplex Structure Favoring Superplasticity in an As-Cast and Homogenized Medium Alloyed Nickel Base Superalloy
Authors: K. Sahithya, I. Balasundar, Pritapant, T. Raghua
Abstract:
Ni-based superalloy with a nominal composition Ni-14% Cr-11% Co-5.8% Mo-2.4% Ti-2.4% Nb-2.8% Al-0.26% Fe-0.032% Si-0.069% C (all in wt%) is used for turbine discs in a variety of aero engines. As with any other superalloy, the primary processing of the as-cast material poses a major challenge due to its complex alloy chemistry. The challenge was circumvented by characterizing the different phases present in the material, optimizing the homogenization treatment, and identifying a suitable thermomechanical processing window using dynamic materials modeling. The as-cast material was subjected to homogenization at 1200 °C for a soaking period of 8 hours and quenched using different media. Water quenching (WQ) after homogenization resulted in very fine spherical γꞌ precipitates of 30-50 nm, whereas furnace cooling (FC) after homogenization resulted in a bimodal distribution of precipitates (primary gamma prime of 300 nm and secondary gamma prime of 5-10 nm). MC-type primary carbides, stable up to the melting point of the material, were found in both WQ and FC samples. The deformation behaviour of both materials below (1000-1100 °C) and above (1100-1175 °C) the gamma prime solvus was evaluated by subjecting the material to a series of compression tests at different constant true strain rates (0.0001/s to 1/s). A detailed examination of the precipitate-dislocation interaction mechanisms, carried out using TEM, revealed precipitate shearing and Orowan looping as the mechanisms governing deformation in WQ and FC, respectively. Incoherent/semi-coherent gamma prime precipitates in the FC material facilitate better workability, whereas the coherent precipitates in the WQ material contribute to higher resistance to deformation. Both materials exhibited discontinuous dynamic recrystallization (DDRX) above the gamma prime solvus temperature. The recrystallization kinetics was slower in the WQ material.
Very fine grain boundary carbides (≤ 300 nm) retarded the recrystallization kinetics in the WQ material, whereas coarse carbides (1-5 µm) facilitated particle-stimulated nucleation in the FC material. The FC material was cogged (primary hot working) at 1120 °C and 0.03/s, resulting in significant grain refinement, i.e., from 3000 μm to 100 μm. The primary processed material was subsequently subjected to intensive thermomechanical deformation, reducing the temperature by 50 °C in each processing step, with intermittent heterogenization treatment at selected temperatures aimed at simultaneous coarsening of the gamma prime precipitates and refinement of the gamma matrix grains. The heterogeneous annealing treatment resulted in gamma grains of 10 μm and gamma prime precipitates of 1-2 μm. Further thermomechanical processing of the material was carried out at 1025 °C to increase the homogeneity of the obtained micro-duplex structure.
Keywords: superalloys, dynamic material modeling, nickel alloys, dynamic recrystallization, superplasticity
Procedia PDF Downloads 121
591 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column
Authors: G. Rajapakse, S. Jayasinghe, A. Fleming
Abstract:
This paper aims to experimentally validate the control strategy used for the electrical power converters in a grid-integrated oscillating water column (OWC) wave energy converter (WEC). The particular OWC's unidirectional air turbine-generator output power results in discrete large power pulses; therefore, the system requires power conditioning prior to integration into the grid. This is achieved by using a back-to-back power converter with an energy storage system. A Li-ion battery energy storage is connected to the dc-link of the back-to-back converter using a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supply and demand powers. All three electrical power converters in the arrangement are controlled using the finite control set-model predictive control (FCS-MPC) strategy. The rectifier controller regulates the speed of the turbine at a set rotational speed, keeping the air turbine within a desirable speed range under varying wave conditions. The inverter controller maintains the output power to the grid in adherence to grid codes. The dc-dc bidirectional converter controller sets the dc-link voltage at its reference value. The software modeling of the OWC system and the FCS-MPC is carried out in MATLAB/Simulink using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at the Australian Maritime College (AMC). The hardware development and experimental validations are being carried out at the AMC Electronics laboratory. The designed FCS-MPC controllers for the power converters are separately coded in Code Composer Studio V8 and downloaded into separate Texas Instruments TIVA C Series EK-TM4C123GXL LaunchPad evaluation boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller is used to drive a 2 kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60).
The power module consists of a 3-phase inverter bridge with 600 V insulated gate bipolar transistors. A Delta standard (ASDA-B2 series) servo drive/motor coupled to a 2 kW permanent magnet synchronous generator serves as the turbine-generator. This lab-scale setup is used to obtain experimental results. The FCS-MPC is validated by comparing these experimental results with the MATLAB/Simulink simulation results in similar scenarios. The results show that, under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of the OWC-WEC system with a Li-ion battery energy storage.
Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter
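The core idea of FCS-MPC, exploited by all three converter controllers above, can be sketched in a few lines: enumerate the finite set of switching states, predict the next sample with a discrete plant model, and apply the state minimizing a tracking cost. The single-phase RL model, circuit values, and candidate voltages below are illustrative assumptions, not the OWC rig's parameters:

```python
# Sketch of one FCS-MPC step for a converter driving an RL load: enumerate the
# finite control set, predict the next current with a one-step forward-Euler
# model, and pick the switching state minimizing the tracking error.
# All circuit values are illustrative assumptions.

R, L, TS, VDC = 0.5, 10e-3, 100e-6, 400.0  # ohm, henry, seconds, volts

# A simplified finite control set: the voltage each switching state applies.
CONTROL_SET = {0: 0.0, 1: +VDC, 2: -VDC}

def predict_current(i_now: float, v_applied: float) -> float:
    """Forward-Euler prediction: i[k+1] = i[k] + Ts/L * (v - R*i[k])."""
    return i_now + (TS / L) * (v_applied - R * i_now)

def fcs_mpc_step(i_now: float, i_ref: float) -> int:
    """Return the switching state whose predicted current is closest to the reference."""
    return min(CONTROL_SET,
               key=lambda s: abs(i_ref - predict_current(i_now, CONTROL_SET[s])))

print(fcs_mpc_step(i_now=0.0, i_ref=3.0))  # picks the +Vdc state to raise the current
```

A real implementation runs this optimization every sampling period on the microcontroller, with a three-phase model and a cost function that may also weight switching frequency.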
Procedia PDF Downloads 113
590 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System
Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee
Abstract:
This work demonstrates a web crawler-based, generalized, end-to-end open domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, aiming to find an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analyzing the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated based on a trade-off between time and accuracy. This is followed by a passage-ranking process, using a model trained on the 500K queries of the MS-MARCO dataset, to extract the most relevant text passage and shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date, so correct answers predicted by the system are often judged incorrect according to automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in 2016. Use of any such dataset proves to be inefficient for questions that have time-varying answers. For illustration, if the query is "Where will the next Olympics be?", the gold answer given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were the 2020 Games held in Tokyo, that answer was correct. But if the same question is asked in 2022, then the answer is "Paris, 2024".
Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually passed to human evaluators for further validation, which is expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation
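The time-aware evaluation idea can be sketched as follows: each time-varying question stores time-stamped gold answers, and a prediction counts as correct if any of the system's top-n answers matches the gold answer valid at evaluation time. The data, dates, and exact-match rule below are illustrative assumptions, not the paper's metric definition:

```python
# Sketch of a time-aware exact-match check for a time-varying question:
# the gold answer is selected by the evaluation timestamp, then compared
# against the system's top-n predictions. Dates and answers are illustrative.

from datetime import date

# (valid_from, answer) pairs, sorted by date, for one time-varying question.
GOLD = [(date(2016, 1, 1), "tokyo"), (date(2021, 8, 9), "paris")]

def gold_at(when: date) -> str:
    """Return the gold answer in force at the given date."""
    current = GOLD[0][1]
    for valid_from, answer in GOLD:
        if valid_from <= when:
            current = answer
    return current

def correct(top_n_predictions: list, when: date) -> bool:
    """Exact match between the time-valid gold answer and any top-n prediction."""
    target = gold_at(when)
    return any(p.strip().lower() == target for p in top_n_predictions)

print(correct(["Paris", "Los Angeles"], date(2022, 6, 1)))  # True
```

The same prediction list would be judged wrong by a static 2016 gold answer, which is precisely the evaluation failure the abstract describes.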
Procedia PDF Downloads 101
589 Improvement of Ground Water Quality Index Using Citrus limetta
Authors: Rupas Kumar M., Saravana Kumar M., Amarendra Kumar S., Likhita Komal V., Sree Deepthi M.
Abstract:
The demand for water is increasing at an alarming rate due to rapid urbanization and an increase in population. Due to freshwater scarcity, groundwater has become a necessary source of potable water in major parts of the world. This problem of freshwater scarcity and groundwater dependency is particularly severe in developing countries and overpopulated regions like India. The present study aimed at evaluating the Ground Water Quality Index (GWQI), which represents the overall quality of water at a certain location and time based on water quality parameters. To evaluate the GWQI, sixteen water quality parameters have been considered, viz. colour, pH, electrical conductivity, total dissolved solids, turbidity, total hardness, alkalinity, calcium, magnesium, sodium, chloride, nitrate, sulphate, iron, manganese, and fluorides. The groundwater samples were collected from Kadapa City in Andhra Pradesh, India, and subjected to comprehensive physicochemical analysis. The high GWQI values were found to stem mainly from high values of total dissolved solids, electrical conductivity, turbidity, alkalinity, hardness, and fluorides. In the present study, Citrus limetta (sweet lemon) peel powder was used as a coagulant, and GWQI values were recorded at different concentrations to improve the GWQI. A sensitivity analysis was also carried out to determine the effect of coagulant dosage, mixing speed, and stirring time on the GWQI. The research found that the maximum percentage improvement in GWQI values is obtained when the coagulant dosage is 100 ppm, the mixing speed is 100 rpm, and the stirring time is 10 minutes. Alum was also used as a coagulant aid, and the optimal ratio of Citrus limetta to alum was identified as 3:2, which resulted in the best GWQI value. The present study proposes Citrus limetta peel powder as a potential natural coagulant to treat groundwater and improve the GWQI.
Keywords: alum, Citrus limetta, ground water quality index, physicochemical analysis
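A water quality index of the kind used for GWQI is typically a weighted-arithmetic sum: each parameter gets a quality rating q = 100 × measured / standard, and the index is the weight-normalized sum of ratings. The sketch below follows that common formulation; the parameters, weights, and permissible limits are illustrative assumptions, not the study's sixteen-parameter scheme:

```python
# Sketch of a weighted-arithmetic water quality index: quality rating
# q_i = 100 * C_i / S_i per parameter, index = sum of W_i * q_i with relative
# weights W_i = w_i / sum(w). Parameter values below are illustrative.

def gwqi(samples: dict, standards: dict, weights: dict) -> float:
    """Weighted-arithmetic index over the measured parameters; lower is better."""
    total_w = sum(weights.values())
    return sum((weights[p] / total_w) * (100.0 * samples[p] / standards[p])
               for p in samples)

sample = {"tds": 900.0, "fluoride": 1.2, "turbidity": 8.0}    # measured values
standard = {"tds": 500.0, "fluoride": 1.0, "turbidity": 5.0}  # permissible limits
weight = {"tds": 4, "fluoride": 5, "turbidity": 3}            # assigned 1-5 weights

print(round(gwqi(sample, standard, weight), 1))
```

Re-running the same computation on post-coagulation measurements is how an improvement in GWQI, like the one the study reports, would be quantified.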
Procedia PDF Downloads 227
588 Experimental Study of Hydrothermal Properties of Cool Pavements to Mitigate Urban Heat Islands
Authors: Youssef Wardeh, Elias Kinab, Pierre Rahme, Gilles Escadeillas, Stephane Ginestet
Abstract:
Urban heat islands designate a local phenomenon that appears in high-density cities, resulting in a rise of ambient temperature in the urban area compared to the neighboring rural area. Solar radiation plays an important role in this phenomenon since it is partially absorbed by materials, especially roads and parking lots. Cool pavements constitute an innovative and promising technique to mitigate urban heat islands. The cool pavements studied in this work limit the increase of the surface temperature thanks to the evaporation of water conducted through capillary pores from the humidified base to the surface exposed to solar radiation. However, the performance, or cooling capacity, of a pavement remains difficult to characterize. In this work, a new definition of the cooling capacity of a pavement is presented, and a correlation between it and the hydrothermal properties of cool pavements is revealed. Firstly, several porous concrete pavements were characterized through their hydrothermal properties, which can be related to the cooling effect, such as albedo, thermal conductivity, and water absorption. Secondly, these pavements, initially saturated and continuously supplied with water through their bases, were exposed to external solar radiation during three sunny summer days, and their surface temperatures were measured. For draining pavements, a strong second-degree polynomial correlation (R² = 0.945) was found between the cooling capacity and the term which reflects the interconnection of capillary water to the surface. Moreover, it was noticed that the cooling capacity reaches its maximum for an optimal range of capillary pores for which the capillary rise is stronger than gravity.
For non-draining pavements, a good negative linear correlation (R² = 0.828) was obtained between the cooling capacity and the term which expresses the ability to heat the capillary water by the energy stored far from the surface, and, therefore, the dominance of the evaporation process by diffusion. The latest tests showed that this process is, however, likely to be disturbed by the material's resistance to water vapor diffusion.
Keywords: urban heat islands, cool pavement, cooling capacity, hydrothermal properties, evaporation
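For the linear (non-draining) case, the reported R² is simply the square of the Pearson coefficient between the cooling capacity and the heat-storage term; a minimal sketch with illustrative data points (not the study's measurements):

```python
# Sketch of checking a reported linear correlation strength: R^2 is the square of
# the Pearson correlation coefficient between two variables. The data points
# below are illustrative assumptions, not the study's measurements.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

term = [1.0, 2.0, 3.0, 4.0, 5.0]     # hypothetical heat-storage term values
cooling = [9.1, 8.2, 6.8, 6.1, 4.9]  # hypothetical cooling capacities

r = pearson(term, cooling)
print(round(r, 3), round(r * r, 3))  # negative r, R^2 = r^2
```

A negative r with R² near the reported 0.828 would indicate, as the study found, that more deeply stored heat goes with lower cooling capacity.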
Procedia PDF Downloads 97
587 Impact of CYP3A5 Polymorphism on Tacrolimus to Predict the Optimal Initial Dose Requirements in South Indian Renal Transplant Recipients
Authors: S. Sreeja, Radhakrishnan R. Nair, Noble Gracious, Sreeja S. Nair, M. Radhakrishna Pillai
Abstract:
Background: Tacrolimus is a potent immunosuppressant used clinically for the long-term prevention of rejection of transplanted organs in liver and kidney transplant recipients, though dose optimization is poorly managed. So far, no study has been carried out on South Indian kidney transplant patients. The objective of this study is to evaluate the potential influence of a functional polymorphism in the CYP3A5*3 gene on the tacrolimus concentration/dose ratio in South Indian renal transplant patients. Materials and Methods: Twenty-five renal transplant recipients receiving tacrolimus were enrolled in this study. Their body weight, drug dosage, and therapeutic concentration of tacrolimus were recorded. All patients were on a standard immunosuppressive regimen of tacrolimus-mycophenolate mofetil along with steroids, at a starting dose of tacrolimus 0.1 mg/kg/day. CYP3A5 genotyping was performed by PCR followed by RFLP. Confirmation of the RFLP analysis and variation in the nucleotide sequence of the CYP3A5*3 gene were determined by direct sequencing using a validated automated genetic analyzer. Results: A significant association was found between the tacrolimus level per dose/kg/day and the CYP3A5 (A6986G) polymorphism in the study population. The CYP3A5 *1/*1, *1/*3, and *3/*3 genotypes were detected in 5 (20%), 5 (20%), and 15 (60%) of the 25 graft recipients, respectively. CYP3A5*3 genotypes were found to be a good predictor of the tacrolimus concentration/dose ratio in kidney transplant recipients. A significantly higher level/dose (L/D) ratio was observed among non-expressors, 9.483 ng/mL (4.5-14.1), as compared with the expressors of CYP3A5, 5.154 ng/mL (4.42-6.5). Acute rejection episodes were significantly higher for CYP3A5*1 homozygotes compared to patients with CYP3A5*1/*3 and CYP3A5*3/*3 genotypes (40% versus 20% and 13%, respectively). The dose-normalized tacrolimus concentration (ng/ml/mg/kg) was significantly lower in patients carrying the CYP3A5*1/*3 polymorphism.
Conclusion: This is the first study to extensively determine the effect of the CYP3A5*3 genetic polymorphism on tacrolimus pharmacokinetics in South Indian renal transplant recipients; it also shows that the majority of our patients carry the mutant A6986G allele in the CYP3A5*3 gene. Identification of the CYP3A5 polymorphism prior to transplantation could help determine the appropriate initial dosage of tacrolimus for each patient.
Keywords: kidney transplant patients, CYP3A5 genotype, tacrolimus, RFLP
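As a concrete illustration of the dose-normalized (concentration/dose) ratio discussed in this abstract, the calculation can be sketched as follows; the patient numbers are invented for illustration, not taken from the study cohort:

```python
# Illustration of the dose-normalized (concentration/dose) ratio used above;
# the patient numbers below are invented, not from the study cohort.
def concentration_dose_ratio(trough_ng_ml, daily_dose_mg, weight_kg):
    """Tacrolimus trough (ng/mL) per weight-adjusted daily dose (mg/kg/day)."""
    dose_per_kg = daily_dose_mg / weight_kg
    return trough_ng_ml / dose_per_kg

# e.g. a hypothetical 60 kg recipient on 6 mg/day with a 7.5 ng/mL trough
ratio = concentration_dose_ratio(7.5, 6.0, 60.0)
```

Whether the study normalized by total daily dose or by weight-adjusted dose is not fully specified in the abstract; the weight-adjusted form is assumed here.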
Procedia PDF Downloads 301
586 Iron Supplementation for Patients Undergoing Cardiac Surgery: A Systematic Review and Meta-Analysis of Randomized-Controlled Trials
Authors: Matthew Cameron, Stephen Yang, Latifa Al Kharusi, Adam Gosselin, Anissa Chirico, Pouya Gholipour Baradari
Abstract:
Background: Iron supplementation has been evaluated in several randomized controlled trials (RCTs) for its potential to increase baseline hemoglobin and decrease the incidence of red blood cell (RBC) transfusion during cardiac surgery. The main objective of this study was to evaluate the evidence for iron administration in cardiac surgery patients and its effect on the incidence of perioperative RBC transfusion. Methods: This systematic review protocol was registered with PROSPERO (CRD42020161927) on Dec. 19th, 2019, and was prepared as per the PRISMA guidelines. The MEDLINE, EMBASE, CENTRAL, and Web of Science databases and Google Scholar were searched for RCTs evaluating perioperative iron administration in adult patients undergoing cardiac surgery. Each abstract was independently reviewed by two reviewers using predefined eligibility criteria. The primary outcome was perioperative RBC transfusion, with secondary outcomes of the number of RBC units transfused and the changes in ferritin level, reticulocyte count, hemoglobin, and adverse events after iron administration. The risk of bias was assessed with the Cochrane Collaboration Risk of Bias Tool, and the primary and secondary outcomes were analyzed with a random-effects model. Results: Out of 1556 citations reviewed, five studies (n = 554 patients) met the inclusion criteria. The use of iron demonstrated no difference in transfusion incidence (RR 0.86; 95% CI 0.65 to 1.13). There was low heterogeneity between studies (I² = 0%). The trial sequential analysis suggested an optimal information size of 1132 participants, which the accrued information size did not reach. Conclusion: The current literature does not support the routine use of iron supplementation before cardiac surgery; however, insufficient data are available to draw a definite conclusion.
A critical knowledge gap has been identified, and more robust RCTs are required on this topic.
Keywords: cardiac surgery, iron, iron supplementation, perioperative medicine, meta-analysis, systematic review, randomized controlled trial
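The random-effects pooling behind a risk ratio such as the reported RR 0.86 (95% CI 0.65 to 1.13) can be sketched with a DerSimonian-Laird estimator; the 2x2 trial counts below are invented placeholders, not the five reviewed studies:

```python
import math

# Sketch of DerSimonian-Laird random-effects pooling of risk ratios;
# the trial counts below are invented, not the reviewed studies.
def pooled_rr_random_effects(trials):
    """trials: list of (events_trt, n_trt, events_ctl, n_ctl) 2x2 counts."""
    ys, vs = [], []
    for a, n1, c, n2 in trials:
        ys.append(math.log((a / n1) / (c / n2)))   # log risk ratio
        vs.append(1/a - 1/n1 + 1/c - 1/n2)         # its approximate variance
    w = [1 / v for v in vs]                        # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, ys))
    k = len(trials)
    tau2 = max(0.0, (q - (k - 1)) / (sw - sum(wi * wi for wi in w) / sw))
    wstar = [1 / (v + tau2) for v in vs]           # random-effects weights
    est = sum(wi * yi for wi, yi in zip(wstar, ys)) / sum(wstar)
    se = math.sqrt(1 / sum(wstar))
    return math.exp(est), math.exp(est - 1.96 * se), math.exp(est + 1.96 * se)

# Three hypothetical trials: (transfused_iron, n_iron, transfused_ctl, n_ctl)
rr, lo, hi = pooled_rr_random_effects([(20, 60, 25, 60),
                                       (15, 50, 18, 50),
                                       (30, 100, 33, 100)])
```

With these invented counts the point estimate falls below 1 but the interval straddles it, mirroring the kind of inconclusive result the review describes.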
Procedia PDF Downloads 131
585 Interference of Mild Drought Stress on Estimation of Nitrogen Status in Winter Wheat by Some Vegetation Indices
Authors: H. Tavakoli, S. S. Mohtasebi, R. Alimardani, R. Gebbers
Abstract:
Nitrogen (N) is one of the most important agricultural inputs affecting crop growth, yield, and quality in rain-fed cereal production. The N demand of crops varies spatially across fields due to spatial differences in soil conditions. In addition, the response of a crop to fertilizer applications is heavily reliant on plant-available water. Matching N supply to water availability is thus essential to achieve an optimal crop response. The objective of this study was to determine the effect of drought stress on the estimation of the nitrogen status of winter wheat by several vegetation indices. During the 2012 growing season, a field experiment was conducted at the Bundessortenamt (German Plant Variety Office) Marquardt experimental station, located in the village of Marquardt about 5 km northwest of Potsdam, Germany (52°27' N, 12°57' E). The experiment was laid out as a randomized split-block design with two replications. Treatments consisted of four N fertilization rates (0, 60, 120 and 240 kg N ha-1, in total) and two water regimes (irrigated (Irr) and non-irrigated (NIrr)), giving a total of 16 plots with dimensions of 4.5 × 9.0 m. The indices were calculated from the readings of a spectroradiometer built from tec5 components. The main parts were two "Zeiss MMS1 nir enh" diode-array sensors with a nominal range of 300 to 1150 nm, a resolution of less than 10 nm, and an effective range of 400 to 1000 nm. The following vegetation indices were calculated: NDVI, GNDVI, SR, MSR, NDRE, RDVI, REIP, SAVI, OSAVI, MSAVI, and PRI. Measurements were conducted throughout the growing season at different growth stages, including stem elongation (BBCH 32-41), booting (BBCH 43), inflorescence emergence and heading (BBCH 56-58), flowering (BBCH 65-69), and development of fruit (BBCH 71). According to the results obtained, among the indices, NDRE and REIP were the least affected by drought stress and can provide reliable wheat nitrogen status information regardless of the water status of the plant.
They also showed strong correlations with the nitrogen status of winter wheat.
Keywords: nitrogen status, drought stress, vegetation indices, precision agriculture
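Two of the indices singled out above are simple normalized band ratios; a minimal sketch follows, with illustrative reflectance values and band choices (the study's exact band definitions may differ):

```python
# Two of the listed indices as simple band ratios; reflectance values and
# band choices below are illustrative, not the study's exact definitions.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Normalized Difference Red Edge index."""
    return (nir - red_edge) / (nir + red_edge)

# Example canopy reflectances (NIR ~780 nm, red ~670 nm, red edge ~720 nm)
n_vi = ndvi(0.45, 0.05)
n_re = ndre(0.45, 0.20)
```

Both indices rise with denser, greener canopies; the red-edge band used by NDRE is what makes it comparatively robust to the water status of the plant, as the abstract reports.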
Procedia PDF Downloads 319
584 A Review on Assessment on the Level of Development of Macedonia and Iran Organic Agriculture as Compared to Nigeria
Authors: Yusuf Ahmad Sani, Adamu Alhaji Yakubu, Alhaji Abdullahi Jamilu, Joel Omeke, Ibrahim Jumare Sambo
Abstract:
With the rising global threat of food insecurity, cancer, and related (carcinogenic) diseases because of the increased usage of inorganic substances in agricultural food production, the Ministry of Food, Agriculture and Livestock of the Republic of Turkey organized an International Workshop on Organic Agriculture between 8 and 12 December 2014 at the International Agricultural Research and Training Center, Izmir. Twenty-one countries, including Nigeria, were invited to attend the training workshop. Several topics on organic agriculture were presented by renowned scholars, ranging from regulation, certification, crop, animal, and seed production, pest and disease management, and soil composting to the marketing of organic agricultural products. This paper selected two of the 21 countries (Macedonia and Iran) to assess their level of development in organic agriculture as compared to Nigeria. Macedonia, with a population of only 2.1 million people as of 2014, started organic agriculture in 2005 with only 266 ha of land; this had grown significantly to over 5,000 ha by 2010, covering crops such as cereals (62%), forage (20%), fruit orchards (7%), vineyards (5%), vegetables (4%), and oil seed and industrial crops (1% each). Other activities include organic beekeeping, which grew from 110 hives to over 15,000 certified colonies. As part of the government's commitment, the level of government subsidy for organic products was 30%, compared to the direct support for conventional agricultural products. About 19 by-laws were introduced on organic agricultural production, fully consistent with European Union regulations. The Republic of Iran, on the other hand, embarked on organic agriculture because the country recorded the highest rate of cancer in the world, with over 30,000 people dying every year and 297 people diagnosed every day.
The host country, Turkey, is well advanced in organic agricultural production and is now the largest exporter of organic products to Europe and other parts of the globe. A technical trip to one of the villages under the government scheme on organic agriculture revealed that organic agriculture there is market-demand driven and that government support is very visible, linking the farmers with private companies that provide inputs to them while the companies purchase the products at harvest at a high premium price. In Nigeria, however, research on organic agriculture is very recent, and there is very scanty information on organic agriculture due to poor documentation and very low awareness, even among the elites. The paper, therefore, recommends that the government provide funds to the NARIs to conduct research on organic agriculture and establish a clear government policy and good pre-conditions for sustainable organic agricultural production in the country.
Keywords: organic agriculture, food security, food safety, food nutrition
Procedia PDF Downloads 50
583 Comparative Analysis of Reinforcement Learning Algorithms for Autonomous Driving
Authors: Migena Mana, Ahmed Khalid Syed, Abdul Malik, Nikhil Cherian
Abstract:
In recent years, advancements in deep learning have enabled researchers to tackle the problem of self-driving cars. Car companies use huge datasets to train their deep learning models to make autonomous cars a reality. However, this approach has a drawback: the state space of possible actions for a car is so huge that there cannot be a dataset for every possible road scenario. To overcome this problem, the concept of reinforcement learning (RL) is investigated in this research. Since the problem of autonomous driving can be modeled in a simulation, it lends itself naturally to the domain of reinforcement learning. The advantage of this approach is that we can model different and complex road scenarios in a simulation without having to deploy in the real world. The autonomous agent can learn to drive by finding the optimal policy, and the learned model can then be easily deployed in a real-world setting. In this project, we focus on three RL algorithms: Q-learning, Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO). To model the environment, we have used TORCS (The Open Racing Car Simulator), which provides a strong foundation for testing our models. The inputs to the algorithms are the sensor data provided by the simulator, such as velocity, distance from the side pavement, etc. The outcome of this research project is a comparative analysis of these algorithms. Based on the comparison, the PPO algorithm gives the best results: with PPO, the reward is greater, and the acceleration, steering angle, and braking are more stable than with the other algorithms, which means that the agent learns to drive in a better and more efficient way. Additionally, we have compiled a dataset from the training of the agent with the DDPG and PPO algorithms. It contains all the steps of the agent during one full training run in the form (all input values, acceleration, steering angle, brake, loss, reward).
This study can serve as a base for further complex road scenarios. Furthermore, it can be extended into the field of computer vision, using images to find the best policy.
Keywords: autonomous driving, DDPG (deep deterministic policy gradient), PPO (proximal policy optimization), reinforcement learning
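Of the three algorithms compared, tabular Q-learning has the simplest update rule; a toy sketch follows, with placeholder states and actions rather than the TORCS sensor inputs or the deep variants actually used for continuous control:

```python
# Toy tabular Q-learning update (the first of the three compared algorithms);
# states and actions are placeholders, not the TORCS sensor inputs.
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """Move Q(s, a) toward reward + gamma * max_a' Q(s', a')."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

actions = ("left", "straight", "right")
q = {s: {a: 0.0 for a in actions} for s in range(3)}
q_update(q, 0, "straight", 1.0, 1)   # one step of experience
```

DDPG and PPO replace the table with neural networks so that continuous steering, acceleration, and braking commands can be learned, which is why they are the practical choices in the paper's setting.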
Procedia PDF Downloads 148
582 The Impact of Reducing Road Traffic Speed in London on Noise Levels: A Comparative Study of Field Measurement and Theoretical Calculation
Authors: Jessica Cecchinelli, Amer Ali
Abstract:
The continuing growth in road traffic and its resultant impact on pollution levels and safety, especially in urban areas, have led local and national authorities to reduce traffic speed and flow in major towns and cities. Various boroughs of London have recently reduced the in-city speed limit from 30 mph to 20 mph, mainly to calm traffic, improve safety, and reduce noise and vibration. This paper reports detailed field measurements, using a noise sensor and analyser, and the corresponding theoretical calculations and analysis of the noise levels on a number of roads in the central London Borough of Camden, where the speed limit was reduced from 30 mph to 20 mph on all roads except the major routes of Transport for London (TfL). The measurements, which included the key noise levels and scales at residential streets and main roads, were conducted during normal and rush hours on weekdays and weekends. The theoretical calculations were done according to the UK procedure 'Calculation of Road Traffic Noise 1988', with conversion to the European L-day, L-evening, L-night, L-den, and other important levels. The current study also includes comparable data and analysis from previously measured noise in the Borough of Camden and other boroughs of central London. Classified traffic flow and speed on the roads concerned were observed and used in the calculation part of the study. Relevant data and a description of the weather conditions are reported. The paper also reports a field survey, in the form of face-to-face interview questionnaires, carried out in parallel with the field measurement of noise in order to ascertain the opinions and views of local residents and workers in the reduced-speed 20 mph zones. The main findings are that the reduction in speed reduced the noise pollution in the studied zones and that the measured and calculated noise levels for each speed zone are closely matched.
The field survey also found that local residents and workers in the reduced-speed 20 mph zones supported the scheme and felt that it had improved the quality of life in their areas, giving a sense of calmness and safety, particularly for families with children and the elderly, and encouraging pedestrians and cyclists. The key conclusions are that lowering the speed limit in built-up areas would not just reduce the number of serious accidents but would also reduce noise pollution and promote clean modes of transport, particularly walking and cycling. The details of the site observations and the corresponding calculations, together with critical comparative analysis and relevant conclusions, are reported in the full version of the paper.
Keywords: noise calculation, noise field measurement, road traffic noise, speed limit in London, survey of people satisfaction
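The 'Calculation of Road Traffic Noise 1988' procedure starts from a basic L10(18-hour) level plus speed and heavy-vehicle corrections. A rough sketch follows, with the constants written from memory and so to be treated as assumptions; note also that 20 mph sits below CRTN's stated validity range, so the result is indicative only:

```python
import math

# Rough sketch of the CRTN 1988 basic noise level; the constants below are
# recalled from the procedure and should be treated as assumptions, and
# 20 mph lies below CRTN's stated speed validity range.
def crtn_basic_l10(q_18h, v_kmh, p_heavy):
    """Approximate basic L10(18h) in dB(A) at the 10 m reference distance.
    q_18h: 18-hour vehicle flow; v_kmh: mean speed; p_heavy: % heavy vehicles."""
    base = 29.1 + 10 * math.log10(q_18h)                  # flow term
    corr = (33 * math.log10(v_kmh + 40 + 500 / v_kmh)     # speed term
            + 10 * math.log10(1 + 5 * p_heavy / v_kmh)    # heavy-vehicle term
            - 68.8)
    return base + corr

# Same flow and 5% heavy vehicles: 30 mph (~48 km/h) vs 20 mph (~32 km/h)
drop = crtn_basic_l10(10_000, 48, 5) - crtn_basic_l10(10_000, 32, 5)
```

Under these assumptions the predicted reduction is on the order of 1 dB(A), which is broadly consistent with the modest but measurable noise reduction the study reports.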
Procedia PDF Downloads 424
581 Investigation of the Mechanical and Thermal Properties of a Silver Oxalate Nanoporous Structured Sintered Joint for Micro-joining in Relation to the Sintering Process Parameters
Authors: L. Vivet, L. Benabou, O. Simon
Abstract:
With highly demanding applications in the field of power electronics, there is an increasing need for interconnection materials with properties that can ensure both good mechanical assembly and high thermal/electrical conductivity. So far, lead-free solders have been considered an attractive solution, but recently, sintered joints based on nano-silver paste have been used for die attach and have proved to be a promising solution offering increased performance in high-temperature applications. In this work, the main parameters of the bonding process using silver oxalates are studied, chiefly the heating rate and the bonding pressure. Their effects on both the mechanical and thermal properties of the sintered layer are evaluated following an experimental design. Pairs of copper substrates with gold metallization are assembled through the sintering process to realize the samples, which are tested using a micro-traction machine. In addition, the obtained joints are examined through microscopy to identify the important microstructural features in relation to the measured properties. The formation of an intermetallic compound at the junction between the sintered silver layer and the gold metallization deposited on copper is also analyzed. Microscopy analysis reveals a nanoporous structure in the sintered material. It is found that higher temperature and bonding pressure result in higher densification of the sintered material, with higher thermal conductivity of the joint but less mechanical flexibility to accommodate the thermo-mechanical stresses arising during service. The experimental design hence allows the determination of the optimal process parameters to reach sufficient thermal/mechanical properties for a given application.
It is also found that the interphase formed between the silver and the gold metallization is where fracture occurred during mechanical testing, suggesting that the inter-diffusion mechanism between the different elements of the assembly leads to the formation of a relatively brittle compound.
Keywords: nanoporous structure, silver oxalate, sintering, mechanical strength, thermal conductivity, microelectronic packaging
Procedia PDF Downloads 93
580 Wearable Antenna for Diagnosis of Parkinson’s Disease Using a Deep Learning Pipeline on Accelerated Hardware
Authors: Subham Ghosh, Banani Basu, Marami Das
Abstract:
Background: The development of compact, low-power antenna sensors has resulted in hardware restructuring, allowing for wireless ubiquitous sensing. Antenna sensors can create wireless body-area networks (WBAN) by linking various wireless nodes across the human body. WBAN and IoT applications, such as remote health and fitness monitoring and rehabilitation, are becoming increasingly important. In particular, Parkinson’s disease (PD), a common neurodegenerative disorder, presents clinical features that can easily be misdiagnosed. As a mobility disorder, it may greatly benefit from the antenna’s near-field approach, with a variety of activities that can use WBAN and IoT technologies to increase diagnostic accuracy and improve patient monitoring. Methodology: This study investigates the feasibility of leveraging a single patch antenna, mounted with cloth on the dorsal wrist, to differentiate actual Parkinson's disease (PD) from false PD using a small hardware platform. The semi-flexible antenna operates in the 2.4 GHz ISM band and collects reflection coefficient (Γ) data from patients performing five exercises designed for the classification of PD and other disorders, such as essential tremor (ET) or physiological disorders caused by anxiety or stress. The obtained data is normalized and converted into 2-D representations using the Gabor wavelet transform (GWT). Data augmentation is then used to expand the dataset size. A lightweight deep-learning (DL) model is developed to run on the GPU-enabled NVIDIA Jetson Nano platform. The DL model processes the 2-D images for feature extraction and classification. Findings: The DL model was trained and tested on both the original and augmented datasets, the augmentation doubling the dataset size. To ensure robustness, a 5-fold stratified cross-validation (5-FSCV) method was used.
The proposed framework, utilizing a DL model with 1.356 million parameters on the NVIDIA Jetson Nano, achieved its best performance with an accuracy of 88.64%, an F1-score of 88.54%, and a recall of 90.46%, at 33 seconds per training epoch.
Keywords: antenna, deep-learning, GPU-hardware, Parkinson’s disease
Procedia PDF Downloads 7
579 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class- Unbalanced Data of Diabetes Risk Groups
Authors: Lily Ingsrisawang, Tasanee Nacharoen
Abstract:
Introduction: The problem of unbalanced data sets generally appears in real-world applications. Due to unequal class distributions, many research papers have found that the performance of existing classifiers tends to be biased towards the majority class. k-nearest neighbors nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, methods of discriminant analysis are of interest to us in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application to class-imbalanced data of diabetes risk groups. Methods: Data from a health project for 599 staff members in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped into 50 and 100 samples of 599 observations each, for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples show non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors discriminant function were performed over the 50 and 100 bootstrap samples and applied to the original data.
In finding the optimal classification rule, the prior probabilities were set both to equal proportions (0.33:0.33:0.33) and to unequal proportions with three choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} as {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is suggested for classifying three-class-imbalanced data of diabetes risk groups.
Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors
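A prior-weighted k-nearest neighbors rule of the kind the study tunes can be sketched by scoring each class as prior times the fraction of its members found among the k neighbors; the observations below are a toy example, not the hospital data:

```python
import math
from collections import Counter

# Toy sketch of k-nearest neighbors with unequal prior probabilities:
# each class is scored as prior * (neighbors found / class size).
# The observations below are invented, not the hospital data.
def knn_classify(train, x, k, priors):
    """train: list of (feature_tuple, label); returns the best-scoring label."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], x))[:k]
    votes = Counter(label for _, label in nearest)
    sizes = Counter(label for _, label in train)
    scores = {c: priors[c] * votes.get(c, 0) / sizes[c] for c in priors}
    return max(scores, key=scores.get)

train = [((1.0, 1.0), "non-risk"), ((1.1, 0.9), "non-risk"),
         ((1.2, 1.1), "non-risk"), ((5.0, 5.0), "risk"),
         ((5.2, 4.9), "diabetic")]
label = knn_classify(train, (1.05, 1.0), 3,
                     {"non-risk": 0.90, "risk": 0.05, "diabetic": 0.05})
```

Setting the priors to the observed class proportions, as the study's best rules do, shifts the decision in favor of the majority class relative to an equal-prior rule.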
Procedia PDF Downloads 435
578 Alternative Epinephrine Injector to Combat Allergy Induced Anaphylaxis
Authors: Jeremy Bost, Matthew Brett, Jacob Flynn, Weihui Li
Abstract:
One response during anaphylaxis is reduced blood pressure due to blood vessels relaxing and dilating. Epinephrine causes the blood vessels to constrict, which raises blood pressure to counteract these symptoms. During an allergic reaction, an epinephrine injector is used to administer a shot of epinephrine intramuscularly. Epinephrine injectors have become an integral part of day-to-day life for people with allergies. Current epinephrine injectors (such as the EpiPen) are completely mechanical and have no sensors to monitor the vital signs of patients or suggest the optimal time for the shot. EpiPens are also large and inconvenient to carry daily, and the current price is roughly $600 for a pack of two. This makes carrying an EpiPen very expensive, especially as it must be replaced when the epinephrine expires. The new design presented here takes the form of a bracelet with the ability to inject epinephrine. The bracelet will be equipped with vital-sign monitors that can help the patient sense the allergic reaction. The vital signs of interest are blood pressure, heart rate, and electrodermal activity (EDA). The heart rate of the patient will be tracked by a photoplethysmograph (PPG) incorporated into the sensors; the heart rate is expected to increase during anaphylaxis. Blood pressure will be monitored through a radar sensor, which monitors the phase changes in electromagnetic waves as they reflect off the blood vessel. EDA is under autonomic control: allergen-induced anaphylaxis is caused by a release of chemical mediators from mast cells and basophils, which changes the autonomic activity of the patient, so measuring EDA will alert the wearer to how their autonomic nervous system is reacting. After the vital signs are collected, they will be sent to a smartphone application to be analyzed, which can then alert an emergency contact if the epinephrine injector on the bracelet is activated.
Overall, this design creates a safer system by helping the user keep track of their epinephrine injector while making it easier to track their vital signs. The design will also be more affordable and more convenient to maintain: rather than replacing the entire product, only the needle and drug will be switched out.
Keywords: allergy, anaphylaxis, epinephrine, injector, vital signs monitor
Procedia PDF Downloads 252
577 The Nexus between Social Entrepreneurship and Youth Empowerment
Authors: Aaron G. Laylo
Abstract:
This paper assumes that social entrepreneurship contributes significantly to youth empowerment, i.e., work and community engagement. Two questions are thus raised in order to establish this hypothesis: 1) how does social entrepreneurship contribute to youth empowerment?; and 2) why is social entrepreneurship significantly incremental to youth empowerment? This research aims a) to investigate the social aspect of entrepreneurship; b) to explore challenges in youth empowerment, particularly with respect to work and community engagement; and c) to inquire into whether social enterprises have truly served as a catalyst for, and thus an effective response to, youth empowerment. It must be emphasized that young people, who number 1.8 billion in a world of seven billion, are an asset; how to maximize that potential is crucial. By utilizing an exploratory research design, the paper endeavors to generate new ideas regarding both components, develop tentative theories on social entrepreneurship, and refine certain issues that are under observation and seek scholarly attention: a rather emerging phenomenon vis-à-vis the challenge to empower a significant cluster of society. Case studies will be utilized in order to comparatively analyze youth-driven social enterprises in the Philippines that have been widely recognized as successful insofar as social impact is concerned. As most scholars have attested, social entrepreneurship is still at its infancy stage. Youth empowerment, meanwhile, is as yet a vast area to explore insofar as academic research is concerned. Programs and projects that advocate the pursuit of these components abound; however, academic research is yet to be undertaken to see and understand their social and economic relevance.
This research is also an opportunity for scholars to explore, understand, and make sense of the promise that lies in social entrepreneurship research and how it can serve as a catalyst for youth empowerment. Youth-driven social enterprises can be an influential tool in sustaining development across the globe, as they intend to provide opportunities for optimal economic productivity that recognize social inclusion. Ultimately, this study should contribute to both the research and development-in-practice communities for the greater good of society. By establishing the nexus between these two components, the research may foster greater exploration of the benefits that both may yield for human progress, as well as of the gaps that have to be filled by the various policy stakeholders relevant to these units.
Keywords: social entrepreneurship, youth, empowerment, social inclusion
Procedia PDF Downloads 304
576 Dynamic Ambulance Deployment to Reduce Ambulance Response Times Using Geographic Information Systems
Authors: Masoud Swalehe, Semra Günay
Abstract:
Developed countries are losing many lives to non-communicable diseases as compared to their developing counterparts. The effects of these diseases are mostly sudden, manifesting only a very short time before death or a dangerous attack, and this has consolidated the significance of the emergency medical system (EMS) as one of the vital areas of healthcare service delivery. The primary objective of this research is to reduce the ambulance response times (RT) of the Eskişehir province EMS, since a number of studies have established a relationship between ambulance response times and the survival chances of patients, especially out-of-hospital cardiac arrest (OHCA) victims. It has been found that patients who receive out-of-hospital medical attention within a few (4) minutes of cardiac arrest, thanks to low ambulance response times, stand higher chances of survival than those who wait longer (more than 12 minutes) for out-of-hospital medical care because of higher ambulance response times. The study will make use of geographic information systems (GIS) technology to dynamically reallocate ambulance resources according to demand and time so as to reduce ambulance response times. The geospatial-time distribution of ambulance calls (demand) will be used as a basis for optimal ambulance deployment using the system status management (SSM) strategy, achieving greater demand coverage with the same number of ambulance resources and hence reducing response times. Drive-time polygons will be used to derive time-specific facility coverage areas and to suggest additional candidate facility sites to which ambulance resources can be moved to serve higher demand, making use of network analysis techniques. Emergency ambulance call data from 1st January 2014 to 31st December 2014, obtained from the Eskişehir province health directorate, will be used in this study.
This study will focus on the reduction of ambulance response times, which is a key Emergency Medical Services performance indicator.
Keywords: emergency medical services, system status management, ambulance response times, geographic information system, geospatial-time distribution, out of hospital cardiac arrest
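The SSM-style reallocation described above amounts to a coverage problem: pick posts so that as much call demand as possible lies within the response-time threshold. A greedy sketch follows, with invented travel times and call volumes standing in for the drive-time polygon analysis:

```python
# Greedy sketch of a maximal-coverage post selection; travel times and
# call volumes below are invented, not the Eskişehir drive-time data.
def greedy_posts(travel_min, demand, n_posts, threshold=4.0):
    """travel_min[site][zone] = drive time (min); demand[zone] = call volume.
    Repeatedly pick the site whose in-threshold coverage adds the most demand."""
    chosen, covered = [], set()
    for _ in range(n_posts):
        def gain(site):
            return sum(demand[z] for z, t in travel_min[site].items()
                       if t <= threshold and z not in covered)
        best = max(travel_min, key=gain)
        chosen.append(best)
        covered |= {z for z, t in travel_min[best].items() if t <= threshold}
    return chosen, sum(demand[z] for z in covered)

travel = {"A": {"z1": 3, "z2": 3, "z3": 9},
          "B": {"z1": 8, "z2": 3, "z3": 3}}
calls = {"z1": 50, "z2": 30, "z3": 40}
posts, served = greedy_posts(travel, calls, 2)
```

In a dynamic SSM deployment, the same selection would be rerun as the geospatial-time distribution of calls shifts through the day.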
Procedia PDF Downloads 300
575 Cost-Effective Mechatronic Gaming Device for Post-Stroke Hand Rehabilitation
Authors: A. Raj Kumar, S. Bilaloglu
Abstract:
Stroke is a leading cause of adult disability worldwide. We depend on our hands for our activities of daily living (ADL). Although many patients regain the ability to walk, they continue to experience long-term hand motor impairments. As the number of individuals experiencing stroke at a young age is increasing, there is a critical need for effective approaches to the rehabilitation of hand function post-stroke. Motor relearning for dexterity requires task-specific kinesthetic, tactile, and visual feedback. However, when a stroke results in both sensory and motor impairment, it becomes difficult to ascertain when and what type of sensory substitution can facilitate motor relearning. In an ideal situation, real-time task-specific data on the ability to learn, and data-driven feedback to assist such learning, would greatly assist rehabilitation for dexterity. We have found that kinesthetic and tactile information from the unaffected hand can help patients re-learn the use of optimal fingertip forces during a grasp-and-lift task. Measurement of fingertip grip force (GF), load force (LF), their corresponding rates (GFR and LFR), and other metrics can be used to gauge the impairment level and the progress during learning. Currently, ATI mini force-torque sensors are used in research settings to measure and compute the LF, GF, and their rates while grasping objects of different weights and textures. Use of the ATI sensor is cost-prohibitive for deployment in clinical or at-home rehabilitation. Here, a cost-effective mechatronic device is developed to quantify GF, LF, and their rates for stroke rehabilitation purposes, using off-the-shelf components such as load cells, flexi-force sensors, and an Arduino UNO microcontroller. A salient feature of the device is its integration with an interactive gaming environment to render a highly engaging user experience.
This paper elaborates on the integration of kinesthetic and tactile sensing through the computation of LF, GF, and their corresponding rates in real time, information processing, and interactive interfacing through augmented reality for visual feedback.
Keywords: feedback, gaming, kinesthetic, rehabilitation, tactile
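The force rates (GFR, LFR) used to gauge impairment can be derived from the sampled sensor signals by finite differences; a minimal sketch with invented readings follows:

```python
# Finite-difference sketch of the grip-force rate (GFR); the load-force
# rate (LFR) follows the same pattern. Sample values are invented.
def rates(samples, dt):
    """Differences of successive force readings (N) taken every dt seconds."""
    return [(b - a) / dt for a, b in zip(samples, samples[1:])]

grip = [0.0, 0.5, 1.4, 2.6, 3.0]   # grip force in N, sampled at 100 Hz
gfr = rates(grip, 0.01)            # grip-force rate in N/s
```

In practice, the raw signal would first be low-pass filtered, since differencing amplifies sensor noise; the sketch omits that step for brevity.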
Procedia PDF Downloads 240
574 Research on Configuration of Large-Scale Linear Array Feeder Truss Parabolic Cylindrical Antenna of Satellite
Authors: Chen Chuanzhi, Guo Yunyun
Abstract:
The large linear-array-fed parabolic cylindrical satellite antenna has the ability to focus over a large area along a line, to form multi-directional beam clusters simultaneously in a given azimuth plane and elevation plane, to respond quickly to different orientations and directions over a wide frequency range, to aim in both frequency and direction, and to combine power in space. The large-diameter parabolic cylindrical antenna has therefore become one of the new development directions for spaceborne antennas. Limited by the size of the rocket fairing, a large-diameter spaceborne antenna is required to have small mass and a deployment function: after reaching orbit, the antenna is deployed by expansion and then stabilized. However, few existing structural types can be used to construct large cylindrical shell structures, which greatly limits the development and application of such antennas. Aiming at high structural efficiency, the geometrical characteristics of parabolic cylinders and the topological mapping law from mechanism to expandable truss are studied, and the basic configuration of a deployable truss with a cylindrical shell is constructed. A modular truss parabolic cylindrical antenna is then designed in this paper. The antenna has the characteristics of a stable structure, high precision of reflecting-surface formation, a controllable motion process, a high storage rate, and low weight. On the basis of overall-configuration theory and optimization methods, the structural stiffness of the modular truss parabolic cylindrical antenna is improved, and the bearing density and impact resistance of the support structure are improved based on a method for the optimal distribution of internal tension during reflector forming.
Finally, a truss-type cylindrical deployable support structure with a high stowed-to-deployed ratio, high stiffness, controllable deployment, and low mass is successfully developed, laying the foundation for the application of large-aperture parabolic cylindrical antennas on satellites.
Keywords: linear array feed antenna, truss type, parabolic cylindrical antenna, spaceborne antenna
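The geometric mapping the abstract describes — placing truss attachment nodes on a parabolic cylindrical reflector — can be illustrated with a minimal sketch. All dimensions, grid counts, and the function name below are illustrative assumptions, not values from the paper; the cross-section follows the standard parabola z = x²/(4f) extruded along the cylinder axis.

```python
def parabolic_cylinder_nodes(focal_length, width, length, n_across, n_along):
    """Node coordinates on a parabolic cylindrical reflector (illustrative).

    The cross-section follows z = x**2 / (4 * focal_length); the profile
    is extruded along the y axis to form the cylinder. Returns (x, y, z)
    tuples for truss attachment points on a regular grid.
    """
    nodes = []
    for i in range(n_across):
        # Sample the parabolic cross-section across the aperture width.
        x = -width / 2 + i * width / (n_across - 1)
        z = x * x / (4.0 * focal_length)
        for j in range(n_along):
            # Extrude each cross-section point along the cylinder axis.
            y = -length / 2 + j * length / (n_along - 1)
            nodes.append((x, y, z))
    return nodes
```

For example, a 4 m wide, 6 m long reflector with a 2 m focal length sampled on a 5-by-3 grid yields 15 nodes, with the vertex line at z = 0 and the aperture edges at z = 0.5 m.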
Procedia PDF Downloads 158
573 IoT Based Soil Moisture Monitoring System for Indoor Plants
Authors: Gul Rahim Rahimi
Abstract:
The IoT-based soil moisture monitoring system for indoor plants is designed to address the challenge of maintaining optimal soil moisture levels for plant growth and health. The system uses sensor technology to collect real-time data on soil moisture, which is then processed and analyzed with machine learning algorithms. This allows accurate and timely monitoring of moisture levels, ensuring plants receive the appropriate amount of water to thrive. The system has two main objectives: to keep plants fresh and healthy by preventing water deficiency, and to give users comprehensive insight into the water content of the soil on a daily and hourly basis. By monitoring soil moisture levels, users can identify patterns and trends in water consumption, allowing more informed decisions about watering schedules and plant care. The scope of the system extends to the agriculture industry, where it can minimize the effort farmers spend monitoring soil moisture manually. By automating soil moisture monitoring, farmers can optimize water usage, improve crop yields, and reduce the risk of plant diseases associated with over- or under-watering. Key technologies employed in the system include the Capacitive Soil Moisture Sensor V1.2 for accurate soil moisture measurement, the NodeMCU ESP8266-12E board for data transmission and communication, and the Arduino framework for programming and development. Machine learning algorithms analyze the collected data and provide actionable insights, while cloud storage holds the data collected from multiple sensors for easy access and retrieval. Overall, the IoT-based soil moisture monitoring system offers a scalable and efficient solution for indoor plant care, with potential applications in agriculture and beyond.
By harnessing the power of IoT and machine learning, the system empowers users to make informed decisions about plant watering, leading to healthier and more vibrant indoor environments.
Keywords: IoT-based, soil moisture monitoring, indoor plants, water management
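The core sensor-to-decision step such a system performs — mapping a raw capacitive ADC reading to a moisture percentage and deciding whether to water — can be sketched as follows. The calibration constants, threshold, and function names are illustrative assumptions (per-sensor air/water readings on a 10-bit ADC), not values reported by the authors.

```python
def moisture_percent(raw, dry=620, wet=260):
    """Map a raw capacitive-sensor ADC reading to a 0-100% moisture value.

    dry/wet are per-sensor calibration readings (probe in air and in
    water); the defaults are assumed example values for a capacitive
    probe on a 10-bit ADC, not calibration data from the paper.
    """
    pct = (dry - raw) / (dry - wet) * 100.0
    # Clamp to the valid range in case a reading drifts past calibration.
    return max(0.0, min(100.0, pct))

def needs_water(raw, threshold=35.0):
    """True when estimated moisture falls below the watering threshold."""
    return moisture_percent(raw) < threshold
```

A reading near the dry calibration point (e.g. 600) maps to a few percent moisture and triggers watering, while a reading near the wet point does not; the same mapping can feed the hourly/daily trend logging the abstract describes.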
Procedia PDF Downloads 51
572 Assessment of Radiation Protection Measures in Diagnosis and Treatment: A Critical Review
Authors: Buhari Samaila, Buhari Maidamma
Abstract:
Background: The use of ionizing radiation in medical diagnostics and treatment is indispensable for accurate imaging and effective cancer therapies. However, radiation exposure carries inherent risks, necessitating strict protection measures to safeguard both patients and healthcare workers. This review critically examines existing radiation protection measures in diagnostic radiology and radiotherapy, highlighting technological advancements, regulatory frameworks, and challenges. Objective: The objective of this review is to critically evaluate the effectiveness of current radiation protection measures in diagnostic and therapeutic radiology, focusing on minimizing patient and staff exposure to ionizing radiation while ensuring optimal clinical outcomes, and to propose future directions for improvement. Method: A comprehensive literature review was conducted, covering scientific studies, regulatory guidelines, and international standards on radiation protection in both diagnostic radiology and radiotherapy. Emphasis was placed on ALARA principles, dose optimization techniques, and protective measures for both patients and healthcare workers. Results: Radiation protection measures in diagnostic radiology include the use of shielding devices, minimizing exposure times, and employing advanced imaging technologies to reduce dose. In radiotherapy, accurate treatment planning and image-guided techniques enhance patient safety, while shielding and dose monitoring safeguard healthcare personnel. Challenges such as limited infrastructure in low-income settings and gaps in healthcare worker training persist, impacting the overall efficacy of protection strategies. Conclusion: While significant advancements have been made in radiation protection, challenges remain in optimizing safety, especially in resource-limited settings.
Future efforts should focus on enhancing training, investing in advanced technologies, and strengthening regulatory compliance to ensure continuous improvement in radiation safety practices.
Keywords: radiation protection, diagnostic radiology, radiotherapy, ALARA, patient safety, healthcare worker safety
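The ALARA controls the review discusses — time, distance, shielding — rest on two standard relations: inverse-square fall-off with distance from a point source, and narrow-beam exponential attenuation exp(-μx) through a shield. A minimal sketch, with all numeric inputs assumed for illustration rather than taken from the review:

```python
import math

def dose_rate(rate_at_ref, ref_dist, dist, mu=0.0, thickness=0.0):
    """Estimate dose rate after distance and shielding controls (illustrative).

    Distance: inverse-square fall-off from a point source.
    Shielding: narrow-beam attenuation exp(-mu * thickness), where mu is
    the shield's linear attenuation coefficient (units must match
    thickness). Time is applied by the caller: dose = rate * time.
    """
    geometric = rate_at_ref * (ref_dist / dist) ** 2
    return geometric * math.exp(-mu * thickness)
```

Doubling the distance alone cuts a 100 (arbitrary units) rate to 25; adding one half-value layer of shielding (μ·x = ln 2) halves it again to 12.5, illustrating how the controls multiply.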
Procedia PDF Downloads 24
571 Development and Validation of a Coronary Heart Disease Risk Score in Indian Type 2 Diabetes Mellitus Patients
Authors: Faiz N. K. Yusufi, Aquil Ahmed, Jamal Ahmad
Abstract:
Diabetes in India is growing at an alarming rate, and the complications it causes need to be controlled. Coronary heart disease (CHD) is one such complication and is the focus of prediction in this study. India has the second-largest number of diabetes patients in the world. To the best of our knowledge, there is no CHD risk score for Indian type 2 diabetes patients. Any form of CHD was taken as the event of interest. A sample of 750 patients was determined and randomly collected from the Rajiv Gandhi Centre for Diabetes and Endocrinology, J.N.M.C., A.M.U., Aligarh, India. Collected variables include sex, age, height, weight, body mass index (BMI), fasting blood sugar (BSF), postprandial sugar (PP), glycosylated haemoglobin (HbA1c), diastolic blood pressure (DBP), systolic blood pressure (SBP), smoking, alcohol habits, total cholesterol (TC), triglycerides (TG), high-density lipoprotein (HDL), low-density lipoprotein (LDL), very-low-density lipoprotein (VLDL), physical activity, duration of diabetes, diet control, history of antihypertensive drug treatment, family history of diabetes, waist circumference, hip circumference, medications, central obesity, and history of CHD. Predictive risk scores for CHD events are designed by Cox proportional hazards regression. Model calibration and discrimination are assessed with the Hosmer-Lemeshow test and the area under the receiver operating characteristic (ROC) curve. Overfitting and underfitting of the model are checked by applying regularization techniques, and the best method is selected among ridge, lasso, and elastic net regression. Youden's index is used to choose the optimal cut-off point for the scores. The five-year probability of CHD is predicted by both the survival function and a two-state Markov chain model, and the better technique is identified. The CHD risk scores developed can be calculated by doctors and patients for self-management of diabetes.
Furthermore, the five-year probabilities can also be used to forecast and monitor patients' condition.
Keywords: coronary heart disease, Cox proportional hazards regression, ROC curve, type 2 diabetes mellitus
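The cut-off selection step the abstract mentions — Youden's index J = sensitivity + specificity - 1, maximized over candidate score thresholds — can be sketched in a few lines. The toy data and function name are assumptions for illustration, not the study's risk scores.

```python
def youden_optimal_cutoff(scores, events):
    """Pick the risk-score cutoff maximising Youden's J (illustrative).

    J = sensitivity + specificity - 1, evaluated at every observed
    score; events is a parallel list of 0/1 outcome indicators, with
    scores >= cutoff classified as positive.
    """
    pos = sum(events)
    neg = len(events) - pos
    best_cut, best_j = None, -1.0
    for cut in sorted(set(scores)):
        # Sensitivity = TP/P, specificity = TN/N at this threshold.
        tp = sum(1 for s, e in zip(scores, events) if e == 1 and s >= cut)
        tn = sum(1 for s, e in zip(scores, events) if e == 0 and s < cut)
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_cut, best_j = cut, j
    return best_cut, best_j
```

On a perfectly separable toy sample (scores 0.1, 0.2 for non-events and 0.6, 0.8 for events) the procedure returns the cutoff 0.6 with J = 1, the point on the ROC curve farthest above the diagonal.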
Procedia PDF Downloads 219
570 An Analysis of Pick Travel Distances for Non-Traditional Unit Load Warehouses with Multiple P/D Points
Authors: Subir S. Rao
Abstract:
Existing analytical models of warehouses with non-traditional aisle designs typically assume a single central P/D point, which is mathematically convenient but less practical. Many warehouses operate multiple P/D points to avoid picker congestion, and different warehouses have different flow policies and infrastructure for using them. Standard warehouse models introduce one-sided multiple P/D points in a flying-V warehouse and, assuming uniform flow rates, minimize the one-way travel distance between an active P/D point and a pick location. Simulations of such models generally use four fixed configurations of P/D points on two different sides of the warehouse. It can easily be shown that if the source and destination P/D points are both chosen uniformly at random, minimizing one-way travel is equivalent to minimizing two-way travel. Another line of work models warehouses with multiple one-sided P/D points while treating the angles of the cross-aisles and picking aisles as decision variables; one objective there is to minimize the one-way pick travel distance from the P/D point to the pick location by finding the optimal position and angle of the cross-aisle and picking aisles for warehouses with different numbers of P/D points and variable flow rates. Most models of warehouses with multiple P/D points are one-way travel models; we extend these analytical models to minimize the two-way pick travel distance, where the destination P/D point for the return route is chosen optimally, which is not equivalent to minimizing one-way travel. In most warehouse models the return P/D point is chosen randomly, but in our research it is chosen optimally.
Such warehouses are common in practice, where the flow rates at the P/D points are flexible and depend entirely on the positions of the picks. A good warehouse management system efficiently consolidates orders over multiple P/D points in warehouses where the P/D function is flexible. In this arrangement, pickers and shrink-wrap processes are not assigned to particular P/D points, which makes the P/D points more flexible and interchangeable for picking and deposits. The number of P/D points considered in this research increases uniformly from a single central one up to a maximum of one P/D point symmetrically below each aisle.
Keywords: non-traditional warehouse, V cross-aisle, multiple P/D point, pick travel distance
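The distinction the abstract draws — choosing the return P/D point optimally rather than randomly — can be sketched with rectilinear travel and P/D points along the front wall. The layout, coordinates, and function name are illustrative assumptions, not the paper's model.

```python
def two_way_travel(pick, pd_points):
    """Two-way pick travel with an optimally chosen return P/D (illustrative).

    pick is the (x, y) pick location; pd_points are (x, 0) positions
    along the front wall. Rectilinear (aisle-grid) travel is assumed.
    The outbound leg starts from a given active P/D; the return P/D is
    the one nearest the pick, modelling a flexible-P/D warehouse.
    """
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    out_pd = pd_points[0]  # active P/D for the outbound leg
    # Optimal return: deposit at whichever P/D is closest to the pick.
    back_pd = min(pd_points, key=lambda p: dist(pick, p))
    return dist(out_pd, pick) + dist(pick, back_pd)
```

For a pick at (9, 3) with P/D points at (0, 0) and (10, 0), the outbound leg costs 12 but the optimal return costs only 4 (total 16), whereas returning to the origin P/D would cost 12 again; this gap is exactly why minimizing two-way travel with an optimal return P/D differs from the one-way objective.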
Procedia PDF Downloads 40