Search results for: back propagation algorithm
592 O-LEACH: The Problem of Orphan Nodes in the LEACH of Routing Protocol for Wireless Sensor Networks
Authors: Wassim Jerbi, Abderrahmen Guermazi, Hafedh Trabelsi
Abstract:
The optimal use of coverage in wireless sensor networks (WSNs) is very important. The LEACH protocol (Low Energy Adaptive Clustering Hierarchy) is a hierarchical clustering algorithm for wireless sensor networks that allows the formation of distributed clusters. In each cluster, LEACH randomly selects some sensor nodes as cluster heads (CHs); the selection of CHs is made with a probabilistic calculation. Each non-CH node is supposed to join a cluster and become a cluster member. Nevertheless, CHs can become concentrated in a specific part of the network, so that several sensor nodes cannot reach any CH. To solve this problem, we created O-LEACH, an orphan-node protocol whose role is to reduce the number of sensor nodes that do not belong to any cluster. A cluster member called a gateway receives messages from neighbouring orphan nodes and informs its CH of the neighbouring nodes that do not belong to any group. The gateway, denoted CH', then attaches the orphan nodes to the cluster and collects their data. O-LEACH enables a new method of cluster formation and leads to a long network lifetime and minimal energy consumption. Orphan nodes possess enough energy and seek to be covered by the network. The principal novel contribution of the proposed work is the O-LEACH protocol, which provides coverage of the whole network with a minimum number of orphan nodes and a very high connectivity rate. As a result, the WSN application receives data from the entire network, including orphan nodes. The proper functioning of the application therefore requires intelligent management of the resources present within each network sensor. The simulation results show that O-LEACH performs better than LEACH in terms of coverage, connectivity rate, energy, and scalability.
Keywords: WSNs, routing, LEACH, O-LEACH, orphan nodes, sub-cluster, gateway, CH'
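The probabilistic CH election that O-LEACH builds on can be sketched with the standard LEACH threshold rule. This is the textbook formulation, not code from the paper, and the O-LEACH gateway/attachment logic described in the abstract is not modelled here:

```python
import random

def leach_threshold(p, r):
    """Textbook LEACH election threshold T(n) for round r: p is the desired
    fraction of cluster heads; nodes that already served as CH in the last
    1/p rounds are excluded via the 'eligible' set below."""
    return p / (1 - p * (r % round(1 / p)))

def elect_cluster_heads(node_ids, eligible, p, r, rng):
    """Each eligible node draws a uniform number in [0, 1) and becomes a
    cluster head for this round if the draw falls below T(n)."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if n in eligible and rng.random() < t]

rng = random.Random(42)
nodes = list(range(100))
heads = elect_cluster_heads(nodes, set(nodes), p=0.05, r=0, rng=rng)
```

Note how the threshold rises as the round index approaches 1/p, so nodes that have not yet served become increasingly likely to be elected.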
Procedia PDF Downloads 371
591 EcoMush: Mapping Sustainable Mushroom Production in Bangladesh
Authors: A. A. Sadia, A. Emdad, E. Hossain
Abstract:
The increasing importance of mushrooms as a source of nutrition and health benefits, and even as a potential cancer treatment, has raised awareness of the impact of climate-sensitive variables on their cultivation. Factors like temperature, relative humidity, air quality, and substrate composition play pivotal roles in shaping mushroom growth, especially in Bangladesh. Oyster mushrooms, a commonly cultivated variety in this region, are particularly vulnerable to climate fluctuations. This research explores the climatic dynamics affecting oyster mushroom cultivation, presents an approach to address these challenges, and provides tangible solutions to fortify the agro-economy, ensure food security, and promote the sustainability of this crucial food source. Using climate and production data, this study evaluates the performance of three clustering algorithms (K-Means, OPTICS, and BIRCH) based on various quality metrics. While each algorithm demonstrates specific strengths, the findings provide insights into their effectiveness for this specific dataset. The results yield essential information, pinpointing the optimal temperature range of 13°C-22°C, the unfavorable temperature threshold of 28°C and above, and the ideal relative humidity range of 75-85%, along with the suitable production regions in three different seasons: Kharif-1, Kharif-2, and Robi. Additionally, a user-friendly web application is developed to support mushroom farmers in making well-informed decisions about their cultivation practices. This platform offers valuable insights into the most advantageous periods for oyster mushroom farming, with the overarching goal of enhancing the efficiency and profitability of mushroom farming.
Keywords: climate variability, mushroom cultivation, clustering techniques, food security, sustainability, web application
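The clustering comparison described above would typically be run with a library such as scikit-learn; as a self-contained illustration of the simplest of the three algorithms, here is a minimal pure-Python K-Means applied to hypothetical (temperature, humidity) readings drawn from the favourable and unfavourable bands the abstract reports. The data are synthetic, not the study's dataset:

```python
import math, random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid, then move
    each centroid to the mean of the points assigned to it."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        for i, c in enumerate(clusters):
            if c:  # keep the old centroid if a cluster empties out
                centroids[i] = tuple(sum(x) / len(c) for x in zip(*c))
    return centroids, clusters

# Hypothetical (temperature degC, relative humidity %) readings: one group
# in the favourable 13-22 degC / 75-85% band, one in a hot, dry band.
rng = random.Random(1)
favourable = [(rng.uniform(13, 22), rng.uniform(75, 85)) for _ in range(30)]
unfavourable = [(rng.uniform(28, 35), rng.uniform(55, 65)) for _ in range(30)]
centroids, clusters = kmeans(favourable + unfavourable, k=2)
```

OPTICS and BIRCH follow the same fit-then-inspect pattern but handle density-based and incremental clustering respectively.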
Procedia PDF Downloads 69
590 Multi-Criteria Optimal Management Strategy for in-situ Bioremediation of LNAPL Contaminated Aquifer Using Particle Swarm Optimization
Authors: Deepak Kumar, Jahangeer, Brijesh Kumar Yadav, Shashi Mathur
Abstract:
In-situ remediation is a technique which can remediate either surface water or groundwater at the site of contamination. In the present study, a simulation-optimization approach has been used to develop a management strategy for remediating LNAPL (Light Non-Aqueous Phase Liquid) contaminated aquifers. Benzene, toluene, ethylbenzene, and xylene are the main components of the LNAPL contaminant; collectively, these contaminants are known as BTEX. In the in-situ bioremediation process, a set of injection and extraction wells is installed. Injection wells supply oxygen and other nutrients, which convert BTEX into carbon dioxide and water with the help of indigenous soil bacteria. On the other hand, extraction wells check the movement of the plume downstream. In this study, the optimal design of the system has been obtained using the PSO (Particle Swarm Optimization) algorithm. A comprehensive management strategy for pumping from the injection and extraction wells has been developed to attain maximum allowable concentrations of 5 ppm and 4.5 ppm. The management strategy comprises determination of the pumping rates, the total pumping volume, and the total running cost incurred for each potential injection and extraction well. The results indicate a high pumping rate for injection wells during the initial management period, since this facilitates the availability of oxygen and other nutrients necessary for biodegradation; the rate is low during the third year on account of sufficient oxygen availability, because the contaminant is assumed to have biodegraded by the end of the third year, when the concentration drops to a permissible level.
Keywords: groundwater, in-situ bioremediation, light non-aqueous phase liquid, BTEX, particle swarm optimization
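The PSO loop used for the well design can be sketched as follows. The `well_cost` function here is a hypothetical stand-in (a pumping cost plus a quadratic penalty for residual contamination), not the study's coupled flow-and-transport simulation:

```python
import random

def pso(cost, dim, bounds, n_particles=20, iters=150, seed=1):
    """Minimal particle swarm optimizer: velocities blend inertia, a
    cognitive pull toward each particle's own best position, and a social
    pull toward the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=pbest_cost.__getitem__)
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

def well_cost(rates):
    """Hypothetical stand-in for the transport simulation: pumping cost
    grows with the total rate, while a quadratic penalty models residual
    BTEX concentration when the combined rate is too low."""
    total = sum(rates)
    return total + max(0.0, 100.0 - total) ** 2

best_rates, best_cost = pso(well_cost, dim=4, bounds=(0.0, 50.0))
```

In the study itself, each cost evaluation would call the groundwater flow/transport model rather than an analytic penalty.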
Procedia PDF Downloads 445
589 Experimental Investigation of Absorbent Regeneration Techniques to Lower the Cost of Combined CO₂ and SO₂ Capture Process
Authors: Bharti Garg, Ashleigh Cousins, Pauline Pearson, Vincent Verheyen, Paul Feron
Abstract:
The presence of SO₂ in power plant flue gases makes flue gas desulfurization (FGD) an essential requirement prior to post-combustion CO₂ capture (PCC) facilities. Although most power plants worldwide deploy FGD in order to comply with environmental regulations, the achieved SO₂ levels are generally not sufficiently low for the flue gases to enter the PCC unit. The SO₂ level in the flue gases needs to be less than 10 ppm to operate the PCC installation effectively. The existing FGD units alone cannot bring the SO₂ levels down to or below the 10 ppm required for CO₂ capture; an additional scrubber might be required alongside the existing FGD unit to bring the SO₂ to the desired levels. The absence of FGD units in Australian power plants brings an additional challenge: SO₂ concentrations in Australian power station flue gas emissions are in the range of 100-600 ppm, which imposes a serious barrier to the implementation of standard PCC technologies in Australia. CSIRO's CS-Cap process is a unique solution that captures SO₂ and CO₂ in a single column with a single absorbent, which can potentially bring cost-effectiveness to the commercial deployment of carbon capture in Australia by removing the need for FGD. The estimated savings of removing SO₂ through a process similar to CS-Cap are around 200 MMUSD for a 500 MW Australian power plant. Pilot plant trials conducted to generate the proof of concept resulted in 100% removal of SO₂ from flue gas without utilising standard limestone-based FGD. In this work, the removal of absorbed sulfur from the aqueous amine absorbents generated in the pilot plant trials has been investigated by reactive crystallisation and thermal reclamation. More than 95% of the aqueous amines can be reclaimed from the sulfur-loaded absorbent via reactive crystallisation. However, the recovery of amines through thermal reclamation is limited and depends on the sulfur loading of the spent absorbent.
The initial experimental work revealed that reactive crystallisation is a better fit for CS-Cap's sulfur-rich absorbent, especially since it is also capable of generating K₂SO₄ crystals of highly saleable quality (~99%). Initial cost estimation carried out on both technologies resulted in almost similar capital expenditure; however, the operating cost is considerably higher for the thermal reclaimer than for the crystalliser. The experimental data generated in the laboratory from both regeneration techniques have been used to build a simulation model in Aspen Plus. The simulation model illustrates the economic benefits which could be gained by removing flue gas desulfurization prior to a standard PCC unit and replacing it with a CS-Cap absorber column co-capturing CO₂ and SO₂, together with its absorbent regeneration system, which would be either reactive crystallisation or thermal reclamation.
Keywords: combined capture, cost analysis, crystallisation, CS-Cap, flue gas desulfurisation, regeneration, sulfur, thermal reclamation
Procedia PDF Downloads 127
588 Screening Tools and Its Accuracy for Common Soccer Injuries: A Systematic Review
Authors: R. Christopher, C. Brandt, N. Damons
Abstract:
Background: The sequence of prevention model states that through constant assessment of injury, injury mechanisms and risk factors are identified, highlighting that the collection and recording of data is a core approach to preventing injuries. Several screening tools are available for use in the clinical setting. These screening techniques have only recently received research attention; hence the available data regarding their applicability, validity, and reliability are inconsistent and controversial. Several systematic reviews related to common soccer injuries have been conducted; however, none of them addressed the screening tools for common soccer injuries. Objectives: The purpose of this study was to conduct a review of screening tools and their accuracy for common injuries in soccer. Methods: A systematic scoping review was performed based on the Joanna Briggs Institute procedure for conducting systematic reviews. Databases such as SPORTDiscus, CINAHL, Medline, Science Direct, PubMed, and grey literature were used to access suitable studies. Some of the key search terms included: injury screening, screening, screening tool accuracy, injury prevalence, injury prediction, accuracy, validity, specificity, reliability, and sensitivity. All types of English studies dating back to the year 2000 were included. Two blinded independent reviewers selected and appraised articles on a 9-point scale for inclusion, as well as for risk of bias with the ACROBAT-NRSI tool. Data were extracted and summarized in tables. Plot data analysis was done, and sensitivity and specificity were analyzed with their respective 95% confidence intervals. The I² statistic was used to determine the proportion of variation across studies. Results: The initial search yielded 95 studies, of which 21 were duplicates and 54 were excluded. A total of 10 observational studies were included in the analysis: 3 studies were analysed quantitatively, while the remaining 7 were analysed qualitatively.
Seven studies were graded as low risk of bias and three as high risk. Only studies of high methodological quality were included in the analysis. The pooled studies investigated tools such as the Functional Movement Screening (FMS™), the Landing Error Scoring System (LESS), the Tuck Jump Assessment, the Soccer Injury Movement Screening (SIMS), and the conventional hamstrings-to-quadriceps ratio. The screening tools showed high reliability, sensitivity, and specificity (ICC 0.68, 95% CI: 0.52-0.84; and 0.64, 95% CI: 0.61-0.66, respectively; I² = 13.2%, P = 0.316). Conclusion: Based on the pooled results from the included studies, the FMS™ has good inter-rater and intra-rater reliability. The FMS™ is a screening tool capable of screening for common soccer injuries, and individual FMS™ scores are a better determinant of performance than the overall FMS™ score. Although a meta-analysis could not be done for all the included screening tools, qualitative analysis also indicated good sensitivity and specificity of the individual tools. Higher levels of evidence are, however, needed for implementation in evidence-based practice.
Keywords: accuracy, screening tools, sensitivity, soccer injuries, specificity
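The sensitivity/specificity figures and confidence intervals reported above come from standard 2×2 screening-table arithmetic. A sketch with made-up counts (not the review's data), using the Wilson score interval as one common choice for a proportion's 95% CI:

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a proportion (95% by default)."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Made-up 2x2 counts for a screening tool versus a later injury outcome.
sens, spec = sens_spec(tp=80, fn=20, tn=60, fp=40)
sens_lo, sens_hi = wilson_ci(80, 100)
```

The pooled estimates in the review would additionally weight each study, e.g. by inverse variance, before combining.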
Procedia PDF Downloads 179
587 A Visual Analytics Tool for the Structural Health Monitoring of an Aircraft Panel
Authors: F. M. Pisano, M. Ciminello
Abstract:
Aerospace, mechanical, and civil engineering infrastructures can take advantage of damage detection and identification strategies in terms of maintenance cost reduction and operational life improvement, as well as for safety purposes. The challenge is to detect so-called "barely visible impact damage" (BVID), due to low/medium energy impacts, that can progressively compromise the structural integrity. The occurrence of any local change in material properties that can degrade the structure's performance is monitored using so-called Structural Health Monitoring (SHM) systems, which compare the structure's states before and after damage occurs. SHM seeks any "anomalous" response in the measurements collected by sensor networks, which are then analyzed using appropriate algorithms. Independently of the specific analysis approach adopted for structural damage detection and localization, textual reports, tables, and graphs describing possible outlier coordinates and damage severity are usually provided as artifacts to be elaborated for information extraction about the current health condition of the structure under investigation. Visual Analytics can support the processing of monitored measurements by offering data navigation and exploration tools that leverage the native human capability of understanding images faster than texts and tables. Herein, the enrichment of an SHM system through the integration of a Visual Analytics component is investigated. Analytical dashboards have been created by combining worksheets, so that a useful Visual Analytics tool is provided to structural analysts for exploring the structural health conditions examined by an algorithm based on Principal Component Analysis.
Keywords: interactive dashboards, optical fibers, structural health monitoring, visual analytics
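A PCA-based outlier check of the kind such a dashboard would visualise can be sketched as follows, using synthetic strain readings from two correlated sensors; the actual sensor data and algorithm details are not given in the abstract, so this is illustrative only:

```python
import math, random

def principal_component(data, iters=200):
    """First principal axis of mean-centred data, via power iteration on
    the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return means, v

def anomaly_score(sample, means, axis):
    """Residual distance between a sample and its projection onto the
    principal axis; a large residual flags an 'anomalous' response."""
    x = [s - m for s, m in zip(sample, means)]
    t = sum(a * b for a, b in zip(x, axis))
    resid = [a - t * b for a, b in zip(x, axis)]
    return math.sqrt(sum(r * r for r in resid))

# Synthetic healthy readings: two sensors that track each other linearly.
rng = random.Random(0)
healthy = [(t, 2.0 * t + rng.gauss(0.0, 0.05)) for t in
           [rng.uniform(-1.0, 1.0) for _ in range(50)]]
means, axis = principal_component(healthy)
baseline = max(anomaly_score(h, means, axis) for h in healthy)
damaged = (0.8, -1.6)  # reading that breaks the healthy correlation
```

A dashboard would then plot the scores over time and highlight sensor locations whose residuals exceed the healthy baseline.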
Procedia PDF Downloads 124
586 Teaching Timber: The Role of the Architectural Student and Studio Course within an Interdisciplinary Research Project
Authors: Catherine Sunter, Marius Nygaard, Lars Hamran, Børre Skodvin, Ute Groba
Abstract:
Globally, the construction and operation of buildings contribute up to 30% of annual greenhouse gas emissions. In addition, the building sector is responsible for approximately a third of global waste. In this context, the utilization of renewable resources in buildings, especially materials that store carbon, will play a significant role in the growing city. These are two reasons for introducing wood as a building material of growing relevance. A third is the potential economic value in countries with a forest industry that is not currently used to capacity. In 2013, a four-year interdisciplinary research project titled "Wood Be Better" was created, with the principal goal of producing and publicising knowledge that would facilitate increased use of wood in buildings in urban areas. The research team consisted of architects, engineers, wood technologists and mycologists, from both research institutions and industrial organisations. Five structured work packages were included in the initial research proposal. Work package 2 was titled "Design-based research" and proposed using architecture master courses as laboratories for systematic architectural exploration. The aim was twofold: to provide students with an interdisciplinary team of experts from consultancies and producers, as well as teachers and researchers, that could offer the latest information on wood technologies; and at the same time to have the studio course test the effects of the use of wood on the functional, technical and tectonic quality of different architectural projects on an urban scale, providing results that could be fed back into the research material. The aim of this article is to examine the successes and failures of this pedagogical approach in an architecture school, as well as the opportunities for greater integration between academic research projects, industry experts and studio courses in the future.
This will be done through a set of qualitative interviews with researchers, teaching staff and students of the studio courses held each semester since spring 2013. These will investigate the value of the various experts of the course; the different themes of each course; the response to the urban scale, architectural form and construction detail; the effect of working with the goals of a research project; and the value of the studio projects to the research. In addition, six sample projects will be presented as case studies. These will show how the projects related to the research and could be collected and further analysed, the innovative solutions that were developed during the course, the different architectural expressions that were enabled by timber, and how projects were used as an interdisciplinary testing ground for integrated architectural and engineering solutions between the participating institutions. The conclusion will reflect on the original intentions of the studio courses, the opportunities and challenges faced by students, researchers and teachers, the educational implications, and the transparent and inclusive discourse between the architectural researcher, the architecture student and the interdisciplinary experts.
Keywords: architecture, interdisciplinary, research, studio, students, wood
Procedia PDF Downloads 312
585 Study and Simulation of a Dynamic System Using Digital Twin
Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli
Abstract:
Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as Cloud Computing, the Internet of Things, Augmented Reality, Artificial Intelligence, and Additive Manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the Digital Twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a Digital Twin. In the first part of the work, the level plant is identified at a fixed operating point by using the classical least squares method. The linearized model is embedded in a Digital Twin using Automation Studio® from Famic Technologies. Then, in order to validate the usage of the Digital Twin in the linearized study of the plant, the dynamic response of the real system is compared to that of the Digital Twin. Furthermore, in order to develop the nonlinear model on a Digital Twin, the didactic level plant is identified by using the method proposed by Hammerstein. Different steps are applied to the plant, and from the Hammerstein algorithm, the nonlinear model is obtained for all operating ranges of the plant. As with the linear approach, the nonlinear model is embedded in the Digital Twin, and the dynamic response is compared to the real system at different operating points. Finally, yet importantly, from the practical results obtained, one can conclude that the usage of a Digital Twin to study dynamic systems is extremely useful in the industrial environment, taking into account that it is possible to develop and tune controllers by using the virtual model of the real system.
Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models
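The least-squares identification step can be sketched as a first-order discrete model fit. The plant parameters below are hypothetical stand-ins; the Hammerstein case mentioned in the abstract would additionally place a static nonlinearity in front of this linear block:

```python
import random

def identify_first_order(u, y):
    """Least-squares fit of y[k+1] = a*y[k] + b*u[k]: the two normal
    equations of this regression are solved in closed form."""
    n = len(u) - 1
    syy = sum(y[k] * y[k] for k in range(n))
    suu = sum(u[k] * u[k] for k in range(n))
    syu = sum(y[k] * u[k] for k in range(n))
    sy1y = sum(y[k + 1] * y[k] for k in range(n))
    sy1u = sum(y[k + 1] * u[k] for k in range(n))
    det = syy * suu - syu * syu
    a = (sy1y * suu - sy1u * syu) / det
    b = (sy1u * syy - sy1y * syu) / det
    return a, b

# Hypothetical level plant simulated as y[k+1] = 0.9*y[k] + 0.5*u[k],
# excited with a random input so the regression is well conditioned.
rng = random.Random(0)
u = [rng.uniform(-1.0, 1.0) for _ in range(200)]
y = [0.0]
for k in range(len(u) - 1):
    y.append(0.9 * y[k] + 0.5 * u[k])
a_hat, b_hat = identify_first_order(u, y)
```

On noise-free simulated data the estimates recover the true parameters exactly; the identified model is what would then be embedded in the Digital Twin.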
Procedia PDF Downloads 148
584 In Silico Study of Antiviral Drugs Against Three Important Proteins of Sars-Cov-2 Using Molecular Docking Method
Authors: Alireza Jalalvand, Maryam Saleh, Somayeh Behjat Khatouni, Zahra Bahri Najafi, Foroozan Fatahinia, Narges Ismailzadeh, Behrokh Farahmand
Abstract:
Objective: The recent outbreak of the coronavirus SARS-CoV-2 imposed a global pandemic on the world. Despite the increasing prevalence of the disease, there are no effective drugs to treat it. A suitable and rapid way to find an effective drug and treat the global pandemic is a computational drug study. This study used molecular docking methods to examine the potential inhibition by over 50 antiviral drugs of three fundamental proteins of SARS-CoV-2. Methods: Through a literature review, three important proteins (the main protease, the RNA-dependent RNA polymerase (RdRp), and the spike protein) were selected as drug targets. Three-dimensional (3D) structures of the protease, spike, and RdRp proteins were obtained from the Protein Data Bank, and the protein structures were energy-minimized. Over 50 antiviral drugs were considered as candidates for protein inhibition, and their 3D structures were obtained from drug banks. The AutoDock 4.2 software was used to define the molecular docking settings and run the algorithm. Results: Five drugs, including indinavir, lopinavir, saquinavir, nelfinavir, and remdesivir, exhibited the highest inhibitory potency against all three proteins based on the binding energies and drug binding positions deduced from docking and hydrogen-bonding analysis. Conclusions: According to the results, among the drugs mentioned, saquinavir and lopinavir showed the highest inhibitory potency against all three proteins compared to the other drugs. They may enter laboratory-phase studies as a dual-drug treatment to inhibit SARS-CoV-2.
Keywords: covid-19, drug repositioning, molecular docking, lopinavir, saquinavir
Procedia PDF Downloads 88
583 CO2 Emission and Cost Optimization of Reinforced Concrete Frame Designed by Performance Based Design Approach
Authors: Jin Woo Hwang, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
As the greenhouse effect has been recognized as a serious environmental problem of the world, interest in carbon dioxide (CO2) emissions, which comprise the major part of greenhouse gas (GHG) emissions, has increased recently. Since the construction industry accounts for a relatively large portion of the total CO2 emissions of the world, extensive studies on reducing CO2 emissions in the construction and operation of buildings have been carried out since the 2000s. Also, the performance based design (PBD) methodology based on nonlinear analysis has been robustly developed since the Northridge Earthquake in 1994 to assure and assess the seismic performance of buildings more exactly, because structural engineers recognized that a prescriptive code based design approach cannot address inelastic earthquake responses directly or assure the performance of a building exactly. Although CO2 emissions and the PBD approach are both rising issues in the construction industry and structural engineering, there have been few or no studies considering these two issues simultaneously. Thus, the objective of this study is to minimize the CO2 emissions and cost of a building designed by the PBD approach in the structural design stage, considering the structural materials. A 4-story, 4-span reinforced concrete building was optimally designed to minimize the CO2 emissions and cost of the building and to satisfy a specific seismic performance level (collapse prevention in the maximum considered earthquake) while satisfying prescriptive code regulations, using the non-dominated sorting genetic algorithm-II (NSGA-II). The optimized design result showed that minimized CO2 emissions and cost were achieved while satisfying the specific seismic performance. Therefore, the methodology proposed in this paper can be used to reduce both the CO2 emissions and the cost of buildings designed by the PBD approach.
Keywords: CO2 emissions, performance based design, optimization, sustainable design
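The core ranking step of NSGA-II is fast non-dominated sorting of candidate designs by their objective vectors, here (CO2 emissions, cost), both minimised. A minimal sketch with hypothetical objective values (the crowding-distance and genetic operators of full NSGA-II are omitted):

```python
def non_dominated_sort(objs):
    """Fast non-dominated sorting: partition candidate designs into Pareto
    fronts when every objective is to be minimised."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))

    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # indices each solution dominates
    counts = [0] * n                       # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objs[i], objs[j]):
                dominated_by[i].append(j)
            elif dominates(objs[j], objs[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Hypothetical (CO2 emissions, cost) pairs for five candidate frame designs.
fronts = non_dominated_sort([(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)])
```

The first front contains the designs no other design beats on both CO2 and cost simultaneously; NSGA-II then breeds the next generation preferentially from the earlier fronts.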
Procedia PDF Downloads 406
582 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model (DPM) algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: autonomous vehicles, deformable part model, DPM, pedestrian detection, real time
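The frequency-domain speed-up rests on the convolution theorem: convolution in the spatial domain becomes pointwise multiplication of spectra. The sketch below uses a naive O(n²) DFT on 1-D signals for clarity; a real detector would use a 2-D FFT over the HOG feature maps:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (a real-time implementation
    would use an FFT)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    """Inverse transform of the naive DFT above."""
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def conv_freq(signal, kernel):
    """Linear convolution via the convolution theorem: zero-pad both
    sequences to the full output length, multiply spectra pointwise,
    and transform back."""
    n = len(signal) + len(kernel) - 1
    a = list(signal) + [0.0] * (n - len(signal))
    b = list(kernel) + [0.0] * (n - len(kernel))
    spectrum = [x * y for x, y in zip(dft(a), dft(b))]
    return [round(c.real, 9) for c in idft(spectrum)]
```

With an FFT the transform costs O(n log n) instead of the O(n²) of direct sliding-window convolution, which is where the near order-of-magnitude gain over the reference DPM implementation comes from.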
Procedia PDF Downloads 281
581 Improvement Performances of the Supersonic Nozzles at High Temperature Type Minimum Length Nozzle
Authors: W. Hamaidia, T. Zebbiche
Abstract:
This paper presents the design of axisymmetric supersonic nozzles that accelerate a supersonic flow to the desired Mach number while having a small weight and, at the same time, giving a high thrust. The nozzle concerned gives a parallel and uniform flow at the exit section. The nozzle is divided into subsonic and supersonic regions. The supersonic portion is independent of the upstream conditions of the sonic line. The subsonic portion is used to produce a sonic flow at the throat. In this case, the nozzle gives a uniform and parallel flow at the exit section; it is named the minimum length nozzle. The study is done at high temperature, lower than the dissociation threshold of the molecules, in order to improve the aerodynamic performance. Our aim consists of improving the performance both by increasing the exit Mach number and the thrust coefficient and by reducing the nozzle's mass. The variation of the specific heats with temperature is considered. The design is made by the method of characteristics. The finite difference method with a predictor-corrector algorithm is used for the numerical resolution of the obtained nonlinear algebraic equations. The application is for air. All the obtained results depend on three parameters: the exit Mach number, the stagnation temperature, and the chosen characteristics mesh. A numerical simulation of the nozzle with Computational Fluid Dynamics-FASTRAN was done to determine and confirm the necessary design parameters.
Keywords: supersonic flow, axisymmetric minimum length nozzle, high temperature, method of characteristics, calorically imperfect gas, finite difference method, thrust coefficient, mass of the nozzle, specific heat at constant pressure, air, error
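The supersonic expansion that the method of characteristics traces is governed by the Prandtl-Meyer function. The sketch below uses the constant-γ (calorically perfect) form, whereas the paper's high-temperature model lets the specific heats vary with temperature; the wall-angle relation shown is the planar minimum length nozzle result, used here only as an illustration:

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer function nu(M), in radians, for a calorically perfect
    gas with constant ratio of specific heats gamma."""
    a = math.sqrt((gamma + 1.0) / (gamma - 1.0))
    s = math.sqrt(M * M - 1.0)
    return a * math.atan(s / a) - math.atan(s)

def mach_from_nu(nu_target, gamma=1.4, lo=1.0, hi=50.0):
    """Invert nu(M) by bisection (nu is monotonically increasing in M)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid, gamma) < nu_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# For a planar minimum length nozzle, the initial wall angle is nu(Me)/2.
exit_mach = 3.0
wall_angle_deg = math.degrees(prandtl_meyer(exit_mach)) / 2.0
```

The axisymmetric, variable-specific-heat design in the paper replaces these closed-form relations with the characteristics network solved by the predictor-corrector scheme.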
Procedia PDF Downloads 150
580 A Generation Outside: Afghan Refugees in Greece 2003-2016
Authors: Kristina Colovic, Mari Janikian, Nikolaos Takis, Fotini-Sonia Apergi
Abstract:
A considerable number of Afghan asylum seekers in Greece are still waiting for answers about their future and status for personal, social, and societal advancement. Most have been trapped in a stalemate of continuously postponed or temporarily progressed levels of integration into the EU/Greek asylum process. Limited quantitative research exists investigating the psychological effects of long-term displacement among Afghan refugees in Greece. The purpose of this study is to investigate factors that are associated with and predict psychological distress symptoms among this population. Data from a sample of native Afghan nationals (N > 70) living in Greece for approximately the last ten years will be collected from May to July 2016. Criteria for participation include the following: being 18 years of age or older, and emigration from Afghanistan to Greece from 2003 onwards (i.e., long-term refugees or part of the 'old system of asylum'). Snowball sampling will be used to recruit participants, as this is considered the most effective option when attempting to study refugee populations. Participants will complete self-report questionnaires, consisting of the Afghan Symptom Checklist (ASCL), a culturally validated measure of psychological distress, the World Health Organization Quality of Life scale (WHOQOL-BREF), an adapted version of the Comprehensive Trauma Inventory-104 (CTI-104), and a modified Psychological Acculturation Scale. All instruments will be translated into Greek through the use of forward- and back-translations by bilingual speakers of English and Greek, following WHO guidelines. A pilot study with 5 Afghan participants will take place to check for discrepancies in understanding and to further adapt the instruments as needed. Demographic data, including age, gender, year of arrival to Greece, and current asylum status, will be explored.
Three different types of analyses (descriptive statistics, bivariate correlations, and multivariate linear regression) will be used in this study. Descriptive findings for respondent demographics, psychological distress symptoms, traumatic life events, and quality of life will be reported. Zero-order correlations will assess the interrelationships among the demographic, traumatic life event, psychological distress, and quality of life variables. Lastly, a multivariate linear regression model will be estimated. The findings from the study will contribute to understanding the determinants of acculturation, distress, and trauma on daily functioning for Afghans in Greece. The main implications of the current study will be to advocate for capacity building and to empower communities through effective program evaluation and design of mental health services for all refugee populations in Greece.
Keywords: Afghan refugees, evaluation, Greece, mental health, quality of life
Procedia PDF Downloads 288
579 Paramedic Strength and Flexibility: Findings of a 6-Month Workplace Exercise Randomised Controlled Trial
Authors: Jayden R. Hunter, Alexander J. MacQuarrie, Samantha C. Sheridan, Richard High, Carolyn Waite
Abstract:
Workplace exercise programs have been recommended to improve the musculoskeletal fitness of paramedics with the aim of reducing injury rates, and while they have shown efficacy in other occupations, to the best of our knowledge they have not been delivered and evaluated in Australian paramedics. This study investigated the effectiveness of a 6-month workplace exercise program (MedicFit; MF) in improving paramedic fitness with or without health coach (HC) support. A group of regional Australian paramedics (n=76; 43 male; mean ± SD age 36.5 ± 9.1 years; BMI 28.0 ± 5.4 kg/m²) was randomised at the station level to exercise with remote health coach support (MFHC; n=30), exercise without health coach support (MF; n=23), or no-exercise control (CON; n=23) groups. MFHC and MF participants received a 6-month, low-moderate intensity resistance and flexibility exercise program to be performed on station without direct supervision. Available exercise equipment included dumbbells, resistance bands, Swiss balls, medicine balls, kettlebells, BOSU balls, yoga mats, and foam rollers. MFHC and MF participants were also provided with a comprehensive exercise manual including sample exercise sessions aimed at improving musculoskeletal strength and flexibility, which included the exercise prescription (i.e. sets, reps, duration, load). Changes to upper-body (push-ups), lower-body (wall squat) and core (plank hold) strength and to flexibility (back scratch and sit-and-reach tests) after the 6-month intervention were analysed using repeated measures ANOVA to compare changes between groups and over time. Upper-body strength (+20.6%; p < 0.01; partial eta squared = 0.34 [large effect]) and lower-body strength (+40.8%; p < 0.05; partial eta squared = 0.08 [moderate effect]) increased significantly, with no interaction or group effects.
Changes to core strength (+1.4%; p=0.17) and both upper-body (+19.5%; p=0.56) and lower-body (+3.3%; p=0.15) flexibility were non-significant, with no interaction or group effects observed. While upper- and lower-body strength improved over the course of the intervention, providing a 6-month workplace exercise program with or without health coach support did not confer any greater strength or flexibility benefits than exercise testing alone (CON). Although exercise adherence was not measured, it is possible that participants require additional methods of support, such as face-to-face exercise instruction and individually tailored exercise programs, to achieve adequate participation and improvements in musculoskeletal fitness. This presents challenges for more remote paramedic stations without regular face-to-face access to suitably qualified exercise professionals, and future research should investigate the effectiveness of other forms of exercise delivery and guidance for these paramedic officers, such as remotely facilitated digital exercise prescription and monitoring.
Keywords: workplace exercise, paramedic health, strength training, flexibility training
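The reported effect sizes follow the standard partial eta squared formula, computed from the ANOVA sums of squares; a minimal sketch (the sums of squares below are hypothetical, not the study's data):

```python
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    """Partial eta squared: proportion of variance attributable to an effect,
    ignoring variance explained by other effects (SS_effect / (SS_effect + SS_error))."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares for a time effect
print(round(partial_eta_squared(34.0, 66.0), 2))  # 0.34, a large effect by Cohen's benchmarks
```

By Cohen's common benchmarks, 0.01, 0.06 and 0.14 mark small, moderate and large effects, which is why 0.34 and 0.08 are labelled large and moderate above.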
578 A QoS Aware Cluster Based Routing Algorithm for Wireless Mesh Network Using LZW Lossless Compression
Authors: J. S. Saini, P. P. K. Sandhu
Abstract:
The multi-hop nature of Wireless Mesh Networks and the rapid growth of throughput demands result in multi-channel and multi-radio structures in mesh networks, but co-channel interference reduces the total throughput, specifically in multi-hop networks. Quality of Service (QoS) refers to a broad collection of networking technologies and techniques that guarantee the ability of a network to make available desired services with predictable results. QoS can be directed at a network interface, towards a specific server or router's performance, or at specific applications. Due to interference among various transmissions, QoS routing in multi-hop wireless networks is a formidable task, since in a multi-channel wireless network two transmissions using the same channel may interfere with each other. This paper considers the Destination Sequenced Distance Vector (DSDV) routing protocol to locate the secure and optimised path. The proposed technique also utilizes Lempel–Ziv–Welch (LZW) lossless data compression and intra-cluster data aggregation to enhance the communication between the source and the destination. Clustering aggregates multiple packets and locates a single route using the clusters, improving intra-cluster data aggregation. LZW lossless data compression reduces the data packet size and hence consumes less energy, thus increasing the network QoS. The MATLAB tool has been used to evaluate the effectiveness of the projected technique. The comparative analysis has shown that the proposed technique outperforms the existing techniques.
Keywords: WMNs, QoS, flooding, collision avoidance, LZW, congestion control
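To illustrate the compression step, here is a minimal Python sketch of classic LZW encoding (the paper's implementation is in MATLAB; this is only the textbook algorithm). Recurring byte sequences in sensor payloads are replaced by dictionary codes, shrinking the packet:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Dictionary-based LZW: emit one code for each longest already-seen sequence."""
    dictionary = {bytes([i]): i for i in range(256)}  # seed with single bytes
    next_code = 256
    current = b""
    codes = []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate            # grow the match
        else:
            codes.append(dictionary[current])
            dictionary[candidate] = next_code  # learn the new sequence
            next_code += 1
            current = bytes([byte])
    if current:
        codes.append(dictionary[current])
    return codes

payload = b"sensorsensorsensor"
codes = lzw_compress(payload)
# Repeated readings compress: fewer output codes than input bytes
print(len(payload), len(codes))  # 18 12
```

The decoder rebuilds the same dictionary on the fly, so no dictionary needs to be transmitted, which is what makes LZW attractive for energy-constrained nodes.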
577 IoT-Based Interactive Patient Identification and Safety Management System
Authors: Jonghoon Chun, Insung Kim, Jonghyun Lim, Gun Ro
Abstract:
We believe that it is possible to reduce patient safety accidents by displaying correct medical records and prescription information through interactive patient identification. Our system is based on smart bands worn by patients; these bands communicate with hybrid gateways that understand both BLE and Wi-Fi communication protocols. Through the convergence of Bluetooth Low Energy (BLE), a short-range wireless communication technology, and hybrid gateway technology, we implement an ‘Intelligent Patient Identification and Location Tracking System’ to prevent the medical errors that frequently occur in medical institutions. Based on big data and IoT technology using MongoDB, smart bands (with BLE and NFC functions) and hybrid gateways, we develop a system that enables two-way communication between medical staff and hospitalized patients and stores the patients' location information at minute intervals. Based on the precise information provided by big data systems, such as location tracking and movement of in-hospital patients wearing smart bands, our findings include the fact that a patient-specific location tracking algorithm can more efficiently operate the HIS (Hospital Information System) and other related systems. Through the system, we can always correctly identify patients using identification tags. In addition, the system automatically determines whether the patient is scheduled for a medical service in the system in use at the medical institution, and presents the appropriateness of the medical treatment and the medical information (medical record and prescription information) on screen and by voice. This work was supported in part by the Korea Technology and Information Promotion Agency for SMEs (TIPA) grant funded by the Korean Small and Medium Business Administration (No. S2410390).
Keywords: BLE, hybrid gateway, patient identification, IoT, safety management, smart band
576 'Utsadhara': Rejuvenating the Dead River Edge into an Urban Activity Space along the Banks of River Hooghly
Authors: Aparna Saha, Tuhin Ahmed
Abstract:
West Bengal has a number of important rivers, each with its distinctive character and story. Traditionally, cities have ‘divulged’ to rivers at the river edges, and rivers have been an inseparable part of the urban experience. The study area is Barrackpore, a small but important outgrowth of the Kolkata Municipal Association, West Bengal. Barrackpore at present lacks adequate public open spaces at the neighborhood level where people of different socio-cultural, economic, and religious backgrounds can come together and engage in various leisure activities, and there is no opportunity either for people to learn about and explore the rich history of the settlement. These issues form the backdrop of this research paper, which has been conceptualized as creating a place from space that will bring people back to the river, increase community interactions, and celebrate and commemorate the historical importance of the river and its edges. The entire precinct bordering the river represents the transition from pre-independence (Raj era) to the Sepoy phase (Swaraj era), finally culminating in the Gandhian philosophy projected onto the already existing Gandhi Ghat. The ultimate aim of the paper, entitled ‘Utsadhara: Rejuvenating the Dead River Edge into an Urban Activity Space along the Banks of River Hooghly’, is to create a socio-cultural space that keeps the heritage identity intact through judicious use of the water body. A balance is also kept between the natural ecosystem and the cosmetic development of the surrounding open spaces. This can be achieved by the methodology provided in the document, but mainly it focuses on preserving the historic ethnicity of the place by holding its character through various facts, figures and features. Most importantly, the natural topography of the place is left intact.
The second priority is given to a hierarchy of well-connected public plazas and podiums where people from different socio-economic backgrounds, irrespective of age and sex, can socialize and venture into cordial relationships with one another. The third priority is to provide a platform for the common mass to showcase their skills and talent through different art and craft forms, which in turn would enhance both the individual and the community as a whole through economic uplift. Apart from this, some spaces are created for different age groups and classes of people. The paper intends to see the river as a major multifunctional public space that attracts people for different activities and re-establishes the relationship of the river with the settlement. Hence, it is apprehended that the paper describes not simply a riverfront conservation project but a place created for the people, by the people and of the people, towards holistic community development through a sustainable approach.
Keywords: holistic community development, public activity space, river-urban precinct, urban dead space
575 A Linearly Scalable Family of Swapped Networks
Authors: Richard Draper
Abstract:
A supercomputer can be constructed from identical building blocks, which are small parallel processors connected by a network referred to as the local network. The routers have unused ports which are used to interconnect the building blocks; these connections are referred to as the global network. The address space has a global and a local component (g, l). The conventional way to connect the building blocks is to connect (g, l) to (g', l). If there are K blocks, this requires K global ports in each router. If a block is of size M, the result is a machine with KM routers having diameter two. To increase the size of the machine to 2K blocks, each router connects to only half of the other blocks. The result is a larger machine but also one with greater diameter. This is a crude description of how the network of the CRAY XC® is designed. In this paper, a family of interconnection networks using routers with K global and M local ports is defined. Coordinates are (c, d, p) and the global connections are (c, d, p) ↔ (c', p, d), which swaps p and d. The network is denoted D3(K, M) and is called a Swapped Dragonfly. D3(K, M) has KM² routers and has diameter three, regardless of the size of K. To produce a network of size KM² conventionally, diameter would be an increasing function of K. The family of Swapped Dragonflies has other desirable properties: 1) D3(K, M) scales linearly in K and quadratically in M. 2) If L < K, D3(K, M) contains many copies of D3(L, M). 3) If L < M, D3(K, M) contains many copies of D3(K, L). 4) D3(K, M) can perform an all-to-all exchange in KM² + KM time, which is only slightly more than the time to do a one-to-all. This paper makes several contributions. It is the first time that a swap has been used to define a linearly scalable family of networks. Structural properties of this new family of networks are thoroughly examined. A synchronizing packet header is introduced.
It specifies the path to be followed, and it makes it possible to define highly parallel communication algorithms on the network, among them an all-to-all exchange in time KM² + KM. To demonstrate the effectiveness of the swap, properties of the network of the CRAY XC® and of D3(K, 16) are compared.
Keywords: all-to-all exchange, CRAY XC®, Dragonfly, interconnection network, packet switching, swapped network, topology
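The diameter-three claim can be checked by brute force on a small instance. The sketch below rests on our reading of the abstract, not the paper's formal definition: routers sharing (c, d) form a complete local graph over their M values of p, and each router (c, d, p) has a global link to (c', p, d) for every c':

```python
from collections import deque
from itertools import product

def swapped_dragonfly(K: int, M: int) -> dict:
    """Adjacency lists for D3(K, M) under the assumptions stated above."""
    adj = {v: set() for v in product(range(K), range(M), range(M))}
    for c, d, p in adj:
        for q in range(M):            # local links within the block (c, d)
            if q != p:
                adj[(c, d, p)].add((c, d, q))
        for c2 in range(K):           # global links swap p and d
            if (c2, p, d) != (c, d, p):
                adj[(c, d, p)].add((c2, p, d))
    return adj

def diameter(adj: dict) -> int:
    """Max over all sources of the BFS eccentricity."""
    best = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        best = max(best, max(dist.values()))
    return best

adj = swapped_dragonfly(3, 4)  # K=3, M=4: KM^2 = 48 routers
print(len(adj), diameter(adj))  # 48 3
```

The generic route mirrors the swap: a local hop (c1, d1, p1) → (c1, d1, d2), a global hop to (c2, d2, d1), and a final local hop to (c2, d2, p2), which is why three hops always suffice.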
574 Scalable Performance Testing: Facilitating The Assessment Of Application Performance Under Substantial Loads And Mitigating The Risk Of System Failures
Authors: Solanki Ravirajsinh
Abstract:
In the software testing life cycle, failing to conduct thorough performance testing can result in significant losses for an organization due to application crashes and improper behavior under high user loads in production. Simulating large volumes of requests, such as 5 million within 5-10 minutes, is challenging without a scalable performance testing framework. Leveraging cloud services to implement a performance testing framework makes it feasible to handle 5-10 million requests in just 5-10 minutes, helping organizations ensure their applications perform reliably under peak conditions. Implementing a scalable performance testing framework using cloud services and tools like JMeter, EC2 instances (virtual machines), cloud logs (to monitor errors and logs), EFS (a file storage system), and security groups offers several key benefits. Creating a performance test framework with this approach helps optimize resource utilization, enables effective benchmarking, increases reliability, and saves costs by resolving performance issues before the application is released. In performance testing, a master-slave framework facilitates distributed testing across multiple EC2 instances to emulate many concurrent users and efficiently handle high loads. The master node orchestrates the test execution by coordinating with multiple slave nodes to distribute the workload. Slave nodes execute the test scripts provided by the master node, with each node handling a portion of the overall user load and generating requests to the target application or service. By leveraging JMeter's master-slave framework in conjunction with cloud services like EC2 instances, EFS, CloudWatch logs, security groups, and command-line tools, organizations can achieve superior scalability and flexibility in their performance testing efforts. In this master-slave framework, JMeter must be installed on both the master and each slave EC2 instance.
The master EC2 instance functions as the "brain," while the slave instances operate as the "body parts." The master directs each slave to execute a specified number of requests. Upon completion of the execution, the slave instances transmit their results back to the master, which consolidates them into a comprehensive report detailing metrics such as the number of requests sent, encountered errors, network latency, response times, server capacity, throughput, and bandwidth. Leveraging cloud services, the framework benefits from automatic scaling based on the volume of requests. Notably, integrating cloud services allows organizations to handle more than 5-10 million requests within 5 minutes, depending on the server capacity of the hosted website or application.
Keywords: identifying application crashes under heavy load, JMeter with cloud services, scalable performance testing, JMeter master and slave using cloud services
573 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using the parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed in the presentation. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-Lib, pandas-ta).
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
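The outlier-removal idea (only trust prediction points that fall inside the parameter space of the dynamic training window) can be sketched generically. This is a simplified stand-in using ordinary least squares and a z-score rule, not FreqAI's actual implementation:

```python
import numpy as np

def fit_window(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Ordinary least squares on the most recent training window."""
    A = np.column_stack([X, np.ones(len(X))])     # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def in_training_space(x_new, X_train, n_std=3.0):
    """Reject prediction points far outside the training feature distribution,
    a toy version of the parameter-space check described above."""
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0) + 1e-12
    return bool(np.all(np.abs((x_new - mu) / sigma) <= n_std))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + 0.01 * rng.normal(size=200)

# Retrain on a sliding window of the most recent 100 observations
coef = fit_window(X[-100:], y[-100:])
print(in_training_space(np.zeros(3), X[-100:]))      # True: near the window's center
print(in_training_space(np.full(3, 50.0), X[-100:])) # False: an outlier is rejected
```

In a live deployment the window would roll forward with each retraining, so the acceptance region adapts as the market regime changes.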
572 The Political Economy of Media Privatisation in Egypt: State Mechanisms and Continued Control
Authors: Mohamed Elmeshad
Abstract:
During the mid-1990s, Egypt became obliged to implement the Economic Reform and Structural Adjustment Program, which included broad economic liberalization, expansion of the private sector and a contraction in the size of government spending. This coincided with attempts to appear more democratic and open to liberalizing public space and discourse. At the same time, economic pressures and the proliferation of social media access and activism led to increased pressure to open the mediascape and remove it from the clutches of the government, which had by that point monopolized print and broadcast mass media for over four decades. However, the mechanisms that governed the privatization of mass media allowed for sustained government control, even through the prism of ostensibly privately owned newspapers and television stations. These mechanisms involve barriers to entry from a financial and security perspective, as well as operational capacities of distribution and access to means of production. The power dynamics between mass media establishments and the state, and within media establishments themselves, were moulded during this period in a novel way. The changes in the country's political economy itself mirrored these developments. This paper will examine these dynamics and shed light on the political economy of Egypt's newly privatized mass media, especially in the early 2000s. Methodology: This study will rely on semi-structured interviews with individuals involved with these changes from the perspective of the media organizations. It will also map out the process of media privatization by looking at the administrative, operative and legislative institutions and contexts in order to draw conclusions on methods of control and the role of the state during the process of privatization.
Finally, a brief discourse analysis will be necessary in order to aptly convey how these factors ultimately reflected on media output. Findings and conclusion: The development of Egyptian private, “independent” media mirrored the trajectory of transitions in the country's political economy. Liberalization of the economy meant that a growing class of business owners would explore opportunities that such new markets would offer. However, the regime's attempts to control access to certain forms of capital, especially in sectors such as the media, affected the structure of print and broadcast media, as well as the institutions that would govern them. Like the process of liberalisation, much of the regime's manoeuvring with regard to the privatization of media was haphazardly used to indirectly expand the regime's and its ruling party's ability to retain influence, while creating a believable façade of openness. In this paper, we attempt to uncover these mechanisms and analyse our findings in ways that explain how the manifestations prevalent in the context of a privatizing media space in a transitional Egypt provide evidence of both the intentions of this transition and the ways in which it was being held back.
Keywords: business, mass media, political economy, power, privatisation
571 Establishing a Computational Screening Framework to Identify Environmental Exposures Using Untargeted Gas-Chromatography High-Resolution Mass Spectrometry
Authors: Juni C. Kim, Anna R. Robuck, Douglas I. Walker
Abstract:
The human exposome, which includes chemical exposures over the lifetime and their effects, is now recognized as an important measure for understanding human health; however, the complexity of the data makes the identification of environmental chemicals challenging. The goal of our project was to establish a computational workflow for the improved identification of environmental pollutants containing chlorine or bromine. Using the “pattern.search” function available in the R package nontarget, we wrote a multifunctional script that searches mass spectral clusters from untargeted gas-chromatography high-resolution mass spectrometry (GC-HRMS) for the presence of spectra consistent with chlorine- and bromine-containing organic compounds. The “pattern.search” function was incorporated into a different function that allows the evaluation of clusters containing multiple analyte fragments, has multi-core support, and provides a simplified output listing compounds containing chlorine and/or bromine. The new function was able to process 46,000 spectral clusters in under 8 seconds and identified over 150 potential halogenated spectra. We next applied our function to a deidentified dataset from patients diagnosed with primary biliary cholangitis (PBC), primary sclerosing cholangitis (PSC), and healthy controls. Twenty-two spectra corresponded to potential halogenated compounds in the PSC and PBC dataset, including six that differed significantly in PBC patients and four that differed in PSC patients. We have developed an improved algorithm for detecting halogenated compounds in GC-HRMS data, providing a strategy for prioritizing exposures in the study of human disease.
Keywords: exposome, metabolome, computational metabolomics, high-resolution mass spectrometry, exposure, pollutants
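The isotope logic behind such a search can be illustrated with a toy check (illustrative thresholds and code, not those of the nontarget package): chlorine and bromine each add a characteristic M+2 peak about 1.997 Da above the monoisotopic mass, with M+2/M intensity ratios near 0.32 per chlorine (24.2% ³⁷Cl) and near 0.97 per bromine (49.3% ⁸¹Br):

```python
def has_halogen_pattern(peaks, mz_tol=0.01, delta=1.997):
    """peaks: list of (m/z, intensity) tuples. Flag peak pairs whose M and M+2
    intensity ratio matches one Cl (~0.32) or one Br (~0.97)."""
    flagged = []
    for mz, inten in peaks:
        for mz2, inten2 in peaks:
            if abs(mz2 - mz - delta) <= mz_tol and inten > 0:
                ratio = inten2 / inten
                if 0.25 <= ratio <= 0.40:
                    flagged.append((mz, "Cl"))
                elif 0.85 <= ratio <= 1.10:
                    flagged.append((mz, "Br"))
    return flagged

# A hypothetical chlorinated fragment: M at 200.0000, M+2 at 201.9970, ratio 0.32
spectrum = [(200.0000, 1000.0), (201.9970, 320.0), (150.0, 400.0)]
print(has_halogen_pattern(spectrum))  # [(200.0, 'Cl')]
```

Multiply halogenated compounds produce richer M+2/M+4 envelopes, which is why the published workflow evaluates whole clusters of analyte fragments rather than single peak pairs.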
570 YOLO-IR: Infrared Small Object Detection in High Noise Images
Authors: Yufeng Li, Yinan Ma, Jing Wu, Chengnian Long
Abstract:
Infrared object detection aims at separating small, dim targets from cluttered backgrounds, and its capabilities extend beyond the limits of visible light, making it invaluable in a wide range of applications such as improving safety, security, efficiency, and functionality. However, existing methods are usually sensitive to noise in the input infrared image, leading to a decrease in target detection accuracy and an increase in the false alarm rate in high-noise environments. To address this issue, an infrared small target detection algorithm called YOLO-IR is proposed in this paper to improve robustness to high infrared noise. To address the problem that high noise significantly reduces the clarity and reliability of target features in infrared images, we design a soft-threshold coordinate attention mechanism to improve the model's ability to extract target features and its robustness to noise. Since the noise may overwhelm the local details of the target, resulting in the loss of small target features during depth down-sampling, we propose a deep and shallow feature fusion neck to improve the detection accuracy. In addition, because generalized Intersection over Union (IoU)-based loss functions may be sensitive to noise and lead to unstable training in high-noise environments, we introduce a Wasserstein-distance-based loss function to improve the training of the model. The experimental results show that YOLO-IR achieves a 5.0% improvement in recall and a 6.6% improvement in F1-score over existing state-of-the-art models.
Keywords: infrared small target detection, high noise, robustness, soft-threshold coordinate attention, feature fusion
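One common way to realize such a loss (this sketch follows the normalized Gaussian Wasserstein distance idea from the small-target detection literature; the paper's exact formulation may differ) is to model each box as a 2-D Gaussian and exponentiate the negative Wasserstein distance, which stays smooth even when tiny boxes barely overlap:

```python
import math

def wasserstein_box_distance(box1, box2):
    """Boxes as (cx, cy, w, h), each modeled as a Gaussian with mean (cx, cy)
    and diagonal covariance sqrt diag(w/2, h/2); the 2-Wasserstein distance
    between two such Gaussians has this closed form."""
    cx1, cy1, w1, h1 = box1
    cx2, cy2, w2, h2 = box2
    return math.sqrt((cx1 - cx2) ** 2 + (cy1 - cy2) ** 2
                     + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2)

def nwd_loss(box1, box2, C=10.0):
    """Normalized similarity in (0, 1]; loss = 1 - similarity. C is a scale
    constant, typically tied to the dataset's average object size."""
    return 1.0 - math.exp(-wasserstein_box_distance(box1, box2) / C)

identical = nwd_loss((10, 10, 4, 4), (10, 10, 4, 4))
shifted = nwd_loss((10, 10, 4, 4), (13, 14, 4, 4))
print(identical, shifted)  # 0.0 for identical boxes; the loss grows smoothly with offset
```

Unlike IoU-based losses, this distance still yields a useful gradient when a predicted box and a small ground-truth box are disjoint, which is the failure mode the abstract attributes to noisy training.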
569 Medical Diagnosis of Retinal Diseases Using Artificial Intelligence Deep Learning Models
Authors: Ethan James
Abstract:
Over one billion people worldwide suffer from some level of vision loss or blindness as a result of progressive retinal diseases. Many patients, particularly in developing areas, are incorrectly diagnosed or not diagnosed at all due to unconventional diagnostic tools and screening methods. Artificial intelligence (AI) based on deep learning (DL) convolutional neural networks (CNNs) has recently gained high interest in ophthalmology for computer-imaging diagnosis, disease prognosis, and risk assessment. Optical coherence tomography (OCT) is a popular imaging technique used to capture high-resolution cross-sections of the retina. In ophthalmology, DL has been applied to fundus photographs, optical coherence tomography, and visual fields, achieving robust classification performance in the detection of various retinal diseases including macular degeneration, diabetic retinopathy, and retinitis pigmentosa. However, there is no complete diagnostic model that analyzes these retinal images with a diagnostic accuracy above 90%. Thus, the purpose of this project was to develop an AI model that utilizes machine learning techniques to automatically diagnose specific retinal diseases from OCT scans. The algorithm consists of a residual neural network architecture with cyclic pooling, trained on a dataset of over 20,000 real-world OCT images. This DL model can ultimately aid ophthalmologists in diagnosing patients with these retinal diseases more quickly and more accurately, therefore facilitating earlier treatment, which results in improved post-treatment outcomes.
Keywords: artificial intelligence, deep learning, imaging, medical devices, ophthalmic devices, ophthalmology, retina
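The core idea of a residual network, referenced above, is that each block learns a correction added to its own input via a skip connection. A minimal NumPy sketch of one fully connected residual block follows (the model's actual convolutional layers and cyclic pooling are not specified in the abstract, so this only illustrates the skip connection):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, W1, W2):
    """out = ReLU(x + ReLU(x @ W1) @ W2): the skip connection lets identity
    information and gradients bypass the learned transformation."""
    y = relu(x @ W1)
    return relu(x + y @ W2)

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 8))              # batch of 2, feature width 8
W1 = rng.normal(scale=0.1, size=(8, 8))
W2 = rng.normal(scale=0.1, size=(8, 8))
out = residual_block(x, W1, W2)
print(out.shape)  # (2, 8): shape is preserved, so blocks can be stacked deeply
```

Because the block reduces to the identity (after ReLU) when its weights are zero, very deep stacks remain trainable, which is what makes residual architectures practical on datasets of tens of thousands of OCT scans.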
568 Motives for Reshoring from China to Europe: A Hierarchical Classification of Companies
Authors: Fabienne Fel, Eric Griette
Abstract:
Reshoring, whether back-reshoring or near-reshoring, is a fairly recent phenomenon. Despite the economic and political interest of this topic, academic research examining the determinants of reshoring remains rare. Our paper aims to help fill this gap. In order to better understand the reasons for reshoring, we conducted a study among 280 French firms during spring 2016, three-quarters of which sourced, or source, in China. 105 firms in the sample have reshored all or part of their Chinese production or supply in recent years, and we aimed to establish a typology of the motives that drove them to this decision. We asked our respondents about the history of their Chinese supplies, their current reshoring strategies, and their motivations. Statistical analysis was performed with SPSS 22 and SPAD 8. Our results show that a change in commercial and financial terms with China is the first motive explaining the current reshoring movement from this country (it applies to 54% of our respondents). A change in corporate strategy is the second motive (30% of our respondents); the reshoring decision follows a change in companies' strategies (upgrading, implementation of a CSR policy, or a 'lean management' strategy). The third motive (14% of our sample) is a mere correction of the initial offshoring decision, considered a mistake (under-estimation of hidden costs, non-quality and non-responsiveness problems). Some authors emphasize that developing a short supply chain, involving geographic proximity between design and production, gives a competitive advantage to companies wishing to offer innovative products. Admittedly, 40% of our respondents indicate that this motive could have played a part in their decision to reshore, but this reason was not sufficient on its own for any of them and is not an intrinsic motive for leaving Chinese suppliers.
Having questioned our respondents about the importance given to various problems leading them to reshore, we then performed a Principal Components Analysis (PCA), associated with an Ascending Hierarchical Classification (AHC) based on Ward's criterion, so as to point out more specific motivations. Three main classes of companies can be distinguished: (1) the 'Cost Killers' (23% of the sample), which reshore their supplies from China only because of higher procurement costs, so as to find lower costs elsewhere; (2) the 'Realists' (50% of the sample), giving equal weight to increasing procurement costs in China and, to a large extent, to the quality of their supplies; companies in this class tend to take advantage of this changing environment to change their procurement strategy, seeking suppliers offering better quality and responsiveness; (3) the 'Voluntarists' (26% of the sample), which choose to reshore their Chinese supplies regardless of higher Chinese costs, to obtain better quality and greater responsiveness. We emphasize that while the main driver for reshoring from China is indeed higher local costs, it should not be regarded as an exclusive motivation; 77% of the companies in the sample are also seeking, sometimes exclusively, more responsive suppliers attentive to quality, respect for the environment and intellectual property.
Keywords: China, procurement, reshoring, strategy, supplies
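The first step of that analysis, PCA on the respondents' importance ratings, can be illustrated generically (a standard SVD-based sketch on hypothetical survey scores, not the study's data or its SPSS/SPAD pipeline):

```python
import numpy as np

def pca(X: np.ndarray, n_components: int):
    """Principal components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T          # coordinates of each firm
    explained = (S ** 2) / (S ** 2).sum()      # variance share per component
    return scores, explained[:n_components]

# Hypothetical 1-5 importance ratings (105 reshoring firms x 5 motives,
# e.g. cost, quality, responsiveness, environment, intellectual property)
rng = np.random.default_rng(42)
X = rng.integers(1, 6, size=(105, 5)).astype(float)
scores, explained = pca(X, 2)
print(scores.shape)  # (105, 2): each firm projected onto the first two components
```

The hierarchical classification would then cluster the firms on these component scores, with Ward's criterion merging at each step the pair of clusters whose fusion least increases within-cluster variance.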
567 Evaluation of Requests and Outcomes of Magnetic Resonance Imaging Assessing for Cauda Equina Syndrome at a UK Trauma Centre
Authors: Chris Cadman, Marcel Strauss
Abstract:
Background: In 2020, the University Hospital Wishaw in the United Kingdom became the centre for trauma and orthopaedics within its health board. This resulted in the majority of patients with suspected cauda equina syndrome (CES) being assessed and imaged at this site, putting an increased demand on MR imaging and displacing other activity. Following this transition, imaging requests for CES did not always follow national guidelines and were often missing important clinical and safety information. There also appeared to be a very low positive scan rate compared with previously reported studies. In an attempt to improve patient selection and reduce the burden of CES imaging at this site, a clinical audit was performed. Methods: A total of 250 consecutive patients imaged to assess for CES were evaluated. Patients had to have presented acutely to either the emergency or orthopaedic department with a presenting complaint of suspected CES. Patients were excluded if they were not admitted acutely or were assessed by other clinical specialities. In total, 233 patients were included. Requests were assessed for appropriate clinical history, accurate and complete clinical assessment and MRI safety information. Clinical assessment was allocated a score of 1-6 based on information relating to history of pain, level of pain, dermatomes/myotomes affected, peri-anal paraesthesia/anaesthesia, anal tone and post-void bladder volume, with each element scoring one point. Images were assessed for positive findings of CES, acquired spinal stenosis or nerve root compression. Results: Overall, 73% of requests had a clear clinical history of CES. The urgency of the request for imaging was given in 23% of cases. The mean clinical assessment score was 3.7 out of a possible 6. Overall, 2% of scans were positive for CES, 29% showed acquired spinal stenosis and 30% showed nerve root compression.
For patients with CES, 75% had acute neurological signs compared with 68% of the study population. CES patients had a mean clinical assessment score of 5.3 compared with 3.7 for the study population. Overall, 95% of requests had appropriate MRI safety information. Discussion: This study included 233 patients who underwent specialist assessment and referral for MR imaging for suspected CES. Despite the serious nature of this condition, a large proportion of imaging requests did not have a clear clinical query of CES, and the level of urgency was not given, which could potentially lead to a delay in imaging and treatment. Clinical examination was often also incomplete, which can make triaging of patients presenting with similar symptoms challenging. The positive rate for CES was only 2%, much below other studies, which had positive rates of 6–40%, with a large meta-analysis finding a mean positive rate of 19%. These findings demonstrate an opportunity to improve the quality of imaging requests for suspected CES. This may help to improve patient selection for imaging and result in a positive rate for CES imaging that is more in line with other centres.
Keywords: cauda equina syndrome, acute back pain, MRI, spine
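The six-point assessment score used in the audit amounts to a documentation checklist, one point per element present on the request; as a sketch (the field names are ours, not the audit tool's):

```python
def clinical_assessment_score(request: dict) -> int:
    """One point for each of the six documented assessment elements."""
    elements = [
        "pain_history", "pain_level", "dermatomes_myotomes",
        "perianal_sensation", "anal_tone", "post_void_bladder_volume",
    ]
    return sum(1 for e in elements if request.get(e))

complete = {e: True for e in [
    "pain_history", "pain_level", "dermatomes_myotomes",
    "perianal_sensation", "anal_tone", "post_void_bladder_volume"]}
partial = {"pain_history": True, "pain_level": True, "perianal_sensation": True}
print(clinical_assessment_score(complete), clinical_assessment_score(partial))  # 6 3
```

Scoring requests this way makes the audit's headline contrast concrete: confirmed CES cases averaged 5.3 of 6 documented elements against 3.7 across all requests.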
Procedia PDF Downloads 115
566 A Rapid Prototyping Tool for Suspended Biofilm Growth Media
Authors: Erifyli Tsagkari, Stephanie Connelly, Zhaowei Liu, Andrew McBride, William Sloan
Abstract:
Biofilms play an essential role in treating water in biofiltration systems. Biofilm morphology and function are inextricably linked to the hydrodynamics of flow through a filter, and yet engineers rarely explicitly engineer this interaction. We develop a system that links computer simulation and 3-D printing to rapidly prototype filter media that optimize biofilm function, based on the hypothesis that biofilm function is intimately linked to the flow passing through the filter. A computational model that numerically solves the incompressible time-dependent Navier-Stokes equations, coupled to a model for biofilm growth and function, is developed. The model is embedded in an optimization algorithm that allows the model domain to adapt until criteria on biofilm functioning are met. This is applied to optimize the shape of filter media in a simple flow channel to promote biofilm formation. The computer code links directly to a 3-D printer, which allows us to prototype the design rapidly. Its validity is tested in flow visualization experiments and by microscopy. As proof of concept, the code was constrained to explore a small range of potential filter media, where the medium acts as an obstacle in the flow that sheds a von Karman vortex street, which was found to enhance the deposition of bacteria on surfaces downstream. The flow visualization and microscopy in the 3-D printed realization of the flow channel validated the predictions of the model and hence its potential as a design tool. Overall, it is shown that the combination of our computational model and 3-D printing can be used effectively as a design tool to prototype filter media that optimize biofilm formation.
Keywords: biofilm, biofilter, computational model, von Karman vortices, 3-D printing
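The adapt-until-criteria-met loop described above can be illustrated in miniature. The sketch below replaces the Navier-Stokes/biofilm solver with a stand-in objective over a single hypothetical shape parameter, and uses a golden-section search as a stand-in for the abstract's unspecified optimization algorithm:

```python
# Minimal sketch of the "adapt the medium until biofilm criteria are met"
# loop. The simulation is replaced by a stand-in objective; all names and
# the choice of golden-section search are illustrative assumptions.
def biofilm_score(shape_param: float) -> float:
    # Stand-in for the Navier-Stokes + biofilm-growth simulation:
    # assume deposition peaks at an intermediate obstacle width.
    return -(shape_param - 0.6) ** 2

def optimise_medium(lo: float = 0.0, hi: float = 1.0, tol: float = 1e-4) -> float:
    """Golden-section search over one shape parameter of the filter medium."""
    phi = (5 ** 0.5 - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - phi * (b - a), a + phi * (b - a)
        if biofilm_score(c) < biofilm_score(d):
            a = c  # maximum lies in [c, b]
        else:
            b = d  # maximum lies in [a, d]
    return (a + b) / 2  # shape parameter handed to the 3-D printer

best = optimise_medium()
```

In the real system, `biofilm_score` would be a full CFD run coupled to the biofilm growth model, and the returned geometry would be exported directly to the printer.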
Procedia PDF Downloads 142
565 Robust Segmentation of Salient Features in Automatic Breast Ultrasound (ABUS) Images
Authors: Lamees Nasser, Yago Diez, Robert Martí, Joan Martí, Ibrahim Sadek
Abstract:
Automated 3D breast ultrasound (ABUS) screening is a novel modality in medical imaging because of the characteristics it shares with other ultrasound modalities, in addition to the three orthogonal planes (i.e., axial, sagittal, and coronal) that are useful in the analysis of tumors. In the literature, few automatic approaches exist for typical tasks such as segmentation or registration. In this work, we deal with two problems concerning ABUS images: nipple and rib detection. The nipple and ribs are the most visible and salient features in ABUS images. Determining the nipple position plays a key role in some applications, for example, the evaluation of registration results or lesion follow-up. We present a nipple detection algorithm based on the color and shape of the nipple, as well as an automatic approach to detect the ribs. Rib detection is in fact considered one of the main stages in chest wall segmentation. This approach consists of four steps. First, images are normalized in order to minimize the intensity variability for a given set of regions within the same image or a set of images. Second, the normalized images are smoothed using an anisotropic diffusion filter. Next, the ribs are detected in each slice by analyzing the eigenvalues of the 3D Hessian matrix. Finally, a breast mask and a probability map of regions detected as ribs are used to remove false positives (FP). Qualitative and quantitative evaluation was performed on a total of 22 cases. Across all cases, the average and standard deviation of the root mean square error (RMSE) between manually annotated points placed on the rib surface and detected points on rib borders are 15.1188 mm and 14.7184 mm, respectively.
Keywords: Automated 3D Breast Ultrasound, Eigenvalues of Hessian matrix, Nipple detection, Rib detection
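The Hessian-eigenvalue step of the rib detection pipeline can be sketched as follows; the synthetic volume and thresholds are illustrative, not the authors' implementation. Bright tubular structures such as ribs yield two strongly negative eigenvalues (curvature across the tube) and one near zero (along its axis):

```python
import numpy as np

def hessian_eigenvalues(volume: np.ndarray) -> np.ndarray:
    """Per-voxel eigenvalues (ascending) of the 3D Hessian of an image volume."""
    grads = np.gradient(volume)           # first derivatives along each axis
    H = np.empty(volume.shape + (3, 3))
    for i, gi in enumerate(grads):
        for j, gij in enumerate(np.gradient(gi)):
            H[..., i, j] = gij            # second derivative d2I/dxi dxj
    return np.linalg.eigvalsh(H)          # symmetric eigensolver, sorted

# Synthetic volume with a single bright line along axis 0, mimicking a rib.
vol = np.zeros((9, 9, 9))
vol[:, 4, 4] = 1.0
ev = hessian_eigenvalues(vol)
# On the line, the two smallest eigenvalues are clearly negative and the
# third is ~0, which is the tubular signature thresholded in the pipeline.
```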
Procedia PDF Downloads 330
564 An Ensemble System of Classifiers for Computer-Aided Volcano Monitoring
Authors: Flavio Cannavo
Abstract:
Continuous evaluation of the status of potentially hazardous volcanoes plays a key role for civil protection purposes. Monitoring volcanic activity, especially energetic paroxysms that usually come with tephra emissions, is crucial not only for the exposure of the local population but also for airline traffic. Presently, real-time surveillance of most volcanoes worldwide is essentially delegated to one or more human experts in volcanology, who interpret data coming from different kinds of monitoring networks. Unfortunately, the high nonlinearity of the complex and coupled volcanic dynamics leads to a large variety of volcanic behaviors. Moreover, continuously measured parameters (e.g. seismic, deformation, infrasonic and geochemical signals) are often not able to fully explain the ongoing phenomenon, making fast volcano state assessment a very puzzling task for the personnel on duty in the control rooms. With the aim of aiding the personnel on duty in volcano surveillance, here we introduce a system based on an ensemble of data-driven classifiers to infer automatically the ongoing volcano status from all the available kinds of measurements. The system consists of a heterogeneous set of independent classifiers, each one built with its own data and algorithm. Each classifier outputs an assessment of the volcanic status. The ensemble technique weights each classifier's output to combine all the classifications into a single status that maximizes the performance. We tested the model on the Mt. Etna (Italy) case study by considering a long record of multivariate data from 2011 to 2015 and cross-validated it. Results indicate that the proposed model is effective and of great value for decision-making purposes.
Keywords: Bayesian networks, expert system, Mount Etna, volcano monitoring
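The weighted combination of classifier outputs can be sketched as a weighted vote. The status labels, classifier outputs and weights below are illustrative, not those of the actual Etna system:

```python
# Illustrative weighted-vote ensemble: each independent classifier outputs a
# volcano status, and outputs are combined with performance-based weights.
# Status labels, classifiers and weights are hypothetical.
STATUSES = ["quiet", "unrest", "eruptive"]

def ensemble_status(votes, weights):
    """Return the status maximizing the weighted sum of classifier votes."""
    score = {s: 0.0 for s in STATUSES}
    for vote, weight in zip(votes, weights):
        score[vote] += weight
    return max(score, key=score.get)

# Three classifiers (say seismic, deformation, geochemical) disagree;
# the two better-weighted ones carry the combined decision.
print(ensemble_status(["unrest", "unrest", "quiet"], [0.4, 0.5, 0.3]))  # → unrest
```

In practice the weights would be fitted on held-out data, e.g. via the cross-validation the abstract mentions, so that the combined status maximizes classification performance.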
Procedia PDF Downloads 246
563 Gold-Mediated Modification of Apoferritin Surface with Targeting Antibodies
Authors: Simona Dostalova, Pavel Kopel, Marketa Vaculovicova, Vojtech Adam, Rene Kizek
Abstract:
The protein apoferritin seems to be a very promising structure for use as a nanocarrier. It is prepared from the intracellular protein ferritin, which is naturally found in most organisms. The role of ferritin proteins is to store and transport ferrous ions. Apoferritin is a hollow protein cage without ferrous ions that can be prepared from ferritin by reduction with thioglycolic acid or dithionite. The structure of apoferritin is composed of 24 protein subunits, creating a sphere 12 nm in diameter. The inner cavity has a diameter of 8 nm. The drug encapsulation process is based on the response of the apoferritin structure to pH changes in the surrounding solution. At low pH, apoferritin is disassembled into individual subunits and its structure is "opened". It can then be mixed with any desired cytotoxic drug, and after adjustment of the pH back to neutral, the subunits are reconnected and the drug is encapsulated within the apoferritin particles. Excess drug molecules can be removed by dialysis. The receptors for apoferritin, SCARA5 and TfR1, can be found in the membrane of both healthy and cancer cells. To enhance the specific targeting of the apoferritin nanocarrier, it is possible to modify its surface with targeting moieties, such as antibodies. To ensure a sterically correct complex, we used a peptide linker based on protein G, whose N-terminus has affinity for the Fc region of antibodies. To connect the peptide to the surface of apoferritin, the C-terminus of the peptide was a cysteine with affinity for gold. The surface of apoferritin with encapsulated doxorubicin (ApoDox) was coated either with gold nanoparticles (ApoDox-Nano) or with gold (III) chloride hydrate reduced with sodium borohydride (ApoDox-HAu). The applied amount of gold in the form of gold (III) chloride hydrate was 10 times higher than in the case of gold nanoparticles.
However, after removal of the excess unbound ions by electrophoretic separation, the concentration of gold on the surface of apoferritin was only 6 times higher for ApoDox-HAu than for ApoDox-Nano. Moreover, the reduction with sodium borohydride caused a loss of the fluorescent properties of doxorubicin (excitation maximum at 480 nm with emission maximum at 600 nm) and thus of its biological activity. The fluorescent properties of ApoDox-Nano were similar to those of unmodified ApoDox, so it was better suited for the intended use. To evaluate the specificity of apoferritin modified with antibodies, we used an ELISA-like method in which the surface of microtitration plate wells was coated with the antigen (goat anti-human IgG antibodies). To these wells, we applied ApoDox without targeting antibodies and ApoDox-Nano modified with targeting antibodies (human IgG antibodies). After incubation and subsequent rinsing with water, the amount of unmodified ApoDox on the antigen was 5 times lower than in the case of ApoDox-Nano modified with targeting antibodies. Modifying non-gold ApoDox with antibodies caused no change in its targeting properties. It can therefore be concluded that the demonstrated procedure allows us to create a nanocarrier with enhanced targeting properties, suitable for nanomedicine.
Keywords: apoferritin, doxorubicin, nanocarrier, targeting antibodies
Procedia PDF Downloads 389