Search results for: event injuries
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1716

1296 Optimal Pressure Control and Burst Detection for Sustainable Water Management

Authors: G. K. Viswanadh, B. Rajasekhar, G. Venkata Ramana

Abstract:

Water distribution networks play a vital role in ensuring a reliable supply of clean water to urban areas. However, they face several challenges, including pressure control, pump speed optimization, and burst event detection. This paper combines insights from two studies to address these critical issues in water distribution networks, focusing on the specific context of Kapra Municipality, India. The first part of this research concentrates on optimizing pressure control and pump speed in complex water distribution networks. It utilizes the EPANET-MATLAB Toolkit to integrate EPANET functionalities into the MATLAB environment, offering a comprehensive approach to network analysis. By optimizing pressure reducing valves (PRVs) and variable speed pumps (VSPs), this study achieves remarkable results. In the benchmark water distribution system (WDS), the proposed PRV optimization algorithm reduces average leakage by 20.64%, surpassing the previous achievement of 16.07%. When applied to the South-Central and East zone WDSs of Kapra Municipality, it identifies PRV locations that were previously missed by existing algorithms, resulting in average leakage reductions of 22.04% and 10.47%, respectively. These reductions translate to significant daily water savings, enhancing water supply reliability and reducing energy consumption. The second part of this research addresses the pressing issue of burst event detection and localization within the water distribution system. Burst events are a major contributor to water losses and repair expenses. The study employs wireless sensor technology to monitor pressure and flow rate in real time, enabling the detection of pipeline abnormalities, particularly burst events. The methodology relies on transient analysis of pressure signals, using cumulative sum (CUSUM) and wavelet analysis techniques to robustly identify burst occurrences. To enhance precision, burst events are localized through analysis of the time differentials in the arrival of negative pressure waveforms at distinct pressure sensing points, aided by nodal matrix analysis. To evaluate the effectiveness of this methodology, a PVC water pipeline test bed is employed, demonstrating the algorithm's success in detecting pipeline burst events at flow rates of 2-3 l/s. Remarkably, the algorithm achieves a localization error of merely 3 meters, outperforming previously established algorithms. This research presents a significant advancement in efficient burst event detection and localization within water pipelines, holding the potential to markedly curtail water losses and the associated costs. In conclusion, this combined research addresses critical challenges in water distribution networks, offering solutions for optimizing pressure control and pump speed and for detecting and localizing burst events. These findings contribute to the enhancement of water distribution systems, resulting in improved water supply reliability, reduced water losses, and substantial cost savings. The integrated approach presented in this paper holds promise for municipalities and utilities seeking to improve the efficiency and sustainability of their water distribution networks.
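As a rough illustration of the CUSUM stage of the burst detector described above, the sketch below raises an alarm when a sustained drop in a pressure trace accumulates past a threshold. The trace, the slack `k`, and the threshold `h` are invented for illustration and are not the authors' parameters.

```python
def cusum_detect(signal, target, k=0.5, h=5.0):
    """One-sided (lower) CUSUM: return the first index at which the
    statistic exceeds h, flagging a sustained drop below `target`
    (e.g. the negative pressure wave of a burst), else None."""
    s_low = 0.0
    for i, x in enumerate(signal):
        # accumulate evidence of a downward shift beyond the slack k
        s_low = max(0.0, s_low + (target - x) - k)
        if s_low > h:
            return i
    return None

# Synthetic pressure trace (arbitrary units): steady at 50, burst at sample 60
pressure = [50.0] * 60 + [42.0] * 40
alarm_index = cusum_detect(pressure, target=50.0)
```

With these toy settings the alarm fires on the first post-burst sample; in the paper's pipeline the wavelet stage corroborates the alarm, and arrival-time differentials across sensors then localize the burst.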

Keywords: pressure reducing valve, complex networks, variable speed pump, wavelet transform, burst detection, CUSUM (Cumulative Sum), water pipeline monitoring

Procedia PDF Downloads 87
1295 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement

Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini

Abstract:

Investigating possible capacities of visual functions, where adapted mechanisms can enhance the capability of sports trainees, is a promising area of research, not only from the cognitive viewpoint but also in terms of its wide applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players were processed in a pilot study. The two groups of amateur and trained subjects were asked to imagine themselves in the state of receiving a ball while being shown a simulated volleyball field. The proposed method is based on a set of time-frequency features, extracted from the VEP signals using algorithms such as the Gabor filter, the continuous wavelet transform, and a multi-stage wavelet decomposition, that can be indicative of amateur versus trained status. A linear discriminant classifier achieves an accuracy, sensitivity, and specificity of 100% when the average of the repetitions of the signal corresponding to the task is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination as a neurofeedback parameter to be utilized for improving volleyball players' performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed.
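The classification pipeline above can be caricatured as follows: a single spectral-power feature per trial (a crude stand-in for the paper's Gabor/wavelet features) fed to a Fisher-style linear discriminant, which for one feature reduces to a midpoint threshold. All signals, sampling rates, and class differences here are synthetic assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 250  # assumed sampling rate in Hz (illustrative only)

def band_power(x, lo, hi):
    """Mean spectral power of x in the [lo, hi] Hz band -- a crude
    stand-in for the paper's time-frequency features."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)
    psd = np.abs(np.fft.rfft(x)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def make_trial(trained):
    """Synthetic 1-second 'VEP' trial: trained players get a stronger
    10 Hz component on top of white noise (a made-up class difference)."""
    t = np.arange(FS) / FS
    amp = 2.0 if trained else 0.5
    return amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0.0, 1.0, FS)

y = np.array([0] * 30 + [1] * 30)                # 0 = amateur, 1 = trained
X = np.array([band_power(make_trial(c), 8, 12) for c in y])

# Fisher's linear discriminant on one feature reduces to a threshold
# halfway between the class means
threshold = (X[y == 0].mean() + X[y == 1].mean()) / 2
pred = (X > threshold).astype(int)
accuracy = (pred == y).mean()
```

On these cleanly separated synthetic classes the threshold classifies nearly every trial correctly, mirroring (in spirit only) the perfect separation the study reports on averaged signals.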

Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis

Procedia PDF Downloads 138
1294 The Impact of Hosting an On-Site Vocal Concert in Preschool on Music Inspiration and Learning Among Preschoolers

Authors: Meiying Liao, Poya Huang

Abstract:

The aesthetic domain is one of the six major domains in the Taiwanese preschool curriculum, encompassing visual arts, music, and dramatic play. Its primary objective is to cultivate children’s abilities in exploration and awareness, expression and creation, and response and appreciation. The purpose of this study was to explore the effects of hosting a vocal music concert on aesthetic inspiration and learning among preschoolers in a preschool setting. The primary research method employed was a case study focusing on a private preschool in Northern Taiwan that organized a school-wide event featuring two vocalists. The concert repertoires included children’s songs, folk songs, and arias performed in Mandarin, Hakka, English, German, and Italian. In addition to professional performances, preschool teachers actively participated by presenting a children’s song. A total of 5 classes, comprising approximately 150 preschoolers, along with 16 teachers and staff, participated in the event. Data collection methods included observation, interviews, and documents. Results indicated that both teachers and children thoroughly enjoyed the concert, with high levels of acceptance when the program was appropriately designed and hosted. Teachers reported that post-concert discussions with children revealed the latter’s ability to recall people, events, and elements observed during the performance, expressing their impressions of the most memorable segments. The concert effectively achieved the goals of the aesthetic domain, particularly in fostering response and appreciation. It also inspired preschoolers’ interest in music. Many teachers noted an increased desire for performance among preschoolers after exposure to the concert, with children imitating the performers and their expressions. Remarkably, one class extended this experience by incorporating it into the curriculum, autonomously organizing a high-quality concert in the music learning center. 
Parents also reported that preschoolers enthusiastically shared their concert experiences at home. In conclusion, despite being a single event, the positive responses from preschoolers towards the music performance suggest a meaningful impact. These experiences extended into the curriculum, as firsthand exposure to performances allowed teachers to deepen related topics, fostering a habit of autonomous learning in the designated learning centers.

Keywords: concert, early childhood music education, aesthetic education, music development

Procedia PDF Downloads 49
1293 Simulation Aided Life Cycle Sustainability Assessment Framework for Manufacturing Design and Management

Authors: Mijoh A. Gbededo, Kapila Liyanage, Ilias Oraifige

Abstract:

Decision making for sustainable manufacturing design and management requires critical consideration due to the complexity and partly conflicting nature of economic, social, and environmental factors. Although there are tools capable of assessing one or two of the sustainability factors in combination, existing frameworks have not adequately integrated all three. A case study and a review of existing simulation applications also show that current approaches lack integration of the sustainability factors. In this paper, we discuss the development of a simulation-based framework to support a holistic assessment of sustainable manufacturing design and management. To achieve this, a strategic approach is introduced to investigate the strengths and weaknesses of existing decision-support tools. The investigation reveals that discrete event simulation (DES) can serve as a foundation for other life cycle analysis frameworks. The Simio DES application optimizes systems for both economic and competitive advantage; Granta CES EduPack and SimaPro collate data for material flow analysis and environmental life cycle assessment; and social and stakeholder analysis is supported by the Analytical Hierarchy Process, a multi-criteria decision analysis method. Such a common, integrated framework creates a platform for companies to build a computer simulation model of a real system and assess the impact of alternative solutions before implementing a chosen one.
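To make the stakeholder-analysis step concrete, here is a minimal sketch of the Analytical Hierarchy Process named above: criteria weights are taken from the principal eigenvector of a pairwise comparison matrix, with the standard consistency check. The pairwise judgments comparing the three sustainability pillars are invented for illustration.

```python
import numpy as np

# Pairwise comparison matrix (Saaty 1-9 scale); rows/cols are
# economic, environmental, social -- judgments invented for illustration
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.0, 1.0,   3.0],
    [1/5.0, 1/3.0, 1.0],
])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
weights = vecs[:, k].real
weights = weights / weights.sum()        # priority weights, summing to 1

# Consistency ratio: CI = (lambda_max - n) / (n - 1); random index
# RI = 0.58 for n = 3; CR < 0.1 is the usual acceptability rule of thumb
lam_max = vals.real[k]
cr = ((lam_max - 3) / 2) / 0.58
```

With these judgments the economic criterion dominates and the matrix passes the consistency check; in the framework, such weights would then combine the DES, LCA, and stakeholder scores into one ranking of alternatives.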

Keywords: discrete event simulation, life cycle sustainability analysis, manufacturing, sustainability

Procedia PDF Downloads 279
1292 Utility of Thromboelastography to Reduce Coagulation-Related Mortality and Blood Component Rate in Neurosurgery ICU

Authors: Renu Saini, Deepak Agrawal

Abstract:

Background: Patients with head and spinal cord injury frequently have deranged coagulation profiles and require blood product transfusion perioperatively. Thromboelastography (TEG) is a ‘bedside’ global test of coagulation which may have a role in deciding the need for transfusion in such patients. Aim: To assess the usefulness of TEG in a neurosurgery department in decreasing transfusion rates and coagulation-related mortality in traumatic head and spinal cord injury. Methods: A retrospective comparative study was carried out in the department of neurosurgery over a period of one year. The study comprises two groups: the ‘control’ group consists of patients for whom data were collected over the 6 months (1/6/2009-31/12/2009) prior to installation of the TEG machine, and the ‘test’ group consists of patients for whom data were collected over the 6 months (1/1/2013-30/6/2013) after TEG installation. The total numbers of platelet, FFP, and cryoprecipitate transfusions were noted in both groups, along with in-hospital mortality and length of stay. Results: The two groups were matched in patient age and sex, number of head and spinal cord injury cases, number of patients with thrombocytopenia, and number of patients who underwent operation. A total of 178 patients (135 head injury and 43 spinal cord injury patients) were admitted to the neurosurgery department from June 2009 to December 2009, i.e., prior to TEG installation, and 243 patients (197 head injury and 46 spinal cord injury patients) were admitted after TEG installation. After the introduction of TEG, platelet transfusion was significantly reduced compared with the control group (67 units to 34 units, p=0.000). The mortality rate was significantly reduced after installation (77 patients to 57 patients, p=0.000), as was length of stay (range 1-211 days before installation vs. 1-115 days after, p=0.02). Conclusion: Bedside TEG can dramatically reduce platelet transfusion requirements in a neurosurgery department. TEG also led to a marked decrease in mortality rate and length of stay in patients with traumatic head and spinal cord injuries. We recommend its use as a standard of care in patients with traumatic head and spinal cord injuries.

Keywords: blood component transfusion, mortality, neurosurgery ICU, thromboelastography

Procedia PDF Downloads 325
1291 Application of Simulation of Discrete Events in Resource Management of Massive Concreting

Authors: Mohammad Amin Hamedirad, Seyed Javad Vaziri Kang Olyaei

Abstract:

Project planning and control are among the most critical issues in the management of construction projects. Traditional methods of project planning and control, such as the critical path method or the Gantt chart, are not well suited to planning projects with discrete and repetitive activities, and one of the problems facing project managers is planning the implementation process and the optimal allocation of its resources. Massive concreting is likewise a project type with discrete and repetitive activities. This study uses the concept of discrete-event simulation to manage resources, which includes finding the optimal number of resources under various constraints, such as limitations on machinery, equipment, and human resources, as well as technical, time, and implementation constraints, using analysis of the resource consumption rate, the project completion time, and critical points of the implementation process. For this purpose, discrete-event simulation has been used to model the different stages of implementation. After reviewing the various scenarios, the optimal allocation for each resource is determined so as to reach the maximum utilization rate and also to reduce the project completion time or its cost, subject to the existing constraints. The results showed that with the optimal allocation of resources, the project completion time could be reduced by 90%, and the resulting costs by up to 49%. Thus, allocating the optimal number of project resources using this method will reduce both its time and its cost.
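A toy version of this approach, with invented durations and a made-up deadline, can be sketched as a discrete-event simulation in which identical pour tasks are served by parallel crews and scenarios with different crew counts are compared:

```python
import heapq

def simulate(num_crews, num_pours, pour_hours=4.0):
    """Minimal discrete-event simulation: num_pours identical concreting
    tasks served by num_crews parallel crews; returns the makespan in
    hours. Durations and counts are illustrative, not from the study."""
    free_at = [0.0] * num_crews          # event list: next-free time per crew
    heapq.heapify(free_at)
    makespan = 0.0
    for _ in range(num_pours):
        start = heapq.heappop(free_at)   # earliest-available crew takes the task
        end = start + pour_hours
        makespan = max(makespan, end)
        heapq.heappush(free_at, end)
    return makespan

# Scenario sweep: the smallest crew count meeting a 24-hour deadline
best_crews = next(n for n in range(1, 21) if simulate(n, 30) <= 24.0)
```

Here five crews are the first to meet the deadline; a fuller model would add setup times, machinery limits, and cost terms before choosing among scenarios, which is the role the study's simulation plays.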

Keywords: simulation, massive concreting, discrete event simulation, resource management

Procedia PDF Downloads 148
1290 Examining the Concept of Sustainability in the Scenery Architecture of Naqsh-e-Jahan Square

Authors: Mahmood Naghizadeh, Maryam Memarian, Hourshad Irvash

Abstract:

Following the rise in the world population and the upward growth of urbanization, the design, planning, and management of site scenery for the purpose of presenting and expanding sustainable site scenery has become a central concern for experts. Since the fundamental principles of site scenery change more or less haphazardly over time, sustainable site scenery can be viewed as an ideal goal: both sustainability and dynamism come into view in urban site scenery, so it cannot be designed according to a set of pre-determined principles. Sustainable site scenery, as the ongoing interaction between idealism and pragmatism together with sustainability factors, is a dynamic phenomenon created by bringing cultural, historical, social, and natural scenery together. Such an interaction does not subdue these factors but reinforces them; far from attenuating over time, sustainable site scenery gains strength. The sustainability of a site scene or an event over time depends on its site identity, which grows out of its continuous association with the past and is intertwined with the identity of the place from past to present; this history supports the present and future of the scene, and the result of such a supportive role is sustainable site scenery. Isfahan's Naqsh-e-Jahan Square is one of the most outstanding squares in the world and the best embodiment of Iranian site scenery architecture. The square is an arena that brings people together and a dynamic city center comprising various urban and religious complexes, spaces, and facilities, and it is considered one of the most favorable traditional urban spaces of Iran. Such a place can illustrate many factors related to sustainable site scenery. On the other hand, there are still no specific principles concerning sustainability in the architecture of site scenery, even though sustainability is recognized as a rather modern view in architecture. The purpose of this research is to identify the factors involved in sustainability in general and to examine their effects on site scenery architecture in particular; finally, these factors are studied with respect to Naqsh-e-Jahan Square. The research adopts an analytic-descriptive approach based on a review of the literature and documents related to sustainability and site scenery architecture. The statistical population comprises squares constructed during the Safavid dynasty, with Naqsh-e-Jahan Square chosen as the case study. The paper aims to arrive at a working definition of sustainable site scenery and to demonstrate the concept by analyzing the social, economic, and ecological aspects of this site.

Keywords: Naqsh-e-Jahan Square, site scenery architecture, sustainability, sustainable site scenery

Procedia PDF Downloads 312
1289 Identification Algorithm of Critical Interface, Modelling Perils on Critical Infrastructure Subjects

Authors: Jiří. J. Urbánek, Hana Malachová, Josef Krahulec, Jitka Johanidisová

Abstract:

The paper deals with the investigation and modelling of crisis situations within critical infrastructure organizations. Every crisis situation originates in the occurrence of an emergency event, especially in organizations of the energy critical infrastructure. Emergency events may be expected, in which case crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities for coping with them, or unexpected (the Black Swan effect), without a pre-prepared scenario, requiring operational handling of the resulting crisis situation. The forms, characteristics, behaviour, and utilization of crisis scenarios vary in quality, depending on the prevention and training processes of the real critical infrastructure organization; the aim is always better organizational security and continuity. The objective of this paper is to find and investigate critical/crisis zones and functions in models of critical situations in critical infrastructure organizations. The DYVELOP (Dynamic Vector Logistics of Processes) method is able to identify problematic critical zones and functions, displaying critical interfaces among the actors of crisis situations on DYVELOP maps called Blazons. To realize this ability, it is first necessary to derive and create an identification algorithm for critical interfaces. The locations of critical interfaces flag crisis situations in a real critical infrastructure organization. Finally, the model of the critical interface is demonstrated for a real Czech energy critical infrastructure subject in a blackout peril environment. The Blazons are best presented live in PowerPoint for full comprehension of this paper's aims.

Keywords: algorithm, crisis, DYVELOP, infrastructure

Procedia PDF Downloads 408
1288 Developing a Systemic Monoclonal Antibody Therapy for the Treatment of Large Burn Injuries

Authors: Alireza Hassanshahi, Xanthe Strudwick, Zlatko Kopecki, Allison J Cowin

Abstract:

Studies have shown that Flightless (Flii) is elevated in human wounds, including burns, and that reducing the level of Flii is a promising approach for improving wound repair and reducing scar formation. The most effective approach to date has been to neutralise Flii activity using localized, intradermal application of function-blocking monoclonal antibodies. However, large-surface-area burns are difficult to treat by intradermal injection of therapeutics, so the aim of this study was to investigate whether a systemic injection of a monoclonal antibody against Flii could improve healing in mice following burn injury. Flii neutralizing antibodies (FnAbs) were labelled with Alexa-Fluor-680 for biodistribution studies, and the healing effects of systemically administered FnAbs were assessed in mice with burn injuries. A partial thickness, 7% (70 mm2) total body surface area scald burn injury was created on the dorsal surface of mice (n=10/group), and 100 µL of Alexa-Fluor-680-labelled FnAbs was injected into the intraperitoneal (IP) cavity at the time of injury. The burns were imaged on days 0, 1, 2, 3, 4, and 7 using the IVIS Lumina S5 Imaging System, and healing was assessed macroscopically, histologically, and using immunohistochemistry. Fluorescent radiance efficiency measurements showed that IP-injected Alexa-Fluor-680-FnAbs localized at the site of burn injury from day 1, remaining there for the whole 7-day study. The burns treated with FnAbs showed a reduction in macroscopic wound area and an increased rate of epithelialization compared to controls. Immunohistochemistry for NIMP-R14 showed a reduction in the inflammatory infiltrate, while CD31/VEGF staining showed improved angiogenesis after systemic FnAb treatment. These results suggest that systemically administered FnAbs are active within the burn site and can improve healing outcomes. The clinical application of systemically injected Flii monoclonal antibodies could therefore be a potential approach for promoting the healing of large-surface-area burns immediately after injury.

Keywords: biodistribution, burn, flightless, systemic, FnAbs

Procedia PDF Downloads 172
1287 Discriminant Analysis of Pacing Behavior on Mass Start Speed Skating

Authors: Feng Li, Qian Peng

Abstract:

Mass start speed skating (MSSS) was a new event at the 2018 PyeongChang Winter Olympics and will be an official race at the 2022 Beijing Winter Olympics. Given that event rankings are based on points gained on specific laps, it is worthwhile to investigate the pacing behavior on each lap, which directly influences the ranking of the race. The aim of this study was to examine pacing behavior and performance in MSSS with respect to skaters' level (SL), competition stage (semi-final/final) (CS), and gender (G). All men's and women's races in the World Cup and World Championships in the 2018-2019 and 2019-2020 seasons were analyzed; in total, 601 skaters from 36 races were observed. ANOVA for repeated measures was applied to compare the pacing behavior on each lap, and three-way ANOVA for repeated measures was used to identify the influence of SL, CS, and G on pacing behavior and total time spent. In general, the results showed that the laps, ordered from fast to slow, fell into cluster 1 (laps 4, 8, 12, 15, 16), cluster 2 (laps 5, 9, 13, 14), cluster 3 (laps 3, 6, 7, 10, 11), and cluster 4 (laps 1 and 2) (p=0.000). For CS, total time spent in the final was less than in the semi-final (p=0.000). For SL, top-level skaters spent less total time than middle-level and low-level skaters (p≤0.002), while there was no significant difference between the middle and low levels (p=0.214). For G, men's skaters spent less total time than women on all laps (p≤0.048). This study could help coaching staff better understand pacing behavior with regard to SL, CS, and G, providing a reference for improving pacing strategy and decision making before and during the race.

Keywords: performance analysis, pacing strategy, winning strategy, winter Olympics

Procedia PDF Downloads 193
1286 An Event-Related Potentials Study on the Processing of English Subjunctive Mood by Chinese ESL Learners

Authors: Yan Huang

Abstract:

The event-related potentials (ERPs) technique helps researchers make continuous measurements of the whole process of language comprehension, with excellent temporal resolution at the level of milliseconds. Research on sentence processing has developed from the behavioral level to the neuropsychological level, giving rise to a variety of sentence processing theories and models; however, the applicability of these models to L2 learners is still under debate. The present study therefore aims to investigate the neural mechanisms underlying the processing of the English subjunctive mood by Chinese ESL learners. To this end, English subject clauses with subjunctive mood are used as the stimuli, all of which follow the same syntactic structure, “It is + adjective + that … + (should) do + …” In addition, in order to examine the role that language proficiency plays in L2 processing, the research involves two groups of Chinese ESL learners (18 males and 22 females, mean age = 21.68): a high proficiency group (Group H) and a low proficiency group (Group L). The behavioral and neurophysiological data analysis reveals the following findings: 1) syntax and semantics interact with each other in the second phase (300-500 ms) of sentence processing, which is partially in line with the Three-phase Sentence Model; 2) language proficiency does affect L2 processing. Specifically, for Group H, syntactic processing plays the dominant role in sentence processing, while for Group L, semantic processing also affects syntactic parsing during the third phase of sentence processing (500-700 ms). Moreover, Group H, compared to Group L, demonstrates a more native-like ERP pattern, which further demonstrates the role of language proficiency in L2 processing. Based on these findings, the paper also offers some implications for L2 pedagogy as well as L2 proficiency assessment.

Keywords: Chinese ESL learners, English subjunctive mood, ERPs, L2 processing

Procedia PDF Downloads 131
1285 Therapeutic Effects of Toll Like Receptor 9 Ligand CpG-ODN on Radiation Injury

Authors: Jianming Cai

Abstract:

Exposure to ionizing radiation causes severe damage to the human body, and a safe and effective radioprotector is urgently required for alleviating radiation damage. In 2008, flagellin, an agonist of TLR5, was found to exert radioprotective effects against radiation injury by activating the NF-kB signaling pathway; since then, the radioprotective effects of TLR ligands have shed new light on radiation protection. CpG-ODN is an unmethylated oligodeoxynucleotide which activates the TLR9 signaling pathway. In this study, we demonstrated that CpG-ODN has therapeutic effects on radiation injuries induced by γ rays and 12C6+ heavy ion particles. Our data showed that CpG-ODN increased the survival rate of mice after whole body irradiation and increased the numbers of leukocytes and bone marrow cells. CpG-ODN also alleviated radiation damage to the intestinal crypt by regulating the apoptosis signaling pathway, including bcl2, bax, and caspase 3. Using a radiation-induced pulmonary fibrosis model, we found that CpG-ODN could alleviate structural damage within 20 weeks after whole-thorax 15 Gy irradiation. In this model, the Th1/Th2 imbalance induced by irradiation was also reversed by CpG-ODN, and we found that the TGFβ-Smad signaling pathway was regulated by CpG-ODN, which accounts for its therapeutic effects in radiation-induced pulmonary injury. With regard to high-LET radiation protection, we investigated the protective effects of CpG-ODN against 12C6+ heavy ion irradiation and found that after CpG-ODN treatment, the apoptosis and cell cycle arrest induced by 12C6+ irradiation were reduced. CpG-ODN also reduced the expression of bax and caspase 3, while increasing the level of bcl2. We then examined the effect of CpG-ODN on heavy-ion-induced immune dysfunction. Our data showed that CpG-ODN increased the survival rate of mice, as well as leukocyte counts, after 12C6+ irradiation. In addition, the structural damage to immune organs such as the thymus and spleen was alleviated by CpG-ODN treatment. In conclusion, the TLR9 ligand CpG-ODN reduced radiation injuries caused by γ-ray and 12C6+ heavy ion irradiation. On the one hand, CpG-ODN inhibited radiation-induced apoptosis by regulating bcl2, bax, and caspase 3; on the other hand, by activating TLR9, CpG-ODN recruits the MyD88-IRAK-TRAF6 complex, activating TAK1, IRF5, and the NF-kB pathway, and thus alleviates radiation damage. This study provides novel insights into the protection against and therapy of radiation damage.

Keywords: TLR9, CpG-ODN, radiation injury, high LET radiation

Procedia PDF Downloads 480
1284 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling

Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong

Abstract:

This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocations, and facility configurations to minimize the total cost (CT) of the entire network. These facilities can be, but are not limited to, manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs). To address this problem, three major tasks are undertaken. First, a mixed-integer non-linear programming (MINLP) mathematical model is developed. Then, the system's behavior under different conditions is observed using a simulation modeling tool. Finally, the optimum solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties in finding the optimum solution, an integration of modeling and simulation methodologies is proposed, followed by the development of a new approach known as GASG: a genetic algorithm on the basis of granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using discrete-event simulation (DES) within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of the genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.
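As a heavily simplified sketch of the genetic-algorithm ingredient of such an approach, the toy below evolves a bit-string choosing which candidate distribution centres to open, trading fixed opening costs against consumer-to-DC distances. All locations, costs, and GA settings are invented; the actual GASG method couples the GA with granular simulation rather than a closed-form cost.

```python
import random

random.seed(0)

SITES = [(0, 0), (4, 0), (8, 0), (0, 6), (4, 6), (8, 6)]   # candidate DC sites
CONSUMERS = [(1, 1), (7, 1), (2, 5), (6, 5), (4, 3)]       # consumer locations
FIXED_COST = 10.0                                          # cost per open DC

def total_cost(genome):
    """Fixed cost of open DCs plus each consumer's Euclidean distance
    to its nearest open DC (infeasible if nothing is open)."""
    open_sites = [s for s, g in zip(SITES, genome) if g]
    if not open_sites:
        return float("inf")
    assigned = sum(min(((cx - sx) ** 2 + (cy - sy) ** 2) ** 0.5
                       for sx, sy in open_sites)
                   for cx, cy in CONSUMERS)
    return FIXED_COST * sum(genome) + assigned

def evolve(pop_size=16, generations=60, p_mut=0.1):
    n = len(SITES)
    pop = [tuple(random.randint(0, 1) for _ in range(n))
           for _ in range(pop_size - 1)]
    pop.append((1,) * n)                     # seed one feasible genome
    best = min(pop, key=total_cost)
    for _ in range(generations):
        nxt = [best]                         # elitism: keep the incumbent
        while len(nxt) < pop_size:
            a, b = random.sample(pop, 2)     # parents
            cut = random.randrange(1, n)     # one-point crossover
            child = tuple(1 - g if random.random() < p_mut else g
                          for g in a[:cut] + b[cut:])  # bit-flip mutation
            nxt.append(child)
        pop = nxt
        best = min(pop, key=total_cost)
    return best, total_cost(best)

best_genome, best_cost = evolve()
```

Because the incumbent is carried over each generation (elitism), the returned cost can never be worse than the seeded all-open configuration; in GASG the same selection/crossover/mutation loop is driven by simulated rather than analytic costs.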

Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system

Procedia PDF Downloads 316
1283 Integrating a Security Operations Centre with an Organization’s Existing Procedures, Policies and Information Technology Systems

Authors: M. Mutemwa

Abstract:

A Security Operations Centre (SOC) is a centralized hub for network event monitoring and incident response. SOCs are critical when determining an organization's cybersecurity posture because they can be used to detect, analyze, and report on various malicious activities. For most organizations, a SOC is not part of the initial design and implementation of the Information Technology (IT) environment but rather an afterthought. As a result, it is not natively a plug-and-play component, and there are integration challenges when a SOC is introduced into an organization. A SOC is an independent hub that needs to be integrated with the existing procedures, policies, and IT systems of an organization, such as the service desk, the ticket logging system, and reporting. This paper discusses the challenges of integrating a newly developed SOC into an organization's existing IT environment. First, the paper looks at which data sources should be incorporated into the Security Information and Event Management (SIEM) system for security posture monitoring, i.e. which host machines, servers, network endpoints, software, applications, and web servers need to be monitored first, and the order in which the rest of the systems follow. Second, the paper describes how to integrate the organization's ticket logging system with the SOC SIEM: how cybersecurity-related incidents should be logged by both analysts and non-technical employees of an organization, as well as the priority matrix for incident types and incident notifications. Third, the paper looks at how to communicate awareness campaigns from the SOC and how to report on incidents found inside the SOC. Lastly, the paper examines how to show value for the large investments that are poured into designing, building, and running a SOC.
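As an illustration of the priority matrix mentioned above, the following sketch maps an alert's impact and urgency to a ticket priority class. The labels and mapping are a generic ITSM-style assumption for illustration, not the paper's actual matrix.

```python
# Impact x urgency -> ticket priority (illustrative mapping)
PRIORITY_MATRIX = {
    ("high",   "high"):   "P1",
    ("high",   "medium"): "P2",
    ("medium", "high"):   "P2",
    ("high",   "low"):    "P3",
    ("medium", "medium"): "P3",
    ("low",    "high"):   "P3",
    ("medium", "low"):    "P4",
    ("low",    "medium"): "P4",
    ("low",    "low"):    "P4",
}

def ticket_priority(impact, urgency):
    """Priority class assigned when a SIEM alert is logged as a ticket;
    unrecognised labels fall back to the lowest priority."""
    return PRIORITY_MATRIX.get((impact.lower(), urgency.lower()), "P4")
```

With a shared table like this, a non-technical employee's report and an analyst's SIEM alert land in the same ticket queue with a consistent priority and notification path.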

Keywords: cybersecurity operation centre, incident response, priority matrix, procedures and policies

Procedia PDF Downloads 153
1282 Competing Risks Modeling Using within Node Homogeneity Classification Tree

Authors: Kazeem Adesina Dauda, Waheed Babatunde Yahya

Abstract:

To design a tree that maximizes within-node homogeneity, there is a need for a homogeneity measure that is appropriate for event history data with multiple risks. We consider the use of deviance and modified Cox-Snell residuals as measures of impurity in Classification and Regression Trees (CART) and compare our results with those of Fiona (2008), in which homogeneity measures were based on the martingale residual. A data structure approach was used to validate the performance of our proposed techniques via simulation and real-life data. The results for univariate competing risks revealed that using deviance and Cox-Snell residuals as the response in a within-node homogeneity classification tree performs better than using other residuals, irrespective of the performance technique. Bone marrow transplant data and a double-blinded randomized clinical trial, conducted in order to compare two treatments for patients with prostate cancer, were used to demonstrate the efficiency of our proposed method vis-à-vis the existing ones. Results from empirical studies of the bone marrow transplant data showed that the proposed model with the Cox-Snell residual (deviance = 16.6498) performs better than both the martingale residual (deviance = 160.3592) and the deviance residual (deviance = 556.8822) for both the event of interest and the competing risks. Additionally, results from the prostate cancer data also reveal the better performance of the proposed model over the existing one for both causes; interestingly, the Cox-Snell residual (MSE = 0.01783563) outperforms both the martingale residual (MSE = 0.1853148) and the deviance residual (MSE = 0.8043366). Moreover, these results validate those obtained from the Monte Carlo studies.
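
The residual types compared in the abstract are all derived from the estimated cumulative hazard. A minimal illustration of the standard definitions (a hypothetical helper, not the authors' code; the modified Cox-Snell variant shown adds 1 for censored subjects, which is one common convention):

```python
import math

def survival_residuals(cum_hazard, event):
    """Residuals for one subject in a survival / competing-risks model.

    cum_hazard: estimated cumulative hazard H(t_i) at the observed time
    event:      1 if the event occurred, 0 if the subject was censored
    """
    cox_snell = cum_hazard                 # r_i = H(t_i)
    modified_cs = cox_snell + (1 - event)  # add 1 for censored subjects
    martingale = event - cox_snell         # m_i = delta_i - r_i
    # deviance residual: sign(m_i) * sqrt(-2 * [m_i + delta_i * log(delta_i - m_i)])
    if event:
        inside = -2.0 * (martingale + math.log(cum_hazard))
    else:
        inside = 2.0 * cum_hazard
    deviance = math.copysign(math.sqrt(inside), martingale)
    return cox_snell, modified_cs, martingale, deviance
```

Using one of these residuals as the response variable turns the survival problem into a regression problem that a CART impurity criterion can split on.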

Keywords: within-node homogeneity, Martingale residual, modified Cox-Snell residual, classification and regression tree

Procedia PDF Downloads 272
1281 Effects of Lower and Upper Body Plyometric Training on Electrocardiogram Parameters of University Athletes

Authors: T. N. Uzor, C. O. Akosile, G. O. Emeahara

Abstract:

Plyometric training is a form of specialised strength training that uses fast muscular contractions to improve power and speed, and it is widely used in sports conditioning by coaches and athletes. Despite its useful role in sports conditioning programmes, information about the effects of plyometric training on athletes' cardiovascular health, especially the electrocardiogram (ECG), has not been established in the literature. The purpose of the study was to determine the effects of lower and upper body plyometric training on the ECG of athletes. The study was guided by three null hypotheses. A quasi-experimental research design was adopted. Seventy-two university male athletes constituted the population of the study. Thirty male athletes aged 18 to 24 years volunteered to participate, but only twenty-three completed the study. The volunteers were apparently healthy, physically active, free of any lower or upper extremity bone injuries for the past year, and had no medical or orthopedic injuries that might affect their participation. Ten subjects each were purposively assigned to one of three groups: lower body plyometric training (LBPT), upper body plyometric training (UBPT), and control (C). Training consisted of six plyometric exercises at moderate intensity: lower body (ankle hops, squat jumps, tuck jumps) and upper body (push-ups, medicine ball chest throws and side throws). The data were collated and analysed using the Statistical Package for the Social Sciences (SPSS version 22.0). The research questions were answered using means and standard deviations, while paired-samples t-tests were used to test the hypotheses. The results revealed that athletes trained using LBPT showed greater reductions in ECG parameters than those in the control group. The results also revealed that athletes trained using either LBPT or UBPT showed no significant differences from the control group following ten weeks of plyometric training in the ECG parameters, except in the QRS (Q wave, R wave and S wave) complex. Based on the findings of the study, it was recommended, among other things, that coaches include both LBPT and UBPT as part of athletes' overall training programmes from primary to tertiary institutions to optimise performance, reduce the risk of cardiovascular disease and promote a healthy lifestyle.

Keywords: concentric, eccentric, electrocardiogram, plyometric

Procedia PDF Downloads 143
1280 Sweet to Bitter Perception Parageusia: Case of Posterior Inferior Cerebellar Artery Territory Diaschisis

Authors: I. S. Gandhi, D. N. Patel, M. Johnson, A. R. Hirsch

Abstract:

Although distortion of taste perception following a cerebrovascular event may seem a frivolous consequence of a classic stroke presentation, altered taste perception places patients at increased risk for malnutrition, weight loss, and depression, all of which negatively impact quality of life. Impaired taste perception can result from a wide variety of cerebrovascular lesions in various locations, including the pons, the insular cortices, and the ventral posteromedial nucleus of the thalamus. Wallenberg syndrome, also known as lateral medullary syndrome, has been described to impact taste; however, specific sweet-to-bitter dysgeusia from a territory infarction is an infrequent event; as such, a case is presented. One year prior to presentation, this 64-year-old right-handed woman suffered a right posterior inferior cerebellar artery aneurysm rupture with resultant infarction, culminating in ventriculoperitoneal shunt placement. One and a half months after this event, she noticed a gradual loss of the ability to taste sweet, which progressed until all sweet food tasted bitter. Since the onset of her chemosensory problems, the patient has lost 60 pounds. Upon gustatory testing, the patient's taste thresholds showed ageusia to sucrose and hydrochloric acid and normogeusia to sodium chloride, urea, and phenylthiocarbamide. The gustatory cortex comprises in part the right insular cortex as well as the right anterior operculum, which are primarily involved in the sensory taste modalities. In this model, sweet is localized in the posterior-most and rostral aspect of the right insular cortex, notably adjacent to the region responsible for bitter taste. The sweet-to-bitter dysgeusia in our patient suggests the presence of a lesion at this localization. Although the primary lesion in this patient was located in the right medulla of the brainstem, neurodegeneration in the rostral and posterior-most aspect of the right insular cortex may have occurred due to diaschisis. Diaschisis describes neurophysiological changes that occur in regions remote from a focal brain lesion. Although hydrocephalus and vasospasm due to aneurysmal rupture may explain the distal foci of impairment, the gradual onset of dysgeusia is more indicative of diaschisis. The perception of sweet food as now tasting bitter suggests that, in the absence of sweet taste reception, the intrinsic bitter taste of food is being perceived rather than sweet. In the evaluation and treatment of taste parageusia secondary to cerebrovascular injury, prophylactic neuroprotective measures may be worthwhile. Further investigation is warranted.

Keywords: diaschisis, dysgeusia, stroke, taste

Procedia PDF Downloads 180
1279 The Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster: A Qualitative Study

Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon

Abstract:

In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and hospital Emergency Departments (EDs) is a complex process during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base on communication between pre-hospital EMS and ED professionals through the use of Information and Communication Technology (ICT). The study followed a systematic approach: six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analyzed according to the key screening themes that emerged from the literature search. Twenty-two studies were included: eleven employed quantitative methods, seven used qualitative methods, and four used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals and, in some cases, there are interagency agreements with pre-hospital services and relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communication within and between the pre-hospital and hospital settings. (2) Communication systems used in disasters. This theme highlighted that although various communication systems are used between and within hospitals and pre-hospital services, technical issues have hindered communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system that can help pre-hospital and hospital staff record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analyzed disaster drills and training, the majority focused on hospital departments other than EMTs. These studies suggest the need for simulation-based disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of communication between EMS and ED hospital staff during disaster response. The review shows that although different types of ICT are used, various issues remain that affect coordinated communication among the relevant professionals.

Keywords: emergency medical teams, communication, information and communication technologies, disaster

Procedia PDF Downloads 126
1278 Modeling of Thermo Acoustic Emission Memory Effect in Rocks of Varying Textures

Authors: Vladimir Vinnikov

Abstract:

The paper proposes a model of an inhomogeneous rock mass with an initially random distribution of microcracks on mineral grain boundaries. It describes the behavior of cracks in a medium under the effect of a thermal field, with the medium heated instantaneously to a predetermined temperature. Crack growth occurs according to the concepts of fracture mechanics, provided that the stress intensity factor K exceeds the critical value Kc. The modeling of thermally induced acoustic emission memory effects is based on the assumption that every event of crack nucleation or crack growth caused by heating is accompanied by a single acoustic emission event. Parameters of the thermally induced acoustic emission memory effect produced by cyclic heating and cooling (with the temperature amplitude increasing from cycle to cycle) were calculated for several rock texture types (massive, banded, and disseminated). The study substantiates the adaptation of the proposed model to humidity interference with the thermally induced acoustic emission memory effect. The influence of humidity on this effect in quasi-homogeneous and banded rocks is estimated. It is shown that such modeling allows the structure and texture of rocks to be taken into account and the influence of interfering factors on the distinctness of the memory effect to be estimated. The numerical modeling can be used to obtain information about past thermal impacts on rocks and to determine the degree of rock disturbance by means of non-destructive testing.
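
The memory effect itself, in which emission in a new heating cycle resumes only once the previous maximum temperature is exceeded, can be sketched with a simple Kaiser-effect-style rule. This is a deliberately simplified illustration, not the paper's fracture-mechanics model:

```python
def emission_cycles(cycle_peak_temps):
    """For each heating cycle, report whether new acoustic emission is
    expected: emission resumes only above the prior maximum temperature,
    because only then can new cracks nucleate or grow."""
    flags = []
    prior_max = float("-inf")
    for peak in cycle_peak_temps:
        flags.append(peak > prior_max)  # new cracking only beyond prior max
        prior_max = max(prior_max, peak)
    return flags
```

With the amplitude increasing from cycle to cycle, as in the paper's simulations, every cycle produces emission; a repeated or lower amplitude stays silent, which is what makes the effect usable as a record of past thermal impacts.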

Keywords: crack growth, cyclic heating and cooling, rock texture, thermo acoustic emission memory effect

Procedia PDF Downloads 271
1277 Approximate-Based Estimation of Single Event Upset Effect on Static Random-Access Memory-Based Field-Programmable Gate Arrays

Authors: Mahsa Mousavi, Hamid Reza Pourshaghaghi, Mohammad Tahghighi, Henk Corporaal

Abstract:

Recently, Static Random-Access Memory-based (SRAM-based) Field-Programmable Gate Arrays (FPGAs) have been widely used in aeronautics and space systems, where high dependability is demanded and considered a mandatory requirement. Since the design’s circuit is stored in configuration memory in SRAM-based FPGAs, they are very sensitive to Single Event Upsets (SEUs). In addition, the adverse effects of SEUs on electronics used in space are much greater than on Earth. Thus, developing fault-tolerance techniques plays a crucial role in the use of SRAM-based FPGAs in space. However, fault-tolerance techniques introduce additional penalties in system parameters, e.g., area, power, performance and design time. In this paper, an accurate estimation of configuration memory vulnerability to SEUs is proposed for approximation-tolerant applications. This vulnerability estimation is needed to compromise between the overhead introduced by fault-tolerance techniques and system robustness. We study applications in which the exact final output value is not always a concern, meaning that some of the SEU-induced changes in output values are negligible. We therefore define and propose an Approximate-based Configuration Memory Vulnerability Factor (ACMVF) estimation to avoid overestimating configuration memory vulnerability to SEUs. We assess the vulnerability of configuration memory by injecting SEUs into configuration memory bits and comparing the output values of a given circuit in the presence of SEUs with the expected correct output. Unlike conventional vulnerability factor calculation methods, which count any deviation from the expected value as a failure, in our proposed method a threshold margin is considered depending on the use case. Given this threshold margin, a failure occurs only when the difference between the erroneous output value and the expected output value exceeds the margin. The ACMVF is subsequently calculated as the ratio of failures to the total number of SEU injections. A test bench for emulating SEUs and calculating the ACMVF is implemented on a Zynq-7000 FPGA platform. This system makes use of the Soft Error Mitigation (SEM) IP core to inject SEUs into configuration memory bits of the target design implemented on the Zynq-7000 FPGA. Experimental results for a 32-bit adder show that, when 1% to 10% deviation from the correct output is tolerated, the number of counted failures is reduced by 41% to 59% compared with the number counted by the conventional vulnerability factor calculation. This means that the estimation accuracy of configuration memory vulnerability to SEUs is improved by up to 58% in the case that 10% deviation is acceptable in output results. Note that less than 10% deviation in an addition result is reasonably tolerable for many applications in the approximate computing domain, such as Convolutional Neural Networks (CNNs).
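
The ACMVF definition above reduces to a thresholded failure count over the injection campaign. A minimal sketch in Python (function and parameter names are hypothetical; the study's actual test bench runs in FPGA hardware):

```python
def acmvf(expected, observed_outputs, margin_fraction):
    """Approximate-based Configuration Memory Vulnerability Factor.

    An injection output counts as a failure only when it deviates from
    the expected output by more than the given relative margin;
    margin_fraction = 0.0 recovers the conventional vulnerability factor.
    """
    margin = abs(expected) * margin_fraction
    failures = sum(1 for y in observed_outputs if abs(y - expected) > margin)
    return failures / len(observed_outputs)
```

For example, with a 10% margin, an adder output of 101 against an expected 100 is not counted as a failure, while 150 is; the same campaign with a 0% margin counts both.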

Keywords: fault tolerance, FPGA, single event upset, approximate computing

Procedia PDF Downloads 198
1276 The Impact of Pediatric Care, Infections and Vaccines on Community and People’s Lives

Authors: Nashed Atef Nashed Farag

Abstract:

Introduction: Reporting adverse events following vaccination remains a challenge. The WHO has mandated pharmacovigilance centers around the world to submit Adverse Events Following Immunization (AEFI) reports from different countries to a large electronic database of adverse drug event data called VigiBase. Although VigiBase contains sufficient information about AEFIs, it is not available to the general public. However, the WHO has an alternative website called VigiAccess, an open-access website that serves as an archive for reported adverse reactions and AEFIs. The aim of the study was to establish a reporting model for a number of commonly used vaccines in the VigiAccess system. Methods: On February 5, 2018, VigiAccess was comprehensively searched for AEFI reports on the measles vaccine, oral polio vaccine (OPV), yellow fever vaccine, pneumococcal vaccine, rotavirus vaccine, meningococcal vaccine, tetanus vaccine, and tuberculosis (BCG) vaccine. These are reports from all pharmacovigilance centers around the world since they joined the WHO Drug Monitoring Programme. Results: After an extensive search, VigiAccess returned 9,062 AEFIs for the measles vaccine, 185,829 AEFIs for the OPV vaccine, 24,577 AEFIs for the yellow fever vaccine, 317,208 AEFIs for the pneumococcal vaccine, 73,513 AEFIs for the rotavirus vaccine, 145,447 AEFIs for the meningococcal vaccine, 22,781 AEFIs for the tetanus vaccine, and 35,556 AEFIs for the BCG vaccine. Conclusion: The study found that among the eight vaccines examined, pneumococcal vaccines were associated with the highest number of AEFIs, while measles vaccines were associated with the fewest.

Keywords: adverse events following immunization, adverse reactions, VigiAccess, adverse event reporting

Procedia PDF Downloads 72
1275 Simultaneous Bilateral Patella Tendon Rupture: A Systematic Review

Authors: André Rui Coelho Fernandes, Mariana Rufino, Divakar Hamal, Amr Sousa, Emma Fossett, Kamalpreet Cheema

Abstract:

Aim: A single patella tendon rupture is relatively uncommon, but a simultaneous bilateral event is a rare occurrence that has been scarcely reviewed in the literature. This review was carried out to analyse the existing literature on this event, with the aim of proposing a standardised approach to the diagnosis and management of this injury. Methods: A systematic review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Three independent reviewers conducted searches in PubMed, OvidSP for Medline and Embase, and the Cochrane Library using the same search strategy. From a total of 183 studies, 45 were included, i.e., 90 patellas. Results: 46 patellas (51%) had a Type 1 rupture, with Type 3 being the least common, only 7 patellas sustaining this injury. The mean Insall-Salvati ratio for each knee was 1.62 (right) and 1.60 (left). Direct primary repair was the most common surgical technique compared to tendon reconstruction, with end-to-end and transosseous techniques split almost equally. Brace immobilisation was preferred over cast, with a mean start of weight-bearing at 3.23 weeks post-operatively. Conclusions: Bilateral patellar tendon rupture is a rare injury that should be considered in patients with knee extensor mechanism disruption. The key limitation of this study was the low number of patients encompassed by the eligible literature. There is scope for a higher level of evidence study, specifically regarding surgical treatment choice and methods, as well as post-operative management, which could potentially improve outcomes in the management of this injury.

Keywords: trauma and orthopaedic surgery, bilateral patella, tendon rupture, trauma

Procedia PDF Downloads 135
1274 Perils Environment of an Energetic Infrastructure Complex System: Modelling by Crisis Situation Algorithms

Authors: Jiří F. Urbánek, Alena Oulehlová, Hana Malachová, Jiří J. Urbánek Jr.

Abstract:

Crisis situations are investigated and modelled within the complex system of energetic critical infrastructure operating in perilous environments. Every crisis situation and peril originates in the occurrence of an emergency or crisis event, and both require assessment of critical/crisis interfaces. Emergency events may be expected, in which case crisis scenarios can be prepared in advance by the pertinent organizational crisis management authorities for coping with them, or they may be unexpected, without a pre-prepared scenario; both cases nevertheless require operational coping by means of crisis management. The operation, forms, characteristics, behaviour and utilization of crisis management have varying qualities, depending on the real perils facing the critical infrastructure organization and on prevention and training processes. The aim is always better security and continuity of the organization, the successful attainment of which requires finding and investigating critical/crisis zones and functions in models of critical infrastructure organizations operating in the pertinent perilous environment. Our DYVELOP (Dynamic Vector Logistics of Processes) method is available for this purpose. Here, it is necessary to derive and create an identification algorithm for critical/crisis interfaces. The locations of critical/crisis interfaces are the flags of crisis situations in models of critical infrastructure organizations. The model of a crisis situation is then displayed for a real Czech energetic critical infrastructure subject in its real perilous environment. These efficient measures are necessary for infrastructure protection. They will be derived for peril mitigation, crisis situation coping, and the organization's environmentally friendly survival, continuity and advanced possibilities of sustainable development.

Keywords: algorithms, energetic infrastructure complex system, modelling, perilous environment

Procedia PDF Downloads 402
1273 Impact of Unusual Dust Event on Regional Climate in India

Authors: Kanika Taneja, V. K. Soni, Kafeel Ahmad, Shamshad Ahmad

Abstract:

A severe dust storm generated from a western disturbance over northern Pakistan and adjoining Afghanistan affected the north-west region of India between May 28 and 31, 2014, resulting in significant reductions in air quality and visibility. The air quality of the affected region degraded drastically: the PM10 concentration peaked at a very high value of around 1018 μg m-3 during the dust storm hours of May 30, 2014 at New Delhi. The present study depicts aerosol optical properties monitored during the dust days using a ground-based multi-wavelength sky radiometer over the National Capital Region of India. A high Aerosol Optical Depth (AOD) at 500 nm of 1.356 ± 0.19 was observed at New Delhi, while the Angstrom exponent (alpha) dropped to 0.287 on May 30, 2014. The variations in the Single Scattering Albedo (SSA) and the real n(λ) and imaginary k(λ) parts of the refractive index indicated that the dust event shifted the aerosol optical state toward more absorbing. The single scattering albedo, refractive index, volume size distribution and asymmetry parameter (ASY) values suggested that dust aerosols predominated over anthropogenic aerosols in the urban environment of New Delhi. The large reduction in radiative flux at the surface caused significant surface cooling. Direct Aerosol Radiative Forcing (DARF) was calculated using a radiative transfer model during the dust period. A consistent increase in surface cooling was evident, ranging from -31 W m-2 to -82 W m-2, together with an increase in atmospheric heating from 15 W m-2 to 92 W m-2 and a change from -2 W m-2 to 10 W m-2 at the top of the atmosphere.

Keywords: aerosol optical properties, dust storm, radiative transfer model, sky radiometer

Procedia PDF Downloads 377
1272 Extreme Value Theory Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis represents a very important task in different areas of work. In any industry, it is crucial for maintenance, efficiency, safety and monetary costs. There are established ways to calculate reliability, unreliability, failure density and failure rate. In this paper, the reliability of diesel generator fans is calculated through Extreme Value Theory. Extreme Value Theory is not widely used in the engineering field; its usage is better known in other areas such as hydrology, meteorology and finance. The significance of this theory lies in the fact that, unlike other statistical methods, it focuses on rare and extreme values rather than on averages. It should be noted that the theory is not designed exclusively for extreme events, but for extreme values in any event; this is therefore a good opportunity to apply the theory and test whether it can be applied in this situation. The contribution of the work is the calculation of time to failure, and hence reliability, in a new way, using statistics. A further advantage of this calculation is that no technical details are needed, and it can be applied to any part for which we need to know the time to failure, in order to schedule appropriate maintenance while maximizing usage and minimizing costs. In this case, the calculations were made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans; the ultimate goal was to decide whether or not to replace the working fans with a higher quality fan to prevent future failures. The results of this method give an approximation of the time for which the fans will work as they should, and the probability of the fans working longer than a certain estimated time.
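
As a concrete illustration of the approach, an extreme value (Gumbel) distribution can be fitted to time-to-failure data by the method of moments and then queried for the probability of surviving past a given time. The Gumbel family is shown for simplicity (the paper does not specify which extreme value family it uses), and the failure times below are made-up numbers, not the study's data:

```python
import math
import statistics

EULER_GAMMA = 0.5772156649015329

def fit_gumbel(samples):
    """Method-of-moments fit of a Gumbel (Type I extreme value) distribution."""
    mean = statistics.mean(samples)
    std = statistics.stdev(samples)
    beta = std * math.sqrt(6) / math.pi   # scale from the Gumbel variance formula
    mu = mean - EULER_GAMMA * beta        # location from the Gumbel mean formula
    return mu, beta

def survival(t, mu, beta):
    """P(T > t): probability the part still works after t hours."""
    return 1.0 - math.exp(-math.exp(-(t - mu) / beta))

# illustrative fan failure times in operating hours (invented data)
hours = [450, 1150, 1600, 2100, 2700, 3750, 4300, 4600, 6100, 8750]
mu, beta = fit_gumbel(hours)
```

Evaluating `survival` at a candidate maintenance interval gives exactly the quantity the abstract describes: the probability that a fan works longer than the estimated time.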

Keywords: extreme value theory, lifetime, reliability analysis, statistic, time to failure

Procedia PDF Downloads 327
1271 Non-Uniform Filter Banks-Based Minimum Distance to Riemannian Mean Classification in Motor Imagery Brain-Computer Interface

Authors: Ping Tan, Xiaomeng Su, Yi Shen

Abstract:

The motion intention in a motor imagery brain-computer interface is identified by classifying the event-related desynchronization (ERD) and event-related synchronization (ERS) characteristics of the sensorimotor rhythm (SMR) in EEG signals. When the subject imagines moving different limbs or different body parts, the rhythm components and bandwidth change, and this varies from person to person. Finding the effective sensorimotor frequency band of each subject is therefore directly related to the classification accuracy of the brain-computer interface. To solve this problem, this paper proposes a Minimum Distance to Riemannian Mean classification method based on non-uniform filter banks. During the training phase, the EEG signals are first decomposed into multiple signals of different bandwidths using multiple band-pass filters; the spatial covariance characteristics of each frequency band signal are then computed as feature vectors. These feature vectors are classified by the MDRM (Minimum Distance to Riemannian Mean) method, and cross-validation is employed to obtain the effective sensorimotor frequency bands. During the test phase, the test signals are filtered by the band-pass filters of the effective sensorimotor frequency bands, and the extracted spatial covariance feature vectors are classified using MDRM. Experiments on the BCI Competition IV 2a dataset show that the proposed method is superior to other classification methods.

Keywords: non-uniform filter banks, motor imagery, brain-computer interface, minimum distance to Riemannian mean

Procedia PDF Downloads 125
1270 Next Generation UK Storm Surge Model for the Insurance Market: The London Case

Authors: Iacopo Carnacina, Mohammad Keshtpoor, Richard Yablonsky

Abstract:

Non-structural protection measures against flooding are becoming increasingly popular flood risk mitigation strategies. In particular, coastal flood insurance impacts not only private citizens but also insurance and reinsurance companies, who may require it to retain solvency and better understand the risks they face from a catastrophic coastal flood event. In this context, a framework is presented here to assess the risk of coastal flooding across the UK. The area has a long history of catastrophic flood events, including the Great Flood of 1953 and the 2013 Cyclone Xaver storm, both of which led to significant loss of life and property. The current framework leverages a technology based on a hydrodynamic model (Delft3D Flexible Mesh). This flexible mesh technology, coupled with a calibration technique, allows for better utilisation of computational resources, leading to higher resolution and more detailed results. The generation of a stochastic set of extratropical cyclone (ETC) events supports the evaluation of financial losses for the whole area, also accounting for correlations between different locations in different scenarios. Finally, the solution presents a detailed analysis of the Thames River, leveraging the information available on flood barriers and levees. Two realistic disaster scenarios for the Greater London area are simulated: in the first scenario, the storm surge intensity is not high enough to defeat London’s flood defences, but in the second scenario, London’s flood defences fail, highlighting the potential losses from a catastrophic coastal flood event.

Keywords: storm surge, stochastic model, levee failure, Thames River

Procedia PDF Downloads 232
1269 Applying the Global Trigger Tool in German Hospitals: A Retrospective Study in Surgery and Neurosurgery

Authors: Mareen Brosterhaus, Antje Hammer, Steffen Kalina, Stefan Grau, Anjali A. Roeth, Hany Ashmawy, Thomas Gross, Marcel Binnebosel, Wolfram T. Knoefel, Tanja Manser

Abstract:

Background: The identification of critical incidents in hospitals is an essential component of improving patient safety. To date, various methods have been used to measure and characterize such critical incidents. These methods are often viewed by physicians and nurses as external quality assurance, and this creates obstacles to the reporting events and the implementation of recommendations in practice. One way to overcome this problem is to use tools that directly involve staff in measuring indicators of quality and safety of care in the department. One such instrument is the global trigger tool (GTT), which helps physicians and nurses identify adverse events by systematically reviewing randomly selected patient records. Based on so-called ‘triggers’ (warning signals), indications of adverse events can be given. While the tool is already used internationally, its implementation in German hospitals has been very limited. Objectives: This study aimed to assess the feasibility and potential of the global trigger tool for identifying adverse events in German hospitals. Methods: A total of 120 patient records were randomly selected from two surgical, and one neurosurgery, departments of three university hospitals in Germany over a period of two months per department between January and July, 2017. The records were reviewed using an adaptation of the German version of the Institute for Healthcare Improvement Global Trigger Tool to identify triggers and adverse event rates per 1000 patient days and per 100 admissions. The severity of adverse events was classified using the National Coordinating Council for Medication Error Reporting and Prevention. Results: A total of 53 adverse events were detected in the three departments. This corresponded to adverse event rates of 25.5-72.1 per 1000 patient-days and from 25.0 to 60.0 per 100 admissions across the three departments. 
Of the identified adverse events, 98.1% were associated with non-permanent harm, either without (Category E, 71.7%) or with (Category F, 26.4%) the need for prolonged hospitalization. One adverse event (1.9%) was associated with potentially permanent harm to the patient. We also identified practical challenges in implementing the tool, such as the need to adapt the global trigger tool to the respective department. Conclusions: The global trigger tool is a feasible and effective instrument for quality measurement when adapted to departmental specifics. Based on our experience, we recommend continuous use of the tool, thereby directly involving clinicians in quality improvement.
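
The two rate denominators reported above (per 1000 patient-days and per 100 admissions) are straightforward normalizations; the following is a minimal illustrative sketch with made-up numbers, not data from the study:

```python
def rate_per_1000_patient_days(n_events: int, patient_days: int) -> float:
    """Adverse events normalized to 1000 patient-days of exposure."""
    return n_events / patient_days * 1000

def rate_per_100_admissions(n_events: int, admissions: int) -> float:
    """Adverse events normalized to 100 hospital admissions."""
    return n_events / admissions * 100

# Hypothetical department: 20 adverse events over 600 patient-days and 40 admissions.
print(rate_per_1000_patient_days(20, 600))  # about 33.3 per 1000 patient-days
print(rate_per_100_admissions(20, 40))      # 50.0 per 100 admissions
```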

Keywords: adverse events, global trigger tool, patient safety, record review

Procedia PDF Downloads 249
1268 Upper Jurassic Foraminiferal Assemblages and Palaeoceanographical Changes in the Central Part of the East European Platform

Authors: Clementine Colpaert, Boris L. Nikitenko

Abstract:

The Upper Jurassic foraminiferal assemblages of the East European Platform were investigated extensively throughout the 20th century, chiefly for biostratigraphical and, to a lesser degree, palaeoecological and palaeobiogeographical purposes. During the Late Jurassic, the platform was a shallow epicontinental sea that extended from Tethys to the Arctic through the Pechora Sea, and further toward the northeast into the West Siberian Sea. Foraminiferal assemblages of the Russian Sea were strongly affected by sea-level changes and were controlled by alternating Boreal to Peritethyan influences. The central part of the East European Platform displays very rich and diverse foraminiferal assemblages. Two sections have been analyzed: the Makar'yev Section in the Moscow Depression and the Gorodishi Section in the Ul'yanovsk Depression. Based on the evolution of the foraminiferal assemblages, the palaeoenvironment has been reconstructed and sea-level changes have been refined. The aim of this study is to understand palaeoceanographical changes throughout the Oxfordian – Kimmeridgian of the central part of the Russian Sea. The Oxfordian was characterized by a general transgressive event interrupted by small regressive phases. The platform was connected toward the south with Tethys and Peritethys. During the Middle Oxfordian, the opening of a pathway of warmer water from the North-Tethys region to the Boreal Realm favoured the migration of planktonic foraminifera and the appearance of new benthic taxa; this is associated with increased temperature and primary production. During the Late Oxfordian, colder-water inputs associated with the microbenthic community crisis may be a response to the closure of this warm-water corridor and the disappearance of planktonic foraminifera.
The microbenthic community crisis was probably due to an increased sedimentation rate during the transition from the maximum flooding surface to a second-order regressive event, which increased productivity and the input of organic matter along with a sharp decrease of oxygen in the sediment. It was followed during the Early Kimmeridgian by a replacement of the foraminiferal assemblages. Almost the entire Kimmeridgian is characterized by the abundance of many taxa common to the Boreal and Subboreal Realms. Connections toward the south became dominant again after a small regressive event recorded during the Late Kimmeridgian, associated with the abundance of many taxa common to the Subboreal Realm and Peritethys, such as Crimean and Caucasus taxa. Foraminiferal assemblages of the East European Platform were strongly affected by palaeoecological changes and may provide a very good model for biofacies typification under Boreal and Subboreal environments. The East European Platform thus appears to be a key area for understanding large-scale Upper Jurassic palaeoceanographical changes, being connected with both Boreal and Peritethyan basins.

Keywords: foraminifera, palaeoceanography, palaeoecology, upper jurassic

Procedia PDF Downloads 247
1267 Simulating Human Behavior in (Un)Built Environments: Using an Actor Profiling Method

Authors: Hadas Sopher, Davide Schaumann, Yehuda E. Kalay

Abstract:

This paper addresses the shortcomings of architectural computation tools in representing human behavior in built environments prior to the construction and occupancy of those environments. Evaluating whether a design fits the needs of its future users is currently done only post-construction, or is based on the knowledge and intuition of the designer. This issue is of high importance when designing complex buildings such as hospitals, where the quality of treatment as well as patient and staff satisfaction are of major concern. Existing computational pre-occupancy human behavior evaluation methods are geared mainly toward testing ergonomic issues, such as wheelchair accessibility, emergency egress, etc. As such, they rely on Agent-Based Modeling (ABM) techniques, which emphasize the individual user. Yet most human activities are social and involve a number of actors working together, which ABM methods cannot handle. Therefore, we present an event-based model that manages the interaction between multiple Actors, Spaces, and Activities to describe dynamically how people use spaces. This approach requires expanding the computational representation of Actors beyond their physical description to include psychological, social, cultural, and other parameters. The model presented in this paper includes cognitive abilities and rules that describe the response of actors to their physical and social surroundings, based on the actors’ internal status. The model has been applied in a simulation of hospital wards and showed adaptability to a wide variety of situated behaviors and interactions.
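
The event-based coordination of multiple Actors, Spaces, and Activities described above can be sketched roughly as follows. This is an illustrative assumption of what such a model might look like, not the authors' actual implementation; all class and attribute names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Space:
    name: str
    occupants: list = field(default_factory=list)

@dataclass
class Actor:
    name: str
    role: str
    status: dict = field(default_factory=dict)  # internal state, e.g. {"fatigue": 0.2}
    location: Space = None

@dataclass
class Event:
    """A collaborative activity that can fire only when all required roles are present."""
    name: str
    space: Space
    required_roles: list

    def can_fire(self, actors):
        roles = {a.role for a in actors}
        return all(r in roles for r in self.required_roles)

    def fire(self, actors):
        if not self.can_fire(actors):
            return None
        for a in actors:  # move every participant into the event's space
            if a.location is not None:
                a.location.occupants.remove(a)
            a.location = self.space
            self.space.occupants.append(a)
        return f"{self.name} performed in {self.space.name}"

# A ward round requires a nurse and a physician together, a joint
# activity that a purely individual agent model cannot express directly.
ward = Space("Ward 3")
participants = [Actor("N1", "nurse"), Actor("D1", "physician")]
rounds = Event("morning rounds", ward, ["nurse", "physician"])
print(rounds.fire(participants))  # morning rounds performed in Ward 3
```

The key design point this sketch mirrors is that the unit of simulation is the shared event, not the individual agent: the event holds the participation rule, and actors are moved and updated as a group when it fires.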

Keywords: agent based modeling, architectural design evaluation, event modeling, human behavior simulation, spatial cognition

Procedia PDF Downloads 264